CNL-TN-2026-021 Technical Note

MacroscopeVR Technical Specification and Project Report


Document: CNL-DR-2026-001
Version: 2.0
Date: February 11, 2026
Author: Dr. Michael P. Hamilton, Canemah Nature Laboratory
Platform: MacroscopeVR — A Participatory Planetary Observatory


Abstract

MacroscopeVR is a web-based Participatory Planetary Observatory that transforms consumer 360° photography into interactive three-dimensional structural models of ecological environments. The platform applies SHARP monocular Gaussian splatting — a neural network that reconstructs 1.18 million 3D ellipsoids from a single photograph — to panoramic imagery captured at biological field stations, producing explorable terrariums that preserve the geometric and spectral characteristics of vegetation, terrain, and built structures. A 25-position spherical sampling grid (CNL-SP-2026-013) decomposes each equirectangular panorama into perspective views spanning horizon, upper canopy, lower canopy, zenith, and nadir, yielding comprehensive structural coverage from a single capture point.

The current dataset encompasses 33 ecological research stations, 457 panoramas, and 11,425 individual views totaling approximately 258GB of Gaussian splat and point cloud data. Analysis of SHARP output reveals three distinct Gaussian populations — physical surfaces, atmospheric fill, and reconstruction artifacts — separable through opacity and scale thresholds, enabling GPU-accelerated interactive filtering in a browser-based quad-view analysis laboratory.

The platform's contribution model parallels iNaturalist's democratization of species identification: where iNaturalist compressed taxonomic expertise into smartphone-accessible AI classification, MacroscopeVR compresses structural ecology — traditionally requiring specialized instruments, plot mensuration, and forestry training — into a single 360° photograph processed through an automated pipeline. The system accepts geotagged input from multiple device types (360° cameras, smartphones, LiDAR scanners) and derives geographic and ecosystem classification from iNaturalist APIs using coordinates alone, requiring no manual metadata entry. Species observations and three-dimensional habitat context are captured simultaneously by the same observer at the same location, linking organism and environment in a unified, explorable record.

MacroscopeVR represents the convergence of a forty-year research program — spanning LaserDisc interactive video, desktop Qt applications, Google Earth integration with SketchUp 3D visualization, CENS wireless sensor networks, EcoView/econode environmental monitoring with ArcGIS, and the VeLEA ecological array — with contemporary advances in neural 3D reconstruction. Each generation pursued the same cognitive journey from planetary context to organism-level detail; MacroscopeVR is the first to make that journey participatory, computationally active, and accessible to anyone with a consumer camera.


1. Project Overview

1.1 Mission

MacroscopeVR (formerly ecoSPLAT) is a Participatory Planetary Observatory — a web-based platform for distributed, device-agnostic capture of three-dimensional structural environments using consumer 360° cameras, smartphones, and LiDAR devices. Geotagged captures are processed through automated pipelines (SHARP Gaussian splatting, point cloud conversion, physical filtering) and served through a coherent multi-scale visualization platform: globe → panorama → terrarium → splat lab → species.

The platform currently serves 33 ecological research stations across North America and Costa Rica, comprising 457 panoramas and 11,425 individual views. It is designed to scale to unlimited geotagged contributions from citizen scientists worldwide.

1.2 Conceptual Lineage

MacroscopeVR descends from a forty-year research program in multi-scale ecological observation, each generation advancing the core insight that understanding ecosystems requires navigating seamlessly between planetary context and organism-level detail:

1980s — LaserDisc Macroscope. Original concept at UC James San Jacinto Mountains Reserve. Interactive ecological navigation using LaserDisc random-access video — revolutionary at the time for enabling non-linear exploration through curated imagery at multiple scales. Hierarchical navigation from landscape to site to organism, pre-authored paths through fixed frames. The conceptual architecture was correct; the technology was thirty years premature.

1990s — macroscopeQT. Desktop application (Qt framework) with multi-scale environmental visualization, sensor integration, and hierarchical navigation. Brought the Macroscope concept into software with richer interactivity than analog video permitted. Still fundamentally curated — content was authored and organized by researchers.

2000s — Google Earth Integration. Collaboration with Sean Askay, whose Master's thesis "New Visualization Tools for Environmental Sensor Networks: Using Google Earth as an Interface to Micro-Climate and Multimedia Datasets" (UCLA, 2007) integrated Keyhole and then Google Earth with SketchUp 3D graphics for ecological site visualization. Simultaneously, the Center for Embedded Networked Sensing (CENS, $40M NSF Science and Technology Center) deployed wireless sensor networks and cameras across field stations, generating real-time environmental data streams. The Macroscope vision expanded from curated imagery to live sensor telemetry embedded in geographic context.

2010s — Sensor Networks and VeLEA. EcoView platform at Blue Oak Ranch Reserve (UC Berkeley, UC Natural Reserve System) with econodes — custom wireless sensor network nodes — integrated with ArcGIS for geospatial analysis of environmental data. VeLEA (Very Large Ecological Array) — desktop application with multi-scale environmental visualization, sensor integration, and geospatial data fusion. Demonstrated that ecological monitoring required not just visualization but continuous automated observation across instrumented landscapes.

2025 — ecoSLAM PADD Design. CNL-SP-2026-013 specified a hardware-maximalist Portable Assessment Device: ZED stereo camera (meso-scale depth), iPad LiDAR (macro-scale), Insta360 (context), Jetson Orin (edge compute), BirdWeather PUC (acoustic/environmental). Cost: $1,600–$6,000. Assumed 3D reconstruction required expensive sensors or multi-image overlap.

2025 August — Spatial Intelligence Framework v2. Abstract MEO/observatory architecture with multi-modal capture, processing workflows, database integration, and cross-modal intelligence patterns. Dual-mode concept: observatory (continuous temporal monitoring) vs ecoSLAM (intensive spatial surveys). Still assumed traditional photogrammetry pipelines (Agisoft Metashape, COLMAP Structure from Motion).

2026 January — SHARP Monocular 3D Reconstruction. Single Insta360 equirectangular image → neural network → 1.18M Gaussian ellipsoids encoding 3D structure. No depth sensor, no stereo pair, no overlapping images, no ground control points. Collapsed the entire $4,600 multi-scale hardware stack into a $500 consumer camera plus software.

2026 February — MacroscopeVR. ecoSPLAT viewer operational with globe, panoramas, terrariums, species, and analysis tools serving 33 research stations. The instrument naturally grew an observatory, and that observatory — globe to station to panorama to terrarium to organism — recapitulated the original 1980s Macroscope navigation hierarchy. Platform identity evolved to MacroscopeVR: a Participatory Planetary Observatory.

The through-line across four decades: the cognitive journey from global context to local detail is the fundamental operation of ecological understanding. Each technology generation — LaserDisc, desktop software, Google Earth, sensor networks, neural 3D reconstruction — provided better tools for that journey. MacroscopeVR is the first generation where the tools are accessible to anyone with a consumer camera and curiosity.

1.3 Platform Architecture

MacroscopeVR parallels iNaturalist's democratization of species identification. Where iNaturalist compressed years of taxonomic training into "point phone at organism, AI suggests ID," MacroscopeVR compresses vegetation structure assessment — traditionally requiring plot mensuration, specialized instruments, and forestry training — into "take a 360° panorama from standing position, pipeline extracts 3D structure."

The contribution model accepts dual input: species images for taxonomic identification and photosphere imagery for structural characterization. Both are geotagged and timestamped, linking organism and habitat at the same location. The result is what has never existed: species observations with full three-dimensional habitat context computationally attached.


2. System Architecture

2.1 Hardware Infrastructure

System | Hardware | Role
Data | MacBook Pro M4 Max | Heavy SHARP processing, batch operations
Galatea | Mac Mini M4 Pro, 1 Gbps fiber | Production web server, SHARP daemon
Hogwarts | Mac Mini M1 | Camera integration, testing
Sauron | Intel NUC i9 + GPUs | Secondary compute

2.2 Software Stack

Server: Apache 2 (WebMon managed), PHP 8.3+, MySQL 8.4+ (mysqli, no PDO), macOS Sonoma

Frontend: Vanilla JavaScript ES modules (no framework), Three.js (WebGL2 3D rendering), Pannellum (panoramic viewing), Mapbox GL JS (globe visualization with satellite imagery)

Processing: Python 3, SHARP (Apple ML Research, Gaussian splatting), ImageMagick v7, custom batch automation scripts

Authentication: PHP session-based, shared with canemah.org, credentials stored outside web root

2.3 Directory Structure

viewer/
  index.php              -- SPA shell (PHP session + auth), view layer switching
  includes/
    header.php           -- 106px header bar (logo, breadcrumb, status, admin)
    footer.php           -- 48px footer (copyright, links)
  js/
    config.js            -- constants, paths, API keys, 25-position grid definition
    api.js               -- fetch wrappers (stations, homepage, species, SHARP)
    dashboard.js         -- state manager, strip, sidebar, HUD, breadcrumb,
                            narrative loader, context-sensitive sidebar
    globe.js             -- Mapbox globe, markers, fly-to
    panorama.js          -- Pannellum init/destroy, grid hotspot overlay, gyroscope
    terrarium.js         -- WebGL2 point cloud + image viewer, 4 display modes
    inat.js              -- iNaturalist species map + sidebar
    marbles.js           -- WebGL marble spheres for globe markers
  css/
    variables.css        -- design tokens (colors, typography, spacing)
    dashboard.css        -- layout grid, header, footer, strip, sidebar, HUD,
                            narrative column, responsive breakpoints
    globe.css            -- marker and marble styles
    panel.css            -- station info panel
    terrarium.css        -- terrarium nav, hotspot, sidebar styles
    pannellum.css        -- Pannellum overrides, grid hotspot styles
    inat.css             -- species map, popup, list, taxon filter styles

api/
  stations.php           -- REST: station list with content fields, species count,
                            terrarium inventory via filesystem scan
  homepage_api.php       -- public: homepage narrative sections as JSON
  species_api.php        -- cached species list + iNat sync (paginated, 1-50km)
  sharp_api.php          -- SHARP job queue API (ecoSLAM_DB)

admin/
  index.php              -- dashboard landing (stats, panel links)
  stations.php           -- station editor (map relocator, metadata, sidebar content,
                            thumbnail defaults, iNat species)
  pages.php              -- homepage content editor (narrative sections)
  homepage_api.php       -- admin CRUD for homepage sections (session-protected)
  stations_api.php       -- admin station update API (session-protected)
  auth_check.php         -- session guard for admin routes
  login.php / logout.php -- session management

tools/
  extract_grid.py        -- equirectangular → 25 perspective tiles (CNL-SP-2026-013)
  batch_process.py       -- batch grid extraction across all stations
  batch_sharp.py         -- batch SHARP inference + point cloud conversion
  splat_to_pointcloud.py -- Gaussian splat PLY → point cloud PLY (with physical filtering)
  splat_filter.py        -- standalone interactive filter explorer (Finder dialogs)
  sharp_daemon.py        -- background daemon polling sharp_jobs table
  run_sharp.py           -- single-image SHARP pipeline wrapper
  audit_stations.py      -- database/filesystem audit
  update_station_json.py -- regenerate station.json from DB

standalone/
  terrarium_validate.html  -- 3,474-line research workbench (8 shaders, 4 modes,
                              5 tools, DBSCAN, PCA morphology) — parts catalog
  splat_filter_lab.html    -- quad-view GPU filter lab (968 lines)

2.4 Databases

virtual_field — the catalog (what's available, where it lives)

Table | Records | Purpose
vf_stations | 36 (33 active, 3 ghost) | Station metadata: coordinates, elevation, area, ecoregion, climate zone, descriptions
vf_homepage_content | 7 | Homepage narrative sections (hero, timeline, splat explainer, stats)
vf_species_cache | varies | iNaturalist species cache per station (taxon_id, names, photo_url, counts)

ecoSLAM_DB — the workbench (processing and job management)

Table | Records | Purpose
sharp_jobs | 3+ | SHARP processing queue (status, timestamps, error logs)
observation_equipment | 6 | Capture devices
observation_sessions | 1 | Observation metadata
12 empty schema tables | 0 | Future observation/correlation tracking

The separation is deliberate: the viewer only reads from virtual_field (the catalog), processing tools write to ecoSLAM_DB while working (the workbench) and update virtual_field when finished.

2.5 View States

The viewer navigates between view modes via a thumbnail strip:

Globe → Panorama → Terrarium → Species → Splat Lab (new)
            ↑                                   ↑
       Pannellum 360°                    Quad-view GPU filter
       gyroscope support                 opacity/scale sliders
       grid hotspot overlay              histogram visualization

Each mode has its own strip thumbnail context, sidebar content, rendering engine, and responsive behavior.


3. Processing Pipeline

3.1 Grid Extraction (CNL-SP-2026-013)

Each equirectangular panorama is decomposed into 25 perspective views at 50° FOV:

Ring | Cells | Elevation | Azimuth Spacing
Horizon | g00–g08 | 0° | 9 views at 40°
Upper | g09–g15 | +40° | 7 views at ~51.4°
Lower | g16–g22 | -40° | 7 views at ~51.4°
Zenith | g23 | +70° | 1 view
Nadir | g24 | -70° | 1 view

Implementation: extract_grid.py using OpenCV perspective projection from equirectangular source. Batch processing via batch_process.py with --resume support. Total extraction: 457 panoramas × 25 views = 11,425 perspective JPGs in ~25 minutes on Data.
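
The grid geometry follows directly from the table above. Below is a minimal sketch enumerating the 25 (yaw, pitch) view directions; cell numbering follows the table, the per-ring starting azimuths are an assumption, and the actual extract_grid.py additionally performs the OpenCV perspective projection for each direction.

  # Sketch: enumerate the 25-cell grid as (cell, yaw, pitch) view directions.
  def grid_positions():
      cells = []
      for i in range(9):                            # horizon ring: 9 views, 40° apart
          cells.append((f"g{i:02d}", i * 40.0, 0.0))
      for start, pitch in [(9, 40.0), (16, -40.0)]: # upper/lower canopy rings
          for i in range(7):                        # 7 views, ~51.4° apart
              cells.append((f"g{start + i:02d}", i * 360.0 / 7, pitch))
      cells.append(("g23", 0.0, 70.0))              # zenith cap
      cells.append(("g24", 0.0, -70.0))             # nadir cap
      return cells

  for cell, yaw, pitch in grid_positions():
      print(f"{cell}  yaw={yaw:6.1f}°  pitch={pitch:+5.1f}°")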

3.2 SHARP 3D Reconstruction

Each perspective image is processed through SHARP (Apple ML Research, 2024) for monocular Gaussian splatting:

  • Input: single perspective JPG (any resolution)
  • Output: PLY file containing ~1.18M 3D Gaussian ellipsoids
  • Each Gaussian encodes: position (xyz), color (spherical harmonics), opacity (stored as a logit, decoded via sigmoid), scale (stored as log values, decoded via exp), rotation (quaternion) — see the decode sketch after this list
  • Processing: ~5 seconds per view after initial model load (directory batching vs 18 seconds per-file)
  • Backend: Metal Performance Shaders (MPS) on Apple Silicon
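
A minimal decode sketch, assuming the property names of the common 3D Gaussian splatting PLY convention (opacity, scale_0 through scale_2); SHARP's exact field layout should be verified against its output files.

  import numpy as np
  from plyfile import PlyData  # pip install plyfile

  v = PlyData.read("g00.ply")["vertex"].data

  # opacity is stored as a logit; sigmoid maps it back to [0, 1]
  opacity = 1.0 / (1.0 + np.exp(-v["opacity"]))
  # scale is stored as log values; exp recovers the ellipsoid axis lengths
  scales = np.exp(np.stack([v["scale_0"], v["scale_1"], v["scale_2"]], axis=1))
  max_scale = scales.max(axis=1)

  print(f"{len(v):,} Gaussians, median opacity {np.median(opacity):.3f}")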

3.3 Gaussian Population Analysis

SHARP-generated splat data contains three distinct populations:

Population | Characteristics | Purpose
Physical objects | Compact scale, high opacity (≥0.5), on surfaces | Trees, ground, structures — the actual 3D structure
Atmosphere/fill | Large scale, low opacity (<0.1), distributed in volume | Sky, gaps, ambient light modeling
Artifacts | Variable scale and opacity; depth-plane slices, edge fringe | SHARP reconstruction noise, floaters

Understanding these populations is foundational to the filtering approach. The Splat Lab provides interactive tools to manipulate visibility by population.
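
Continuing from the decode sketch in Section 3.2, the populations can be approximated with boolean masks. Thresholds follow the table above, with the 0.05 scale cutoff borrowed from Section 3.4; treating everything that is neither physical nor atmospheric as an artifact is a simplification, since real artifact detection also leans on geometric signatures.

  # Thresholds from the population table; "everything else" counts as artifacts.
  physical   = (opacity >= 0.5) & (max_scale <= 0.05)
  atmosphere = (opacity <  0.1) & (max_scale >  0.05)
  artifacts  = ~(physical | atmosphere)

  for name, mask in [("physical", physical), ("atmosphere", atmosphere),
                     ("artifacts", artifacts)]:
      print(f"{name:10s} {mask.sum():>9,}  ({100 * mask.mean():5.1f}%)")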

3.4 Point Cloud Conversion

splat_to_pointcloud.py converts SHARP Gaussian PLY to simplified xyz+rgb point cloud PLY (a minimal conversion sketch follows this list):

  • Standard mode: all points, 15 bytes/vertex
  • Physical mode (--physical): opacity threshold (default ≥0.5) + max scale threshold (default ≤0.05) to isolate physical object Gaussians
  • CLI arguments: --opacity-threshold, --max-scale, --physical-opacity, --physical-scale
  • Distribution analysis with ASCII histograms via print_stats()
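
A minimal conversion sketch under the same PLY-layout assumptions as the decode example in Section 3.2, reusing the physical mask from Section 3.3. RGB is recovered from the DC spherical-harmonics term using the standard 3DGS constant, which is assumed to match SHARP's encoding; the output matches the compact 15-byte vertex layout (3 × float32 + 3 × uchar).

  import numpy as np
  from plyfile import PlyData, PlyElement

  SH_C0 = 0.28209479177387814           # zeroth-order spherical-harmonic constant

  keep = physical                        # boolean mask from the population sketch
  xyz = np.stack([v["x"], v["y"], v["z"]], axis=1).astype(np.float32)[keep]
  dc  = np.stack([v["f_dc_0"], v["f_dc_1"], v["f_dc_2"]], axis=1)
  rgb = np.clip((SH_C0 * dc + 0.5) * 255.0, 0, 255).astype(np.uint8)[keep]

  out = np.empty(len(xyz), dtype=[("x", "f4"), ("y", "f4"), ("z", "f4"),
                                  ("red", "u1"), ("green", "u1"), ("blue", "u1")])
  out["x"], out["y"], out["z"] = xyz.T
  out["red"], out["green"], out["blue"] = rgb.T
  PlyData([PlyElement.describe(out, "vertex")]).write("g00_physical.ply")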

3.5 Batch Processing

batch_sharp.py orchestrates full-station processing:

  • Directory-mode SHARP inference (amortized model load)
  • Automatic point cloud conversion post-inference
  • Resume support for interrupted runs
  • Current dataset: 258GB of splat and point cloud PLY files, estimated 635GB at completion
  • Storage capacity: 3.2TB available on Galatea

3.6 SHARP Daemon

sharp_daemon.py runs on Galatea via launchd plist:

  • Polls sharp_jobs table for queued work
  • Runs SHARP with Metal/MPS backend under user context (not Apache's _www)
  • Updates job status: queued → processing → complete/error
  • Auto-restart via launchd on failure

This architecture solves the GPU access problem: Apache's _www user has no Metal GPU context. The daemon runs as mikehamilton with full MPS access, and PHP communicates via the database job queue.
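
In outline, the poll-process-update cycle looks like the sketch below. Column names beyond status, and the mysql-connector-python dependency, are assumptions; the operational daemon on Galatea may differ in detail.

  import subprocess
  import time
  import mysql.connector  # assumed driver; credentials elided here

  db = mysql.connector.connect(user="daemon", database="ecoSLAM_DB")

  while True:
      cur = db.cursor(dictionary=True)
      cur.execute("SELECT id, input_path FROM sharp_jobs "
                  "WHERE status = 'queued' ORDER BY id LIMIT 1")
      job = cur.fetchone()
      if job is None:
          time.sleep(30)                 # nothing queued; poll again shortly
          continue
      cur.execute("UPDATE sharp_jobs SET status = 'processing' WHERE id = %s",
                  (job["id"],))
      db.commit()
      try:
          # run_sharp.py is the single-image SHARP wrapper (Section 2.3)
          subprocess.run(["python3", "run_sharp.py", job["input_path"]],
                         check=True)
          cur.execute("UPDATE sharp_jobs SET status = 'complete' WHERE id = %s",
                      (job["id"],))
      except subprocess.CalledProcessError as err:
          cur.execute("UPDATE sharp_jobs SET status = 'error', error_log = %s "
                      "WHERE id = %s", (str(err), job["id"]))
      db.commit()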


4. Completed Milestones

Phase 1: Foundation and Pipeline (January–February 2, 2026)

4.1 ecoSLAM Specification

Authored CNL-SP-2026-013 defining the 25-position spherical grid, SHARP processing pipeline, point cloud conversion, and metric extraction methodology. Established the terrarium as the atomic unit of 3D ecological measurement.

4.2 Prototype Viewer and Research Workbench

Built terrarium_validate.html — a monolithic standalone research tool (3,474 lines) containing:

  • WebGL2 rendering with 8 shader programs
  • 4 display modes: image, points, splat, wireframe
  • 5 interactive tools: orbit camera, point-to-point measurement, frustum box selection, flood-fill grow selection, find-similar with PCA morphological descriptors
  • DBSCAN 6D clustering with automated parameter optimization
  • Named object management with save/clear
  • Web Worker for k-nearest-neighbor wireframe generation

The grow selection tool validated the core thesis: ecological structure is faithfully represented in SHARP point clouds. A birch trunk test (2 clicks, 2,739 points) produced a clean ground-to-canopy selection, while DBSCAN fragmented the same trunk into 98 horizontal slices. This demonstrated that the problem was algorithmic (DBSCAN's density-based approach fragments depth-plane artifacts), not representational.

4.3 Batch Processing Campaign

  • Grid extraction: 457 panoramas × 25 views = 11,425 perspective JPGs extracted in ~25 minutes
  • SHARP processing: directory batching reduced time from 18s/file to ~5s/view
  • Point cloud conversion: all processed views converted to simplified xyz+rgb PLY
  • Total output: 258GB of splat and point cloud data with zero errors
  • Tools delivered: extract_grid.py, batch_process.py, batch_sharp.py, splat_to_pointcloud.py

4.4 Processing Daemon Architecture

Designed and implemented the SHARP daemon (sharp_daemon.py) with launchd management on Galatea. Solved the Metal GPU access problem by running processing under user context rather than Apache's _www. PHP submits jobs to sharp_jobs table; daemon polls, processes, and updates status.

Phase 2: Web Application (February 8–9, 2026)

4.5 Data Layer and API

Built REST API endpoints on Galatea:

  • stations.php — station list with metadata, panorama inventory, terrarium availability via filesystem scan
  • homepage_api.php — homepage narrative sections as JSON from vf_homepage_content
  • species_api.php — iNaturalist species cache with sync, configurable radius (1-50km), pagination (200/page, up to 10,000 species)
  • sharp_api.php — SHARP job queue submission, status, and management

4.6 Globe Interface

Mapbox GL globe with:

  • WebGL marble spheres as station markers, textured with panorama thumbnails
  • Fly-to animation on station click
  • Station count display (filtered to exclude 3 ghost stations without data)
  • Satellite imagery base layer

4.7 Terrarium Viewer Integration (Task 1)

Complete panorama-to-3D navigation flow:

  • 25 Pannellum hotspots overlaid on panorama at grid coordinates (yaw=azimuth, pitch=elevation)
  • Hotspots styled by availability: grid-available (orange glow, has point cloud), grid-image (dim, perspective image only)
  • Click-through to WebGL2 terrarium viewer with 4 display modes (image, points, splat, wireframe)
  • Directional navigation thumbnails for cell-to-cell movement within a panorama
  • Sidebar switches to terrarium controls (mode selector, point size, wireframe density)
  • Image mode with correct aspect ratio (uScale letterbox/pillarbox shader uniform)

4.8 Database Audit and Station Integration (Task 2)

  • audit_stations.py created — full inventory comparing filesystem against database
  • 33 stations on disk confirmed, 3 ghost stations identified (archbold, blackfork, umiss — possible missing YouTube source videos)
  • Duplicate coordinates found (chinacamp/rushranch) and corrected
  • Rush Ranch relocated to 38.1683, -122.0348 (Solano Land Trust, verified against OBFS records)
  • notes and is_active columns added to vf_stations
  • Institution data populated for 23 NULL fields
  • All 33 station.json files regenerated from database
  • 458 thumbnails generated via ImageMagick across all stations
  • Symlinked station_thumb.jpg for globe marble textures

4.9 SPA Shell and Authentication (Task 5)

  • Converted index.html to index.php with PHP session management
  • Shared header (header.php): 106px bar with 100px ecoSPLAT logo, breadcrumb (station/view), station stats, conditional admin link
  • Shared footer (footer.php): 48px bar with copyright, location, Journal/Canemah.org/Contact links
  • Breadcrumb updates on every state change
  • Canemah authentication integration with session guard (auth_check.php)
  • Credentials stored in /Library/WebServer/secure/credentials/ecosplat.php outside web root
  • Admin API endpoints require session auth; public endpoints remain open

4.10 Homepage Narrative (Task 4B)

  • Left column (380px) with project story from vf_homepage_content (7 sections)
  • Sections: hero, timeline, splat explainer, stats bar
  • Content served via homepage_api.php, editable in admin pages editor
  • Narrative visible in home state, hides when station selected
  • Responsive: stacks above globe at 40vh on tablet, 30vh on phone

4.11 Context-Sensitive Sidebar (Task 4C)

Content adapts by view mode:

  • Globe: place description + facility description cards (blue/green accents)
  • Habitat/Ecosystems: ecology description card (orange accent)
  • Terrarium: mode selector, point size, wireframe density controls
  • Species: cached species list grouped by taxon with photos and counts
  • Empty fields show "Add [field] in Admin" placeholder

4.12 Responsive Design (Task 4D)

Three breakpoints:

  • Desktop (>1024px): full sidebar + thumbnail strip
  • Tablet (768–1024px): icon strip, sidebar as fixed drawer, narrative stacks
  • Phone (<480px): compact icons, bottom sheet sidebar, minimal header

4.13 Admin Dashboard (Task 6A-6B)

Station management (admin/stations.php):

  • Full metadata editor with sortable table and inline editing
  • Mapbox satellite map with draggable marker and radius overlay for coordinate editing
  • Geography fields: elevation, area, established year, climate zone, ecoregion
  • Viewer sidebar content: place_description, facility_description, ecology_description
  • Thumbnail defaults: photopoint, season, year, grid cell
  • Default species: iNaturalist taxon search and selection
  • "Save and Next Unverified" workflow for systematic coordinate verification

Homepage content editor (admin/pages.php):

  • Per-section save with textareas and JSON hints
  • Active/inactive toggle per section
  • About page tab placeholder

Phase 3: Species and Mobile (February 9–10, 2026)

4.14 iNaturalist API Integration (Task 3)

Complete species view layer:

  • inat.js module with Mapbox satellite map displaying observations as taxon-colored markers
  • Sidebar species list grouped by iconic taxon (Plantae, Aves, Mammalia, Insecta, Fungi, Reptilia, Amphibia) with photos and observation counts
  • Species data cached to vf_species_cache table
  • Sync endpoint: species_api.php?station=slug&sync=1&radius=20
  • Pagination: 200/page, up to 50 pages = 10,000 species per station
  • Station-specific radius multipliers (1×, 2×, 5×) stored in database
  • Taxon isolation filters (click taxon to show only that group)
  • Binomial nomenclature display (genus + species only, not subspecies)
  • Common name display with three-tier photo fallback logic (taxon photo → default photo → observation photo URL)
  • Enhanced popups: common name header, 2× size, lightbox-style photo viewing
  • Auto-close popups on map click
  • Genus-level filtering for subspecies deduplication
  • Dimmed styling for filtered-out taxon rows with strikethrough text

4.15 UI Refinements from Beta Testing

Based on feedback from Merry (beta tester on iPad Mini):

  • Label changes: "Globe" → "Landscape", "Habitat" → "Ecosystems", "Species" → "Biodiversity"
  • Species list redesigned with common names displayed prominently
  • Font sizes increased for accessibility on tablet screens
  • Sidebar persistence bug fixed (sidebar staying open across view changes)
  • Taxon count correction (was double-counting some species)

4.16 Header Metadata Redesign

  • Station name typography improvements for readability at various lengths
  • Context-sensitive status information architecture (station stats adapt by view mode)
  • Data density tradeoffs between header and sidebar resolved

4.17 Mobile Touch and Sensor Integration

Panorama (Pannellum):

  • iOS 13+ DeviceOrientationEvent permission handling
  • Gyroscope-driven look-around via quaternion-based rotation matrix
  • Portrait/landscape orientation detection and compensation
  • Compass bearing display in HUD
  • Year selector always-visible fix

Terrarium (WebGL2):

  • Multi-touch gesture support: single-finger orbit, pinch-zoom, tap-to-measure
  • Image mode guards matching mouse behavior
  • Touch event coordinate transformation for WebGL canvas

4.18 WebXR VR Implementation (Task 4F)

  • webxr.js module with progressive enhancement for VR headsets
  • Stereo camera rendering for panorama viewer (Pannellum in VR)
  • Stereo camera rendering for terrarium point cloud viewer
  • VR button appears when WebXR available
  • Self-contained GL context management (separate from main viewer)
  • Controller interaction implementation
  • Dashboard UI wiring and CSS styling
  • Debugged: canvas visibility, GL context conflicts, model matrix calculations

Phase 4: Analysis Toolkit (February 11, 2026)

4.19 Gaussian Splat Population Analysis

Identified and characterized the three Gaussian populations in SHARP output through systematic investigation of opacity and scale distributions. Established that opacity is stored as logits (decoded via sigmoid) and scale as log values (decoded via exp). This understanding is foundational to all filtering approaches.

4.20 Physical Filtering Enhancement

Enhanced splat_to_pointcloud.py with --physical flag:

  • Combined opacity threshold (default ≥0.5) and max scale threshold (default ≤0.05)
  • Isolates physical object Gaussians from atmosphere fill and artifacts
  • Distribution analysis with population estimates
  • CLI arguments for fine-tuning thresholds per dataset

4.21 Standalone Interactive Filter Explorer

splat_filter.py — Python tool with native macOS Finder dialogs:

  • Full opacity and scale distribution analysis with ASCII histograms
  • Preview filter results before saving
  • Iterative threshold adjustment in terminal session
  • Direct PLY export of filtered point clouds

4.22 Splat Filter Lab

splat_filter_lab.html — integrated browser-based GPU filter lab (968 lines):

  • Client-side SHARP PLY parsing (62+ Gaussian properties per vertex)
  • WebGL2 rendering with GPU-based filtering via shader uniforms (instant response)
  • Interactive histogram visualization with threshold overlays
  • Three view modes: Physical, Ghost (atmosphere only), Removed (artifacts)
  • Four presets: None, Light, Standard, Strict
  • Browser-native PLY export of filtered data
  • Quad-view layout: four independent cameras (Nadir/top-down, Front, Left, Right)
  • Per-viewport mouse interaction (drag to orbit, scroll to zoom)
  • Viewport labels with hover highlighting and crosshair dividers
  • 60 FPS with 1.18M Gaussians across four simultaneous viewports

4.23 Terrarium Image Aspect Ratio Fix

Grid images in the terrarium viewer were stretching to fill the viewport. Added a uScale uniform to the image vertex shader, computing proper letterbox/pillarbox ratios from the loaded image's dimensions. Images now display with correct proportions.
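
The underlying math is a pair of scale factors applied to the full-screen quad. A minimal sketch of that computation (the function name is illustrative; the shader consumes the same two factors through uScale):

  def compute_uscale(image_w, image_h, viewport_w, viewport_h):
      """Scale factors that preserve image aspect ratio inside the viewport."""
      image_aspect = image_w / image_h
      viewport_aspect = viewport_w / viewport_h
      if image_aspect > viewport_aspect:
          # image wider than viewport: full width, letterbox top/bottom
          return (1.0, viewport_aspect / image_aspect)
      # image taller than viewport: full height, pillarbox left/right
      return (image_aspect / viewport_aspect, 1.0)

  print(compute_uscale(1600, 1200, 1920, 1080))  # -> (0.75, 1.0), pillarboxed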


5. Active Task List

Task A: Terrarium Inventory Table

Status: Not started Database: virtual_field Priority: High — eliminates per-request filesystem scanning

Create vf_terrariums table: one row per grid cell per panorama per station. Columns: station slug, panorama identifier, cell index (g00–g24), image extracted (bool), SHARP completed (bool), point cloud converted (bool), file sizes, timestamps. Processing tools write status on completion. Viewer API reads from table instead of scanning directories.
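
A minimal schema sketch, with column names extrapolated from the description above rather than taken from a final design:

  import mysql.connector  # assumed driver, as in the daemon sketch

  db = mysql.connector.connect(user="admin", database="virtual_field")
  db.cursor().execute("""
      CREATE TABLE IF NOT EXISTS vf_terrariums (
          id               INT AUTO_INCREMENT PRIMARY KEY,
          station_slug     VARCHAR(64)  NOT NULL,
          panorama_id      VARCHAR(128) NOT NULL,
          cell             CHAR(3)      NOT NULL,   -- 'g00' .. 'g24'
          image_extracted  TINYINT(1)   NOT NULL DEFAULT 0,
          sharp_completed  TINYINT(1)   NOT NULL DEFAULT 0,
          pointcloud_done  TINYINT(1)   NOT NULL DEFAULT 0,
          splat_bytes      BIGINT NULL,
          pointcloud_bytes BIGINT NULL,
          updated_at       TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                                     ON UPDATE CURRENT_TIMESTAMP,
          UNIQUE KEY uq_station_pano_cell (station_slug, panorama_id, cell)
      )""")
  db.commit()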

Task B: Platform Identity

Status: Not started Priority: High — precedes narrative rewrite

Rename ecoSPLAT to MacroscopeVR across all codebase references, browser title, header/footer, homepage content, API documentation, config constants, admin dashboard. Subtitle: "A Participatory Planetary Observatory." The ecoSLAM specification and methodology retain their names as the underlying measurement theory.

Task C: Homepage Narrative Rewrite

Status: Not started Priority: High Depends on: Task B

Rewrite the seven vf_homepage_content sections to reflect PPO framing: MacroscopeVR identity, forty-year Macroscope arc, contribution model, citizen science parallel with iNaturalist, multi-scale navigation vision, updated splat explainer with population analysis, current dataset statistics.

Task D: Data Model Redesign

Status: Not started Priority: High — foundational architecture change Depends on: Task A

Shift from station-centric to place-based, device-agnostic architecture:

  • vf_places — geolocated points replacing stations as the atomic unit. GPS coordinates, iNaturalist-derived geographic hierarchy (Places API), ecosystem classification (inferred from observations). A "station" is a place with an institutional name and repeated visits. A citizen contribution is a place with a single panorama and a GPS tag. Same schema, same pipeline, same viewer.
  • vf_panoramas — one per capture at a place. Timestamp, device type (Insta360, Antigravity A1, iPhone, Polycam), observer, source format (equirectangular, standard photo, LiDAR scan), processing status.
  • vf_terrariums — one per grid cell per panorama (extended from Task A). Processing status, file references, structural metrics.

Input routing by source type:

  • 360° equirectangular (Insta360, Antigravity A1 drone) → grid extraction → SHARP → terrariums
  • Standard photo (iPhone, any camera) → direct SHARP as single terrarium
  • LiDAR scan (Polycam, iPad Pro) → point cloud/mesh ingestion, skips SHARP entirely
  • Species images → iNaturalist-style taxonomic identification pipeline

All paths converge at the same place record. iNaturalist APIs provide geographic and taxonomic context from coordinates alone.

Task E: Contribution Pipeline GUI

Status: Not started Priority: Medium Depends on: Task D, existing Galatea infrastructure (daemon, launchd, SHARP all operational)

Frontend for the upload → route → process → validate → publish workflow. Parallels iNaturalist contribution model: provide geotagged input, backend determines source type and routes to appropriate pipeline, human validates missing metadata or classification, processed result publishes to viewer. Dual input: species images and structural panoramas from the same location and time.

Task F: Splat Lab View Mode

Status: Prototype delivered Priority: Medium Depends on: Task B

Integrate the quad-view GPU filter lab as a navigable view state in MacroscopeVR. Appears as a strip thumbnail category alongside panorama, terrarium, and species views. Loads raw SHARP splat PLY files for interactive manipulation.

Progressive enhancement roadmap (tools migrating from terrarium_validate.html parts catalog):

  • Phase 1: Current filter lab capabilities (delivered)
  • Phase 2: Measurement tool (point picking, 3D distance)
  • Phase 3: Grow selection (flood-fill with color similarity threshold)
  • Phase 4: Find-similar (PCA morphological descriptors)
  • Phase 5: Segmentation (DBSCAN clustering, named objects)

Task G: Deploy Session Deliverables to Galatea

Status: Ready for transfer Priority: Immediate

Files: terrarium.js (aspect ratio fix), splat_to_pointcloud.py (physical filtering), splat_filter.py (interactive explorer), splat_filter_lab.html (quad-view filter lab).


6. Priority Order and Dependencies

1. Task G — Deploy deliverables .............. immediate, no dependencies
2. Task A — Terrarium inventory table ........ prerequisite for D
3. Task B — Platform identity rename ......... prerequisite for C and F
4. Task C — Homepage narrative rewrite ....... depends on B
5. Task D — Data model redesign .............. depends on A
6. Task F — Splat Lab view mode .............. depends on B
7. Task E — Contribution pipeline GUI ........ depends on D

Dependency Graph:

Task G (deploy) .............. immediate, no dependencies
Task A (inventory table) ---> Task D (data model extends it)
Task B (identity) ---------> Task C (narrative uses new name)
Task B (identity) ---------> Task F (lab needs UI integration)
Task D (data model) -------> Task E (pipeline writes to new schema)

Tasks A, B, and G are independent and can proceed in parallel. Task F is independent of D and can proceed once B is complete.


7. Removed Tasks

Task | Reason
Task 4E (Mobile HUD) | Paused — gyro/compass pipeline challenges in Pannellum, not high priority
Task 4F (WebXR) | Completed February 10. Full-site VR navigation (globe in VR) is a separate future project
Task 6C (Modularize terrarium_validate) | Superseded by Task F. Splat Lab is the successor; terrarium_validate becomes a parts catalog
Task 6D (Marble-Building Pipeline) | Absorbed into Task E. Contribution pipeline replaces admin-only wizard
Task 6E (Photosphere CRUD) | Absorbed into Task D. Place-based data model includes panorama table design
Task 7A-7C (Galatea Environment) | Already operational. Daemon, launchd, Python environment all configured

8. Key Learnings and Technical Principles

8.1 SHARP Point Cloud Characteristics

SHARP-generated point clouds require fundamentally different analysis approaches than traditional LiDAR data. Uniform density and depth-plane artifacts mean that automated clustering algorithms (DBSCAN) fragment ecological objects into horizontal slices rather than identifying coherent organisms. The solution: leverage human visual intelligence through interactive tools (grow selection, flood-fill with color similarity thresholds) rather than attempting fully automated segmentation.
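
As a sketch of that interactive approach, grow selection can be expressed as a breadth-first flood fill over a k-d tree of point positions, gated by color similarity to the seed point; the radius and tolerance values here are illustrative, not the viewer's.

  from collections import deque
  import numpy as np
  from scipy.spatial import cKDTree

  def grow_selection(xyz, rgb, seed, radius=0.05, color_tol=40.0):
      """Indices reachable from `seed` through nearby, similar-color points."""
      tree = cKDTree(xyz)
      seed_color = rgb[seed].astype(float)
      selected, frontier = {seed}, deque([seed])
      while frontier:
          i = frontier.popleft()
          for j in tree.query_ball_point(xyz[i], r=radius):
              if j not in selected and \
                 np.linalg.norm(rgb[j].astype(float) - seed_color) <= color_tol:
                  selected.add(j)
                  frontier.append(j)
      return np.fromiter(selected, dtype=int)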

8.2 The Three Gaussian Populations

Every SHARP reconstruction contains three distinct populations distinguishable by opacity and scale. Physical filtering (opacity ≥0.5, scale ≤0.05) reliably isolates structural elements from atmosphere fill and artifacts. These thresholds are empirically derived but consistent across diverse ecosystems. The Splat Lab provides real-time interactive exploration of these population boundaries.

8.3 Directory-Mode Processing Efficiency

SHARP directory batching amortizes model load time: ~5 seconds per view versus ~18 seconds for per-file invocation. This 3.6× speedup was critical for processing 11,425 views in practical timeframes.

8.4 Hardware Stack Collapse

The ecoSLAM PADD design specified $4,600 in hardware (ZED stereo camera, iPad LiDAR, Insta360, Jetson Orin, BirdWeather PUC). SHARP monocular depth estimation collapsed this to a $500 consumer 360° camera plus software. ZED's meso-scale depth, COLMAP multi-view pipelines, and iPad LiDAR detail are all replaced by neural network inference from a single image.

8.5 Development Approach

Iterative prototype-and-refine with working implementations for immediate feedback rather than extensive specifications. Complete file deliveries rather than code snippets. Test with representative data (ELC station) before batch processing. Visual hierarchy and smooth transitions over complex feature sets. Simplicity, readability, and maintainability as primary code values.


9. Dataset Summary

Metric | Value
Research stations | 33 (with data) + 3 (ghost records)
Panoramas | 457
Perspective views (terrariums) | 11,425
Grid cells per panorama | 25 (CNL-SP-2026-013 specification)
Thumbnails generated | 458
Splat + point cloud data | ~258GB (current), ~635GB (estimated complete)
Available storage (Galatea) | 3.2TB
Gaussians per view | ~1.18M
Processing time per view | ~5 seconds (directory batch mode)
Species cached | varies per station (up to 10,000 per sync)

10. Reference Documents

CNL-SP-2026-013 — ecoSLAM Specification v1.0 (January 2026)
CNL-FN-2026-019 — ecoSPLAT to MacroscopeVR Field Note (February 2026)
Spatial Intelligence Framework v2 (August 2025)
ecoSLAM PADD Design (2025)
Cerpa, A., Elson, J., Estrin, D., Girod, L., Hamilton, M., Zhao, J. (2001). "Habitat Monitoring: Application Driver for Wireless Communications Technology." ACM SIGCOMM Workshop, Costa Rica.
Askay, S. (2007). "New Visualization Tools for Environmental Sensor Networks: Using Google Earth as an Interface to Micro-Climate and Multimedia Datasets." Master's thesis, UC Los Angeles, eScholarship. https://escholarship.org/uc/item/20f0w8wn
CENS: Center for Embedded Networked Sensing, $40M NSF Science and Technology Center (2002–2012)
EcoView / Econode Sensor Network Platform, Blue Oak Ranch Reserve (2010s)
VeLEA: Very Large Ecological Array (2010s)
macroscopeQT Desktop Application (1990s)
Kerbl, B., et al. (2023). "3D Gaussian Splatting for Real-Time Radiance Field Rendering."
SHARP: Apple ML Research, Monocular Gaussian Splatting (2024)
Van Horn, G., et al. (2018). "The iNaturalist Species Classification and Detection Dataset."

11. Session Protocol

  • Start each task by reviewing relevant source files in project knowledge
  • Deliver complete files ready for BBEdit, not snippets
  • Wait for "proceed" after each delivery
  • Test against ELC data first, then verify with a second station
  • For admin dashboard: build and test locally on Galatea before public deployment
  • Coffee Mode for essays and intellectual synthesis; Laboratory Mode for code and architecture

Cite This Document

Hamilton, M. P. (2026). "MacroscopeVR Technical Specification and Project Report." Canemah Nature Laboratory Technical Note CNL-TN-2026-021. https://canemah.org/archive/CNL-TN-2026-021

BibTeX

@techreport{cnl2026macroscopevr,
  author      = {Hamilton, Michael P.},
  title       = {MacroscopeVR Technical Specification and Project Report},
  institution = {Canemah Nature Laboratory},
  year        = {2026},
  number      = {CNL-TN-2026-021},
  month       = feb,
  url         = {https://canemah.org/archive/document.php?id=CNL-TN-2026-021}
}

Permanent URL: https://canemah.org/archive/document.php?id=CNL-TN-2026-021