CNL-TN-2026-009: Macroscope Virtual Field Explorer

Technical Specification for an Immersive Ecological Research Platform

Document Type: Technical Note
Document ID: CNL-TN-2026-009
Version: 4.0
Date: January 18, 2026
Author: Dr. Michael P. Hamilton
Institution: Canemah Nature Laboratory
Status: Implementation Specification

AI Collaboration Disclosure: This technical specification was developed collaboratively with Claude (Anthropic, Opus 4.5), with the AI contributing to system architecture design, workflow specification, and technical documentation. Dr. Hamilton provided the conceptual vision, domain expertise, and integration requirements based on forty years of ecological sensor deployment and viewmap research.


Abstract

This document specifies the Macroscope Virtual Field Explorer (MVFE), an immersive web-based platform enabling students, researchers, and citizen scientists to conduct quantitative ecological investigations within 360° spherical environments captured at field stations worldwide. The system integrates existing archives of seasonal panoramic video from The Virtual Field network with on-demand 3D Gaussian splat generation via Apple’s SHARP model, creating “measurement terrariums” that transform passive viewing into active spatial analysis.

The platform is built on a LAMP stack (Linux, Apache, MySQL, PHP) with vanilla JavaScript for frontend interactions and Python for computational backends. Users navigate a global constellation of field stations, immerse themselves in seasonal habitat documentation, deploy virtual instruments to extract quantitative measurements, and compare findings across ecosystems and seasons. The system supports screen-based interaction with future WebXR extension planned.

Key Design Principle: Pattern recognition remains with the human observer. The system provides measurement capability and contextual data; meaning emerges from the investigator’s interpretive framework shaped by experience.


1. Introduction

1.1 Background

The Virtual Field project, funded by the National Science Foundation during the COVID-19 pandemic, assembled a network of field stations collecting standardized 360° video documentation at solstices and equinoxes. This archive now contains over 280 five-minute spherical videos spanning 70+ locations across 25+ field stations, from Costa Rican tropical forests to Montana conifer stands to Belgian heathlands.

These videos were conceived as passive educational resources—windows into ecosystems for remote learners. This specification describes a transformation: converting that passive archive into an active research instrument through integration with rapid 3D scene reconstruction technology.

1.2 The SHARP Capability

Apple’s SHARP model (Single-image to High-fidelity 3D with Accurate shape and Realistic aPpearance) generates metric 3D Gaussian splat representations from single photographs in under one second. Previous work at Canemah Nature Laboratory (CNL-TN-2026-005) demonstrated that perspective views extracted from equirectangular 360° imagery can be processed through SHARP to create bounded 3D “terrariums”—measurement windows preserving depth relationships within each viewing frustum.

1.3 Design Philosophy

The Macroscope Virtual Field Explorer embodies four decades of viewmap research, from the original 1986 videodisc-based EXPLORER mode (“scan right, scan left, scan up, scan down, zoom to ecosystem”) through contemporary neural radiance fields. The fundamental insight remains constant: ecosystems become comprehensible when made navigable. MVFE extends navigability from viewing to measurement, from observation to quantification.

1.4 Human-Centered Investigation

As articulated in “Observatories of Complexity: Two Macroscopes for the Biosphere and Noosphere” (Hamilton, 2026), the instruments surface data; meaning emerges from the observer’s predictive model shaped by experience. A trained ecologist looking at a 2024 terrarium beside a 2021 terrarium will recognize a windthrow event. A student may need guidance to see it. Neither requires the system to flag it automatically—the visual evidence is present in the data; the interpretation is a human act.

The system computes; the human interprets.


2. System Architecture

2.1 Architectural Overview

┌─────────────────────────────────────────────────────────────────────┐
│                    MACROSCOPE VIRTUAL FIELD EXPLORER                │
│                    Human-Driven Investigation Platform              │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  USER-DRIVEN INVESTIGATION WORKFLOW                                 │
│                                                                     │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐          │
│  │    GLOBE     │───▶│   STATION    │───▶│  IMMERSION   │          │
│  │  INTERFACE   │    │    PORTAL    │    │   CHAMBER    │          │
│  │  (Mapbox)    │    │    (PHP)     │    │ (Pannellum)  │          │
│  └──────────────┘    └──────────────┘    └──────────────┘          │
│                                                 │                   │
│                                                 ▼                   │
│                                          ┌──────────────┐          │
│                                          │   TRICORDER  │          │
│                                          │ Gaze Capture │          │
│                                          └──────────────┘          │
│                                                 │                   │
│                              ┌──────────────────┼──────────────┐   │
│                              ▼                  ▼              ▼   │
│                       ┌──────────┐       ┌──────────┐   ┌────────┐│
│                       │  SINGLE  │       │ TEMPORAL │   │ FIELD  ││
│                       │TERRARIUM │       │ COMPARE  │   │NOTEBOOK││
│                       └──────────┘       └──────────┘   └────────┘│
│                              │                  │              │   │
│                              └────────┬─────────┘              │   │
│                                       ▼                        │   │
│                              ┌──────────────┐                  │   │
│                              │ MEASUREMENT  │◀─────────────────┘   │
│                              │    TOOLS     │                      │
│                              └──────────────┘                      │
│                                       │                            │
│                                       ▼                            │
│                              ┌──────────────┐                      │
│                              │    EXPORT    │                      │
│                              │  (CSV/JSON)  │                      │
│                              └──────────────┘                      │
│                                                                     │
├─────────────────────────────────────────────────────────────────────┤
│                         BACKEND SERVICES                            │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌────────────┐│
│  │   SHARP     │  │  iNaturalist│  │   Session   │  │   Media    ││
│  │   Engine    │  │     API     │  │   Store     │  │  Archive   ││
│  │  (Flask)    │  │  (Public)   │  │  (MySQL)    │  │  (Local)   ││
│  └─────────────┘  └─────────────┘  └─────────────┘  └────────────┘│
└─────────────────────────────────────────────────────────────────────┘

2.2 Technology Stack

Component | Technology | Function
Globe Interface | Mapbox GL JS | Global field station visualization and navigation
Station Portal | PHP + Mapbox GL JS | Site-level viewmap locations with metadata
Immersion Chamber | Pannellum | 360° spherical viewing
Field Notebook | Vanilla JS overlay | Habitat metadata, user notes, iNaturalist context
Tricorder | Vanilla JS + Canvas | Gaze-directed sampling and perspective extraction
SHARP Engine | Flask server → Python | On-demand Gaussian splat generation with MPS acceleration
Terrarium View | WebGL2 splat renderer | Interactive 3D measurement environment
Session Store | MySQL + localStorage | User data persistence
Media Archive | Local filesystem | Equirectangular frames extracted from video

2.3 Data Flow

User gaze in Pannellum
        │
        ▼
┌───────────────────┐
│  Capture gaze:    │
│  yaw, pitch, fov  │
└─────────┬─────────┘
          │
          ▼
┌───────────────────┐     ┌───────────────────┐     ┌───────────────────┐
│  Extract perspec- │────▶│  Flask server     │────▶│  SHARP inference  │
│  tive from equi-  │     │  (localhost:5005) │     │  (~10s with MPS)  │
│  rectangular      │     └───────────────────┘     └─────────┬─────────┘
└───────────────────┘                                         │
                                                              ▼
                                                    ┌───────────────────┐
                                                    │  PLY Gaussian     │
                                                    │  splat file       │
                                                    └─────────┬─────────┘
                                                              │
                                                              ▼
                                                    ┌───────────────────┐
                                                    │  Splat viewer     │
                                                    │  (SAME viewpoint) │
                                                    └───────────────────┘

Critical Design Decision: The splat viewer initializes at the identical perspective (yaw, pitch, field of view) that the user was viewing when they triggered SCAN. The user should not have to reorient—they are looking at the same scene, now rendered in navigable 3D.


3. Modular Frontend Architecture

3.1 File Organization

The frontend has been refactored into modular components for maintainability:

Macroscope-Virtual-Field-Explorer/
├── index.php                    # HTML template + PHP config (~100 lines)
├── css/
│   └── explorer.css             # All application styles (~530 lines)
├── js/
│   ├── splat-viewer.js          # WebGL2 Gaussian splat renderer (~530 lines)
│   ├── pannellum.js             # 360° viewer init & gaze capture (~60 lines)
│   ├── selection.js             # Rectangle drawing & gaze conversion (~140 lines)
│   ├── panel.js                 # iPad panel drag/resize/minimize (~130 lines)
│   ├── measurement.js           # Measure mode & calibration (~200 lines)
│   └── app.js                   # Main controller & scan API (~70 lines)
├── generate-terrarium.php       # API endpoint (calls Flask server)
├── sharp_server.py              # Flask inference server with MPS
├── run_sharp.py                 # Python pipeline wrapper
├── extract_perspective.py       # Equirectangular projection
├── config.php                   # Configuration settings
├── 360_panos/                   # Source equirectangular images
├── data/
│   └── splats/                  # Generated PLY files
└── temp/                        # Intermediate perspective images

3.2 Module Dependencies

index.php
    │
    ├── css/explorer.css
    │
    └── JavaScript (load order matters)
        │
        ├── splat-viewer.js      # No dependencies (standalone WebGL2 class)
        │
        ├── pannellum.js         # Depends on: Pannellum CDN, CONFIG global
        │
        ├── selection.js         # Depends on: pannellum.js (viewer, currentGaze)
        │                        #             app.js (triggerScan callback)
        │
        ├── panel.js             # Depends on: splat-viewer.js (SplatViewer class)
        │
        ├── measurement.js       # Depends on: splat-viewer.js (getPointAt method)
        │                        #             panel.js (splatViewer instance)
        │
        └── app.js               # Depends on: All above modules
                                 #             CONFIG global from PHP

3.3 Module Specifications

3.3.1 splat-viewer.js

WebGL2 Gaussian splat renderer. Standalone class with no external dependencies.

Class: SplatViewer

class SplatViewer {
    constructor(canvasId, plyUrl)  // Initialize and optionally load PLY
    loadPLY(url)                   // Async load and parse PLY file
    parsePLY(buffer)               // Parse binary PLY to typed arrays
    uploadData()                   // Upload geometry to GPU
    sortSplats()                   // Back-to-front depth sort
    render()                       // Main render loop
    resize()                       // Handle canvas resize
    resetOrbit()                   // Reset camera to default position
    getPointAt(screenX, screenY)   // Raycast to find Gaussian at screen point
    start() / stop()               // Control render loop
}

Exports: window.SplatViewer

3.3.2 pannellum.js

Pannellum 360° viewer initialization and gaze state management.

Functions:

  • initPannellum() — Initialize viewer with CONFIG.panorama
  • updateGazeDisplay() — Update UI with current yaw/pitch/fov
  • formatAngle(angle) — Normalize angle to 0-360 range
  • resetPannellumView() — Reset to default orientation

Exports: viewer, currentGaze, initPannellum, updateGazeDisplay, formatAngle, resetPannellumView

3.3.3 selection.js

Rectangle selection for region-based terrarium generation.

Functions:

  • enterSelectionMode() / exitSelectionMode() — Mode management
  • handleSelectionMouseDown/Move/Up(e) — Mouse event handlers
  • updateSelectionRect() — Update visual rectangle
  • selectionToGaze(start, end, width, height) — Convert screen rect to gaze params
  • initSelection() — Attach event listeners

Exports: selectionMode, enterSelectionMode, exitSelectionMode, selectionToGaze, initSelection
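
The rect-to-gaze conversion is the geometric core of region selection: the drawn rectangle is recentered on the view and the field of view narrowed proportionally. A minimal sketch of that math in Python (a small-angle approximation for illustration; selection.js may use the exact projective relation instead):

def selection_to_gaze(start, end, width, height, gaze):
    """Map a screen-space rectangle to (yaw, pitch, hfov) for extraction.

    start/end: (x, y) rectangle corners in pixels; gaze: current view state.
    """
    cx = (start[0] + end[0]) / 2 / width - 0.5      # rect center, -0.5..0.5 of screen
    cy = 0.5 - (start[1] + end[1]) / 2 / height     # screen y grows downward
    vfov = gaze["hfov"] * height / width            # approximate vertical FOV
    return {
        "yaw":   gaze["yaw"]   + cx * gaze["hfov"],
        "pitch": gaze["pitch"] + cy * vfov,
        "hfov":  gaze["hfov"] * abs(end[0] - start[0]) / width,
    }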

3.3.4 panel.js

Floating iPad panel management for the terrarium viewer.

Functions:

  • openSplatViewer(plyUrl, metadata) — Show panel and load PLY
  • closeSplatViewer() — Hide panel and stop rendering
  • toggleMinimize() — Minimize/restore panel
  • initPanelDrag() / initPanelResize() — Drag and resize setup
  • initPanel() — Initialize all panel functionality

Exports: splatViewer, openSplatViewer, closeSplatViewer, toggleMinimize, initPanel

3.3.5 measurement.js

Point-to-point measurement tools with calibration.

State:

  • measureMode — Boolean, measurement active
  • measurePoints — Array of clicked points
  • scaleFactor — Inches per SHARP unit
  • scaleCalibrated — Boolean, calibration set

Functions:

  • enterMeasureMode() / exitMeasureMode() — Mode management
  • addMeasurePoint(point3D, screenX, screenY) — Add clicked point
  • drawMeasureLine(p1, p2) — Draw visual line between points
  • clearMeasureMarkers() — Remove all markers
  • calibrateScale() — Set scale from known distance
  • resetCalibration() — Clear calibration
  • initMeasurement() — Attach event listeners

Exports: All state variables and functions

3.3.6 app.js

Main application controller. Initializes all modules and handles API communication.

Functions:

  • triggerScan(gaze) — Send gaze to API, open result in viewer
  • initApp() — Initialize all modules, wire event handlers

Expects: CONFIG global with {panorama, sharpEndpoint}


4. User Experience Workflow

4.1 Navigation Hierarchy

GLOBAL ──▶ STATION ──▶ SPHERE ──▶ GAZE ──▶ TERRARIUM ──▶ MEASURE

4.2 Detailed Workflow

Phase 1: Global Orientation

User arrives at a 3D globe with field station markers at geographic coordinates. Clicking a marker or searching by name flies to station location.

Phase 2: Station Portal

The station view presents:

  • Station header with name, institution, links
  • Site map with viewmap markers
  • Viewmap table with ecosystem metadata
  • Seasonal navigator showing available captures (typically 4 per location)

User selects a viewmap location and season.

Phase 3: Immersion Chamber

User enters the 360° sphere via Pannellum viewer:

  • Full equirectangular panorama with drag navigation
  • Compass rose for orientation
  • Time badge showing capture date/season
  • Tool palette: Tricorder, Notebook (minimized by default)

Phase 4: Gaze Capture (Tricorder)

User looks at a feature of interest and activates SCAN:

  1. System captures current gaze state:
    • yaw: azimuth (-180° to +180°)
    • pitch: elevation (-90° to +90°)
    • hfov: horizontal field of view (user’s current zoom)
  2. Visual indicator shows extraction bounds (rectangular overlay)
  3. User confirms: “Generate Terrarium”

Phase 5: Terrarium Generation

Backend processing (~10-12 seconds total):

  1. PHP endpoint receives gaze parameters
  2. PHP calls Flask server on localhost:5005
  3. Flask extracts perspective image from equirectangular
  4. SHARP inference generates Gaussian splat (~10s with MPS)
  5. PLY file saved to static directory
  6. Response returns PLY URL and metadata

Phase 6: Terrarium Exploration

Critical: Splat viewer opens at the SAME viewpoint:

  • Camera position matches original gaze direction
  • User sees the same scene, now in 3D
  • No reorientation required

From this starting point, user can:

  • Orbit around the scene
  • Zoom in/out
  • Pan to reframe
  • Activate measurement tools

Phase 7: Measurement

Available tools:

  • Point-to-point distance: Click two locations, see distance
  • Height estimation: Click base and top, see vertical extent
  • Depth profile: Histogram of point depths in view

All measurements logged to Field Notebook with:

  • Value and units
  • Gaze direction at capture
  • Timestamp
  • Optional user annotation

Phase 8: Temporal Comparison

User selects a different season at same location:

  1. Season selector shows available captures
  2. Previous gaze direction is preserved
  3. User generates terrarium at identical direction
  4. Split-view comparison: Season A | Season B
  5. Differential metrics computed:
    • Canopy closure change
    • Sky view factor change
    • Point density by height band

The human recognizes patterns. The system provides quantified evidence.
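
One of the differential metrics above, point density by height band, can be sketched directly from splat positions. A minimal example, assuming each terrarium's Gaussians are available as an (N, 3) NumPy array in SHARP's y-down camera frame and calibrated to meters:

import numpy as np

def height_band_density(points, band_m=1.0, max_h=30.0):
    """Fraction of Gaussians per height band, measured upward from the camera."""
    heights = -points[:, 1]                        # OpenCV y points down; negate
    bins = np.arange(0.0, max_h + band_m, band_m)
    counts, _ = np.histogram(heights, bins=bins)
    return counts / max(counts.sum(), 1)           # normalize; guard empty input

def band_change(points_a, points_b, band_m=1.0):
    """Per-band density shift between two seasons (B minus A)."""
    return height_band_density(points_b, band_m) - height_band_density(points_a, band_m)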

Phase 9: Export

User exports findings:

  • CSV: raw measurements with metadata
  • JSON: structured session data
  • Screenshot: current terrarium view with annotations

5. Technical Specifications

5.1 Globe Interface

Technology: Mapbox GL JS v3+

Implementation: globe.php with embedded JavaScript

mapboxgl.accessToken = 'your-token';
const map = new mapboxgl.Map({
    container: 'map',
    style: 'mapbox://styles/mapbox/satellite-streets-v12',
    projection: 'globe',
    center: [-122.6, 45.5],
    zoom: 2
});

// Load station markers from PHP-generated GeoJSON
map.on('load', () => {
    map.addSource('stations', {
        type: 'geojson',
        data: '/api/stations.php'
    });
    // ... marker layer configuration
});
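
The stations endpoint is expected to return standard GeoJSON. A sketch of the shape, generated here in Python with placeholder values (property names beyond the schema in Section 6.2 are assumptions):

import json

stations = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [-122.6, 45.5]},  # [lng, lat]
        "properties": {"id": 1, "name": "Example Field Station",
                       "institution": "Example University"},
    }],
}
print(json.dumps(stations, indent=2))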

5.2 Station Portal

Technology: PHP with Mapbox embed

Files:

  • station.php?id=X — Station detail page
  • api/station-data.php?id=X — JSON endpoint for station metadata
  • api/viewmaps.php?station=X — JSON endpoint for viewmap list

5.3 Immersion Chamber

Technology: Pannellum (vanilla JS library)

Configuration (from pannellum.js):

const viewer = pannellum.viewer('panorama', {
    type: 'equirectangular',
    panorama: CONFIG.panorama,
    autoLoad: true,
    compass: true,
    hfov: 90,
    minHfov: 50,
    maxHfov: 120,
    showControls: false
});

Gaze Extraction:

function updateGazeDisplay() {
    // Cache the live view state; SCAN and the tricorder readout read from currentGaze
    currentGaze = {
        yaw: viewer.getYaw(),
        pitch: viewer.getPitch(),
        hfov: viewer.getHfov()
    };
}

5.4 Perspective Extraction

Technology: Python (called by Flask server)

The perspective extraction uses equirectangular-to-rectilinear projection. Unlike cubemap extraction (fixed 90° faces at cardinal directions), this extracts an arbitrary perspective at the user’s exact gaze direction and field of view.

Key function (extract_perspective.py):

def extract_perspective(equirect_path, yaw, pitch, hfov, output_size=1024):
    """
    Extract perspective view from equirectangular image.

    Args:
        equirect_path: Path to equirectangular source image
        yaw: Horizontal angle in degrees (-180 to 180)
        pitch: Vertical angle in degrees (-90 to 90)
        hfov: Horizontal field of view in degrees
        output_size: Output image dimension (square)

    Returns:
        Perspective image as numpy array
    """

5.5 SHARP Engine

Technology: Flask server → Python → SHARP model with MPS

Architecture:

Apache (PHP) → Flask (localhost:5005) → SHARP (MPS) → PLY

The Flask server runs as a persistent user process with full GPU access, eliminating both the Apache sandbox limitations and model loading overhead.
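
A minimal sketch of such a server (the actual sharp_server.py may differ; the route name and the run_sharp.generate helper are assumptions for illustration):

import time

from flask import Flask, jsonify, request

from extract_perspective import extract_perspective  # pipeline module from Section 3.1
import run_sharp  # hypothetical helper assumed to expose generate(image, out_dir)

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    p = request.get_json()
    t0 = time.time()
    view = extract_perspective(p["source"], p["yaw"], p["pitch"], p["fov"])
    ply_path = run_sharp.generate(view, out_dir="data/splats")  # hypothetical helper
    return jsonify({"success": True,
                    "ply_url": str(ply_path),
                    "generation_time": round(time.time() - t0, 1)})

if __name__ == "__main__":
    # Persistent user process: the model loads once and stays resident with
    # full MPS access, which is what removes per-request loading overhead.
    app.run(host="127.0.0.1", port=5005)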

Endpoint: generate-terrarium.php (calls Flask internally)

Request:

POST /generate-terrarium.php
Content-Type: application/json

{
    "source": "360_panos/360_CCCELC001_fresh_20220922-01.jpg",
    "yaw": 45.2,
    "pitch": 12.5,
    "fov": 90
}

Response:

{
    "success": true,
    "ply_url": "data/splats/360_CCCELC001_fresh_20220922-01_45.2_12.5_1705592400.ply",
    "metadata": {
        "source": "360_panos/360_CCCELC001_fresh_20220922-01.jpg",
        "gaze": {
            "yaw": 45.2,
            "pitch": 12.5,
            "fov": 90
        },
        "generation_time": 10.5,
        "file_size": 66061086
    }
}
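
For testing outside the browser, the endpoint can be exercised directly. A sketch using the Python requests package (the host path is an assumption):

import requests

resp = requests.post(
    "http://localhost/generate-terrarium.php",  # served from the app root (assumed)
    json={"source": "360_panos/360_CCCELC001_fresh_20220922-01.jpg",
          "yaw": 45.2, "pitch": 12.5, "fov": 90},
    timeout=60,  # generation runs ~10-12 s end to end
)
resp.raise_for_status()
result = resp.json()
print(result["ply_url"], result["metadata"]["generation_time"])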

5.6 Terrarium Viewer

Technology: WebGL2 Gaussian Splat Renderer (splat-viewer.js)

Critical Feature: Camera initialization matches source gaze.

// From SplatViewer.loadPLY()
// SHARP output lives in the gaze-aligned camera frame, so these default
// orbit angles place the camera back on the original viewing axis,
// reproducing the view the user had when triggering SCAN.
this.orbit.theta = Math.PI;
this.orbit.phi = Math.PI / 2;
this.orbit.radius = 1.5;

5.7 Measurement Tools

Point-to-point distance (from measurement.js):

function addMeasurePoint(point3D, screenX, screenY) {
    measurePoints.push({ point3D, screenX, screenY });

    if (measurePoints.length === 2) {
        const p1 = measurePoints[0].point3D;
        const p2 = measurePoints[1].point3D;

        const dx = p2.x - p1.x;
        const dy = p2.y - p1.y;
        const dz = p2.z - p1.z;
        const rawDistance = Math.sqrt(dx*dx + dy*dy + dz*dz);

        // Apply calibration if available; otherwise report raw SHARP units
        let displayDist = rawDistance.toFixed(2);
        let unit = 'SHARP units';
        if (scaleCalibrated) {
            displayDist = (rawDistance * scaleFactor).toFixed(2);
            unit = 'inches';
        }
    }
}

5.8 Session Persistence

Technology: localStorage (browser) + MySQL (optional server-side)

localStorage schema:

{
    "sessionId": "uuid",
    "currentStation": "elc",
    "currentViewmap": "forest-pond",
    "measurements": [
        {
            "id": "m1",
            "type": "distance",
            "value": 12.3,
            "unit": "m",
            "gaze": { "yaw": 45.2, "pitch": 12.5 },
            "timestamp": "2026-01-18T10:30:00Z",
            "annotation": "Windthrow trunk length"
        }
    ],
    "notes": [
        {
            "viewmap": "forest-pond",
            "season": "winter-2024",
            "text": "Canopy gap visible in NNW quadrant"
        }
    ]
}
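
The optional server-side mirror writes the same blob into the sessions table defined in Section 6.2. A hypothetical upsert helper using mysql-connector-python:

import json

import mysql.connector  # third-party: pip install mysql-connector-python

def save_session(conn, session_id, session):
    """Upsert the browser session blob into the sessions JSON column."""
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO sessions (id, data) VALUES (%s, %s) "
        "ON DUPLICATE KEY UPDATE data = VALUES(data)",
        (session_id, json.dumps(session)),
    )
    conn.commit()

# conn = mysql.connector.connect(user="mvfe", database="mvfe")  # credentials assumed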

6. Data Architecture

6.1 Data Assets

The Virtual Field Archive:

  • 280+ YouTube videos (public domain)
  • 5 minutes duration each, equirectangular 360°
  • 4 seasons × ~70 locations
  • Primarily 2021-2022 captures

ELC Local Archive:

  • Original MP4 files from Dr. Hamilton’s contributions
  • Full resolution, immediate access
  • 4 seasons: Winter 2021, Spring 2022, Summer 2022, Fall 2022

Frame Extraction: Frames extracted from videos using ffmpeg:

ffmpeg -ss 00:02:30 -i video.mp4 -frames:v 1 -q:v 2 frame.jpg

The 2:30 timestamp is chosen for consistent conditions (after the camera settles, before the end of the clip).
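
A hypothetical batch wrapper over the same command, for populating the frame archive from a directory of MP4s:

import subprocess
from pathlib import Path

def extract_frames(video_dir, out_dir, timestamp="00:02:30"):
    """Pull one frame at 2:30 from every MP4 in video_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for video in sorted(Path(video_dir).glob("*.mp4")):
        frame = out / f"{video.stem}.jpg"
        subprocess.run(["ffmpeg", "-ss", timestamp, "-i", str(video),
                        "-frames:v", "1", "-q:v", "2", "-y", str(frame)],
                       check=True)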

6.2 Database Schema

-- Field stations
CREATE TABLE stations (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    institution VARCHAR(255),
    latitude DECIMAL(10, 7),
    longitude DECIMAL(10, 7),
    website VARCHAR(255),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Viewmap locations within stations
CREATE TABLE viewmaps (
    id INT AUTO_INCREMENT PRIMARY KEY,
    station_id INT NOT NULL,
    name VARCHAR(255) NOT NULL,
    description TEXT,
    ecosystem VARCHAR(100),
    aquatic_features BOOLEAN DEFAULT FALSE,
    latitude DECIMAL(10, 7),
    longitude DECIMAL(10, 7),
    FOREIGN KEY (station_id) REFERENCES stations(id)
);

-- Seasonal captures
CREATE TABLE captures (
    id INT AUTO_INCREMENT PRIMARY KEY,
    viewmap_id INT NOT NULL,
    season ENUM('winter', 'spring', 'summer', 'fall') NOT NULL,
    capture_date DATE,
    video_url VARCHAR(500),
    frame_path VARCHAR(255),
    FOREIGN KEY (viewmap_id) REFERENCES viewmaps(id)
);

-- User sessions (optional server-side persistence)
CREATE TABLE sessions (
    id VARCHAR(36) PRIMARY KEY,
    data JSON,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

-- Generated terrariums (cache)
CREATE TABLE terrariums (
    id INT AUTO_INCREMENT PRIMARY KEY,
    capture_id INT NOT NULL,
    yaw DECIMAL(6, 2),
    pitch DECIMAL(6, 2),
    hfov DECIMAL(5, 2),
    ply_path VARCHAR(255),
    num_gaussians INT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (capture_id) REFERENCES captures(id),
    UNIQUE KEY (capture_id, yaw, pitch, hfov)
);
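
The UNIQUE KEY makes this table usable as a generation cache. A hypothetical lookup helper using mysql-connector-python:

import mysql.connector  # third-party: pip install mysql-connector-python

def cached_ply(conn, capture_id, yaw, pitch, hfov):
    """Return an existing PLY path for this exact gaze, or None to trigger SHARP."""
    cur = conn.cursor()
    cur.execute(
        "SELECT ply_path FROM terrariums "
        "WHERE capture_id = %s AND yaw = %s AND pitch = %s AND hfov = %s",
        (capture_id, round(yaw, 2), round(pitch, 2), round(hfov, 2)),  # match DECIMAL(_, 2)
    )
    row = cur.fetchone()
    return row[0] if row else None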

6.3 External APIs

iNaturalist:

GET https://api.inaturalist.org/v1/observations
  ?lat={latitude}
  &lng={longitude}
  &radius=1
  &quality_grade=research
  &per_page=200
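
The same query from the Python side, as a sketch (the public API requires no key; radius is in kilometers):

import requests

def nearby_observations(lat, lng, radius_km=1):
    """Research-grade iNaturalist observations near a viewmap location."""
    resp = requests.get(
        "https://api.inaturalist.org/v1/observations",
        params={"lat": lat, "lng": lng, "radius": radius_km,
                "quality_grade": "research", "per_page": 200},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]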

Mapbox:

  • Globe rendering
  • Satellite imagery
  • Marker clustering

7. Implementation Status

7.1 Operational Components

Component | File(s) | Status | Notes
Perspective Extraction | extract_perspective.py | ✓ Working | Arbitrary yaw/pitch/fov from equirectangular
SHARP Pipeline Wrapper | run_sharp.py | ✓ Working | MPS acceleration via Flask server, ~10s inference
Flask Inference Server | sharp_server.py | ✓ Working | Port 5005, full GPU access
PHP API Endpoint | generate-terrarium.php | ✓ Working | Calls Flask endpoint, JSON responses
Pannellum Integration | js/pannellum.js | ✓ Working | 360° navigation, gaze capture functional
Region Selection UI | js/selection.js | ✓ Working | Rectangle drawing, gaze-to-FOV conversion
iPad Panel | js/panel.js | ✓ Working | Drag, resize, minimize
WebGL2 Splat Viewer | js/splat-viewer.js | ✓ Working | PLY parsing, depth sorting, orbit controls, raycasting
Measurement Tools | js/measurement.js | ✓ Working | Point-to-point, calibration, recalibration
Modular CSS | css/explorer.css | ✓ Working | All styles extracted
Application Controller | js/app.js | ✓ Working | Module wiring, scan API

7.2 Components Requiring Development

Component | Status | Notes
Temporal Comparison | Not started | Phase 3 of roadmap
Multi-Station Navigation | Not started | Phase 4 of roadmap

7.3 Technical Findings

SHARP Output Characteristics:

  • Coordinate System: OpenCV convention (x right, y down, z forward)
  • Camera Position: Origin, looking down +Z axis
  • Output Format: PLY with positions, scales, rotations, spherical harmonics
  • Typical Output: ~1.18 million Gaussians, ~63 MB PLY file
  • Inference Time: ~10s on M4 Max with MPS
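
A quick sanity check on a generated file, assuming the third-party plyfile package (the path is illustrative):

from plyfile import PlyData  # third-party: pip install plyfile

ply = PlyData.read("data/splats/example.ply")  # illustrative path
verts = ply["vertex"]
print(f"{verts.count:,} Gaussians")            # typically ~1.18 million
print(verts.data.dtype.names)                  # positions, scales, rotations, SH coeffs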

Critical Constraint: Single-image splats provide reliable depth only within a bounded viewing frustum. Sky and distant elements lack depth information and produce artifacts when viewed from significantly different angles.

Inference Architecture:

SHARP inference runs via a dedicated Flask server rather than direct Python invocation from Apache. This architecture provides:

  • MPS acceleration: ~10s inference on M4 Max (vs ~27s CPU-only from Apache)
  • Process isolation: Flask runs as user with full GPU access
  • Simple integration: PHP endpoint calls Flask API on localhost:5005

Apache (PHP) → Flask (localhost:5005) → SHARP (MPS) → PLY

The Flask server runs as a persistent process, eliminating model loading overhead on each request.


8. Pattern Recognition Philosophy

8.1 What the System Does

The Macroscope Virtual Field Explorer provides:

  • Navigation to archived 360° captures
  • On-demand 3D terrarium generation
  • Measurement tools for quantification
  • Temporal comparison capability
  • Data export for analysis

8.2 What the System Does Not Do

The system does NOT provide:

  • Automated anomaly detection
  • Pattern flagging without user prompting
  • Interpretation of structural changes
  • AI-generated ecological assessments

8.3 Why This Matters

The Virtual Field archive provides discrete seasonal snapshots—four captures per year per location. This temporal resolution enables seasonal comparison and interannual comparison, but not the continuous monitoring that would support real-time anomaly detection.

More fundamentally, ecological interpretation requires the predictive model that a trained observer brings. A 29% drop in canopy closure means something to someone who has seen windthrow events before. The numbers alone are not the insight.

8.4 Future Extension: Visual Pattern Detection

Current experiments at Canemah Nature Laboratory with YOLOv8 are developing automated visual pattern detection for live 360° camera feeds. When mature, this capability could extend to:

  • Real-time species detection in panoramic streams
  • Motion and activity pattern recognition
  • Integration with STRATA context generation

This remains experimental and is not included in the current implementation scope.

8.5 Connection to MEO Patterns

The existing MEO patterns infrastructure (macroscope.earth/?page=patterns) demonstrates automated detection for continuous time-series data:

  • Environmental correlations from Tempest weather station
  • Biological rhythms from BirdWeather detections
  • Anomaly flagging based on statistical deviation

The architectural pattern is established. Extending to visual/structural data from 360° imagery would require batch processing the archive to compute metrics—feasible but outside current scope.


9. Implementation Roadmap

Phase 1: Core Loop Proof of Concept (COMPLETE)

Objective: Demonstrate gaze-to-terrarium workflow with single ELC capture

Deliverables:

  • [x] index.php: Pannellum viewer with ELC winter frame
  • [x] Gaze capture UI (vanilla JS, display yaw/pitch/fov)
  • [x] SCAN button triggers perspective extraction
  • [x] generate-terrarium.php: PHP endpoint calling Flask server
  • [x] extract_perspective.py: Arbitrary gaze extraction
  • [x] run_sharp.py: SHARP inference wrapper
  • [x] Flask inference server with MPS acceleration
  • [x] Splat viewer integration with gaze-matched camera init
  • [x] Round-trip: look → scan → terrarium (same view) → return
  • [x] Modular refactoring into separate JS/CSS files

Test Data: ELC equirectangular frames (all four seasons available)

Phase 2: Measurement Tools (CORE COMPLETE)

Objective: Point-to-point distance measurement in terrarium

Deliverables:

  • [x] Click-to-3D raycasting in WebGL viewer
  • [x] Two-point distance tool with display
  • [x] Calibration from known distance
  • [x] Recalibration capability
  • [x] Measurement logging to UI
  • [ ] Height estimation tool (vertical component)
  • [ ] Measurement logging to localStorage

Success Criteria: User can measure distance between two clicked points with real-world units

Phase 3: Temporal Comparison (2 weeks)

Objective: Side-by-side seasonal terrariums with differential metrics

Data Required:

  • ELC Winter 2021 and Winter 2024 frames (or available years)

Deliverables:

  • [ ] Season selector UI in sphere view
  • [ ] Gaze direction preservation across season switch
  • [ ] Dual terrarium display (split view)
  • [ ] Differential metrics computation:
    • Sky view factor change
    • Point density by height band
  • [ ] Visual comparison overlay

Success Criteria: User compares two seasons at same gaze direction, sees quantified change

Phase 4: Multi-Station Navigation (2-3 weeks)

Objective: Globe → Station → Sphere navigation for Virtual Field archive

Data Required:

  • Station metadata (coordinates, names, institutions)
  • Viewmap inventory per station
  • Frame extraction from YouTube archive (batch ffmpeg job)

Deliverables:

  • [ ] globe.php: Mapbox globe with station markers
  • [ ] station.php: Station portal with viewmap list
  • [ ] MySQL tables populated with Virtual Field metadata
  • [ ] Frame extraction pipeline for 5-10 stations

Success Criteria: User navigates from globe to any implemented station to sphere

Phase 5: Field Notebook + Export (2 weeks)

Objective: Session continuity and data export

Deliverables:

  • [ ] Notebook UI overlay (habitat metadata, species context)
  • [ ] iNaturalist API integration for location-based observations
  • [ ] Measurement logging with annotation
  • [ ] User notes per viewmap/season
  • [ ] CSV export of measurements
  • [ ] JSON export of session data

Success Criteria: Measurements persist across page reloads, can be exported

Phase 6: WebXR Integration (Optional, 3-4 weeks)

Objective: VR support for Quest 3, Vision Pro

Deliverables:

  • [ ] WebXR session management
  • [ ] Spatial UI adaptations for Pannellum
  • [ ] Controller input handling
  • [ ] Splat viewer in VR mode

Success Criteria: Full workflow functional in VR headset


10. Educational Integration

10.1 Alignment with Virtual Field Explorer Guides

The Virtual Field project includes Explorer Guides for skill development:

  • Write Field Notes
  • Sketch What You See
  • Learn to Ask Questions

MVFE extends these with quantitative capabilities:

  • Measure What You See: Distance, height, structural metrics
  • Compare Across Time: Seasonal change quantification
  • Compare Across Space: Cross-ecosystem structural analysis

10.2 Curriculum Pathways

K-8:

  • Guided exploration of single ecosystem
  • Simple observation logging in Field Notebook
  • “What do you notice?” prompts

High School:

  • Multi-ecosystem comparison
  • Basic measurement collection
  • Seasonal change documentation
  • Data export for analysis

University:

  • Full quantitative workflow
  • Hypothesis-driven investigation
  • Statistical comparison across sites
  • Research report generation

Citizen Science:

  • Contribution to phenology monitoring
  • Distributed measurement campaigns
  • Community science integration

11. Deployment Architecture

11.1 Development Environment

┌─────────────────────────────────────────────────────────────┐
│                   DEVELOPMENT (Data - M4 Max)               │
├─────────────────────────────────────────────────────────────┤
│  Web Server: Apache (WebMon managed)                        │
│  PHP: 8.3+ via /opt/homebrew/bin/php                        │
│  Database: MySQL 8.4+ (phpMyAdmin)                          │
│  Python: 3.x with SHARP dependencies                        │
│  SHARP: Flask server with MPS acceleration                  │
│  Storage: Local filesystem                                  │
│  URL: localhost or local network                            │
└─────────────────────────────────────────────────────────────┘

11.2 Production Environment

┌─────────────────────────────────────────────────────────────┐
│                   PRODUCTION (Galatea - M4 Pro)             │
├─────────────────────────────────────────────────────────────┤
│  Web Server: Apache                                         │
│  PHP: 8.3+                                                  │
│  Database: MySQL 8.4+                                       │
│  Python: 3.x with SHARP via Flask server                    │
│  Storage: Local NAS                                         │
│  Network: 1Gb fiber                                         │
│  URL: macroscope.earth or subdomain                         │
└─────────────────────────────────────────────────────────────┘

11.3 Computation Offload (Optional)

┌─────────────────────────────────────────────────────────────┐
│                   COMPUTATION (Sauron - Intel i9)           │
├─────────────────────────────────────────────────────────────┤
│  SHARP inference with GPU acceleration                      │
│  Batch processing of archive                                │
│  API endpoint for Galatea to call                           │
└─────────────────────────────────────────────────────────────┘

12. Connection to Macroscope Philosophy

12.1 Two Macroscopes

As described in “Observatories of Complexity” (Hamilton, 2026), the Macroscope Virtual Field Explorer represents the biosphere observatory—extending human perception to scales beyond unaided capacity. The archive of 280+ seasonal captures across 25+ field stations provides a view into ecological complexity that no individual field naturalist could survey.

The companion observatory—the noosphere Macroscope—is represented by the collaborative intelligence between human observer and AI system. The language model can surface patterns in how humanity has collectively understood ecosystems; the human observer brings the predictive model that recognizes what those patterns mean.

12.2 The Instruments Serve the Observation

The sensor feeds, the pattern recognition, the 3D reconstructions—all produce calibration signals. The controlled hallucination remains with the scientist. The Macroscope doesn’t see ecosystems; it provides data the ecologist’s brain uses to update its model.

A forest canopy can tear open. The terrarium shows the gap. The trained observer recognizes windthrow. The instruments surface the data; meaning emerges from the observer.


13. References

  1. Hamilton, M.P. & Lassoie, J.P. (1986). The Macroscope: A videodisc atlas for ecology education. Cornell University.
  2. Hamilton, M.P. (2026). “Virtual Terrariums: When a Failed Hypothesis Becomes a Better Instrument.” Coffee with Claude. CNL-TN-2026-005.
  3. Hamilton, M.P. (2026). “Observatories of Complexity: Two Macroscopes for the Biosphere and Noosphere.” Coffee with Claude. https://coffeewithclaude.com/post.php?slug=observatories-of-complexity-two-macroscopes-for-the-biosphere-and-noosphere
  4. The Virtual Field Project. (2021-present). 360-Degree Seasonal Videos. https://thevirtualfield.org/360-degree-seasonal-videos/
  5. Apple Machine Learning Research. (2025). SHARP: Single-image to High-fidelity 3D with Accurate shape and Realistic aPpearance. https://github.com/apple/ml-sharp
  6. Hamilton, M.P. (2025). MEO Spatial Intelligence Framework v2.0. MacroNexus Ecological Observatory.
  7. Pannellum. (2024). Lightweight Panorama Viewer for the Web. https://pannellum.org/
  8. Kerbl, B. et al. (2023). 3D Gaussian Splatting for Real-Time Radiance Field Rendering. SIGGRAPH 2023.
  9. Seth, A.K. (2021). Being You: A New Science of Consciousness. Dutton.

Document History

Version | Date | Author | Changes
1.0 | 2026-01-18 | M. Hamilton / Claude | Initial draft specification
2.0 | 2026-01-18 | M. Hamilton / Claude | Revised for LAMP stack, honest scope, human-centered investigation, gaze-matched viewer
3.0 | 2026-01-18 | M. Hamilton / Claude | Consolidated with Addendum A; documented modular frontend architecture (css/, js/ separation); updated implementation status
4.0 | 2026-01-18 | M. Hamilton / Claude | WebGL2 viewer operational; Flask inference server architecture (~10s with MPS); Phase 2 measurement tools complete; removed resolved issues

This document is part of the Canemah Nature Laboratory Technical Note series.

Cite This Document

Hamilton, M.P. (2026). "CNL-TN-2026-009: Macroscope Virtual Field Explorer." Canemah Nature Laboratory Technical Note CNL-TN-2026-009. https://canemah.org/archive/CNL-TN-2026-009

BibTeX

@techreport{cnl2026cnltn,
  author      = {Hamilton, Michael P.},
  title       = {CNL-TN-2026-009: Macroscope Virtual Field Explorer},
  institution = {Canemah Nature Laboratory},
  year        = {2026},
  number      = {CNL-TN-2026-009},
  month       = {january},
  url         = {https://canemah.org/archive/document.php?id=CNL-TN-2026-009}
}

Permanent URL: https://canemah.org/archive/document.php?id=CNL-TN-2026-009