Dimensionless Structural Ecology from Monocular Gaussian Splatting: A Proposal for Habitat Characterization Using the MacroscopeVR Archive
Document ID: CNL-TN-2026-022
Version: 0.1 (Draft)
Date: March 4, 2026
Author: Michael P. Hamilton, Ph.D., Canemah Nature Laboratory
AI Assistance Disclosure: This technical note was developed with assistance from Claude (Anthropic, Claude Opus 4.6). The AI contributed to literature synthesis, metric formalization, and manuscript drafting during an extended collaborative dialogue. The author takes full responsibility for the content, accuracy, and conclusions.
Abstract
MacroscopeVR maintains an archive of 11,425 three-dimensional Gaussian splat reconstructions derived from 457 equirectangular panoramas captured at 33 ecological research stations across North America and Costa Rica. Each reconstruction was generated by Apple's SHARP monocular Gaussian splatting framework, which infers approximately 1.18 million three-dimensional ellipsoids from a single perspective photograph. Because SHARP operates monocularly, each reconstruction inhabits an arbitrary, uncalibrated coordinate system — absolute distances cannot be recovered without external reference. This paper argues that this apparent limitation is, for certain ecological questions, irrelevant. We propose a suite of dimensionless structural descriptors — eigenvalue-derived morphological ratios, color distribution statistics, opacity heterogeneity measures, and scale distribution parameters — that characterize the structural state of a habitat without requiring metric calibration. These descriptors are computed per grid cell from the filtered Gaussian population and assembled into a multi-dimensional structural state vector. Because the MacroscopeVR archive includes seasonal time series (up to four seasons) at multiple stations spanning diverse biomes, we propose validation against known phenological patterns: deciduous leaf flush and senescence, grassland green-up and dormancy, and tropical evergreenness. If dimensionless structural descriptors track established phenological signals, they constitute a new class of rapid ecological characterization — one that any observer with a consumer 360° camera can contribute to, paralleling iNaturalist's democratization of species identification but applied to habitat structure rather than taxonomy. We outline the proposed descriptor suite, the computational pipeline, validation strategy, and the ecological hypotheses to be tested.
1. Introduction
1.1 The Measurement Problem in Monocular 3D Reconstruction
Structure-from-Motion (SfM) photogrammetry and LiDAR scanning produce metrically calibrated three-dimensional models of vegetation and terrain. These methods recover absolute distances through geometric triangulation: multiple viewpoints separated by known or computed baselines provide the parallax necessary to resolve depth [1, 2]. A forest plot surveyed with terrestrial LiDAR yields tree heights in meters, stem diameters in centimeters, and canopy volumes in cubic meters — quantities directly comparable across sites and seasons.
Monocular depth estimation, by contrast, infers three-dimensional structure from a single image. Recent neural network approaches — including SHARP (Apple ML Research, 2024), which reconstructs scenes as populations of three-dimensional Gaussian ellipsoids [3, 4] — produce visually compelling reconstructions that preserve the geometric relationships within a scene. However, without a second viewpoint to provide parallax, absolute scale is fundamentally ambiguous. A tree reconstructed from a single photograph could be 5 meters tall or 50; the monocular network has no way to distinguish these cases from image content alone.
The MacroscopeVR platform decomposes each equirectangular 360° panorama into 25 perspective views following a systematic spherical grid (CNL-SP-2026-013) and processes each view independently through SHARP [5]. This produces 25 reconstructions per panorama, each in its own arbitrary coordinate system. Adjacent cells agree on angular geometry — the grid positions are known precisely — but disagree on radial depth. The 25 "terrariums" cannot be trivially assembled into a unified metrically calibrated model of the scene.
This paper proposes that for an important class of ecological questions, metric calibration is unnecessary. Dimensionless descriptors — ratios, distributions, and entropy measures computed from the intrinsic properties of the Gaussian population — characterize habitat structure in ways that are invariant to the arbitrary coordinate system, internally consistent within each reconstruction, and ecologically meaningful when compared across time and space.
1.2 Precedent: Dimensionless Metrics in Ecology
Ecology has a long tradition of dimensionless characterization. The Shannon diversity index H' = −Σ pᵢ ln(pᵢ) is a pure number — it requires no physical units, only relative proportions [6]. Simpson's index, evenness measures, and beta diversity are similarly unitless. These indices transformed community ecology by enabling comparison across sites, taxa, and scales without requiring that underlying measurements share units or calibration.
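As a minimal illustration of the index's unitless character, the following sketch computes H' from raw abundance counts — only the relative proportions matter, never the measurement units:

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i ln p_i) from raw abundances."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * ln(0) contributes nothing
    return float(-np.sum(p * np.log(p)))
```

Four equally abundant species yield the maximum H' = ln(4) ≈ 1.386, whether the counts are individuals, biomass, or cover fraction.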
In vegetation science, cover fraction, leaf area index, canopy closure, and foliage height diversity are either dimensionless or computed as ratios [7, 8]. MacArthur and MacArthur's foundational 1961 demonstration that bird species diversity correlates with foliage height diversity — not with any single structural measurement — established the principle that habitat heterogeneity, expressed as a dimensionless distribution property, predicts biodiversity better than absolute dimensions [9].
The structural complexity hypothesis, formalized across multiple taxa and ecosystems, holds that habitats with greater three-dimensional architectural diversity support more species because they provide more microhabitat niches [10, 11]. This hypothesis has been tested extensively using LiDAR-derived structural metrics [12, 13], but LiDAR surveys require specialized equipment, expertise, and substantial per-site cost. If dimensionless structural descriptors derived from consumer photography could serve as proxies for LiDAR-derived complexity measures, the scaling implications would be significant.
1.3 The MacroscopeVR Archive
The archive comprises 33 ecological research stations affiliated with the Organization of Biological Field Stations (OBFS), spanning latitudes from approximately 10°N (Costa Rica) to 48°N (northern United States), and elevations from sea level to over 2,500 meters. Represented biomes include tropical wet and dry forest, temperate deciduous forest, mixed conifer forest, oak savanna, chaparral, grassland, desert scrub, and coastal sage scrub. Many stations have panoramic captures across multiple seasons (winter, spring, summer, autumn) from fixed positions, providing temporal replication at known locations.
The processing pipeline (described in Section 2) has generated 11,425 individual Gaussian splat reconstructions totaling approximately 258 GB of PLY data. Each reconstruction contains roughly 1.18 million Gaussian ellipsoids encoding position (xyz), color (spherical harmonics), opacity (sigmoid-encoded), scale (exponential-encoded), and rotation (quaternion). Physical filtering — opacity ≥ 0.5, scale ≤ 0.05 — isolates the subset of Gaussians corresponding to solid structural surfaces, separating them from atmospheric fill and reconstruction artifacts [5].
This dataset, processed and stored on production infrastructure, is ready for the batch descriptor computation proposed here.
2. Methods: Data Generation Pipeline
2.1 Panoramic Capture
Source imagery consists of equirectangular panoramas captured with Insta360 consumer 360° cameras at fixed positions within field station properties. Camera placement follows Virtual Field project protocols: tripod-mounted at approximately 1.5 m height, positioned to capture representative habitat structure. Each equirectangular image provides full spherical coverage at resolutions up to 11520 × 5760 pixels (approximately 66 megapixels).
2.2 Spherical Grid Decomposition
Each equirectangular panorama is decomposed into 25 perspective views following the ecoSLAM sampling grid (CNL-SP-2026-013):
| Ring | Grid Cells | Elevation | Azimuth Spacing | Count |
|---|---|---|---|---|
| Horizon | g00–g08 | 0° | 40° | 9 |
| Upper canopy | g09–g15 | +40° | ~51.4° | 7 |
| Lower canopy | g16–g22 | −40° | ~51.4° | 7 |
| Zenith | g23 | +70° | — | 1 |
| Nadir | g24 | −70° | — | 1 |
Each perspective view is extracted at 50° field of view and 1024 × 1024 pixel resolution using inverse gnomonic projection from the equirectangular source. The extraction is implemented in extract_grid.py using NumPy array operations with bilinear interpolation [5].
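A simplified sketch of the inverse gnomonic extraction follows. It uses nearest-neighbor sampling for brevity (the production extract_grid.py uses bilinear interpolation), and the rotation order and axis conventions shown here are illustrative assumptions, not the documented internals of that script:

```python
import numpy as np

def extract_view(equi, yaw_deg, pitch_deg, fov_deg=50.0, size=1024):
    """Sample one perspective view from an equirectangular panorama.
    Nearest-neighbor variant; the pipeline itself interpolates bilinearly."""
    H, W = equi.shape[:2]
    f = (size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)   # focal length, px
    u, v = np.meshgrid(np.arange(size), np.arange(size))
    # Camera-frame ray per output pixel (x right, y down, z forward)
    d = np.stack([(u - size / 2.0) / f, (v - size / 2.0) / f,
                  np.ones((size, size))], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # Rotate rays: pitch about x, then yaw about y
    p, q = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(q), 0, np.sin(q)],
                   [0, 1, 0],
                   [-np.sin(q), 0, np.cos(q)]])
    d = d @ (Ry @ Rx).T
    lon = np.arctan2(d[..., 0], d[..., 2])                 # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))         # [-pi/2, pi/2]
    src_u = np.round((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    src_v = np.round((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return equi[src_v, src_u]
```

Calling `extract_view(pano, yaw_deg=40.0, pitch_deg=-40.0)` would extract a lower-canopy-ring cell; iterating over the 25 grid directions reproduces the full decomposition.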
2.3 SHARP Gaussian Splatting
Each perspective image is processed through SHARP (Splatter Image with Hash-grid Adaptive Resolution and Prediction), Apple's monocular Gaussian splatting framework [3]. SHARP ingests a single RGB image and outputs a PLY file containing approximately 1.18 million three-dimensional Gaussian ellipsoids. Each Gaussian encodes:
- Position (x, y, z): Center location in arbitrary scene coordinates
- Color: Spherical harmonic coefficients (degree 0–3) encoding view-dependent appearance
- Opacity: Sigmoid-encoded logit value representing opacity (0 = fully transparent, 1 = fully opaque)
- Scale: Three exponential-encoded log values representing ellipsoid radii along principal axes
- Rotation: Quaternion encoding ellipsoid orientation
Processing uses Metal Performance Shaders (MPS) on Apple Silicon, achieving approximately 5 seconds per view when processing in directory batch mode (amortizing model load time across views within a panorama).
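The encodings above decode with elementwise transforms. The sketch below follows the standard 3D Gaussian Splatting PLY conventions (sigmoid for opacity, exponential for scale, degree-0 spherical harmonic coefficient offset to base color); the SH_C0 constant and attribute layout are assumptions from that convention, not verified against SHARP's exact output schema:

```python
import numpy as np

SH_C0 = 0.28209479177387814   # degree-0 spherical harmonic basis constant

def decode_gaussians(opacity_logits, log_scales, f_dc):
    """Decode raw PLY attributes into physical values under the standard
    3D Gaussian Splatting encoding (sigmoid, exp, SH DC term)."""
    opacity = 1.0 / (1.0 + np.exp(-opacity_logits))   # sigmoid -> (0, 1)
    scales = np.exp(log_scales)                       # log radii -> linear radii
    rgb = np.clip(0.5 + SH_C0 * f_dc, 0.0, 1.0)       # SH DC term -> base RGB
    return opacity, scales, rgb
```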
2.4 Physical Filtering
Analysis of SHARP output across diverse ecosystems reveals three distinct Gaussian populations separable by opacity and scale [5]:
1. Physical surface Gaussians: Compact scale, high opacity (≥ 0.5). These represent solid objects — vegetation, terrain, built structures — and preserve the geometric and spectral characteristics of the scene.
2. Atmospheric fill Gaussians: Large scale, low opacity (< 0.1). These model sky, gaps between objects, and ambient illumination. They are volumetric approximations of light transport, not physical surfaces.
3. Artifact Gaussians: Variable properties, including depth-plane discontinuities, edge fringe, and floating outliers. These arise from limitations in monocular depth estimation.
Physical filtering (opacity ≥ 0.5, max scale ≤ 0.05) reliably isolates population 1 across all tested ecosystems. The filtered population retains the structural information while excluding atmospheric modeling and reconstruction noise. All descriptors proposed in Section 3 are computed from this filtered population.
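On decoded values, the filter reduces to a single boolean mask; a minimal sketch using the thresholds stated above:

```python
import numpy as np

def physical_filter(opacity, scales, min_opacity=0.5, max_scale=0.05):
    """Mask selecting physical-surface Gaussians: decoded opacity >= 0.5
    and largest ellipsoid axis <= 0.05 (thresholds from Section 2.4)."""
    return (opacity >= min_opacity) & (scales.max(axis=1) <= max_scale)
```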
2.5 Current Dataset
| Parameter | Value |
|---|---|
| Stations | 33 (across North America and Costa Rica) |
| Panoramas | 457 |
| Grid cells (terrariums) | 11,425 |
| Gaussians per cell | ~1.18 × 10⁶ (pre-filter) |
| Physical Gaussians per cell | ~3–5 × 10⁵ (estimated post-filter) |
| Total PLY data | ~258 GB |
| Processing infrastructure | Galatea (Mac Mini M4 Pro), Data (MacBook Pro M4 Max) |
3. Proposed Descriptors: The Structural State Vector
We propose computing a multi-dimensional feature vector for each terrarium cell from the filtered Gaussian population. The vector comprises four descriptor groups — spatial structure, color distribution, opacity heterogeneity, and scale texture — chosen because each captures an ecologically distinct dimension of habitat characterization and each is inherently dimensionless.
3.1 Spatial Structure Descriptors (Eigenvalue Decomposition)
Compute the 3 × 3 covariance matrix of the filtered point positions and extract eigenvalues λ₁ ≥ λ₂ ≥ λ₃. The following dimensionless ratios are standard in the LiDAR point cloud analysis literature [14, 15]:
- Linearity L = (λ₁ − λ₂) / λ₁ — Proportion of variance along a single axis. High values indicate elongated structures: trunks, branches, grass blades.
- Planarity P = (λ₂ − λ₃) / λ₁ — Proportion of variance in a plane. High values indicate flat structures: ground surfaces, broad leaves, water.
- Sphericity S = λ₃ / λ₁ — Isotropy of the point distribution. High values indicate volumetric masses: dense canopy, shrub clusters.
- Vertical bias V = |cos(θ)| where θ is the angle between the primary eigenvector and the vertical axis (adjusted for grid cell elevation). Distinguishes vertically organized structure (forest) from horizontally organized structure (grassland).
- Anisotropy A = (λ₁ − λ₃) / λ₁ — Overall departure from spherical symmetry. High in structured environments, low in uniformly dense or uniformly empty cells.
Note: These ratios are mathematically constrained such that L + P + S = 1, providing a ternary composition that can be visualized on a simplex diagram.
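The five spatial descriptors follow directly from one eigendecomposition. The sketch below assumes the vertical axis is +y in the reconstruction frame (before the per-cell elevation adjustment described above):

```python
import numpy as np

def spatial_descriptors(xyz, up=(0.0, 1.0, 0.0)):
    """L, P, S, V, A from the covariance eigenvalues of filtered positions.
    `up` is the (cell-elevation-adjusted) vertical axis; +y assumed here."""
    w, vecs = np.linalg.eigh(np.cov(xyz.T))
    order = np.argsort(w)[::-1]                  # sort l1 >= l2 >= l3
    l1, l2, l3 = w[order]
    L = (l1 - l2) / l1
    P = (l2 - l3) / l1
    S = l3 / l1
    V = abs(float(vecs[:, order[0]] @ np.asarray(up)))
    A = (l1 - l3) / l1
    return L, P, S, V, A
```

A synthetic trunk-like cloud elongated along the vertical axis yields high L, high V, and high A, with L + P + S = 1 by construction.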
3.2 Color Distribution Descriptors
Convert filtered Gaussian RGB values (decoded from spherical harmonic coefficients, degree 0) to HSV color space and compute:
- Green fraction G_f = proportion of Gaussians with hue in the green range (60°–180°) and saturation > 0.15. A first-order proxy for photosynthetically active vegetation, analogous to the green chromatic coordinate (GCC) used in phenocam networks [16, 17].
- Color entropy H_c = −Σ pᵢ ln(pᵢ) computed over a quantized hue histogram (e.g., 36 bins at 10° intervals). Measures chromatic diversity. Low in monochrome scenes (snow cover, bare soil, dense conifer canopy); high in mixed scenes (autumn foliage, flowering meadow, mixed forest with understory).
- Red-green ratio RG = mean(R) / mean(G). Tracks the red-to-green shift during autumn senescence, analogous to the excess green index (ExG) in remote sensing [18].
- Brightness mean B_μ = mean(V) in HSV. A proxy for illumination conditions, canopy closure (darker under dense canopy), and snow/ice presence.
- Saturation spread S_σ = standard deviation of saturation. High in scenes with both vivid and muted elements (e.g., bright flowers against grey bark); low in uniformly saturated or uniformly desaturated scenes.
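A compact sketch of the five color descriptors, with an explicit vectorized RGB→HSV conversion so the example stays self-contained (hue in degrees, all channels in [0, 1]):

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB->HSV; inputs in [0, 1], hue returned in degrees."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(-1), rgb.min(-1)
    c = mx - mn
    h = np.zeros_like(mx)
    m = c > 0
    i = m & (mx == r)
    h[i] = ((g - b)[i] / c[i]) % 6
    i = m & (mx == g) & (mx != r)
    h[i] = (b - r)[i] / c[i] + 2
    i = m & (mx == b) & (mx != r) & (mx != g)
    h[i] = (r - g)[i] / c[i] + 4
    s = np.where(mx > 0, c / np.maximum(mx, 1e-12), 0.0)
    return h * 60.0, s, mx

def color_descriptors(rgb, bins=36):
    """G_f, H_c, RG, B_mu, S_sigma from decoded Gaussian colors."""
    h, s, v = rgb_to_hsv(rgb)
    G_f = float(np.mean((h >= 60) & (h <= 180) & (s > 0.15)))
    p, _ = np.histogram(h, bins=bins, range=(0, 360))
    p = p / p.sum()
    p = p[p > 0]
    H_c = float(-np.sum(p * np.log(p)))     # hue entropy, max ln(36)
    RG = float(rgb[..., 0].mean() / max(rgb[..., 1].mean(), 1e-12))
    return G_f, H_c, RG, float(v.mean()), float(s.std())
```

A uniformly green population returns G_f = 1 and zero color entropy, as expected for a monochrome scene.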
3.3 Opacity Heterogeneity Descriptors
The opacity distribution of the physical Gaussian population (those passing the ≥ 0.5 threshold) encodes information about surface density and edge structure that no other sensor modality captures directly:
- Opacity mean O_μ = mean opacity of filtered Gaussians. Dense closed canopy drives this toward 1.0; open woodland with many semi-transparent canopy edges produces lower values.
- Opacity entropy H_o = Shannon entropy of the opacity histogram (quantized to 20 bins over the 0.5–1.0 range). Measures the diversity of opacity values. High entropy indicates a gradient from solid surfaces to translucent edges — characteristic of complex canopy architecture with layered foliage.
- Opacity skewness O_γ = skewness of the opacity distribution. Strongly left-skewed (concentrated near 1.0) in solid-surfaced environments like rock faces or bare ground. More symmetric in environments with extensive semi-transparent structure like needle-leaf canopy or fine branch networks.
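The three opacity descriptors reduce to standard moment and histogram statistics over the filtered population; a minimal sketch:

```python
import numpy as np

def opacity_descriptors(opacity, bins=20):
    """O_mu, H_o, O_gamma over the filtered (opacity >= 0.5) population."""
    O_mu = float(opacity.mean())
    p, _ = np.histogram(opacity, bins=bins, range=(0.5, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    H_o = float(-np.sum(p * np.log(p)))     # entropy, max ln(20)
    d = opacity - O_mu
    O_gamma = float(np.mean(d ** 3) / (np.mean(d ** 2) ** 1.5 + 1e-12))
    return O_mu, H_o, O_gamma
```

A uniform opacity distribution over [0.5, 1.0] gives near-maximal entropy and near-zero skewness, the signature of maximally layered edge structure under this metric.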
3.4 Scale Texture Descriptors
The scale parameter of each Gaussian (its physical extent in the reconstruction) encodes surface texture information — the granularity at which SHARP represents the scene:
- Scale mean Sc_μ = mean of the three scale values (ellipsoid semi-axes) per Gaussian, averaged over the filtered population. Fine-textured surfaces (grass, needles, fine branches) produce smaller Gaussians; smooth broad surfaces (bare soil, rock, large leaves) produce larger ones.
- Scale entropy H_s = Shannon entropy of the scale distribution. High in scenes with mixed textures (e.g., fine grass at base transitioning to broad leaves above); low in texturally uniform scenes.
- Scale coefficient of variation Sc_cv = σ / μ of the scale distribution. A dimensionless measure of texture heterogeneity.
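The scale texture group follows the same pattern; a sketch operating on decoded (linear-unit) scale triplets:

```python
import numpy as np

def scale_descriptors(scales, bins=30):
    """Sc_mu, Sc_cv, H_s from decoded per-Gaussian scale triplets."""
    g = scales.mean(axis=1)                 # one scale value per Gaussian
    Sc_mu = float(g.mean())
    Sc_cv = float(g.std() / (Sc_mu + 1e-12))
    p, _ = np.histogram(g, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    H_s = float(-np.sum(p * np.log(p)))
    return Sc_mu, Sc_cv, H_s
```

A texturally uniform population (identical scales) collapses to zero coefficient of variation and zero scale entropy.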
3.5 Composite Structural State Vector
The full descriptor vector per terrarium cell is:
v = [L, P, S, V, A, G_f, H_c, RG, B_μ, S_σ, O_μ, H_o, O_γ, Sc_μ, Sc_cv, H_s]
This 16-dimensional vector is entirely dimensionless. It can be computed for any SHARP-processed perspective image without calibration, registration, or reference to external measurements. The vector characterizes the structural state of the habitat visible within that grid cell at the moment of capture.
4. Proposed Analysis
4.1 Batch Computation
The descriptor pipeline will process all 11,425 terrariums, reading each physically filtered PLY file, computing the 16-element state vector, and storing results in a flat table indexed by station, panorama, grid cell, and capture date. Estimated computation time is modest — eigenvalue decomposition and histogram statistics on 300,000–500,000 filtered points per cell require seconds, not minutes. The full archive should process in under 24 hours on existing hardware.
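The batch stage is structurally simple. The sketch below assumes a hypothetical iterator over (station, panorama, cell, date, filtered data) tuples and a hypothetical `compute_vector` function returning the 16 descriptors; the real PLY reader and descriptor implementations are out of scope here:

```python
FIELDS = ["L", "P", "S", "V", "A", "G_f", "H_c", "RG", "B_mu", "S_sigma",
          "O_mu", "H_o", "O_gamma", "Sc_mu", "Sc_cv", "H_s"]

def batch_rows(terrariums, compute_vector):
    """Flat results table: one 16-element row per terrarium cell.
    `terrariums` yields (station, panorama, cell, date, filtered_data);
    `compute_vector` maps filtered_data to 16 floats. Both are placeholders."""
    rows = []
    for station, pano, cell, date, data in terrariums:
        vec = compute_vector(data)
        rows.append({"station": station, "panorama": pano, "cell": cell,
                     "date": date, **dict(zip(FIELDS, vec))})
    return rows
```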
4.2 Temporal Analysis: Phenological Signal Detection
The primary validation question: do dimensionless structural descriptors track known phenological patterns?
Hypothesis 1 (Deciduous phenology): At stations dominated by deciduous vegetation (e.g., temperate deciduous forest), the green fraction G_f and color entropy H_c should increase from winter to spring (leaf flush), peak in summer, and decline through autumn (senescence), while the red-green ratio RG should spike during autumn color change. Simultaneously, linearity L should be higher in winter (bare branches dominate the reconstruction) and sphericity S should increase in summer (canopy leaf masses).
Hypothesis 2 (Grassland seasonality): At stations dominated by annual grassland (e.g., California oak savanna), planarity P should dominate year-round (horizontal ground plane), but green fraction should track the Mediterranean wet/dry cycle — high in winter/spring, declining sharply in summer drought dormancy. Scale entropy H_s may decrease in summer as the textural uniformity of dead grass replaces the heterogeneous mix of green blades and seed heads.
Hypothesis 3 (Tropical evergreenness): At tropical wet forest stations, the structural state vector should show minimal seasonal variation across all 16 dimensions, serving as a negative control demonstrating that low seasonal variance in the descriptors reflects genuine ecological stability rather than measurement insensitivity.
Hypothesis 4 (Conifer stability): At mixed conifer stations, color metrics should show moderate seasonal change (snow presence/absence affects brightness and green fraction) while structural metrics (linearity, sphericity) remain relatively stable, reflecting the persistence of evergreen canopy architecture.
4.3 Spatial Analysis: Biome Discrimination
Hypothesis 5 (Cross-site clustering): When the 16-dimensional state vectors for all grid cells are subjected to principal component analysis or UMAP dimensionality reduction, cells from the same biome type should cluster together regardless of station identity. A horizon cell at a California oak savanna should be more similar to a horizon cell at another oak savanna than to a horizon cell at an adjacent conifer forest, even though the two savanna stations may be hundreds of kilometers apart.
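The dimensionality reduction step can be sketched with SVD-based PCA on standardized state vectors (UMAP would replace the projection, not the standardization):

```python
import numpy as np

def pca_embed(X, k=2):
    """Standardize rows of state vectors (n cells x 16 descriptors) and
    project onto the first k principal components."""
    Xs = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:k].T
```

Standardization matters here: although every descriptor is dimensionless, their ranges differ (e.g., [0, 1] versus [0, ln 36]), and unscaled PCA would weight the wider-ranged descriptors disproportionately.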
Hypothesis 6 (Elevation ring stratification): Within a single panorama, the structural state vector should differ systematically between elevation rings. Nadir cells (g24, looking down) should show high planarity and ground-associated colors. Zenith cells (g23, looking up) should show canopy-associated characteristics in forested sites and high brightness/low saturation in open sites. Horizon cells should show the greatest structural diversity.
4.4 Structural Complexity and Biodiversity
Hypothesis 7 (Complexity-diversity correlation): If the structural complexity hypothesis holds at the scale of SHARP reconstruction, stations with higher structural state vector diversity — measured as the dispersion of cell vectors within a panorama, or the entropy across morphological types — should correlate with higher species counts as recorded in the iNaturalist species cache maintained by MacroscopeVR.
This hypothesis connects the structural characterization proposed here to the existing iNaturalist integration in the platform, where each station maintains a cached species inventory. The correlation would be exploratory — confounded by sampling effort, geographic range size, and taxonomic bias in citizen science observations — but a positive signal would motivate more rigorous investigation.
5. Discussion
5.1 Relationship to Existing Structural Metrics
The proposed descriptors are intellectually descended from terrestrial LiDAR structural analysis, where eigenvalue-based shape descriptors have been used to classify point clouds into ground, vegetation, and built structure [14, 15, 19]. The key differences are:
- Source data: SHARP Gaussians rather than LiDAR returns. SHARP point clouds exhibit uniform density with depth-plane artifacts rather than the variable density and occlusion shadows characteristic of LiDAR [5]. The physical filtering step is designed to mitigate this difference.
- Scale: SHARP reconstructions are uncalibrated. LiDAR eigenvalue descriptors are typically computed at calibrated spatial neighborhoods (e.g., all points within a 0.5 m radius). Here, the eigenvalue decomposition operates on the entire filtered population of a grid cell — a much coarser spatial scale.
- Color integration: LiDAR intensity is a single-channel reflectance measure. SHARP Gaussians carry full RGB color from the source photograph, enabling chromatic descriptors that LiDAR cannot provide.
The phenocam literature provides direct precedent for the color-based descriptors. The PhenoCam Network uses fixed-position digital cameras to track canopy greenness through the green chromatic coordinate GCC = G / (R + G + B), validated against satellite NDVI and field phenology observations across hundreds of sites and multiple years [16, 17]. Our green fraction G_f is a three-dimensional analog: instead of computing GCC from a flat photograph, we compute it from the color distribution of physically filtered Gaussians that have been placed in three-dimensional space. This adds structural context — a green value associated with a high-sphericity, low-linearity Gaussian cluster is canopy; the same green value in a high-planarity context is ground cover.
5.2 The Dimensionless Advantage
The absence of metric calibration, which appears to be a weakness relative to LiDAR, may prove to be a methodological strength for comparative ecology. Calibrated measurements invite direct comparison of absolute values — and absolute values confound structure with scale. A 40-meter Amazonian emergent and a 4-meter chaparral shrub have nothing in common in absolute height, but both may exhibit similar linearity-to-sphericity transitions from trunk to canopy. Dimensionless descriptors abstract away absolute size and focus on proportional architecture — arguably the more ecologically relevant quantity for niche characterization.
This parallels the insight from fractal geometry that self-similar structures generate similar fractal dimensions across vastly different absolute scales [20]. If habitat architecture exhibits self-similar properties — and there is evidence that it does, particularly in branching patterns of vegetation [21] — then dimensionless descriptors may capture structural invariants that absolute measurements obscure.
5.3 Toward a Participatory Structural Ecology
The practical significance of dimensionless structural descriptors lies in their data requirements. A calibrated LiDAR survey requires a $50,000+ instrument, trained operators, registration targets, and substantial post-processing. A SHARP-derived structural state vector requires a $500 consumer 360° camera, a ten-second exposure, and an automated processing pipeline.
If the descriptors validate against known phenological and structural patterns, the MacroscopeVR contribution model extends from visualization to quantitative ecology. A citizen scientist capturing a geotagged panorama in a city park, a farm field, a wilderness trailhead, or a schoolyard contributes not just an explorable 3D scene but a standardized structural characterization of that habitat at that moment. Aggregated across contributors and seasons, this constitutes a distributed structural monitoring network — a phenocam network that measures in three dimensions rather than two, deployed not at funded research stations but wherever anyone points a camera.
This parallels the trajectory of iNaturalist, which compressed taxonomic expertise into AI-accessible classification [22]. MacroscopeVR would compress structural ecology — traditionally requiring plot mensuration, specialized instruments, and forestry training — into an automated pipeline triggered by a consumer photograph. The structural state vector is the quantitative output that makes the compression scientifically useful rather than merely visual.
6. Limitations
6.1 Monocular Depth Accuracy
SHARP's depth estimation, while structurally coherent within a scene, exhibits systematic errors including depth-plane banding and scale ambiguity [5]. The eigenvalue descriptors operate on this imperfect geometry. While ratios are robust to uniform scale errors, nonlinear depth distortions could bias the spatial structure descriptors. The magnitude of this bias is unknown and must be assessed empirically.
6.2 Illumination Sensitivity
Color descriptors are sensitive to illumination conditions at the time of capture. A panorama taken at solar noon will produce different brightness, saturation, and potentially hue distributions than one taken at dawn from the same position. The archive does not control for time of day, sky conditions, or sun angle. These confounds may add noise to temporal comparisons unless illumination normalization is applied.
6.3 Single-Viewpoint Occlusion
Each grid cell sees only the surfaces visible from the camera position. Dense understory occludes the forest floor; closed canopy occludes the sky from horizon-ring cells. The structural state vector characterizes visible structure, which is not identical to total structure. This is a fundamental limitation shared with phenocam observations and all passive optical methods.
6.4 Grid Cell Independence
The 25 cells of each panorama are treated as independent observations, but they are extracted from a single equirectangular image and share boundary regions. Spatial autocorrelation within a panorama should be expected and accounted for in statistical analyses.
6.5 Archive Heterogeneity
The 33 stations were captured by different observers, with different camera models and settings, at different times of day, under different weather conditions. This heterogeneity is both a strength (testing generalizability) and a weakness (introducing uncontrolled variance). Careful stratification by station, season, and ring elevation will be necessary to separate ecological signal from observational noise.
7. Conclusion
The MacroscopeVR archive of 11,425 SHARP Gaussian splat reconstructions across 33 research stations and multiple seasons constitutes a dataset ready for a novel form of ecological analysis. We propose that dimensionless structural descriptors — eigenvalue ratios, color distribution statistics, opacity heterogeneity, and scale texture measures — can characterize habitat structure without metric calibration, enabling comparison across sites, seasons, and biomes using data captured with consumer cameras and processed through automated pipelines.
The immediate next step is batch computation of the 16-element structural state vector across the full archive, followed by validation against known phenological patterns at stations where seasonal ecological dynamics are well-documented through decades of field observation. Success would establish a new class of ecological metric — rapid, participatory, three-dimensional, and dimensionless — bridging the gap between qualitative habitat description and the quantitative structural analysis currently accessible only through expensive, specialized instrumentation.
References
[1] Westoby, M. J., Brasington, J., Glasser, N. F., Hambrey, M. J., & Reynolds, J. M. (2012). "Structure-from-Motion photogrammetry: A low-cost, effective tool for geoscience applications." Geomorphology, 179, 300–314. DOI: 10.1016/j.geomorph.2012.08.021
[2] Calders, K., Adams, J., Armston, J., et al. (2020). "Terrestrial laser scanning in forest ecology: Expanding the horizon." Remote Sensing of Environment, 251, 112102. DOI: 10.1016/j.rse.2020.112102
[3] Szymanowicz, S., Rong, J., Monszpart, A., et al. (2024). "SHARP: Splatter Image with Hash-grid Adaptive Resolution and Prediction." Apple Machine Learning Research.
[4] Kerbl, B., Kopanas, G., Leimkühler, T., & Drettakis, G. (2023). "3D Gaussian Splatting for Real-Time Radiance Field Rendering." ACM Transactions on Graphics, 42(4). DOI: 10.1145/3592433
[5] Hamilton, M. P. (2026). "MacroscopeVR Technical Specification and Project Report." CNL-DR-2026-001 v2.0. Canemah Nature Laboratory.
[6] Shannon, C. E. (1948). "A Mathematical Theory of Communication." Bell System Technical Journal, 27(3), 379–423.
[7] Chen, J. M. & Black, T. A. (1992). "Defining leaf area index for non-flat leaves." Plant, Cell & Environment, 15(4), 421–429.
[8] Jennings, S. B., Brown, N. D., & Sheil, D. (1999). "Assessing forest canopies and understorey illumination: canopy closure, canopy cover and other measures." Forestry, 72(1), 59–73.
[9] MacArthur, R. H. & MacArthur, J. W. (1961). "On Bird Species Diversity." Ecology, 42(3), 594–598. DOI: 10.2307/1932254
[10] Tews, J., Brose, U., Grimm, V., et al. (2004). "Animal species diversity driven by habitat heterogeneity/diversity: the importance of keystone structures." Journal of Biogeography, 31(1), 79–92. DOI: 10.1046/j.0305-0270.2003.00994.x
[11] Stein, A., Gerstner, K., & Kreft, H. (2014). "Environmental heterogeneity as a universal driver of species richness across taxa, biomes and spatial scales." Ecology Letters, 17(7), 866–880. DOI: 10.1111/ele.12277
[12] Lefsky, M. A., Cohen, W. B., Parker, G. G., & Harding, D. J. (2002). "Lidar Remote Sensing for Ecosystem Studies." BioScience, 52(1), 19–30.
[13] Davies, A. B. & Asner, G. P. (2014). "Advances in animal ecology from 3D-LiDAR ecosystem mapping." Trends in Ecology & Evolution, 29(12), 681–691. DOI: 10.1016/j.tree.2014.10.005
[14] Weinmann, M., Jutzi, B., Hinz, S., & Mallet, C. (2015). "Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers." ISPRS Journal of Photogrammetry and Remote Sensing, 105, 286–304. DOI: 10.1016/j.isprsjprs.2015.01.016
[15] Hackel, T., Wegner, J. D., & Schindler, K. (2016). "Fast Semantic Segmentation of 3D Point Clouds with Strongly Varying Density." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, III-3, 177–184.
[16] Richardson, A. D., Hufkens, K., Milliman, T., et al. (2018). "Tracking vegetation phenology across diverse North American biomes using PhenoCam imagery." Scientific Data, 5, 180028. DOI: 10.1038/sdata.2018.28
[17] Sonnentag, O., Hufkens, K., Teshera-Sterne, C., et al. (2012). "Digital repeat photography for phenological research in forest ecosystems." Agricultural and Forest Meteorology, 152, 159–177. DOI: 10.1016/j.agrformet.2011.09.009
[18] Woebbecke, D. M., Meyer, G. E., Von Bargen, K., & Mortensen, D. A. (1995). "Color indices for weed identification under various soil, residue, and lighting conditions." Transactions of the ASAE, 38(1), 259–269.
[19] Brodu, N. & Lague, D. (2012). "3D terrestrial lidar data classification of complex natural scenes using a multi-scale dimensionality criterion: Applications in geomorphology." ISPRS Journal of Photogrammetry and Remote Sensing, 68, 121–134. DOI: 10.1016/j.isprsjprs.2012.01.006
[20] Mandelbrot, B. B. (1982). The Fractal Geometry of Nature. W. H. Freeman.
[21] Zeide, B. & Pfeifer, P. (1991). "A Method for Estimation of Fractal Dimension of Tree Crowns." Forest Science, 37(5), 1253–1265.
[22] Van Horn, G., Mac Aodha, O., Song, Y., et al. (2018). "The iNaturalist Species Classification and Detection Dataset." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 8769–8778.
Appendix A: Structural State Vector Summary
| Index | Symbol | Descriptor | Group | Range |
|---|---|---|---|---|
| 1 | L | Linearity | Spatial | [0, 1] |
| 2 | P | Planarity | Spatial | [0, 1] |
| 3 | S | Sphericity | Spatial | [0, 1] |
| 4 | V | Vertical bias | Spatial | [0, 1] |
| 5 | A | Anisotropy | Spatial | [0, 1] |
| 6 | G_f | Green fraction | Color | [0, 1] |
| 7 | H_c | Color entropy | Color | [0, ln(36)] |
| 8 | RG | Red-green ratio | Color | [0, ∞) |
| 9 | B_μ | Brightness mean | Color | [0, 1] |
| 10 | S_σ | Saturation spread | Color | [0, 0.5] |
| 11 | O_μ | Opacity mean | Opacity | [0.5, 1] |
| 12 | H_o | Opacity entropy | Opacity | [0, ln(20)] |
| 13 | O_γ | Opacity skewness | Opacity | (−∞, ∞) |
| 14 | Sc_μ | Scale mean | Scale | (0, 0.05] |
| 15 | Sc_cv | Scale CV | Scale | [0, ∞) |
| 16 | H_s | Scale entropy | Scale | [0, ln(N)] |
Note: L + P + S = 1 (constrained simplex). N in the H_s range denotes the number of scale histogram bins. All descriptors except RG, O_γ, and Sc_cv are bounded (Sc_μ is bounded above by the 0.05 scale filter); the unbounded descriptors may require normalization for multivariate analysis.
Document History
| Version | Date | Changes |
|---|---|---|
| 0.1 | 2026-03-04 | Initial draft |
Permanent URL: https://canemah.org/archive/document.php?id=CNL-TN-2026-030