Single-Image 3D Reconstruction from 360° Imagery: Experimental Findings Using Apple SHARP
Abstract
This field note documents experimental findings from testing Apple's SHARP model for 3D reconstruction from 360° imagery. The original hypothesis—that cubemap faces from a spherical capture could be processed independently through SHARP and merged into a unified scene model—proved incorrect. SHARP generates independent coordinate systems for each input image with inconsistent depth scales, making geometric fusion infeasible. However, the experiment yielded a productive reframing: each cubemap face produces a valid "terrarium"—a measurable 3D frustum suitable for per-view analysis. We developed a complete toolkit including high-resolution cubemap extraction, a custom WebGL2 Gaussian splatting renderer matching commercial quality, and format converters for GIS/CAD integration. Key finding: input image resolution significantly impacts splat quality; 1536px cubemap faces (from 6080×3040 source imagery) produce substantially better results than 512px extractions.
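The cubemap extraction step is only summarized in the abstract; as a rough illustration of the underlying sampling, the sketch below pulls one 90° face out of a 2:1 equirectangular panorama using nearest-neighbour lookup. The file name, face convention, yaw handling, and face size default are assumptions for illustration, not the toolkit's actual code.

```python
# Illustrative sketch, not the toolkit described in this note:
# sample one 90-degree cubemap face from a 2:1 equirectangular panorama.
import numpy as np
from PIL import Image

def extract_face(equirect: np.ndarray, face_size: int = 1536,
                 yaw_deg: float = 0.0) -> np.ndarray:
    """Extract one horizontal cubemap face (90 deg FOV) at the given yaw."""
    h, w, _ = equirect.shape
    # Pixel grid in [-1, 1] across the face (image plane at unit distance).
    u, v = np.meshgrid(np.linspace(-1, 1, face_size),
                       np.linspace(-1, 1, face_size))
    # Ray directions for a forward-looking face, then rotate by yaw.
    x, y, z = u, -v, np.ones_like(u)
    yaw = np.radians(yaw_deg)
    xr = x * np.cos(yaw) + z * np.sin(yaw)
    zr = -x * np.sin(yaw) + z * np.cos(yaw)
    # Direction -> spherical coordinates -> equirectangular pixel.
    lon = np.arctan2(xr, zr)                       # [-pi, pi]
    lat = np.arctan2(y, np.hypot(xr, zr))          # [-pi/2, pi/2]
    px = ((lon / np.pi + 1.0) * 0.5 * (w - 1)).astype(int)
    py = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return equirect[np.clip(py, 0, h - 1), np.clip(px, 0, w - 1)]

if __name__ == "__main__":
    pano = np.asarray(Image.open("pano_6080x3040.jpg"))    # hypothetical source image
    for i, yaw in enumerate((0, 90, 180, 270)):             # four side faces only
        Image.fromarray(extract_face(pano, 1536, yaw)).save(f"face_{i}.png")
```

At the source resolution cited above (6080×3040), a 1536px face corresponds to roughly the native pixel density of a 90° slice of the panorama, which is consistent with the reported quality gap versus 512px extractions.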
AI Collaboration Disclosure
This field note was developed with assistance from Claude (Anthropic, Opus 4.5). The AI contributed to literature review of the SHARP repository documentation, experimental design discussion, protocol development, software development, and manuscript drafting. The author takes full responsibility for the content, accuracy, and conclusions.
Human review: full
Permanent URL: https://canemah.org/archive/document.php?id=CNL-TN-2026-005