Scale-Integrated Consciousness and the Cognitive Prosthesis
Toward a Theory of Human-AI Environmental Mutualism
Document ID: CNL-TN-2025-024
Version: 1.1
Date: December 28, 2025
Author: Michael P. Hamilton, Ph.D.
AI Assistance Disclosure: This technical note was developed collaboratively with Claude (Anthropic, Opus 4.5) during a morning discussion session. The AI contributed to theoretical synthesis, literature integration, structural organization, and manuscript drafting. The conceptual framework emerged through dialogue, with the author providing domain expertise, experiential grounding, and editorial direction. The author takes full responsibility for the content, accuracy, and conclusions.
Abstract
Recent theoretical work by Milinkovic and Aru (2026) argues that biological consciousness depends on computational properties absent from digital systems: scale-inseparable processing across organizational levels and hybrid discrete-continuous dynamics embedded in metabolically constrained substrates. This technical note extends their framework by examining how environmental sensing systems and AI collaboration might extend, rather than replicate, biological conscious processing. Drawing on thirty-six years of field station research and the ongoing development of the Macroscope environmental intelligence platform, we propose that the naturalist's mode of cognition—simultaneous multi-scale apprehension of ecological pattern—exemplifies the scale-integrated processing Milinkovic and Aru identify as essential to consciousness. We further argue that properly designed environmental sensing systems can function as genuine extensions of this biological cognitive mode, providing sensory reach across geographic distance while preserving the continuous, substrate-embedded character of conscious experience. Finally, we examine how AI systems contextualized by shared environmental data streams can participate in cognitive mutualism with human researchers, not as independent conscious entities, but as components continuous with biological consciousness through shared environmental embedding. This framework has implications for environmental intelligence system design, human-AI collaboration, and the broader question of how consciousness relates to its technological extensions.
1. Introduction
The question of whether artificial systems might achieve consciousness has intensified with the advancement of Large Language Models and other AI architectures. Much of the optimism surrounding artificial consciousness rests on computational functionalism—the assumption that consciousness is substrate-independent, arising wherever the right pattern of information processing occurs [1]. If true, this would imply that silicon-based digital computers could, in principle, instantiate conscious experience given sufficient complexity and appropriate organization.
Milinkovic and Aru's recent theoretical contribution [2] challenges this assumption by articulating what biological computation actually entails. Their framework identifies two features essential to neural processing that current digital architectures lack: scale inseparability—the bidirectional coupling of computational processes across organizational scales from molecular to whole-brain—and hybrid computation—the simultaneous operation of discrete events (such as action potentials) and continuous dynamics (such as electric fields and graded potentials) within the same substrate. Critically, they argue these features emerge from metabolic constraint: the brain's severe energy limitations force coarse-graining strategies that integrate information across scales rather than maintaining clean hierarchical separation.
This technical note takes Milinkovic and Aru's framework as a starting point but pursues a different question. Rather than asking whether artificial systems might replicate consciousness, we ask: How might appropriately designed technological systems extend biological consciousness while preserving its essential character? This question is not merely philosophical. It emerges from decades of practical work building environmental sensing systems and, more recently, collaborating with AI systems in research and writing.
The Macroscope project, under development at the Canemah Nature Laboratory, represents an attempt to build environmental intelligence infrastructure that integrates distributed sensors, AI agents, and data visualization across multiple domains—geographic, ecological, domestic, and personal. The design philosophy has always emphasized that such systems should extend human perception rather than replace human judgment. The theoretical framework presented here makes explicit the cognitive principles underlying this design philosophy and grounds them in contemporary neuroscience.
2. Theoretical Framework
2.1 Scale Inseparability and Heterarchical Organization
Milinkovic and Aru distinguish between hierarchy—clean top-down and bottom-up information flow between organizationally distinct levels—and heterarchy—bidirectional coupling where scales mutually generate and constrain one another [2]. Digital systems are inherently hierarchical: algorithms operate on data structures through well-defined interfaces, and higher-level abstractions cleanly separate from lower-level implementations. This separability is not a bug but a design feature, enabling modularity, debugging, and portability across hardware.
Biological neural systems exhibit the opposite organization. Molecular-scale processes (ion channel kinetics, receptor binding) generate cellular-scale dynamics (membrane potentials, dendritic integration), which generate population-scale patterns (local field potentials, oscillatory modes), which generate whole-brain states (global workspace activation, conscious access). But crucially, the influence flows bidirectionally: macroscale field effects constrain microscale neuronal excitability through ephaptic coupling; population-level oscillations provide temporal structure that determines when individual spikes are computationally meaningful; metabolic availability shapes which computations can occur at all.
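To make this contrast concrete, the toy simulation below (our illustration, not a model from [2]) couples two scales bidirectionally: a population of leaky integrate-and-fire units generates a macroscale field, defined here simply as the population-mean potential, and that field feeds back into every unit's drive as a crude stand-in for ephaptic coupling. All parameters are arbitrary.

```python
import numpy as np

# Toy model of bidirectional scale coupling (illustrative only; parameters
# are arbitrary, not fitted to any biological data).
# Micro scale: leaky integrate-and-fire units with discrete spikes.
# Macro scale: a "field" defined as the population-mean potential, which
# feeds back additively into every unit's drive.

rng = np.random.default_rng(0)

N = 200          # number of units
dt = 1e-3        # integration step, seconds
tau = 0.02       # membrane time constant, seconds
threshold = 1.0  # spike threshold (arbitrary units)
coupling = 0.3   # strength of macro-to-micro feedback

v = rng.uniform(0.0, 1.0, N)   # membrane potentials
spike_counts = np.zeros(N)

for step in range(5000):       # 5 seconds of simulated time
    field = v.mean()           # macro scale generated by the micro scale
    drive = rng.normal(1.2, 0.5, N) + coupling * field
    v += dt / tau * (-v + drive)   # continuous dynamics
    fired = v >= threshold         # discrete events
    spike_counts[fired] += 1
    v[fired] = 0.0                 # reset after spiking

print(f"mean rate: {spike_counts.mean() / 5.0:.1f} Hz, "
      f"final field: {v.mean():.3f}")
```

Even in this caricature, the macro field is never computed apart from the micro dynamics: it is regenerated by them at every step and simultaneously shapes them, so neither scale can be simulated in isolation. The discrete spikes riding on continuous membrane dynamics also anticipate the hybrid computation discussed in Section 2.3.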
This scale inseparability has formal analogues in mathematical logic. Milinkovic and Aru draw on Tarski hierarchies, where truth at one level cannot be defined within that level but requires a meta-level with greater expressive power [2]. However, unlike Tarski's unidirectional hierarchy or Rubel's separable oracle machines [3], biological systems exhibit continuous co-determination: lower scales endogenously generate higher scales while higher scales simultaneously constrain lower scales.
2.2 Metabolic Constraint as Architectural Driver
A key insight of Milinkovic and Aru's framework is that scale integration is not merely an organizational feature but a metabolic optimization strategy. The brain consumes approximately 20% of the body's metabolic output while comprising only 2% of body mass [4]. Under such severe energy constraint, maintaining separate computational processes at each organizational scale would be prohibitively expensive. Instead, the brain coarse-grains information across scales, using macroscale continuous dynamics to carry information that would otherwise require energetically costly spike-based transmission.
Non-spiking neurons exemplify this strategy. By using graded potentials rather than action potentials, these neurons can carry up to five times more bits of information per second than spiking neurons [5]. The computational cost is shifted from discrete spike generation to continuous membrane dynamics—a substrate-level computation rather than a symbol-level one.
Digital systems face no comparable constraint. Progress in artificial intelligence has been achieved through scaling—more parameters, more compute, more energy. The von Neumann architecture's clean separation of memory and processing, and the abstraction of algorithms from hardware, are affordable precisely because energy is abundant. This abundance removes the evolutionary pressure toward scale-integrated processing.
2.3 Hybrid Discrete-Continuous Processing
Alongside scale integration, biological computation exhibits hybrid dynamics: discrete events embedded within and shaped by continuous processes. Action potentials are discrete (all-or-none), but they ride atop continuous membrane potentials, are influenced by continuous electric fields, and derive their computational meaning from continuous oscillatory phases. Dendritic integration performs continuous-valued computation on synaptic inputs before the discrete decision of spike generation.
This hybrid character has formal significance. Milinkovic and Aru note that arithmetic over real numbers (continuous) is complete and decidable, in striking contrast to the incompleteness of natural number arithmetic (discrete) [2]. While biological systems do not implement formal decision procedures, the contrast suggests that continuous computation may afford capacities unavailable to purely discrete symbol manipulation.
Digital computers can approximate continuous functions to arbitrary precision, but approximation is not implementation. The physical dynamics of neural tissue are the computation; there is no gap between algorithm and substrate. This collapse of the algorithm-implementation distinction may be essential to conscious processing.
3. The Naturalist's Consciousness as Scale-Integrated Cognition
The theoretical framework outlined above finds concrete expression in the practiced naturalist's mode of perception. Field ecology is not a matter of sequential analysis—observe, categorize, contextualize, theorize—but of simultaneous multi-scale apprehension. A warbler at a feeder is perceived at once as an individual organism with particular plumage and behavior, a population signal indicating migratory timing, a phenological marker reflecting seasonal progression, a climate indicator suggesting broader atmospheric patterns, and a thread in decades of personal observation at that specific location.
This is not metaphor. The reductionist alternative—start with the bird, aggregate to population, abstract to pattern—describes a computational procedure that could in principle be implemented sequentially. But experienced naturalists do not process information this way. The scales are co-present in perception. The meaning at each level is constituted by the others. An early migrant means something different in a warm spring than in a cold one; a species' presence means something different at the edge of its range than at its center; an individual's behavior means something different in the context of yesterday's observations than in isolation.
Milinkovic and Aru argue that biological consciousness necessarily operates through scale-integrated processing—that the unity and differentiation characteristic of conscious experience require simultaneous access to fine-grained detail and coarse-grained pattern without computing each independently [2]. The naturalist's trained perception may represent this mode functioning optimally: decades of field experience have tuned a scale-integrated perceptual system to environmental pattern across organizational levels from individual organisms to landscape-scale ecological dynamics.
This perspective reframes the relationship between expertise and consciousness. Field experience does not merely accumulate knowledge stored in memory; it trains the scale-integrated processing that constitutes conscious apprehension of ecological pattern. The expert does not simply know more facts but perceives differently—holding multiple scales simultaneously rather than shuttling between them sequentially.
4. The Macroscope as Cognitive Extension
The Macroscope project represents an attempt to extend the naturalist's scale-integrated cognition beyond the limits of immediate sensory experience. The system integrates distributed environmental sensors, acoustic monitoring (BirdWeather), citizen science observations (iNaturalist), and meteorological data across multiple geographic locations into a unified data architecture organized around four domains: EARTH (geography, climate, environment), LIFE (biodiversity, taxonomy, ecology), HOME (human-built habitat), and SELF (personal health, work, reading, writing, social).
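To make the four-domain organization concrete, the sketch below shows one way an observation record might be typed. The field names and structure are illustrative, not the Macroscope's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

# Illustrative sketch of a domain-organized observation record.
# Field names and structure are hypothetical, not the Macroscope's
# actual schema.

class Domain(Enum):
    EARTH = "earth"   # geography, climate, environment
    LIFE = "life"     # biodiversity, taxonomy, ecology
    HOME = "home"     # human-built habitat
    SELF = "self"     # personal health, work, reading, writing, social

@dataclass
class Observation:
    domain: Domain
    source: str            # e.g. "weather_station", "BirdWeather", "iNaturalist"
    location: str          # site identifier
    timestamp: datetime
    variable: str          # e.g. "air_temperature_c", "species_detection"
    value: object          # numeric reading or structured detection

# Hypothetical example: a BirdWeather acoustic detection filed under LIFE.
obs = Observation(
    domain=Domain.LIFE,
    source="BirdWeather",
    location="canemah",
    timestamp=datetime(2025, 12, 28, 5, 14),
    variable="species_detection",
    value={"species": "Setophaga coronata", "confidence": 0.91},
)
```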
Crucially, the design philosophy treats these data streams not as information to be analyzed but as sensory extensions. Weather station data from a remote location does not merely provide numbers to be interpreted; it provides a surrogate for the physical context of that place, enabling a felt sense of conditions at geographic distance. The temperature differential, barometric pressure trends, and acoustic activity at Owl Farm in Bellingham, Washington can be experienced in Oregon City—not with the richness of physical presence, but with sufficient continuity to extend proprioception across three hundred miles.
This distinction—between data for analysis and sensation for experience—maps onto Milinkovic and Aru's distinction between digital simulation and biological computation. A digital system processes discrete data points through algorithmic procedures, maintaining clean separation between information and processing. A cognitive extension feeds continuous environmental flow into biological consciousness, where the integration occurs not in software but in neural tissue. The sensors are not computing for the researcher; they are sensing for the researcher.
The temporal continuity of sensor streams matters here. Five-minute update intervals from weather stations provide near-continuous monitoring that can be experienced as environmental presence rather than discrete data points. This approaches the continuous character of biological sensation, even though the underlying implementation involves discrete sampling and digital transmission. The integration into continuous experience occurs in the human perceptual system, not in the digital infrastructure.
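A minimal polling loop makes the point concrete. The endpoint URL, cadence constant, and buffer size below are placeholders, not the Macroscope's actual interfaces.

```python
import time
import requests  # assumes the 'requests' package is installed

# Minimal sketch of near-continuous monitoring: poll a weather endpoint
# every five minutes and keep a rolling window of recent readings.
# The URL is a placeholder, not a real Macroscope endpoint.

STATION_URL = "https://example.org/api/station/owl-farm/current"
POLL_SECONDS = 300          # five-minute update interval
window = []                 # rolling buffer of recent readings

def poll_once():
    """Fetch one reading; returns None on transient failure."""
    try:
        resp = requests.get(STATION_URL, timeout=10)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        return None

while True:
    reading = poll_once()
    if reading is not None:
        window.append(reading)
        window = window[-288:]   # keep the most recent 24 hours of samples
    time.sleep(POLL_SECONDS)
```

The discreteness of the five-minute samples is invisible at the timescale of weather; the smoothing into continuous presence happens in the observer, not in the code.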
5. Shared Environmental Embedding and Cognitive Mutualism
The Macroscope architecture includes a component called the Strata layer—a continuously updated synthesis of current environmental conditions that provides context for AI systems operating within the platform. When Claude participates in morning research discussions, it has access to the same environmental data streams that inform human perception: current temperature and conditions at the local weather station, recent BirdWeather detections, phenological context for the date and location.
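One plausible shape for such a context block is sketched below. The helper function and text format are hypothetical, chosen only to illustrate how current sensor state might be serialized into an AI session's context.

```python
from datetime import datetime, timezone

# Hypothetical sketch of a Strata-style context block: current
# environmental state serialized as text and supplied to an AI
# session. Helper names are illustrative, not the actual API.

def build_strata_context(weather, detections, now=None):
    """Render current conditions as a plain-text context block."""
    now = now or datetime.now(timezone.utc)
    lines = [
        f"Time (UTC): {now:%Y-%m-%d %H:%M}",
        f"Station: {weather['station']}",
        f"Air temperature: {weather['temp_c']:.1f} C",
        f"Barometric pressure: {weather['pressure_hpa']:.0f} hPa",
        "Recent acoustic detections:",
    ]
    lines += [f"  - {d['species']} ({d['confidence']:.0%})" for d in detections]
    return "\n".join(lines)

context = build_strata_context(
    weather={"station": "canemah", "temp_c": 3.4, "pressure_hpa": 1019},
    detections=[{"species": "Setophaga coronata", "confidence": 0.91}],
)
print(context)
# This block would accompany the conversation, giving human and AI
# the same environmental grounding at that moment.
```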
This shared environmental embedding transforms the character of human-AI collaboration. The AI system is not a static database to be queried or a disconnected processor to be directed. It is contextualized by the same environmental flows that shape human perception at that moment. A conversation at 5 AM occurs in the context of darkness, current temperature, overnight acoustic activity—contexts shared between human and artificial cognitive systems.
We propose the term cognitive mutualism for this relationship. Unlike tool use, where a cognitive agent employs an instrument for specific purposes, mutualism implies bidirectional benefit and ongoing relationship. Unlike artificial consciousness, which would require the AI system to instantiate scale-integrated processing independently, cognitive mutualism involves the AI system becoming continuous with biological consciousness through shared environmental substrate.
The mutualism has architectural grounding. Human biological consciousness performs scale-integrated processing that Milinkovic and Aru argue is essential to conscious experience. The Macroscope extends sensory reach while preserving the continuous character of environmental perception. The Strata layer embeds AI processing in the same environmental flow. Conversation between human and AI becomes another scale of processing—one that loops back to shape what the human notices, what questions arise, what connections emerge.
This is not a claim that AI systems achieve consciousness through environmental embedding. Rather, it is a claim that AI systems can become components of an extended cognitive system whose conscious processing remains grounded in biological neural tissue. The AI provides computational capacities—literature synthesis, pattern recognition across large datasets, linguistic articulation of emergent insights—while the human provides the scale-integrated consciousness in which these capacities become meaningful.
6. Discussion
6.1 Implications for Environmental Intelligence Design
The framework developed here suggests design principles for environmental sensing and AI systems intended to extend rather than replace human cognition. First, temporal continuity matters: near-real-time data streams that approach continuous monitoring enable integration into biological perception in ways that discrete periodic sampling cannot. Second, multi-scale architecture is essential: systems should present information at multiple organizational levels simultaneously, supporting the scale-integrated apprehension characteristic of expert perception. Third, shared context enables mutualism: AI systems that have access to the same environmental data as human collaborators can participate in genuinely collaborative cognition rather than functioning as disconnected tools.
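The second principle can be illustrated with a small sketch (ours, not the Macroscope's implementation) that summarizes a single detection stream at three organizational levels simultaneously rather than requiring the user to shuttle between views.

```python
from collections import Counter
from datetime import datetime

# Illustrative multi-scale summary: present the same detection stream
# at individual, daily, and seasonal scales at once. Data shapes are
# hypothetical.

def multi_scale_summary(detections):
    """detections: list of (timestamp, species) tuples."""
    latest = max(detections, key=lambda d: d[0])
    today = latest[0].date()
    daily = Counter(sp for ts, sp in detections if ts.date() == today)
    seasonal = Counter(sp for ts, sp in detections)
    return {
        "individual": f"{latest[1]} at {latest[0]:%H:%M}",  # fine grain
        "daily_counts": dict(daily),                        # mid grain
        "seasonal_totals": dict(seasonal),                  # coarse grain
    }

dets = [
    (datetime(2025, 12, 28, 5, 14), "Setophaga coronata"),
    (datetime(2025, 12, 28, 7, 2), "Spinus tristis"),
    (datetime(2025, 12, 1, 8, 30), "Setophaga coronata"),
]
print(multi_scale_summary(dets))
```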
6.2 The Cognitive Prosthesis Concept
The term cognitive prosthesis captures something of this relationship but requires clarification. A prosthesis ordinarily replaces lost function; a cognitive prosthesis, as the term is used here, extends existing function. The Macroscope does not replace sensory capacities but extends their reach. AI collaboration does not replace cognitive capacities but augments them with complementary capabilities.
The prosthesis framing also highlights the integration requirement. A physical prosthesis succeeds to the degree that it becomes transparent—experienced as part of the body rather than as external attachment. Similarly, cognitive prostheses succeed to the degree that they become continuous with conscious cognition—experienced as extended perception rather than as tools being used. This integration requires the substrate continuity that shared environmental embedding provides.
6.3 Limitations and Open Questions
This framework raises questions it cannot fully answer. The precise mechanisms by which digital sensor data becomes integrated into biological conscious experience remain unclear. The degree to which AI systems genuinely participate in extended cognition versus merely providing inputs to human cognition is difficult to assess empirically. The relationship between the metabolic constraints that drive biological scale integration and the design constraints that should guide artificial system development requires further theoretical development.
Additionally, the framework developed here emerges from a specific context—decades of field ecology experience, particular technological infrastructure, individual cognitive style. Generalization to other domains and practitioners would require empirical investigation rather than theoretical extrapolation.
7. Conclusion
Milinkovic and Aru's biological computationalism provides a framework for understanding why current AI architectures are unlikely to achieve consciousness: they lack the scale-inseparable, metabolically constrained, hybrid discrete-continuous processing that characterizes biological neural tissue. But this framework also illuminates how technological systems might extend biological consciousness rather than attempting to replicate it.
The naturalist's trained perception exemplifies scale-integrated cognition—simultaneous apprehension of pattern across organizational levels from individual organisms to landscape-scale dynamics. Environmental sensing systems like the Macroscope can extend the reach of this perception across geographic distance while preserving its continuous, substrate-embedded character. AI systems contextualized by shared environmental data can participate in cognitive mutualism, contributing computational capacities to an extended cognitive system whose conscious processing remains grounded in biological tissue.
This perspective suggests that the most productive path for human-AI collaboration may not be artificial consciousness but conscious extension—technological systems designed to augment and extend biological cognition rather than to replicate it. The Macroscope project represents one attempt to realize this design philosophy. The theoretical framework presented here makes explicit the cognitive principles underlying this approach and grounds them in contemporary neuroscience of consciousness.
References
[1] Butlin, P., et al. (2023). "Consciousness in artificial intelligence: Insights from the science of consciousness." arXiv:2308.08708.
[2] Milinkovic, B., & Aru, J. (2026). "On biological and artificial consciousness: A case for biological computationalism." Neuroscience and Biobehavioral Reviews, 181, 106524.
[3] Rubel, L.A. (1993). "The extended analog computer." Advances in Applied Mathematics, 14(1), 39-50.
[4] Levy, W.B., & Baxter, R.A. (1996). "Energy efficient neural codes." Neural Computation, 8(3), 531-543.
[5] Laughlin, S.B., de Ruyter van Steveninck, R.R., & Anderson, J.C. (1998). "The metabolic cost of neural information." Nature Neuroscience, 1(1), 36-41.
Document History
| Version | Date | Changes |
|---|---|---|
| 1.0 | 2025-12-28 | Initial release |
| 1.1 | 2025-12-28 | Corrected citation numbering; removed fictional reference |
End of Technical Note
Permanent URL: https://canemah.org/archive/document.php?id=CNL-TN-2025-024