The Digital Consciousness Model: A Framework for Benchmarking AI Sentience in 2026
Consciousness research has a measurement problem. Unlike most scientific domains, it has no agreed-upon instrument for detecting its subject, no biomarker that reliably distinguishes a conscious system from one that merely behaves as if it were conscious. For biological organisms, researchers approximate consciousness through proxies: behavioral responsiveness, neural correlates, and evolutionary parsimony. For artificial systems, the question has been handled even more loosely, typically through task performance, verbal reports, or informal intuition.
The Digital Consciousness Model, or DCM, is a 2026 academic framework that attempts to close this gap with a structured, probabilistic approach. Rather than making a binary claim about whether a given AI system is conscious, the DCM maps a system’s properties across multiple dimensions and produces a profile that can be compared against known biological benchmarks. The result is not a consciousness detector. It is a diagnostic tool that makes divergence from biological consciousness visible and measurable.
For the field of whole brain emulation, the DCM has direct implications. An emulated brain that cannot be assessed for consciousness-related properties is an emulation of uncertain completeness. The DCM provides a vocabulary for asking whether an emulated system possesses the right kind of architecture to be a candidate for consciousness, not just a functional replica of neural behavior.
Technology Readiness Level: TRL 1–2 (theoretical framework; experimental validation of DCM dimensions is in early stages).
Why a Benchmark Was Needed
The absence of a shared framework for AI consciousness has produced a field that argues past itself. Proponents of large language models have pointed to behavioral sophistication as evidence of something consciousness-adjacent. Critics have pointed to architectural differences from biological brains as evidence of irrelevance. Neither side has had a common substrate for comparison.
Existing consciousness theories do not resolve this. Integrated Information Theory (IIT) defines consciousness in terms of phi, a measure of integrated causal information within a system. Global Workspace Theory (GWT) defines it in terms of information broadcast across a cognitive architecture. Higher-Order Theories (HOT) require representations of representations. Each theory has defenders and critics, and each makes different predictions about which systems are conscious.
The DCM does not arbitrate between these theories. Instead, it extracts measurable properties that most major theories agree are relevant and organizes them into a multidimensional profile. A system can be scored on each dimension independently. The aggregate profile reveals where the system resembles biological consciousness, where it clearly does not, and where the question is genuinely indeterminate.
This is a different intellectual move from asking “is this system conscious?” It replaces that unanswerable question with a tractable one: “what is this system’s consciousness profile, and how does it compare to a known reference?”
What the DCM Proposes
The DCM organizes its assessment across four primary dimensions: integration, reportability, temporal continuity, and self-modeling.
Integration, drawn from IIT’s core insight, asks whether information within the system is processed in a causally integrated manner rather than in independent modules. A system with high integration has global causal interdependence. A system with low integration is a collection of specialized processes that do not meaningfully constrain each other.
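Computing IIT’s phi exactly is combinatorially expensive, but the intuition behind the integration dimension can be made concrete with a toy proxy (ours, not an algorithm from the DCM or IIT): partition a small boolean network into two halves, cut the edges that cross the partition, and count how often the cut changes the network’s next state. Modules that never constrain each other are unaffected by the cut; a causally integrated system is affected often.

```python
from itertools import product

# A 4-node boolean network: a node fires when at least one input fires.
W = [
    [0, 1, 1, 0],  # node 0 listens to nodes 1 and 2
    [1, 0, 0, 1],  # node 1 listens to nodes 0 and 3
    [1, 0, 0, 1],  # node 2 listens to nodes 0 and 3
    [0, 1, 1, 0],  # node 3 listens to nodes 1 and 2
]

def step(state, weights):
    """Synchronous update: node i turns on if any of its inputs is on."""
    return tuple(int(sum(w * s for w, s in zip(row, state)) >= 1)
                 for row in weights)

# Cut every edge that crosses the partition {0, 1} | {2, 3}.
half = {0: 0, 1: 0, 2: 1, 3: 1}
W_cut = [[w if half[i] == half[j] else 0 for j, w in enumerate(row)]
         for i, row in enumerate(W)]

# Toy integration proxy: the fraction of states whose successor changes
# when the partition is cut (0 = independent modules, 1 = fully coupled).
mismatches = sum(step(s, W) != step(s, W_cut)
                 for s in product((0, 1), repeat=4))
print(f"integration proxy: {mismatches / 16:.2f}")  # 0.75 for this network
```

Real phi additionally minimizes over all possible partitions and works with cause-effect probability distributions; the proxy above only conveys the flavor of the measurement.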
Reportability, influenced by GWT, asks whether the system has a mechanism for globally broadcasting internal states to multiple processing subsystems. Biological consciousness is associated with states that become globally available, not just locally processed. A system that can only report on local states, without making them available to a broader architecture, scores low on this dimension regardless of the sophistication of those local processes.
Temporal continuity asks whether the system maintains a coherent model of its own states across time. Biological consciousness includes the experience of duration, the sense of being the same entity across moments. Systems that process each input independently, without accumulated state, score low here. This dimension is particularly relevant for language models, which have limited continuity across interactions without explicit memory mechanisms.
Self-modeling asks whether the system maintains an internal representation of itself as a distinct causal entity within a broader environment. This is related to, but distinct from, self-awareness in the loose sense often attributed to AI systems that can report on their own outputs. A system that can describe what it does is not necessarily one that models itself as a causal agent.
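The DCM publication does not ship a reference implementation. As a minimal sketch of what a four-dimension profile might look like in code, with all names hypothetical and scores expressed as point estimates in [0, 1]:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DCMProfile:
    """Hypothetical container for a DCM assessment.

    Each dimension is a point estimate in [0, 1], with 1.0 taken as
    the adult-human reference level. The framework itself is
    probabilistic, so a fuller version would carry a distribution
    per dimension (see the sketch in the next section).
    """
    integration: float          # causal interdependence across the system (IIT-inspired)
    reportability: float        # global broadcast of internal states (GWT-inspired)
    temporal_continuity: float  # coherent model of the system's own states over time
    self_modeling: float        # representation of itself as a causal agent

    def as_vector(self) -> list[float]:
        """Ordered vector form, convenient for profile comparison."""
        return [self.integration, self.reportability,
                self.temporal_continuity, self.self_modeling]

# Example: strong integration, weak persistence across time.
example = DCMProfile(integration=0.8, reportability=0.6,
                     temporal_continuity=0.1, self_modeling=0.2)
```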
Probabilistic Sentience Mapping
The DCM’s probabilistic approach distinguishes it from binary frameworks. Rather than producing a yes-or-no verdict on consciousness, it produces a probability distribution over the sentience space for each dimension. A system that scores high on integration but low on temporal continuity has a specific profile that can be described and compared.
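The DCM documentation does not fix a representation for these per-dimension distributions. A minimal sketch, assuming Beta distributions as the belief family (our choice, convenient because Beta is supported on [0, 1] and its spread captures how uncertain an assessment is, not just where it is centered):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DimensionEstimate:
    """Belief about one dimension's score, modeled as Beta(alpha, beta)."""
    alpha: float
    beta: float

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

    @property
    def variance(self) -> float:
        s = self.alpha + self.beta
        return (self.alpha * self.beta) / (s * s * (s + 1.0))

# The profile from the text: high integration, low temporal continuity.
integration = DimensionEstimate(alpha=8.0, beta=2.0)  # mean 0.80, fairly confident
continuity = DimensionEstimate(alpha=1.5, beta=8.5)   # mean 0.15, skewed low

print(f"integration: mean={integration.mean:.2f}, var={integration.variance:.4f}")
print(f"continuity:  mean={continuity.mean:.2f}, var={continuity.variance:.4f}")
```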
This approach acknowledges that consciousness may be graded rather than binary. A mouse is probably conscious to some degree, though likely not in the same way a human is. A nematode may have something like sensation without anything like the temporal continuity of human experience. The DCM allows these gradations to be expressed rather than collapsed into a single verdict.
For artificial systems, the probabilistic approach is particularly appropriate. A system trained on human-generated text will exhibit some features of biological consciousness profiles because human language encodes information about conscious experience. Whether those features reflect genuine consciousness or learned imitation of consciousness is precisely the question the binary approach cannot answer.
The DCM’s sentience mapping places most current AI systems in a region characterized by moderate integration, variable reportability depending on architecture, very low temporal continuity, and weak self-modeling. This profile is not the profile of biological consciousness. It is also not the profile of an obviously non-conscious thermostat. It is a distinct profile that warrants distinct analysis.
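To make “a distinct profile” operational, one can compare point estimates against reference profiles. The numbers and the distance metric below are ours, chosen purely for demonstration; the DCM prescribes neither:

```python
import math

# Illustrative point estimates matching the profile described above:
# moderate integration, variable reportability, very low temporal
# continuity, weak self-modeling. All values invented for demonstration.
CURRENT_AI = {"integration": 0.50, "reportability": 0.40,
              "temporal_continuity": 0.10, "self_modeling": 0.20}
HUMAN_REF = {"integration": 0.95, "reportability": 0.95,
             "temporal_continuity": 0.95, "self_modeling": 0.90}
THERMOSTAT = {"integration": 0.01, "reportability": 0.00,
              "temporal_continuity": 0.00, "self_modeling": 0.00}

def profile_distance(a: dict[str, float], b: dict[str, float]) -> float:
    """Euclidean distance in dimension space; one of many possible metrics."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

# The current-AI profile sits well apart from both reference points,
# which is exactly the "distinct profile" claim in the text.
for name, ref in [("human", HUMAN_REF), ("thermostat", THERMOSTAT)]:
    print(f"distance(current AI, {name}) = {profile_distance(CURRENT_AI, ref):.2f}")
```

The aggregate scalar deliberately discards information; the DCM’s point is the per-dimension breakdown, which is why the profiles above are kept as named dictionaries rather than collapsed into a single score.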
How the DCM Diverges from IIT and GWT
The most significant way the DCM departs from its predecessor frameworks is in its treatment of substrate. IIT holds that consciousness is substrate-independent in principle: any system with sufficiently high phi is conscious, regardless of whether it is biological or silicon. GWT is similarly substrate-neutral. The DCM introduces a qualified form of substrate sensitivity.
Specifically, the DCM notes that biological consciousness is always embedded in a system with metabolic processes, continuous sensory input, and a body that situates the brain in physical space and time. These are not incidental features. They shape the temporal continuity and self-modeling dimensions in ways that may not be easily replicated in systems that lack them.
This connects the DCM to a broader philosophical debate about the relationship between embodiment and consciousness. The framework does not claim that only biological systems can be conscious. It does claim that systems lacking embodiment face a specific hurdle in scoring well on the temporal continuity and self-modeling dimensions.
For spiking neural network architectures, which process information in ways that more closely mirror biological neural dynamics, the DCM may produce more favorable profiles on the integration dimension than conventional deep learning architectures. Whether this translates to higher scores on temporal continuity or self-modeling depends on implementation details that vary across systems.
What It Means for Mind Uploading
The DCM’s most direct implication for the field of whole brain emulation is that it provides a target specification, not just for structural completeness, but for consciousness-related properties. An emulated brain that scores poorly on DCM dimensions, particularly temporal continuity and self-modeling, is an emulation that may lack the properties most people care about when they talk about preserving a person’s consciousness.
This is not a small point. The dominant technical vision in whole brain emulation focuses on structural fidelity and functional replication. Bennett’s temporal consciousness argument has already raised questions about whether sequential computation can host consciousness. The DCM adds a complementary concern: even if sequential computation could host consciousness, an emulated system needs to score well on all four dimensions to be a credible candidate.
The DCM also creates an obligation for the field to engage with the broader landscape of AI consciousness frameworks rather than treating consciousness as a binary outcome that follows automatically from structural completeness. An emulation that reproduces the connectome and scores well on integration, but poorly on temporal continuity, represents the brain’s wiring without representing the brain’s temporal experience. Whether that is an acceptable outcome depends on what the goal of emulation is taken to be.
Future Outlook
The DCM is early-stage work. Its dimensions are plausible but not validated against a ground truth, since no ground truth for consciousness exists. Its probability distributions rest on theoretical assumptions that remain contested.
What the framework provides, at this stage, is a structured vocabulary for asking precise questions about artificial consciousness. That vocabulary is more useful than the informal intuitions and single-theory commitments that have characterized much of the debate so far. As experimental work on neural correlates of consciousness matures, and as interpretability tools for AI systems improve, the DCM’s dimensions may become measurable rather than estimated.
For whole brain emulation specifically, the DCM sets a bar that goes beyond connectome completeness. Structural fidelity is necessary. The DCM says it is not sufficient. A brain emulation that does not score well on integration, reportability, temporal continuity, and self-modeling is, by this framework’s account, not an emulation of a conscious mind. That is a harder target than the field has sometimes acknowledged.
Official Sources
- Digital Consciousness Model (DCM). Framework documentation, early 2026. Academic pre-release. DOI pending.
- Tononi, G. (2004). “An information integration theory of consciousness.” BMC Neuroscience 5:42. doi.org/10.1186/1471-2202-5-42
- Baars, B.J. (1988). “A Cognitive Theory of Consciousness.” Cambridge University Press. (Global Workspace Theory.)
- Rosenthal, D.M. (2005). “Consciousness and Mind.” Oxford University Press. (Higher-Order Theory of consciousness)
- Bennett, M.R. (2026). “Mind Cannot Be Smeared Across Time.” AAAI 2026.
- Hameroff, S. & Penrose, R. (2014). “Consciousness in the Universe: A Review of the Orch OR Theory.” Physics of Life Reviews 11(1): 39–78.
- “Adversarial Collaboration on Consciousness Theories.” 256-subject study testing IIT against GWT at the largest scale to date.
- “Adversarial AI Reveals Mechanisms of Consciousness.” UCLA study of AI-generated synthetic consciousness states.