256 Subjects, 7 Years: The Largest Consciousness Experiment Ever Run


For decades, two theories have dominated scientific debate about the neural basis of consciousness. Integrated Information Theory (IIT), developed by Giulio Tononi, proposes that consciousness is identical to integrated information — a quantity (phi) that measures the irreducibility of causal information across a system. Global Workspace Theory (GWT), developed by Bernard Baars and extended by Stanislas Dehaene, proposes that consciousness corresponds to information being broadcast from a “global workspace” in prefrontal cortex to specialized modules across the brain, making it widely available for report and cognitive access.
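The notion of "integration" can be made concrete with a toy calculation. Full phi is defined over a system's entire cause–effect structure and is notoriously expensive to compute; the sketch below instead uses total correlation (multi-information) as a crude, illustrative stand-in for integration. This is a simplification for intuition only, not the IIT calculus.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def total_correlation(joint):
    """Multi-information: sum of marginal entropies minus joint entropy.
    A crude proxy for 'integration' -- NOT the full IIT phi calculus."""
    joint = joint / joint.sum()
    marginals = []
    for axis in range(joint.ndim):
        other = tuple(a for a in range(joint.ndim) if a != axis)
        marginals.append(joint.sum(axis=other))
    return sum(entropy(m) for m in marginals) - entropy(joint.ravel())

# Two perfectly correlated binary units: maximal integration (1 bit)
correlated = np.array([[0.5, 0.0], [0.0, 0.5]])
print(total_correlation(correlated))   # 1.0

# Two independent binary units: zero integration
independent = np.array([[0.25, 0.25], [0.25, 0.25]])
print(total_correlation(independent))  # 0.0
```

The intuition carries over: a system whose parts carry information jointly that no part carries alone scores high, while a collection of independent parts scores zero.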

Both theories have generated extensive research programs, thousands of papers, and genuine predictive disagreements about which brain regions, which neural dynamics, and which experimental manipulations should matter for conscious experience. Both have also accumulated anomalous findings that their proponents have struggled to fully accommodate.

In 2019, a consortium of consciousness researchers organized by the Allen Institute launched what was explicitly framed as an adversarial collaboration: a pre-registered, multi-site study designed to test both theories simultaneously using experiments that made contradictory predictions. Seven years and 256 subjects later, the results were published in 2026 — the largest and most statistically powerful empirical study of consciousness theories ever conducted.

The Adversarial Collaboration Design

The adversarial collaboration design is specifically built to prevent the most common failure mode in consciousness research: post-hoc reinterpretation. In standard science, when experimental results fail to fit a theory, theorists revise the theory to accommodate the result. In a field where theories are highly flexible, this process can continue indefinitely, producing theories that account for all existing data while making no falsifiable predictions.

The adversarial collaboration format addresses this by requiring both theory camps to agree in advance on:

  1. Which experiments will distinguish their predictions
  2. Which outcomes will count as evidence for which theory
  3. What statistical thresholds will be treated as decisive

Both the IIT camp (led by Tononi and Christof Koch) and the GWT camp (led by Dehaene and colleagues) participated in designing the protocol. The experiments were pre-registered at the Open Science Framework before data collection began.

The study used a combination of fMRI, EEG, MEG, and intracranial recording across six sites on three continents. The 256 subjects underwent multiple experimental paradigms, including masked stimulus presentation (examining visual awareness without motor report), contrastive analysis of perceptual stimuli, and perturbational complexity measurements using transcranial magnetic stimulation.

The Results and What They Challenged

The complete dataset challenged both theories in specific ways. The Allen Institute’s published summary documented confirmations and refutations of each theory’s predictions.

For IIT, the study examined predictions about the posterior cortex as the primary seat of consciousness (IIT predicts high-phi regions should correspond to conscious experience). The data partially supported posterior cortex involvement but found substantial activity in prefrontal regions during conscious perception — a result that IIT in its standard form does not predict and has difficulty accommodating.

For GWT, the study examined predictions about prefrontal cortex as the global workspace that “ignites” to broadcast conscious content. The data found that prefrontal activity lagged posterior cortex activity by hundreds of milliseconds during many conscious perceptions — suggesting that prefrontal involvement may be related to cognitive access and report rather than to consciousness itself. This matches a distinction drawn by many consciousness researchers between “phenomenal consciousness” (raw subjective experience) and “access consciousness” (information available for report and cognition).
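The kind of latency analysis behind this finding can be sketched on synthetic data. The example below simulates a "prefrontal" signal that trails a "posterior" signal by 200 ms and recovers the lag via cross-correlation; the sampling rate and the lag value are arbitrary choices for illustration, not figures from the study.

```python
import numpy as np

# Synthetic example, not real data: broadband "activity" with a built-in lag.
rng = np.random.default_rng(0)
fs = 250                                      # assumed sampling rate, Hz
posterior = rng.standard_normal(2 * fs)       # 2 s of simulated activity
lag_samples = int(0.2 * fs)                   # 200 ms delay
prefrontal = np.roll(posterior, lag_samples)  # prefrontal trails posterior

def estimate_lag_ms(leading, trailing, fs):
    """Lag in ms at which `trailing` best matches `leading`,
    found as the peak of the full cross-correlation."""
    a = leading - leading.mean()
    b = trailing - trailing.mean()
    xc = np.correlate(b, a, mode="full")
    return (int(np.argmax(xc)) - (len(a) - 1)) * 1000 / fs

print(estimate_lag_ms(posterior, prefrontal, fs))  # ~200.0 ms
```

A positive estimated lag means the second signal follows the first, which is the pattern the study reported for prefrontal relative to posterior activity.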

The key finding — summarized in the Allen Institute’s press release — is that the empirical data does not cleanly support either theory as currently formulated. Both theories predicted specific patterns that were partially confirmed and partially refuted. Neither emerged as the clear victor.

Why This Matters for Digital Consciousness

For whole brain emulation and digital consciousness research, the adversarial collaboration result has a specific implication: the theoretical foundation for digital consciousness — the specification of what properties a system must have to be conscious — remains unsettled.

The Digital Consciousness Model (DCM) 2026 framework proposed probabilistic benchmarking of AI consciousness across dimensions including integration, reportability, temporal continuity, and self-modeling. These dimensions were drawn partly from IIT and GWT. If neither theory fully accounts for biological consciousness data, then the dimensions they supply for assessing digital consciousness are correspondingly uncertain.

This does not make the DCM framework useless. It means the framework should be understood as provisional — built on the best available theories of consciousness, which the adversarial collaboration has now shown to be incomplete. The appropriate response is to use the framework while remaining explicitly uncertain about its theoretical grounding.
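What "probabilistic benchmarking across dimensions" might look like mechanically can be sketched as follows. The dimension names come from the text above; the equal weights and the combination rule (weighted log-odds pooling) are assumptions of this sketch, not part of the DCM framework itself.

```python
import math

# Dimension names from the DCM discussion; the scores, equal weights, and
# pooling rule below are purely illustrative assumptions.
scores = {
    "integration": 0.6,
    "reportability": 0.7,
    "temporal_continuity": 0.5,
    "self_modeling": 0.4,
}
weights = {k: 0.25 for k in scores}

def pooled_log_odds(scores, weights):
    """Combine per-dimension probabilities by weighted log-odds pooling."""
    z = sum(weights[k] * math.log(p / (1 - p)) for k, p in scores.items())
    return 1 / (1 + math.exp(-z))

print(round(pooled_log_odds(scores, weights), 3))  # -> 0.553
```

The honest reading of the adversarial collaboration is that both the individual scores and the weights in any such scheme carry substantial theoretical uncertainty, so the pooled number should be treated as a rough heuristic rather than a measurement.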

The adversarial AI approach from Monti’s UCLA team — which used a generative model to identify mechanistic structure in consciousness data — provides a complementary empirical direction. Rather than testing top-down theories against data, it builds up from data toward theory. The two approaches are methodologically opposite and potentially complementary.

The Perturbational Complexity Index Under Scrutiny

One specific finding from the 256-subject study that has received significant attention is the examination of the perturbational complexity index (PCI) — a measure developed by Casali and colleagues in 2013 that uses TMS-EEG to quantify the brain’s response to perturbation as an index of consciousness level. PCI has been widely used clinically to assess consciousness in patients with disorders of consciousness.
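At its computational core, PCI rests on Lempel–Ziv complexity: the TMS-evoked EEG response is binarized into a spatiotemporal matrix, its LZ complexity is counted, and the count is normalized. The sketch below shows only that complexity-counting core on toy binary strings; the binarization and normalization steps of the full Casali et al. pipeline are omitted.

```python
import random

def lz76_complexity(s: str) -> int:
    """Number of distinct phrases in the Lempel-Ziv (1976) parsing of s.
    This is the counting core of PCI, without binarization or normalization."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the current phrase while it already occurs in the preceding text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# A stereotyped (low-complexity) response vs. a varied (high-complexity) one
regular = "01" * 16
random.seed(0)
varied = "".join(random.choice("01") for _ in range(32))
print(lz76_complexity(regular), lz76_complexity(varied))
```

The clinical intuition follows directly: a brain whose response to perturbation is stereotyped and repetitive compresses well (low complexity), while a conscious brain's differentiated, spatially varied response does not.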

The adversarial collaboration study included PCI measurements and found that PCI tracked consciousness level reliably across the main experimental manipulations. This is a positive finding for clinical consciousness assessment. But the study also found that PCI does not cleanly discriminate between the mechanisms predicted by IIT and GWT — both theories can accommodate high and low PCI scores under different modeling assumptions.

The clinical value of PCI appears robust. Its theoretical interpretation — what exactly it is measuring about the neural basis of consciousness — is less settled than previously assumed.

What the 256-Subject Study Leaves Open

The adversarial collaboration is unprecedented in its scope and rigor. It is also limited in the ways that all empirical studies are limited.

The study examined conscious perception of visual stimuli in healthy, cooperative subjects. It did not directly address dreams, anesthesia, disorders of consciousness, or non-human animal consciousness. Its conclusions apply most directly to visual awareness in normal waking states.

The study also did not test all versions of IIT and GWT. Both theories have evolved significantly over the past decade. The versions tested were the consensus formulations that both camps agreed to at pre-registration — which means the results may not fully apply to updated versions of either theory.

For digital consciousness, the most important open question is whether the patterns identified in the 256-subject study apply to artificial systems at all. The study’s results are about biological neural dynamics. Whether the mechanisms identified as necessary for consciousness in biological brains — posterior cortical integration, prefrontal global broadcast, PCI correlates — have analogs in artificial neural networks is a question the study does not address.

Official Sources

  • Allen Institute for Brain Science — Landmark Experiment Sheds New Light on the Origins of Consciousness. https://alleninstitute.org/news/landmark-experiment-sheds-new-light-on-the-origins-of-consciousness/
  • ScienceDaily (2026) — Scientists Racing to Define Consciousness. https://www.sciencedaily.com/releases/2026/01/260131084626.htm
  • Tononi, G. (2008) — Consciousness as Integrated Information: A Provisional Manifesto. Biological Bulletin, 215(3):216–242. DOI: 10.2307/25470707
  • Dehaene, S., Changeux, J.P., & Naccache, L. (2011) — Experimental and theoretical approaches to conscious processing. Neuron, 70(2):200–227. DOI: 10.1016/j.neuron.2011.03.018
  • Casali et al. (2013) — A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine, 5(198):198ra105. DOI: 10.1126/scitranslmed.3006294
  • Open Science Framework — Pre-registration for the IIT-GWT adversarial collaboration. https://osf.io/