Building Brains on a Computer: The Three Non-Negotiable Requirements for 2026
Most discussions of whole brain emulation get stuck at the hardware question. How much compute do you need? How many petabytes of storage? When will scanning resolution be sufficient? These are legitimate engineering problems. They are also not the core problem.
M. Schons’s 2026 essay, published through Asimov Press (DOI: 10.62211/92ye-82wp), makes a sharper argument: the field lacks agreement on what a brain emulation must actually demonstrate to count as successful. Without that baseline, research programs cannot be evaluated against a common standard, and progress claims remain untethered from any measurable outcome.
The essay proposes three non-negotiable capabilities. Schons calls them structural fidelity, functional replication, and dynamic adaptability. All three must be present simultaneously. Partial achievement does not constitute emulation. It constitutes a simulation of a subset of neural phenomena, which is useful but categorically different.
Technology Readiness Level: TRL 1–2 (conceptual framework; no current system meets all three requirements at anything approaching human brain scale).
The Essay’s Central Argument
The essay opens by distinguishing emulation from simulation. A simulation models selected behaviors of a system. A flight simulator produces realistic cockpit responses without containing an actual turbine. A brain simulation, by Schons’s account, could replicate measurable output patterns, such as spike timing statistics or regional activation sequences, without representing the underlying causal structure that generates them.
Emulation, as Schons defines it, requires the causal structure to be present. The emulated system must respond to novel inputs the way the original would, not the way a trained statistical model predicts the original would. This is a harder bar to clear, and it has implications for what kind of data you need and what kind of architecture can hold it.
The Sandberg/Bostrom WBE roadmap articulated something similar in 2008, but its framing was primarily about resolution levels and scanning technology. Schons’s contribution is to shift focus from data acquisition to capability demonstration. You can acquire perfect data and still fail to build an emulation if the computational framework cannot do what brains actually do.
Requirement 1: Structural Fidelity
The first requirement is the one closest to current research trajectories. Structural fidelity means representing the connectivity architecture of a brain at sufficient resolution that the causal relationships between neural elements are preserved.
This is not simply a question of storing a connectome. A static wiring diagram, even a complete one, does not specify the dynamic properties of individual synapses, the chemical environment of local circuits, or the neuromodulatory context that shapes how the same structure behaves under different conditions. Structural fidelity, as Schons defines it, requires all of this to be captured, not merely the axon-to-dendrite topology.
The current state of the field relative to this requirement is mixed. For small organisms, near-complete connectivity data exists: the Drosophila and C. elegans connectomes are structural records at this level of detail. For mammalian cortex, the Allen Institute’s 9-million-neuron mouse cortex simulation is built on structural data at large scale, but with known gaps in synaptic dynamics and neuromodulatory detail. For human brain tissue, nothing approaching complete structural fidelity exists at the level Schons requires.
The essay is careful to note that structural fidelity does not require perfect data. It requires data above a threshold at which the relevant causal relationships are determinable. Schons does not specify this threshold numerically, which is one of the essay’s genuine weaknesses. The argument is easier to accept conceptually than to operationalize.
Requirement 2: Functional Replication
Structural fidelity is necessary but not sufficient. The second requirement is functional replication: the emulated system must produce outputs that match the functional behavior of the original across a representative range of inputs and tasks.
This is where current large-scale brain simulations most clearly fall short. The Allen Institute mouse cortex simulation operates at 9 million neurons and 26 billion synapses. It reproduces some statistical properties of cortical activity. It does not replicate the functional repertoire of a mouse, meaning it cannot perform the behavioral tasks a mouse performs, and it does not respond to sensory inputs the way a mouse’s cortex does in vivo.
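To make concrete what “statistical properties of cortical activity” means here, one common summary statistic is the coefficient of variation of interspike intervals (ISI CV), which distinguishes irregular, Poisson-like firing (CV near 1) from clock-like firing (CV near 0). The choice of statistic is illustrative on my part, not taken from the essay; a minimal sketch:

```python
import math

def isi_cv(spike_times):
    """Coefficient of variation of interspike intervals (ISIs).

    CV ~ 1 indicates irregular, Poisson-like firing; CV ~ 0 indicates
    clock-like regularity. Matching statistics like this between a model
    and in vivo recordings is the kind of agreement current large-scale
    simulations achieve; it is a much weaker condition than functional
    replication in Schons's sense.
    """
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    if len(isis) < 2:
        raise ValueError("need at least 3 spikes to estimate CV")
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean

print(isi_cv([0, 10, 20, 30, 40]))  # perfectly regular train -> 0.0
```

A simulation can match the distribution of such statistics across a population while still failing every behavioral task the animal performs, which is exactly the gap the functional replication requirement targets.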
The gap between structural completeness and functional replication is not trivial. It reflects the fact that neural function depends on continuous interaction with a body and environment. A connectome in isolation is not a functioning brain. It is a wiring diagram that only produces interesting outputs when embedded in the right context.
Schons’s requirement pushes against the common conflation of “running a brain simulation” with “emulating a brain.” Running a simulation on expensive hardware proves computational infrastructure. Replicating function proves something about the relationship between structure and behavior.
For the field’s most ambitious projects, this distinction has practical implications. A system that passes the functional replication test for a specific cognitive task, say, pattern recognition or associative memory retrieval, while failing across others is a functional simulator of that task, not an emulation. Schons is not dismissive of partial results. He is explicit that they are useful and necessary. But they do not constitute the destination.
Requirement 3: Dynamic Adaptability
The third requirement is the least discussed in current WBE literature and, arguably, the hardest to achieve. Dynamic adaptability means the emulated system must be capable of learning and updating its structure in response to experience, the same way biological brains continuously modify synaptic weights, form new connections, and prune unused ones.
A static emulation, even one with perfect structural fidelity and functional replication at the moment of scanning, would freeze the brain at a point in time. It would not be capable of learning new information, forming new memories, or adapting to changed circumstances. Whether a frozen emulation counts as a “copy” of a person is a philosophical question. Whether it is an emulation of a brain, in any functionally meaningful sense, is the question Schons is asking, and his answer is no.
Spiking frameworks built on models such as Izhikevich’s can, with added plasticity rules, simulate synaptic change at the level of individual neurons and small networks. The challenge is scaling this to whole-brain systems while keeping the structural fidelity constraints satisfied simultaneously. Plasticity mechanisms interact with structural data in ways that are computationally expensive to represent and not fully understood at the systems level.
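For reference, the Izhikevich (2003) model reduces spiking dynamics to two coupled equations with a reset rule. A minimal forward-Euler sketch, using the regular-spiking parameters from the original paper (the function name and defaults here are mine):

```python
def izhikevich(I, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron under constant input current I.

    Returns spike times in ms over a window of T ms. The defaults
    (a=0.02, b=0.2, c=-65, d=8) are Izhikevich's regular-spiking regime.
    """
    v, u = c, b * c          # membrane potential (mV) and recovery variable
    spikes = []
    steps = int(T / dt)
    for step in range(steps):
        # The two coupled ODEs from Izhikevich (2003), Euler-integrated:
        #   v' = 0.04 v^2 + 5v + 140 - u + I
        #   u' = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: record, reset v, bump recovery
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich(I=10.0)) > 0)  # constant drive produces spiking
```

Note what this model does not contain: no plasticity rule, no neuromodulation, no structural change. Adding those, per neuron, across billions of neurons, while honoring measured connectivity, is the scaling problem the paragraph above describes.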
This requirement is also where the essay connects most directly to the mind uploading debate. Critics of mind uploading often raise the point that a copy is not continuous with the original precisely because the copy cannot continue learning in the way the original did. Schons does not engage with the identity continuity question directly. He is making a narrower point: dynamic adaptability is a property brains have, and any system that lacks it is not emulating a brain, whatever else it may be doing.
Where the Field Stands Today
Against this three-part checklist, no current system scores above TRL 2 on all three requirements simultaneously. Structural fidelity is advancing, particularly for small volumes of mammalian tissue following work like the Song et al. ultrastructural preservation results. Functional replication is achieved only for narrowly defined tasks in reduced preparations. Dynamic adaptability remains a research challenge at any scale above single-neuron models.
The 2025-2026 state of brain emulation reflects exactly this picture. Individual components are maturing. Integration of all three requirements into a unified system is not yet a planned engineering project. It is still in the conceptual phase.
Schons’s essay is most useful as a diagnostic tool. Applied to any specific WBE project or proposal, the three requirements quickly reveal which capabilities are present, which are absent, and which are being glossed over with vague language about “sufficiently detailed models.” The essay does not provide a roadmap to closing the gaps. It clarifies what the gaps are.
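The diagnostic use of the checklist amounts to a strict conjunction: a project is an emulation only if all three requirements hold at once. A toy sketch of that reading (the field names and scorecard structure are my illustration, not the essay’s):

```python
from dataclasses import dataclass

@dataclass
class EmulationAssessment:
    """Toy scorecard for Schons's three requirements (names illustrative)."""
    structural_fidelity: bool     # causal connectivity plus synaptic and
                                  # neuromodulatory context preserved
    functional_replication: bool  # matches the original's behavior across
                                  # a representative range of inputs/tasks
    dynamic_adaptability: bool    # learns and rewires with experience

    def is_emulation(self) -> bool:
        # Partial achievement is a simulation of a subset of neural
        # phenomena, not an emulation: all three must hold simultaneously.
        return (self.structural_fidelity
                and self.functional_replication
                and self.dynamic_adaptability)

# A system that only reproduces activity statistics fails the test:
print(EmulationAssessment(True, False, False).is_emulation())  # prints False
```

The point of the conjunction is precisely that vague language about “sufficiently detailed models” cannot smuggle a missing requirement past the checklist.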
Future Outlook
The value of Schons’s framework is not that it tells the field something it did not know. Most researchers in this area are aware that structural, functional, and adaptive requirements all matter. The value is that it puts these requirements in a form that cannot be quietly dropped from project descriptions.
The history of ambitious computational neuroscience projects includes several cases where the original scope was quietly narrowed as timelines extended. Projects initially framed as brain emulation became brain simulation, then became large-scale neural modeling, then became high-resolution connectome mapping. Each pivot is scientifically legitimate. None of the resulting outputs are emulations by Schons’s criteria.
Whether the field needs the full emulation goal, or whether narrower targets are more productive in the near term, is a question the essay does not resolve. What it does resolve is the definitional question: until all three requirements are met, at scale, simultaneously, the claim of brain emulation is not substantiated. That is a useful boundary to have stated clearly, even if the path to it remains a subject of genuine debate.
Official Sources
- Schons, M. (2026). “Building Brains on a Computer: The Core Requirements for Whole Brain Emulation.” Asimov Press. DOI: 10.62211/92ye-82wp
- Sandberg, A. & Bostrom, N. (2008). “Whole Brain Emulation: A Roadmap.” Future of Humanity Institute Technical Report. fhi.ox.ac.uk
- Izhikevich, E.M. (2003). “Simple Model of Spiking Neurons.” IEEE Transactions on Neural Networks 14(6): 1569–1572.
- Allen Institute for Brain Science (2025). Mouse cortex simulation on Fugaku supercomputer, SC25 presentation.
- Song et al. (2026). “Near-perfect ultrastructural preservation of mammalian brain tissue within a 14-minute post-mortem window.” bioRxiv. DOI: 10.64898/2026.03.04.709724.