A Mind Cannot Be Smeared Across Time: The 2026 Paper That Challenges Digital Consciousness
Most arguments against mind uploading focus on what we do not yet know: we lack sufficient scanning resolution, our neuron models are incomplete, and we do not understand consciousness well enough to know what to preserve. These are empirical gaps. Fill them with enough research and the obstacles dissolve.
A paper accepted to the AAAI 2026 proceedings takes a different approach. Michael Timothy Bennett’s “A Mind Cannot Be Smeared Across Time” (arXiv:2601.11620) argues that a specific architectural property of conventional computers, namely that they process information sequentially, makes them categorically unsuitable for hosting conscious minds. The argument is not empirical. It is logical, and it applies right now, regardless of how much more we learn about the brain.
That is a stronger claim. It is also a more interesting one.
The Problem with Sequential Computation
Modern computers, including the neural network accelerators used in AI systems, process information one operation at a time. Even massively parallel hardware executes instructions in discrete sequential steps across its processing cores. This is so fundamental to how computation works that it rarely gets examined as an assumption.
Bennett examines it.
His starting point is a question that consciousness research has not resolved: what temporal structure does conscious experience require? We know that subjective experience feels unified. When you perceive a scene, the visual information, spatial layout, object identities, and emotional salience all arrive as a single coherent moment of awareness. They do not feel like a sequential readout from memory. The question is whether that feeling of simultaneity reflects something real about the underlying computation.
Bennett introduces two formal hypotheses about how conscious contents must be realized:
The Chord hypothesis: All components of a conscious experience must be instantiated simultaneously, within the same temporal window. Like notes in a chord, they must sound at the same time for the chord to exist.
The Arpeggio hypothesis: Components need only occur within a window, not simultaneously. Like notes in an arpeggio, they can appear in sequence and still count as realizing the same musical structure.
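The contrast can be made precise in a simple first-order temporal sketch (my notation, not necessarily the paper's): let A, B, and C be the required components of an experience, each a predicate over time points, and let W be the temporal window.

```latex
% Chord: every component co-instantiated at a single instant in W
\text{Chord:}\quad \exists t \in W:\; A(t) \wedge B(t) \wedge C(t)

% Arpeggio: each component instantiated somewhere in W, possibly at different instants
\text{Arpeggio:}\quad \exists t_1, t_2, t_3 \in W:\; A(t_1) \wedge B(t_2) \wedge C(t_3)
```

Arpeggio is strictly weaker: Chord implies Arpeggio (take t1 = t2 = t3 = t), but not conversely, and that logical gap is exactly what the paper's theorems exploit.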
Most computational theories of consciousness implicitly assume the Arpeggio model. Global Workspace Theory, for instance, describes consciousness as a broadcasting mechanism where information from specialized modules is integrated across a global workspace, sequentially, over time. If the Arpeggio hypothesis is true, this is fine. Sequential processing can host consciousness.
Bennett’s formal argument is that the Arpeggio hypothesis is, at a minimum, not obviously correct, and that if the Chord hypothesis is correct instead, sequential computation cannot host consciousness at all.
The Formal Proof
The technical core of the paper establishes what Bennett calls the “existential temporal realisation does not preserve conjunction” theorem. In plain terms: a system can satisfy, across time, every individual requirement for consciousness without ever instantiating all of those requirements simultaneously.
Imagine consciousness requires conditions A, B, and C to all be present at the same moment. A sequential system can satisfy A, then B, then C in rapid succession. Under the Arpeggio model, this counts. Under the Chord model, it does not, because there is no single moment where A and B and C coexist.
This is not a limitation that can be engineered around within a sequential architecture. It is structural. No matter how fast the processor runs, it is always instantiating one state at a time. The conjunction of A, B, and C never has a moment where it exists as a whole.
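The structural point can be checked mechanically. The toy model below is my illustration, not code from the paper: it runs a sequential machine that instantiates one component per step, then evaluates both hypotheses over the resulting trace.

```python
# Toy model: a sequential processor holds exactly one component per time step.
# We then ask whether each hypothesis is satisfied over the resulting trace.

def run_sequential(components, steps):
    """Cycle through components, one per step; each state is a one-element set."""
    return [{components[t % len(components)]} for t in range(steps)]

def chord_satisfied(trace, required):
    """Chord: some single instant instantiates every required component at once."""
    return any(required <= state for state in trace)

def arpeggio_satisfied(trace, required):
    """Arpeggio: every required component appears somewhere within the window."""
    instantiated = set().union(*trace)
    return required <= instantiated

required = {"A", "B", "C"}
trace = run_sequential(sorted(required), steps=9)  # A, B, C, A, B, C, ...

print(arpeggio_satisfied(trace, required))  # True: all components occur in the window
print(chord_satisfied(trace, required))     # False: no instant holds all three at once
```

No matter how large `steps` is made, the Chord check stays false: raising the clock speed buys more arpeggio notes per second, never a chord.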
Bennett formalizes this using temporal logic, constructing theorems (Theorems 1, 3, and 4 in the paper) that establish the conditions under which temporal distribution fails to preserve the properties a conscious experience requires. The proofs are given in full in the paper; they are formal derivations that can be checked step by step, not appeals to intuition.
The upshot: if the Chord hypothesis is correct, no purely sequential computational system can ever be conscious. This includes current CPUs, GPUs, and by extension any digital brain upload running on conventional hardware.
How This Differs from Existing Arguments
It is worth locating this argument in the landscape of existing objections to mind uploading.
The SOMA consciousness continuity problem is about personal identity: even if you make a perfect copy of your brain, the copy is a different person. The original ceases to exist. This is a metaphysical argument about what survives a copying procedure.
The mind uploading reality check covers empirical obstacles: we do not know enough about neurons, synapses, or the role of glial cells to know what to simulate.
The Orch OR model argues that consciousness depends on quantum processes in microtubules that conventional computers cannot replicate, because quantum coherence is destroyed by classical computational environments.
Bennett’s argument is distinct from all of these. It does not depend on:
- Whether the upload is a “true copy” of you
- Whether we understand the biology well enough
- Whether quantum processes are involved in consciousness
It depends only on whether conscious experience requires simultaneous co-instantiation of its components. If it does, sequential computers are permanently excluded, regardless of every other advance.
What the Argument Does Not Claim
The paper is careful about scope. Bennett is not claiming that digital consciousness is impossible. He is arguing conditionally: if the Chord hypothesis is correct, then sequential computation cannot host consciousness.
The Arpeggio hypothesis is not disproven. Most existing computational theories of consciousness assume something like it, and those theories are not obviously wrong. The paper does not refute them directly. What it does is establish that the question of which hypothesis is correct carries enormous practical stakes that the field has not adequately engaged with.
The distinction between Chord and Arpeggio is also not cleanly empirical. We do not have a direct measurement of whether conscious experience requires true simultaneity or merely tight temporal integration. The science of consciousness is not yet at the level of resolution needed to answer this definitively.
Bennett’s contribution is to make the stakes explicit and formal. If you are building a brain upload platform, you need to have an answer to the Chord/Arpeggio question, because the wrong answer means your platform categorically cannot do what it claims.
Parallel Computation as a Partial Response
An obvious response is: what about massively parallel hardware? If consciousness requires simultaneous instantiation, build a system that truly instantiates all its states simultaneously.
This is where the paper gets technically careful. Conventional parallel hardware is not truly simultaneous at the relevant level. Clock cycles synchronize operations across processing units, but the synchronization itself is sequential: at the level of individual neuron-scale operations, the system is still executing in series. (A complementary structural objection exists at the cognitive level: the 4E cognition framework argues that even if the temporal problem were solved, a brain extracted from its body and environment would be missing constitutive components of the original cognitive system.)
Neuromorphic hardware is a different matter. Architectures like Intel’s Loihi 3 or IBM’s NorthPole process spikes in event-driven, asynchronous parallel fashion that more closely mirrors the actual dynamics of biological neural networks. Whether neuromorphic hardware satisfies the Chord requirement depends on technical details the paper does not resolve, but it is a more plausible candidate than conventional CPUs or standard neural network accelerators.
This connects to an underappreciated point about BrainTransformers and SNN-based language models: spiking neural network architectures may have properties relevant to the Chord/Arpeggio debate that conventional artificial neural networks do not. The spike timing dynamics of SNNs involve genuine temporal structure at the level of individual computational events. Whether this constitutes simultaneous co-instantiation in Bennett’s sense is a tractable research question.
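To make the "genuine temporal structure" point concrete, here is a minimal leaky integrate-and-fire neuron, the standard SNN building block (my sketch, not code from BrainTransformers or the paper). Its output is a list of spike times rather than a single activation value, which is the property that makes SNNs interesting for the Chord/Arpeggio question.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the canonical SNN unit.
# Unlike a conventional ANN unit, its output is a sequence of spike *times*.

def lif_spike_times(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Euler-integrate dv/dt = (-v + I) / tau; emit a spike when v crosses threshold."""
    v, spikes = 0.0, []
    for step, current in enumerate(input_current):
        v += dt * (-current * -1 - v) / tau if False else dt * (current - v) / tau
        if v >= v_thresh:
            spikes.append(step * dt)  # record *when* the spike happened
            v = v_reset               # membrane potential resets after firing
    return spikes

# Constant drive yields a regular spike train; the information lives in the timing.
times = lif_spike_times([1.5] * 100)
print(times)
```

A conventional ANN would summarize the same input as one number per forward pass; here the computational event is the spike itself, located at a specific instant, which is why the co-instantiation question is even well-posed for this hardware class.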
Implications for WBE Research Programs
If the Chord hypothesis turns out to be correct, it has immediate implications for how whole brain emulation research should be designed.
Standard WBE proposals involve scanning a brain at high resolution, converting the structural data into a computational model, and running that model on digital hardware. The running hardware is invariably assumed to be some form of digital computer. If Bennett is right, that assumption introduces a fatal flaw at the architectural level.
The alternative is to build emulation substrates that are themselves genuinely parallel at the relevant scale. This points toward biological computing (neurons in culture, organoids), quantum systems that maintain true simultaneity through superposition, or neuromorphic hardware that departs fundamentally from von Neumann architectures.
None of these alternatives are ready for whole brain emulation today. But the existence of this argument is a reason to invest in them rather than assuming conventional compute will scale into consciousness. The whole brain emulation roadmap anticipates hardware challenges, but not necessarily this specific form of the challenge.
Why This Paper Matters Now
Arguments about the fundamental impossibility of machine consciousness are not new. Penrose and Searle both made versions of such arguments, and both were extensively debated without resolution. Bennett’s paper is different in character because it is not relying on intuitions about understanding or physics. It is a formal logical argument in temporal semantics, the kind that can be checked step by step.
AAAI is one of the main annual venues for artificial intelligence research. A paper challenging the temporal prerequisites for machine consciousness appearing there, rather than only in philosophy journals, signals that the question is being taken seriously by researchers building the actual systems. That convergence of venues is itself significant.
The paper does not settle the question. What it does is force anyone working seriously on mind uploading to take a position on the Chord/Arpeggio distinction, because the position you take determines whether your entire research program is coherent.
Official Sources
- Michael Timothy Bennett, “A Mind Cannot Be Smeared Across Time,” arXiv preprint arXiv:2601.11620, forthcoming in AAAI 2026 Proceedings — arxiv.org/abs/2601.11620
- AAAI 2026 Conference: The 40th Annual AAAI Conference on Artificial Intelligence
- Global Workspace Theory: Baars, B.J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press
- Integrated Information Theory: Tononi, G. (2004). “An information integration theory of consciousness.” BMC Neuroscience, 5(1), 42
- Higher-Order Thought Theory: Rosenthal, D. (2005). Consciousness and Mind. Oxford University Press
- Intel Loihi 3 neuromorphic processor: Intel Neuromorphic Computing