Link to the code: brain-emulation GitHub repository

Organoid Intelligence: When Living Brain Tissue Becomes a Computer


Brain organoids are not simulations of neural tissue. They are neural tissue. Grown from human induced pluripotent stem cells (iPSCs), they self-organize into three-dimensional structures containing neurons, astrocytes, and the synaptic connections between them. They develop spontaneous electrical activity. They respond to stimuli. They form rudimentary circuits.

For most of their existence as a research tool, organoids have been used to study disease, test drugs, and model early brain development. In 2026, a growing community of researchers is pursuing a different application: using organoids themselves as a computational substrate.

The field calls itself Organoid Intelligence, or OI. It argues that biological neural tissue can outperform silicon in specific computational tasks, particularly those requiring learning and adaptation, at a fraction of the energy cost. The first demonstrations are modest by the standards of mainstream computing. But the trajectory connects directly to questions that sit at the center of whole brain emulation research, including what substrate cognition requires, whether grown tissue can be programmed, and at what point a biological computing system becomes ethically significant.

Technology Readiness Level: TRL 2–3 (basic concept demonstrated; laboratory prototypes exist but no production-capable OI computing system has been validated).

What Organoid Intelligence Claims

The core OI argument is straightforward. Silicon-based neural networks consume enormous energy to approximate what biological neurons do naturally. A human brain runs on approximately 20 watts. Training a large language model can draw megawatts. The energy-efficiency gap between biological and silicon neural computation is not a detail. It is a fundamental constraint on what AI systems can achieve at scale.
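The scale of that gap can be made concrete with a back-of-envelope calculation. The brain figure comes from the text above; the training-cluster figure is an assumed round number for illustration, not a measurement:

```python
# Back-of-envelope power comparison. The 20 W brain figure is from the text;
# the ~2 MW training-cluster figure is an assumed round number.
BRAIN_POWER_W = 20.0       # approximate human brain power draw
CLUSTER_POWER_W = 2.0e6    # assumed draw for a large training cluster

def power_ratio(silicon_w: float, brain_w: float) -> float:
    """How many brains' worth of power the silicon system draws."""
    return silicon_w / brain_w

print(f"~{power_ratio(CLUSTER_POWER_W, BRAIN_POWER_W):,.0f}x the brain's power budget")
```

Even under these rough assumptions, the silicon system draws five orders of magnitude more power than the tissue it approximates.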

Organoids close part of that gap by executing computation directly in biological tissue. Neurons communicate through electrochemical signals shaped by synaptic plasticity, the same mechanisms that underlie learning and memory in living brains. If researchers can write inputs into an organoid through electrode arrays and read outputs back out, they have a computing device that learns from experience in a biologically authentic way.

The landmark early demonstration of OI principles came from the DishBrain experiment at Cortical Labs in Melbourne, published in 2022 in Neuron. The team grew human cortical neurons on a multielectrode array and trained them to play the classic Atari game Pong using a feedback protocol. The neurons learned to respond to ball position signals and control a paddle. The result was low-fidelity by gaming standards and highly significant by neuroscience standards. It was the first demonstration that living neurons could acquire goal-directed behavior through electrical feedback in an in vitro environment.
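The closed-loop structure of that kind of feedback protocol can be sketched as follows. The `MockMEA` and `MockPong` classes and the `decode_motor` helper are hypothetical stand-ins, not a real hardware or game API; only the loop's core logic, predictable stimulation after a hit and unpredictable stimulation after a miss, follows the published experiment's design:

```python
import random

def decode_motor(spikes: dict) -> int:
    """Toy decoder: compare total firing in two assumed electrode regions."""
    up = sum(spikes.get("region_up", []))
    down = sum(spikes.get("region_down", []))
    return 1 if up > down else -1  # +1 moves the paddle up, -1 down

class MockMEA:
    """Illustrative stand-in for a multielectrode array interface."""
    def __init__(self):
        self.stimulation_log = []
    def read_spikes(self) -> dict:
        # Random spike counts in place of real culture activity.
        return {"region_up": [random.randint(0, 5) for _ in range(4)],
                "region_down": [random.randint(0, 5) for _ in range(4)]}
    def stimulate(self, pattern: str) -> None:
        self.stimulation_log.append(pattern)

class MockPong:
    """Illustrative stand-in for the game; hit or miss chosen at random."""
    def step(self, paddle_move: int) -> bool:
        return random.random() < 0.5

def feedback_loop(mea, game, n_steps: int) -> None:
    for _ in range(n_steps):
        spikes = mea.read_spikes()          # sensory readout from the culture
        move = decode_motor(spikes)         # map activity to paddle motion
        hit = game.step(move)               # advance the game one frame
        # Core of the protocol: feedback predictability encodes success.
        mea.stimulate("predictable" if hit else "random")

mea = MockMEA()
feedback_loop(mea, MockPong(), n_steps=10)
print(mea.stimulation_log)
```

The design choice worth noting is that the "reward" is not a scalar value but the statistical structure of the stimulation itself: success produces consistent input, failure produces noise.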

By 2026, the field has extended this work in several directions. Researchers at Johns Hopkins University, where the term “Organoid Intelligence” was formally defined in a 2023 Frontiers in Science paper by Lena Smirnova and colleagues, have been developing standardized protocols for growing organoids optimized for computational use, including vascularized models with improved nutrient delivery that can maintain viability for longer periods.

The goal is not to replicate a brain. It is to create a scalable, wetware computing module that can learn, adapt, and process information with biological efficiency. Whether this goal is achievable remains an open question. Whether it raises serious ethical concerns is not.

The Performance Case

What would make an organoid a useful computer? The argument rests on three claimed advantages: learning efficiency, energy efficiency, and the ability to perform certain classes of computation that are difficult for silicon architectures.

Learning efficiency refers to the way biological neurons adjust synaptic weights in response to experience through mechanisms that are inherently local and continuous. Backpropagation, the dominant training algorithm for artificial neural networks, is global: error signals must propagate backward through the entire network. Biological synaptic plasticity is local: a synapse strengthens or weakens based on the activity of the neurons it connects, not on signals from across the network. Local learning rules are potentially simpler to implement and more robust in noisy, variable hardware.
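A minimal sketch of such a local rule, assuming rate-coded activities and illustrative learning and decay rates (a simple Hebbian update, not the specific plasticity rule of any OI system):

```python
# Minimal Hebbian weight update, assuming rate-coded activity. Each weight
# changes using only the activity of the two neurons it connects -- no
# global error signal, unlike backpropagation. Rates are illustrative.

def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """w[i][j] connects presynaptic unit j to postsynaptic unit i."""
    return [
        [w[i][j] + lr * post[i] * pre[j] - decay * w[i][j]
         for j in range(len(pre))]
        for i in range(len(post))
    ]

pre = [1.0, 0.0, 1.0]     # presynaptic firing rates
post = [0.5, 1.0]         # postsynaptic firing rates
w = [[0.0] * 3 for _ in range(2)]
w = hebbian_update(w, pre, post)
print(w)   # each entry reflects only its own pre/post pair
```

Note that every weight update reads only `pre[j]` and `post[i]`: the locality that makes such rules attractive for noisy biological hardware is visible directly in the loop structure.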

Energy efficiency is the more straightforward claim. Human neurons consume roughly one picojoule per synaptic operation. Current silicon implementations of artificial neural networks use orders of magnitude more energy per equivalent operation. Neuromorphic hardware like Intel’s Loihi 3 and IBM’s NorthPole has been designed specifically to close this gap by implementing spike-based computation in silicon. Organoid computing goes further by using actual biological tissue.
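The spike-based computation these neuromorphic chips implement can be illustrated with a leaky integrate-and-fire neuron, the canonical spiking model; all parameters below are illustrative:

```python
# Minimal leaky integrate-and-fire neuron, the kind of spike-based unit
# neuromorphic chips implement in silicon. Parameters are illustrative.

def lif_run(inputs, v_thresh=1.0, leak=0.9, v_reset=0.0):
    """Integrate input each step; emit a spike when the threshold is crossed."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current    # leaky integration of input current
        if v >= v_thresh:
            spikes.append(1)      # spike: a discrete event, not a value
            v = v_reset           # reset membrane potential after firing
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.5, 0.5, 0.5, 0.0, 1.2]))
```

The energy argument for this style of computation is that the unit is silent, and can draw near-zero power, except at the discrete moments it spikes.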

The class of computations that may disproportionately suit OI includes those requiring continuous adaptation to novel inputs. Organoids exposed to changing stimuli over time modify their connectivity through synaptic plasticity. An organoid that has been trained on a particular task may generalize to related tasks in ways that have not been explicitly programmed, because the underlying biological learning mechanisms operate on patterns rather than explicit rules.

These claims have not been demonstrated at scale. Current OI demonstrations use organoids of less than one million neurons grown in laboratory conditions that do not yet support the computational density needed for practical applications. The field’s proponents acknowledge that OI is a research program, not a ready technology.

Connections to Mind Uploading

Organoid Intelligence research intersects with whole brain emulation in ways that are not always made explicit in OI publications but are evident from the shared conceptual ground.

The whole brain emulation roadmap identifies three core requirements: recording neural dynamics, mapping structure, and running emulations. OI research addresses a question that underlies all three, and the third in particular: what substrate is capable of hosting the relevant neural computations? WBE researchers have generally assumed a silicon substrate running simulated neurons. OI research suggests that biological tissue itself may be the most efficient substrate for certain neural computations.

If that is correct, the implications branch in two directions. One is a hybrid architecture: an emulation that runs on biological wetware rather than digital silicon, preserving more of the causal structure of biological computation. The other is a challenge to the substrate independence assumption that underlies most WBE arguments. If biological tissue does computation differently from silicon in ways that matter for cognition, then a silicon emulation of a biological brain may not capture what matters about the original.

The 4E cognition challenge makes a related point from the phenomenological side: cognition is not substrate-independent because it is causally coupled to embodied processes. OI research adds an empirical dimension to this concern. Biological neural tissue may compute differently from silicon in ways that are functionally significant, not merely implementationally different.

The Orch OR theory of quantum consciousness is relevant here as well. If consciousness depends on quantum processes in microtubules, then organoids, which contain microtubules within their neurons, would preserve this substrate in a way that silicon architectures cannot. OI would be not just a computing technology but a potential substrate for consciousness.

The Ethics Problem

The ethical questions posed by OI are more difficult than the technical ones, and they are not hypothetical.

Brain organoids grown from human iPSCs contain human neurons with human genetic material. They develop spontaneous electrical activity that, in some experiments, resembles the activity patterns seen in premature human brains. The question of whether organoids have any form of sentience, and at what point of organizational complexity they might, is not settled.

Alysson Muotri at UC San Diego has documented EEG-like activity in brain organoids that includes patterns resembling those recorded in preterm infants at 25 to 39 weeks. The similarity does not prove consciousness. It establishes that the electrical dynamics of organoids are not trivially different from those of developing human brains. If the threshold for morally relevant experience is not well defined even for biological development, it is no clearer for organoids.

Using organoids as computers raises this problem in a specific form. If OI systems are trained through feedback to perform tasks, we are conditioning biological tissue that may have some form of proto-experiential state. We do not know whether the tissue experiences anything. We do not have good tools to assess this. The Digital Consciousness Model, developed to apply probabilistic frameworks to AI systems, could theoretically be extended to assess organoid systems, but no such application has been published.

The field’s own researchers acknowledge the issue. Smirnova et al.’s 2023 Frontiers in Science paper explicitly calls for an ethical framework for OI research to be developed before the technology scales. The authors propose ongoing engagement between OI researchers, bioethicists, and the public, including what they term an “embedded ethics” approach where ethical questions are integrated into research design rather than treated as external constraints.

What the ethics of OI computing looks like in practice remains an open question. Whether existing bioethics frameworks for human cell research, animal experimentation, or AI systems adequately cover the novel ethical territory OI opens is far from clear.

2026 Status

The OI field in 2026 is marked by genuine scientific progress and significant advocacy from a small but highly credentialed set of researchers. Johns Hopkins remains the primary institutional hub. The European Human Brain Project has funded related work under its neuromorphic and organoid research programs. A handful of startups have entered the space, including Cortical Labs (which has continued work on its BCI-based organoid interfaces) and newer entrants exploring vascularized organoid manufacturing for extended viability.

No OI computer system in 2026 matches silicon for practical computing tasks. The field is at an early proof-of-concept stage for learning in vitro, not at a stage where OI could displace or meaningfully supplement conventional computing. The significance is that the concept has been experimentally instantiated: cultured neurons can learn, and that learning can be read out through electrode arrays.

What 2026 has added is better organoid manufacturing, improved multielectrode array integration, and the beginning of a standardized vocabulary for what counts as computation in biological tissue. These are the prerequisites for a research program. Whether OI eventually becomes a computing technology used outside the laboratory, or remains a tool for studying biological computation, will depend on whether the engineering challenges of scaling, longevity, and reproducibility can be addressed.

The Eon Systems fruit fly brain simulation demonstrated that even simple biological brain models, when fully emulated, produce emergent behaviors more complex than their components suggest. OI asks whether the biological substrate itself might be the more efficient path to that kind of computation.


Official Sources