Link to the code: brain-emulation GitHub repository

JUPITER and the 20-Billion-Neuron Wall: What Exascale Brain Simulation Actually Means


When the Jülich Supercomputing Centre brought its JUPITER machine to full operation in 2026, it became the fourth-fastest computing system on Earth and Europe’s first to cross the exaFLOP threshold — one quintillion floating-point operations per second. Researchers at the Jülich Research Centre in Germany, led by neurophysics professor Markus Diesmann, used it to do something no one had managed before: run a spiking neural network at the scale of the human cerebral cortex. Twenty billion neurons. One hundred trillion synaptic connections. All active simultaneously on a single computing system.

The announcement circulated through computational neuroscience communities in early 2026 as a landmark. For people following the whole brain emulation field, however, the landmark is only part of the story. The other part is the gap between hitting a neuron count and actually emulating a brain.

What JUPITER Is

JUPITER stands for Joint Undertaking Pioneer for Innovative and Transformative Exascale Research. It is co-funded by the European Union and operates at the Jülich Supercomputing Centre in Germany. It delivers more than one exaFLOP per second of computational performance, placing it among the fastest systems globally alongside American and Japanese machines. Its key hardware advantage is the NVIDIA GH200 Grace Hopper Superchip architecture in its Booster partition, which provides very high-bandwidth memory access at the node level — a design choice that matters enormously for brain-scale network simulations.

The machine builds on infrastructure developed during the Human Brain Project, the €1.3 billion EU flagship initiative that ran from 2013 to 2023. That project faced significant criticism for overpromising on neuroscientific outcomes while underdelivering on the fundamental question of what large-scale simulations could actually tell us about brain function. Diesmann’s lab learned from those years. The approach taken with JUPITER is more computationally disciplined.

The Simulation: What Diesmann’s Team Did

The Jülich group published their methods in a December 2025 arXiv preprint describing scalable construction of spiking neural networks using up to thousands of GPUs. The core technical contribution is a new parallel construction algorithm that allows individual nodes in the computing cluster to handle their own local connectivity rather than routing everything through a single centralized process. This reduces network setup time by a factor greater than ten compared to previous methods and allows the simulation to scale without hitting communication bottlenecks as node counts increase.
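
The preprint describes the algorithm in detail; the core idea, though, can be sketched in a few lines. In the toy version below, each rank builds only the incoming connections of the neurons it owns, using an independent random stream per target neuron so that no central process has to coordinate the draw. The function name, the round-robin neuron-to-rank mapping, and the seeding scheme are illustrative assumptions, not the Jülich implementation.

```python
import numpy as np

def build_local_connectivity(rank, n_ranks, n_neurons, indegree, seed=1234):
    """Toy sketch: each rank constructs only the incoming connections of the
    neurons it owns, with no centralized coordination step."""
    local_targets = np.arange(rank, n_neurons, n_ranks)  # round-robin ownership
    local_edges = []
    for tgt in local_targets:
        rng = np.random.default_rng(seed + tgt)          # independent stream per target
        sources = rng.integers(0, n_neurons, size=indegree)
        local_edges.append((tgt, sources))
    return local_edges

# Four "ranks" building a 1,000-neuron network with in-degree 100, in parallel
network = [build_local_connectivity(r, n_ranks=4, n_neurons=1000, indegree=100)
           for r in range(4)]
```

Because every target neuron's connections depend only on its own random stream, the constructed network is identical whether it is built on four ranks or four thousand, which is what lets setup time scale instead of funnelling through a single process.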

The biological model they ran is a cortical network of spiking neurons — neurons that fire action potentials in discrete events rather than transmitting continuous analog signals. Spiking neural networks are widely considered more biologically realistic than the rate-based artificial neural networks that power most contemporary AI systems. Each neuron in the simulation follows Leaky Integrate-and-Fire dynamics, accumulating incoming signals and firing when a threshold is crossed.
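
In a leaky integrate-and-fire model, each neuron is a single differential equation: the membrane potential decays toward a resting value, integrates incoming current, and emits a spike and resets when it crosses a threshold. A minimal, self-contained sketch of that rule follows; the parameter values are generic textbook choices, not the ones used in the Jülich model.

```python
import numpy as np

def simulate_lif(input_current, dt=0.1, tau_m=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Minimal leaky integrate-and-fire neuron.
    Integrates dV/dt = (-(V - v_rest) + r_m * I) / tau_m and fires whenever
    V crosses v_thresh, then resets. Times in ms, voltages in mV."""
    v = v_rest
    trace, spikes = [], []
    for step, i_ext in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m  # leaky integration
        if v >= v_thresh:                                # threshold crossing
            spikes.append(step * dt)
            v = v_reset                                  # post-spike reset
        trace.append(v)
    return np.array(trace), spikes

# Constant drive for 100 ms produces a regular spike train
trace, spike_times = simulate_lif(np.full(1000, 2.0))
```

The appeal of this model at the 20-billion-neuron scale is precisely its cheapness: one state variable and a handful of arithmetic operations per neuron per time step.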

At 20 billion neurons and 100 trillion synapses, the model reaches the scale of the human cerebral cortex. It is not the full brain — the cerebellum alone contains roughly 60–80 billion granule cells — but the cerebral cortex processes most of what we associate with cognition, perception, and behavior, making it the natural focus for this line of research.

The team describes the system as a high-performance scientific microscope: a tool for studying emergent dynamics and behaviors that only appear in large-scale networks and cannot be observed in smaller models. At a scale of millions of neurons, certain network oscillations and synchronization patterns emerge that are qualitatively different from what smaller simulations produce. Cortex-scale dynamics may reveal new phenomena that cannot be predicted from models ten times smaller.
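
One concrete example of a quantity that only makes sense at network scale is a population-level oscillation: it lives in the summed activity of many neurons, not in any single spike train. A toy diagnostic is sketched below; the function and binning choices are illustrative assumptions, not taken from the Jülich analysis.

```python
import numpy as np

def population_rate_spectrum(spike_times, n_neurons, t_max, bin_ms=1.0):
    """Bin all spikes into a population rate signal and take its power
    spectrum; peaks indicate network-level oscillations that no
    single-neuron trace would reveal. Times in ms."""
    bins = np.arange(0.0, t_max + bin_ms, bin_ms)
    rate, _ = np.histogram(spike_times, bins=bins)
    rate = rate / (n_neurons * bin_ms * 1e-3)            # spikes/s per neuron
    spectrum = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
    freqs = np.fft.rfftfreq(rate.size, d=bin_ms * 1e-3)  # Hz
    return freqs, spectrum
```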

Scale Is Not Emulation

This distinction is the most important thing to understand about the JUPITER result, and the researchers themselves make it explicitly. Thomas Nowotny, one of the collaborators, stated directly that simulating a structure at the scale of the human brain is fundamentally different from creating a functional simulation of the brain itself.

Consider what a full brain emulation requires. The essay “Building Brains on a Computer: The 2026 Core Requirements” by M. Schons identifies three non-negotiable capabilities: structural fidelity at the level of individual synaptic connections, functional replication that accurately models each neuron’s electrochemical behavior, and dynamic adaptability that preserves the capacity for learning and change. The JUPITER simulation addresses the first at scale — the network topology exists — but the functional replication is modeled rather than measured. The neurons follow simplified mathematical rules, not the full dendritic computation and ion-channel dynamics of real cortical cells.

The Allen Institute’s 9-million-neuron mouse cortex simulation on the Fugaku supercomputer took a different approach: building biological constraints from actual electron microscopy connectomics data, electrophysiology recordings, and Neuropixels measurements. That model was smaller but tried to satisfy multiple empirical constraints simultaneously. JUPITER’s simulation is larger but more idealized.

Eon Systems’ full emulation of an adult fruit fly, achieved by mapping the actual connectome and implementing realistic neuron models throughout, represents a third approach: complete structural and functional mapping of a real organism. The fly brain has roughly 140,000 neurons. Scaling that methodology to the human cortex’s 20 billion remains the central engineering problem in the field — and no amount of neuron-count matching shortens that gap.

What JUPITER Adds to the WBE Roadmap

The Jülich result does contribute something concrete that the WBE field needed. It demonstrates that the computational infrastructure for brain-scale simulation now exists and is accessible to European research institutions without requiring private cloud infrastructure. The exascale threshold has been crossed. Any future group with access to JUPITER or comparable systems can now run cortex-scale spiking network simulations as a research tool.

This has practical implications. The field can now test theoretical predictions about large-scale cortical dynamics — questions about how synchronization emerges, how information propagates across billions of connected neurons, and what network architectures produce stable versus unstable activity. These are questions that could not be answered with models orders of magnitude smaller.

It also establishes a baseline for gauging progress. The AI segmentation tools now compressing the connectomics timeline will eventually produce detailed wiring diagrams of mammalian cortical tissue. When that data arrives, the computational capacity to run the resulting network will need to exist. JUPITER demonstrates that the hardware side of that equation is within reach.

What remains undone is the data side. The JUPITER simulation runs a theoretically plausible cortical network. An actual emulation would run a network derived from a specific individual’s measured connectome, with neuron parameters calibrated against that individual’s recorded electrophysiology. Neither of those data sources exists for any human at anything approaching whole-cortex scale.

The Energy Problem

One figure from the Jülich materials is rarely discussed but relevant to any long-term thinking about brain-scale computing: power consumption. JUPITER draws approximately 14 megawatts under full load. The human brain uses 20 watts. That is a factor of roughly 700,000, nearly six orders of magnitude. Even accounting for the developmental state of neuromorphic hardware and the fact that JUPITER was not designed as a brain-efficient system, this disparity illustrates that running a brain-scale model is currently an industrial-scale energy operation, not a routine datacentre workload.

Neuromorphic computing platforms like Intel Loihi 3 and IBM NorthPole aim specifically at collapsing this efficiency gap. They implement event-driven, sparse communication in hardware — closer to how biological spiking networks actually operate. Progress on that front will determine whether brain-scale simulation eventually becomes a tool available to ordinary research labs rather than only national supercomputing centres.
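
The contrast with a dense, clock-driven update can be made concrete. In the toy step function below, per-step work scales with the number of spikes rather than with the total neuron or synapse count, which is the property event-driven hardware exploits; the data structures and parameter values are illustrative assumptions, not how Loihi or NorthPole are actually programmed.

```python
import numpy as np

def event_driven_step(spiking, out_targets, out_weights, v, decay=0.95, threshold=1.0):
    """Toy event-driven update: per-step cost is proportional to the number
    of spikes, not to the total neuron or synapse count.
    spiking     : indices of neurons that fired on the previous step
    out_targets : dict mapping source index -> array of target indices
    out_weights : dict mapping source index -> weights aligned with targets
    Targets are assumed unique per source."""
    v *= decay                                   # cheap passive leak for all neurons
    for src in spiking:                          # only active sources generate work
        v[out_targets[src]] += out_weights[src]  # deliver events along active edges
    fired = np.flatnonzero(v >= threshold)       # neurons crossing threshold now
    v[fired] = 0.0                               # reset the ones that fired
    return v, fired
```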

Path Forward

The JUPITER demonstration fits into a sequence. The connectome of a cubic millimetre of mouse cortex took years and petabytes of electron microscope data to reconstruct. Scaling to a full human cortex requires orders of magnitude more. AI-assisted segmentation is accelerating that work sharply, but it has not closed the gap entirely.

The measurement problem remains harder than the computation problem. JUPITER can run 20 billion neurons simultaneously. The bottleneck is not whether such a simulation can execute — it is whether the data to make it biologically valid will ever be acquired at the resolution required. Non-destructive scanning cannot reach synaptic resolution in a living human brain; electron microscopy can, but only on fixed tissue. The path to a genuine human brain emulation still runs through that hard physical wall.

What Diesmann’s team has done is establish that when, or if, sufficient connectome data is acquired, the computing substrate to simulate it at scale will be ready. That is a meaningful contribution to a roadmap that has too often been stalled by the computing side. The data side now holds the field back, which is precisely where the next decade of neuroscience will focus.

Official Sources