
Meta TRIBE v2: A Brain Digital Twin That Predicts Your Neural Responses at 70,000-Voxel Resolution


Functional brain imaging has always been a two-step process: scan, then interpret. Researchers present a stimulus, record the neural response with fMRI or EEG, and then build statistical models to describe the relationship between what was shown and what fired. The models are descriptive. They are not predictive in real time, and they are not general — a model trained on responses to natural images does not transfer cleanly to responses to speech.

Meta’s AI Research lab released TRIBE v2 in March 2026 with a different ambition. The model takes any video, audio, or text input and predicts the full-brain fMRI response that a human would produce — without scanning that human. It does this at 70,000-voxel resolution, covering essentially the entire cortical and subcortical volume captured by a modern 7-Tesla fMRI session. The first version of TRIBE, released in 2023, operated at roughly 1,000 voxels. Version 2 is a 70-fold resolution increase, trained on 1,115 hours of neural recording data from over 700 research volunteers.

The system represents the largest functional brain digital twin built to date — a model capable of simulating individual neural responses to arbitrary stimuli without requiring the physical presence of the person being modeled. For the whole brain emulation field, this raises a precise question: is this what brain emulation looks like at TRL 4, or is it something fundamentally different?

What TRIBE v2 Actually Does

The underlying architecture is a large neural network trained on paired data: stimulus inputs mapped to measured fMRI responses. The training dataset comes from a brain imaging project that recorded neural activity from volunteers watching hours of natural video content, listening to spoken language, and reading text, all inside high-field MRI scanners. The recordings capture blood-oxygen-level-dependent (BOLD) responses across the full brain volume at millimeter spatial resolution.

TRIBE v2 learns the stimulus-to-brain-response mapping across this dataset. At inference time, given a new stimulus that no volunteer in the training set has seen, the model predicts the fMRI activation pattern a representative human brain would produce. The output is a 70,000-element vector of predicted BOLD signal, one value per voxel.
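In machine-learning terms this is a voxelwise encoding model. The sketch below is a minimal stand-in for that setup, not Meta's published architecture: it assumes a fixed-dimensional stimulus embedding, fits a ridge regression from features to per-voxel BOLD, and reads out a prediction for a novel stimulus. All names, shapes, and sizes are illustrative.

    import numpy as np

    # Toy voxelwise encoding model -- a stand-in, not Meta's architecture.
    # n_vox is subsampled from the ~70,000 voxels TRIBE v2 actually predicts.
    rng = np.random.default_rng(0)
    n_trs, n_feat, n_vox = 2_000, 512, 1_000

    X = rng.standard_normal((n_trs, n_feat))    # stimulus embedding per fMRI volume (TR)
    Y = rng.standard_normal((n_trs, n_vox))     # measured BOLD per voxel per TR

    # Ridge regression for all voxels jointly: W = (X'X + lam*I)^-1 X'Y
    lam = 10.0
    W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

    # Inference on a stimulus no volunteer has seen: one predicted BOLD
    # value per voxel, no scanner involved.
    x_new = rng.standard_normal(n_feat)
    y_pred = x_new @ W                          # shape (n_vox,)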

The resolution figure requires context. A 70,000-voxel fMRI volume, at typical 1.5–2mm isotropic resolution, covers the entire human brain but at a scale where each voxel contains roughly 100,000 to several million neurons. The prediction operates at the scale of cortical columns and regions, not at the level of individual neurons or circuits. This is functional brain modeling at the mesoscale, not the neuron scale.
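A back-of-envelope check makes the scale concrete. All inputs below are assumed round numbers: roughly 600 cm³ of cortical gray matter and about 16 billion cortical neurons.

    # Back-of-envelope scale check; every input is an assumed round number.
    cortex_mm3 = 600_000.0                 # ~600 cm^3 of cortical gray matter
    voxel_mm3 = 2.0 ** 3                   # 2 mm isotropic voxel = 8 mm^3

    print(f"{cortex_mm3 / voxel_mm3:,.0f} voxels")         # ~75,000: near the 70,000 figure
    neurons_per_voxel = (16e9 / cortex_mm3) * voxel_mm3    # ~16B cortical neurons
    print(f"~{neurons_per_voxel:,.0f} neurons per voxel")  # ~213,000 in cortex

Denser structures push the count higher: the cerebellum packs tens of billions of neurons into a much smaller volume, which is where the several-million upper end of the range comes from.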

The Gap Between Prediction and Emulation

The distinction matters for anyone evaluating TRIBE v2 in the context of whole brain emulation roadmaps. Predicting fMRI responses is not the same as emulating brain function.

An fMRI model like TRIBE v2 is trained to reproduce a statistical relationship between stimuli and blood-flow responses. It does not model the underlying neural computation: which neurons fire, in which sequence, with which synaptic weights, under what neuromodulatory state. The BOLD signal is a hemodynamic proxy for neural activity — it reflects changes in blood oxygenation that follow neural firing with a lag of 4–6 seconds and limited spatial specificity. Two completely different neural computation patterns can produce indistinguishable BOLD signals.
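The proxy relationship is easy to see numerically. A minimal sketch using the canonical double-gamma hemodynamic response function, a standard fMRI modeling convention rather than anything TRIBE-specific: a one-second burst of neural activity produces a BOLD response that peaks several seconds after the firing has already ended.

    import numpy as np
    from math import gamma

    # Canonical double-gamma HRF (SPM-style parameters) -- a standard fMRI
    # model, not anything TRIBE-specific.
    def hrf(t, a1=6.0, a2=16.0, b=1.0, c=1.0 / 6.0):
        g = lambda a: t ** (a - 1.0) * b ** a * np.exp(-b * t) / gamma(a)
        return g(a1) - c * g(a2)

    dt = 0.1                              # seconds per sample
    t = np.arange(0.0, 30.0, dt)

    neural = np.zeros_like(t)
    neural[t < 1.0] = 1.0                 # neural firing: over within ~1 s

    bold = np.convolve(neural, hrf(t))[: len(t)] * dt
    print(f"BOLD peaks at t = {t[np.argmax(bold)]:.1f} s")   # ~5-6 s after onset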

TRIBE v2 learns to reproduce the proxy signal, not the computation it reflects. This is genuinely useful for research purposes — predicting how the brain will respond to a given stimulus, accelerating experimental design, and identifying which brain regions are engaged by novel content. But it is not a model of what the brain is computing. A perfect TRIBE v2 prediction would tell you where blood flow changes; it would not tell you what cognitive process produced that change, what the individual neurons are doing, or how to recreate the computation in a different substrate.

The Digital Consciousness Model framework distinguishes between behavioral fidelity (producing outputs that match a biological system’s outputs) and functional emulation (reproducing the underlying computation). TRIBE v2 achieves a form of behavioral fidelity at the hemodynamic level. Functional emulation requires structural connectivity data — the kind neuromorphic twins and connectome-based circuit models are attempting to provide.

Where TRIBE v2 Does Add Value for Emulation Research

The limitations of fMRI modeling do not make TRIBE v2 irrelevant to emulation research. The model contributes in several ways.

First, cross-subject regularity. Training a model that generalizes across 700+ individuals at 70,000-voxel resolution demonstrates that brain activity patterns at the mesoscale have sufficient regularity across people to be captured by a single model. This is evidence that the brain is not infinitely idiosyncratic: there is shared structure in how different humans respond to the same stimulus class, which bears directly on how ambitious a general-purpose brain model can be.
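The standard statistic behind this kind of claim is intersubject correlation: correlate each person's response time course with the average of everyone else's. A minimal sketch on synthetic data, not the TRIBE v2 training set:

    import numpy as np

    # Leave-one-out intersubject correlation (ISC) on synthetic data: each
    # subject's time course = a shared stimulus-driven signal + private noise.
    rng = np.random.default_rng(0)
    n_subjects, n_timepoints = 20, 300
    shared = rng.standard_normal(n_timepoints)

    data = shared + 0.8 * rng.standard_normal((n_subjects, n_timepoints))

    isc = []
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)   # average of everyone else
        isc.append(np.corrcoef(data[s], others)[0, 1])
    print(f"mean ISC: {np.mean(isc):.2f}")                 # high ISC = shared structure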

Second, stimulus generalization. TRIBE v2’s ability to predict responses to novel stimuli across modalities — it was not trained on every possible video clip — means it has learned something about the mapping between stimulus semantics and brain responses that generalizes. This is closer to modeling a cognitive process than to memorizing a lookup table.
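Operationally, "generalizes" means the model is scored on stimuli held out of training, voxel by voxel. A hypothetical evaluation sketch, with synthetic data standing in for both the model and the recordings:

    import numpy as np

    # Hypothetical generalization test: fit on training stimuli, score
    # per-voxel accuracy on held-out stimuli. Synthetic data throughout.
    rng = np.random.default_rng(0)
    n_train, n_test, n_feat, n_vox = 400, 100, 64, 500

    W_true = rng.standard_normal((n_feat, n_vox))          # ground-truth mapping
    X_tr = rng.standard_normal((n_train, n_feat))
    X_te = rng.standard_normal((n_test, n_feat))
    Y_tr = X_tr @ W_true + rng.standard_normal((n_train, n_vox))
    Y_te = X_te @ W_true + rng.standard_normal((n_test, n_vox))

    W = np.linalg.lstsq(X_tr, Y_tr, rcond=None)[0]         # fit on training stimuli only
    Y_hat = X_te @ W                                       # predict responses to novel stimuli

    z = lambda A: (A - A.mean(0)) / A.std(0)               # column-wise z-score
    r_per_voxel = (z(Y_hat) * z(Y_te)).mean(axis=0)        # per-voxel Pearson r
    print(f"median held-out r: {np.median(r_per_voxel):.2f}")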

Third, scale of training data. 1,115 hours of brain recording data from 700+ volunteers is an unprecedented dataset for this type of research. The practical implications are twofold: it demonstrates that large-scale human brain imaging datasets are now feasible to collect, and it provides a benchmark against which other functional brain models can be evaluated.

Comparison to Virtual Brain Twins and Neuromorphic Approaches

TRIBE v2 operates at a different level than virtual brain twin approaches that model individual patients’ connectivity and dynamics for clinical purposes. Virtual brain twins typically combine structural MRI, diffusion tractography, and personalized dynamical systems modeling to produce individualized predictions of neural dynamics. They operate at the level of brain regions connected by white matter tracts rather than at the voxel level TRIBE v2 targets, but they explicitly model the underlying network dynamics instead of fitting a black-box statistical model to imaging data.
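The difference in modeling style is easiest to see side by side. A deliberately simplified sketch of the virtual-brain-twin approach: regional activity with local decay, coupled through a structural connectivity matrix that would in practice come from diffusion tractography (random here), integrated forward in time. Every parameter is illustrative.

    import numpy as np

    # Minimal region-level dynamical model in the virtual-brain-twin style.
    # C stands in for a tractography-derived structural connectome (random
    # here); all parameters are illustrative.
    rng = np.random.default_rng(0)
    n_regions = 90                                  # e.g., an AAL-style parcellation
    C = rng.random((n_regions, n_regions)) * (rng.random((n_regions, n_regions)) < 0.1)
    np.fill_diagonal(C, 0.0)
    C /= C.sum(axis=1, keepdims=True) + 1e-12       # normalize incoming coupling

    dt, decay, coupling, noise = 0.01, 1.0, 0.5, 0.05
    x = 0.1 * rng.standard_normal(n_regions)        # regional activity state
    states = []
    for _ in range(2_000):                          # 20 s of simulated dynamics
        dx = -decay * x + coupling * np.tanh(C @ x) + noise * rng.standard_normal(n_regions)
        x = x + dt * dx
        states.append(x.copy())
    states = np.asarray(states)                     # compare these dynamics to data

The point of the contrast: here the forward model is the object of interest and imaging data is used to constrain its parameters, whereas in TRIBE v2 the imaging data is the regression target.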

Neuromorphic twins go further: they emulate biological neural network activity in real time on neuromorphic hardware and co-evolve with the living brain through bidirectional coupling. This is orders of magnitude closer to what whole brain emulation requires: an explicit model of the computational substrate, not a statistical model of its outputs.

TRIBE v2’s contribution is at the data-and-prediction layer, not the emulation layer. It answers the question “what will the brain do?” at the fMRI level. The emulation question is “how does the brain compute what it does?” — which requires a different class of model.

Adversarial Use Cases and Privacy Implications

A model that predicts brain responses to arbitrary stimuli at high resolution has implications beyond research. TRIBE v2 raises the possibility of inferring cognitive states — what a person is attending to, what they find aversive, what activates their reward circuits — from stimulus alone, without scanning them. This is a weaker form of “mind reading” than direct neural decoding, but it moves in that direction.

Meta has not published details about how TRIBE v2 will be made available to external researchers or whether access will be restricted. The training data was collected under IRB-approved protocols and is not identifiable at the individual level, but the model itself encodes information about how the average human brain processes content. How that capability is deployed will have policy implications that are starting to receive attention from neurorights advocacy groups.

Path Forward

TRIBE v2 occupies an interesting position in the landscape of brain modeling approaches. It is not whole brain emulation — it does not model neural dynamics, does not operate at single-neuron resolution, and does not attempt to reproduce the underlying computation. But it demonstrates that large-scale, high-resolution functional brain modeling is technically achievable, and that the statistical structure of human brain responses to natural stimuli is learnable.

The next step toward emulation is models that connect functional predictions to structural substrates: that explain which circuits produce which responses, rather than predicting the hemodynamic output of those circuits statistically. The deep-learning cortical circuit simulation work published in March 2026 takes this step for the mouse visual cortex. Whether human-scale versions of that approach become tractable will depend on connectomics data availability, which projects like LICONN and Connectome-seq are working to address.

TRL Assessment: TRL 3–4 for functional fMRI prediction. The method is validated on large human datasets. Its relevance to brain emulation proper is indirect — it models outputs rather than the underlying computation.

Official Sources

  • Meta AI Research (March 26, 2026) — TRIBE v2: Large-Scale Brain Activity Prediction Model. Coverage via Psychology Today: https://www.psychologytoday.com/us/blog/the-future-brain/202603/a-new-digital-twin-for-brain-activity-aims-to-speed-research
  • Défossez, A. et al. (2023) — TRIBE: Decoding speech from non-invasive brain recordings. arXiv:2208.12266
  • Huth, A.G. et al. (2016) — Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600):453–458. DOI: 10.1038/nature17637
  • Logothetis, N.K. (2008) — What we can do and what we cannot do with fMRI. Nature, 453(7197):869–878. DOI: 10.1038/nature06976
  • Doerig, A. et al. (2023) — The neuroconnectionist research programme. Nature Reviews Neuroscience, 24(7):431–450. DOI: 10.1038/s41583-023-00705-w