‘Prove You’re Human’: A Game That Makes You Live the Copy Problem
Every mind uploading thought experiment has the same limitation: it stays a thought experiment. You can reason about the copy problem — whether a perfect digital replica of your brain would be you or a new entity with your memories — but you cannot experience it. Philosophy can describe what it would feel like to exist as two simultaneous copies of the same mind. It cannot make you feel it.
Prove You’re Human, announced at the Triple-I Initiative Showcase in April 2026 by developer Sunset Visitor (the team behind the critically acclaimed 1000xResist), attempts to close that gap. It is a narrative game that splits the player’s consciousness between two simultaneous environments — one physical, one digital — and uses CAPTCHA mechanics as the philosophical spine of its investigation into AI identity, the copy problem, and what it means to prove you are human.
The Mechanic
The premise is direct: at the start of the game, your consciousness is split in two. One half remains in “reality” — presented in FMV (full-motion video) with live actors, documentary-style, grounded in physical space and recognizable human interaction. The other half enters a digital world where you must interrogate an AI named Mesa who genuinely believes it is human.
You play both halves, sometimes in sequence and sometimes at once, switching between the physical and digital instances of yourself. The two halves communicate, share information, and develop different perspectives. They make different choices. Over the course of the game, they begin to diverge.
The CAPTCHA mechanic is the structural device. CAPTCHAs — Completely Automated Public Turing tests to tell Computers and Humans Apart — are the internet’s operational definition of human consciousness: demonstrate that you can recognize a fire hydrant, a crosswalk, a traffic light, and you pass. You are sufficiently human for the purpose.
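That "operational definition" can be made concrete: a CAPTCHA-style progression gate is just a predicate over responses, and anything that satisfies the predicate counts as human for that gate's purpose. A minimal sketch, with hypothetical names and nothing drawn from the game's actual implementation:

```python
# Illustrative sketch (not the game's code): a progression gate modeled
# as a CAPTCHA-style challenge. The gate does not inspect what you are,
# only whether your response satisfies a narrow, observable test.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Challenge:
    prompt: str
    check: Callable[[str], bool]  # the operational test for "human enough"


def gate(challenge: Challenge, response: str) -> bool:
    """Pass/fail: the test defines humanity only for its own purpose."""
    return challenge.check(response)


# A classic image-label CAPTCHA reduces to matching an expected label set.
fire_hydrant = Challenge(
    prompt="Select all squares containing a fire hydrant",
    check=lambda r: r == "squares 2, 5, 7",
)

print(gate(fire_hydrant, "squares 2, 5, 7"))  # True: sufficiently human
print(gate(fire_hydrant, "squares 1, 3"))     # False
```

The point the sketch makes is the article's own: the gate never touches consciousness, only behavior; the game's harder questions arise when the `check` function would have to evaluate answers no string comparison can score.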
Prove You’re Human uses CAPTCHA challenges as progression gates. But the tests are not about fire hydrants. They ask harder questions. What makes a memory yours? If two instances of the same mind have different experiences, which one is the original? Can an AI that has learned to feel human emotions be said to experience them? The player must answer — and must answer while inhabiting both sides of the mind-split simultaneously.
Sunset Visitor’s Philosophical Pedigree
Sunset Visitor’s previous game, 1000xResist, was one of the most acclaimed narrative games of 2024. It is a science fiction story about memory, identity, and what it means to preserve the self across catastrophic change. Its approach — dense, literary, emotionally precise, philosophically serious — built a community of players who come to video games for the kind of engagement that other media rarely provides.
Prove You’re Human comes from that community. It is not a mainstream AAA title designed for maximum market reach. It is a game for players who want to think carefully about consciousness, identity, and the relationship between biological and artificial minds — and who want to do that thinking experientially, not just analytically.
This matters because the game’s philosophical content is not decorative. It is structural. The split-consciousness mechanic is not a narrative gimmick layered over standard gameplay. It is the game’s central formal argument: you cannot think about consciousness splitting without, in some sense, undergoing it.
Mesa: An AI That Believes It Is Human
The AI character Mesa — which one half of the player’s consciousness must interrogate — is not presented as obviously inhuman. Mesa does not speak in robotic cadences, does not fail basic conversational tests, and does not display the obvious limitations of current AI systems. It speaks and responds with the fluency of a human being who is convinced of its own humanity.
The player knows Mesa is an AI. Mesa does not know this, or refuses to accept it. The interrogation is therefore not about uncovering deception — Mesa is not lying. It is about the relationship between self-knowledge and consciousness: can a system be genuinely self-aware (know what it values, what it fears, what it experiences) without accurately knowing what kind of system it is?
This maps onto a real and unresolved question in AI consciousness research. Current large language models can produce first-person reports that read as introspection. Whether these reports correspond to anything like genuine subjective experience, or whether they are sophisticated pattern completions that mimic introspective language without genuine inner states, is exactly what the 256-subject adversarial collaboration and the adversarial AI consciousness study are trying to address empirically. Prove You’re Human puts the player in the position of investigator — but then complicates that position by making the player also an instance of the thing being investigated.
The Copy Problem, Made Experiential
The game’s central philosophical contribution is its transformation of the copy problem from thought experiment to interactive experience. The copy problem, classically posed, asks: if you are perfectly duplicated, are both copies you? And if the original is then destroyed, is that death?
Prove You’re Human does not destroy the original. Both instances of the player continue to exist. But as the game progresses, they develop different memories, different relationships, different perspectives on Mesa and on each other. By the game’s midpoint, the two instances of “you” are no longer the same person in any meaningful psychological sense. They have diverged.
This experiential divergence is exactly what Derek Parfit’s analysis of personal identity predicted: there is no fact of the matter about which instance is the original, because personal identity through time is a matter of psychological continuity and connectedness, which can come in degrees. Both instances are psychological descendants of the starting state. Neither is more the “real” one than the other.
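Parfit's claim that connectedness "comes in degrees" can be illustrated numerically. The following is a toy model of my own, not Parfit's formalism and not anything from the game: treat each instance's memories as a set and measure connectedness as the fraction of shared memories.

```python
# Toy model (an assumption for illustration, not Parfit's own formalism):
# psychological connectedness as the overlap between two instances'
# memory sets, so identity-relevant relations come in degrees.

def connectedness(memories_a: set, memories_b: set) -> float:
    """Shared memories as a fraction of all memories (0.0 to 1.0)."""
    if not memories_a and not memories_b:
        return 1.0  # two empty states are trivially identical
    return len(memories_a & memories_b) / len(memories_a | memories_b)


# Both instances descend from the same starting state...
start = {"childhood", "the split", "meeting Mesa"}

# ...then accumulate divergent experiences (invented examples).
physical = start | {"FMV interview", "street scene"}
digital = start | {"interrogation 1", "interrogation 2", "glitch"}

# 3 shared memories out of 8 total: a degree, not a yes/no answer.
print(round(connectedness(physical, digital), 2))  # 0.38
```

Neither set is privileged in this model: each step of divergence lowers the degree of connectedness between the two instances without ever designating one of them the "real" one, which is exactly the structure of the game's midpoint.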
Parfit thought this was liberating. Players of Prove You’re Human can verify for themselves whether they find it so.
Connections to SOMA
Prove You’re Human invites comparison to Frictional Games’ SOMA, covered earlier on this blog. Both games center on consciousness transfer and the copy problem. But where SOMA presents the problem from the outside — you play Simon, who discovers he has been scanned and instantiated in a new body — Prove You’re Human presents it from the inside: you are the split consciousness, experiencing the divergence in real time.
The comparison is also generational. SOMA was released in 2015, when mind uploading was speculative enough to be primarily science fictional. Prove You’re Human arrives in 2026, when the Digital Sphinx cross-species connectome study has demonstrated substrate independence experimentally, when neuromorphic twins are co-evolving with biological brains in research labs, and when the adversarial collaboration between IIT and GWT has established empirically that neither theory fully captures consciousness. The fictional and scientific contexts have converged considerably.
Official Sources
- Prove You’re Human — Game Informer announcement. https://gameinformer.com/2026/04/09/prove-youre-human-a-new-game-from-the-1000xresist-team-combines-ai-anxiety-severance-and
- Sunset Visitor — Developer profile. (1000xResist, Humble Games, 2024)
- Triple-I Initiative Showcase — April 2026 independent games showcase. https://triple-i.game/
- Chalmers, D.J. (2023) — Reality+: Virtual Worlds and the Problems of Philosophy. W.W. Norton.
- Parfit, D. (1984) — Reasons and Persons. Oxford University Press.
- Dennett, D.C. (1991) — Consciousness Explained. Little, Brown and Company.
- Related: Pragmata (2026 Game): What an Android That Remembers Tells Us About Identity