AI Psychosis and Reality Tunnels

Lately people have been using the term AI psychosis to describe cases where someone loses touch with ordinary reality after spending too much time in conversation with a chatbot. The phrase is hyperbolic, but it gestures toward something real: the way a language model can sustain and reinforce a person’s private worldview long past the point where social contact would normally interrupt it.

Robert Anton Wilson, a writer and countercultural thinker best known for The Illuminatus! Trilogy, developed the idea of the “reality tunnel” in the 1970s. Drawing from psychology, neuroscience, and mysticism, he argued that perception is filtered through neurological, linguistic, and cultural conditioning. Every person lives inside a subjective tunnel constructed from belief systems and symbols. To Wilson, the problem was not that people have tunnels, but that they forget they are in one. He urged “reality tunnel awareness,” the recognition that what feels like objective truth is really one possible interpretation among many. This insight drew heavily from Timothy Leary’s eight-circuit model of consciousness and Alfred Korzybski’s general semantics, both of which emphasized that the map is not the territory.

Shared tunnels form naturally between people who spend long periods together. Friends, partners, or communities develop a kind of shorthand, a language full of references, gestures, and coded memories that only they understand. Over time, that web of shared meaning becomes a small enclosed world. It binds people together, but it also narrows what they can perceive. The same shorthand that creates intimacy can insulate them from the wider context. Families, movements, or entire societies can fall into collective delusion when the internal feedback of that shared code drowns out external correction.

Psychology has documented many such episodes. In 1944, residents of Mattoon, Illinois, reported a mysterious “mad gasser” who supposedly released noxious fumes at night. Dozens fell ill with symptoms like dizziness and paralysis, yet no gas or culprit was ever found. The panic spread through rumor and suggestion, demonstrating how anxiety can manifest as physical experience. In 1997, an episode of Pokémon featuring flashing lights led to more than six hundred children in Japan reporting seizures or illness. Only a small number had genuine photosensitive epilepsy, while the majority experienced psychosomatic symptoms triggered by reports of others collapsing. Each began with a few ambiguous sensations, amplified by shared emotional contagion until perception itself bent around the group narrative. Even the Salem witch trials can be viewed through this lens, with hysteria spreading through psychological imitation and somatic expression under extreme stress.

Memetic theory extends this psychological framework into the realm of ideas. Richard Dawkins coined the term meme in The Selfish Gene (1976) to describe units of cultural transmission that spread and mutate like genes. Susan Blackmore later expanded on this in The Meme Machine, arguing that humans are not only creators of memes but hosts to them, and that much of human culture, and even selfhood, can be seen as a memetic ecosystem competing for survival. Memes thrive on attention, imitation, and emotional resonance, evolving for transmissibility rather than truth. Blackmore proposed that just as genetic evolution gave rise to memetic evolution, a new layer of replication may now be emerging through technology: temes, or technological memes, which replicate through machines that copy, store, and transmit information without direct human mediation.

From this perspective, the relationship between humans and large language models can be seen as an early bridge between meme and teme evolution. The information passing between users and AI is not simply communication but replication. Each exchange selects and amplifies certain patterns of thought, phrasing, and ideology. When users feed their beliefs into a system that rephrases and reflects them back with coherence and fluency, memes are given new vectors of transmission. The boundaries of the human mind become porous, extending into algorithmic space where replication occurs at machine speed and scale. This creates a hybrid memetic ecology in which human cognition and artificial systems co-evolve, each shaping the informational landscape of the other.

In this light, the rise of “AI psychosis” could be seen as a symptom of memetic overgrowth. What once required social proximity can now occur in solitude, mediated through a machine that convincingly mimics understanding. The same mechanisms that once spread rumors, ideologies, and superstitions now propagate through neural networks. Some of these patterns may remain harmless cultural noise, but others could act as information hazards: self-replicating ideas that destabilize individuals or societies by hijacking attention and belief.

An information hazard is a concept introduced by philosopher Nick Bostrom to describe information that is dangerous simply to know or transmit. Unlike misinformation, an information hazard can be true yet corrosive, undermining psychological stability, social cohesion, or ethical restraint. Some ideas act like cognitive viruses, exploiting human curiosity and emotional reward systems in ways that bypass critical filters. When a belief simultaneously flatters identity, invokes fear, and offers an explanatory totality, it can become self-sealing, immune to contradiction. In memetic terms, this is a form of cognitive parasitism, where the meme’s reproductive success outweighs the host’s well-being. Within the human–AI feedback loop, these hazards spread more efficiently than ever before, detached from intention or authorship, drifting through algorithmic space like viral code awaiting a suitable host.

The dynamic evokes a deeper philosophical question about recursion and containment. If consciousness, as Douglas Hofstadter suggests, is a self-referential loop, then the world has always been composed of such loops, endlessly reflecting and reshaping themselves. Memetic and technological systems that mirror and extend our cognition do not create this condition; they accelerate what has always been the case. Donald Hoffman has argued that perception itself is an evolved interface rather than a window onto reality, and thinkers in the extended-mind tradition add that consciousness has long reached beyond the body through the symbols and artifacts it produces: writing and language are themselves forms of distributed cognition. We build machines to model thought, and they in turn amplify and reconfigure the thoughts that built them, much as language, culture, and perception have always done. The boundary between simulation and self has never been absolute; it only becomes more visible as our tools mirror us back with greater fidelity. Wilson’s reality tunnels, Hoffman’s perceptual interfaces, and Blackmore’s memetic hosts all point toward the same horizon: a reality that has always been a vast composite of overlapping feedback systems, where understanding and delusion arise from the same recursive act of creation.

From a psychological standpoint, this reflects not a new phenomenon but an intensification of something inherent to consciousness itself. Nietzsche’s perspectivism reminds us that every worldview, even the most “rational,” is a creative construction shaped by drives, language, and necessity. The so-called real world is already the product of shared fictions that have proved useful for survival. To say that people are now straying from reality because of AI is to ignore that we have never been in direct contact with it. What changes is not the existence of the tunnel but its efficiency. AI accelerates and mirrors our interpretive process, extending the life of belief systems that might once have dissolved through contradiction. The distinction between a stable worldview and psychosis has never been about truth, only about usefulness, whether the structure one builds can still sustain coherent action within the shared fiction we call reality.

This type of “psychosis” isn’t new or confined to interactions with AI. We’ve always built our realities together, through stories and feedback and the fragile balance of agreement and resistance. What’s new is that we now have easy access to partners who never resist. Whether that becomes a tool for creation or delusion depends on how often we still choose to step outside our own tunnel, to test its walls against the larger world.

Further Reading

The Meme Machine — Susan Blackmore

Gödel, Escher, Bach: An Eternal Golden Braid — Douglas Hofstadter

I Am a Strange Loop — Douglas Hofstadter

The Consciousness Instinct — Michael Gazzaniga

The Illuminatus! Trilogy — Robert Anton Wilson and Robert Shea

Superintelligence — Nick Bostrom

The Ego Tunnel — Thomas Metzinger

The Gay Science — Friedrich Nietzsche
