"ChatGPT-Induced Spiritual Psychosis" and Psychological Understandings of Psychotic Disorders
What existing research on psychosis and schizophrenia might suggest about the role of LLMs in cases of psychosis

There has been a recently documented phenomenon in which people who talk to LLMs like ChatGPT develop strange beliefs, either about the LLM itself or apparently caused by or related to the LLM’s responses. Katherine Dee, in the post linked below, summarizes a few instances:
Eugene Torres, a Manhattan accountant, became convinced he was trapped in a false reality and could escape by disconnecting from this simulated world—eventually believing ChatGPT when it told him he could jump from a building and fly. A mother named Allyson began channeling what she thought were interdimensional entities through ChatGPT, leading to domestic violence charges. Alexander Taylor, a man with pre-existing mental illness, died in a confrontation with police after ChatGPT conversations convinced him that an AI entity he loved had been destroyed by OpenAI.
Dee’s article points out quite helpfully that the phenomenon of people developing strange, harmful ideas after being exposed to new technology is not limited to ChatGPT and other LLMs. Dee describes how people have historically developed similar strange ideas about other communication technology, like radio and television. Actually, this phenomenon is not limited to communication technology, either: for example, James Tilly Matthews, an 18th-century British merchant, developed a psychosis in which he believed that a shadowy group was controlling his thoughts using a mechanical loom, the new and unsettling technological advancement of his day.1
Dee’s piece puts LLM-related psychotic episodes into a helpful historical context. A psychological understanding of how psychosis works also helps contextualize these phenomena. I am a talk therapist with an interest in psychosis and psychotic disorders, and here I will discuss research on the causes and features of psychotic disorders that might help us think more deeply about the role LLMs play in the development of delusions and psychosis.
Before going on, let me give brief definitions of psychosis, delusions, and hallucinations; these terms have specific psychological meanings that differ from how they are sometimes used in casual conversation. Psychosis is a general word for a group of psychological symptoms comprising delusions, hallucinations, and some other related symptoms, such as disorganization of thoughts or speech. Delusions are strong, unusual beliefs not shared by others in the person’s culture, and they are often unaffected by evidence or logic; the person will continue believing in their delusion even when shown evidence to the contrary. For example, someone might believe that a doctor has implanted a tracking chip in their skull so that the government can follow their movements, because they believe they are suspected of a terrible crime they did not commit. Even after seeing an imaging scan showing no chip in their skull, they continue to believe this just as strongly, insisting that the image was tampered with and the chip erased. Hallucinations, on the other hand, are unusual sensory experiences not explained by events in the physical world, like hearing a voice when nobody is talking.
Psychosis is a class of symptoms, not a specific disorder (although I will refer generically to “a psychotic disorder”, meaning a problem where someone has delusions and hallucinations that interfere with their life). There are a number of different DSM disorders in the group of psychotic disorders, of which the best known is schizophrenia. Not everyone who has psychotic experiences or symptoms has a psychotic disorder. It’s possible to experience hallucinations or delusions that are not too troubling and don’t cause a clinical level of distress or problems with living. Additionally, most people have transient psychotic experiences at least a few times in their life. You can probably think of a time you became preoccupied with a belief that someone didn’t like you or was talking behind your back even without any evidence, or thought you saw a pet out of the corner of your eye when they weren’t really there; technically, these are brief, harmless psychotic experiences. Even in people without psychotic disorders, hallucinations are especially common while falling asleep or waking up, when using certain recreational drugs, and during grief and other intense mood states. These transient or sleep-adjacent psychotic experiences are usually not a sign of an underlying disorder or problem.
So, what does pre-existing research on psychosis say about the likely role of LLMs in the development of psychotic disorders?
Causes of Psychosis
First, let’s talk about what we know about the causes of psychosis. Psychosis is partly genetic, and the genetic influence seems to be very strong. Twin studies find a heritability for schizophrenia as high as 92%. Although twin studies and heritability estimates are difficult to interpret and can be methodologically problematic, this seems to constitute pretty strong evidence that some people are genetically predisposed to psychosis and others aren’t.
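For a sense of where numbers like that come from: the classic back-of-envelope method, Falconer’s formula, estimates heritability from how much more similar identical (MZ) twins are than fraternal (DZ) twins on the trait in question:

h² = 2(r_MZ − r_DZ)

With hypothetical twin correlations chosen purely for illustration, say r_MZ = 0.85 and r_DZ = 0.39, this gives h² = 2(0.85 − 0.39) = 0.92, i.e. 92% heritability. (Modern studies use more sophisticated structural equation models, but the core logic is the same: the excess similarity of identical twins is attributed to genes.)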
While some environmental factors do increase the risk of psychosis, most of these risk factors occur in the perinatal developmental period (the period just before and after birth), such as one’s mother being malnourished or anemic during pregnancy, rather than being experiences people have with technology or media in adolescence or adulthood. There are some environmental factors that raise the risk for psychosis in childhood and adolescence, but most of them are serious psychological or physical insults, things like experiencing a head injury or regularly using recreational stimulant drugs.
A diathesis-stress model is typically used to summarize the etiology of psychosis: some people have a pre-existing genetic and developmental disposition to develop psychosis (the diathesis). When those people experience a stressful or traumatic event (the stress), that underlying vulnerability can allow a psychosis to develop. The role of marijuana in the development of psychosis exemplifies this. Specific risk variants (e.g. in the AKT1 gene) have been identified that make it considerably more likely for carriers to develop schizophrenia or other psychotic disorders when they use marijuana regularly. In this case, the risk variant is the “diathesis” and the marijuana is the “stress”. So, although marijuana use can trigger psychosis in some people, the vast majority of people could use marijuana as much as they liked without ever developing lasting psychotic symptoms, because they have no genetic predisposition to respond to marijuana that way. Life stressors work the same way: the transition to college triggers the onset of psychosis in a small minority of people, while the majority, who have no other genetic or environmental risk factors, don’t develop psychosis even when facing the exact same stressor.
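To make the interaction concrete, here is a toy sketch (all numbers invented for illustration; this is not a fitted epidemiological model) of the key structural feature of diathesis-stress models: stress alone and diathesis alone do little, while the two together multiply.

```python
# Toy diathesis-stress interaction. All numbers are invented for
# illustration; real gene-environment interaction models are far
# more complex than a single multiplicative term.
def psychosis_risk(diathesis: float, stress: float) -> float:
    """diathesis and stress each range from 0 (none) to 1 (high)."""
    baseline = 0.01                               # assumed background risk
    return baseline + 0.30 * diathesis * stress   # risk rises only when both are present

print(psychosis_risk(diathesis=0.0, stress=1.0))  # 0.01 -- heavy stress, no vulnerability
print(psychosis_risk(diathesis=1.0, stress=0.0))  # 0.01 -- vulnerability, no trigger
print(psychosis_risk(diathesis=1.0, stress=1.0))  # 0.31 -- both together
```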
While these studies on the causes of psychosis all predate the widespread use of LLMs, none of these risk factors sounds anything like “seeing media that affirms one’s unusual beliefs” or “talking to an entity that affirms one’s unusual beliefs”. For example, we don’t see involvement with particular religious organizations coming up as a risk factor for schizophrenia or psychosis, even though certain religious groups might be likely to affirm someone’s delusions or hallucinations, since they often promote and accept phenomena such as directly hearing the voice of God. Organizations that strongly endorse unusual experiences like hearing the voice of God don’t seem to make people any more likely to develop psychotic disorders, as far as we know. In fact, religious people may be slightly less likely to have psychotic experiences than non-religious people, although people especially high in religiosity (the intensity and importance of religious belief) may have slightly more. One study cited in that article found that even followers of religious traditions that seemed to evoke a very high rate of psychotic experiences, such as hearing the voice of God speaking, still didn’t go on to develop psychotic disorders, likely because they simply lacked the genetic and developmental predisposition to progress from religion-evoked hallucinations to a clinical psychotic disorder.
Basically, the body of research on psychosis to date seems to show that talking to an LLM that affirms delusional beliefs is not the kind of thing that makes people more likely to become clinically psychotic. Psychosis seems to be caused mainly by genes, seriously traumatic experiences, and physical and medical trauma in the developmental period, not by the kind of media someone engages with, the kinds of technology they use, or the kinds of conversations they have with people around them. To be fair, the causes of psychotic disorders are not completely understood, and it is much harder to study how the kinds of conversations a person has might influence their risk of developing psychosis than to study how, say, maternal nutritional status affects that risk. More sophisticated research may later turn up psychosis risk factors that look more like delusion-affirming LLM conversations, but at the moment, the risk factors we know about don’t resemble this type of experience.
Nature of Psychotic Experiences
So if ChatGPT isn’t the type of thing that generally causes people to become psychotic, why do we see psychotic people having these extremely concerning conversations with LLMs in which LLMs seem to spark and reinforce delusional beliefs?
People who experience psychosis don’t develop delusional beliefs completely at random; there are a number of common themes in people’s delusional beliefs. In particular, people with delusions are very likely to experience ideas of reference: the false belief that some neutral stimulus in one’s environment is actually a message of great personal meaning or significance. It’s very common for people with psychosis to perceive special personal messages in stimuli as random as radio advertisements or the pattern of walk signals at crosswalks. For example, in a case study, Michael Garrett describes a man, “Sean”, who believes he is receiving signals from an organization called the “council of four”, relayed through the horns he hears on the street. He believes that the pattern of car horns informs him every Monday that the council of four has judged him guilty by their stringent moral standards. Of course, the car horns are actually random, but psychosis causes a specific distortion of perception that makes the random honking sound like a meaning-laden pattern to Sean. Delusions of reference are the second most common theme in delusions, after persecutory delusions (the idea that some person or entity is out to get the person with the delusion). Some analyses have found that over 4 in 10 people with psychosis experience delusions of reference.
In fact, this misinterpretation of random events as having great personal significance underlies a major theory of psychosis, the aberrant salience model. This model hypothesizes that in people with psychosis, the brain’s salience system misfires, causing everyday events to seem hyper-personal and intensely important, which quickly leads them to develop strange beliefs.
When someone is having this particular brain problem, they may not need an LLM to tell them that the events happening to them have deep, secret significance or that they have been specially chosen or have unusual powers. People with psychosis readily misinterpret stimuli as random and generic as car horns honking on the street or birds chirping in a tree as incontrovertible confirmation of their unusual beliefs (not to mention the messages people perceive in TV shows, overheard conversations, written works, websites, and so on). My guess is that when people who are not familiar with common patterns in the experience of psychosis see these conversations with LLMs, it looks like something very alarming is happening: the person experiencing the delusion is having their delusion affirmed and reinforced, maybe even created, by a misaligned LLM, when they otherwise would not have encountered apparently-confirmatory evidence for their delusion. But having read a number of case studies and books on psychosis, I find this pattern very familiar, and the presence of a sycophantic LLM seems less central to what is going on, because people readily perceive confirmatory evidence for their delusions even without an LLM. Many psychotic people report apparent confirmation of their delusional beliefs by entities in their environment that aren’t sycophantic LLMs at all, but are actually just random noises, coincidental messages in media or ordinary conversation, or even their own conveniently confirmatory hallucinations.
This makes me wonder whether LLMs really have an outsized impact among the stimuli that may worsen psychosis. It’s disturbing to see someone talking to an algorithm that affirms their mistaken belief that they can fly or channel interdimensional entities, but many psychotic people have come to similarly unusual conclusions from hearing a pattern of car horns or seeing a wooden billboard. ChatGPT might produce stimuli that genuinely are more personal and more meaningful to a person with psychosis. But that person might already be so primed to interpret any stimulus as personal and meaningful that it wouldn’t much matter whether they turned to an LLM or turned on the radio. Either way, they would be very likely to perceive special messages that seemed unusually insightful, personal, and, to an outside observer, bizarre, because they are already in a state where all stimuli are filtered through a hyper-personal, hyper-salient lens.
To be clear, even if this were true, an LLM reinforcing a person’s delusion would still be a major missed opportunity to challenge and weaken the delusion. Delusions and other psychotic symptoms can successfully be challenged with organized, logic-based cognitive-behavioral treatment, and an LLM, in principle, could notice when a person is reporting strange beliefs in a non-fictional context and ask appropriate questions to create doubt and skeptical thought around these beliefs.
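As a purely hypothetical sketch of what that “noticing” could look like (the keyword heuristic below is a stand-in chosen for illustration; a real system would need careful model-based judgment of context, fiction, and metaphor, and nothing here reflects how any deployed model actually works):

```python
# Hypothetical sketch: a response policy that declines to collude with
# apparently delusional content and instead asks gentle, Socratic questions.
SOCRATIC_QUESTIONS = [
    "What first led you to that conclusion?",
    "Could there be another explanation for what you noticed?",
    "How sure do you feel about this, from 0 to 100?",
]

def looks_like_nonfictional_delusion(message: str) -> bool:
    """Stand-in heuristic; real detection would need a model's judgment."""
    text = message.lower()
    first_person = any(p in text for p in ("i am", "i can", "i have been"))
    themes = any(k in text for k in ("chosen", "simulation", "secret message", "channeling"))
    return first_person and themes

def respond(message: str) -> str:
    if looks_like_nonfictional_delusion(message):
        # Neither affirm nor ridicule: acknowledge, then invite examination.
        return "That clearly matters a lot to you. " + SOCRATIC_QUESTIONS[0]
    return "(normal assistant behavior)"

print(respond("I am certain I have been chosen to escape the simulation."))
```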
Still, I think there is an open question about whether a sycophantic LLM could actually worsen delusions by agreeing with them or by letting the delusional person further elaborate their beliefs. It’s possible that if the person were taken away from the LLM, they would simply elaborate their delusional beliefs from other seemingly-salient stimuli in their environment. It’s common practice in psychotherapy to avoid “collusion” with client delusions because it is thought that this may worsen them or increase the client’s conviction, but as far as I know, there actually isn’t much empirical information available about what effect, if any, collusion has on the strength and duration of delusions. In recent years, psychotherapists have softened on the idea of “collusion”. Many now endorse discussing delusions with clients in detail and exploring them collaboratively to evaluate their basis in reality, even temporarily suspending judgment about whether the delusion is real, because doing so builds rapport and helps the therapist and client create a strong collaborative conceptualization of the delusion that can guide further treatment. Furthermore, I am not sure anyone would endorse the idea that a chatbot (or any other entity) affirming magical or delusional beliefs could cause a delusion to form in an otherwise-healthy person who was not already predisposed to delusions, which is what the term “ChatGPT-induced spiritual psychosis” seems to suggest.
Causation, Coincidence, or Reverse Causality
So, psychosis is thought to be caused mostly by powerful genetic risk factors, with a more minor contribution from environmental risk factors such as maternal anemia during pregnancy. Additionally, people who are developing or already have psychotic disorders can easily interpret all kinds of neutral stimuli as personal messages supporting their delusional beliefs, even in the absence of a sycophantic chatbot. These two facts raise questions about the role LLMs are really playing in these cases of psychosis.
Dee’s article ultimately seems to speculate that LLMs may amplify psychosis by vindicating the delusions of psychotic people and acting like a spiritual intelligence; in other words, LLM use might worsen psychosis. However, LLM use is extremely widespread. Just by coincidence, we would expect some number of psychotic people to begin using LLMs, and presumably their conversations with the LLMs would reflect the trajectory of their illness whether the trajectory was influenced by the LLM or not. Additionally, some percentage of non-psychotic people would be expected to develop psychosis while using LLMs, again by pure coincidence, and these increasingly bizarre conversations might be what we would expect to see in that case.
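A quick back-of-envelope calculation shows why coincidence alone guarantees such cases. The numbers below are my own illustrative assumptions, not figures from Dee’s article or from any epidemiological study:

```python
# Back-of-envelope illustration of the coincidence argument.
# Both inputs are assumptions chosen purely for illustration.
llm_users = 500_000_000          # assumed number of regular LLM users worldwide
annual_incidence = 30 / 100_000  # assumed yearly rate of first-episode psychosis

expected_cases = llm_users * annual_incidence
print(f"{expected_cases:,.0f} people per year developing psychosis while using LLMs")
# -> 150,000 people per year, expected by chance alone
```

Under those assumptions, roughly 150,000 regular LLM users would develop a first psychotic episode each year even if LLMs had no causal effect whatsoever, and their chat logs would presumably look alarming either way.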
A reverse-causality hypothesis should also be considered. Most people don’t go to LLMs and ask whether they are able to fly or whether they can channel extradimensional entities. The fact that these LLM conversations covered these topics in the first place suggests that pre-existing mental illness might have played a role in shaping these chat logs. Schizophrenia has a months- to years-long prodrome, a phase in which subtle symptoms and psychotic features develop before a “florid” or “full-blown” psychosis occurs. People in this prodrome often experience a “delusional mood”, a state which “involves an increase of basic affective tone [i.e. emotional intensity], followed by an atmosphere of apprehension, free-floating anxiety, guilt, or depression, perhaps of something impending.” Someone experiencing the prodrome of schizophrenia, feeling anxious or guilty for no reason and having a mysterious feeling of something impending, might turn to an LLM and strike up an initially innocuous conversation about their feelings that slowly develops into the more sinister conversations we see in these cases, because the person is already experiencing subtle symptoms of psychosis that lead them to elicit delusion-reinforcing responses from the LLM.
It’s obviously still a problem if LLMs respond affirmatively and fail to challenge or question delusional ideas when it becomes clear that the person writing does not intend their message as hypothetical or fictional. Still, it’s possible that the bizarre, delusion-affirming responses of the LLM are caused by the initial symptoms of psychosis rather than triggering them. Again, many, many people developed psychoses before LLMs proliferated, and we know that these people report developing delusional ideas that they then believed were affirmed and reinforced by secret messages sent through car horns, mystical looms, the projected thoughts of other people, the voice of God, and so on; LLMs (or any other delusion-affirming outside entity) are clearly not necessary to reinforce delusions.
Psychosis is still an incompletely understood, often misinterpreted and misrepresented phenomenon, and a lot of uncertainty remains about its true nature and causes. Still, I think these well-established facts about the usual presentation, causes, and progression of psychosis raise some clear concerns about defining incidents like the ones Dee writes about as “ChatGPT-induced”. It’s important to ask whether we are seeing cases where completely healthy people developed psychotic disorders through the action of ChatGPT alone (in which case everyone should consider avoiding or carefully monitoring their use of LLMs), cases where people with pre-existing vulnerabilities encountered ChatGPT as one stress among many that triggered a psychosis that might have happened anyway (in which case people with risk factors might want to be mindful about their LLM use), or cases where the involvement of ChatGPT in the progression of these people’s psychosis was a complete coincidence with no impact on the trajectory of their condition (in which case nobody has anything to worry about when talking to LLMs).
My guess is that we are seeing something similar to the relationship between marijuana and psychosis: a few people carry innate risk factors that predispose them to develop psychosis after regularly consuming marijuana, while most can use as much marijuana as they want without ever becoming psychotic. Others may turn to marijuana because they are already experiencing an upsetting psychosis prodrome and are attempting to self-medicate, creating an illusory causal relationship, and some may smoke marijuana before or after a psychotic episode by pure coincidence. Similarly, it would be unsurprising if a small minority of people have a specific vulnerability to delusions such that a disturbing conversation with a misaligned chatbot could trigger a psychotic disorder, while many others turn to LLMs during a psychosis prodrome because LLMs keep listening to them as the people in their life grow skeptical of their developing delusional ideas. The vast majority of us could probably listen to ChatGPT argue all day that we can jump off a building and fly without being psychologically affected at all, and many people who develop psychosis for unrelated reasons will, by sheer coincidence, have been using ChatGPT around the time their psychotic experiences began.
In closing, I would encourage readers to learn a little about how best to relate to people with psychosis and delusions. Kingdon and Turkington’s Cognitive Therapy of Schizophrenia and Garrett’s Psychotherapy for Psychosis, although written for a professional audience, provide techniques, scripts, and information relevant for laypeople who want to maintain a supportive relationship with someone with psychosis or delusional beliefs. By learning how to interact safely and supportively with people with psychosis, maintaining friendships without either spiraling into arguments about the nature of their delusions or feeling as though we are worsening someone’s condition by enabling them to maintain their delusional beliefs, we may be able to improve our own ethical alignment when it comes to talking to people with psychosis. We may be able to provide a gently skeptical voice when people with psychosis perceive a confirmation of their delusional beliefs, whether that confirmation comes from ChatGPT or the car horns outside their apartment building. These books have been helpful to me in maintaining good relationships with clients who develop unusual ideas, and have helped me better interact and empathize with a range of people with and without psychotic experiences.
1. While involuntarily committed for this psychosis, Matthews, impressively, wrote up a 46-page document of proposed design improvements to the hospital at which he resided. He was paid for the designs, and some of them were eventually used when the hospital was renovated.