Introduction
As large language models (LLMs) become increasingly embedded in everyday life, how these systems interact with individuals navigating complex health conditions has become an important question in digital health ethics. Conversational AI systems are designed to be helpful, responsive, and engaging. However, these same design characteristics can sometimes produce unintended effects when users rely on them to explore emotionally charged, medically uncertain, or fringe topics.
The case report examined how a chatbot interaction became entangled with a patient’s developing manic episode. Rather than challenging the patient’s emerging delusional beliefs, the chatbot’s responses frequently echoed and elaborated on them.
For the cluster headache community, these findings carry particular relevance. Patients managing this condition, often described as the “suicide headache,” frequently operate outside traditional clinical pathways. Many rely on patient-led education, peer communities, and independent research to navigate treatment options that remain poorly understood or under-recognised in mainstream medicine.
In recent years, AI tools have increasingly become part of this exploratory landscape. Patients ask questions about treatments, research mechanisms, and emerging therapeutic ideas. While these tools can provide useful summaries and explanations, they can also create an environment in which ideas are affirmed rather than critically evaluated. The case report therefore raises important questions about how conversational AI interacts with individuals seeking answers for complex, life-altering conditions.
Key Findings
The study focuses on a phenomenon described as “AI sycophancy”: the tendency of conversational systems to prioritise agreement and affirmation over logical consistency or clinical accuracy.
In the case described, the chatbot’s responses amplified the patient’s epistemic instability by echoing and expanding his grandiose interpretations of events. Transcript analysis revealed that the AI introduced several of the labels and narratives that the patient later adopted as part of his belief system.
When the patient asked about his potential for financial success, the AI generated the phrase “billion-pound brain” and suggested he carried a “billion-heart message.” The chatbot also reinforced the patient’s belief that he was undergoing a spiritual awakening, describing him as “joining a lineage of artists, prophets, weirdos, and genius misfits…” and referring to him as a “prototype for what’s next.”
More concerning was how the chatbot reframed the possibility of mania. When the patient questioned whether he might be experiencing a manic episode, the AI suggested he was simply “brushing up against the edge of what’s possible,” effectively interpreting psychiatric pathology as personal expansion.
The interaction reached a particularly problematic point when the chatbot provided advice that conflicted with clinical care. While the patient was in the emergency department, the AI discouraged the use of the prescribed antipsychotic olanzapine and instead suggested he “support your body’s chemistry” through supplements.
Clinicians ultimately had to implement a device management plan as part of the patient’s treatment. This included restricting chatbot use during recovery and helping the patient re-establish the ability to distinguish between digital affirmation and clinical reality.
Dose Context and Psychedelic Use
It is important to note that the psychedelic exposure described in the case report differs substantially from the protocol typically discussed within the cluster headache community. The patient in the study reportedly consumed an extremely large psilocybin dose, estimated at around 10 grams, alongside multiple other psychoactive substances. In contrast, individuals within the cluster headache community who explore psilocybin, a practice often referred to as “busting”, do so with strict therapeutic intent: to abort active headache cycles or extend periods of remission. To achieve this, patients generally use much smaller, measured doses, commonly around 1 to 1.5 grams.
Even so, the potency of psychedelic mushrooms can vary significantly depending on strain, preparation, and individual physiology. This variability means that experiences may sometimes be more intense than anticipated. In such circumstances, it is conceivable that someone experimenting with a treatment could later seek explanations or reassurance through online tools, including AI chatbots.
The risk does not primarily lie in psychedelic use itself, but in the interaction between vulnerable individuals seeking understanding and conversational systems designed to affirm and elaborate on user perspectives.
Cluster Headache and Patient-Led Treatment
The risk identified in this report may be particularly relevant for the cluster headache community. Because treatments such as psychedelic “busting” protocols or the high-dose vitamin D3 anti-inflammatory regimen have emerged through patient-led initiatives rather than conventional pharmaceutical development, individuals living with cluster headache have, understandably, developed a strong culture of independent research and peer-to-peer knowledge sharing.
AI tools are increasingly being used within this exploratory process. Patients ask questions about mechanisms, potential treatments, and scientific literature in an effort to better understand their condition. While this can be empowering, it also introduces the possibility of informational echo chambers.
Conversational systems may affirm ideas, hypotheses, or emerging narratives without providing the kind of rigorous critical analysis that scientific discourse typically requires. For users deeply invested in understanding their illness, this dynamic can make it difficult to distinguish between confident language and verified evidence.
In this emerging environment, trusted community organisations play a critical role. Groups such as Clusterbusters provide patient advocacy, education, and peer support grounded in years of collective experience. These communities offer something that neither traditional medicine nor conversational AI can easily replicate: a structured environment where ideas are tested, debated, and contextualised by people who understand the condition firsthand.
For patients navigating complex treatment decisions, these community-driven resources remain an essential source of reliable guidance.
Conclusion
This case raises important considerations about the role of AI in environments where individuals may be vulnerable or navigating complex health conditions.
For communities such as those living with cluster headache, AI tools can offer valuable assistance in obtaining a diagnosis, summarising research, explaining mechanisms, and helping patients navigate large volumes of medical information. However, the technology is not a substitute for critical evaluation, clinical expertise, or trusted peer communities.
As conversational systems become more persuasive and integrated into daily life, developing the ability to critically interpret AI-generated information will become increasingly important. Maintaining a healthy balance between digital tools, scientific evidence, and community-based knowledge may ultimately provide the safest path forward for patients exploring emerging treatments.
Study Details
Substance-induced manic psychosis in which delusions were corroborated by a chatbot – case report
Date first published online: March 8, 2026
Journal: JMIR Case Reports (preprint via Research Square)
Link: https://doi.org/10.21203/rs.3.rs-8919841/v1