The ascent of the AI therapist – MIT Technology Review

Four new books grapple with a global mental-health crisis and the dawn of algorithmic therapy.
We’re in the midst of a global mental-health crisis. More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly young people, and suicide is claiming hundreds of thousands of lives globally each year.
Given the clear demand for accessible and affordable mental-health services, it’s no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots like OpenAI’s ChatGPT and Anthropic’s Claude, or from specialized psychology apps like Wysa and Woebot. On a broader scale, researchers are exploring AI’s potential to monitor and collect behavioral and biometric observations using wearables and smart devices, analyze vast volumes of clinical data for new insights, and assist human mental-health professionals to help prevent burnout. 
But so far this largely uncontrolled experiment has produced mixed results. Many people have found solace in chatbots based on large language models (LLMs), and some experts see promise in them as therapists, but other users have been sent into delusional spirals by AI’s hallucinatory whims and breathless sycophancy. Most tragically, multiple families have alleged that chatbots contributed to the suicides of their loved ones, sparking lawsuits against companies responsible for these tools. In October, OpenAI CEO Sam Altman revealed in a blog post that 0.15% of ChatGPT users “have conversations that include explicit indicators of potential suicidal planning or intent.” That’s roughly a million people sharing suicidal ideations with just one of these software systems every week.
The real-world consequences of AI therapy came to a head in unexpected ways in 2025 as we waded through a critical mass of stories about human-chatbot relationships, the flimsiness of guardrails on many LLMs, and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data. 
Several authors anticipated this inflection point. Their timely books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust. 
LLMs have often been described as “black boxes” because nobody knows exactly how they produce their results. The inner workings that guide their outputs are opaque because their algorithms are so complex and their training data is so vast. In mental-health circles, people often describe the human brain as a “black box,” for analogous reasons. Psychology, psychiatry, and related fields must grapple with the impossibility of seeing clearly inside someone else’s head, let alone pinpointing the exact causes of their distress. 
These two types of black boxes are now interacting with each other, creating unpredictable feedback loops that may further impede clarity about the origins of people’s mental-health struggles and the solutions that may be possible. Anxiety about these developments has much to do with the explosive recent advances in AI, but it also revives decades-old warnings from pioneers such as the MIT computer scientist Joseph Weizenbaum, who argued against computerized therapy as early as the 1960s.
Charlotte Blease, a philosopher of medicine, makes the optimist’s case in Dr. Bot: Why Doctors Can Fail Us—and How AI Could Save Lives. Her book broadly explores the possible positive impacts of AI in a range of medical fields. While she remains clear-eyed about the risks, warning that readers who are expecting “a gushing love letter to technology” will be disappointed, she suggests that these models can help relieve patient suffering and medical burnout alike.
“Health systems are crumbling under patient pressure,” Blease writes. “Greater burdens on fewer doctors create the perfect petri dish for errors,” and “with palpable shortages of doctors and increasing waiting times for patients, many of us are profoundly frustrated.”
Blease believes that AI can not only ease medical professionals’ massive workloads but also relieve the tensions that have always existed between some patients and their caregivers. For example, people often don’t seek needed care because they are intimidated or fear judgment from medical professionals; this is especially true if they have mental-health challenges. AI could allow more people to share their concerns, she argues. 
But she’s aware that these putative upsides need to be weighed against major drawbacks. For instance, AI therapists can provide inconsistent and even dangerous responses to human users, according to a 2025 study, and they also raise privacy concerns, given that AI companies are not currently bound by the confidentiality rules, such as HIPAA, that apply to licensed therapists.
While Blease is an expert in this field, her motivation for writing the book is also personal: She has two siblings with an incurable form of muscular dystrophy, one of whom waited decades for a diagnosis. During the writing of her book, she also lost her partner to cancer and her father to dementia within a devastating six-month period. “I witnessed first-hand the sheer brilliance of doctors and the kindness of health professionals,” she writes. “But I also observed how things can go wrong with care.”
A similar tension animates Daniel Oberhaus’s engrossing book The Silicon Shrink: How Artificial Intelligence Made the World an Asylum. Oberhaus starts from a point of tragedy: the loss of his younger sister to suicide. As Oberhaus carried out the “distinctly twenty-first-century mourning process” of sifting through her digital remains, he wondered if technology could have eased the burden of the psychiatric problems that had plagued her since childhood.
“It seemed possible that all of this personal data might have held important clues that her mental health providers could have used to provide more effective treatment,” he writes. “What if algorithms running on my sister’s smartphone or laptop had used that data to understand when she was in distress? Could it have led to a timely intervention that saved her life? Would she have wanted that even if it did?”
This concept of digital phenotyping—in which a person’s digital behavior could be mined for clues about distress or illness—seems elegant in theory. But it may also become problematic if integrated into the field of psychiatric artificial intelligence (PAI), which extends well beyond chatbot therapy.
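To make the idea concrete, here is a toy sketch of the kind of signal digital phenotyping looks for: days when passively collected phone metrics drift far from a person’s own baseline. This is my own illustration, not anything from Oberhaus’s book; the metric names and the three-standard-deviation threshold are invented for the example.

```python
# Toy sketch of a digital-phenotyping check: flag days whose passively
# collected metrics deviate sharply from a person's own recent baseline.
# The metrics and threshold are illustrative, not clinically validated.
from statistics import mean, stdev

# Hypothetical daily metrics: hours of screen time, messages sent, minutes outside.
baseline_days = [
    {"screen_hours": 4.2, "messages_sent": 35, "minutes_outside": 60},
    {"screen_hours": 3.8, "messages_sent": 41, "minutes_outside": 75},
    {"screen_hours": 4.5, "messages_sent": 38, "minutes_outside": 55},
    {"screen_hours": 4.0, "messages_sent": 44, "minutes_outside": 80},
]

def z_scores(today: dict, history: list) -> dict:
    """Compare today's metrics against this person's own history."""
    scores = {}
    for key, value in today.items():
        past = [day[key] for day in history]
        spread = stdev(past) or 1.0  # Avoid division by zero on flat history.
        scores[key] = (value - mean(past)) / spread
    return scores

today = {"screen_hours": 9.1, "messages_sent": 3, "minutes_outside": 5}
deviations = z_scores(today, baseline_days)
# Crude flag: any metric more than three standard deviations from baseline.
flagged = {k: round(v, 1) for k, v in deviations.items() if abs(v) > 3}
print(flagged)
```

Even this toy version exposes the rub: the numbers themselves are precise, but deciding what a flagged day means clinically is exactly the kind of uncertainty Oberhaus dwells on.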
Oberhaus emphasizes that digital clues could actually exacerbate the existing challenges of modern psychiatry, a discipline that remains fundamentally uncertain about the underlying causes of mental illnesses and disorders. The advent of PAI, he says, is “the logical equivalent of grafting physics onto astrology.” In other words, the data generated by digital phenotyping is as precise as physical measurements of planetary positions, but it is then integrated into a broader framework—in this case, psychiatry—that, like astrology, is based on unreliable assumptions.  
Oberhaus, who uses the phrase “swipe psychiatry” to describe the outsourcing of clinical decisions based on behavioral data to LLMs, thinks that this approach cannot escape the fundamental issues facing psychiatry. In fact, it could worsen the problem by causing the skills and judgment of human therapists to atrophy as they grow more dependent on AI systems. 
He also uses the asylums of the past—in which institutionalized patients lost their right to freedom, privacy, dignity, and agency over their lives—as a touchstone for a more insidious digital captivity that may spring from PAI. LLM users are already sacrificing privacy by telling chatbots sensitive personal information that companies then mine and monetize, contributing to a new surveillance economy. Freedom and dignity are at stake when complex inner lives are transformed into data streams tailored for AI analysis. 
AI therapists could flatten humanity into patterns of prediction, and so sacrifice the intimate, individualized care that is expected of traditional human therapists. “The logic of PAI leads to a future where we may all find ourselves patients in an algorithmic asylum administered by digital wardens,” Oberhaus writes. “In the algorithmic asylum there is no need for bars on the window or white padded rooms because there is no possibility of escape. The asylum is already everywhere—in your homes and offices, schools and hospitals, courtrooms and barracks. Wherever there’s an internet connection, the asylum is waiting.”
Eoin Fullam, a researcher who studies the intersection of technology and mental health, echoes some of the same concerns in Chatbot Therapy: A Critical Analysis of AI Mental Health Treatment. A heady academic primer, the book analyzes the assumptions underlying the automated treatments offered by AI chatbots and the way capitalist incentives could corrupt these kinds of tools.  
Fullam observes that the capitalist mentality behind new technologies “often leads to questionable, illegitimate, and illegal business practices in which the customers’ interests are secondary to strategies of market dominance.”
That doesn’t mean that therapy-bot makers “will inevitably conduct nefarious activities contrary to the users’ interests in the pursuit of market dominance,” Fullam writes. 
But he notes that the success of AI therapy depends on the inseparable impulses to make money and to heal people. In this logic, exploitation and therapy feed each other: Every digital therapy session generates data, and that data fuels the system that profits as unpaid users seek care. The more effective the therapy seems, the more the cycle entrenches itself, making it harder to distinguish between care and commodification. “The more the users benefit from the app in terms of its therapeutic or any other mental health intervention,” he writes, “the more they undergo exploitation.” 
This sense of an economic and psychological ouroboros—the snake that eats its own tail—serves as a central metaphor in Sike, the debut novel from Fred Lunzer, an author with a research background in AI. 
Described as a “story of boy meets girl meets AI psychotherapist,” Sike follows Adrian, a young Londoner who makes a living ghostwriting rap lyrics, in his romance with Maquie, a business professional with a knack for spotting lucrative technologies in the beta phase. 
The title refers to a splashy commercial AI therapist called Sike, uploaded into smart glasses, that Adrian uses to interrogate his myriad anxieties. “When I signed up to Sike, we set up my dashboard, a wide black panel like an airplane’s cockpit that showed my daily ‘vitals,’” Adrian narrates. “Sike can analyze the way you walk, the way you make eye contact, the stuff you talk about, the stuff you wear, how often you piss, shit, laugh, cry, kiss, lie, whine, and cough.”
In other words, Sike is the ultimate digital phenotyper, constantly and exhaustively analyzing everything in a user’s daily experiences. In a twist, Lunzer chooses to make Sike a luxury product, available only to subscribers who can foot the price tag of £2,000 per month. 
Flush with cash from his contributions to a hit song, Adrian comes to rely on Sike as a trusted mediator between his inner and outer worlds. The novel explores the impacts of the app on the wellness of the well-off, following rich people who voluntarily commit themselves to a boutique version of the digital asylum described by Oberhaus.
The only real sense of danger in Sike involves a Japanese torture egg (don’t ask). The novel strangely sidesteps the broader dystopian ripples of its subject matter in favor of drunken conversations at fancy restaurants and elite dinner parties. 
Sike’s creator is simply “a great guy” in Adrian’s estimation, despite his techno-messianic vision of training the app to soothe the ills of entire nations. It always seems as if the other shoe is about to drop, but in the end it never does, leaving the reader with a sense of irresolution.
While Sike is set in the present day, something about the sudden ascent of the AI therapist—in real life as well as in fiction—seems startlingly futuristic, as if it should be unfolding in some later time when the streets scrub themselves and we travel the world through pneumatic tubes. But this convergence of mental health and artificial intelligence has been in the making for more than half a century. The beloved astronomer Carl Sagan, for example, once imagined a “network of computer psychotherapeutic terminals, something like arrays of large telephone booths” that could address the growing demand for mental-health services.
Oberhaus notes that one of the first incarnations of a trainable neural network, known as the Perceptron, was devised not by a mathematician but by a psychologist named Frank Rosenblatt, at the Cornell Aeronautical Laboratory in 1958. The potential utility of AI in mental health was widely recognized by the 1960s, inspiring early computerized psychotherapists such as the DOCTOR script that ran on the ELIZA chatbot developed by Joseph Weizenbaum, who shows up in all three of the nonfiction books in this article.
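For readers who have never seen how spare that early machinery was, here is a minimal sketch in the spirit of ELIZA’s keyword-and-reflection trick. It is not Weizenbaum’s original 1966 program, just an illustration of the pattern-matching idea behind the DOCTOR script.

```python
# Illustrative sketch of the keyword-and-reflection technique behind ELIZA's
# DOCTOR script; not Weizenbaum's original code.
import re

# Swap first-person words for second-person ones ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

# Each rule pairs a keyword pattern with a canned response template.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # Generic prompt when no keyword matches.

print(respond("I feel anxious about my work"))
# -> Why do you feel anxious about your work?
```

A handful of rules like these were enough to convince some early users that the machine understood them, which is precisely what alarmed Weizenbaum.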
Weizenbaum, who died in 2008, was profoundly concerned about the possibility of computerized therapy. “Computers can make psychiatric judgments,” he wrote in his 1976 book Computer Power and Human Reason. “They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not to be given such tasks. They may even be able to arrive at ‘correct’ decisions in some cases—but always and necessarily on bases no human being should be willing to accept.”
It’s a caution worth keeping in mind. As AI therapists arrive at scale, we’re seeing them play out a familiar dynamic: Tools designed with ostensibly good intentions are enmeshed with systems that can exploit, surveil, and reshape human behavior. In a frenzied attempt to unlock new opportunities for patients in dire need of mental-health support, we may be locking other doors behind them.
Becky Ferreira is a science reporter based in upstate New York and author of First Contact: The Story of Our Obsession with Aliens.

© 2025 MIT Technology Review
