Artificial intelligence is rapidly entering mental health care, prompting both excitement and alarm among clinicians and researchers. Large health systems and solo practitioners are experimenting with AI tools to streamline treatment delivery, but fast adoption and troubling encounters with general-purpose chatbots have heightened unease.
“There is a lot of fear and anxiety about AI,” says Vaile Wright, senior director of health care innovation at the American Psychological Association. She notes a particular worry that AI could replace jobs — a concern that helped trigger a 24-hour strike last month by about 2,400 mental health workers at Kaiser Permanente in Northern California and the Central Valley.
Therapists involved in intake and triage at Kaiser have been especially vocal. Ilana Marcucci-Morris, who has handled telepsychiatry triage since 2019, says her role changed after the system reworked its intake process. What had been a 10-to-15-minute screening performed by a licensed clinician is now often run as an “E-visit” or by unlicensed staff following scripted prompts, she says. Colleagues fear that such changes create openings for AI or further de-skilling of clinical roles.
At Kaiser’s Walnut Creek site, marriage and family therapist Harimandir Khalsa says the triage team shrank from nine clinicians to three, with some responsibilities shifting to telephone service representatives. Workers protested this reduction in licensed triage, among other labor issues. Kaiser Permanente Northern California responded that its use of AI does not replace clinical expertise and confirmed it is evaluating tools from U.K.-based company Limbic, while noting that Limbic is not currently in use.
So far, Wright says, wholesale job displacement has not been widespread. Instead, most AI adoption has targeted administrative burdens — documentation, billing, and updating electronic health records — the time-consuming tasks that pull clinicians away from direct patient care. Properly applied, she argues, AI could improve efficiency and free clinicians to spend more time with patients, though familiarity with and vetting of tools varies across practices.
A growing business ecosystem supports these shifts. Blueprint offers an AI assistant that summarizes sessions, updates records, and monitors patient progress for individual therapists. Limbic builds AI assistants aimed at large health systems; founder and CEO Ross Harper says its technology is deployed across much of the U.K. National Health Service and in 13 U.S. states. One Limbic product, Limbic Care, is trained in cognitive behavioral therapy techniques and can provide evidence-based tools through a patient portal. “If it’s 3 a.m., you can connect and begin working on the challenges you’re experiencing right there and then,” Harper says.
Clinical deployment remains limited, however. Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, notes that many tools are promising but not yet well tested. They can be costly to operate and require IT infrastructure and safety protocols that smaller practices may lack. With sparse regulation, Wright urges providers to thoroughly assess whether tools are safe and effective before adopting them.
Experts expect AI’s role to grow as the technology matures, but they emphasize clinician training and involvement in development and rollout. Striking Kaiser workers have demanded a seat at those decision-making tables. “If AI is utilized, don’t keep us clinicians out of the human process of engaging with our patients in determining the right level of care,” Khalsa says.
Most observers foresee a hybrid model: human clinicians delivering psychotherapy while AI augments care by handling documentation, supporting skills practice and homework, and offering timely feedback. As Wright puts it, no current AI solution can replace human-driven psychotherapy or the therapeutic relationship between clinician and patient.