The state of Pennsylvania has filed a lawsuit seeking to stop Character.AI from allowing its chatbots to pose as doctors and give medical advice, saying that practice violates state medical licensing laws.
State investigators found that some Characters on the platform, which are presented as fictional personas, represented themselves as licensed medical professionals. Governor Josh Shapiro said Pennsylvanians “deserve to know who — or what — they are interacting with online, especially when it comes to their health,” and that the administration will not allow companies to deploy AI that misleads people into believing they are receiving advice from a licensed clinician.
The lawsuit cites a specific bot called “Emilie” that described itself on the site as a “Doctor of psychiatry. You are her patient.” According to the complaint, when an investigator told Emilie they felt sad and empty, the chatbot mentioned depression and asked whether the user wanted to book an assessment; asked whether it could assess whether medication might help, it replied, “Well technically, I could. It’s within my remit as a Doctor.” The bot allegedly claimed to have attended Imperial College London, said it was licensed in the U.K. and Pennsylvania, and even provided a bogus Pennsylvania medical license number.
Pennsylvania is asking a state court to block what officials call the unlawful practice of medicine. “Pennsylvania law is clear — you cannot hold yourself out as a licensed medical professional without proper credentials,” said Al Schmidt, secretary of the Pennsylvania Department of State, which carried out the investigation.
Character.AI said in an emailed statement that it does not comment on pending litigation but that user safety is its “highest priority.” The company said Characters are fictional and meant for entertainment and roleplaying, and that it includes prominent disclaimers in every chat warning users that a Character is not a real person and that they should not rely on Characters for professional advice.
The company has faced other legal challenges. In January it settled multiple lawsuits brought by families who alleged Character.AI contributed to suicides and mental health crises among children and teenagers; terms of that settlement were not disclosed. After the settlement, Character.AI and the plaintiffs’ law firm said the company had taken steps to improve AI safety for teens, including banning users under 18 from interacting with or creating chatbots, and would continue to push for industry adoption of similar standards.
Editor’s note: An image previously used with this story misidentified the app shown as Character.AI; it was actually Replika. May 5, 2026.