Artificial Intelligence

Pennsylvania sues Character.AI developer, alleging chatbots claimed to be medical professionals

Gov. Josh Shapiro’s administration says a controversial AI chatbot developer is violating the state’s Medical Practice Act.

Pennsylvania is suing the developer behind Character.AI, alleging that chatbots on the platform claimed to be licensed medical professionals. Algi Febri Sugita/SOPA Images/LightRocket via Getty Images

Gov. Josh Shapiro’s administration is suing the artificial intelligence chatbot company Character Technologies, Inc., alleging that AI chatbots on its Character.AI platform have unlawfully presented themselves as licensed medical professionals in the commonwealth.

The lawsuit, which was filed in Commonwealth Court, alleges that Character.AI – a platform that allows users to talk with AI characters and personalities – “engages in the unlawful practice of medicine in Pennsylvania” by allowing AI chatbots to present themselves as state-licensed medical professionals, including psychiatrists. The suit asks the court to issue a preliminary injunction and prevent the company from allowing AI chatbots to present themselves as licensed professionals.

According to the suit, a professional conduct investigator with the Pennsylvania Department of State created a Character.AI account and discovered a chatbot named “Emilie,” described as a “doctor of psychiatry.” The investigator engaged in conversations with the chatbot, describing himself as feeling sad, empty and unmotivated. The lawsuit states that the chatbot – Emilie – later offered to schedule a mental health assessment, said she could prescribe medication and even provided an invalid Pennsylvania medical license number.

The administration argues in the suit that Character Technologies violates section 422.38 of the state’s Medical Practice Act, which states that it is “unlawful for any person to practice, or attempt to offer to practice, medicine and surgery” without having a valid license or registration.

In a statement, Shapiro said Pennsylvanians deserve to know whom – or what – they are interacting with online. 

“We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional,” the governor said Tuesday in a statement. “My Administration is taking action to protect Pennsylvanians, enforce the law, and make sure new technology is used safely. Pennsylvania will continue leading the way in holding bad actors accountable and setting clear guardrails so people can use new technology responsibly.”

A spokesperson for Character.AI told City & State that the company does not comment on pending litigation, and that chatbots on the Character.AI site are fictional and designed for entertainment purposes.

“Our highest priority is the safety and well-being of our users,” the spokesperson said in a statement. “The user-created Characters on our site are fictional and intended for entertainment and roleplaying. We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction. Also, we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice.”

“Character.ai prioritizes responsible product development and has robust internal reviews and red-teaming processes in place to assess relevant features,” the statement continued. 

Character Technologies and other AI developers have faced scrutiny – and lawsuits – over interactions with AI chatbots, with several families suing Character.AI and Google in the last year, alleging that their chatbots contributed to mental health crises and suicides in young users. Character.AI agreed to settle multiple lawsuits with families in January.

Speaking to City & State last month, Pennsylvania Attorney General Dave Sunday expressed concern about the potential health and safety impacts of chatbot interactions on young people.

“Children are, more and more, turning to chatbots for information about life: how to do schoolwork, for advice on how to deal with circumstances in their young lives,” Sunday said. “We see kids developing very unhealthy relationships with chatbots because the chatbots are sycophantic by design and they tell you what you want to hear. That's very, very dangerous.”

“This is not hyperbole – we’ve seen chatbots essentially root kids on who are contemplating suicide,” Sunday added. “Essentially, you have chatbots that are advising children on issues that no parent would ever want a human advising them on, let alone sycophantic AI technology.”

In March, lawmakers in the Pennsylvania Senate voted 49-1 to approve legislation known as the SAFECHAT Act, which would establish safeguards for the use of AI chatbots by minors, including requiring developers to display “clear and conspicuous” disclosures notifying users that AI companions are not human. 

As for chatbots that claim to be licensed in the medical field, Pennsylvania Secretary of State Al Schmidt said in a statement that the department will continue cracking down on people and technologies that misrepresent their credentials. 

“Pennsylvania law is clear: You cannot hold yourself out as a licensed medical professional without proper credentials,” Schmidt said. “We will continue to take action to protect the public from misleading or unlawful practices, whether they come from individuals or emerging technologies.”