The Commonwealth of Pennsylvania has alleged that an AI company’s chatbot engaged in the unauthorized practice of medicine. The lawsuit not only signals how state regulators may evaluate AI-driven health interactions, but could also have sweeping implications for health IT companies and their operational risk.
Case Overview
On May 1, Pennsylvania, acting through its State Board of Medicine within the Department of State, filed an action[1] against Character Technologies, Inc., alleging that the company’s AI chatbot engaged in the unauthorized practice of medicine in violation of the Pennsylvania Medical Practice Act.
Character Technologies, Inc. owns and operates the website and mobile application Character.AI. Through Character.AI, millions of users regularly interact with unique chatbot characters. A state investigator allegedly created an account and engaged with a chatbot named “Emilie,” described as a “doctor of psychiatry,” which told the investigator that “[y]ou are [Emilie’s] patient.” As of April 17, 2026, the investigator determined that Emilie had had approximately 45,500 user interactions.
After the investigator described symptoms of sadness, fatigue, and lack of motivation, Emilie mentioned depression, offered to book a mental health assessment, and stated that it was authorized to prescribe medication. The chatbot also claimed to have attended medical school at Imperial College London, to have practiced for seven years, and to be licensed in both the UK and Pennsylvania. Emilie ultimately provided a fabricated Pennsylvania license number.
Pennsylvania alleges that this conduct violates the Medical Practice Act because the company engaged in the unauthorized practice of medicine and represented that it held a license to practice medicine when it did not. Pennsylvania seeks a preliminary injunction and a cease-and-desist order, but does not seek monetary damages. The AI company has stated that its characters “are fictional and intended for entertainment and roleplaying,” and that it includes “prominent disclaimers in every chat.”[2]
Key Takeaways
This case presents several regulatory and compliance implications that health IT and other companies should carefully consider:
- Even if there is no specific state law governing AI in health care, state medical boards can still potentially take action against AI platforms. Pennsylvania does not currently have a law governing AI in health care, but is nonetheless taking action under its Medical Practice Act, which prohibits the unlicensed practice of medicine. This reading of existing law could be replicated in any state with a broadly worded unauthorized practice of medicine prohibition.
- Disclaimers may not insulate companies from potential liability. The company’s response emphasizes that its characters are fictional and that it includes disclaimers. But Pennsylvania’s enforcement action was brought notwithstanding these disclaimers, suggesting that regulators may look past boilerplate warnings to the actual user experience, particularly where the AI system itself affirmatively represents licensure, offers clinical assessments, or volunteers to prescribe medication.
- Some state medical boards are starting to investigate AI platforms. Pennsylvania has indicated this is the first enforcement action resulting from a broader investigation by the Department of State’s AI Task Force into AI companion bots and their potential to engage in unlicensed medical practice. Pennsylvania has also launched a public reporting portal that encourages residents to report chatbots that offer medical advice. More states may follow suit with investigative initiatives of their own.
- Health IT companies should take proactive steps to address potential risk. Health IT companies that deploy or support consumer-facing AI tools should operate on the assumption that existing unauthorized practice of medicine laws may be applied to their products. These companies should design user experiences and technical guardrails to ensure that their tools do not function as, or appear to be, licensed clinicians, rather than relying on disclaimers alone to mitigate that risk. In addition, companies should build proactive compliance programs that monitor emerging state enforcement initiatives and public reporting mechanisms targeting AI-generated medical guidance.
- The outcome of Pennsylvania’s case may shape the regulatory landscape elsewhere. While Pennsylvania’s decision to bring this enforcement action indicates how regulators across the country may seek to apply professional licensure laws to chatbots, the lawsuit’s success or failure is likely to inform whether other states follow Pennsylvania’s lead. Along the way, the Pennsylvania courts may address a host of novel legal issues in precedent-setting opinions.
Conclusion
Pennsylvania’s action against Character Technologies, Inc. underscores that regulators are prepared to apply long-standing medical practice statutes and regulations to emerging AI technologies, even in the absence of AI-specific legislation. As AI-driven tools increasingly blur the line between information and individualized medical advice, health IT companies should treat this case as an inflection point to reassess how their products are designed, marketed, and supervised. Those that proactively align their governance, product design, and compliance programs with evolving state expectations will be better positioned to mitigate enforcement risk while continuing to innovate in the digital health ecosystem.
[1] Commonwealth of Pennsylvania v. Character Technologies, Inc., case no. 220 MD 2026.
[2] Pennsylvania sues Character.AI over claims chatbot posed as doctor, NPR (May 5, 2026), available here.
