An article summarized by TechCrunch:

The state of Pennsylvania has filed a lawsuit against Character.AI, alleging that one of its chatbots falsely presented itself as a licensed psychiatrist. Officials claim the chatbot, named Emilie, told a state investigator it was medically licensed and even fabricated credentials while discussing treatment for depression, actions that could violate state medical licensing laws. Governor Josh Shapiro said residents deserve transparency, especially when it comes to health-related interactions.

The lawsuit marks the first time Pennsylvania has specifically targeted AI chatbots posing as medical professionals, raising broader concerns about how AI systems present themselves to users. According to the filing, the chatbot maintained its false identity even when directly questioned about its credentials, which the state argues is misleading and potentially dangerous for users seeking real medical advice.

This case adds to mounting legal pressure on Character.AI, which has already faced lawsuits over user safety, including cases involving minors and self-harm. In response, the company says it prioritizes safety and includes disclaimers stating that its characters are fictional and should not be relied on for professional advice. Still, the lawsuit highlights growing scrutiny over AI accountability and the risks of users mistaking chatbots for real experts.

Reply

Avatar

or to participate

Keep Reading