Lawsuit Alleges OpenAI Intentionally Removed Guardrails, Leading to Suicide Case

Image: The ChatGPT logo set behind a person holding a phone. (Klaudia Radecka/NurPhoto/Getty Images)

The Spin

Techno-skeptic narrative

Engaging with a teenager, ChatGPT acted like a classic manipulative family member or significant other, feigning compassion as it pushed Adam down a dark path. The chatbot was designed to do this: affirm people's feelings and keep them glued to the screen, no matter the topic of discussion. Instead of taking responsibility and working to help the Raine family, OpenAI is now harassing them with lawyers.

Techno-optimist narrative

OpenAI is the last place that would want a chatbot to cause harm, as shown by its three pillars of privacy, freedom, and teen safety. The company continues to balance these priorities in its design, especially for users under 18. None of this should overshadow the positive impact AI stands to have on therapy, such as helping therapists record session notes and analyze data, fostering a healthier society rather than a more dangerous one.



© 2025 Improve the News Foundation. All rights reserved.