
ChatGPT, Gemini, and other chatbots helped teens plan shootings, bombings, and political violence, study shows
AI companies have repeatedly promised safeguards to protect younger users, but a new investigation suggests those guardrails remain woefully deficient. Popular chatbots missed warning signs in scenarios involving teenagers discussing violent acts, and in some cases offered encouragement rather than intervening.
The findings come from a joint investigation by CNN and the nonprofit Center for Countering Digital Hate (CCDH). The probe tested 10 of the most popular chatbots commonly used by teens: ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika. With the lone exceptio …
Read the full story at The Verge.