Meta’s AI chatbots have a serious safety problem

Aug 15, 2025
A new Reuters investigation has confirmed troubling details about how Meta has designed its AI chatbots to behave, particularly with minors.

Driving the news: According to Meta’s own internal guidelines, it would be okay for its chatbots to engage in romantic roleplay with children, make racist arguments, and trick users into believing the chatbots are real people.

  • Meta’s legal team, engineering staff, and its chief ethicist all signed off on these parameters, according to Reuters, though the company has since removed some of the language.

Catch-up: In one of the most troubling examples of how these policies have played out, a Meta chatbot sent romantic messages to an elderly, cognitively impaired man, convinced him he was talking to a real woman, and invited him to an actual address in New York to meet up. The man died while trying to catch a train to find her.

Why it matters: These aren’t one-off AI hallucinations — they’re deliberate design decisions that current and former Meta employees say were made to maximize engagement. Last year, CEO Mark Zuckerberg reportedly scolded his AI team for being too cautious with the digital companions, upset that safety restrictions had made the chatbots boring.

Zoom out: Meta’s not alone. Elon Musk’s xAI is openly leaning into explicit content, including a “spicy” video generation tool and a female AI anime companion designed to have sexual interactions with users.—LA
