AI companies are facing a legal reckoning over their chatbots' worst tendencies.
What happened: The family of a young girl who was critically injured in the Tumbler Ridge, B.C., shooting is now suing OpenAI, claiming the company had knowledge of the shooter’s plans but deliberately chose not to alert law enforcement.
The lawsuit also argues that OpenAI rushed its signature AI model to market without sufficient safety testing and that the company is aware of ChatGPT’s “hazardous defects.”
Catch-up: After its systems flagged violent conversations between ChatGPT and the alleged Tumbler Ridge shooter, OpenAI decided not to alert Canadian law enforcement about the potential threat, despite multiple concerned employees urging senior leadership to do so as far back as June.
OpenAI did ban the account of Jesse Van Rootselaar, but said her activity didn’t meet the bar for contacting law enforcement, even though some OpenAI staffers viewed the interactions as showing an intent to commit real-world violence.
After meeting with government officials to answer for the Tumbler Ridge decision, OpenAI CEO Sam Altman said he would apologize to the victims’ families and that the company has changed ChatGPT’s reporting process to flag similar incidents in the future.
Why it matters: The Tumbler Ridge case is part of a wave of recent legal action aimed at holding AI companies responsible for the tragic incidents that their chatbots either encouraged or failed to deter.
In a wrongful death case brought against Google last week, a father in Florida alleges that the company’s chatbot drew his son into a romantic relationship, sparking a delusional spiral that ended in the son’s suicide. OpenAI and Character AI have faced similar lawsuits in recent months.
Our take: Given how slow lawmakers have been to regulate AI, it could be up to the courts to decide how the technology is governed for now.—LA
