
People who go on about the danger of AI get written off as ‘doomers,’ but when it’s the people who helped build the tech in the first place, it’s probably wise not to brush them off.
Conversations about AI risk are back in the spotlight after current and former OpenAI employees signed an open letter accusing AI companies of being reckless, particularly in the pursuit of artificial general intelligence (AGI) that can think and reason like a human.
- “OpenAI is trying to build a system that can do everything, and that has huge safety question marks,” said Wyatt Tessari L'Allié, executive director of Artificial Intelligence Governance & Safety Canada.
Why it matters: It’s one thing when early AI pioneers or the Pope urge caution with AI. But now, a growing number of people who have worked at OpenAI are accusing the company of sidelining safety and security concerns.
- It’s not just OpenAI: Experts at Google DeepMind, Anthropic, and Meta also expressed concerns about a “lax approach to safety” in a U.S. State Department report.
Big picture: Most of us are already aware of the risks AI can bring, including misinformation, deepfakes, and cybercrime. With AGI, though, those problems get more sophisticated, and new, more existential risks emerge.
- The risks probably won’t be Terminator robots storming cities — but they could be advanced military AI destabilizing global security or economic upheaval from job replacement.
- AI could also perpetuate inequality and fuel civil unrest, for example, if banks used biased AI to decide who gets a loan, or if law enforcement used it for facial recognition.
In Canada: The feds plan to set up an AI Safety Institute to research and protect against AI risks. It’s too early to know how effective it will be, but L'Allié points out that the $50 million earmarked for it may not be enough to keep pace with the billions being poured into the development side.