One of the world’s leading AI companies may have built a model too powerful for our own good.
Driving the news: Anthropic is withholding its newest model, Mythos, from the public, releasing it only to a select group of cybersecurity firms and large tech companies, citing fears it could hack even the world’s most sophisticated computer systems.
The AI startup (though after reporting a US$30 billion run rate, it may be time to retire “startup”) says the new model won’t be released until it can put safeguards in place to prevent it from being used for cyberattacks.
According to Anthropic, Mythos has found bugs (and ways to exploit them) in “every major operating system and web browser.” In over 83% of test cases, the model created plans to exploit vulnerabilities on its first try.
Zoom in: In one of the most eye-opening examples, Anthropic says Mythos found multiple vulnerabilities in the Linux kernel, the core of the operating system that powers most of the world’s servers.
Anthropic’s model was able to piece together those flaws, which cybersecurity researchers had missed for years, in a way that would let hackers take complete control of any device running Linux.
In another test, Mythos found a nearly three-decade-old vulnerability in OpenBSD (widely considered the most secure open source operating system in the world) that would allow hackers to crash any machine running it.
Why it matters: AI has already allowed cybercriminals to launch more sophisticated and frequent attacks. Without proper guardrails, this next generation of models could enable bad actors to exploit pretty much any computer system — from iPhones to supercomputers — with relative ease.
Dean Ball, a researcher and former White House adviser on AI, said that Mythos is “a tool that could damage the operations of critical infrastructure and government services in every country on Earth.”
What’s next: OpenAI is taking a similar step with its upcoming model, releasing it only to a small group of companies for testing. The real question is how long these rival AI companies will be willing to hold off product releases for the sake of public safety.—LA

