The White House is having second thoughts about its let-'er-rip approach to AI.
What happened: The Trump administration is considering a plan to require government reviews of new AI models by an oversight committee that would include representatives from both industry and government.
The news follows a shakeup of the White House’s policy team in March that saw the departure of tech mogul and podcast host David Sacks, an outspoken critic of AI regulation.
Google, Microsoft, and xAI have agreed to provide the U.S. government with early versions of their latest models to probe them for security risks.
Catch-up: The U.S. under Donald Trump has led a global charge against AI regulation. You may recall Vice President JD Vance using his first major speech abroad to lecture attendees about the dangers of AI regulation, saying, "The AI future is not going to be won by hand-wringing about safety."
Why it matters: With the world’s most advanced AI companies all based in the U.S., the American government is the only entity capable of effectively regulating (or not) the rollout of this technology. Its posture will have enormous influence on how AI impacts everyone, even outside the U.S.
Why it's happening: AI models are becoming more sophisticated, and that sophistication raises the risk they'll be used for nefarious purposes like hacking, fraud, or terrorism.
For example, researchers have gotten already-available AI tools to provide step-by-step instructions for making biological weapons and deploying them in public spaces to maximize casualties.
Zoom out: In a recently published blog post, Anthropic co-founder Jack Clark said there is a good chance AI will be “powerful enough that it could plausibly autonomously build its own successor” by the end of 2028. There is no doubt that governments will demand a say in how tech with capabilities like that is used.—TS