
Google parent Alphabet's newly updated AI code of ethics conceivably opens the door for the tech giant to start working on AI weapons technology.
Driving the news: The company nixed an internal policy preventing it from using AI to develop technology that is “likely to cause overall harm,” a sign that major AI companies are no longer afraid of getting into the defence business — though it remains a touchy subject.
Why it matters: Last year, Meta changed its policy to allow militaries to use its large language model, OpenAI made a deal with the U.S. Air Force, and Anthropic announced it would sell products to U.S. military and intelligence customers.
Zoom out: The money to be made in AI defence is no doubt a motivating factor. Look no further than Palantir Technologies. Shares in the firm famous for providing AI software for defence clients like the U.S. and U.K. militaries hit an all-time high this week after its latest earnings beat.
- And CEO Alex Karp didn’t mince words during the company’s earnings call about what work Palantir is doing, saying, “We believe we are making America more lethal.”
Bottom line: AI leaders at the likes of Google and Anthropic argue that, as AI expands, democratic nations should get ahead of rivals on tech development, just in case. Critics argue it’s a slippery slope to autonomous killer robots.—QH