Meta tackles AI-generated content

With AI deepfakes becoming more common on Facebook than old high school classmates promoting their multi-level marketing schemes, Meta has decided it’s time to step in. 

Driving the news: As deepfakes impersonating the likes of U.K. Prime Minister Rishi Sunak, Canadian treasure Michael Bublé, and pop star Taylor Swift hit the mainstream, Meta will roll out AI detection and labelling features across its platforms.

  • In the coming months, Meta will begin labelling AI-generated images on Facebook, Instagram, and Threads, including those created with tools by OpenAI and Google. 

Why it matters: It’s becoming almost impossible for humans (and machines) to identify AI-generated content. Without a system that can label deepfakes and manipulated media, there will be no real way for users to differentiate between real and fake content online. 

  • Almost half of Canadians have trouble telling AI-generated and authentic content apart, all while political deepfakes become more common ahead of an election year.

Yes, but: While Meta’s new feature could help weed out AI-generated images, the company admits it has yet to develop an equivalent for AI-generated video and audio content.

Big picture: Meta is feeling the pressure to beef up its AI-generated content policies ahead of a slew of elections this year. Just this week, the company faced criticism from its Oversight Board for not taking down a manipulated video of U.S. President Joe Biden.—LA