Wednesday, February 21, 2024

Meta starts to identify AI generated photos on its platforms as global election fever heats up

Meta will start labelling artificial intelligence (AI) generated photos uploaded to its Instagram, Facebook and Threads platforms in the coming months as election season around the world begins, the company announced.

Mark Zuckerberg’s company is building tools to identify AI-generated content at scale in an effort to curb misinformation and deepfakes.

Meta says it will now seek to label AI-generated content that comes from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock. It will also penalize users who do not disclose that a realistic video or audio piece was made with AI.

Until now, Meta only labelled AI-generated images that used its own AI tools.

Meta’s president of global affairs Nick Clegg wrote in a blog post on Tuesday that the company will begin labelling AI-generated content from these external companies in the coming months and will continue working on the problem through the next year.

Additionally, he said “Meta is working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers”.

Why now?

During the 2016 and 2020 US presidential elections, Meta, then known as Facebook, was slammed for election-related misinformation from foreign actors, largely Russia, spreading across its site. The company also came under fire during the COVID-19 pandemic, when health misinformation ran rampant on the platform.

Meta’s new AI policy is also likely to placate the company’s independent Oversight Board, which on Monday criticized its manipulated media policy as “incoherent” and “lacking in persuasive justification”.

The criticism followed the board’s decision to uphold Meta’s ruling on a manipulated video of Joe Biden. The video originally showed the US president exchanging “I Voted” stickers with his adult granddaughter in 2022, but was later edited to make it appear that he touched her chest inappropriately.

Meta said the video did not violate its current policies because it was not manipulated with AI.

Clegg said the social media industry is behind in building standards to identify AI-generated video and audio, and admitted that the company cannot catch every piece of fake media on its own.

“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead,” Clegg said.

“People and organizations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it. Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead.”

 
