By Katie Paul
NEW YORK (Reuters) – Facebook owner Meta is barring political advertisers from using its new generative AI advertising products, a company spokesperson said on Monday, cutting off campaigns’ access to tools that lawmakers have warned could turbo-charge the spread of election misinformation.
Meta has not yet publicly disclosed the decision in any updates to its advertising standards, which prohibit ads with content that has been debunked by the company’s fact-checking partners but do not appear to have any rules specifically on AI.
The policy comes a month after Meta – the world’s second-biggest platform for digital ads – announced it was starting to expand advertisers’ access to AI-powered advertising tools that can instantly create backgrounds, image adjustments and variations of ad copy in response to simple text prompts.
The tools were initially made available only to a small group of advertisers starting in the spring. They are on track to roll out to all advertisers globally by next year, the company said at the time.
Meta and other tech companies have raced to launch generative AI ad products and virtual assistants in recent months in response to the frenzy over the debut last year of OpenAI’s ChatGPT chatbot, which can provide human-like written responses to questions and other prompts.
The companies have released little information so far about the safety guardrails they plan to impose on those systems, making Meta’s decision on political ads one of the industry’s most significant AI-related policy choices to come to light to date.
Alphabet’s Google, the biggest digital advertising company, announced the launch of similar image-customizing generative AI ads tools last week. It plans to keep politics out of its products by blocking a list of “political keywords” from being used as prompts, a Google spokesperson told Reuters.
Google has also planned a mid-November policy update to require that election-related ads must include a disclosure if they contain “synthetic content that inauthentically depicts real or realistic-looking people or events.”
Snapchat owner Snap and TikTok both bar political ads, while X, previously known as Twitter, has not rolled out any generative AI advertising tools.
Meta’s top policy executive, Nick Clegg, said last month that the use of generative AI in political advertising was “clearly an area where we need to update our rules.”
He warned ahead of a recent AI safety summit in the United Kingdom that governments and tech companies alike should prepare for the technology to be used to interfere in upcoming elections in 2024, calling for special focus on election-related content “that moves from one platform to the other.”
Earlier, Clegg told Reuters that Meta was blocking its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures. Meta committed this summer to developing a system to “watermark” content generated by AI.
Meta bars misleading AI-generated video in all content, including organic, unpaid posts, with an exception for parody or satire.
The company’s independent Oversight Board said last month it would examine the wisdom of that approach, taking up a case involving a doctored video of U.S. President Joe Biden that Meta said it had left up because it was not AI-generated.
(Reporting by Katie Paul in San Francisco; Editing by Kenneth Li and Matthew Lewis)