Meta and Microsoft Join Google in Effort to Crack Down on AI-Generated Content in Political Ads
By: Daniel Bruce, Jason Torchinsky, and Andrew D. Watkins
Meta, the parent company of Facebook and Instagram, announced that it will now require advertisers to disclose when “social issue, election, or political” advertisements have been “digitally created or altered, including through the use of AI.” The new policy is set to take effect in January 2024, and Meta has indicated that more details will be forthcoming.
The disclosure will be required for ads that are digitally created or altered to “[d]epict a real person as saying or doing something they did not say or do,” to “[d]epict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened,” or to “[d]epict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.” The policy does not apply to “inconsequential or immaterial” alterations such as image sharpening or color corrections.
Meta’s new policy follows a substantially similar policy announced by Google in September, which is set to take effect later this month. Notably, while Google’s policy requires advertisers to place a disclaimer within the ad themselves, “Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered.”
Microsoft has also announced that it will implement new policies in response to the increasing use of AI in political advertising. The tech giant has launched a tool called Content Credentials that “enables users to digitally sign and authenticate media using . . . digital watermarking credentials.” This will help candidates and campaigns keep better track of future uses of their content and likeness. The tool will also help consumers determine whether the content was altered after its credentials were created. This tool is set to launch in Spring 2024 and will first be made available to political campaigns. The reception of this tool among political users remains an open question since few details have been made publicly available.
Microsoft also announced that it will deploy a “Campaign Success Team” that “will advise and support campaigns as they navigate the world of AI, combat the spread of cyber influence campaigns, and protect the authenticity of their own content and images.” Additionally, Microsoft will provide governments around the world with access to its new “Elections Communications Hub,” which will provide election security support in the lead-up to elections. Finally, Microsoft announced its endorsement of the bipartisan “Protect Elections from Deceptive AI Act.”
Introduced by Senators Klobuchar, Hawley, Coons, and Collins, the Protect Elections from Deceptive AI Act would prohibit any person or entity from distributing any “materially deceptive AI-generated audio or visual media of a” candidate for federal office with intent to influence an election or solicit funds. The Act would also provide candidates with a civil cause of action for damages for violations of the Act. Prospects for the Act in the Senate are unclear, and it appears unlikely the bill will become law. Should it pass, legal challenges are likely. A few states, such as California and Texas, have passed or proposed similar laws.
Meta and Microsoft are just the latest major tech companies to implement AI policies. Candidates and campaigns should remain aware of the emerging tools, policies, and laws that will impact the use of AI as the 2024 election cycle moves forward.