"It's important that you save your vote for the November election ... voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again." This is how a voice message allegedly from U.S. President Joe Biden reached various voters in New Hampshire before the presidential primary elections.
However, the issue with the call was that it did not originate from Biden himself; rather, it was a robocall created using generative Artificial Intelligence, utilizing the voice of the incumbent president.
This is one of many instances where generative AI has been employed during the U.S. presidential elections. In recent months, American voters have witnessed numerous examples of AI integration into electoral campaigns, from deepfakes generated by ordinary citizens to tools used directly by politicians. For example, in March 2023, fabricated images depicting the arrest of former President Donald Trump, created with Midjourney, circulated on social media. While the falsehood of such images may be relatively easy to discern, the situation grows more complex when politicians themselves turn to generative AI. The technology is used not only for analyzing voting patterns, crafting targeted messages and monitoring social media behavior – practices that are somewhat controversial but might be considered acceptable – but also extensively in electoral advertising. Executed at this professional level, such content makes distinguishing the real from the fabricated far more challenging.
For instance, in April 2023, the Republican National Committee released an entirely AI-generated ad depicting a dystopian U.S. future if Biden were reelected. The ad featured convincingly realistic yet AI-generated images of boarded-up storefronts, armored military patrols on the streets and waves of immigrants overwhelming the border. Another example came from Florida Gov. Ron DeSantis's campaign, which released an attack ad against Donald Trump using AI-generated images that showed the former president together with Dr. Anthony Fauci, one of the most prominent and controversial figures of the COVID-19 pandemic.
The widespread use of AI in the upcoming U.S. presidential elections, also referred to as "the first AI election," has sparked significant debate about the role of emerging technologies in shaping electoral outcomes. This is not, of course, the first time technology has played a critical role in U.S. elections. During the 2008 presidential campaign, Barack Obama effectively used social media platforms to reach and influence voters, while the 2016 elections saw the significant impact of Cambridge Analytica's data exploitation and Russia's social media manipulation. Today, the electoral process faces a new challenge in the potential misuse of AI, raising serious concerns about electoral integrity and the very foundations of American democracy.
One of the most substantial impacts of using AI for electoral purposes is the rise in disinformation. Generative AI gives anyone the tools to create fake audio, video and images that can easily be used to spread false information across social media platforms. In a world heavily shaped by social media content, this poses a real threat – particularly for younger generations, who are more susceptible to disinformation and may base their voting decisions on fake, AI-generated information circulating online. If this AI-generated electoral propaganda is not effectively regulated, there is a serious risk that election outcomes will be distorted, with voters basing their decisions on misleading or false information and the integrity of the electoral process compromised.
Another dangerous consequence of AI's electoral use is that, as people become more aware of such disinformation, a sense of hysteria could take hold, breeding widespread mistrust. Voters may begin to doubt the authenticity of everything they see on social media, creating a climate in which facts and truth are indistinguishable from falsehoods. Such a skeptical environment undermines the foundation of informed democratic decision-making.
Yet the problem with AI use and the disinformation it generates extends beyond the pre-election period and can affect post-election stability. After the 2020 election, the U.S. experienced the Jan. 6, 2021, Capitol riot, when Trump supporters who refused to accept Biden's victory stormed the Capitol in one of the gravest threats to American democracy in its history. Now, with the capabilities of AI, it could become even easier to fabricate damaging content – audio or video of a candidate falsely claiming that the election results have been manipulated, for example – that could provoke supporters to disrupt the vote count.
Lastly, these developments have highlighted one of the most significant concerns for the U.S. in recent years: foreign interference, particularly from Russia and China. Russia's meddling in past elections has long cast a shadow over American democracy, and as China's technological capabilities grow, the threat of its interference in American elections through AI looms larger than ever for policymakers in both parties. With China, a primary concern is TikTok, through which AI-generated content can be rapidly disseminated among American voters, especially younger ones, potentially influencing public opinion and electoral outcomes.
Elections have always been entangled with propaganda and disinformation. But as technology evolves rapidly, staying ahead of the challenges it presents has become increasingly difficult. In response, American legislators and major tech companies are moving to address these issues as swiftly as possible. Some regulations are being introduced to restrict the use of AI in electoral advertisements, and big tech companies are taking proactive steps of their own: Google, for example, will soon require political ads that use artificial intelligence to carry a prominent disclosure if any imagery or audio has been synthetically altered. Whether these measures will suffice remains to be seen.
A crucial insight from these developments, however, is the global expectation that the U.S., as a superpower and technological leader, will spearhead comprehensive global AI regulations addressing the myriad challenges posed by this rapidly advancing technology. Unfortunately, while the international community looks to the U.S. for leadership in regulating AI, the country is struggling domestically to manage the technology and keep it from undermining the integrity of its elections and, by extension, American democracy. If the U.S. fails to establish a strong framework for AI regulation, particularly regarding its use for electoral purposes, the result could be global distrust of the American superpower in terms of both technology and democratic integrity.