A Look Into AI and the Risks to Elections

September 11, 2024


With the nation's attention fixed on Vice President Kamala Harris and the Democrats, other candidates are struggling to capture media coverage ahead of the November 5 presidential election. Except, of course, when Taylor Swift is involved.

On August 18, former President Donald Trump shared an AI-generated image showing Swift dressed as Uncle Sam, with text declaring, “Taylor wants YOU to VOTE for DONALD TRUMP.” In his repost of the image, Trump added, “I accept!”, the Associated Press’ Ali Swenson reported.

This brings renewed attention to the role AI could play in the upcoming presidential election. As AI-generated deepfake images, videos, and audio clips flood social media, they pose a serious risk to public trust. As one political group leader put it: “I don’t know if FEC law can catch up to this in a few months, so we should use it to our advantage.”

GenAI and its new capabilities

The AI conundrum is that while the latest GenAI presents opportunities for a productivity boost, potentially enhancing both election security and administration, foreign nation-state actors and cybercriminals could leverage these same capabilities for malicious purposes.

GenAI algorithms learn from existing data to restructure or generate new content. This could look like fake videos of a Pakistani candidate encouraging voters to boycott the general election, deepfakes of Taylor Swift, or full-blown campaigns across multiple counterfeit news platforms. 

A Kremlin-backed disinformation operation known as the Doppelganger campaign used AI to create fake news websites and spread pro-Russian propaganda on social media. These bogus sites mimic legitimate outlets like The Washington Post and Fox Business, targeting audiences worldwide. Russian political consultant Ilya Gambashidze, who is behind the operation, is now subject to sanctions for targeting Ukraine and, more recently, the US elections, David Gilbert reports in Wired.

Although one member is now under sanctions, the campaign has been running for two years and continues today. The ongoing operation has prompted calls for stricter regulation of the domain name industry, tougher measures against malicious software and infrastructure, greater accountability for opaque organizations, and better data access for researchers working to counter covert influence operations in elections and beyond.

Combining AI with targeted social media algorithms 

A 2022 Pew Research Center survey pointed to social media's influence as one reason for the declining health of democracy in nations worldwide. Across the 19 countries surveyed, 84% believed access to the internet and social media has made it easier to manipulate people with false information and rumors, with Americans among the most likely to see a negative political impact.

Fast-forward to 2024, and the Guardian reports on Democrats' use of AI in a targeted effort to stay ahead with Latino and Black voters. NextGen America, one of the nation's largest youth voter organizations, built the Vote-E AI chatbot, enabling Latino voters (reached predominantly via WhatsApp) and Black voters (via Facebook Messenger) to ask election-related questions such as “How do I register to vote?” The organization uses natural language processing to analyze those conversations and identify shared concerns, then employs AI to find friendly Spanish-language sites on which to advertise Democrats' plans, such as the party's clean energy initiative.
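NextGen America hasn't published the details of its pipeline, but the "identify shared concerns" step can be illustrated with a toy sketch: tag each incoming voter question with topics, then tally which topics recur. The topic keywords and sample questions below are illustrative assumptions, not the organization's actual data or code.

```python
# Toy sketch of the "identify shared concerns" step: tally recurring topics
# across incoming voter questions. NextGen America's real NLP pipeline is
# not public; the keywords and questions here are illustrative assumptions.
from collections import Counter

TOPIC_KEYWORDS = {
    "registration": ["register", "registration", "deadline"],
    "polling": ["polling place", "where do i vote", "hours"],
    "mail_ballot": ["mail", "absentee", "ballot drop"],
}

def tag_topics(question: str) -> set[str]:
    """Return every topic whose keywords appear in the question."""
    q = question.lower()
    return {topic for topic, kws in TOPIC_KEYWORDS.items()
            if any(kw in q for kw in kws)}

questions = [
    "How do I register to vote?",
    "What's the registration deadline in Texas?",
    "Where do I vote on election day?",
]

# Count how often each topic shows up; the most common ones are the
# "shared concerns" a campaign would then address in targeted content.
counts = Counter(t for q in questions for t in tag_topics(q))
print(counts.most_common())  # [('registration', 2), ('polling', 1)]
```

A production system would swap the keyword lists for a learned classifier or topic model, but the output is the same kind of signal: a ranked list of what voters keep asking about.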

AI can accurately identify individuals and their opinions on social media, and political campaigns can use that information to target them with AI content. David Myers, a Professor of Psychology at Hope College, Michigan, describes ten evidence-based strategies that influence behavior. Some of these include framing messages that speak to the audience’s viewpoint and values, exploiting the power of repetition, creating visual images, connecting with people’s social identities, and focusing communications on those undecided or disengaged. While these strategies can be used for positive purposes, they can also be misused to cause harm.

Surveillance and regulatory concerns

Recent studies indicate that the public is often unable to distinguish between genuine and AI-generated images, and may even perceive fake faces as more lifelike than real ones. And the technology is only growing more sophisticated.

The Coalition for Content Provenance and Authenticity (C2PA) proposes metadata attached to a photo as a way to track what's real, what's fake, and how that fakery happened. The coalition has established a technical standard that uses cryptographic signatures to authenticate digital media. Nevertheless, if a camera doesn't record this data, fabricated information can still be inserted during the editing process. Moreover, even where genuine metadata is available, the online platforms where these images circulate, like X and Reddit, have yet to display it on published images.
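To make the mechanism concrete, here is a minimal sketch of the signed-manifest idea behind C2PA-style provenance. It is not the actual C2PA format, which embeds COSE signatures in JUMBF boxes inside the image file; the manifest fields and key handling below are simplifying assumptions.

```python
# Minimal sketch of signed provenance metadata, in the spirit of C2PA.
# Not the real C2PA format; fields and key handling are illustrative.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def make_manifest(image_bytes: bytes, claims: dict) -> bytes:
    """Bind provenance claims to the exact pixels via a content hash."""
    manifest = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "claims": claims,  # e.g. capture device, edits applied
    }
    return json.dumps(manifest, sort_keys=True).encode()

# The capture device (or editing tool) signs the manifest with its key.
signer_key = ed25519.Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
manifest = make_manifest(image, {"device": "ExampleCam", "edits": []})
signature = signer_key.sign(manifest)

# A verifier (e.g. a social platform) checks the signature, then confirms
# the image it received still matches the hash inside the manifest.
try:
    signer_key.public_key().verify(signature, manifest)
    recorded = json.loads(manifest)["content_sha256"]
    if hashlib.sha256(image).hexdigest() == recorded:
        print("provenance intact")
    else:
        print("image altered after signing")
except InvalidSignature:
    print("manifest tampered with or signed by an untrusted key")
```

Any change to the pixels breaks the hash check, and any change to the manifest breaks the signature check, which is why the standard only helps when the metadata is recorded at capture time and actually displayed by the platforms downstream.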

In an attempt to curb disinformation, five secretaries of state joined forces to ensure X users get accurate information on elections by redirecting questions about election administration to CanIVote.org, a nonpartisan resource run by professional election administrators from both major parties.

While the likes of Google, Meta, X, and regulatory bodies work out how to present AI-generated content to users, and work downstream to bring all relevant stakeholders, including camera brands and digital editing providers, on board, there are things the public can do to avoid being fooled. Telltale mistakes in AI-generated images include unusually shaped facial or hand features, shadows pointing in different directions, and out-of-place objects, like buttons on belt buckles.

The Verge confirms that many stakeholders, including Microsoft, Adobe, Arm, OpenAI, Intel, Truepic, and Google, already support C2PA authentication, but neither Apple nor Google responded to questions about implementing the standard in their products. Although there are still loose ends to tie up, the hope is that a timely agreement can be reached. Acting quickly matters for maintaining public trust.
