In 2024, we are preparing for a contentious and polarizing election season, the biggest we have ever witnessed on a global scale. More voters than ever in history will be heading to the polls in at least 63 countries. From Trump supporters targeting Black voters with AI-generated images in the U.S., to the unquantifiable impact of AI-generated propaganda on Progressive candidate Michal Simecka, who placed second to the leader of a left-wing nationalist party in Slovakia, AI has entered our global group chat.[1][2] However, AI is not creating a new issue. AI intensifies pre-existing problems within the online “public” sphere that Black women, queer and trans people, and social justice researchers have been warning us about since the inception of social media.[3][4][5] AI is being used to accelerate the obscuring of reality while further centralizing power into fewer, wealthier, and usually more conservative hands, like those of Argentina’s President Javier Milei, who is using AI to declare war on “the collectivists.”[6]
For at least four years, researchers have surfaced concerns about the pervasiveness of “algorithmic radicalization,” where platform algorithms on popular sites like YouTube and Facebook drive users toward, for example, more White supremacist, antisemitic, and/or fascist content over time, leading them to develop far-right political views.[7] When algorithmic radicalization is combined with access to AI tools that make deepfake generation and image and audio manipulation easier than ever before, we see an increase in an existing issue: the largely unregulated monetization of voter disinformation across social media platforms.[8] Just last year, Imran Ahmed, chief executive at the Center for Countering Digital Hate (CCDH), said that Twitter was “monetizing hate” at a rate many of us had not seen before. Specifically, just five Twitter accounts spreading misinformation generated up to $6.4 million in annual advertising revenue, according to CCDH research.[9] According to the American Psychological Association, misinformation is getting the facts wrong, while disinformation is false information spread with the deliberate intention to mislead those who consume it.[10] When far-right actors engage in machine learning, a branch of AI that uses algorithms to imitate the ways that humans learn and to improve an algorithm’s efficacy over time, they feed those algorithms content with the sole purpose of disinformation: intentionally seeking to mislead people into believing what is false.
Currently, AI sector experts don’t share a single definition of what it means to use AI responsibly or ethically. This makes things difficult for early adopters in progressive spaces. In other words, movement leaders face a moving, morphing, and unregulated target, whether they are focused on winning elections or on protecting communities queued up for the surge in online attacks, harassment, doxxing, trolling, and shadowbanning that far-right campaigns bring to their front doors and inboxes.
Elections at Risk
Caroline Sinders, a machine-learning/design researcher who has been examining the intersections of technology’s impact on society, artificial intelligence, abuse, and politics in online conversational spaces, says that our fear of what’s possible with White supremacists and AI is a danger researchers have been sounding the alarm about for over a decade. “[Far-right actors] have always been here. I think the reason we’re starting to see it more is because Twitter (now X) has changed hands, and a lot of folks who had been removed from that platform have been invited back. We’re seeing the gutting of trust and safety teams.” Trust and safety teams on the social media platforms we use every day are critical because they’re responsible for developing and maintaining a safe and fair environment for end users. They moderate content, manually review flagged cases, identify and address fraudulent activity, and ideally escalate issues of violence and discrimination so these events do not run rampant on a platform. Many tech companies, like Twitter/X, Meta, and Amazon, have been firing their trust and safety teams in large numbers since 2023.[11]
Dr. Nikki Stevens, a software engineer and postdoctoral researcher at MIT who researches how trans activists and communities use data in their activism and lobbying work, shared their thoughts on the re-imagining of an age-old White supremacist issue: “All AI is doing is masking the fundamental problems with our electoral system under the guise of computational sorcery. We also know as recently as the 2000 presidential election, we were having basic computational problems,” they said.[12] This “computational sorcery” produces AI-enabled apparitions that are already being used to create a reality that doesn’t exist, one where BIPOC and LGBTQIA+ people are the villains and White supremacist cis-heteropatriarchal fascism is the solution.[13]
In 2024, U.S. voters are becoming increasingly concerned about AI being used to manipulate presidential elections. According to a 2023 poll by the Associated Press, 58 percent of adults think AI tools that can specifically target political audiences, mass-produce political messages, and generate deepfakes will increase the spread of false and misleading information during the 2024 presidential election.[14] Across political affiliations, 61 percent of Democrats, 44 percent of Independents, and 61 percent of Republicans said that the use of AI to spread false information would increase during the 2024 presidential election.
And it’s begun globally. In January, the New Hampshire Attorney General’s office was investigating a series of robocalls that used artificial intelligence to mimic President Joe Biden’s voice, urging Democrats to skip the primary and save their votes for November.[15] In December of last year, Moldova dismissed a deepfake of President Maia Sandu expressing support for a Russian-friendly political party.[16] Last March, YouTube suspended accounts in Venezuela that used AI-generated news anchors to promote disinformation in favor of President Nicolás Maduro.[17] And in Bangladesh last year, political actors used services costing as little as $24 a month to generate deepfake AI avatars of politicians and news anchors.[18]
What makes this problem especially alarming are the chasms of digital literacy that exist, all the more so in historically marginalized communities due to systemic exclusion from technological advancements. Last year, Community Tech Network reported that 70 percent of Black Americans are unprepared with the digital skills needed for today’s jobs, affecting their employability; as more work moves to remote formats, the organization estimates that, without intervention, Black and Hispanic workers could be locked out of 86 percent of jobs by 2045.[19] Daniel Greene’s research shows that the U.S. “digital divide” was constructed through racist tech policies dating back to the Clinton administration.[20] What Greene calls the “access doctrine” tried to “solve” the problem of poverty through technological tools and skills; notably, AI also claims to solve problems of access and representation. Globally, the chasms in digital literacy continue to worsen, especially when we consider that 40 percent of Black Americans did not have access to high-speed internet in 2020, and more than a third of the world’s population has never used the internet.[21][22] The countries with the largest numbers of people lacking internet connections are India, China, Pakistan, Nigeria, Ethiopia, Bangladesh, and Indonesia.[23] No access to high-speed internet equates to little or no access to technological advancement and the skills-building needed to avoid being left behind.[24]
Large Language Models (LLMs) are AI systems that process and generate text. They use machine learning to estimate probabilities: the likelihood that a person will end an email with “Best regards,” or that a particular turn of phrase will make an AI-generated anchor believable to millions of voters in Bangladesh or a robocall convincing to a Democratic voter in New Hampshire. A language model predicts the most likely next word in a sentence based on the words that came before it, drawing those predictions from pre-existing patterns in its training data. Without existing datasets, namely human beings’ writing and communication styles, language models could not exist, at least not as we know them today.
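To make that prediction step concrete, here is a minimal sketch in Python: a toy bigram model that counts which word tends to follow which in a training text and turns those counts into next-word probabilities. Real LLMs use neural networks trained on vastly larger corpora, so this illustrates only the underlying idea, not any system discussed in this article; the tiny corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# A toy training corpus standing in for the vast human-written datasets
# real language models learn from.
corpus = (
    "i hope this finds you well best regards "
    "thank you for your time best regards "
    "thank you for your time best wishes "
    "i hope this finds you well best wishes "
    "thank you for your time best regards"
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def next_word_probabilities(prev_word):
    """Return each candidate next word with its estimated probability."""
    counts = following[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# The model predicts "regards" after "best" more often than "wishes",
# simply because that pattern appears more often in the training text.
print(next_word_probabilities("best"))  # {'regards': 0.6, 'wishes': 0.4}
```

Scaled up by many orders of magnitude, the same principle is what lets generative tools reproduce the voice and phrasing of a politician or a news anchor. Not everyone is convinced that the use of AI by leftists and progressives, particularly in its current unregulated form, is a good answer to AI-enabled voter disinformation.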
In fact, the new media, policy, and cultural concerns around AI recall the past decade’s anxiety around big data, i.e., our personal data collected for profit. Craig Johnson, who owns a slew of AI companies and is the founder of Unfiltered Media, a boutique digital communications firm that seeks to raise the bar on progressive digital strategy, believes a progressive use of AI is possible. “We have [a] product [where] we do online polling. But instead of unobserved online polling, instead of people asking people what they believe, we just listened to what they believe… I can train a model to know how a leftist talks, someone on the right talks, and someone in the center talks or someone in the media talks, or a specific group of people talks. I can evaluate a million tweets at a time.” Ever since we lost control over our data, some LLMs and other forms of AI have been made to seem like an obvious answer to the same injustices AI produces in everything from Instagram and TikTok to our text autocorrect and email predictions.
Caitlin Chin-Rothmann, a fellow at the Center for Strategic and International Studies, became interested in tech policies after the scandal surrounding Cambridge Analytica. She shared her concern about the lightning-speed adoption of AI to advance political targeting: “One of my biggest concerns is that technology companies are really able to collect and share such a vast amount of personal information. And this is very detailed and intimate information as well and includes our geolocation records. So, where we go on a daily basis, and from that, you can infer where somebody lives, who we hang out with, what we do. So, this is all incredibly invasive information [and] there’s very few limitations on how mobile app developers or how websites or social media platforms are able to share and sell this content.” Our data, like algorithms, flows across the international data and processing centers of international corporations. Thus, our data privacy, software access, and algorithmic regulation laws are also geographic and deeply uneven: from U.S. states (Montana banned TikTok in 2023) to nation-states’ policies (Iceland ranks as having the strictest data privacy protections) to a collective’s position (the EU’s European Digital Media Observatory (EDMO) of 28 countries produces synced fact-checking and policy recommendations).[25] These uneven and disparate policies and approaches (and at times subterfuge) create extensive gaps in the protections that safeguard those most vulnerable to the manipulation of data, images, and information.
Taking Action
BIPOC, queer, trans, disabled, and poor people, refugees, and other marginalized communities don’t have time to wait for technology to catch up to their calls for protection. What do our communities do in the meantime?
Global organizations, social movements, and elected officials working to protect our digital rights and digital privacy teach us just how connected these movements are to interventions against far-right absolutism. For example, in the U.K., the Online Safety Act 2023 requires online platforms to remove illegal content when they become aware of it.[26] In 2018, the BBC published a report based on extensive research into nationalism in India, Kenya, and Nigeria and how citizens engage with and spread mis- and disinformation on the internet.[27] And the World Economic Forum’s Global Risks Report 2024, which analyzes the challenges countries will face in the coming decade, places mis- and disinformation at the top of its list of risks and ranks India as the country facing the highest risk in the world.[28]
Experts interviewed for this article recommend tools like Nightshade, which “poisons” images so they cannot be usefully scraped into large AI training datasets, and the messaging platform Signal, which provides end-to-end encrypted communication. It’s important to take the time to refuse access to your data on each site you land on, and to use virtual private networks (VPNs, which are built into some browsers, like Opera) to limit how much private information and data you share on the internet. Beyond specific tools, healthy skepticism and clear boundaries are mandatory. There’s also the time-consuming and essential work of vetting audio and video messages, calls from political figures, and other news to ensure they have been accurately reported by reputable sources. Given this, labor and social movement leaders will likely need to designate new staff and create trainings to support their bases.
At the same time, historically marginalized communities have been relying on circles of community and support for centuries. It’s important to remember that AI is a tool: it is not inherently bad or evil, and it did not create alt-right or White supremacist ideologies, but bad actors are using it to further radicalization and the normalization of violence and bigotry all over the internet. It remains important to demand tangible, continually updated legislation that protects users and voters from AI-enabled disinformation, laws and policies that can only be passed by a government that values our privacy and safety.
Because we’re still in the infancy stages of understanding how these technologies interact, and should be permitted to interact, with our society, it is too early to tell exactly how far-right AI abuse will affect the results of this year’s global elections. Now is the time to brace and prepare, because the impact will be substantial.
Endnotes
[1] Marianna Spring, “Trump supporters target black voters with faked AI images,” BBC, March 4, 2024, https://www.bbc.com/news/world-us-canada-68440150.
[2] Daniel Zuidijk, “Deepfakes in Slovakia preview how AI will change the face of elections,” Bloomberg, October 4, 2023, https://www.bloomberg.com/news/newsletters/2023-10-04/deepfakes-in-slovakia-preview-how-ai-will-change-the-face-of-elections.
[3] Bridget Todd, “How Black women tried to save Twitter,” There are no girls on the internet, July 14, 2020, https://www.tangoti.com/ep004-how-black-women-tried-to-save-twitter.
[4] Christopher Wiggins, “Instagram is Blocking LGBTQ+ Accounts: Report,” The Advocate, September 1, 2023, https://www.advocate.com/business/instagram-shadowbanning-lgbtq-content.
[5] The Conversation, “Here is how research says Facebook and Instagram have harmed teens’ wellbeing,” Fast Company, October 27, 2023, https://www.fastcompany.com/90972779/states-sue-meta-for-knowingly-hurting-teens-with-facebook-and-instagram.
[6] David Feliba, “How AI shaped Milei’s path to Argentina presidency,” The Japan Times, November 22, 2023, https://www.japantimes.co.jp/news/2023/11/22/world/politics/ai-javier-milei-argentina-presidency/.
[7] Huo Jingnan and Shannon Bond, “New study shows just how Facebook’s algorithm shapes conservative and liberal bubbles,” NPR, July 27, 2023, https://www.npr.org/2023/07/27/1190383104/new-study-shows-just-how-facebooks-algorithm-shapes-conservative-and-liberal-bub; Andrew M. Guess, et al. “How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?” Science 381, no. 6656 (2023): 398–404.
[8] “Profiting from Hate: Platforms’ Ad Placement Problem,” Anti-Defamation League, September 27, 2023, https://www.adl.org/resources/blog/profiting-hate-platforms-ad-placement-problem.
[9] Anuj Chopra, “‘Monetizing hate’: Unease as misinformation swirls on Twitter,” The Japan Times, April 17, 2023, https://www.japantimes.co.jp/news/2023/04/17/world/twitter-monetizing-hate/.
[10] “Misinformation and Disinformation,” American Psychological Association, November 29, 2023, https://www.apa.org/topics/journalism-facts/misinformation-disinformation.
[11] “Why to assemble a trust and safety team for your organization,” Persona, March 25, 2024, https://withpersona.com/blog/why-to-assemble-a-trust-and-safety-team-for-your-organization.
[12] Lesley Kennedy, “How the 2000 Election Came Down to a Supreme Court Decision,” History.com, December 1, 2023, https://www.history.com/news/2000-election-bush-gore-votes-supreme-court.
[13] Rashawn Ray and Alexandra Gibbons, “Why are states banning critical race theory?” Brookings, November 2021, https://www.brookings.edu/articles/why-are-states-banning-critical-race-theory/.
[14] Ali Swenson and Matt O’Brien, “Poll Shows Most US Adults Think AI Will Add to Election Misinformation in 2024,” AP News, November 3, 2023, https://apnews.com/article/artificial-intelligence-2024-election-misinformation-poll-8a4c6c07f06914a262ad05b42402ea0e. Poll data from the Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.
[15] Alex Seitz-Wald and Mike Memoli, “Fake Joe Biden robocall tells New Hampshire Democrats not to vote Tuesday,” NBC News, January 22, 2024, https://www.nbcnews.com/politics/2024-election/fake-joe-biden-robocall-tells-new-hampshire-democrats-not-vote-tuesday-rcna134984.
[16] Madalin Necsutu, “Moldova Dismisses Deepfake Video Targeting President Sandu,” Balkan Insight, December 29, 2023, https://balkaninsight.com/2023/12/29/moldova-dismisses-deepfake-video-targeting-president-sandu/.
[17] Joe Daniels and Madhumita Murgia, “Deepfakes and AI: the new frontier in Venezuela’s misinformation campaign,” Financial Times, March 16, 2023, https://www.ft.com/content/3a2b3d54-0954-443e-adef-073a4831cdbd.
[18] Benjamin Parkin, “Deepfakes for $24 a month: how AI is disrupting Bangladesh’s election,” Financial Times, December 14, 2023, https://www.ft.com/content/bd1bc5b4-f540-48f8-9cda-75c19e5ac69c.
[19] “Digital Equity for Black Americans: A Racial Justice Issue,” Community Tech Network, February 6, 2023, https://communitytechnetwork.org/blog/digital-equity-for-black-americans-a-racial-justice-issue.
[20] Daniel Greene, “The Promise of Access: Technology, Inequality, and the Political Economy of Hope,” MIT Press (Cambridge, Massachusetts): 2021.
[21] Danielle Hinton and John Horrigan, “Let’s close the digital divide once and for all for Black communities,” The Hill, April 4, 2023, https://thehill.com/opinion/technology/3933241-lets-close-the-digital-divide-once-and-for-all-for-black-communities/.
[22] Agence France-Presse in Geneva, “More than a third of world’s population have never used internet, says UN,” The Guardian, November 30, 2021, https://www.theguardian.com/technology/2021/nov/30/more-than-a-third-of-worlds-population-has-never-used-the-internet-says-un.
[23] Ani Petrosyan, “Countries with the most people lacking internet connection 2024,” Statista, February 5, 2024, https://www.statista.com/statistics/1155552/countries-highest-number-lacking-internet.
[24] Danielle Hinton and John Horrigan, “Let’s close the digital divide once and for all for Black communities,” The Hill, April 4, 2023, https://thehill.com/opinion/technology/3933241-lets-close-the-digital-divide-once-and-for-all-for-black-communities/.
[25] Bobby Allyn, “Montana Becomes 1st State to Approve a Full Ban of TikTok,” Wyoming Public Media, April 14, 2023, https://www.wyomingpublicmedia.org/2023-04-14/montana-becomes-1st-state-to-approve-a-full-ban-of-tiktok; Stephen Mash, “Data Privacy Rankings - Top 5 and Bottom 5 Countries,” Privacy HQ, 2022, https://privacyhq.com/news/world-data-privacy-rankings-countries/; EDMO, “EDMO: About Us,” 2024, https://edmo.eu/about-us/edmoeu/.
[26] Sea Butcher, “2024 may be the year online disinformation finally gets the better of us,” Politico, March 25, 2024, https://www.politico.eu/article/eu-elections-online-disinformation-politics/.
[27] “Nationalism a driving force behind fake news in India, research shows,” BBC, November 11, 2018, https://www.bbc.com/news/world-46146877.
[28] “The Global Risks Report 2024,” World Economic Forum, January 2024, https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf.