AI & ELECTIONS: What Artificial Intelligence means for Democracy

Raymond Amumpaire

February 8, 2024

The year 2023 saw the explosive growth of generative artificial intelligence, or AI as we have come to know and call it. Fast-forward to 2024: nearly a third of the world heads to elections that will usher in the next generation of leadership, with over 17 countries in Africa going to the polls. The growing misuse of AI capabilities to create deepfakes, misinformation, and disinformation leaves one wondering: where does this leave democracy?

Before I proceed any further, for those who haven’t yet “caught my drift”: artificial intelligence (AI) is, simply put, the ability of computers to mimic human intelligence as nearly as possible in carrying out everyday tasks. Its use cases are numerous, spanning transportation, medicine and health, information, research and development, communication, entertainment, innovation, and agriculture. But so are its downsides: authoritative-sounding errors, a flood of academic plagiarism, the potential to upend the job market, the “existential risk” posed by the creation of a true digital intelligence, intellectual-property infringement when algorithms are trained on protected works, exploitation for wrongdoing, threats to national security and privacy, challenges to democracy and social interaction, and political manipulation through misinformation and disinformation that polarises society at volatile moments like election seasons.

Deepfakes and disinformation campaigns by domestic and foreign actors carry the risk of violating fundamental rights and undermining democracy through hate speech and coordinated falsehoods. Just weeks ago in the United States, an AI-generated robocall imitating Joe Biden’s voice urged New Hampshire voters not to vote in the state’s presidential primary and “to save their vote” for the November general election, a clear example of how AI gives rogue cyber actors the means to interfere with and incapacitate democracies. AI aids totalitarian regimes and undermines the fundamental principles on which liberal democracies are founded.

Governments must therefore prepare for and build resilience against these campaigns so that, should they occur, the integrity of the electoral process is protected. At the same time, a decentralised approach is equally important: private and government efforts should collaborate to safeguard our democracies.

Organisations and other actors in the civic space can employ open-source intelligence, also known as OSINT, to analyse trends, fact-check claims, and investigate the data and information that makes the rounds during elections. Fortunately, a wealth of OSINT resources is freely available on social media and other public platforms, and they are often easier to use than traditional fact-checking and verification avenues.
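To make this concrete, here is a minimal sketch of how a civic-tech team might programmatically check a viral claim against already-published fact-checks. It assumes access to Google’s Fact Check Tools API (the claims:search endpoint); the response field names follow its public documentation but should be verified against the current docs, and the API key and sample claim are placeholders.

```python
# Minimal OSINT-style sketch: look up published fact-checks for a claim
# circulating during an election season. Assumes the Google Fact Check
# Tools API (claims:search); verify field names against current docs.
import requests

FACT_CHECK_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"
API_KEY = "YOUR_API_KEY"  # placeholder: obtain a key from Google Cloud Console


def search_fact_checks(claim_text: str, language: str = "en") -> list[dict]:
    """Return published fact-check records that match the claim text."""
    params = {
        "query": claim_text,
        "languageCode": language,
        "pageSize": 5,
        "key": API_KEY,
    }
    response = requests.get(FACT_CHECK_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json().get("claims", [])


if __name__ == "__main__":
    # Hypothetical example of a claim making the rounds on social media.
    for claim in search_fact_checks("AI robocall told voters to stay home"):
        for review in claim.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name", "unknown")
            print(f"{publisher}: {review.get('textualRating')} - {review.get('url')}")
```

A sketch like this only surfaces what professional fact-checkers have already reviewed; it complements, rather than replaces, manual OSINT verification of fresh material.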

Disclosure and transparency, including tagging content as AI-generated or deepfaked and naming the entity that generated it, are crucial to curbing the misinformation scourge. Equally important is responsibility by design: building these AI models from the get-go so that they resist producing misinformation, disinformation, and other content that threatens our democracies.
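Provenance standards such as C2PA’s Content Credentials formalise this kind of tagging. The sketch below is not an implementation of any standard, just a toy illustration of the idea: attach a small provenance record that names the generating entity and binds the label to the exact content via a hash, so downstream platforms can surface it.

```python
# Toy illustration (not C2PA or any other standard) of labelling
# AI-generated content: a small JSON record naming the generating entity,
# with a hash that ties the record to the exact bytes of the content.
import hashlib
import json
from datetime import datetime, timezone


def label_ai_content(content: bytes, generator: str) -> dict:
    """Build a provenance record declaring that `content` is AI-generated."""
    return {
        "ai_generated": True,
        "generator": generator,  # the entity that generated the content
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }


if __name__ == "__main__":
    record = label_ai_content(b"...synthetic audio bytes...", "ExampleModel v1")
    print(json.dumps(record, indent=2))
```

In practice such a record would need to be cryptographically signed and embedded in the media file itself to resist tampering; the point here is simply what “tagging plus the entity that generated it” can look like in machine-readable form.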

We should also develop more robust cybersecurity measures to protect against foreign interference and other malicious activities aimed at disrupting democratic processes, and promote media-literacy and education initiatives that help citizens better understand the risks and implications of AI-generated content during election periods.

These, of course, are just a few of the avenues we can pursue. As Uganda prepares to head to the ballot in 2026, I think it is high time a bill was moved on the floor of the legislature addressing the risks, regulation, and responsible development of deployed AI models, whether or not they bear on the outcome and integrity of the electoral process. Together, we can safeguard democracy and make it more resilient in the face of AI.