
How AI threatens the communities that sustain democracy—and what we can do

While the threat is real, safeguards to reduce AI risks to U.S. elections are already within reach, argues Aneesh Pappu (2025 cohort).
Aneesh Pappu (2025 cohort) as a child with his parents. Photo courtesy Aneesh.

My parents emigrated from India to the United States as postdoctoral researchers and settled in Pullman, Washington, where I grew up—a rural, majority-white college town of fewer than 35,000 people. As one of the few Indian families in the community, it would have been natural for us to feel isolated. Instead, we were welcomed. Neighbors who didn't look like us, prayed differently from us, and ate very different foods were always there when we needed help. To me, our community embodied a core American value: building understanding across differences.

This experience shaped my understanding of democracy. Democratic institutions don't exist in the abstract; they are sustained by communities like Pullman, where people from different backgrounds build trust by finding common ground. Democracy relies on these values and on the communities that embody them.

Having worked at the intersection of AI research and policy for the past six years at the Ada Lovelace Institute, Google DeepMind, and now Stanford, I've grown increasingly worried about the risks AI misuse poses to our communities and the democratic institutions upon which they depend.

The risks are no longer theoretical. In a recent article in MIT Technology Review, my colleague Tal Feldman of Yale Law School and I examine the emerging threat landscape of AI-driven persuasion in elections. We highlight recent peer-reviewed research showing that AI chatbots can shift voters' views by up to 10 percentage points—nearly four times the effect of traditional political advertisements tested in the 2016 and 2020 elections. When these models are explicitly optimized for persuasion, the shift soars to 25 percentage points.

The economics are what should truly alarm us. For less than a million dollars, anyone can generate personalized, conversational messages for every registered voter in America. The 80,000 swing voters who decided the 2016 election could now be targeted for less than $3,000. This isn't about sophisticated deepfakes or fake news. It's about AI that can hold conversations, read emotions, tailor arguments, and quietly reshape political views at scale.

In our article, we explore policy solutions for mitigating these risks: evaluating foreign-made political technology before widespread deployment, establishing technical standards for AI systems that generate political content, tightening access to computing power for large-scale foreign persuasion efforts, and building multilateral agreements that treat election manipulation as a collective security challenge. The solution requires coordination among intelligence agencies, regulators, platforms, and international partners.

Despite the rapid advancement of AI, there is plenty that researchers and policymakers can do to mitigate these risks. Being a Knight-Hennessy scholar has allowed me to engage in meaningful dialogue with fellow scholars from an incredible diversity of intellectual backgrounds—from law and policy to medicine and the humanities—who share concerns about technology's impact on society. These are exactly the kinds of interdisciplinary groups we need to tackle challenges like AI persuasion, where evolving technological capabilities intersect with democratic values, legal frameworks, and human behavior.

The communities that welcomed my family to America, and the democratic values they represent, are worth defending. As AI capabilities advance, that defense requires bringing together diverse perspectives and collaborative approaches to ensure technology strengthens rather than undermines the very foundations of democracy.

Aneesh Pappu (2025 cohort) is pursuing a PhD in electrical engineering at Stanford School of Engineering. His work focuses on research and policy for building safe AI.

Knight-Hennessy scholars represent a vast array of cultures, perspectives, and experiences. While we as an organization are committed to elevating their voices, the views expressed are those of the scholars, and not necessarily those of KHS.
