Imagine a small village in India, where people gather every evening at the local chai shop to discuss the latest news, the upcoming elections, and the issues affecting their lives. Conversations flow freely, with everyone sharing their opinions on what’s best for their community. Now, imagine someone quietly slipping false stories into these discussions—stories that seem real, sound convincing, and gradually change how people think. But these stories aren’t coming from a neighbour or even someone from the next town. They’re being generated by an AI tool called ChatGPT, used by a group with a hidden agenda to manipulate opinions and influence votes.
This might sound like a plot from a movie, but it’s happening closer to reality than we might think. Recently, an Iranian influence operation known as “Storm-2035” was caught using ChatGPT to try to sway voters in the United States. Their goal? To create confusion, spread misinformation, and influence election outcomes. The group set up fake news websites and generated content targeting sensitive issues like LGBTQ rights, the Israel-Hamas conflict, and deep political divisions.
Fortunately, OpenAI, the company behind ChatGPT, discovered the operation and acted quickly. All accounts linked to Storm-2035 were banned, and the group’s activities were brought to a halt. Even so, the incident showed how easily AI can be used to manipulate public opinion on a massive scale.
The Story of Sampath and ChatGPT’s Influence
Let’s bring this closer to home. Meet Sampath, a farmer living in a small village in Tamil Nadu. He’s a father of three, deeply rooted in his community, and keeps up with the latest news on his smartphone. Like many others, he’s active on WhatsApp, where he receives news articles, videos, and messages from his friends and family.
One day, Sampath receives a forwarded message about a controversial political issue. The message is persuasive, packed with what look like facts, and seems to align with his own beliefs. Before long, he shares it with his contacts, adding his own thoughts to the conversation. What Sampath doesn’t know is that the message was generated by ChatGPT at the direction of a group like “Storm-2035.” It was designed to play on his emotions and to subtly push his opinion in a particular direction.
Sampath’s story is not unique. Across India, millions of people could be reading and sharing AI-generated content, believing it’s from a trusted source. The content spreads rapidly through social media and messaging apps like WhatsApp, influencing opinions and even voting decisions. This is the power—and danger—of AI in elections.
Why This Matters in India
India, with its vast population and diverse electorate, is particularly vulnerable to the misuse of AI in elections. The country’s democratic process, while robust, can be easily swayed by misinformation, especially in rural areas where access to reliable news sources may be limited. The use of ChatGPT in this way is like giving a loudspeaker to those who want to mislead. It’s not just about spreading false information; it’s about shaping how people think and feel, steering them toward certain conclusions without them even realizing it.
What’s even more concerning is how easy it is to do. With tools like ChatGPT, creating convincing articles, social media posts, and even fake messages takes minutes, not hours. This allows groups to flood the digital space with content that looks real, sounds real, and convinces many that it is real.
The Role of OpenAI and Indian Authorities
In the case of “Storm-2035,” it was OpenAI itself that spotted the abuse and shut down the accounts involved. The incident, however, raises important questions about the future, especially for a democracy as large and complex as India’s. How do we stop this from happening here? How do we protect voters like Sampath from being manipulated by AI?
India’s Election Commission and cybersecurity agencies must be vigilant. It’s essential to monitor and regulate the use of AI in the political sphere, ensuring that it’s used to empower, not exploit, the electorate.
Looking Ahead
As India continues to embrace the digital age, AI will play an increasingly significant role in all aspects of life, including politics. It has the potential to do great good—helping to inform voters, making campaigns more efficient, and reaching people who might otherwise be left out of the conversation. But as the story of “Storm-2035” shows, it also has the potential to do great harm.
The challenge for political strategists, tech companies, and government agencies in India is clear: We must find ways to harness the power of AI for good while protecting against its misuse. This means setting up better safeguards, being vigilant about where information comes from, and educating the public on how to spot misleading content.
For Sampath, and millions like him across India, the future of democracy may depend on it. The stakes have never been higher, and the time to act is now.