INDIA: ‘The negative impacts of AI on democratic processes seem to be outweighing the positive’

CIVICUS speaks with Vandita Morarka, founder and CEO of the One Future Collective, about AI-powered disinformation in the campaign for India's current election.

One Future Collective is a feminist social purpose organisation with a vision of a world built on social justice, led by communities of care. It fights for the right of each person to live a life of safety, dignity and belonging by catalysing people power and building just institutions. It exists to further social justice in India and globally.

How big a problem is disinformation in India’s election campaign, and how much of this can be attributed to AI?

In 2018, the BBC conducted extensive research on the spread of fake news in India, Kenya and Nigeria. In India in particular, it found that disinformation is rampant and hard to combat because emotions are prioritised over facts, making it difficult to counter disinformation effectively with factual information.

This complexity is particularly pronounced during elections. In India, the sharing of disinformation is largely spearheaded by the digitally organised right wing, which outpaces civil society in efficiency. The speed and scale at which disinformation spreads are outstripping the capacity of civil society organisations (CSOs) and political parties to counter it.

The strategic use of disinformation has become entrenched in political campaigns, particularly through platforms such as Facebook and WhatsApp, where it spreads unchecked. Features such as WhatsApp’s message forwarding indicator do little to curb its reach.

The emergence of deepfake technology has further complicated things, as evidenced by instances of political parties, including the ruling Bharatiya Janata Party (BJP), using AI to generate personalised messages for voters. The accessibility and affordability of AI-driven manipulation tools have democratised their use, extending their reach to local election campaigns. It has become significantly easier to generate and distribute manipulated content, allowing for deployment in hours rather than days.

In addition, reports point to a worrying trend of political parties making unethical requests to technology companies, particularly those behind platforms such as Instagram and Telegram. This underscores the urgent need to address the systemic challenges posed by the spread of disinformation.

What impact is AI having on democratic processes?

As a lawyer, I understand the importance of recognising the potential for misuse of any technology. No technology is inherently good or bad; it is how it is used that determines its impact. In the current landscape, the negative impacts of AI seem to be outweighing the positive. We are not seeing AI being used to increase voter turnout or make it easier for those away from their districts to vote in elections. Instead, its main uses seem to be to spread disinformation and exacerbate social divisions.

From my perspective, based on its current applications, the impact of AI appears to be strongly negative. Efforts such as the Civil Society Manifesto for Ethical AI and initiatives such as Tagore AI offer glimmers of hope, but their reach and influence are limited compared to AI’s pervasive negative applications. While there are examples of organisations and companies attempting to use AI for positive purposes, they often struggle to counterbalance the widespread negative impacts.

In summary, while there is potential for positive impact, the current reality suggests that AI is predominantly being used in ways detrimental to democratic processes and social cohesion.

How effective are existing regulations, such as India’s IT rules, in addressing the challenges posed by AI-generated content?

In India as elsewhere, there’s a significant gap in the regulatory framework for emerging technologies such as AI. Existing laws are inadequate and often reactive, failing to anticipate and address the complexities of these technologies. Instead of creating piecemeal regulations for each new technology, there is an urgent need for a comprehensive and forward-looking regulatory framework that addresses current challenges and anticipates future uses and potential risks.

For instance, current IT regulations focus primarily on dealing with cases where illegal content has already been uploaded rather than implementing preventative measures. There is a lack of provisions to ensure users are informed when they encounter manipulated content, placing the burden on victims to file complaints in a legal system that may not be conducive to dealing with such issues effectively.

The Ministry of Electronics and IT has issued guidelines, but they lack the necessary legal enforcement to ensure compliance. This highlights the inadequacy of relying solely on advisory measures to regulate emerging technologies.

Recent incidents such as the government’s response to Google’s Gemini AI flagging content related to Prime Minister Narendra Modi as fake news demonstrate the challenges of regulating AI without stifling innovation or inadvertently promoting censorship. The government’s approach of pressuring tech companies to moderate content without a robust legal framework risks censorship and may not effectively address underlying issues of manipulative content.

In addition, the lack of coordination between different areas of tech law, such as data protection and privacy, exacerbates the regulatory gap. A holistic approach is needed to ensure AI laws exist within a broader legal framework that addresses the interrelated issues of emerging technologies, data protection and privacy.

There is an urgent need for India to develop a comprehensive and forward-thinking legal framework that anticipates the challenges posed by emerging technologies like AI while safeguarding individual rights and promoting innovation. This requires collaboration between policymakers, tech companies, CSOs and legal experts to develop effective and balanced regulations.

How is civil society working to ensure adequate access to information and counter election disinformation?

It is questionable whether Indian civil society has the necessary technological expertise to effectively address the challenges of disinformation and manipulation, particularly when it comes to emerging technologies such as AI. While legal and policy interventions are crucial, they often take time. It is important to explore technical interventions as well.

While some efforts have been made by CSOs, such as sending joint letters to election authorities and launching voter registration and literacy campaigns, these initiatives are not yet having the desired impact. Fact-checking activities by CSOs are valuable, but they may not be enough to counter the widespread dissemination of disinformation, particularly on platforms like WhatsApp.

The sheer volume of disinformation circulating on these platforms presents a significant challenge. Individual efforts to debunk false information are not scalable or effective in addressing the magnitude of the problem. There is a clear need for more robust and scalable systems for marking and addressing disinformation, but developing them is a challenge.

Additionally, there is a sense of unpreparedness among CSOs to effectively combat the manipulation of messaging during elections. Although this challenge was anticipated, there is a gap in readiness to address it comprehensively.

While efforts are being made, it is clear that more needs to be done. Innovative solutions are required to effectively tackle the complex problem of disinformation and manipulation in the digital age. It is essential to keep exploring avenues for collaboration, innovation and capacity building to strengthen civil society's response to these challenges.

What should be done to prevent AI from influencing election results?

AI can have significant impacts on election outcomes. While it may not necessarily determine the ultimate winner of an election, it can indeed influence the extent of the swing in votes, potentially altering the winning margin or, in tight races, tilting the results. AI can be very effective, as shown by the example of Shakti Singh Rathore, a BJP campaigner who used an AI replica of himself to have personalised conversations with countless people at the same time, telling them all about Modi’s programme and convincing them to vote for him.

Comparisons with other countries, such as the USA and European states, show that India is lagging behind when it comes to comprehensive regulations and practices to counter AI-driven disinformation. This regulatory gap leaves the democratic process all the more vulnerable to manipulation.

Addressing this issue requires regulatory measures, proactive planning and collaboration across sectors.


Civic space in India is rated ‘repressed’ by the CIVICUS Monitor.

Get in touch with the One Future Collective through its website or Facebook page, and follow @onefuture_india on Twitter and @onefuturecollective on Instagram.
