The Aryavarth Express
Agency (New Delhi): As India prepares for its upcoming general election, artificial intelligence has become a battleground for political parties. Experts caution that current regulations are falling behind rapidly advancing technologies such as deepfakes. The shift from conventional campaigning to using AI to win votes was already evident during the recent local elections in several key Indian states.
According to the South China Morning Post (SCMP), an AI-generated video in Telangana purporting to show KT Rama Rao, the leader of the former ruling Bharat Rashtra Samiti party, endorsing the opposition Congress party sparked a major controversy after it was viewed more than half a million times on Congress’ official X account. Rao’s party went on to lose the election, and Congress subsequently formed a new state government.
Deepfakes have also surfaced involving Bollywood icon Amitabh Bachchan, who hosts Kaun Banega Crorepati, the Hindi adaptation of the television game show Who Wants to Be a Millionaire. In one manipulated video, Bachchan appears to discuss a loan-waiver scheme for farmers in Madhya Pradesh with a contestant, casting then-Chief Minister Shivraj Singh Chouhan in a negative light and praising his political rival Kamal Nath. The interaction is entirely fabricated.
During the state elections, AI was used in various ways, such as cloning the voice of Ashok Gehlot, Congress’ chief ministerial candidate in Rajasthan, to send customized WhatsApp messages addressing voters by name. Similarly, in Tamil Nadu, the ruling Dravida Munnetra Kazhagam party employed AI to digitally ‘bring back’ former chief minister M. Karunanidhi, who died in 2018, to speak at public gatherings.
In November last year, Indian Prime Minister Narendra Modi expressed concern about the proliferation of AI following the appearance of a deepfake video showing him engaging in a traditional dance. He described the misuse of this technology as alarming and cautioned that it could result in a serious crisis, highlighting that a large portion of India’s population may struggle to verify or authenticate manipulated media.
In December, the government released a directive to all social media platforms urging them to adhere to the Ministry of Electronics and Information Technology’s regulations regarding banned content, particularly those concerning material designed to deceive, mislead, or impersonate others.
A recent advisory from the ministry directed tech firms to seek permission before publicly launching generative AI models that are deemed “unreliable” or still in the testing phase, cautioning against actions that could jeopardize the integrity of the electoral process.
Right-wing groups recently criticized Google’s Gemini chatbot over its response to a question about Modi, after it noted that some experts have described his policies as fascist. The chatbot also highlighted the government’s crackdown on dissent and use of violence against religious minorities.
Regardless of the regulations in place, numerous experts argue that it is difficult to prevent the dissemination of AI-generated content, particularly deepfakes. With more than half of India’s population of over 1.4 billion people now having internet access, SCMP reports, controlling the spread of such content is becoming increasingly challenging.
One of the challenges arises from the absence of any legislation in India specifically governing deepfakes, a gap that may prove impossible to close, as one technology lawyer and policy analyst suggests.
Anushka Jain, a technology lawyer and research associate at the technology research network Digital Futures Lab, told SCMP that while prominent firms such as Google and Facebook require deepfakes to be identified, content creators may begin using readily available open-source generators that mandate no such labelling. That shift could make regulating deepfakes impractical.
The rapid advance of generative AI technology has caught many nations off guard, prompting them to enact laws to address the risks posed by deepfakes in election contexts. Several US states have passed laws targeting deepfakes, and a bill called the Protect Elections from Deceptive AI Act was introduced in the US Congress in September to ban the use of misleading AI-generated content in elections.
The European Parliament endorsed the Artificial Intelligence Act on Wednesday, March 13, following negotiations with member states concluded in December. The legislation includes transparency measures and compliance mandates, and aims to mitigate risks by prohibiting the indiscriminate extraction of facial images from surveillance footage.
The UK’s newly implemented Online Safety Act includes provisions to regulate aspects of AI, including criminalizing the sharing of non-consensual deepfakes.
Prateek Waghre, the executive director of the Internet Freedom Foundation, an Indian digital rights organization, told SCMP that the group had issued an open letter urging political candidates and parties to avoid employing deepfake technology in India’s upcoming elections. Political parties, he said, hold the most influence in deciding whether deepfakes are used during elections, leaving everyone else trying to keep up. Whether it is realistic to expect the parties to exercise that restraint, however, is another matter.
Waghre stated that deepfakes are a progression of the disinformation, misinformation, and negative campaigning that India has faced for many years. He described it as another type of negative campaign rhetoric.
The World Economic Forum’s 2024 Global Risk Report, issued in January, highlighted misinformation and disinformation as a significant global risk over the next two years, especially for countries like India preparing for elections.
Waghre said the main difficulty lies in identifying deepfakes and ensuring they are dealt with correctly. Unless authorities move to outlaw deepfakes in some manner, he added, enforcing any regulations would prove very challenging.
Fake videos targeting politicians and parties are widespread on Indian social media. A report from Italian tech company DeepTrace Technologies in 2019 ranked India as the sixth most vulnerable country to deepfakes.
Lately, there has been a significant rise in the popularity of AI-generated videos on Instagram featuring Prime Minister Modi singing popular songs in various regional languages. The deepfake content produced for political parties is predominantly outsourced to private consulting firms rather than being created internally.
Divyendra Singh Jadoun, who runs a “synthetic media” startup named The Indian Deepfaker, told SCMP that creating a 60-second video once took his team 15 days; now, anyone can make a deepfake in under three minutes using just one image. He expressed concern that creating deepfakes is no longer restricted to people with coding expertise, and that such easy access raises the prospect of widespread use of deepfakes during the general elections.
Jadoun, speaking to This Week in Asia as cited by SCMP, mentioned that he was collaborating with various politicians in preparation for the upcoming elections. He noted that several of India’s political parties were incorporating AI into their election strategies. His company was responsible for replicating Ashok Gehlot’s voice to create personalized WhatsApp messages that the politician distributed before Rajasthan’s assembly elections in November.
According to Jadoun, as generative AI advances, the tools for detecting deepfakes are falling behind. Companies such as Intel, he said, rely on detection tools that can only identify outdated, low-quality deepfakes, while the quality of deepfakes keeps improving.
For instance, he mentioned an Instagram advertisement he encountered showing a fabricated interview with cricketer Virat Kohli. Despite reporting it, the platform stated that it did not breach its policies. He remarked that these platforms lack the capability to identify top-notch deepfakes, making it technically impossible to prevent their dissemination.
Hafiz Malik, an electrical and computer engineering professor at the University of Michigan-Dearborn in the US, concurred that the swift evolution of deepfake technology would have a global impact on elections. He expressed specific worry about countries such as India, where WhatsApp is extensively utilized and held in high regard. While platforms like Facebook and Twitter have features such as deepfake detection and alert systems to comply with regulations, WhatsApp does not have similar safeguards, according to Malik.
Major technology companies have placed restrictions on the use of deepfakes, such as YouTube’s policies updated last year, which now require creators to disclose whether they have used generative AI to produce authentic-seeming videos. Microsoft has committed to adding digital watermarks to content to indicate if AI was used in its development, aiding campaigns and election monitors in addressing cybersecurity risks.
Meta, the parent company of Facebook, Instagram, WhatsApp, and other platforms, announced that political advertisements must disclose if AI was utilized in their production. However, this disclosure is not mandatory for other non-paid posts.
Regulation alone will not suffice, Malik emphasized, highlighting the need for ongoing technological advances and updates to tackle the issue effectively. He cited Imran Khan’s use of deepfakes to campaign from prison during Pakistan’s recent elections as a potentially positive application of the technology.
He said a tool built solely to appease regulators will be ineffective if it is not regularly updated, and that the only effective approach to regulating deepfakes is the continuous development of more advanced technology. Ultimately, it is a matter of technology, which can be either used well or exploited.
S.Y. Quraishi, a former chief election commissioner of India, informed This Week in Asia that the primary tool accessible to the nation’s electoral authorities to prevent deepfakes was educating voters about the matter. He noted that spreading rumours during elections had been a persistent challenge.
He mentioned that he has seen how false information during elections can result in violence and the practice of booth capturing, where individuals associated with a candidate forcefully take control of a polling station to manipulate the outcome through coercion and intimidation.
The ease of spreading fake news through deepfakes with the click of a button significantly raises the danger of misinformation. In November, the UK’s cybersecurity agency cautioned about the increasing risk that AI presents to elections, and the US Federal Election Commission is exploring methods to oversee AI-generated deepfakes in political advertisements. Quraishi stated that deepfakes are a widespread issue without any existing global legislation to regulate them. (IPA Service)
By Girish Linganna