“You (Central government) will have to start working on this. You also start thinking about this. It (deepfake) is going to be a serious menace in the society,” said Acting Chief Justice Manmohan and Justice Tushar Rao Gedela.
New Delhi: Deepfake technology is going to be a serious menace in society and the government should start thinking about it, the Delhi High Court observed on Wednesday, while noting that the antidote to Artificial Intelligence (AI) would have to be technology itself.
The high court, which was hearing two petitions against the non-regulation of deepfake technology in the country and the threat of its potential misuse, observed that before the elections the government was agitated about the issue, but now things have changed.
To this, Additional Solicitor General Chetan Sharma, appearing for the Centre, said that “our body language might have changed but we are still agitated as much as we were then”.
The Centre’s counsel also said the authorities recognise that it is a problem which needs to be dealt with.
“We can employ counter AI technology to annul what would otherwise be a very damaging situation. To deal with the issues, four things are needed – detection, prevention, grievance support mechanism and raising awareness. No amount of laws or advisories will go a long distance,” Sharma contended.
To this, the bench responded that the antidote to AI would have to be technology itself.
“Understand the damage that will be done by this technology because you are the government. We as an institution would have certain limitations,” it said.
In response to the concern regarding the identification of websites granting access to deepfakes and their suo motu blocking, the Ministry of Electronics and Information Technology (MeitY) told the court in its reply that it is not empowered to monitor any online content on the Internet on a suo motu basis.
“Any content/URL/websites on the Internet can only be blocked as per the established legal procedure,” it said.
Deepfake technology facilitates the creation of realistic videos, audio recordings and images that can manipulate and mislead viewers by superimposing the likeness of one person onto another and altering their words and actions, thereby presenting a false narrative or spreading misinformation.
“You (Central government) will have to start working on this. You also start thinking about this. It (deepfake) is going to be a serious menace in the society,” said Acting Chief Justice Manmohan and Justice Tushar Rao Gedela.
Justice Manmohan further said, “You also do some study. It is like what you are seeing and what you are hearing, you can’t believe it. That is something which shocks.

“What I see through my own eyes and what I have heard through my own ears, I don’t have to trust that, this is very, very shocking.”

One plea has been filed by journalist Rajat Sharma against the non-regulation of deepfake technology in the country and seeks directions to block public access to applications and software enabling the creation of such content.
The other petition has been filed by Chaitanya Rohilla, a lawyer, against deepfakes and the unregulated use of artificial intelligence.
The ministry said in its reply that it has taken various steps to address the proliferation of harmful applications (AI-enabled and deepfake) and illegal content.
In order to ensure an open, safe, trusted and accountable digital ecosystem, the Digital Personal Data Protection (DPDP) Act, 2023 has been notified, it said.
The court granted two weeks’ time to the petitioners to file an additional affidavit containing their suggestions and listed the matter for further hearing on October 24.
Rajat Sharma, the Chairman and Editor-in-Chief of Independent News Service Private Limited (INDIA TV), has said in the public interest litigation (PIL) that the proliferation of deepfake technology poses a significant threat to various aspects of society, including misinformation and disinformation campaigns, and undermines the integrity of public discourse and the democratic process.
The PIL said the technology could potentially be used for fraud, identity theft and blackmail, and warned of harm to individual reputation, privacy and security, erosion of trust in media and public institutions, and violation of intellectual property and privacy rights.
It said it is imperative for the government to establish regulatory frameworks to define and classify deepfakes and AI-generated content, and to prohibit the creation, distribution and dissemination of deepfakes for malicious purposes.
The plea said the Centre had stated its intent to formulate regulations for dealing with deepfakes and synthetic content in November 2023, but nothing of the sort has seen the light of day so far.
The petitioner mentioned in the plea that certain unscrupulous people were maintaining social media accounts and uploading fake videos featuring his image and his AI-generated voice to sell or endorse various products such as purported medication for diabetes and fat loss.
The PIL sought a direction to the Centre to identify and block public access to the applications, software, platforms and websites enabling the creation of deepfakes.
The plea also sought that the government be asked to issue a directive to all social media intermediaries to take immediate action to remove deepfakes upon receipt of a complaint from the person concerned.