SC Dismisses PIL On Deepfakes Targeting Colonel Sofiya Qureshi

The Supreme Court has reportedly dismissed a public interest litigation (PIL) that sought the Court’s intervention to curb AI-generated deepfake videos of Colonel Sofiya Qureshi being circulated online.

Colonel Qureshi, an officer in the Indian Army, was part of the team that regularly briefed the media during Operation Sindoor, India’s response to the gruesome Pahalgam terrorist attack in April 2025.

The PIL urged the SC to set up a court-appointed expert panel for drafting a legal framework to tackle the issue of deepfakes.

While the bench comprising Justices Surya Kant and N Kotiswar Singh acknowledged that it is a “serious issue”, the SC pointed out that several such cases are already being heard by the Delhi High Court (HC).

“We are not saying that it is not a serious issue but the Delhi High Court has been hearing this issue for a couple of years. If we entertain this petition, the High Court will stop hearing the pending matter and all its hard work over the years will go in vain. It will be appropriate, if you approach the Delhi High Court,” the bench was quoted as saying by Live Law.

The petitioner called for legal reforms to address the misuse of deepfakes, asserting that such synthetic content can damage an individual’s dignity and threaten national security. However, the SC dismissed the PIL and suggested that the petitioner approach the Delhi HC instead.

The Rise Of Deepfakes

The term “deepfake” originated with a Reddit user going by “Deepfakes”, who, back in 2017, used off-the-shelf AI tools to paste celebrities’ faces onto pornographic videos.

The commoditisation of cloud computing, access to open-source AI models and the availability of vast troves of data and media have given rise to a wide range of deepfakes today, spanning video, audio and image-based content.

Social media is flooded with deepfake videos featuring numerous well-known figures, sparking an online debate about the misuse of AI in creating deceptive and harmful content.

From the late Ratan Tata, Mukesh Ambani and Narayana Murthy to Nora Fatehi, Sachin Tendulkar and Virat Kohli, several Indian personalities have fallen prey to deepfake videos in recent times.

Last year, NSE CEO Ashishkumar Chauhan and his BSE counterpart Sundararaman Ramamurthy were also targeted in deepfake videos.

It is pertinent to note that although the Ministry of Electronics and Information Technology (MeitY) has issued advisories on deepfakes in the past, it is yet to come up with a dedicated regulation.

In January last year, it was reported that MeitY was considering amending the IT Rules, 2021 to “explicitly define deepfakes and make it obligatory for all intermediaries to make reasonable efforts not to host them”.

In March 2025, the IT ministry submitted a status report before the Delhi HC outlining key recommendations on the management and regulation of deepfakes. In the report, industry stakeholders rued the lack of a standard definition for deepfakes and called for regulation around “mandatory AI content disclosure”. They also sought mandatory AI labelling standards and grievance redressal mechanisms to crack down on deepfakes.

While IT minister Ashwini Vaishnaw and former minister of state (MoS) for IT Rajeev Chandrasekhar are said to have held several closed-door meetings with AI companies and experts on the need for legislation on deepfakes, nothing concrete has materialised yet.
