
Law and Humanities
Quarterly Reviews

ISSN 2827-9735


Published: 16 July 2025

Deepfake Technology in India and World: Foreboding and Forbidding

Biranchi Naryan P. Panda, Isha Sharma

XIM University Bhubaneswar (India)


DOI: 10.31014/aior.1996.04.03.152

Pages: 18-28

Keywords: Deepfake, Artificial Intelligence, Privacy, Regulation, Deep Learning, Digital Media

Abstract

Integrating artificial intelligence technology into the digital environment is not new. Deepfake technology, however, has recently created huge challenges for legal and compliance frameworks across the globe through hyper-realistic, AI-generated manipulated media known as “deepfakes.” These developments pose unique challenges to legal and ethical standards in India and across the globe. This study therefore raises a few crucial questions about personal rights, data privacy, and the preparedness of the Indian legal system to address the unique threats posed by this technology. It also explores the challenges and preparedness at the international level on deepfake regulation, in contrast with the Indian approach. Further, by analysing the technological underpinnings of deepfakes and their potential for misuse, this paper identifies critical gaps in current legal and regulatory mechanisms and proposes a multi-pronged strategy of legal reforms and public awareness campaigns to mitigate this new threat.

 

1.     Introduction

 

The expansion of deepfake technology, driven by progress in artificial intelligence and machine learning, poses intricate legal and compliance difficulties worldwide, with distinct implications for India. The term “deepfake”, a combination of “deep learning” and “fake”, describes the processing of real images, in which voices and facial expressions are often changed without consent. Deepfakes are powered by state-of-the-art artificial intelligence, machine learning, and deep learning (Pu et al., 2021), and there are legitimate concerns that these tools could be misused. Artificial intelligence is used to digitally process different types of media, including audio, video, and image files, to create deepfakes (Artificial Intelligence (AI) Policies in India- A Status Paper, 2020). Digitally manipulated media, by its very nature, has the potential to undermine trust in organisations, damage people’s reputations, and spread falsehoods. Additionally, the convergence of deepfake technology with current legal frameworks, including those related to defamation, privacy, and intellectual property, raises new questions regarding liability, jurisdiction, and the extent of protection granted to individuals and businesses (Qureshi et al., 2024). Although the Indian government has acknowledged that legislation is being worked on, there are currently no specific laws in India, and the current risks demand a more robust and adaptive legal system (Nguyen et al., 2019).

 

In the global context, challenges relating to deepfakes persist even though different approaches have been adopted. For instance, in the United States, some states have enacted legislation addressing the creation and distribution of deepfakes, particularly those used in political campaigns or to create non-consensual pornography. Similarly, the European Union has taken active initiatives towards comprehensive legislation on artificial intelligence that includes deepfakes (Artificial Intelligence (AI) Policies in India- A Status Paper, 2020). Although many jurisdictions have actively worked on technical solutions to combat deepfakes, human verification remains an essential step in the information verification process (Velasco, 2022).

The creation and transmission of deepfakes can have serious political, social, economic, and legal ramifications (Lyu, 2020a, 2020b). Deepfake technology has advanced more quickly than legal and regulatory frameworks, posing a serious problem for legislators and law enforcement organisations around the globe (Mirsky & Lee, 2021). Deepfakes have a wide range of potential applications, from disseminating propaganda and false information to producing non-consensual pornography and influencing financial markets (Pu et al., 2021). Navigating the legal landscape of AI involves dealing with issues such as deepfakes and false information, biased algorithms, and safeguarding personal data (Bharati, 2024). The lack of transparency in AI decision-making procedures raises questions regarding accountability and the possibility of unforeseen repercussions.

 

The legal framework in India for dealing with deepfakes is still developing; while current laws on data protection, information technology, and defamation may be relevant, they are not specifically designed to meet the particular difficulties presented by this technology. A more thorough and sophisticated approach is required because India lacks a dedicated legal framework for deepfakes, which leads to confusion and possible protection gaps. India does not yet have special laws to deal with deepfakes and crimes involving artificial intelligence (Bharati, 2024; Vig, 2024); the legal framework is still being developed, even though the government has recognised the necessity of regulation.

2.     Literature Review

 

In the 1990s, scholars from educational institutions pioneered the development of deepfake technology. Subsequently, individuals with less expertise in online forums also contributed to its development (Nguyen et al., 2019). Lately, industry has adopted these techniques. The term “deepfakes”, a combination of “deep learning” and “fake”, was initially used to describe artificial material that had been digitally altered to effectively replace one person’s likeness with another (Altuncu et al., 2024). The word was first used in 2017 by a Reddit user, and it has since been broadened to include any artificial-intelligence-generated images, video, or audio that seems real, such as lifelike portraits of fictional individuals. While producing false content is not a novel concept, deepfakes make use of machine learning and artificial intelligence tools and techniques, such as facial recognition algorithms and artificial neural networks, including generative adversarial networks (GANs) and variational autoencoders (VAEs) (Masood et al., 2021). The study of image forensics, in turn, creates methods for identifying pictures that have been altered. The potential application of deepfakes in the production of celebrity pornographic films, revenge porn, fake news, hoaxes, bullying, and financial fraud has drawn a lot of attention (Lyu, 2020a, 2020b). By obstructing people’s ability to decide for themselves, set collective agendas, and express political will through informed decision-making, the dissemination of hate speech and misinformation through deepfakes threatens fundamental democratic norms and functions. In response, the government and the information technology sector have issued guidelines for identifying and limiting their use (Democracy in the Age of Generative AI, 2024).

 

Motion picture production quickly adopted the 19th-century technology of photo alteration. Throughout the 20th century, the technology advanced consistently and more quickly with the introduction of digital video. Beginning in the 1990s, academics at academic institutions developed deepfake technology (Shrivastava, 2024). Later, amateurs in online forums also developed this technology. In recent times, industry has embraced these techniques. At first, the main uses of deepfakes were benign and amusing, such as celebrity face swaps or adding faces to motion pictures. But as the technology progressed, so did the potential for misuse. The sophistication of deepfake algorithms led to increasingly smooth and believable manipulations (Mahmud & Sharmin, 2021). This prompted worries about the possibility of nefarious applications, such as political manipulation, slander, or the dissemination of false information.


In or around 2017, a Reddit user going by the handle “deepfakes” started uploading videos that had been altered using AI software; this is when the idea of “deepfakes” first surfaced. With roots in past research on neural networks and image manipulation, the history of deepfakes is entwined with the development of AI and machine learning. A deepfake is made in multiple phases, all powered by AI algorithms (Gamage et al., 2023; Masood et al., 2021): (a) Gathering Data: This entails compiling a sizable dataset of pictures or videos featuring the intended subject. The deepfake becomes more realistic as more information is obtained. (b) Training the Model: Next, a generative adversarial network (GAN) is trained using the gathered data. The discriminator assesses the veracity of the images or videos that the generator attempts to produce. (c) Refinement and Rendering: The model can produce a deepfake once it has received sufficient training. After checking for errors, this output is transformed into a final video or image format.
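The adversarial loop in phase (b) can be illustrated with a deliberately simplified sketch. This is not an actual deepfake system: the “real data” is a single number, the generator is one parameter, and the discriminator is a hand-written scoring rule (all hypothetical names chosen for illustration), but the generate, score, refine cycle mirrors how a GAN’s generator is pushed towards output the discriminator cannot distinguish from real data.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # stands in for the "real data" gathered in phase (a)

def discriminator(x):
    """Scores how 'real' a sample looks: the closer to the real data, the higher."""
    return -abs(x - REAL_MEAN)

def train_generator(steps=2000, step=0.05):
    """Phase (b): adversarial refinement. The generator (here, a single
    parameter g) keeps any random perturbation whose output fools the
    discriminator at least as well as its current output does."""
    g = 0.0
    for _ in range(steps):
        candidate = g + random.uniform(-step, step)
        if discriminator(candidate) >= discriminator(g):
            g = candidate
    return g

# Phase (c): after enough training, the generator's output is statistically
# close to the real data and can be "rendered" into the final output.
g = train_generator()
print(abs(g - REAL_MEAN) < 0.2)
```

In a real deepfake pipeline both generator and discriminator are deep neural networks trained jointly by gradient descent on large image datasets, which is what makes the resulting forgeries so hard to detect.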

 

3.     Research Method

 

The present research is based on in-depth theoretical and analytical study. It concentrates on the rising issue of deepfakes in India and the world, drawing on data from publications in academic journals, articles, and reports from different resources. The papers selected were peer-reviewed, supplemented by credible reports from organisations and government sources. The paper also analyses deepfake case studies from different countries and tries to identify possible solutions. Further, various legal databases, such as LexisNexis, Manupatra, and Westlaw, have been referred to while carrying out the research.

 

 

4.     Discussion & Analysis

 

The first documented usage of deepfake technology is believed to have occurred in 2017, when a Reddit user made use of an openly available AI-driven program designed to create sexual content by superimposing celebrity faces on everyday people’s bodies. Photoshop and other editing programs have been around for many years; the ability to make deepfakes now allows untrained or semi-skilled individuals to alter images and audiovisual recordings. An alert pointing out that disinformation-creation and -spreading technologies are now more easily accessible, less expensive, quicker, and easier to use was issued in 2020 by the Deep Trust Alliance, a coalition of stakeholders from business and civil society (Barman et al., 2024). Artificial intelligence (AI) is used to edit and modify digital media, such as audio, video, and photographs, to create deepfakes. Because they rely on hyper-realistic digital falsification, deepfakes can be exploited to manufacture evidence, harm reputations, and erode trust in democratic institutions (Artificial Intelligence (AI) Policies in India- A Status Paper, 2020). Deepfakes can be categorised broadly into three types of harm: (a) the political damage of deepfakes; (b) the reputational damage of deepfakes; and (c) the financial damage of deepfakes.

 

4.1   Political Damage of Deepfakes

 

Manoj Tiwari deepfake: Under the auspices of the BJP, an Indian politician employed deepfake technology to transform his previous speech on the Citizenship Amendment Act into a new phony speech on the Delhi elections. Artificial intelligence technology was used to manipulate the content of the speech. In order to reach Haryanvi-speaking eligible voters, the address, originally delivered in Hindi, was dubbed and broadcast in the Haryanvi language (Alavi, 2023).

 

Vladimir Putin deepfake: A nonpartisan advocacy organisation known as ‘RepresentUs’ was responsible for creating deepfakes of Russian president Vladimir Putin. The deepfakes of Kim Jong-un and Vladimir Putin were intended to be broadcast as commercials depicting interference with elections in the United States of America, in order to shock Americans into awareness of the precarious nature of their democracy and to demonstrate the power of the media, credible or not, to influence the course of the country (Dictators, 2023).

 

Ranveer Singh deepfake: Ranveer Singh, the Indian actor, lodged a complaint regarding a widely distributed deepfake video in which he was shown as backing a political party. The video, an interview he gave to the news agency ANI on his recent trip to Varanasi, is authentic; however, the audio was produced using an AI-powered technology. In the deepfake video, Ranveer Singh is heard expressing disapproval of Prime Minister Narendra Modi’s handling of unemployment and inflation. Messages urging people to vote for the Congress were the final touches on the edited video. The team representing Ranveer Singh verified the filing of a First Information Report (FIR) and the subsequent commencement of an inquiry into the matter (Das, 2024). Following the deepfake’s viral spread, he warned his Instagram followers, “Deepfake se bacho doston (Friends, beware of deepfakes)”.

 

4.2   Reputational Damage of Deepfakes

 

Rashmika Mandanna deepfake: A deepfake video featuring Rashmika Mandanna has been extremely popular, with over 2.4 million views. Deepfake technology, one of the most insidious forms of cyberbullying, recently targeted the Indian actor. Combining “deep learning” with false information, deepfakes create fictional likenesses of persons using AI technology; these likenesses can be so similar to the originals that they are hard to tell apart. With or without their permission, deepfake photos can be an effective tool for changing someone’s digital profile, including face swapping and modifying attributes such as skin tone and hair length (Alavi, 2023). Although the recent advances of artificial intelligence are impressive, Internet users should be aware of the serious risks it poses to their safety. The legal protection available against deepfake photographs has come under scrutiny due to the increasing number of examples of online harassment and abuse, including the leaking, or threatened release, of deepfake images of individuals.

 

Barack Obama deepfake: On 17 April 2018, a deepfake video was released on YouTube in which Barack Obama called Donald Trump derogatory names and insulted him. BuzzFeed and Monkeypaw Productions, along with the American actor Jordan Peele, were responsible for the creation and production of this deepfake. The purpose of the video was to expose the potency and massively harmful implications of deepfakes, which can make anyone appear to say anything (Artificial Intelligence (AI) Policies in India- A Status Paper, 2020).

 

5.     Financial Damage of Deepfakes

 

UK-based energy company: Deepfakes can also result in enormous financial losses, as was the case in 2019, when the chief executive officer of a UK-based energy company, under the impression that he was on the phone with his boss (the chief executive of the firm’s German parent company), simply followed the directive and transferred €220,000 (approximately $243,000) to the bank account of a Hungarian supplier (Rouhollahi, 2021). As it turned out, the voice was that of a criminal who had used artificial intelligence voice-modification technology to imitate the actual voice of the chief executive. This was one of the first instances in which artificial intelligence was employed to replicate a real voice.

 

The $25 Million Deepfake Heist: In one notable instance, a Hong Kong finance worker was duped into disbursing $25 million following a video conference with deepfake counterparts of his coworkers (Chen & Magramo, 2024; Tan, 2024). In order to facilitate the fraud, the deepfake personas carefully imitated the actions of actual people (Hong Kong MNC Suffers $25.6 Million Loss in Deepfake Scam, 2024). Only after contacting the company's headquarters directly did the employee discover the deceit (Koh, 2024).

 

Deepfake proliferation has resulted in a new set of regulatory challenges and worries. Regulators must act now to balance the interests of digital businesses, creative industries, healthcare, consumers, and other stakeholders while establishing boundaries for the use of the technology as it grows. Catching the most harmful individuals is one of the toughest issues in enforcement, since they frequently operate anonymously, adapt swiftly, and disseminate their synthetic creations over international online platforms. Deepfakes also have the potential to restrict free speech, especially political speech, since they can be used to disseminate inaccurate or misleading information.

 

6.     Regulation of Deepfakes in a Few Emergent Countries

The advent of deepfakes has taken everyone by surprise: it is both frightening and a significant technical advancement. Many people have started to notice deepfakes, but the technology has serious drawbacks, aiding the spread of false information and fraudulent actions. Do all countries ban this technology? Deepfakes, a combination of “deep learning” and “fake”, are in this sense cyber manipulations. The necessity for thorough deepfake regulation is being increasingly acknowledged worldwide. International organisations such as the United Nations and the World Economic Forum have begun discussing how to legally control the technology to protect victims. These debates highlight the need to protect freedom of expression and innovation while avoiding harm. Lawmakers should work to reduce the dangers of technology that creates synthetic images and videos. This review examines the main mechanisms of criminalisation in various countries, focusing on the United States, the United Kingdom, China, the European Union, South Korea, and Canada. Businesses have responded by experimenting with better methods to detect and isolate counterfeits (Deepfake Disclosure Laws: Global Approaches 2024, 2025). So what precautions are countries taking to control the use of this technology? Here are some examples of national approaches:

 

6.1 People’s Republic of China

                                                                                                                                                           

In 2019, the Chinese government issued regulations requiring public disclosure of the use of deepfake footage in movies and many other forms of media. These restrictions also prohibit the distribution of deepfake content unless there is a clear and unambiguous indication that the content is synthetic. In addition, China has implemented regulations governing technology developers through the Cyberspace Administration of China (CAC), which came into force on January 10, 2023 (Lawson, 2023). Those involved in the production and use of deep-synthesis systems are responsible for all aspects of development, regulation, and advertising. Businesses and individuals who use such systems must comply with rules for creating, publishing, broadcasting, or editing content, including obtaining consent, verifying identity, registering information with the government, reporting illegal content, and fulfilling other obligations (Lawson, 2023). This shows China's proactive approach to managing the risks associated with AI and deepfake technology (Artificial Intelligence (AI) Policies in India- A Status Paper, 2020).

 

6.2 Canada

 

The three pillars of Canada’s approach to deepfakes, detection, response, and prevention, are fully integrated. To prevent deepfake fraud from occurring and spreading, the Canadian government is working to develop prevention mechanisms and increase public understanding of the relevant technologies (Tunney, 2024). To achieve this, the government is now considering implementing laws that will stop the misuse or distribution of deepfakes. Canadian law currently prohibits sharing intimate personal photos without permission (Siekierski, 2019).

 

Similar to the California Elections Act, the Canada Elections Act contains restrictions that could potentially apply to deepfakes. Recently, Canada has taken two further precautions to decrease the negative effects of deepfakes: the plan to “safeguard Canada’s 2019 election” and the “Critical Election Incident Public Protocol”, a panel investigation process for deepfake occurrences (Tunney, 2024). Both measures are examples of the mitigation strategies that Canada has implemented. However, many argue that a better mechanism is still required in Canada to combat deepfake issues (Canada Needs Deepfake Legislation Yesterday, 2024).

 

6.3 South Korea

 

Considering the significant technological advancements that South Korea has made, it was one of the first countries to invest in artificial intelligence research and regulatory study. In January of 2016, the government of South Korea announced that it would allocate one trillion won, which is equivalent to approximately 750 million USD, to research into artificial intelligence over the course of five years (Werner, 2024). In December of 2019, South Korea presented its National Artificial Intelligence Strategy.

 

In 2020, South Korea adopted a regulation that made it illegal to disseminate deepfakes that have the potential to “cause harm to public interest” (Lyons, 2024). Those who violate the law face up to five years in prison or fines of up to fifty million won, equivalent to approximately forty-three thousand US dollars. In an effort to prevent sexual offences and digital pornography, advocates ask South Korea to take further steps, such as education, civil remedies, and recourse (Cabinet Passes New Bill to Criminalize Even Possession of Deepfake Porn, 2024).

 

6.4 United States

 

The United States was the first nation to react to artificial intelligence technologies. The “Malicious Deep Fake Prohibition Act of 2018” was introduced in the U.S. Congress in December 2018, making it the first piece of legislation to provide a definition for the term “deepfake” (S. 3805, 2023). The “Deepfakes Accountability Act” was proposed in June 2019; nevertheless, the public has raised concerns and objections over its ambiguous definitions and its potential conflict with the First Amendment of the United States Constitution. In 2019, Congress introduced the “Deepfake Report Act”, which mandates the U.S. Department of Homeland Security to periodically publish assessment reports on deepfake technology (DEEP FAKES Accountability Act, 2021). Furthermore, certain jurisdictions promptly address instances of deepfake misuse, particularly in relation to pornographic videos and political elections.

 

Only a small proportion of US states have enacted legislation concerning deepfakes. In 2019, Texas enacted S.B. 751, while California enacted AB 730; both regulations prohibit the use of deepfakes that have the potential to manipulate forthcoming elections. California’s AB 602, Georgia’s S.B. 337, and Virginia’s SB 1736 were all passed in 2019 to prohibit the production and distribution of non-consensual deepfake pornography. In 2020, New York’s law S6829A granted individuals the right to take legal action against the unauthorised publication of deepfakes. More recently, various states introduced and passed a series of measures aimed at regulating artificial intelligence (AI) during 2022 (Deceptive Audio or Visual Media (“Deepfakes”) 2024 Legislation, 2024). Although there have been advancements in US legislative developments related to AI, the majority of states still lack specific legislation that addresses AI in a comprehensive manner, whether introduced or enacted (Kaplan, 2025).

 

6.5 European Union

 

At first, the EU did not create specific laws for “deepfake” technology. However, they have implemented a set of regulations and initiatives to include ‘deepfake’ within their regulatory framework. These measures aim to restrict the use of deepfakes in areas such as combating disinformation, safeguarding personal information, and regulating artificial intelligence (Velasco, 2022).

 

Perhaps the first global effort to regulate AI comprehensively is the European Union’s Artificial Intelligence Act (EU AI Act). The aim of the instrument is to make the EU a hub for trustworthy artificial intelligence by setting common rules for the development, commercialisation, and use of AI in the EU. Its stated aim is to ensure that all AI systems in the EU are safe and respect the fundamental rights and ethical standards of all individuals. The main objectives the law seeks to achieve are: (a) to promote investment and innovation in the field of artificial intelligence, (b) to strengthen AI governance, and (c) to pave the way for the emergence of an EU-wide market for trustworthy AI (Velasco, 2022). The target date for the act was early 2024, before the European Parliament elections in June 2024, and the rules will not come into full force until a transition period ends. Alongside the General Data Protection Regulation, the Digital Services Act, and other digital-economy legislation, the Artificial Intelligence Act forms part of a wider body of EU law regulating the digital economy; areas such as data protection, online platforms, and content environments are not explicitly covered by the AI Act itself.

 

7.     Existing Regulations on Deepfakes in India

 

Although deepfakes and AI-related crimes are not specifically covered by any laws in India, there are provisions under numerous statutes that may provide both criminal and civil remedies.

 

Information Technology Act, 2000 (IT Act): Section 66E of the IT Act penalises violation of privacy, that is, intentionally capturing, publishing, or transmitting the image of a private area of any person without consent. The penalty is a fine of up to two lakh rupees or imprisonment for up to three years (WIPO Lex, 2000). Additionally, Section 43 of the IT Act addresses unauthorised access to computer systems, including systems used to create deepfakes. Any person who cheats by personation using a computer resource or communication device, including through advanced technology, is punishable under Section 66D of the IT Act with imprisonment of up to three years and/or a fine of up to one lakh rupees (WIPO Lex, 2000).

 

IT Act’s Sections 67, 67A, and 67B on Obscene Content: These sections grant the legal right to prosecute people who share deepfake content that contains explicit or obscene sexual content. Social media companies risk losing their “safe harbor” protection if they do not take swift action to delete “artificially morphed images” (Artificial Intelligence (AI) Policies in India- A Status Paper, 2020).

 

IPC Provisions for Cybercrimes: The Indian Penal Code, 1860 has provisions that can be used to deal with deepfake-related cybercrimes, including Section 509 (word, gesture, or act intended to insult the modesty of a woman), Section 499 (defamation), and Sections 153A and 153B (promoting enmity and hatred between groups). Deepfakes may also be charged under Section 469 of the Indian Penal Code, which covers forgery for the purpose of harming reputation (WIPO Lex, 2000). The Ministry of Electronics and Information Technology has also published amended rules, the Information Technology [Intermediaries Guidelines (Amendment) Rules, 2018], which require intermediaries to remove or disable access to unlawful content within 24 hours of receiving a complaint (Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 for an Open, Safe & Trusted and Accountable Internet, 2023). This mechanism can be used to have deepfake content taken down in India.

 

Bharatiya Nyaya Sanhita (BNS) & Cyber Crime: The BNS has stringent provisions to address offences associated with cybercrime in light of technological progress. Section 294 outlaws the publication and transmission of obscene content, including in electronic formats; violations result in imprisonment and fines, with more severe penalties for repeat offences. Section 77 addresses voyeurism, which involves the unauthorised capture or dissemination of images of a woman's intimate parts or actions. Additionally, Section 316 addresses the misappropriation of digital assets, including data or currency acquired through online methods. Sections 336, 354, and 356 address offences such as email spoofing and online forgeries; these sections also impose penalties, including imprisonment and fines, for defamation, such as the transmission of defamatory content by email or other digital media. Other provisions, such as Section 111, address organised crime, encompassing cybercrimes perpetrated by gangs, including internet scams and data trafficking.

 

Copyright Act for Unauthorised Use: The Copyright Act, 1957 may apply if copyrighted content is used to create a deepfake. Under Section 51 of the Act, it is an infringement to use content over which another person holds exclusive rights without that person's permission. This provides a legal avenue to deal with copyright issues arising from deepfakes (JOLT, 2023).

India currently lives in the era of hyper-realistic artificial intelligence, or “deepfakes,” which poses particular challenges to our ethical and legal systems. Deepfakes have the potential to transform storytelling and entertainment, but if used inappropriately they can cause serious damage and raise serious concerns about human rights, personal data, and the truth itself.

 

Deepfakes are not currently covered by any specific laws in India. Nonetheless, the present legal structure provides just a few protections:

·       Defamation: Deepfakes that propagate false information or harm someone’s reputation may be subject to legal action under defamation statutes.

·       Right to Privacy: Deepfakes used for harassment or voyeurism may be actionable under the Information Technology Act, 2000 provisions against the unlawful dissemination of personal information.

·       Copyright Infringement: It may be against someone’s intellectual property rights to use their likeness without their consent in a deepfake.

·       Cybercrime: Violations of applicable cybercrime legislation may result in prosecution for deepfakes used for financial fraud or other malevolent conduct.

The need for comprehensive regulation that especially tackles deepfakes is highlighted by the piecemeal remedies provided by the current laws.

 

8.     Recent Developments in India

 

Significantly, private stakeholders, some of whom also happen to be major AI investors, have played a part in putting preventive measures into place. In addition to promoting ethical AI development, Google is in discussions with the Indian government to arrange a “multi-stakeholder discussion” aimed at addressing the difficulties associated with handling deepfaked content. Additionally, Google and the Indian Institute of Technology, Chennai, collaborated to establish a think-tank with the goal of developing rules and regulations for the ethical application of AI technology.

 

Meanwhile, the Indian judiciary has curbed unlawful deepfake exploitation. Popular Indian actor Anil Kapoor recently sought protection against internet misuse of his name, image, publicity rights, persona, voice, and other attributes. To promote their products, the defendants had employed AI deepfake technology to produce derogatory content superimposing Mr. Kapoor's face, personality, and movie dialogues onto other celebrities' bodies. The court granted a three-pronged injunction prohibiting the defendants from utilising Mr. Kapoor's name, likeness, voice, or persona in any way, through machine learning, AI, deepfakes, or face morphing, whether for profit or not. It found prima facie evidence that his persona, image, and other attributes had been used without authorisation or legal basis, and irreparable loss or injury to the actor, including social and economic harm and violation of his right to a dignified life. Notably, such reliefs are not new in India: similar relief was granted to Mr. Amitabh Bachchan in 2022 when his popularity and public persona were exploited to advertise goods and services.

 

In its most recent advisory, dated November 07, 2023, the Ministry of Electronics and Information Technology instructed the major social media intermediaries to:

a.      Exercise due diligence and take reasonable measures to identify misinformation and deepfakes, particularly content that contravenes laws, rules, and/or user agreements.

b.     Act on such cases expeditiously, well within the timelines prescribed under the IT Rules, 2021.

c.      Caution users against hosting such information, content, or deepfakes.

d.     Remove any such content within 36 hours of its being reported.

e.      Ensure that prompt action is taken, well within the timelines prescribed under the IT Rules, 2021, and that access to the content/information is disabled.
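As a rough illustration, the advisory's 36-hour removal window translates into a simple deadline computation. The function names below are hypothetical, not part of any official compliance tooling.

```python
from datetime import datetime, timedelta

# The MeitY advisory requires reported deepfake content to be removed
# within 36 hours of the report.
REMOVAL_WINDOW = timedelta(hours=36)

def removal_deadline(reported_at: datetime) -> datetime:
    """Deadline by which reported deepfake content must be taken down."""
    return reported_at + REMOVAL_WINDOW

def is_compliant(reported_at: datetime, removed_at: datetime) -> bool:
    """True if the takedown happened within the advisory's window."""
    return removed_at <= removal_deadline(reported_at)
```

For example, content reported at 09:00 on 7 November must be removed by 21:00 on 8 November.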

Particular Restrictions on the Copyright System in the Management of Deepfake Technologies

Individuals are frequently granted copyright in order to safeguard their financial interests in the monetization of their creative output.

 

There are currently no laws governing the usage of AI in India. The lack of limitations has allowed anyone with access to tools like ChatGPT or Midjourney to use them freely and without fear of consequences, which has been a boon for advances in the AI sector. That said, a number of policy frameworks have been adopted during the last five years, including the first National AI Strategy (#AIFORALL) published by NITI Aayog in 2018 and the Principles for Responsible AI published in 2021. In 2023, the government also introduced the Digital Personal Data Protection (DPDP) Act, which regulates how digital personal data is processed in India. These frameworks, however, focus more on how AI uses data and how AI-enabled resources benefit various sectors. Regulations prohibiting the misuse of AI at the consumer level, such as the misuse of voice samples, pictures, or videos, do not yet exist.

 

9.     Addressing the Threat of Deepfake

 

The accessibility of deepfake technology to non-professionals has led to a surge in fake audio and video content. The 2018 multi-institution report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" highlights the risks of harmful AI use, including deepfake technology.

 

The Effect on Individuals: According to The State of Deepfakes: Landscape, Threats, and Impact, 96% of deepfake videos were pornographic and targeted women. An early application of GANs was to produce fake pornographic videos, particularly revenge porn and celebrity fakes. Pornographic deepfakes inflict significant harm on women, including professional discrimination, emotional and reputational harm, sexual exploitation, and even threats of rape and murder (Kowalski et al., 2023).

 

Societal Effects: Deepfake technology blurs truth and illusion, risking a crisis of trust. In March 2021, the Hongkou District People's Procuratorate of Shanghai Municipality prosecuted a massive fraudulent VAT invoice scheme: to defeat facial-recognition checks and generate false VAT invoices, the offenders fabricated motion videos of nodding, head shaking, blinking, and mouth opening from high-definition profile photos and ID card information. Deepfake technology can also fuel misinformation, conflict, and social unrest; WhatsApp rumours of kidnapping and other crimes led to 20 violent deaths in India in 2018. It likewise endangers legal practice and the justice system: as courts adopt artificial intelligence, detection tools that cannot keep pace with deepfake technology may err, undermining judicial fairness and victims' interests (Artificial Intelligence (AI) Policies in India- A Status Paper, 2020).

 

Nation-State Effects: There are significant concerns surrounding the impact of political deepfake videos on elections. Videos of this kind were directed at President Joe Biden during the 2020 US election. Deepfakes also have the potential to damage diplomatic and social relationships; disinformation incidents have played a role in political crises in the Middle East. Fake videos depicting politicians and national leaders have proliferated on social media in recent years, and the rapid advancement of deepfake technology is causing serious problems by making such videos more realistic and harder to detect, raising ever greater risks for political actors (Artificial Intelligence (AI) Policies in India- A Status Paper, 2020; Kipkemboi et al., 2024).

 

10.   Conclusion

 

From the above discussion, it is clear that no jurisdiction, including India, has enacted comprehensive deepfake legislation. Recently, however, the G7 released draft guiding principles for organizations developing advanced AI systems, seeking to promote safety, security, and trust in the development and use of AI (Kipkemboi et al., 2024). Deepfake technology shows a clear duality: alongside its creative and entertaining possibilities, concerns about abuse raise serious questions. As deepfake algorithms continue to evolve, it is important to remain vigilant and to develop strategies that minimize harm. By raising public awareness, investing in detection research, and implementing appropriate laws and regulations, we can navigate the challenges posed by deep learning and protect the integrity, security, and trust of the digital world. The cases discussed above illustrate the importance of understanding the technology in its entirety: not only its mechanisms and capabilities, but also the consequences it can bring.

 

Given the popularity of this technology, understanding how it works and how to scrutinise its output is important for individuals and companies alike. Information literacy, training, and familiarity with evidence-based verification processes are critical to navigating and thriving in this ever-changing environment. In the age of deepfakes, distinguishing real content from fake can be a serious problem. The following practices can help in analysing information and data: (a) using verification tools; (b) cross-referencing sources; (c) applying critical thinking before accepting content at face value. In some cases, existing consumer protection laws may come into play, especially when fabricated content is used to deceive or defraud people. Seeking to balance technological progress with public education, many advocacy groups, including the Electronic Frontier Foundation, the Coalition for Content Provenance and Authenticity, and the Witness Media Lab, have campaigned against the misuse of deepfakes for harassment, illegal activity, fraud, and obscenity. In some cases, regulators must identify legal avenues and explore other means of redressing human rights violations and protecting identity, personal data, and legal rights.
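The cross-referencing tip above can be made concrete with a minimal sketch: comparing a media file's cryptographic fingerprint against a registry of hashes published by the original source. Provenance standards such as C2PA operate on a far richer model; the registry and byte strings here are hypothetical, purely for illustration.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_known_original(data: bytes, registry: set[str]) -> bool:
    """True if this exact file was registered by the original publisher.

    A non-match does not prove a deepfake (any re-encoding changes the
    hash), but a match confirms the bytes are untouched.
    """
    return fingerprint(data) in registry
```

A newsroom could publish the hashes of its official videos; a viewer verifying a forwarded clip against that registry can at least confirm whether the file is byte-identical to the original release.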

 

 

Author Contributions: All authors contributed to this research.

 

Funding: Not applicable.

 

Conflict of Interest: The authors declare no conflict of interest.

 

Informed Consent Statement/Ethics Approval: Not applicable.

 

Declaration of Generative AI and AI-assisted Technologies: This study has not used any generative AI tools or technologies in the preparation of this manuscript.

References

Alavi, A. M. (2023). Video. https://www.ndtv.com/video/news/news/in-bjp-s-deepfake-video-shared-on-whatsapp-leader-speaks-in-2-languages-541161
Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics code) rules, 2021 for an Open, Safe & Trusted and Accountable Internet. (2023). https://www.pib.gov.in/PressReleasePage.aspx?PRID=1914358
In BJP’s Deepfake Video Shared On WhatsApp, Leader Speaks In 2 Languages. (2020). https://www.youtube.com/watch?v=XbrpkxIfb0M
Altuncu, E., Franqueira, V. N. L., & Li, S. (2024). Deepfake: definitions, performance metrics and standards, datasets, and a meta-review [Review of Deepfake: definitions, performance metrics and standards, datasets, and a meta-review]. Frontiers in Big Data, 7. Frontiers Media. https://doi.org/10.3389/fdata.2024.1400024
Barman, D., Guo, Z., & Conlan, O. (2024). The Dark Side of Language Models: Exploring the Potential of LLMs in Multimedia Disinformation Generation and Dissemination. Machine Learning with Applications, 16, 100545. https://doi.org/10.1016/j.mlwa.2024.100545
Bharati, R. (2024). Navigating the Legal Landscape of Artificial Intelligence: Emerging Challenges and Regulatory Framework in India. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4898536
Cabinet passes new bill to criminalize even possession of deepfake porn. (2024). https://www.koreatimes.co.kr/www/nation/2025/02/113_383957.html
Canada needs deepfake legislation yesterday. (2024). https://policyoptions.irpp.org/magazines/march-2024/deepfake-law-urgent/
Das, S. (2024). Video Of Ranveer Singh Criticising PM Modi Is A Deepfake AI Voice Clone. https://www.boomlive.in/fact-check/viral-video-bollywood-actor-ranveer-singh-congress-campaign-lok-sabha-elections-claim-social-media-24940
Deceptive Audio or Visual Media (“Deepfakes”) 2024 Legislation. (2024). https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
Deepfake Disclosure Laws: Global Approaches 2024. (2025). https://www.scoredetect.com/blog/posts/deepfake-disclosure-laws-global-approaches-2024
DEEP FAKES Accountability Act. (2021). https://www.govtrack.us/congress/bills/117/hr2395
Dictators. (2023). https://www.youtube.com/watch?v=ERQlaJ_czHU&feature=youtu.be
Gamage, D., Raveenthiran, H., & Sasahara, K. (2023). Moral intuitions behind deepfake-related discussions in Reddit communities. https://doi.org/10.31235/osf.io/mznge
JOLT. (2023). Deepfakes and the Copyright Connection: Analysing the Adequacy of the Present Machinery. https://jolt.richmond.edu/2022/01/25/deepfakes-and-the-copyright-connection-analysing-the-adequacy-of-the-present-machinery/
Kaplan, C. (2025). What Legislation Protects Against Deepfakes and Synthetic Media? https://www.halock.com/what-legislation-protects-against-deepfakes-and-synthetic-media/
Koh, S. (2024). HK firm scammed of $34 million after employee duped by video call with deepfake of CFO. https://www.straitstimes.com/asia/east-asia/hk-firm-scammed-of-34-million-after-employee-is-duped-by-video-call-with-deepfake-of-cfo
Kowalski, J., Air, C., & Kamal, O. (2023). Deepfakes and their impact on women. https://www.dacbeachcroft.com/en/articles/2021/august/deepfakes-and-their-impact-on-women/
Łabuz, M., Nehring, C. On the way to deep fake democracy? Deep fakes in election campaigns in 2023. Eur Polit Sci 23, 454–473 (2024). https://doi.org/10.1057/s41304-024-00482-9
Lawson, A. (2023). A Look at Global Deepfake Regulation Approaches. https://www.responsible.ai/post/a-look-at-global-deepfake-regulation-approaches
Lyons, E. (2024). South Korea set to criminalize possessing or watching sexually explicit deepfake videos. https://www.cbsnews.com/news/south-korea-deepfake-porn-law-ban-sexually-explicit-video-images/
Lyu, S. (2020a). DeepFake Detection: Current Challenges and Next Steps. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2003.09234
Lyu, S. (2020b). Deepfake Detection: Current Challenges and Next Steps. https://doi.org/10.1109/icmew46912.2020.9105991
Mahmud, B. U., & Sharmin, A. (2021). Deep Insights of Deepfake Technology : A Review [Review of Deep Insights of Deepfake Technology : A Review]. arXiv (Cornell University). Cornell University. https://doi.org/10.48550/arxiv.2105.00192
Masood, M., Nawaz, M., Malik, K. M., Javed, A., & Irtaza, A. (2021). Deepfakes Generation and Detection: State-of-the-art, open challenges, countermeasures, and way forward. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2103.00484
Mirsky, Y., & Lee, W. (2021). The Creation and Detection of Deepfakes [Review of The Creation and Detection of Deepfakes]. ACM Computing Surveys, 54(1), 1. Association for Computing Machinery. https://doi.org/10.1145/3425780
Nguyen, T. T., Nguyen, C. M., Nguyen, D. T., Nguyen, D. T., & Nahavandi, S. (2019). Deep Learning for Deepfakes Creation and Detection. arXiv (Cornell University). http://arxiv.org/pdf/1909.11573.pdf
Pu, J., Mangaokar, N., Kelly, L., Bhattacharya, P., Sundaram, K., Javed, M., ... & Viswanath, B. (2021, April). Deepfake videos in the wild: Analysis and detection. In Proceedings of the Web Conference 2021 (pp. 981-992).
Qureshi, S. M., Saeed, A., Almotiri, S. H., Ahmad, F., & Ghamdi, M. A. A. (2024). Deepfake forensics: a survey of digital forensic methods for multimodal deepfake identification on social media. PeerJ Computer Science, 10. https://doi.org/10.7717/peerj-cs.2037
Rouhollahi, Z. (2021). Towards Artificial Intelligence Enabled Financial Crime Detection. arXiv (Cornell University). https://doi.org/10.48550/arXiv.2105.10866
S. 3805. (2023). https://www.govinfo.gov/content/pkg/BILLS-115s3805is/pdf/BILLS-115s3805is.pdf
Siekierski, B. J. (2019). Deep Fakes: What Can Be Done About Synthetic Audio and Video? https://lop.parl.ca/sites/PublicWebsite/default/en_CA/ResearchPublications/201911E
Tunney, C. (2024). AI-powered disinformation is spreading — is Canada ready for the political impact? https://www.cbc.ca/news/politics/ai-deepfake-election-canada-1.7084398
Voice fraud scams company out of $243,000. (2019). https://blog.avast.com/deepfake-voice-fraud-causes-243k-scam
Velasco, C. (2022). Cybercrime and Artificial Intelligence. An overview of the work of international organizations on criminal justice and the international applicable instruments. ERA Forum, 23(1), 109. https://doi.org/10.1007/s12027-022-00702-z
Werner, J. (2024). South Korea Unveils Unified AI Act. https://babl.ai/south-korea-unveils-unified-ai-act/
WIPO Lex. (2000). https://www.wipo.int/wipolex/en/text/185998

