Deep Dive into Deepfakes
A deepfake of actor Rashmika Mandanna went viral on social media platforms. Deepfake technology has also been used in recent years to create a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram, and to steal millions of dollars from companies by mimicking their executives’ voices on the phone.
During the ongoing Russia-Ukraine war, cybercriminals hacked a Ukrainian television channel and aired a fake video of the Ukrainian President, Volodymyr Zelenskyy, appearing to surrender. The video was created using deepfake technology.
A few years ago, a deepfake video was reportedly created of Bharatiya Janata Party (BJP) leader Manoj Tiwari speaking in Haryanvi, Hindi, and English. The video was circulated via various WhatsApp groups ahead of the Legislative Assembly elections in Delhi in 2020.
The most recent Indiana Jones movie shows actor Harrison Ford de-aged by 40-plus years. The moviemakers used artificial intelligence to comb through all of the decades-old footage of the actor and create a younger Ford.
These are just a few examples of deepfakes and their uses. It seems we are simply waiting for a major incident before moving towards stringent regulation.
What are deepfakes?
Deepfakes are highly realistic video, audio, or image forgeries or replicas generated using AI. The technologies behind them include machine learning (ML) and, in particular, Generative Adversarial Networks (GANs). ML is a subset of AI that enables systems to learn and improve from the data they collect.
GANs are a type of machine learning model that pits two neural networks (a generator and a discriminator) against each other to produce data that looks real rather than AI-generated.
The generator produces fake data, such as images or video frames, while the discriminator tries to distinguish real data from fake. This adversarial feedback loop makes the generator progressively better at creating fake data and the discriminator better at detecting it, an iterative process that results in highly convincing fakes.
These algorithms are trained on large data sets of images of the source person, though a fairly convincing deepfake can be created with as few as 300 images. Today, even a single photo of the source can be enough to generate deepfake content.
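To make the generator-discriminator feedback loop concrete, the sketch below shows a minimal GAN training loop in PyTorch (the framework choice is an assumption; any deep learning library would do). It learns a toy 2D point distribution rather than faces, and real deepfake systems use far larger convolutional networks and data sets, but the adversarial structure is the same.

```python
# Minimal GAN training loop in PyTorch, illustrating the generator/discriminator
# feedback described above. Toy setting: the "real" data are 2D points rather
# than face images; production deepfake models use large convolutional networks.
import torch
import torch.nn as nn

latent_dim = 8   # size of the random noise vector fed to the generator
data_dim = 2     # toy "real" data: 2D points standing in for image pixels

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),        # raw score: higher means "looks real"
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" samples drawn from the target distribution (stand-in for real photos).
    real = torch.randn(64, data_dim) * 0.5 + torch.tensor([2.0, -1.0])
    fake = generator(torch.randn(64, latent_dim))

    # 1) Train the discriminator to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into labeling fakes "real".
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

With each iteration the generator's samples drift closer to the real distribution, which is the same dynamic that, at the scale of millions of face images, yields convincing deepfakes.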
Benefits of Deepfake Technology
Accessibility for Disabled Individuals:
Voice Cloning: Companies like Lyrebird offer voice synthesis to help those with conditions such as ALS to communicate, using deepfake technology to clone their voices, enabling them to maintain their unique vocal identity.
Innovations in Entertainment:
Enhanced Special Effects: Deepfakes provide more lifelike and cost-effective special effects in filmmaking.
Posthumous Performances: Enables the respectful use of an actor's likeness to complete or create new performances after their demise.
Multilingual Representation: Platforms like Synthesia have enabled personalities like David Beckham to spread malaria awareness in nine different languages.
Interactive and Engaging Education:
Simulation-Based Learning: Creates immersive educational experiences, making academic content more engaging.
Medical Training: Allows for realistic surgical simulations, giving medical trainees exposure to near-real-life scenarios without the risk to patients.
Enhancement in Journalism and Awareness:
Historical Recreations: Can bring historical events to life, aiding in educational and documentary storytelling.
Empathy Generation: Projects like Deep Empathy by UNICEF and MIT use deepfakes to help people empathize with the conditions in conflict zones by showing familiar cities in similar straits.
Further Advantages Across Industries:
Media Accessibility: Can greatly enhance the accessibility of video content for non-native speakers and the hearing impaired by generating accurate dubbed audio and synchronized subtitles.
The Dark side of Deepfakes!
Using deepfake technology, several crimes can be committed, which can have serious repercussions for individuals and society. Some of these offenses and their legal implications under Indian law are:
Identity Theft and Virtual Forgery: Deepfakes can be used to steal someone's identity or create false representations, damaging their reputation and spreading misinformation.
These acts can be prosecuted under Sections 66 (computer-related offenses) and 66-C (punishment for identity theft) of the Information Technology Act, 2000. Additionally, Sections 420 (cheating) and 468 (forgery for the purpose of cheating) of the Indian Penal Code, 1860, can also be applied.
Misinformation Against Governments: Spreading misinformation through deepfakes to subvert the government, incite hatred, or undermine public trust can attract charges under Section 66-F (cyber terrorism) of the IT Act, 2000. Sections 121 (waging war against the Government of India) and 124-A (sedition) of the Penal Code, 1860, may also be invoked.
Hate Speech and Online Defamation: Deepfakes contributing to hate speech or defamatory content can be prosecuted under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022.
Penal provisions like Sections 153-A and 153-B (promoting enmity between different groups) and Section 499 (defamation) of the Penal Code, 1860, could also be used.
Practices Affecting Elections: The use of deepfakes to manipulate public opinion during elections could be challenged under Section 66-D (punishment for cheating by personation using a computer resource) and Section 66-F (cyber terrorism) of the IT Act, 2000.
Violation of Privacy, Obscenity, and Pornography: Deepfakes can violate privacy or be used to create and distribute obscene or pornographic material.
Under the IT Act, 2000, Sections 66-E (violation of privacy), 67 (obscenity in electronic form), 67-A (sexually explicit material), and 67-B (child pornography) provide for prosecution. The Penal Code, 1860, has corresponding Sections 292 and 294 dealing with obscene material, and the Protection of Children from Sexual Offences Act, 2012 (POCSO) has provisions to protect children from such offenses.
A recent interim order passed by the Delhi High Court, while affirming the personality rights of actor Anil Kapoor, held for the first time that employing technological dark patterns, including deepfakes, to mislead consumers for commercial purposes is violative of personality rights and goes beyond the right to freedom of speech and expression.
In addition to these legal provisions, in January 2023 the Ministry of Information and Broadcasting issued an advisory to media organisations to exercise caution while airing content that could be manipulated or tampered with. The Ministry also advised them to clearly label any manipulated content as “manipulated” or “modified” so that viewers are aware the content has been altered.
In February, the IT ministry issued advisories to the chief compliance officers of various social media platforms after it received reports regarding the potential use of AI-generated deepfakes that were manipulating people by generating doctored content.
Learning from others
In the USA, the Deepfakes Accountability Act, introduced in Congress in 2019, sought to mandate that deepfakes be watermarked for the purpose of identification.
Regulation of Deep Synthesis Technology in China
China’s new regulations, called Deep Synthesis Provisions, govern deep synthesis (or deepfake) technology and services, including text, images, audio, and video produced using AI-based models.
Two categories of entities need to abide by the provisions: the platform providers that offer content generation services and the end-users who use them. Under these new Chinese regulations, any content created using an AI system must be clearly labeled with a watermark, i.e., text or an image visually superimposed on the video indicating that the content has been edited.
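As a rough illustration of what such visible labeling can look like in practice, the sketch below stamps an “AI-generated” text label onto an image using Python's Pillow library; the wording, placement, and styling are illustrative assumptions, not requirements taken from the Provisions.

```python
# Illustrative sketch: superimpose a visible "AI-generated" label on an image
# with the Pillow library. Label text and placement are assumptions, not
# wording prescribed by China's Deep Synthesis Provisions.
from PIL import Image, ImageDraw, ImageFont

def label_ai_content(input_path: str, output_path: str,
                     text: str = "AI-generated / synthetic content") -> None:
    img = Image.open(input_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()

    # Bottom-left corner, with a dark backing box so the label stays readable.
    x, y = 10, img.height - 30
    box = draw.textbbox((x, y), text, font=font)
    draw.rectangle(box, fill=(0, 0, 0))
    draw.text((x, y), text, fill=(255, 255, 255), font=font)

    img.save(output_path)

# Hypothetical usage:
# label_ai_content("deepfake_frame.png", "deepfake_frame_labeled.png")
```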
Content generation service providers must undertake not to process personal information and comply with other rules such as the evaluation and verification of AI algorithms deployed, authentication of users (to enable verification of the creators of the videos), and setting up feedback mechanisms for content consumers.
More than 90 percent of deepfakes are porn, according to the research company Sensity AI.
Regulatory challenges and considerations for deepfakes:
Navigating Regulatory Guardrails: Regulators are tasked with balancing the interests of various stakeholders while creating rules for the responsible use of deepfake technology as it becomes more widespread.
Enforcement Hurdles: The anonymous and agile nature of malicious deepfake creators, combined with the borderless realm of online platforms, presents significant enforcement challenges.
Free Speech Concerns: Deepfakes pose risks to free speech, especially in the political arena, where they can be used to disseminate false or misleading information.
Legal Recourse and Research: Current legal mechanisms like takedown notices and lawsuits tackle issues around copyright and defamation, but further research is necessary to gauge their effectiveness and develop best practices.
Standards Development: Organizations such as WIPO are working on guidelines, including recommendations for a remuneration system for victims of deepfake misuse and for addressing copyright concerns around deepfakes.
Consumer Law and Advocacy: Consumer protection laws may be applicable in instances of deception or fraud involving deepfakes, and groups like the Electronic Frontier Foundation advocate using existing legal frameworks and public education to regulate deepfake technology.
Legal Gaps and Human Rights: Regulators are examining existing laws for inadequacies and exploring new ways to safeguard human rights, privacy, personal data, and intellectual property in the face of deepfake technology.
‘Seeing is believing’ is an old saying, but with deepfakes, we can no longer believe what we are viewing.
Way Forward:
Right now, deepfake technology is in its infancy, and its output can often be recognized as fake. However, the technology is maturing quickly and becoming increasingly difficult to detect.
Recognizing these complex issues, there is a clear necessity for the careful regulation of deepfakes. Countries like China are pioneering in this realm, having enacted legislation that requires the clear labeling of deepfake-enhanced media content. The overarching goal is to foster robust AI risk management practices to curb the negative repercussions that may arise from the misuse of this potent technology.
Disclaimer: Image and some content have been generated using AI!!!
Please share!
If you like this post, please share it with your UPSC friends, and groups. Nudge them to subscribe. Please spread the word!
References:
https://www.responsible.ai/post/a-look-at-global-deepfake-regulation-approaches
https://blogs.lse.ac.uk/southasia/2020/05/21/deepfakes-in-india-regulation-and-privacy/