• Recently, a deepfake video featuring actress Rashmika Mandanna went viral on social media, causing a huge uproar over the dangers of deepfakes.


  • Deepfakes are digital media — video, audio, and images — edited and manipulated using Artificial Intelligence (AI).
  • Since they incorporate hyper-realistic digital falsification, they can potentially be used to damage reputations and undermine trust in democratic institutions.


  • It takes a few steps to make a face-swap video.
  • First, you run thousands of face shots of the two people through an AI algorithm called an encoder.
  • The encoder finds and learns similarities between the two faces, and reduces them to their shared common features, compressing the images in the process.
  • A second AI algorithm called a decoder is then taught to recover the faces from the compressed images.
  • Because the faces are different, you train one decoder to recover the first person’s face, and another decoder to recover the second person’s face.
  • To perform the face swap, you simply feed encoded images into the “wrong” decoder. For example, a compressed image of person A’s face is fed into the decoder trained on person B.

  • The decoder then reconstructs the face of person B with the expressions and orientation of face A.
  • For a convincing video, this has to be done on every frame.
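The shared-encoder/two-decoder pipeline described above can be sketched in a few lines of NumPy. This is a structural illustration only: the weights are random rather than trained, faces are stand-in flattened 64×64 vectors, and real systems use convolutional networks on aligned face crops — but the data flow (one encoder, two identity-specific decoders, swap by feeding the "wrong" decoder) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
FACE_DIM, LATENT_DIM = 64 * 64, 128  # flattened 64x64 face, compressed code

# Shared encoder: compresses any face into a low-dimensional latent code
# capturing features common to both identities (expression, pose).
W_enc = rng.standard_normal((FACE_DIM, LATENT_DIM)) / np.sqrt(FACE_DIM)

# Two decoders: in a real system each is trained to reconstruct ONE
# person's face from the shared code. Random weights here -- this only
# illustrates the architecture, not a trained model.
W_dec_a = rng.standard_normal((LATENT_DIM, FACE_DIM)) / np.sqrt(LATENT_DIM)
W_dec_b = rng.standard_normal((LATENT_DIM, FACE_DIM)) / np.sqrt(LATENT_DIM)

def encode(face):
    # Compressed shared representation of the input face.
    return np.tanh(face @ W_enc)

def decode(latent, W_dec):
    # Reconstructs face pixels from the latent code with one decoder.
    return latent @ W_dec

frame_of_a = rng.random(FACE_DIM)  # one flattened video frame of person A

# Normal reconstruction: A's latent code through A's own decoder.
recon_a = decode(encode(frame_of_a), W_dec_a)

# The swap: feed A's latent code into the "wrong" decoder (B's),
# yielding B's face with A's expression and orientation.
# For a convincing video this is repeated for every frame.
swapped = decode(encode(frame_of_a), W_dec_b)

print(recon_a.shape, swapped.shape)
```

The key design point is that the encoder is shared while the decoders are identity-specific: the latent code carries pose and expression, and the choice of decoder determines whose face is painted onto them.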
  • Another way to make deepfakes uses what’s called a generative adversarial network, or GAN, in which two AI models are pitted against each other: a generator that creates fake images and a discriminator that tries to tell them apart from real ones.
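The GAN alternative can be sketched structurally as follows. Again this is an illustration with random, untrained weights — the point is the two-player setup (generator vs. discriminator), not a working image model.

```python
import numpy as np

rng = np.random.default_rng(1)
NOISE_DIM, IMG_DIM = 32, 64 * 64  # random seed vector -> flattened image

# Generator: maps random noise to a synthetic "image" (fake face pixels).
W_gen = rng.standard_normal((NOISE_DIM, IMG_DIM)) * 0.01

# Discriminator: scores an image -- probability it is real, not generated.
W_disc = rng.standard_normal((IMG_DIM, 1)) * 0.01

def generator(noise):
    # Sigmoid keeps fake pixel values in (0, 1), like normalized images.
    return 1.0 / (1.0 + np.exp(-(noise @ W_gen)))

def discriminator(image):
    logit = (image @ W_disc)[0]
    return 1.0 / (1.0 + np.exp(-logit))

noise = rng.standard_normal(NOISE_DIM)
fake = generator(noise)
score = discriminator(fake)

# In training, the discriminator is pushed to score real faces near 1 and
# fakes near 0, while the generator is pushed to raise the score of its
# fakes -- the adversarial loop that sharpens both networks in turn.
print(fake.shape, 0.0 < score < 1.0)
```

Because each network's improvement makes the other's task harder, training converges toward fakes the discriminator can no longer reliably distinguish from real images — which is what makes GAN-generated deepfakes so convincing.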


  • Poor-quality deepfakes are easier to spot.
  • The lip synching might be bad, or the skin tone patchy.
  • There can be flickering around the edges of transposed faces.
  • And fine details, such as hair, are particularly hard for deepfakes to render well, especially where strands are visible on the fringe.
  • Governments, universities and tech firms are all funding research to detect deepfakes.
  • Last month, the first Deepfake Detection Challenge kicked off, backed by Microsoft, Facebook and Amazon. It brings together research teams around the globe competing for supremacy in the deepfake detection game.
  • The Massachusetts Institute of Technology (MIT) created a Detect Fakes website to help people identify deepfakes by focusing on small, intricate details.


  • Starting with the most prevalent use-case, some research has estimated that 96 percent of deepfake videos are created for pornography.

  • A variety of deepfake apps has been used by millions worldwide to create content that is then shared on social media platforms.
  • The application of deepfakes in the arena of politics is likely the most controversial.
  • During the 2022 Russian invasion of Ukraine, a deepfake showing Russian leader Vladimir Putin surrendering to Ukraine circulated on Twitter.
  • The high quality of deepfakes makes it difficult for the general public to discern fact from fiction.
  • They can also be used to exploit people, sabotage elections and spread large-scale misinformation.


  • India lacks specific laws to address deepfakes and AI-related crimes, but provisions under a range of existing legislation could offer both civil and criminal relief.
  • For instance, Section 66E of the Information Technology Act, 2000 (IT Act) applies to deepfake crimes that involve the capture, publication, or transmission of a person’s images in mass media, thereby violating their privacy.
  • Such an offence is punishable with up to three years of imprisonment or a fine of up to two lakh rupees.
  • Further, Sections 67, 67A, and 67B of the IT Act can be used to prosecute individuals for publishing or transmitting deepfakes that are obscene or contain sexually explicit acts.
  • The IT Rules also prohibit hosting ‘any content that impersonates another person’ and require social media platforms to quickly take down ‘artificially morphed images’ of individuals when alerted.
  • If they fail to take down such content, they risk losing the ‘safe harbour’ protection — a provision that shields social media companies from liability for third-party content shared by users on their platforms.
  • Provisions of the Indian Penal Code (IPC) can also be invoked for cybercrimes associated with deepfakes — Sections 509 (words, gestures, or acts intended to insult the modesty of a woman), 499 (criminal defamation), and 153A and 153B (spreading hate on communal lines), among others.


  • U.S. President Joe Biden signed a far-reaching executive order on AI to manage its risks, ranging from national security to privacy.
  • Additionally, the DEEP FAKES Accountability Bill, 2023, recently introduced in Congress, requires creators to label deepfakes on online platforms and to provide notifications of alterations to a video or other content.
  • The European Union (EU) has strengthened its Code of Practice on Disinformation to ensure that social media giants like Google, Meta, and Twitter start flagging deepfake content or potentially face fines. 


  • Different countries around the globe have passed legislation to curb the misuse of deepfake technology.
  • AI governance in India cannot be restricted to just a law; reforms have to be centred around establishing standards of safety, increasing awareness, and institution building.



The post DEEPFAKES AND AI appeared first on Vajirao IAS.

