Digital Child Abuse: The Danger of AI-Based Exploitation

PWOnlyIAS

April 03, 2025

Recently, the Department for Science, Innovation and Technology of the British Government, along with the AI Security Institute, released the first-ever International AI Safety Report 2025.

Key Findings of the International AI Safety Report 2025

  • Existing and Emerging AI-Related Harms: Existing AI-related harms include scams, child sexual abuse material (CSAM), non-consensual intimate imagery (NCII), bias, and privacy violations, while emerging threats such as AI-enabled hacking, biological attacks, and large-scale job displacement are becoming more evident.
  • Gendered Nature of Deepfake Abuse: Abuse using fake pornographic or intimate content overwhelmingly targets women and girls; a 2019 study found that 96% of deepfake videos were pornographic.
  • Fake Content for Exploitation: Malicious actors can misuse AI-generated fake content to extort, scam, psychologically manipulate, or sabotage targeted individuals or organisations.
  • Children and AI-Generated Sexual Content: Hundreds of CSAM images were found in open datasets used to train AI models like Stable Diffusion.
  • Limited Efficacy of Countermeasures: Countermeasures that help people detect fake AI-generated content, such as warning labels and watermarking, show mixed efficacy.

About Digital Child Abuse

  • Digital child abuse refers to the use of online platforms, technologies, and AI tools to exploit, manipulate, or harm children. 
    • This includes cyberbullying, child grooming, non-consensual sharing of explicit content, and AI-generated child sexual abuse material (CSAM).
      • According to a Lancet study, one in 12 children globally faces online sexual abuse.

Role of Artificial Intelligence (AI) in Child Exploitation

  • AI-Generated Child Sexual Abuse Material (CSAM): Deepfake technology and AI-based image synthesis can generate realistic child abuse images without involving real victims. 
    • AI-generated CSAM has risen 380 per cent, with 245 confirmed reports in 2024 compared with 51 reports in 2023, according to the Internet Watch Foundation (IWF) data.
  • Grooming and Manipulation: Predators are increasingly using AI chatbots and voice synthesis tools to impersonate children or trusted adults, engaging in manipulative conversations that can lead to real-world abuse.
  • Automated Harassment and Cyberbullying: AI tools enable large-scale, automated cyberbullying through spam messages, bot-generated threats, and deepfake blackmail.
    • Example: In 2024, South Korean authorities uncovered a case in which AI-generated deepfake images of schoolgirls were created and circulated in online groups.
  • Data Exploitation and Privacy Violations: Children’s personal data, including images, preferences, and voice samples, are harvested from social media and online platforms.

Impact of Digital Child Abuse

  • Emotional Trauma: AI-generated harassment can lead to anxiety, depression, and suicidal tendencies in children.
  • Identity Theft and Loss of Privacy: AI can be used to create digital replicas of children, affecting their future online presence.
  • Social Isolation: Victims of AI-based abuse often withdraw from social interactions due to fear and stigma.
  • Distrust in Technology: Parents may become overly restrictive, limiting children’s healthy digital engagement.
  • Cybercrime Escalation: AI enables large-scale abuse, increasing the burden on law enforcement agencies.

Challenges in Preventing Digital Child Abuse

  • Open-web accessibility of AI-driven CSAM tools: AI-driven CSAM tools, such as deepfake generators, text-to-image AI models, and synthetic voice simulators, are increasingly accessible on the open web and dark web, making it easier for offenders to create exploitative content.
  • Jurisdictional Issues: AI-generated abuse materials are often created and distributed across international borders, complicating enforcement efforts.
  • Lack of Clear Legal Definitions: Many countries have yet to define AI-generated CSAM explicitly, creating loopholes for offenders to escape prosecution.
    • Many countries lack laws criminalizing AI-generated CSAM, as it doesn’t involve a real child.
  • Anonymity of Offenders: Offenders use Virtual Private Networks (VPNs), encrypted messaging apps, and decentralized networks to avoid detection.
  • Lack of Awareness: Parents and educators often underestimate risks, leaving children vulnerable to grooming and sextortion.

Ethical Dilemmas in AI-Driven CSAM Detection 

  • Protecting Children vs. Violating Privacy: AI-based scanning can prevent child exploitation, but it also raises privacy issues.
    • Example: Google’s automatic photo scanning flagged a father in San Francisco for sending a medical image of his child to a doctor.
  • Mass Surveillance vs. Civil Liberties: AI monitoring of private content may normalize excessive corporate and government surveillance. Breaking encryption to scan messages for CSAM can expose all users to privacy risks.
    • Example: Scanning encrypted messages for CSAM could allow authoritarian regimes to track dissenters.
  • Risk of False Positives: AI detection systems lack context and can mistakenly flag innocent content as CSAM.
    • Example: Parents have faced wrongful accusations due to AI misidentifying medical or family photos, causing legal and reputational harm.
  • Arbitrary Actions of AI Companies: Tech companies enforcing AI policies without independent oversight can take arbitrary actions.
    • Example: According to Google’s own transparency report, it disabled 140,868 accounts between June and December 2021.
  • Defining AI-Generated CSAM: AI-created CSAM may not involve a real victim, complicating legal and ethical enforcement.
    • Example: Courts struggle to prosecute AI-generated CSAM under existing child protection laws.

Supreme Court Observations In Just Rights for Children Alliance vs. S. Harish (2024)

  • Viewing, Downloading, or Storing CSEAM is a Criminal Offense: The Supreme Court overturned the Madras High Court’s ruling and ruled that merely watching, downloading, storing, or possessing Child Sexual Exploitative and Abuse Material (CSEAM) is a punishable offense under:
    • Section 15 of the POCSO Act, 2012 (criminalizes possession of child sexual abuse material).
    • Section 67B of the IT Act, 2000 (penalizes browsing, transmission, and publication of child abuse content).
  • Constructive Possession Doctrine: Even if a person views and later deletes CSEAM, they are liable if they had the knowledge and power to control it. Circulating or selling links to such material is also punishable.

Measures For The Prevention Of Digital Exploitation Of Children In India

  • Steps to be implemented by Internet Service Providers (ISPs) to protect children from sexual abuse online include:
    • Blocking of Websites: The Government blocks websites containing extreme Child Sexual Abuse Material (CSAM) based on INTERPOL’s “Worst-of-list”.
      • The list is shared by the Central Bureau of Investigation (CBI), the National Nodal Agency for INTERPOL, with the Department of Telecommunications (DoT).
      • The DoT directs major ISPs to block such websites.
    • Dynamic Removal of CSAM: ISPs in India are required to dynamically remove CSAM based on the Internet Watch Foundation (IWF), UK list.
  • Cybersecurity Awareness and Education: The Ministry of Electronics and Information Technology (MeitY) has launched the Information Security Education and Awareness (ISEA) Programme.
  • IT Act, 2000: Section 67B prescribes stringent punishment for publishing, browsing, or transmitting child sexual abuse material in electronic form.
  • Online Cyber Crime Reporting Portal (www.cybercrime.gov.in): Enables the public to report cases of child pornography, CSAM, and sexually explicit content, anonymously or with tracking.
  • Protection of Children from Sexual Offences (POCSO) Act, 2012:
    • Section 15: Criminalises the storage and possession of child pornographic material.
    • Section 43: Mandates awareness campaigns by the Central and State Governments.
  • Bharatiya Nyaya Sanhita (BNS): Section 294 penalises the sale, distribution, or public exhibition of obscene materials, while Section 295 makes it illegal to sell, distribute, or exhibit such obscene objects to children.
  • However, India’s existing laws do not specifically address AI-generated CSAM.

International Measures To Prevent Digital Child Abuse

  • Lanzarote Convention: The first regional treaty dedicated specifically to the protection of children from sexual violence. Adopted in Lanzarote, Spain, in 2007, it entered into force in 2010 and has been signed by all Council of Europe Member States. It requires the criminalisation of all forms of sexual exploitation and abuse of children.
  • Internet Watch Foundation (IWF): Identifies and removes online child abuse content.
  • Google’s Content Safety API: Uses artificial intelligence to detect and report CSAM.
  • United Kingdom: A computer-generated “pseudo image” depicting child sexual abuse is treated the same as a real image and is illegal to possess, publish, or transfer in the UK.
  • Project Arachnid: An innovative tool developed by the Canadian Centre for Child Protection to combat child sexual abuse material (CSAM) on the internet.

Way Forward

  • Updating Legal Terminology: Amend the POCSO Act to replace ‘child pornography’ with ‘Child Sexual Abuse Material (CSAM)’ for a more comprehensive definition, as recommended by the National Human Rights Commission (NHRC) Advisory (October 2023).
  • Defining ‘Sexually Explicit’ in the IT Act: Introduce a clear definition of ‘sexually explicit’ under Section 67B of the IT Act to facilitate real-time identification, monitoring, and blocking of CSAM.
  • Expanding the Definition of Intermediaries: Amend the IT Act to explicitly include Virtual Private Networks (VPNs), Virtual Private Servers (VPS), and cloud services as intermediaries, imposing statutory CSAM-compliance obligations on them.
  • Criminalising AI-Generated CSAM: Update laws to explicitly penalise deepfake and AI-synthesised child abuse content.
    • Example: The United Kingdom became the first country to introduce laws against artificial intelligence tools used to generate sexualised images of children.
  • International Cooperation & UN Frameworks: Support the adoption of the UN Draft Convention on ‘Countering the Use of Information and Communications Technology for Criminal Purposes’ at the UN General Assembly to align with global best practices.
  • Creation of a National Database of Sexual Offenders: Individuals involved in accessing or distributing CSAM should be listed in a national registry and barred from child-related employment.
  • Digital Literacy in Schools: Example: The UK’s Education for a Connected World framework teaches children online safety and safe internet practices through interactive lessons and parental guidance.

Conclusion

A shift from an ‘accused-centric’ and ‘act-centric’ approach to a ‘tool-centric’ approach is crucial in combating digital child abuse, by regulating AI, encryption, and online platforms to prevent misuse.
