Australia has become the first country in the world to enforce a minimum age for social media use, requiring platforms such as Instagram, YouTube and Snapchat to block more than a million accounts of users below the age of 16.
About the Online Safety Amendment (Social Media Minimum Age) Act 2024
- Aim: The legislation aims to protect minors from the potential harms of social media use.
- Guidelines: Age-restricted platforms must:
  - Take reasonable steps to find and remove accounts held by users under 16.
  - Prevent under-16s from creating new accounts, including by blocking common workarounds.
  - Provide mechanisms to correct errors so that legitimate accounts are not removed unfairly.
- Penalties: Platforms that fail to comply face fines of up to $33 million.
- Platforms Covered: Mandatory age checks apply to Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X, and YouTube.
- Excluded from the law for now: dating apps, gaming platforms, and AI chatbots.
  - The Australian government may revisit this list as the situation evolves, particularly if young users migrate to platforms that are not currently covered.
- Coverage criteria: A platform falls within the law's scope if it enables:
  - Online social interaction between users
  - Linking to or interacting with other users
  - Posting content
Rationale of the Australian Government
- Exposure to Online Threats: According to the Australian government, being logged into a social media account increases the likelihood that users under 16 will face various online threats.
  - These include cyberbullying, stalking, grooming, and exposure to harmful or hateful content.
- Platform Design Features Amplify Risk: Social media platforms often incorporate design features (persuasive design) that encourage young users to spend extended periods online.
  - These features can also serve content that may negatively impact the health and well-being of under-16 users.
- Prevalence of Harmful Content: Australia’s online safety regulator, the eSafety Commissioner, has found that a significant proportion of children in the country have encountered harmful content on social media platforms.
Arguments Against the Ban
- Privacy Concerns: Critics argue that mandatory age verification, which may require users to submit identity documents or other personal data, compromises privacy.
- Risk of Shifting Teens to Unsafe Apps: Critics note that disconnecting teens from their friends and family does not make them safer and may instead push them toward less secure, less private messaging apps.
- Reduction of Parental Control: The law shifts decisions about children’s social media use away from parents, who currently manage access through parental controls, and hands them to the government.
- Free Speech Concerns: The Australian Human Rights Commission stated that a blanket ban on social media for under-16s could curtail their right to free speech.
- Ineffectiveness of the Ban: Critics say the law will not make young people safer online or help those already harmed by technology.
Comparing Australia and India on Regulating Children’s Online Safety
- Minimum Age Requirement: Australia has banned social media for users under 16.
  - India has no law that specifically regulates children’s use of social media platforms.
- Regulatory Approach: Australia relies on identifying, removing, and blocking underage accounts.
  - India instead regulates through data protection: under the Digital Personal Data Protection Act, 2023, tech companies must obtain “verifiable” parental consent before processing children’s personal data, although no specific technical method is prescribed.
  - Under Indian law, a child is defined as an individual below the age of 18.
Conclusion
Australia’s minimum-age social media law marks a bold, controversial step to protect children from online harm. Yet concerns over privacy, parental autonomy, free speech, and practical effectiveness show the need for more balanced, adaptable regulation.