Starting Wednesday, a first-of-its-kind ban aimed at protecting under-16s from addictive algorithms, online predators, and digital bullies will prevent children in Australia from accessing their social media accounts.
No other nation has taken such sweeping measures, and the rollout of the tough new law is being closely watched by legislators around the globe.
Most of the 10 banned platforms—Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kick, Reddit, Twitch, and X—have said they will comply and will use age verification technology to suspend the accounts of users under 16, though the platforms do not believe the ban will make children any safer. Australian Prime Minister Anthony Albanese is already touting the ban as a success because families are talking about social media use. Some children and their parents are expected to break the rule, but neither will be punished.
What the platforms are doing
Albanese stated on Sunday to the public broadcaster ABC, “We’ve said very clearly that this will not be perfect… but it’s the right thing to do for society to express its views, its judgment, about what is appropriate.”
Platforms can face fines of up to 49.5 million Australian dollars ($32 million) unless they can demonstrate that they have taken “reasonable steps” to deactivate accounts used by minors and prevent the opening of new accounts.
YouTube: On December 10, YouTube account holders will be automatically signed out. Their channels will no longer be visible, but their data will be saved so that when they turn 16, they can reactivate their accounts. YouTube will remain accessible to children without signing in.
TikTok: TikTok says all accounts used by minors will be deactivated on December 10. It states that its age verification technology will determine who is actually using an account, regardless of the email address or name attached to it. Content previously posted by young users will no longer be viewable. Parents who believe their children may have lied about their age when opening accounts are also encouraged to report them on the platform.
Twitch: Twitch says under-16s in Australia will not be permitted to create new accounts on the live streaming site, which is popular among gamers. However, existing accounts held by under-16s will not be deactivated until January 9. The company did not respond to a request to explain the delay.
Meta: On December 4, Meta began deleting Instagram, Facebook, and Threads accounts belonging to minors. Users were invited to download their content, which will be retained so they can reactivate their accounts when they turn 16.
Which platforms aren’t included?
Along with the list of banned sites is a list of platforms that aren’t considered part of the ban – yet. They are Discord, GitHub, Google Classroom, LEGO Play, Messenger, Pinterest, Roblox, Steam and Steam Chat, WhatsApp and YouTube Kids.
Given recent reports claiming that children have been the targets of adult predators within its games, the decision to exclude Roblox was viewed as puzzling by many Australians.
eSafety Commissioner Julie Inman-Grant has said that talks with Roblox began in June and it agreed to introduce new controls which are being rolled out this month in Australia, New Zealand, and the Netherlands, and elsewhere in January.
To use the chat features, users will need to verify their age, and they will only be able to chat with others in a similar age group.
How are platforms identifying under-16 accounts?
Until now, the banned platforms relied on the date of birth users entered when opening an account to judge who was using their service; the new law requires them to actively verify users’ ages.
Some adult users have objected because they are concerned that they will be required to verify their age. The Age Assurance Technology Trial carried out early this year convinced the government that age checks could be done without compromising privacy.
Using live video selfies, email addresses, or official documents, platforms are determining age. Yoti, an age verification service whose customers include Meta, claims that the majority of users select a video selfie that uses facial data points to estimate age.
What happens next?
Part of the motivation for the ban was to get children offline and more engaged with the real world, and that is something officials plan to measure.
“We’ll be looking at a variety of factors, including whether children are sleeping more and interacting more. Are they taking fewer antidepressants? Are they spending more time reading? Are they participating in sports outside?” eSafety Commissioner Inman-Grant told the Sydney Dialogue last week.
However, she said they will also monitor any unintended consequences: “What happens if they go to darker corners of the internet?” Six experts from Stanford University’s Social Media Lab will work with the eSafety Commissioner to gather the data, and the whole process will be reviewed by an independent Academic Advisory Group of 11 academics from the United States, the United Kingdom and Australia.
Stanford University said its approach, methods and findings will be published for scrutiny by researchers, the public and policymakers worldwide.
“We are hopeful that the evidence generated can directly support and inform decision-making by other countries as they seek to promote the online safety of children in their jurisdictions,” the university said in a statement.