Australia Bans Under-16s From Accessing Social Media Accounts
Children across Australia will wake up on Wednesday with no access to their social media accounts under a world-first ban designed to shelter those under 16 from addictive algorithms, online predators, and digital bullies, reports CNN.
No other nation has taken such sweeping measures, and the rollout of the tough new law is being closely watched by legislators around the globe. Most of the 10 banned platforms – Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kick, Reddit, Twitch and X – say they’ll comply with the ban, using age verification technology to identify under-16s and suspend their accounts, but they don’t believe it’ll make children safer.
Australian Prime Minister Anthony Albanese is already touting the ban as a success because families are talking about social media use. Some children and their parents are expected to flout the ban, but there are no consequences for either.
“We’ve said very clearly that this won’t be perfect… but it’s the right thing to do for society to express its views, its judgment, about what is appropriate,” Albanese told the public broadcaster ABC on Sunday.
Under the law, platforms must show they’ve taken “reasonable steps” to deactivate accounts held by under-16s and to prevent new accounts from being opened, or face fines of up to 49.5 million Australian dollars ($32 million).
Reuters says Snapchat users will have their accounts suspended for three years or until they turn 16.
YouTube account holders will be automatically signed out on December 10. Their channels will no longer be visible; however, their data will be saved so they can reactivate their accounts when they turn 16. Children will still be able to watch YouTube without logging in.
TikTok says all accounts used by under-16s will be deactivated on December 10. It says it doesn’t matter which email is used or whose name is on the account – its age verification technology will determine who’s using it. Content previously posted by young users will no longer be viewable. The platform’s also encouraging parents who believe their children may have lied about their age when opening accounts to report them.
Twitch says no under-16s in Australia will be allowed to create new accounts on the live streaming site popular with gamers from December 10, but current accounts held by under-16s will not be deactivated until January 9. The company did not respond to a request to explain the delay.
Meta started removing accounts belonging to teens under 16 on Instagram, Facebook and Threads on December 4. Users were invited to download their content, which will be preserved should they want to reactivate their accounts when they turn 16.
How are platforms identifying under-16 accounts? Banned platforms already had a good idea of who was using their services from the date of birth users entered when they opened an account, but the new law requires them to actively verify users’ ages.
That’s raised objections from some adult users who are concerned that they’ll be asked to verify their age. The Age Assurance Technology Trial carried out early this year convinced the government that age checks could be done without compromising privacy.
Platforms are checking ages via live video selfies, email addresses or official documents. According to Yoti, an age verification company whose clients include Meta, most users choose a video selfie which uses facial data points to estimate age.
The BBC reports that Gloucestershire-based mother Ellen Roome will be watching very closely how Australia’s social media ban unfolds over the coming days and weeks.
In April 2022 she discovered her 14-year-old son, Jools Sweeney, unconscious in his room. She believes her son died after an online challenge went wrong and that his social media accounts could provide the answer – but she has, so far, not been allowed to access them.
She is now campaigning to change the law so that parents can access their loved ones’ social media accounts after their deaths, and she says the UK isn’t doing enough to protect children from online harm.
“I think it’s a shame that the UK isn’t doing enough to protect children more, and hats off to Australia who are basically leading the way with banning social media,” she tells the BBC.
While the UK is not following in Australia’s footsteps with a ban, new rules rolled out in July threaten social media and other online platforms with fines unless they properly shield children from pornography and other harmful content.
Now keenly watching Australia’s example, Roome says it will be interesting to see whether the ban will keep children off social media, or whether they will find a workaround.
What are other countries doing on this issue? In France, a law passed in 2023 requires parental consent for under-15s to create social media accounts, but no start date has been set and the law has yet to be enforced.
In China, which has some of the tightest internet controls in the world, many sites are banned, but there is no age-specific social media ban. New rules do, however, regulate the amount of time young people spend online and on social media sites.
In Denmark, the government in November announced plans to end social media access for children under 15, although parents would be able to grant their 13- and 14-year-olds access. It is unclear when or how the measures would be imposed.
In Germany, children aged 13 to 16 can only use social media if their parents give consent, but there are questions over how effectively the rules are enforced.
In Spain, the government this year approved a draft law to raise the minimum age for social media use to 16, but it has not yet become law.
In Malaysia, the government is moving to ban under-16s from having social media accounts. The plans are in their early stages, but the government says it hopes they will come into force next year.
In July, the UK’s media regulator Ofcom rolled out new measures aimed at restricting what children can see online, including pornography and other harmful material.
They also put the onus on tech firms, including social media companies, to ensure children weren’t exposed to harmful content.
There are more than 40 guidelines companies must follow or risk big fines. They include changing algorithms to filter harmful content out of children’s feeds; applying stricter age checks for people accessing age-restricted content, such as pornography; taking quick action when harmful content is identified; making terms of service easy for children to understand; giving children the option to decline group chat invitations that may include harmful content, to block and mute accounts, and to disable comments on their own posts; and providing support to children who come across harmful content.
The Australian government says its world-first ban is partly about having parents’ backs. Emma Mason, whose daughter Tilly died by suicide at the age of 15 after she was bullied online, is a supporter of what the government is doing. She tells the BBC that she sees the move as “protection” rather than “control”.
“We have to have the rules, even if those rules are broken,” she says.

