How will Australia’s teen social media ban work?
Feb 25, 2025

The onus for enforcing the stringent rules will be on the tech platforms. One of the challenges involved is reliably and securely assessing users' age.

This story was produced by our colleagues at the BBC.

It may not be surprising that young Australians have mixed feelings about plans to ban social media for kids under 16. While some recognize that harms like online bullying are a real issue, others think the platforms don't pose enough of a threat to justify a full ban.

Australia isn’t the only country in the world mulling how to better protect young people from harm on the internet — but it has passed some of the most stringent rules to date.

The law, which comes into effect later this year, puts the responsibility for stopping kids under 16 years of age from accessing social media firmly on the shoulders of tech companies, with fines of up to U.S. $32 million if they fail. Parents cannot give permission for kids under the age of 16 to have accounts, but apps for messaging, gaming, health and education will be excluded.

Digital and mental health researchers and academics have raised concerns about the impact of a blanket social media ban, including associate professor Susanne Schweizer of the University of New South Wales’ school of psychology.

“It will lead to a divide in who can get around those bans and who can’t, effectively excluding some young people from interacting in a space that they have become used to interacting,” she said.

Many experts say the details of how the ban will work are key, and that its impact won't be limited to children.

“If a tech company is being held to a certain standard, they’re really going to have to ask for government ID, for example,” said professor Lisa Given at the Royal Melbourne Institute of Technology.

A government-funded age assurance technology trial is now underway, led by a U.K.-based nonprofit company. It aims to assess the strength of the age assurance technologies currently available to platforms.

“What the Australian government are looking for in particular is that they can be deployed and can be operational,” explained Tony Allen, the trial’s lead, and CEO of the Age Check Certification Scheme. “That they can preserve privacy, they can be secure, they can be accessible, and they can perform as intended.”

Currently, there are three types of age assurance.

“Age verification is where you find somebody’s date of birth, and you can bind that to the individual,” he said. “And that can be on a passport, a driving license, a bank record, a school record, wherever your date of birth is held.

“The second one is age estimation, which is about features of humans that vary with age, your face, your voice, your hands, your pulse. Age inference is about any other information about you from which it’s reasonable to infer something about your age. A commercial airline pilot is likely to be over 21 because that’s the minimum age to be a commercial airline pilot,” he added.

The challenge of assessing age has held up other countries’ attempts to restrict social media access, Given said. “Texas in the United States tried to implement a ban of anyone under the age of 18. France has tried under the age of 14, but they’ve not been terribly successful.

“There are a number of different workarounds that people can actually use, such as [virtual private network] access or just accessing content on other people’s accounts.”

But Allen argues that technology can make it harder to get around age bans. “Any age assurance system, online or offline, is not foolproof,” he said. “Despite the fact that kids have been banned from buying tobacco and alcohol for years, it doesn’t stop them necessarily being able to get hold of it in certain places. I think with technology, it is harder.”

Platforms like TikTok, Instagram and Snapchat already require users to be 13 or older. But, according to Julie Inman Grant, Australia's eSafety commissioner, that rule is rarely enforced.

“There’s been a degree of almost willful blindness in terms of they know there are underage users on their platforms,” she said. “They haven’t had to do anything about it, but now they will.”

So what about the industry’s reaction? Sunita Bose, managing director of the Digital Industry Group, which advocates for the sector in Australia, said that the ban may push young people toward more harmful, unregulated places online.

“The focus really needs to be on ensuring the platforms are safe through ensuring that the work that the mainstream industry is doing at the moment in order to get harmful content down quickly, that that continues, that never stops,” she said.

The details of what requirements tech companies will have to meet are still being worked on. But by the end of this year, the responsibility for keeping kids off social media will be out of the hands of families, and in the hands of tech platforms.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer
Rosie Hughes, Assistant Producer