11 Mar, 2026
It has now been three months since Australia made history. On December 10, 2025, it became the first country in the world to enforce a nationwide social media ban for children under 16. It was a moment that reverberated around the world, with governments from Denmark to the United Kingdom to Malaysia watching closely and preparing their own versions of the law.
Now, on March 11, 2026, Australians are living with the reality of what that landmark decision actually looks like on the ground. And the answer, as parents across the country are discovering, is complicated.
From Queensland to the Northern Territory, something genuinely different has happened in some Australian homes. Children are reaching for their phones less often in the morning. Family dinner conversations are lasting longer. Parents who were struggling to compete with Instagram and TikTok for their children's attention are reporting real, tangible changes to the rhythm of daily life at home.
At the same time, other parents are frustrated. Their teenagers were back on their favourite platforms within days of the ban. New accounts, VPNs, and borrowed identities have given determined teens a path around the restrictions. And the platforms themselves, despite removing millions of accounts, have been openly critical of the law and its design.
So is Australia's social media ban working? The honest answer is that it is working for some children, partially for many others, and not at all for a committed minority. But the more important question for Australian parents is not whether the ban is perfect. It is what you can do, right now, to make it work for your family.
In January 2026, the Australian government announced the first major data point from the social media ban's enforcement: more than 4.7 million accounts belonging to children and young people had been deactivated, deleted, or restricted across the ten platforms covered by the law.
That is not a small number. It represents the largest coordinated removal of children from social media platforms that has ever occurred in any country. The ten platforms required to enforce the ban (Facebook, Instagram, TikTok, Snapchat, YouTube, Threads, Reddit, Twitter, Twitch, and Kick) all reported compliance to the eSafety Commissioner on time.
To illustrate the scale: Meta, which owns Facebook, Instagram, and Threads, announced that by the day after the ban took effect, it had already removed nearly 550,000 accounts belonging to users identified as being under 16. Snapchat reported blocking 415,000 accounts by the end of January alone, after asking hundreds of thousands of users to verify their age using an Australian bank account, facial recognition age assessment, or official documents such as a passport or driver licence.
The eSafety Commissioner Julie Inman Grant described the result as a clear demonstration that the law is achieving its primary purpose. Communications Minister Anika Wells told reporters that Australia had stared down some of the most powerful and wealthy companies in the world and won.
Prime Minister Anthony Albanese pointed to the ban's replication around the world, despite initial scepticism from technology industry figures and civil libertarians, as a source of genuine Australian pride.
What the 4.7 million figure does not tell us is how many of those children have since created new accounts, found workarounds, or migrated to platforms not covered by the law. That is where the story gets more complex.
Reporting from across Australia through early March 2026 paints a picture of two different realities living side by side in Australian family homes.
In some homes, the change has been striking. Parents have described smartphones lying unnoticed on the kitchen table in the morning, something that would have been unthinkable six months ago. Children who were previously scrolling for hours each evening are now more present, more talkative, and more engaged with family life. Some teenagers have picked up books, returned to outdoor activities, or rediscovered face-to-face friendships that had drifted into digital-only contact.
These are exactly the outcomes the government promised. And for the families experiencing them, the ban has delivered.
In other homes, the story is very different. A mother of twins posted publicly in February 2026 that nothing significant had changed for her children except that the government was celebrating itself. Most of their friends, she said, were back online almost immediately, creating new accounts with slightly modified details or borrowing a parent's identity information to pass age verification.
This is the honest dual reality of the ban at the three-month mark. It is changing the behaviour of children who were more casual or less digitally determined users of social media. It is having limited impact on teenagers who are highly motivated to maintain their social media presence and have the digital literacy to find workarounds.
Australian parents need to understand the specific methods teenagers are using to circumvent the ban, because knowing the methods is the first step toward having productive conversations about them.
The most common method is creating new accounts with inaccurate ages. During the initial enforcement period, platforms required existing accounts to verify their age. But creating a brand new account with a stated age of 16 or older remains straightforward on most platforms. Until robust age assurance technology is universally deployed and maintained, this gap will remain.
The second method is the use of virtual private networks, commonly known as VPNs. A VPN allows a user to route their internet connection through a server in another country, making it appear as though they are not in Australia and therefore not subject to Australian law. VPN downloads spiked noticeably in Australia immediately after the ban came into effect. The eSafety Commissioner acknowledged the spike but noted that monitoring data did not show a corresponding sustained spike in usage, suggesting many downloads were precautionary rather than immediately active.
The third method involves using platforms that are not covered by the current law. The legislation specifically names the ten largest platforms. But a significant number of smaller platforms, niche social applications, gaming communities with social features, and messaging apps operate in spaces that the current law does not directly address. Some teenagers have migrated their social activity to these less-scrutinised environments.
The eSafety Commissioner has indicated that the regulator is actively monitoring migration patterns and may expand the list of covered platforms. The AI crackdown that came into force on March 9, 2026, is in part a response to the concern that children displaced from mainstream social media are migrating toward AI companion apps that carry their own distinct risks.
The current Australian social media ban applies to the following ten platforms for users under 16: Facebook, Instagram, TikTok, Snapchat, YouTube, Threads, Reddit, Twitter, Twitch, and Kick.
Notably absent from this list are a number of platforms that Australian teenagers actively use. WhatsApp, despite being a Meta product, is classified as a messaging service rather than a social media platform and is not currently covered. Discord, which is heavily used by Australian teenagers particularly in connection with gaming, is also not on the restricted list. Pinterest, BeReal, and a range of smaller content platforms are similarly not covered.
Gaming platforms with social features, including those built into consoles and PC gaming environments, also fall outside the current scope of the law. For teenagers whose social life is substantially organised around multiplayer gaming and the communities that form around it, the ban has limited direct impact.
The eSafety Commissioner has confirmed that the regulator is watching usage patterns and has the power under the Online Safety Act to add platforms to the restricted list. Australian parents should not assume that platforms not currently named in the law are permanently exempt.
The social media ban faces a significant legal challenge that every Australian parent should be aware of. Two separate cases are proceeding in the High Court of Australia, and the outcomes could modify or potentially strike down key elements of the law.
The Digital Freedom Project, led by libertarian NSW Legislative Council member John Ruddick, is challenging the constitutional validity of the ban. Two teenagers, 15-year-olds Macy Neyland and Noah Jones, are named as plaintiffs. The case argues that the ban infringes on constitutional rights including the freedom of political communication.
Reddit is also pursuing a separate High Court challenge, arguing among other things that social media access can actually protect vulnerable minors by giving them access to communities and support networks, and that Reddit's primarily adult-oriented forum structure should place it outside the scope of the law.
The governments of New South Wales, South Australia, and Western Australia have all announced they will oppose both challenges. The federal government has stated clearly that it will not yield to legal intimidation and that it stands on the side of Australian parents and children rather than on the side of platforms.
An initial hearing is expected to take place by mid-2026 with a judgment likely later in the year. Depending on the outcome, the law could be upheld in full, modified in specific ways, or require legislative revision. Australian parents should follow this case closely as it will directly affect the legal framework protecting their children online.
eSafety Commissioner Julie Inman Grant delivered a significant update in early March 2026 that every Australian parent should understand. The Commissioner confirmed that all ten platforms covered by the ban have met their initial compliance obligations and reported removal figures on time.
However, she also signalled a significant escalation of enforcement priorities. The eSafety Commissioner has confirmed that the regulator is now shifting its focus from enforcing the initial account removal to preventing children from creating new accounts or otherwise circumventing the ban. This is a more technically demanding challenge than the initial removal phase.
In the same statement, the Commissioner confirmed that world-leading AI companion and chatbot restrictions came into force on March 9, 2026, a direct response to evidence that some children displaced from social media are gravitating toward AI apps that carry serious mental health risks. As of this week, those rules are now live and enforceable.
The Commissioner has also made clear that app stores and search engines themselves could become enforcement targets if AI platforms fail to meet the March 9 age restriction deadline. This is the same "nuclear option" approach that has given the social media ban its teeth: if an app cannot be found in the App Store or in Google search results, it effectively does not exist for Australian users.
Alongside the digital safety revolution, Australian families are also navigating a major change to how children with developmental delays access support. In February 2026, Health Minister Mark Butler released the operational plan for the Thriving Kids initiative.
This is an AU$4 billion investment, jointly funded by the Commonwealth and the states and territories, designed to move children with mild to moderate developmental delays out of the National Disability Insurance Scheme and into a new system of mainstream foundational supports.
For years, the NDIS has been stretched beyond its original design by a surge in children needing support for developmental delays. With nearly half of all new NDIS participants being children under nine, the government identified a sustainability crisis that was threatening the long-term viability of the scheme.
The Thriving Kids initiative creates two new categories of support for children and families who do not qualify for full NDIS packages but need more than nothing. Early Intervention Support will help children build core developmental and communication skills through funded access to allied health professionals including speech pathologists and occupational therapists, without the need for a full NDIS plan. Parental Skill Building will fund workshops and guidance to help parents themselves become the primary support for their children's developmental needs.
The rollout, originally planned for July 2026, has been pushed back to October 1, 2026, to give state governments more time to prepare. Full scale operation is expected by January 2028. For Australian families currently on NDIS waiting lists or paying out-of-pocket for allied health support, Thriving Kids represents a potentially significant change in what is available and how to access it.
Australian parenting experts and child development researchers are consistent on one key point: three months is too short a period to draw firm conclusions about the long-term impact of the social media ban.
What they are confident about is this. The children who stand to benefit most from the ban are not the highly digitally determined teenagers who will always find a way around restrictions. They are the children who were casual or passive users of social media, drifting into hours of scrolling more out of habit and social pressure than genuine desire. For these children, the removal of easy access to platforms is already producing measurable changes in how they spend their time.
The children who are least affected are those with strong social motivations to maintain their online presence, older teenagers in particular whose friendships, social lives, and sense of identity are substantially organised around digital communities.
Experts are also consistent in their warning about what comes next. The ban has displaced child attention from the most scrutinised platforms toward less visible spaces, including AI companion apps, gaming platforms, and smaller social applications. Without parallel action in those spaces, including the AI crackdown that came into force on March 9, the overall online safety picture for Australian children could improve in some dimensions while worsening in others.
The consensus among researchers and practitioners working with Australian families is that the ban is a necessary first step, not a complete solution. Its long-term success depends on consistent enforcement, expansion to cover new platforms and technologies, and investment in educating both parents and children about why these protections matter.
Australia's global influence on children's online safety regulation has moved faster than almost anyone anticipated. Within three months of the social media ban taking effect, a growing list of countries had enacted, proposed, or begun actively preparing similar legislation.
Denmark announced in November 2025 that it was preparing a social media ban for children under 15. The United Kingdom, France, Germany, Italy, Greece, Spain, and Malaysia are all developing similar bans or access restrictions. The UN joint statement on artificial intelligence and the rights of the child, published in January 2026, cited Australia's approach as the global model for government action on child online safety.
The speed of this international movement surprised even the Australian government. Prime Minister Albanese, speaking to reporters in January 2026, described the global uptake as a source of Australian pride, noting that Australia had succeeded in changing the conversation about what governments can and should do to protect children from the documented harms of social media.
For Australian parents, this international context matters because it means the regulatory framework protecting Australian children is likely to strengthen over time rather than weaken. Platform operators who resist Australian law face the growing prospect of similar laws in their largest markets worldwide. The commercial calculus for fighting rather than complying is becoming increasingly unfavourable.
Australian parents deserve an honest answer to the question of whether the ban is working, free of political spin in either direction.
The ban is working as a structural intervention. It has removed 4.7 million accounts, it has forced the world's largest platforms to implement age verification at a scale never attempted before, and it has shifted the global regulatory conversation in a way that will benefit children not just in Australia but in countries that follow Australia's lead.
The ban is a work in progress as a behavioural intervention. It has meaningfully reduced social media use among children who were less committed users. It has had limited impact on teenagers who are highly motivated to maintain their online presence. The workaround gap, while narrower than critics predicted, is real.
The ban is not a failure, but it is not complete. The displacement of children toward AI companion apps, unregulated gaming communities, and smaller platforms means the overall digital safety challenge facing Australian families has not been solved. It has been partially addressed, and in ways that create new challenges that require new responses.
The honest verdict for Australian parents is this: the ban is doing what a law can do. It is not doing what only parents can do.
The most important thing Australian parents can do in March 2026 is not outsource their child's online safety entirely to the law. Use the law as a tool. Understand what it covers and what it does not. And fill the gaps that legislation will always leave.
Have a direct conversation with your child about the ban and why it exists. If your teenager has found a workaround, you need to know about it. The only way you will find out is if your child trusts that telling you will lead to a productive conversation rather than a confiscation and a lecture.
Check which apps and platforms your child is actually using. The ten platforms covered by the ban are only a fraction of the digital spaces where Australian teenagers spend their time. Discord, WhatsApp, gaming platforms, and AI companion apps are all potentially active parts of your child's digital life and none of them are covered by the current law.
Use device-level parental controls alongside the law. Both iOS Screen Time and Android Family Link provide content filtering, app restrictions, and usage reporting that do not depend on platform compliance. These tools are not foolproof but they add a meaningful layer of protection.
Stay engaged with the High Court proceedings. If the constitutional challenge succeeds, the legal framework protecting your child could change. Following the case through reliable Australian news sources will ensure you are not caught off guard.
Monitor the AI crackdown that is now live as of March 9, 2026. AI platforms are now subject to the same child protection obligations as social media platforms under Australian law. Understanding which AI apps your child is using and whether those apps are compliant is now an essential part of responsible Australian parenting.
Is Australia's social media ban actually working?
It is working in part. More than 4.7 million accounts belonging to under-16s were deleted or restricted in the first weeks of enforcement. Many families report genuine changes in their children's screen time and family engagement. However, determined teenagers have found workarounds including VPNs, new accounts, and migration to unregulated platforms.
What happens if my child is found on a banned social media platform?
The fines under the legislation fall on the platforms, not on individual children or parents. Platforms that fail to take reasonable steps to prevent under-16s from having accounts face fines of up to AU$49.5 million. There is no penalty for children who circumvent the ban.
Which social media platforms are banned for under-16s in Australia?
Facebook, Instagram, TikTok, Snapchat, YouTube, Threads, Reddit, Twitter, Twitch, and Kick are all currently restricted for Australian users under 16. WhatsApp, Discord, and a number of smaller platforms are not currently covered.
Can my child use a VPN to access social media?
Technically yes. The eSafety Commissioner acknowledged a spike in VPN downloads after the ban took effect. However, sustained VPN usage for the purpose of accessing banned platforms puts the child's data privacy at risk through unreliable third-party VPN providers, and the regulator is monitoring for platform-level VPN workarounds.
Is the High Court challenge likely to overturn the ban?
This is uncertain. The challenge is being taken seriously by the courts, which agreed to hear arguments in 2026. The governments of New South Wales, South Australia, and Western Australia are defending the law alongside the federal government. Most legal analysts consider the law likely to survive, though potentially with modifications.
What is the Thriving Kids initiative and how do I access it?
Thriving Kids is an AU$4 billion government program launching in October 2026 that will provide funded access to speech pathologists, occupational therapists, and other allied health professionals for children with mild to moderate developmental delays, without requiring a full NDIS plan. Parents should contact their GP or early childhood health service for referral information as the rollout date approaches.
What is the connection between the social media ban and the new AI crackdown?
The AI crackdown came into force on March 9, 2026, partly in response to evidence that children displaced from social media are migrating to AI companion apps. The eSafety Commissioner confirmed that world-leading AI chatbot restrictions are the logical next step in the same child protection framework that produced the social media ban.
Where can I report a social media platform that is allowing my under-16 child to have an account?
Complaints can be made directly to the eSafety Commissioner, who has the legal power to investigate platforms and seek financial penalties for non-compliance with the under-16 ban.
This article is published by meandkids.com.au, Australia's trusted parenting resource. All information is based on publicly available reporting and official statements as of March 11, 2026.