As we have heard, the Government have announced a three-month consultation on children’s social media use. That is a welcome demonstration that the Government recognise the importance of this issue and are willing to consider further action beyond the Online Safety Act. However, our amendments make it clear that we should not wait until summer, or even beyond, to act, as we have a workable, legally operable solution before us today. Far from weakening the proposal from the noble Lord, Lord Nash, our amendments are designed to make raising the age to 16 deliverable in practice, not just attractive in a headline.

I share the noble Lord's diagnosis: we are facing a children's mental health catastrophe, with young people exposed to misogyny, violence and addictive algorithms. I welcome the noble Lord's bringing this critical issue before the House and strongly support his proposal for a default minimum age of 16. After 20 years of platforms profiteering from our children's attention, we need a reset. The voices of young people themselves are impossible to ignore. At the same time, tens of thousands of parents have reached out to us all, just in the past week, calling on us to raise the age. We cannot let them down.

The Government have announced that Ministers will visit Australia to learn from its approach. I urge them to learn the right lessons. Australia has banned social media for under-16s, with a current list of 10 platforms in scope. However, its approach exhibits three critical flaws that Amendment 94A, as drafted, would replicate and that we must avoid.

First, there is the definition problem. The Australian legislation has had to draw explicit lines that keep services such as WhatsApp, Google Classroom and many gaming platforms out of scope, to make the ban workable. The noble Lord, Lord Nash, has rightly recognised these difficulties by giving the Secretary of State the power to exclude platforms, but that simply moves the arbitrariness from a list in legislation to ministerial discretion. What criteria would the Secretary of State use? Our approach instead puts those decisions on a transparent, risk-based footing with Ofcom and the Children's Commissioner, rather than in one pair of hands.

Secondly, there is the cliff-edge problem. The unamended approach of Amendment 94A risks protecting children in a sterile digital environment until their 16th birthday, then suddenly exposing them to harmful content before they have developed the digital literacy to cope.

As the joint statement from 42 children’s charities warns, children aged 16 would face a dangerous cliff edge when they start to use high-risk platforms. Our amendment addresses that.

Thirdly, this proposal risks taking a Dangerous Dogs Act approach to regulation. Just as breed-specific legislation failed because it focused on the type of dog rather than on dangerous behaviour, the Australian ban focuses on categories rather than on risk. Because it is tied to the specific purpose of social interaction, the Australian ban currently excludes high-risk environments such as Roblox, Discord and many AI chatbots, even though children spend a large amount of time on those platforms. An arbitrary list based on what platforms do will not deal with the core issue of harm. The Molly Rose Foundation has rightly warned that this approach simply risks migrating bad actors, groomers and violent groups from banned platforms to permitted ones, leaving us playing whack-a-mole with children's safety. Our amendment is designed precisely to address that.

Our concerns are shared by the very organisations at the forefront of child safety. This weekend, 42 charities and experts, including the Molly Rose Foundation, the NSPCC, the Internet Watch Foundation, Childline, the Breck Foundation and the Centre for Protecting Women Online, issued a joint statement warning that

“‘social media bans’ are the wrong solution”.

They warn that blanket bans risk creating a false sense of safety and call instead for risk-based minimum ages and design duties that reflect the different levels of risk on different platforms. When the family of Molly Russell, whose tragic death galvanised this entire debate, warns against blanket bans and calls for targeted regulation, we must listen. Those are the organisations that pick up the pieces every day when things go wrong online. They are clear that a simple ban may feel satisfying, but it is the wrong tool and risks a dangerous false sense of safety.

Our amendments build on the foundation provided by the noble Lord, Lord Nash, while addressing these critical flaws. They would provide ready-made answers to many of the questions the Government's promised consultation will raise about minimum ages, age verification, addictive design features and how to ensure that platforms take responsibility for child safety. We would retain the default minimum age of 16; that would remain the law for every platform unless and until it proves, against rigorous criteria, that it is safe enough to merit a lower age rating. The crucial improvement is that platforms could be granted exemptions if, and only if, they can demonstrate to Ofcom and the Children's Commissioner that they do not present a risk of harm.

Our amendments would create film-style age ratings for platforms. Safe educational platforms could be granted exemptions with appropriate minimum ages, and the criteria are rigorous. Platforms would have to demonstrate that they meet Ofcom's guidance on risk-based minimum ages, protect children's rights under the UN Convention on the Rights of the Child, have considered their impact on children's mental health, have investigated whether their design encourages addictive use and have reviewed their algorithms for content recommendation and targeted advertising. So this is not a get-out clause for tech companies; it is tied directly to whether the actual design and algorithms on their platforms are safe for children. Crucially, exemptions would be subject to periodic review and, if standards slipped, could be revoked.

First, this prevents harms from simply migrating. If Discord or a gaming lobby presents a high risk, it would not qualify for an exemption. If a platform proves that it is safe, it becomes accessible. We would regulate the risk to the child, not the type of technology.

Secondly, it incentivises safety by design. The Australian model tells platforms to build a wall to keep children out; our approach requires them to make what lies behind that wall safe in the first place. This concern is shared by the Online Safety Act Network, representing 23 organisations whose focuses span child protection, suicide prevention and violence against women and girls. It warns that current implementation focuses on

“ex-post measures to reduce the … harm that has already occurred rather than upstream, content-neutral, ‘by-design’ interventions to seek to prevent it occurring in the first place”.

It explicitly calls for requiring platforms to address

“harms to children caused by addictive or compulsive design”—

precisely what our amendment mandates.

Thirdly, it is future-proof. We must prepare for a future that has already arrived—AI, chatbots and tomorrow’s technologies. Our risk-based approach allows Ofcom and the Children’s Commissioner to regulate emerging harms effectively, rather than playing catch-up with exemptions.

We should not adopt a blunt instrument that bans Wikipedia or educational and helpline services by accident, drives children towards high-risk gaming sites by omission, or creates a dangerous cliff edge at 16 by design. We should not fall into the trap of regulating categories rather than harms, and we should not put the power to choose in the hands of one person, namely the Secretary of State.

Instead, let us build on the foundation provided by the noble Lord, Lord Nash, by empowering Ofcom and the Children's Commissioner to implement a sophisticated, world-leading system, one that protects children based on actual risk while allowing them to learn, communicate and develop digital resilience. I urge the House to support our amendments to Amendment 94A.