For children to be safe online, it’s not they who need to change – it’s the tech companies

Ian Russell

If the government is going to strengthen the Online Safety Act, banning social media for under-16s is not the answer

In the six years since my youngest daughter Molly died, it is striking how little has changed. Children and young people continue to face a wave of inherently preventable online harms on often negligent social media platforms. In a ferocious battle for market share, the risks on sites such as Instagram and TikTok have, in some respects, become worse.

It is therefore no surprise that there is a considerable groundswell of demands for much more to be done. Across the UK, grassroots parents' groups are increasingly calling for a fundamental reset in the relationship between children and smartphones.

This genuinely and deeply felt concern has been noted by the government, and expectations are growing that it will announce a fresh consultation on further online safety measures shortly. As a bereaved parent and campaigner for improved online safety, my overwhelming reaction to the recent steady stream of briefings is one of trepidation and concern.

While it is right that Rishi Sunak is listening to the huge outpouring from parents, and I applaud him for his focus on this issue, we owe it to our children to deliver effective solutions that will really work.

My fear is that, by rushing to introduce measures that may sound attractive but that research has shown to be deeply flawed, the government runs a real risk of adopting a set of poorly thought-out policies with multiple unintended consequences. Put simply, much of what is being suggested may cause more harm than good.

In recent days well sourced media stories have suggested that Downing Street is exploring a requirement on social media sites to notify parents if their child encounters harmful content. Every parent will instinctively want to know if their child is accessing harmful material. There isn’t a day that goes by when I don’t wish we had known the sheer despair Molly was being algorithmically recommended nearly every time she opened her social media apps.

However, the available evidence shows that while overly intrusive parental controls may make parents more reassured, they typically result in worse safety outcomes. These controls weaken trust between children and parents, and may make it less likely for children to disclose harm and receive the vital help and support they need.

Ministers have also been exploring the potential for an outright ban on social media for under-16s. Some MPs and peers have even been calling for emergency legislation to pass these measures before the election. But this would remove the benefits of digital technology for young people while leaving online harms available for them to find.

Put simply, this would punish children for the failures of technology companies to build their products responsibly. While young people’s voices have been largely absent from the debate up to now, children have told us that being online is fundamental to their lives. So it’s important not to forget social media is a place where children can learn, communicate and explore and express themselves during often difficult and challenging teenage years.

It is naive to assume that removing smartphones also removes the risk of harm altogether. Pulling up the drawbridge until children reach 16 at best only delays the online dangers faced by children and young people – it doesn’t remove them.

A 16-year-old girl opening social media accounts for the first time will still face appalling misogynistic abuse, and will still run the risk of being coerced and pressured into sending self-generated images. Once she turns 18, and as a direct result of the Online Safety Act having been watered down, she can still be freely and algorithmically recommended content promoting suicide, self-harm and eating disorders.

Reform is needed, but it must be targeted at the companies rather than children. Until we change the legal and regulatory incentives on social media companies, targeting the commercial and design choices that enable them to prey on and exacerbate the insecurities of young people, an entire generation will continue to be blighted by tech companies that opt to monetise their misery. It is only through effective regulation that tech companies will start to design products based on children's safety, not simply what's best for their bottom line.

If politicians rightly want to do more about online safety, the answer must be to commit to a strengthened Online Safety Act. While we wait for Ofcom to set out its approach, it’s clear that the regulator needs to take a considerably bolder and more ambitious approach. There are also structural issues in the legislation that need to be quickly addressed.

The next government should therefore commit to introducing strong yet measured follow-up legislation early in the next parliament. Across the chamber, MPs must recognise this is not about legislating for hurt feelings; it is about improving children's lives and preventing more parents from having to bury their teenage sons and daughters.

We owe our children nothing less than to protect them from entirely preventable online harms. Politicians must employ a considered, evidence-based approach if they are to avoid moving fast and breaking things, much like big tech has done before.

  • Ian Russell is an internet safety campaigner and chair of the Molly Rose Foundation
