


Can we trust social media companies to protect kids online?

by Oliver Hayes, Policy & Campaigns Lead

4 min read





In an extraordinary series of investigations, the Wall Street Journal has revealed the callous cynicism at the heart of Facebook and its associated platforms.

Buried research 


The WSJ’s “Facebook files” reveal a company whose leadership is well aware of the harm its products cause but unwilling to do anything about those harms for fear of hurting the bottom line. 


For instance: 


  • Facebook’s rapid global expansion has been accompanied by minimal content moderation outside the US, allowing videos of murders, incitements to violence, and ads for human trafficking to be posted 
  • Its news feed algorithm’s prioritisation of ‘re-shares’ has fuelled the spread of “misinformation, toxicity & violent content” 
  • It gives high-profile and powerful users “free rein to harass others, make false claims, and incite violence”. An internal review of these practices stated the obvious: “we are not actually doing what we say we are doing publicly”, i.e. treating all users equally. 


And perhaps least surprising but in some ways most shocking, Facebook-owned Instagram has serious negative impacts on teens’ – particularly teen girls’ – mental health. Specifically: 


  • Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse 
  • Teens blame Instagram for increases in the rate of anxiety and depression 
  • Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram 


The above are direct quotes from internal Facebook presentations, summarising the findings of their own research. Research which they buried. 


As our campaign partners Accountable Tech put it, “There is nothing more chilling than turning one’s back on a child who’s in harm’s way. Yet, this is what Facebook and Instagram executives do every day.” 


For many it has been patently clear for a long time that companies like Facebook are not going to police themselves. These investigations confirm as much. 


That’s why it is so important that regulatory efforts – like the UK Government’s draft Online Safety Bill – are as robust as possible and target platforms’ business model, rather than tinkering around the edges. 


Their business model is best characterised as ‘surveillance for profit’. That is, harvesting every possible piece of real and ‘inferred’ information about users, in order to hone the content that will keep them – us – hooked. The more we’re hooked, the more ad revenue they make. 


That’s why ‘extreme’ content is so prevalent – it’s great for business. We find it incredibly difficult to look away because, with every second spent online, algorithms learn more about what we each find most captivating. 


Such ‘surveillance advertising’ is problematic for all of society, but it’s particularly damaging to children. Facebook has admitted as much by announcing that it will limit advertisers’ ability to target child users with ads. But there are huge questions about what this announcement means, and – as demonstrated by the WSJ’s investigations – Facebook has an appalling track record on matters of transparency and trustworthiness. 


That’s why a powerful cross-sector coalition of groups has submitted evidence to a parliamentary ‘super-inquiry’ scrutinising the UK government’s Online Safety Bill. Our submission, backed by the Church of England, Global Witness, Privacy International and others, makes the case that surveillance advertising is conspicuous by its absence from the draft Bill, and needs to be urgently included. 


Online Safety Bill 


As currently drafted, the Online Safety Bill instructs the regulator, Ofcom, to ignore “paid-for advertisements” on social media and any other sites that fall within its remit. We hope the MPs and Peers combing through the detail of the Bill will agree with us that, unless this omission is corrected, it will fail in its stated aim of reducing the harm experienced online. 


Our submission also makes the point that surveillance advertising is in itself harmful to children. It is well documented that children are less equipped to understand how behavioural profiling works, and are more vulnerable to being manipulated. Surveillance advertising to children is already woefully under-regulated and must not be ignored in what is the Government’s major vehicle for reducing harm to children online. 


The committee will make its recommendations to Government before the end of the year. We hope that, in future, it will be robust regulation – not just investigative reporting – that shines a light on the harm caused by online surveillance for profit and creates a better, safer online world. 


Find out more about the End Surveillance Advertising to Kids campaign and sign up for regular updates.  


Back to Global Action Plan News