
Meta’s Content Moderation Changes ‘Hugely Concerning’, Says Molly Rose Foundation.

Mark Zuckerberg’s move to change Meta’s content moderation policies risks pushing social media platforms back to the days before the teenager Molly Russell took her own life after viewing thousands of Instagram posts about suicide and self-harm, campaigners have claimed. The Molly Rose Foundation, set up after the 14-year-old’s death in November 2017, is now calling on the UK regulator, Ofcom, to “urgently strengthen” its approach to the platforms.

Earlier this month, Meta announced changes to the way it vets content on platforms used by billions of people as Zuckerberg realigned the company with the Trump administration. In the US, factcheckers are being replaced by a system of “community notes” whereby users will determine whether content is true. Policies on “hateful conduct” have been rewritten, with injunctions against calling non-binary people “it” removed and allegations of mental illness or abnormality based on gender or sexual orientation now allowed.

Meta insists content about suicide, self-injury and eating disorders will still be considered “high-severity violations” and it “will continue to use [its] automated systems to scan for that high-severity content”. But the Molly Rose Foundation is concerned about the impact of content that references extreme depression and normalises suicide and self-harm behaviours, which, when served up in large volumes, can have a devastating effect on children. It is calling on the communications watchdog to fast-track measures to “prevent teens from being exposed to a tsunami of harmful content” on Meta’s platforms, which also include Facebook.

Andy Burrows, the Molly Rose Foundation’s chief executive, said: “Meta’s bonfire of safety measures is hugely concerning and Mark Zuckerberg’s increasingly cavalier choices are taking us back to what social media looked like at the time that Molly died.”

In May, Ofcom issued a draft safety code of practice that ordered tech firms to “act to stop their algorithms recommending harmful content to children and put in place robust age-checks to keep them safer”. The final codes are due to be published in April and to come into force in July after parliamentary approval.

A Meta spokesperson said: “There is no change to how we define and treat content that encourages suicide, self-injury and eating disorders. We don’t allow it and we’ll continue to use our automated systems to proactively identify and remove it. We continue to have community standards, around 40,000 people working on safety and security to help enforce them, and Teen Accounts in the UK, which automatically limit who can contact teens and the types of content they see.”

Let us know your thoughts in the comment section and follow us for more updates! 😊

#MetaContentModeration #MetaLatestNews #SocialMedia #DigitalMarketing
