Time Booster Marketing’s Post

Meta’s Content Moderation Changes ‘Hugely Concerning’, Says Molly Rose Foundation

Mark Zuckerberg’s move to change Meta’s content moderation policies risks pushing social media platforms back to the days before the teenager Molly Russell took her own life after viewing thousands of Instagram posts about suicide and self-harm, campaigners have claimed.

The Molly Rose Foundation, set up after the 14-year-old’s death in November 2017, is now calling on the UK regulator, Ofcom, to “urgently strengthen” its approach to the platforms.

Earlier this month, Meta announced changes to the way it vets content on platforms used by billions of people, as Zuckerberg realigned the company with the Trump administration. In the US, factcheckers are being replaced by a system of “community notes” whereby users will determine whether content is true. Policies on “hateful conduct” have been rewritten: injunctions against calling non-binary people “it” have been removed, and allegations of mental illness or abnormality based on gender or sexual orientation are now allowed.

Meta insists content about suicide, self-injury and eating disorders will still be considered “high-severity violations” and it “will continue to use [its] automated systems to scan for that high-severity content”. But the Molly Rose Foundation is concerned about the impact of content that references extreme depression and normalises suicide and self-harm behaviours, which, when served up in large volumes, can have a devastating effect on children. It is calling on the communications watchdog to fast-track measures to “prevent teens from being exposed to a tsunami of harmful content” on Meta’s platforms, which also include Facebook.

Andy Burrows, the Molly Rose Foundation’s chief executive, said: “Meta’s bonfire of safety measures is hugely concerning and Mark Zuckerberg’s increasingly cavalier choices are taking us back to what social media looked like at the time that Molly died.”

In May, Ofcom issued a draft safety code of practice which ordered tech firms to “act to stop their algorithms recommending harmful content to children and put in place robust age-checks to keep them safer”. The final codes are due to be published in April and to come into force in July, after parliamentary approval.

A Meta spokesperson said: “There is no change to how we define and treat content that encourages suicide, self-injury, and eating disorders. We don’t allow it and we’ll continue to use our automated systems to proactively identify and remove it. We continue to have community standards, around 40,000 people working on safety and security to help enforce them, and Teen Accounts in the UK, which automatically limit who can contact teens and the types of content they see.”

Let us know your thoughts in the comment section and follow us for more updates! 😊 #MetaContentModeration #MetaLatestNews #SocialMedia #DigitalMarketing
More Relevant Posts
-
Does the Meta algorithm promote destructive content? It seems like edifying, good content cannot get visibility, while self-harm and negative content still gets through and doesn’t get taken down.

A recent study by the Danish organization Digitalt Ansvar has revealed alarming inadequacies in Instagram’s moderation of self-harm content, calling into question the platform’s commitment to user safety.

Here’s what they did. Researchers created a private network of fake profiles, some as young as 13 years old, to share 85 progressively explicit pieces of self-harm-related content, including images of blood, razor blades, and messages encouraging self-harm. Over the course of a month, none of the content was removed, despite Meta’s claim that its AI removes 99% of harmful material before it is reported.

“We thought that when we did this gradually, we would hit the threshold where AI or other tools would recognize or identify these images,” said Ask Hesby Holm, CEO of Digitalt Ansvar. “But big surprise—they didn’t.”

The study further revealed that Instagram’s algorithm actively contributed to the spread of the self-harm network: once the 13-year-old profiles interacted with one member of the group, the platform connected them to all of its members. This, the researchers said, “suggests that Instagram’s algorithm actively contributes to the formation and spread of self-harm networks.”

Digitalt Ansvar also developed its own AI tool to test moderation capabilities, finding it could automatically detect 38% of the self-harm images and 88% of the most severe ones. “This shows that Instagram has access to technology able to address the issue but has chosen not to implement it effectively.” The findings also raise concerns about Meta’s compliance with the EU’s Digital Services Act, which requires digital platforms to identify and mitigate risks to users’ physical and mental well-being.

Meta defended its efforts, pointing to recent initiatives like Instagram Teen Accounts, which automatically restrict teenagers’ exposure to sensitive content. A Meta spokesperson said: “Content that encourages self-injury is against our policies, and we remove this content when we detect it. In the first half of 2024, we removed more than 12 million pieces related to suicide and self-injury on Instagram, 99% of which we proactively took down.”

Despite these assurances, the findings are deeply troubling. “We thought that they had some kind of machinery trying to figure out and identify this content,” Holm said, but the lack of action, even as the content escalated in severity, showed otherwise. There are critical gaps in Instagram’s approach to self-harm content. And, as anyone who has spent time on Instagram can attest, the algorithms have fundamentally changed, and good content struggles to get visibility.

#SocialMedia #AI #ContentModeration #DigitalSafety #MentalHealth #DigitalServicesAct https://lnkd.in/e2FYpJCS
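To make the kind of automated severity scan discussed above concrete, here is a minimal, hypothetical sketch of how a platform could route posts based on a classifier’s severity score. Everything in it — the scorer, the thresholds, the names — is an illustrative assumption for discussion, not Meta’s or Digitalt Ansvar’s actual tooling.

```python
# Hypothetical sketch of a severity-based moderation scan.
# The scoring model, thresholds, and action names are assumptions,
# not Meta's or Digitalt Ansvar's real systems.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ScanResult:
    post_id: str
    severity: float  # model-estimated severity in [0, 1]
    action: str      # "remove", "review", or "allow"


def scan_post(
    post_id: str,
    image_bytes: bytes,
    score_severity: Callable[[bytes], float],
    remove_threshold: float = 0.9,  # illustrative cutoff for proactive takedown
    review_threshold: float = 0.5,  # illustrative cutoff for human review
) -> ScanResult:
    """Route a post based on a classifier's severity score.

    `score_severity` stands in for any image classifier; the thresholds
    are arbitrary values chosen for illustration.
    """
    severity = score_severity(image_bytes)
    if severity >= remove_threshold:
        action = "remove"  # high-severity: take down without waiting for reports
    elif severity >= review_threshold:
        action = "review"  # borderline: queue for human moderators
    else:
        action = "allow"
    return ScanResult(post_id, severity, action)


# Example with a stub scorer that rates everything as borderline:
print(scan_post("post_1", b"...", score_severity=lambda _: 0.6))
# ScanResult(post_id='post_1', severity=0.6, action='review')
```

If the study’s 38%/88% detection figures are taken at face value, even a simple routing layer like this, paired with an off-the-shelf classifier, would catch a meaningful share of the most severe content — which is exactly Digitalt Ansvar’s point that the gap appears to be deployment, not feasibility.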
-
Meta's shift toward a looser, less centralized moderation approach marks a bold step in rethinking how content is handled on major platforms. This opens the door for more diverse and open conversations, but it also means brands must take greater ownership of their spaces to prevent unwanted polarization. This will require brands to reevaluate their social media strategies, ensuring their communities align with their values while fostering respectful discourse. With Resolver's expertise in crafting clear policies and enabling consistent moderation, teams can confidently navigate this evolving landscape. Explore more in the blog:
Meta’s policy changes will have a profound impact on brands operating on its platforms. Moderating user comments and influencer posts will become more complex, and any moderation action could be perceived as an infringement on free speech. Brands must gain a comprehensive understanding of the new community standards, be prepared to justify content moderation decisions, and establish risk tolerance levels for certain types of speech. Read more from Darren Burrell on our blog. https://lnkd.in/eUgpfb6z
-
Zuckerberg's announcement last week certainly made for some interesting news to consider. With content restrictions loosened on sensitive topics like immigration and gender identity, there is clearly more potential for controversial discourse in brand spaces on Meta properties. Now is the time to evaluate the practices and effectiveness of your social listening and online risk detection programs, and to determine your brand's tolerance levels for speech about sensitive issues like immigration, gender and politics. Check out my blog post for a helpful checklist to guide that process. Resolver, a Kroll Business
https://lnkd.in/eUgpfb6z
-
Advertisers Are Free to Not Advertise Next to 'Bad Stuff'

CAN co-founder Jake Dubbins shared important insights with journalist Jack Benjamin (The Media Leader UK) in response to Meta’s controversial content moderation changes this week. As Jake told Jack: “Meta are free to do what they like… They can choose to ‘catch less bad stuff’. Advertisers are free to not advertise next to that bad stuff.”

His words resonate deeply in light of Meta’s decision to scale back fact-checking and allow more harmful rhetoric. Advertisers now face a pressing challenge: do your brand values align with the content your ads may end up supporting?

Jake’s warning is clear: when brands appear next to extreme or divisive content, they risk alienating the very audiences they aim to engage. It’s an uncomfortable situation, especially for businesses striving for inclusivity and growth.

🔗 Link to the TML article in full: https://lnkd.in/eTjBh6X8

The question for advertisers is not just about Reach, but Responsibility. How will you respond to these changes?

Need guidance on Responsible Reach? ✉️ Contact hello@consciousadnetwork.org to find out about CAN’s free membership, and check out our 7 manifestos: https://lnkd.in/e5dB2_NB

#ConsciousAdvertising #BrandSafety #DEI #ResponsibleReach
-
Changes to Meta's content moderation policies signal a new reality for users, brands and trust & safety teams. We unpack what it means for brands here: https://lnkd.in/eeAe_pfu and how it may reshape the landscape of Trust & Safety here: https://lnkd.in/ekBw--nU
-
By now, you’ve probably heard about Meta’s decision to end its third-party fact-checking program in the U.S. Instead of professional moderation, Meta is introducing a "Community Notes" system, where users flag and add context to potentially misleading posts.

❓This change raises a big question: How does it affect businesses like yours running ads and engaging with audiences on Meta’s platforms?

From a moral standpoint, Meta’s history with moderating harmful content, like self-harm posts, is, let’s be honest, not great. Content that algorithms should catch still finds its way to young people and vulnerable audiences. So, is this shift about doing better? Or simply about saving budgets and keeping shareholders happy? (I have my suspicions.)

But putting moral concerns aside, let’s focus on the practical implications for you as a business using Meta’s platforms. This change carries real risks that you can’t ignore if you’re running paid ads or building a presence on Facebook and Instagram.

❓What’s at Stake for Businesses?

🧑‍💻 Increased Risk of Misinformation
Without professional fact-checkers, misinformation could spread more easily. Your carefully crafted ads or posts might end up next to false or misleading content. How would that impact your brand’s credibility?

🧑‍💻 Brand Safety Concerns
No fact-checking means less control over where your ads appear. You might need to spend extra time (and money) ensuring your ads aren’t shown alongside content that conflicts with your values or reputation.

🧑‍💻 Inconsistent Moderation Standards
With user-driven moderation, there’s a risk of inaccuracies. What if your ad is flagged unfairly, or a truthful post gets labeled as misleading? That could hurt engagement and confuse your audience.

🧑‍💻 Extra Content Verification
This shift makes it even more important for businesses to double-check their own content. Sharing credible, reliable posts will help you maintain trust with your audience.

🧑‍💻 Staying Ahead of Policy Changes
Meta’s policies are evolving fast. To avoid penalties or reduced visibility, businesses will need to keep a close eye on updates and adapt their strategies.

❓What Can You Do?

While this change is meant to foster free expression, it does bring new challenges. If you’re running ads or building your presence on Meta, here are a few proactive steps you can take:

⚡️ Regularly review your ad placements to ensure they align with your brand values.
⚡️ Stay informed about Meta’s updates so you can adapt quickly.
⚡️ Double down on creating trustworthy, high-quality content that resonates with your audience.
⚡️ Set aside time to monitor how these changes impact your campaigns.

I’d love to hear your thoughts. Are you worried about misinformation or brand safety? Or do you see opportunities in this shift?

#MetaChanges #BrandSafety #Misinformation
-
🚨 Meta’s New Moderation Shift: What It Means for Brands 🚨

Meta’s updated community standards are changing the game for moderating user comments and influencer posts. With a greater risk of moderation being seen as infringing on free speech, brands are facing new challenges in maintaining their reputation and fostering open conversations online. Navigating these changes requires careful balance, and Resolver’s social moderation service offers support in managing content and addressing issues quickly, helping brands stay ahead in this shifting landscape.

💬 If you'd like to learn more, send me a DM - I’d love to chat and share how Resolver can make a difference for your brand!
https://lnkd.in/eUgpfb6z
-
Meta's recent policy changes will deeply affect brands using its platforms. The shift to community-driven content moderation complicates the oversight of user comments and influencer posts, with moderation actions potentially viewed as limiting free speech. Brands must familiarize themselves with the new community standards, be ready to explain their moderation choices, and determine acceptable risk levels for different types of speech. Discover more insights from our Division Lead Darren Burrell on our blog!
https://lnkd.in/eUgpfb6z