Accountable Tech

We advocate for social media companies to strengthen the integrity of their platforms and our democracy.

About us

Major social media platforms increasingly serve as information gatekeepers, putting enormous power in the hands of tech giants with little oversight. These companies – like Facebook, Google, and Twitter – were not fundamentally designed to serve the public good, but to generate growth and profits. Their surveillance-advertising business model relies on harvesting and monetizing data, so they developed products and algorithms to maximize engagement.

Those algorithms inherently reward outrageous content, reinforce biases irrespective of truth, and filter like-minded individuals into echo chambers. Moreover, bad actors are weaponizing these data-rich platforms to manipulate users – be it to suppress voters, prey on consumers, or promote extremism.

Democracy requires a shared baseline of facts, but this online information landscape is instead creating a toxic patchwork of personalized pseudo-realities. Social media companies face an undeniably challenging task in self-regulating, but they can and must do more to mitigate harms and promote the greater good.

Industry: Civic and Social Organizations
Company size: 2-10 employees
Headquarters: Washington
Type: Nonprofit


Updates

1,379 followers

    Earlier this year, Meta began limiting political content on Instagram, resulting in an average 65% drop in audience sizes for creators we studied over 10 weeks. As Geoffrey Fowler reported for the Washington Post, 1 in 5 Americans get their news from Instagram, making Meta’s actions detrimental to our information ecosystem. Instead of addressing disinformation or hate speech, Meta undermined the reach of authoritative content that helps people understand current events, civic engagement, and elections. Marginalized creators sharing vital experiences are particularly affected, as this policy silences their advocacy on climate change, gun violence prevention, racial justice, LGBTQ+ rights, and reproductive freedom. For example, creator Mrs. Frazzled saw a 63% drop in reach on posts using the word “vote” and a 40% drop when discussing politics. “These are integral parts of some people’s identities and livelihoods — Meta has limited their capability to talk about who they are and what they care about,” said our Campaigns Director Zach Praiss. https://lnkd.in/eHe-X9Ug


    With just weeks before the 2024 U.S. presidential election, AI is spreading false narratives and harmful material at an alarming rate. Worse, politicians and Big Tech CEOs are often the ones sharing manipulated photos and videos. Electoral deepfakes endanger democracy because they can manipulate public opinion and undermine trust in institutions. Social media platforms must ensure deepfakes do not imperil our election integrity. #NoDeepfakesForDemocracy. Visit https://lnkd.in/esxvVReP to learn more.


    Deepfakes threaten our democracy. We’re demanding #NoDeepfakesForDemocracy and calling on social media platforms to prevent the distribution of non-consensual and deceptive AI-generated electoral deepfakes ahead of the 2024 election and beyond.

    👉 Before U.S. election day, platforms must implement robust detection and moderation systems to identify and prohibit non-consensual and intentionally deceptive deepfakes.
    👉 Beyond labeling political AI-generated content, platforms must disclose which generative tool created it, so it can be traced.
    👉 These companies must implement similar systems for the other democracies around the world holding elections this year.
    👉 Social media platforms must collaborate with researchers to give civil society, academics, and journalists access to, and insight into, the spread of and enforcement against deceptive deepfakes.

    The time for action is now. Our coalition is calling on major social media companies to safeguard our democracy in the digital realm. Visit https://lnkd.in/esxvVReP to learn more.


    Yesterday, California Governor Gavin Newsom vetoed landmark AI safety bill SB 1047, which would have required third-party testing and other guardrails for frontier AI models to prevent cyberattacks and other misuse. As our Co-founder and Executive Director Nicole Gill said, this veto is "a massive giveaway to Big Tech companies and an affront to all Americans who are currently the unconsenting guinea pigs of an unregulated and untested" industry. "This veto will not 'empower innovation' — it only further entrenches the status quo where Big Tech monopolies are allowed to rake in profits without regard for our safety, even as their AI tools are already threatening democracy, civil rights, and the environment with unknown potential for other catastrophic harms," she continued. Tech companies have proven time and time again that they can’t be trusted to regulate themselves. Rather than seizing the opportunity to stand up for his constituents, Governor Newsom bowed to the Big Tech billionaires. https://lnkd.in/ezkzzgiB

    California Governor Gavin Newsom vetoes controversial bill on AI safety

    axios.com


    We're extremely disappointed to see California Governor Gavin Newsom veto #SB1047, a landmark AI safety bill that would have implemented crucial guardrails, including pre-deployment safety testing and third-party auditing. The bill was overwhelmingly supported by the public, including civil society groups, scientists, and unions. Tech companies have proven time and time again that they can’t be trusted to regulate themselves – and yet when given the opportunity to sign commonsense, bipartisan AI safety guardrails into law, Governor Newsom caved to industry pressure. AI is the future, but we can best harness its power while also protecting the public from cyberattacks and other harms. Governor Newsom had the opportunity to stand up to tech billionaires by signing #SB1047; instead, he sided with them.


    We’re requesting information from Amazon, Apple, Google, Microsoft, Meta, TikTok, Yelp, and YouTube on how their platforms will protect users’ privacy from a potential Trump administration. Project 2025 seeks to weaponize Big Tech platforms to unleash new, devastating attacks on reproductive rights. After the fall of Roe, companies put out statements pledging to uphold reproductive freedom, but splashy PR campaigns aren’t enough to protect abortion rights and mitigate the spread of harmful medical disinformation. See our letter to the companies urging them to deliver on past privacy promises and protect consumers from a potential Trump administration and #Project2025: https://lnkd.in/e2JMmush


    On Monday, Meta announced a new update to protect the privacy of teen users, but we won't be fooled. This decision is designed to evade actual independent oversight and regulation, especially as Congress comes closer than ever to passing landmark bipartisan legislation to protect kids’ safety and privacy online. Our Co-founder and Executive Director Nicole Gill said it best: “Today’s PR exercise falls short of the safety by design and accountability that young people and their parents deserve and only meaningful policy action can guarantee. Meta’s business model is built on addicting its users and mining their data for profit; no amount of parental and teen controls Meta is proposing will change that.” https://lnkd.in/eKq9zFUn

    Instagram makes teen accounts private as pressure mounts on the app to protect children

    apnews.com


    Yesterday, the California State Assembly passed SB 1047, a first-of-its-kind AI safety bill. The legislation would introduce crucial guardrails around AI development, including requiring pre-deployment safety testing and third-party auditing. If signed into law by Governor Gavin Newsom, SB 1047 would finally force Big Tech developers to prioritize our safety over their race for profits. We urge Governor Newsom to sign the bill immediately and urge Congress to follow California’s lead with bold, decisive action to ensure responsible AI development. https://lnkd.in/eHw2XfcG

    California legislature passes sweeping AI safety bill

    theverge.com


    Our Digital Content Manager, Q Chaghtai, received a $9.72 payment one morning, only to discover it came from an FTC settlement with BetterHelp, which had sold sensitive personal data from its therapy platform to third parties. We must ban surveillance advertising to get to the root of Big Tech’s toxic business model, which fuels the collection, retention, and sale of our most personal data. Read his story in our latest staff post: https://lnkd.in/gZxjz3Pe

