FAR.AI

Research Services

Berkeley, California · 2,129 followers

Frontier alignment research to ensure the safe development and deployment of advanced AI systems.

About us

FAR.AI is a technical AI research and education non-profit, dedicated to ensuring the safe development and deployment of frontier AI systems.
• FAR.Research: Explores a portfolio of promising technical AI safety research directions.
• FAR.Labs: Supports the San Francisco Bay Area AI safety research community through a coworking space, events and programs.
• FAR.Futures: Delivers events and initiatives bringing together global leaders in AI academia, industry and policy.

Website: https://far.ai/
Industry: Research Services
Company size: 11-50 employees
Headquarters: Berkeley, California
Type: Nonprofit
Founded: 2022
Specialties: Artificial Intelligence and AI Alignment Research

Updates

  • FAR.AI

    Leading scientists from China and the West gathered in Venice for the third in a series of International Dialogues on AI Safety, urging swift action to prevent catastrophic AI risks that could emerge at any time. Congratulations to the Safe AI Forum team for a successful event, convened by AI pioneers Professors Stuart Russell, Andrew Yao, Yoshua Bengio, and Ya-Qin Zhang. FAR.AI is proud to support this important effort as a fiscal sponsor.
    👉 Read the full statement at http://idais.ai
    📖 Blog post: https://lnkd.in/gnvDfR_8
    📰 NYT coverage: https://lnkd.in/gpWAg4kr
    ✨ Follow us for the latest AI Safety insights!

  • FAR.AI

    "The only equilibrium of this game actually turns out to be… where everybody, the agents and we, have a terrible utility, even though we got so close to aligning."
    Vincent Conitzer tackled the complexities of aligning AI in multi-agent systems at the Vienna Alignment Workshop hosted by FAR.AI. He proposed leveraging social choice theory to aggregate human feedback and called for interdisciplinary collaboration to strengthen AI safety and reliability.
    Key Highlights:
    - Structuring AI-human interactions to prevent failures
    - Risks of algorithmic interactions in multi-agent systems
    - Using social choice theory to aggregate human feedback
    - Importance of interdisciplinary collaboration
    🎥 Watch the full recording and continue the discussion: https://lnkd.in/ecWGz_QA
    🚀 Help build trustworthy, beneficial AI—explore careers at https://far.ai/jobs/.
    👉 Follow us for the latest updates and insights on AI Safety!

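    The post above mentions "using social choice theory to aggregate human feedback." As a minimal, purely illustrative sketch of that idea (not the method from the talk), the snippet below applies a Borda count, one classic voting rule, to several annotators' rankings of candidate model responses; all names and data are hypothetical.

    # Illustrative only: Borda-count aggregation of annotator rankings
    # over candidate model responses. Hypothetical data and names.
    from collections import defaultdict

    def borda_aggregate(rankings):
        """rankings: one ranking per annotator, each a list of candidate ids, best first.
        Returns candidates sorted by total Borda score, highest first."""
        scores = defaultdict(int)
        for ranking in rankings:
            n = len(ranking)
            for position, candidate in enumerate(ranking):
                scores[candidate] += n - 1 - position  # top spot earns n-1 points
        return sorted(scores, key=scores.get, reverse=True)

    # Three annotators rank four candidate responses.
    annotator_rankings = [
        ["resp_a", "resp_b", "resp_c", "resp_d"],
        ["resp_b", "resp_a", "resp_d", "resp_c"],
        ["resp_a", "resp_c", "resp_b", "resp_d"],
    ]
    print(borda_aggregate(annotator_rankings))  # ['resp_a', 'resp_b', 'resp_c', 'resp_d']
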
  • FAR.AI

    "We're not advocating for a position of techno-solutionism here. We view technical AI governance as merely one component of a comprehensive AI governance portfolio."
    At the Vienna Alignment Workshop hosted by FAR.AI, Ben Bucknall explored how technical analysis can strengthen AI governance. He called for its integration with socio-technical and political solutions to tackle challenges across the AI value chain.
    Key Highlights:
    - Merging technical governance with socio-technical strategies
    - Categorizing challenges into targets and capacities
    - Leveraging technical tools to support AI governance
    - Outlining a roadmap for future AI governance research
    🎥 Watch the full recording and continue the discussion: https://lnkd.in/e-iX3Qav
    🚀 Help build trustworthy, beneficial AI—explore careers at https://far.ai/jobs/.
    👉 Follow us for the latest updates and insights on AI Safety!

  • FAR.AI

    "One framing of the field of alignment is to align AI to human values. But of course, this begs the question, what are our values?"
    At the Vienna Alignment Workshop hosted by FAR.AI, Oliver Klingefjord introduced a framework that uses values cards and a moral graph to align AI behavior with human values in a context-aware, fine-grained way.
    Key Highlights:
    - Viewing values as a language for evaluating options
    - Capturing evolving values with context sensitivity
    - Employing values cards and moral graphs for AI alignment
    🎥 Watch the full recording and continue the discussion: https://lnkd.in/exhn4_xZ
    🚀 Help build trustworthy, beneficial AI—explore careers at https://far.ai/jobs/.
    👉 Follow us for the latest updates and insights on AI Safety!

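    To make "values cards and a moral graph" a bit more concrete, here is a toy, purely illustrative data structure: cards are nodes and context-tagged "wiser than" judgments are edges. The class and field names are hypothetical assumptions, not the speaker's implementation.

    # Illustrative only: a toy moral graph with values cards as nodes and
    # context-tagged "wiser than" judgments as edges. Hypothetical structure.
    from dataclasses import dataclass, field

    @dataclass
    class ValuesCard:
        name: str
        attends_to: list  # what someone living this value pays attention to

    @dataclass
    class MoralGraph:
        cards: dict = field(default_factory=dict)   # name -> ValuesCard
        edges: list = field(default_factory=list)   # (wiser, less_wise, context)

        def add_card(self, card):
            self.cards[card.name] = card

        def add_wiser_than(self, wiser, less_wise, context):
            # Record a judgment that one value is wiser than another in a context.
            self.edges.append((wiser, less_wise, context))

        def wisest_in(self, context):
            # Cards never judged less wise within this context.
            dominated = {less for _, less, ctx in self.edges if ctx == context}
            return [name for name in self.cards if name not in dominated]

    graph = MoralGraph()
    graph.add_card(ValuesCard("user autonomy", ["whether the person can make their own call"]))
    graph.add_card(ValuesCard("informed care", ["what the person needs in order to decide well"]))
    graph.add_wiser_than("informed care", "user autonomy", context="user asks for medical advice")
    print(graph.wisest_in("user asks for medical advice"))  # ['informed care']
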
  • FAR.AI

    "Our prescription about how to deal with these potentially inaccurate reward models…is to actually be aware and do some extra work to quantify the uncertainty that the reward model has."
    At the Vienna Alignment Workshop hosted by FAR.AI, Aditya Gopalan highlighted the importance of resolving uncertainties in reward models to ensure reliable AI alignment. He proposed methods to quantify and manage these uncertainties for more robust reinforcement learning from human feedback (RLHF).
    Key Highlights:
    - Addressing reward model uncertainties
    - Inconsistencies found across independently trained reward models
    - Proposal to quantify and manage uncertainty for reliable RLHF
    🎥 Watch the full recording and continue the discussion: https://lnkd.in/ed-awDpJ
    🚀 Help build trustworthy, beneficial AI—explore careers at https://far.ai/jobs/.
    👉 Follow us for the latest updates and insights on AI Safety!

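    One simple way to quantify a reward model's uncertainty, offered here as an illustrative sketch rather than the speaker's method, is to score each response with an ensemble of independently trained reward models and penalize their disagreement before the reward is used for RLHF-style optimization. All models and numbers below are hypothetical stand-ins.

    # Illustrative only: estimate reward uncertainty with an ensemble of reward
    # models and penalize disagreement before RLHF-style optimization.
    import numpy as np

    def uncertainty_penalized_reward(reward_models, prompt, response, beta=1.0):
        """reward_models: callables scoring (prompt, response).
        Returns the ensemble mean reward minus a penalty on ensemble disagreement."""
        rewards = np.array([rm(prompt, response) for rm in reward_models])
        return rewards.mean() - beta * rewards.std()

    # Three stand-ins for independently trained reward models.
    ensemble = [
        lambda p, r: 0.8,
        lambda p, r: 0.7,
        lambda p, r: -0.2,  # one model disagrees sharply, so the penalty is large
    ]
    print(uncertainty_penalized_reward(ensemble, "some prompt", "some response"))
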
  • FAR.AI

    "And more simply put, it's about who is in the driver's seat when we're thinking about people and their relationship with technology."
    At the Vienna Alignment Workshop hosted by FAR.AI, Alex Tamkin stressed the importance of preserving human agency as AI systems become more integrated into society. He proposed research on scalable oversight and control delegation to ensure people remain in charge.
    Key Highlights:
    - Preserving human agency in AI integration
    - Implementing scalable oversight of AI agents
    - Managing the delegation of control
    - Ensuring the ability to reclaim control from AI systems
    🎥 Watch the full recording and continue the discussion: https://lnkd.in/eMZaRWUr
    🚀 Help build trustworthy, beneficial AI—explore careers at https://far.ai/jobs/.
    👉 Follow us for the latest updates and insights on AI Safety!

  • FAR.AI

    "The task that I would argue we should really care about is aligning an automated alignment researcher."
    Jan Leike outlined strategies for scalable oversight and effective elicitation of AI capabilities to boost safety and reduce risks at the Vienna Alignment Workshop hosted by FAR.AI.
    Key Highlights:
    - Overcoming challenges in supervising AI on difficult tasks
    - Implementing scalable oversight
    - Eliciting AI capabilities with precision
    - Applying tampering and adversarial evaluations
    🎥 Watch the full recording and continue the discussion: https://lnkd.in/gqeyazY8
    🚀 Help build trustworthy, beneficial AI—explore careers at https://far.ai/jobs/.
    👉 Follow us for the latest updates and insights on AI Safety!

  • FAR.AI

    "How does a model call a lawyer and get advice about what's allowable and not allowable?"
    Gillian K. Hadfield emphasizes the need to integrate AI into institutional structures at the Vienna Alignment Workshop hosted by FAR.AI.
    Key Highlights:
    - Building governable AI systems instead of trying to fix all societal issues
    - Lack of technical infrastructure for AI governance
    - Importance of developing AI-specific institutions and structures for guidance
    🎥 Watch the full recording and continue the discussion: https://lnkd.in/gAA_kWKF
    🚀 Help build trustworthy, beneficial AI—explore careers at https://far.ai/jobs/.
    👉 Follow us for the latest updates and insights on AI Safety!

  • FAR.AI

    "If we perfectly solved alignment … I think that basically cuts our risk of extinction from AI maybe in half."
    David Krueger discusses different models of AI risk beyond alignment at the Vienna Alignment Workshop hosted by FAR.AI.
    Key Highlights:
    - Technical alignment updates have been positive recently
    - Solving alignment issues may not be enough for safety
    - Concern over gradual loss of control in addition to sudden rogue AI scenarios
    🎥 Watch the full recording and continue the discussion: https://lnkd.in/gAA_kWKF
    🚀 Help build trustworthy, beneficial AI—explore careers at https://far.ai/jobs/.
    👉 Follow us for the latest updates and insights on AI Safety!

  • FAR.AI

    How can strange bedfellows shape the future of AI policy?
    In his FAR.AI Seminar talk, Andrew Freedman, founder of Fathom.org, draws on his experience in cannabis regulation to show how unlikely coalitions can be built to tackle AI governance. He highlights the need to unite diverse voices and shape policy before crisis-driven regulations take hold, and emphasizes the power of consensus-building to address AI risks and opportunities.
    Key Takeaways:
    🧠 Lessons from cannabis policy applied to AI
    🗣️ Changing minds before an emergency forces reactionary rules
    ⚖️ Creating coalitions outside tech echo chambers
    📺 Watch the full recording: https://lnkd.in/gs5ZE9Af — and subscribe for more insights!
