Institute for AI Policy and Strategy (IAPS)

IAPS works to reduce risks related to the development and deployment of frontier AI systems.

About us

The Institute for AI Policy and Strategy (IAPS) is a think tank of aspiring wonks working to understand and navigate the transformative potential of advanced AI. Our mission is to identify and promote strategies that maximize the benefits of AI for society and to develop thoughtful solutions that minimize its risks. We aim to be humble yet purposeful: we’re all having to learn about AI very fast, and we’d love for you to join us in figuring out what the future holds.

Website
https://www.iaps.ai/
Industry
Non-profit Organizations
Company size
11-50 employees
Type
Nonprofit

Employees at Institute for AI Policy and Strategy (IAPS)

Updates

  • IAPS researchers Joe O'Brien, Renan Araujo, and Oliver Guest contributed to this memo in collaboration with the Oxford Martin AI Governance Initiative. The memo explores frameworks and criteria for determining which actors (e.g., government agencies, AI companies, third-party organizations) are best suited to develop AI model evaluations. Read more: https://bit.ly/3DYsUgL

    New research memo! Who Should Develop Which AI Evaluations? In the rapidly advancing field of AI, model evaluations are critical for ensuring trust, safety, and accountability. But who should be responsible for developing these evaluations? Our latest research explores key challenges, including:
    1. Conflicts of interest when AI companies assess their own models
    2. The information and skill requirements for AI evaluations
    3. The blurred boundary between developing and conducting evaluations
    To tackle these challenges, our researchers propose a taxonomy of four development approaches and present nine criteria for selecting evaluation developers, which we apply in a two-step sorting process to identify capable and suitable developers. With Lara Thurnherr, Robert Trager, Christoph Winter, Amin Oueslati, Clíodhna Ní Ghuidhir, Anka Reuel, Merlin Stein, Oliver Guest, Oliver Sourbut, Renan Araujo, Yi Zeng, Joe O'Brien, Jun Shern Chan, Lorenzo Pacchiardi, Seth Donoghue, and the Oxford Martin School. Read the full report here: https://lnkd.in/etHrqCms

    Who Should Develop Which AI Evaluations?

    oxfordmartin.ox.ac.uk

  • We are excited to announce that Jennifer Marron has joined the Institute for AI Policy and Strategy as our Director of Policy and Engagement. Bringing an extensive background in foreign policy, national security, and bipartisan engagement, Jenny is well-positioned to advance IAPS’ mission of bridging the gap between cutting-edge research and sound policy for AI governance. Jenny previously served at the White House National Security Council (NSC) and held several roles within the Department of State from 2010-2019. There, she led teams focused on violence mitigation, conflict stabilization, and the application of data analysis to monitor global instability. With her expertise, Jenny will play a critical role in expanding our outreach and partnerships with stakeholders. Her leadership will help foster practical insights and informed dialogue on AI safety, aiding policymakers, civil society, technology leaders, and the general public in navigating this rapidly evolving field. Please join us in welcoming Jenny to IAPS!

    Jenny Marron — Institute for AI Policy and Strategy

    iaps.ai

  • While China does not have an official AI Safety Institute, there are several government-linked Chinese groups doing analogous work. Learn more in an article from IAPS' Oliver Guest and external researcher Karson Elmgren.

  • Institute for AI Policy and Strategy (IAPS) reposted this

    View profile for Sumaya Nur Adan

    Law & Policy | Research in AI Governance

    🌐 New Publication Alert: Key Questions for the International Network of AI Safety Institutes
    I am excited to share our latest commentary, Key Questions for the International Network of AI Safety Institutes, published just in time for the Network’s inaugural meeting in San Francisco on November 20-21. In this piece, my colleagues Renan Araujo, Oliver Guest, and I focus on creating an effective, inclusive framework for global AI safety, grounded in shared priorities and mutual interests. In this commentary, we explore:
    ▶️ Priority Areas for Collaborative Action: Standards, safety evaluations, and information-sharing practices that can help us work together on critical safety goals.
    ▶️ Inclusive Engagement with Key Global Players: Examining how we can incorporate Chinese actors to balance cooperation and security, potentially through associate or observer roles.
    ▶️ Defining the Role of AI Companies: Outlining a framework where companies can support our shared safety mission through observer roles or specific working groups, minimizing risks of industry capture.
    ▶️ Network Structure and Secretariat: Proposing a tiered membership model (core, associate, observer) and a central secretariat to promote inclusivity, resource-sharing, and accountability.
    ▶️ Alignment with Existing Global AI Initiatives: Identifying ways for the Network to complement forums like UN processes and the Bletchley Summit, enhancing our collective impact on AI governance.
    Our goal is to advance ideas that will strengthen cooperation and set the stage for effective action on AI safety at the international level.
    🔗 Read the full commentary on the IAPS website: https://lnkd.in/eW8mShmk

  • Institute for AI Policy and Strategy (IAPS) reposted this

    View profile for Oliver Guest

    AI Governance research

    New paper out today! 🎉 Karson Elmgren and I did a systematic review to identify "Chinese AISI counterparts." These are government-linked institutions in China that do similar work to the US and UK AI Safety Institutes (AISIs). Chinese “AI safety” work is sometimes rounded off to Chinese government efforts to censor LLMs, and we did find some of that. However, we also found several institutions working on problems similar to those of the US and UK AISIs. To the extent that AISIs and other bodies seek to engage with Chinese counterparts, I hope this paper can help them decide which institutions to engage and what to put on the agenda. We make specific recommendations on the first page. It's been awesome to work with Karson, and to get input from so many talented people. Link to the paper here: https://lnkd.in/e82Q2ngQ Institute for AI Policy and Strategy (IAPS)

  • Institute for AI Policy and Strategy (IAPS) reposted this

    The Bureau of Industry and Security (BIS) recently published a notice of proposed rulemaking to amend its regulations to establish reporting requirements for developing advanced AI models and computing clusters. This was an expected follow-up to BIS’s responsibility to manage mandatory reporting under Executive Order 14110. Following BIS’s request for public comment on the rule, my colleague Joe O'Brien and I from the Institute for AI Policy and Strategy (IAPS) provided a response, available here: https://lnkd.in/gjYr8WPU
    We appreciated how BIS outlined the importance of reporting for supporting the U.S. defense industrial base, especially to ensure safe and reliable integration of dual-use foundation models. There is also a solid case that transparency around dual-use foundation models helps the U.S. government prepare for the possibility of foreign adversaries and non-state actors using these models to threaten national security and public safety.
    Given the above, IAPS recommended ways that this reporting process could be strengthened by expanding the role of other stakeholders in the reporting process, including third-party evaluators, civil society groups, and other public sector entities. We recommended the following to BIS:
    1️⃣ Provide a voluntary reporting pathway for individual company staff and third parties
    2️⃣ Amend the notification schedule to enable BIS to capture information on rapid or spur-of-the-moment breakthroughs that quarterly reporting might miss, by authorizing BIS to request ad-hoc reports outside of the quarterly notification schedule
    3️⃣ Convene a multistakeholder process to develop and refine reporting standards
    4️⃣ Enable the sharing of safety- and security-critical information from BIS to other entities, by (A) developing clear guidance and criteria for determining when information should be shared with specific entities; (B) leveraging process-based measures to enable more scalable information-sharing, e.g., groups of reports relevant to particular issues could be tagged to be shared with particular actors over a specific time period; and (C) more ambitiously, taking on a role as an information clearing house for safety- and security-critical information, processing and triaging reports before sharing them with other entities.

  • Institute for AI Policy and Strategy (IAPS) reposted this

    View profile for Renan Araujo

    AI Policy Researcher @ IAPS | Oxford China Policy Lab Fellow | Lawyer

    🚨 Alert: New work by me and colleagues! Institute for AI Policy and Strategy (IAPS)
    🏗 AI Safety Institutes (AISIs) have become a popular model for governments seeking to strengthen their AI governance ecosystem. Despite the uniqueness of each AISI, there are some institutional patterns in their expansion. What are these?
    🌊 In this new policy brief, Oliver Guest, Kristina Fort, and I identify the UK, US, and Japan AISIs as the “first wave” of AISIs. First-wave AISIs share fundamental characteristics: they are technical government institutions with a focus on the safety of advanced AI systems and have no regulatory powers.
    🧪 First-wave AISIs’ work revolves around safety evaluations, i.e., techniques that test AI systems across tasks to understand their behavior and capabilities on relevant risks, such as cyber, chemical, and biological misuse.
    📖 They have displayed three core functions: research, standards, and cooperation. These activities have revolved around evaluations but also supported other work such as scientific consensus-building and foundational AI safety research.
    Read our policy brief below to understand the first wave of AISIs better and dig deeper into their core characteristics, functions, and challenges. Also here: https://lnkd.in/dTQqTvFZ

  • What research are AI companies doing into safe AI development? What research might they do in the future? To answer these questions, Oscar Delaney, Oliver Guest, and Zoe Williams looked at papers published by AI companies and the incentives of these companies. They found that enhancing human feedback, mechanistic interpretability, robustness, and safety evaluations are key focuses of recently published research. They also identified several topics with few or no publications, and where AI companies may have weak incentives to research the topic in the future: model organisms of misalignment, multiagent safety, and safety by design. (This report is an updated version that includes some extra papers omitted from the initial publication on September 12th.) https://lnkd.in/g_6DnqgX

Similar pages

Browse jobs