OECD.AI’s Post

Over the last year, AI safety has occupied the minds of AI experts from all sectors worldwide. Last week, our Director, Jerry Sheehan, participated in the 3rd International Dialogue on AI Safety (IDAIS), organised by the Safe AI Forum and the Berggruen Institute. Yoshua Bengio and Stuart Russell, members of our OECD.AI Network of Experts, attended alongside other top experts in the field.

Participants reached a consensus and issued a statement calling on governments and other actors to recognise AI safety as a global public good, distinct from broader geostrategic competition. Key proposals in the statement include:

💡 Create an international body to coordinate national AI safety authorities, audit regulations, ensure minimal preparedness measures, and eventually set AI safety standards.
💡 Require developers to demonstrate that their systems do not cross agreed red lines, to preserve privacy, and to conduct pre-deployment testing and monitoring, especially in high-risk cases that might cross those lines.
💡 Verify safety claims while protecting privacy, potentially through third-party governance and peer reviews, to build global trust and reinforce international collaboration.

Read more about the event and statement on the IDAIS website 👉 https://idais.ai/

Ulrik Vestergaard Knudsen Audrey Plonk Karine Perset Luis Aranda Jamie Berryhill Lucia Russo Noah Oder John Leo Tarver ⒿⓁⓉ Rashad Abelson Angélina Gentaz Valéria Silva Bénédicte Rispal Nikolas S. Sarah Bérubé

#trustworthyai #aisafety #aipolicy #security

International Dialogues on AI Safety
https://idais.ai

Abed Khooli

Consultant in AI, Data Science, Open Data and Digital Transformation and Media Innovation.

1mo

in ".. a statement calling on governments and other actors to recognise AI safety as a global public good, distinct from broader geostrategic competition.", isn't AI at the core of "geostrategic competition"? And if int'l bodies (ex. UN system, ICC, ICJ ...) already proved incapable of enforcing even more pressing basic rights, how will such a body enforce safety standards?

Given the importance of AI safety and its recognition by international organisations, we are glad to contribute to efforts that promote best practices in AI safety and quality assurance.

Robert Kroplewski

#SAIL - Stewardship AI Lab | Interconnected Future Technology Governance, Designing & Standardisation | information technology convergence expert | solicitor kroplewski.com

1mo

Thank you for mentioning trust alongside the safety approach to AI. Safety without a trustworthiness framework narrows the chance to build trust. With both in place, we can pursue geostrategic competition and safety together.

Chris Marsden

Professor of Artificial Intelligence (AI), Technology, and the Law at Monash University

1mo

What type of body? Doesn't one already exist under the G7/G20/Bletchley process?

Karine Perset

Head, OECD.AI Policy Observatory, Working Party on AI Governance (AIGO) and the AI Network of Experts

1mo

Yael Simon

Strategist | Business Development | Philanthropic Consultant | Organisational Leadership | #AI in Philanthropy, Governance, Procurement | #AMLCTF

1mo