Over the last year, AI safety has occupied the minds of AI experts from all sectors worldwide. Last week, our Director, Jerry Sheehan, participated in the pivotal 3rd International Dialogue on AI Safety (IDAIS), a significant event organised by the Safe AI Forum and the Berggruen Institute. Yoshua Bengio and Stuart Russell, members of our OECD.AI Network of Experts, were present alongside other top experts in the field.

Participants reached a consensus and issued a statement calling on governments and other actors to recognise AI safety as a global public good, distinct from broader geostrategic competition. Key proposals in the statement include:

💡 Create an international body to coordinate national AI safety authorities, audit regulations, ensure minimum preparedness measures, and eventually set AI safety standards.
💡 Require developers to demonstrate that their systems do not cross agreed red lines and preserve privacy, through pre-deployment testing and monitoring, especially in high-risk cases.
💡 Verify safety claims while protecting privacy, potentially through third-party governance and peer review, to build global trust and reinforce international collaboration.

Read more about the event and statement on the IDAIS website 👉 https://idais.ai/

Ulrik Vestergaard Knudsen Audrey Plonk Karine Perset Luis Aranda Jamie Berryhill Lucia Russo Noah Oder John Leo Tarver ⒿⓁⓉ Rashad Abelson Angélina Gentaz Valéria Silva Bénédicte Rispal Nikolas S. Sarah Bérubé

#trustworthyai #aisafety #aipolicy #security
Given the importance of AI safety and its recognition by international organizations, we are glad to contribute to efforts that enforce best practices in AI safety and quality assurance.
Thank you for mentioning trust alongside the safety approach to AI. Safety without a trustworthy framework narrows the chance of building trust. With both in place, we can pursue geostrategic competition and safety together.
What type of body? Doesn't one already exist under the G7/G20/Bletchley process?
Regarding "...a statement calling on governments and other actors to recognise AI safety as a global public good, distinct from broader geostrategic competition": isn't AI at the core of "geostrategic competition"? And if international bodies (e.g. the UN system, ICC, ICJ...) have already proved incapable of enforcing even more pressing basic rights, how will such a body enforce safety standards?