Fiduc-IA Corp’s Post

Fiduc-IA Corp reposted this

Francesca Rossi

IBM Fellow and AI Ethics Global Leader; AAAI President; GPAI expert and Steering Committee member; member of the Executive Committee of the IEEE AI Ethics initiative; co-chair of the WEF GFC on AI for Humanity.

Make sure you provide your opinion on risk thresholds for advanced AI systems; we need everybody to weigh in. The deadline is September 10th.

To me, compute thresholds are not a useful metric for identifying safety risks. AI safety is a context-dependent evaluation of multiple factors, not an intrinsic property of a model, and passing a compute threshold does not necessarily indicate the presence of dangerous capabilities. In fact, more compute may well help achieve higher levels of safety, and lately greater levels of performance are being achieved with smaller and smaller models.

Rather than setting thresholds (based on compute or anything else), evaluating the capabilities and, even more importantly, the limitations of AI systems is a much better indicator of issues that can turn into risks once a model is used. Work on evaluation is ongoing in both academia and industry, and there is still no single agreed way to do it. Moreover, whether a risk actually materializes depends heavily on the deployment and use scenario: even models with powerful capabilities or serious limitations can be safe to use in certain scenarios but dangerous in others.

What is your opinion? Whatever it is, you should upload it to the OECD.AI site for the public consultation (https://lnkd.in/dKDP_4za). I already uploaded mine! #ai #airisks #advancedai #computethresholds

OECD.AI


PUBLIC CONSULTATION ON RISK THRESHOLDS ⚠️ https://lnkd.in/e98Pzw-b

The OECD is joining forces with diverse stakeholders to explore potential approaches, opportunities and limitations for establishing risk thresholds for advanced AI systems. To inform this work, we are holding an open public consultation to obtain the views of all interested parties. We are interested in hearing your thoughts on the following key questions:

❓ What publications or other resources have you found helpful on AI risk thresholds?
❓ To what extent do you believe AI risk thresholds based on compute power are adequate and appropriate to mitigate risks from advanced AI systems?
❓ To what extent do you believe other AI risk thresholds would be valuable, and what are they?
❓ What strategies and approaches can governments or companies use to identify and set specific thresholds and measure real-world systems against those thresholds? What requirements should be imposed for systems that exceed any given threshold?
❓ What else should the OECD and collaborating organisations keep in mind concerning designing and/or implementing AI risk thresholds?

📅 10 SEPTEMBER DEADLINE TO PARTICIPATE https://lnkd.in/e98Pzw-b

Francesca Rossi Stuart Russell Michael Schönstein Ulrik Vestergaard Knudsen Jerry Sheehan Audrey Plonk Celine Caira Luis Aranda Jamie Berryhill Lucia Russo Noah Oder John Leo Tarver ⒿⓁⓉ Rashad Abelson Angélina Gentaz Valéria Silva Bénédicte Rispal Johannes Leon Kirnberger Eunseo Dana Choi Pablo Gomez Ayerbe Sara Fialho Esposito Nikolas S. Sarah Bérubé Guillermo H.

#airisk #aisafety #trustworthyai #oecd #risk

Seeking your views: Public consultation on risk thresholds for advanced AI systems – Deadline 10 September


Demetrius A. M.-A. Floudas

Policy Adviser | AI Governance Theorist | Geopolitics | International Legal Consultant | Attorney | Academic | .'.

1mo

I uploaded mine as well, obviously from a policy point of view. The very short summary: to be effective, the OECD policy guidelines must be clear, unambiguous, easily communicable, free of excessive jargon, based on strategic principles, consistent, proportionate, and authoritative. A Code of Practice for companies will be utterly insufficient. Risk thresholds based on compute power are useful only as part of a broader set of criteria, and additional thresholds should be primarily quantitative to ensure immediate action if triggered. A critical challenge will be safeguarding against risks inherent in low-capability AI systems operating unintentionally or lege artis, causing Unforeseen Impairment of Humanity (UIH) events. However, such risks will probably only become regulated after calamitous incidents shift public attention and prompt vociferous demands for stricter AI risk regulation.

Vincent Conitzer

Professor of CS @ Carnegie Mellon University; Professor of CS and Philosophy @ Oxford; Founder and President @ Econorithms, LLC

2mo

Of course I agree with this completely at one level -- any of us could burn through an enormous amount of compute with no danger whatsoever (other than emissions), and other metrics are more meaningful. But in another way, I think it's sensible to use compute or money spent as a useful flag: spending at that scale clearly signals that someone expects to build a very powerful, and probably widely deployed, AI system -- otherwise why spend that amount of money? That doesn't mean the system will necessarily be dangerous, so it would require further scrutiny along the lines you suggest. And ideally we would have other ways to flag other problematic AI systems too. But do you agree that compute or money spent is, in practice, one potentially useful flag?

Heather Domin, PhD

Global Leader, Responsible AI Initiatives at IBM | Associate Director, Notre Dame - IBM Tech Ethics Lab

2mo

We are aligned here, Francesca! I have added my comments as well. My view: AI risk is tied primarily to the use of the system and is context- and task-specific. However, a holistic benchmark analysis appropriate to the modality of the data and the task could be useful for indicating state-of-the-art capabilities and thus, potentially, certain AI risks. If requirements are imposed on systems that exceed a given threshold, transparency and openness regarding factors such as risk mitigations, data protections, methods disclosure, and basic model details could help stakeholders assess and manage potential risks.

Mimmo Squillace

IBM Technical Relations Executive - President of UNINFO

2mo

Francesca Rossi thanks for sharing... I fully agree with you that compute thresholds *alone* are not a useful metric. I would add that standardisation bodies are also working on risk management for AI, both at the international level (ISO/IEC 23894 was published at the beginning of 2024) and at the European level (Renaud Di Francesco leads the CEN/CENELEC project...)

Michel Van der Poorten

Business Partner Recruitment Leader @ IBM | Business AI Strategy Expert

2mo

Will certainly do so, as I fully agree with your view, Francesca Rossi! Technical metrics have almost no correlation with the risk posed, so why try to regulate on that basis?

Peter Slattery, PhD

Lead at the AI Risk Repository | MIT FutureTech

2mo

Great points. This is a very difficult question. I will aim to submit something later this week.

Altiam Kabir

AI Educator | Learn AI Easily With Your Friendly Guide | Built a 100K+ AI Community for AI Enthusiasts (AI | ChatGPT | Tech | Marketing Pro)

2mo

Goodness, the nuances of AI safety are like walking a tightrope! It's about context and application, rather than just hitting a compute number. Your take on evaluation over thresholds is spot on, Francesca Rossi.

Eugenio Sorice

Gruppo Ingegnere Guido Iorio Investigazioni, Rome | Expert Consultant in Investigative Strategies

2mo

Great food for thought

Helen Teplitskaia

Chair & Global Managing Partner at Imnex Group Inc., Founder & President, Global Alliance on Sustainability & AI (GASAI)

2mo

Dear Francesca Rossi, thank you for sharing! We will upload the opinion from Global Alliance on Sustainability & AI (GASAI).
