EU moves first on AI regulations

Earlier this month, leaders within the European Union reached agreement on a years-long effort to regulate artificial intelligence. Begun in 2021, the EU Artificial Intelligence Act now has the consensus needed for the enactment and future enforcement of a set of regulations that will set the rules of the road for AI in the Old World. This is the first wide-scale international set of requirements for the new technology, and its approach differs from that taken by US policymakers.

Broadly, the EU AI Act relies on a risk-based system.

AI use cases are classified according to the potential risks they pose to health, safety, and fundamental rights, and then sorted into categories.

  • Unacceptable Risk. AI systems falling into this group are banned. They include technology that presents a significant risk of exploiting people based on age, gender, race or other important characteristics. Further, social scoring systems are not allowed.
  • High Risk. Here, AI developers must demonstrate that their systems do not threaten health, safety and fundamental rights by way of risk management, human oversight, transparency, registration in public databases, recordkeeping and data governance, among other requirements. The types of technology implicated here include those involving biometrics, education and work, critical infrastructure, access to democratic institutions and others. 
  • Limited and Minimal Risk. This category covers situations where users interact with AI at a superficial level, such as asking a chatbot a question on a website or viewing AI-generated content on a streaming service. Transparency is the standard here: users must be notified when they engage with AI.

Generative AI occupies a nebulous region within the AI Act. It was not explicitly considered in early discussions of the Act, though policymakers have noted it will be addressed in future implementing acts and is likely to be held to standards similar to those for the high-risk category.

What happens if AI developers fail to comply with the regulations? They can expect hefty fines of €20 million or four percent of global revenue, whichever is higher.
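The "whichever is higher" structure means the penalty scales with company size but never drops below the flat floor. A quick sketch, using the figures cited above:

```python
def max_fine_eur(global_revenue_eur: float) -> float:
    """Penalty is the greater of a flat EUR 20 million
    or 4% of global revenue."""
    return max(20_000_000.0, 0.04 * global_revenue_eur)


# A company with EUR 1 billion in global revenue:
# 4% of revenue (EUR 40M) exceeds the floor, so that applies.
print(max_fine_eur(1_000_000_000))  # 40000000.0

# A small firm with EUR 100 million in revenue:
# 4% is only EUR 4M, so the EUR 20M floor applies instead.
print(max_fine_eur(100_000_000))  # 20000000.0
```

So for any firm with more than €500 million in global revenue, the percentage term dominates.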

Despite its “pioneer” status as the first substantial set of international AI regulations, the Act will not be fully implemented until 2025, a delay that gives member states time to craft implementing legislation in their home jurisdictions.

If the EU is the “hare” in AI regulation, the US is taking the pace of the “tortoise.” American policymakers have spent much of the last year bringing themselves up to speed on AI: its potential, its challenges, and its possible futures. The Senate is still holding AI Insight Forums with industry experts to study the topic. The President seems to be moving a bit faster with a suite of AI executive orders, but those tend to favor study-and-prepare over direct action. Clearly, the American approach is to build a foundation of understanding before moving. And moving second may prove advantageous, as stateside regulators can glean lessons from across the pond.

Thoughtful regulation tends to move at the speed of understanding. Americans can hope that the tortoise repeats his performance in the forthcoming marathon of AI regulation.
