The EU AI Act Sets the Bar for Safety and Compliance
The European Parliament passed its AI legislation this week. Before the end of the year, the Act is expected to be ratified by EU member states and enacted. This is a significant milestone toward finalizing the world’s first comprehensive law on artificial intelligence.
For budding AI creators, this is a crucial moment, akin to high school students familiarizing themselves with the exam format of a prestigious college entrance test. Just as a student’s performance determines their college prospects, compliance with the new law carries significant consequences: passing ensures access to desired opportunities, cheating incurs severe penalties, and failing necessitates a retake.
The Test
The new law applies to anyone who places an AI system on the EU market.
The law’s priority is to ensure AI systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly. People, rather than automation alone, should oversee AI systems to prevent harmful outcomes. The legislation is built on a comprehensive definition of AI and an associated set of risk categories.
Each AI system is classified into a risk category: Prohibited, High, Limited, or Minimal risk, with separate rules for General-Purpose AI systems. Higher-risk systems face stricter requirements, and the highest risk level results in an outright ban. Lower-risk systems carry mainly transparency obligations, ensuring users know they are interacting with an AI system rather than a human being.
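As a rough mental model of this tiering, the sketch below maps each category to the kind of obligation described above. The `RiskTier` enum and its one-line summaries are illustrative paraphrases of the article, not the Act’s legal text.

```python
from enum import Enum

# Illustrative sketch: the article's risk tiers and the obligations it attaches to each.
class RiskTier(Enum):
    PROHIBITED = "banned from the EU market"
    HIGH = "strict requirements before deployment"
    LIMITED = "transparency obligations: users must know they face an AI"
    MINIMAL = "few additional obligations"
    GENERAL_PURPOSE = "separate disclosure duties"

for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```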
Any EU citizen can file a complaint against an AI system provider, and each EU member state will designate an authority to review complaints. For the most severe compliance breaches, AI creators face fines of up to 7% of total worldwide annual turnover or $43 million, whichever is higher.
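As a quick illustration of the “whichever is higher” rule, here is a minimal sketch of the penalty arithmetic. The 7% rate and $43 million floor come from the figures above; the `max_fine` function itself is purely illustrative.

```python
def max_fine(worldwide_turnover: float) -> float:
    """Upper bound on a severe-breach fine, per the article:
    7% of total worldwide turnover or $43 million, whichever is higher."""
    return max(0.07 * worldwide_turnover, 43_000_000)

# A company with $2B turnover: 7% ($140M) exceeds the $43M floor.
print(f"${max_fine(2_000_000_000):,.0f}")  # $140,000,000
# A smaller provider with $100M turnover: the $43M floor dominates.
print(f"${max_fine(100_000_000):,.0f}")    # $43,000,000
```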
Requirements
Legal experts and startups will create compliance scorecards for AI creators in the coming months. Stanford researchers have already evaluated foundation model providers, such as OpenAI (the maker of ChatGPT), for compliance with the EU AI Act, classifying compliance under four categories: data, compute, the model itself, and deployment.
Most foundation model providers, including OpenAI, Stability AI, Google, and Meta, fall short of the new EU AI Act. The top two reasons for non-compliance are copyright issues, where AI creators do not disclose the copyright status of training data, and the lack of risk disclosure and mitigation plans. Compliance requires disclosing all known risks and mitigation plans, and providing evidence where risks cannot be mitigated.
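To make those disclosure requirements concrete, here is a minimal checklist sketch. The check names mirror the gaps listed above; the `Disclosure` record and `compliance_gaps` function are hypothetical illustrations, not an official scorecard.

```python
from dataclasses import dataclass

@dataclass
class Disclosure:
    """Hypothetical disclosure record for one foundation model provider."""
    training_data_copyright_disclosed: bool
    known_risks_disclosed: bool
    mitigation_plans_provided: bool
    unmitigated_risks_evidenced: bool  # evidence required where risks can't be mitigated

def compliance_gaps(d: Disclosure) -> list[str]:
    """Return the disclosure items the article flags as required but often missing."""
    checks = {
        "copyright status of training data": d.training_data_copyright_disclosed,
        "known risks": d.known_risks_disclosed,
        "mitigation plans": d.mitigation_plans_provided,
        "evidence for unmitigable risks": d.unmitigated_risks_evidenced,
    }
    return [item for item, ok in checks.items() if not ok]

# Example: a provider that discloses risks but not training-data copyright status.
print(compliance_gaps(Disclosure(False, True, True, False)))
# ['copyright status of training data', 'evidence for unmitigable risks']
```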
Violations
Non-compliance by an AI system provider results in fines, with penalties scaled by the risk category of the system. Fines are smaller for SMBs and startup AI creators.
Implications
AI creators now have a regulatory North Star. More compliance scorecards and tools will become available in the next six months, making compliance easier going forward. Those aiming to commercialize in the EU must have mature, compliant AI systems. While the EU’s rollout will be gradual, early compliance offers an advantage in capturing EU market share.
Those with global ambitions who have not built in safety by default must adapt quickly, whatever the associated costs and time investments. Market leaders like Google, Meta, and Microsoft may hesitate to commercialize in the EU until their AI systems achieve compliance, which will require further investment in redesigning or fixing those systems. They must also adopt environmentally friendly practices for model creation.
The US, Canada, the UK, and other major countries will face pressure to act. They can borrow the best parts of the EU AI Act to expedite their own legislative timelines, though serious enactment is still at least two years away. The upside is that they will find more willing collaborators among AI market leaders to craft business-friendly, cost-effective regulation that still prioritizes safety.