WHY AI NEEDS REGULATION

I read a Brookings Institution article during the week on the three challenges of AI regulation. It got me thinking again about the need for comprehensive regulation of emerging technology, and the sooner the better. It's a conversation I have had with tech enthusiasts who question the ethical considerations surrounding how quickly the world is evolving and whether the law will be able to keep up with the pace of such development.

The CEO of OpenAI, Sam Altman; the President of Microsoft, Brad Smith; and the CEO of Google, Sundar Pichai, all subscribe to the idea of regulatory standards to guide the creation, use, and implementation of artificial intelligence. The creation of a digital regulatory agency seems imminent, for the protection of humanity's technological evolution.

But why? Why does AI need regulation? 


In the words of Altman, there is a need for “a new agency that licenses any effort above a certain scale of capabilities and could take that license away and ensure compliance with safety standards”.

I agree with Altman's submission. The development of AI raises many questions that the law needs to answer through regulation. It is now a matter of when, not if. But before then, let's examine the reasons why artificial intelligence needs regulation:

1. Ethical considerations

Thanks to autonomous AI agents, AI systems are developing to the point where they can make decisions with significant ethical implications across several sectors, including healthcare, financial markets, operating systems and device use, justice, and autonomous vehicles.

Regulations will help ensure that AI is developed and used in ways that align with ethical principles and societal values. They can prevent the creation of biased or discriminatory AI systems, protect individual privacy, and establish guidelines for fair and accountable AI practices.

2. Safety and reliability

Machine learning means AI can learn from the vast amounts of data fed into its systems and make predictions or decisions based on that learning. However, we cannot rule out errors, bias, and unexpected behaviors arising from the use and implementation of AI.

Regulations can set standards for safety and reliability to minimize risks associated with AI failures. They can mandate testing, validation, and transparency requirements to ensure that AI systems perform as intended and do not harm users or society at large. Interestingly, Sundar Pichai recently announced an agreement between Google and the European Union (EU) to develop an “AI Pact” of voluntary behavioral standards before implementing the EU’s AI Act.

3. Data protection and privacy

AI often relies on large amounts of data to train models and make accurate predictions. Regulations can establish rules for data protection and privacy to safeguard individuals' personal information.

These rules may include obtaining explicit consent for data collection, ensuring secure storage and handling of data, and limiting the use of sensitive information. Compliance with these regulations can help build trust in AI systems and mitigate the potential misuse of personal data.

4. Economic and societal impact

AI has the potential to disrupt industries, reshape job markets, and impact societal structures. A recent Goldman Sachs report notes that artificial intelligence could replace the equivalent of 300 million full-time jobs, automating roughly a quarter of work tasks in the US and Europe. But it may also create new jobs and drive a productivity boom, eventually increasing the total annual value of goods and services produced globally by 7%.

Regulations can play a crucial role in managing these impacts and ensuring that AI benefits society as a whole. They can address issues such as job displacement by AI, equitable distribution of AI benefits, and fostering innovation while avoiding undue concentration of power. Regulation can also promote competition and prevent monopolistic practices in the AI industry.

Recommendation on Developing AI Regulation(s)

AI is a global technology, and it requires global discussion in what we now call a 'Global Village'. Regional efforts are laudable, but as mankind looks to consolidate efforts on technology, regulation should be built on international cooperation and coordination. Tech giants must be included in the development of effective policies or laws to regulate AI, but their involvement should be approached with caution.

For instance, speaking about the European Union's pending AI regulation, Altman said, “We will try to comply, but if we can't comply, we will cease operating [in Europe].”

Harmonized regulations can facilitate the exchange of AI technologies and promote interoperability. They can also address concerns related to AI governance, standards, and norms at the global level, reducing the risk of divergent approaches that may hinder collaboration and create barriers to the development and deployment of AI systems.

In Conclusion

AI regulation is necessary to ensure ethical use, promote safety and reliability, protect data and privacy, manage economic and societal impacts, and foster international cooperation. It aims to strike a balance between harnessing the benefits of AI and addressing the potential risks and challenges associated with its deployment.


Write to me by replying to this newsletter, or send me a message on LinkedIn, if you have any opinions to share or questions to ask.

Join the 500+ persons enjoying this newsletter to receive weekly insight into the world of Web3, AI, Blockchain, and every irregular tech defining the world.

See you soon!





