Lord Holmes brought forward a Private Member's Bill on regulating Artificial Intelligence, which gave the UK House of Lords the chance to debate the issue. As you will see from my speech, I am strongly in favour of "tech regulation" in the sense of holding platforms and AI companies to account. The main issues with AI are of course safety and transparency, but we must not lose the space for innovation, and we must ensure the UK remains a key player. I think we need to take stock of how many organisations have a stake in the AI debate, and set out a clear narrative of how we oversee AI at the moment. We don't need to rush into regulation.
Let me say from the outset: I'm a regulator, and I'm in favour of regulation. I strongly supported the Online Safety Act, despite the narrow byways and cul-de-sacs it ended up in, because I believe that platforms and technology need to be accountable in some way, and I do not support people who say simply that the job is too big and shouldn't be attempted. We must attempt it. What I always say about the Online Safety Act is that the Act itself, in my view, is irrelevant. What is relevant is Ofcom, and the staff and expertise it now has, which I think will make it one of the world's leaders in this space.

We talk about AI now because AI has come to the forefront of consumers' minds through applications such as ChatGPT. But of course large language models and the use of AI have been around for many, many years. It is quite right, though, that as AI becomes ubiquitous we consider how we could or should regulate it. And indeed, with the approaching elections, not just here in the UK but in the United States and elsewhere around the world, we will see the abuse of artificial intelligence, and many people will wring their hands about how on earth to cope with the plethora of disinformation that is likely to emerge.

I am often asked at technology events, which I attend assiduously, what the government's policy on artificial intelligence is, and to a certain extent I have to make it up. But broadly speaking, I think I've got it right. On the one hand, there is an important focus on safety: making artificial intelligence as safe as possible for consumers, which in itself begs the question of whether that is possible. On the other, we must ensure that the UK remains an enormous, wonderful place for AI innovation. We are rightly proud that DeepMind, although owned by Google, wishes to stay in the UK, and indeed the Chancellor himself bigged up Mustafa Suleyman in a tweet yesterday for taking on the role of leading AI at Microsoft. So it is true that the UK remains a second-tier nation in AI, but the leading second-tier nation after China and the US.

The question now is what we mean by regulation. I do not necessarily believe now is the moment to create an AI safety regulator. I was interested to hear the intervention of Lord Thomas earlier on, referring to the 19th century; I would refer him to the late 20th and early 21st centuries. The Internet itself has long been self-regulated, at least in terms of the technology and the global standards that exist. So it is possible for AI to proceed largely on a basis of self-regulation, and I think the government's approach to regulation is the right one. We have, for example, the Digital Regulation Cooperation Forum, which brings together all the regulators that have skin in the game when it comes to digital, whether obviously, like Ofcom, or indirectly, like the FCA. My specific question for the Minister would be to bring the House up to date on the work of that forum and how he sees it developing.

I was surprised, though, by the creation of the AI Safety Institute as a standalone body with such generous funding. And it does seem to me that the government doesn't need legislation to examine the plethora of bodies that have sprung up over the last 10 or 15 years, many of which do excellent work, but whose responsibilities, where they begin and where they end, are confusing.
The Ada Lovelace Institute, the Alan Turing Institute, the AI Safety Institute, Ofcom, DSIT: where does all this fit together into a clear narrative? That, for me, is absolutely the essential task that the government must now undertake. I would also pick up on one particular remark that Baroness Stowell made in her speech: while we look at the flashy stuff, if you like (disinformation, copyright and so on), she is quite right to say that we also have to look at the picks and shovels. As AI becomes more prevalent, and as the UK seeks to maintain its lead, boring but absolutely essential things like power networks for data centres are going to be needed, and that must be part of the government's task as well.
May I ask that you also look to debate the job-loss implications? This serious social and economic consideration is being kicked down the road. The growth brought by AI is likely to be distributed to shareholder profits, especially at tech companies. Both job losses and lower-paid work are significant possibilities. The Goldman projections on this suggest up to 800m jobs globally could be lost to AI by 2030; that is the working population of China. The argument for new forms of taxation on AI for the jobs it replaces has been made by the likes of Bill Gates, and it merits significant evaluation. If you want more on this and other linked AI considerations, I'm happy to oblige.
AI is indeed a very complex and important issue, and regulating it can be likened to regulating the internet, given its pervasive nature and potential global impact. Just as with the internet, there are many challenges in regulating AI effectively while still fostering innovation and growth, so it is essential to strike a balance between the two. Overly burdensome regulations could stifle progress and hinder the development of beneficial AI applications; regulatory frameworks should therefore be flexible enough to adapt to evolving technologies while still providing sufficient safeguards for individuals and society.
In addition to this, the government also needs to look into monopolisation and regulation of the data centres and cloud infrastructure on which AI is processed. Otherwise, it's like trying to regulate the sale of bullets while guns are being handed out for free.
Chair of charities/digital companies since 1995; former Labour MP; 4 national digital awards; Worshipful Company of IT; Runner-Up, Charity Chair of the Year 2016; Chelsea Arts Club; Founder, Oxford Internet Institute.
We do need a regulator before the usual suspects outthink us. We need to break up Ofcom. We need a trio: a PM, a Chancellor, and, alongside and above all departments, a second Chancellor for Digital.
Business adviser, broadcaster, speaker & Member of the House of Lords; Trustee at Tate; Chair, UKASEAN Business Council
This is a really interesting take on California's AI bill that just passed the legislature, and one I hadn't thought much about: regulating based on a certain level of damage (i.e., after the fact, with a focus on major societal impacts) vs focusing on managing the risk to the consumer (i.e., putting guardrails in place for risky uses to protect individuals' rights).
The risk framework that the EU put together for AI use cases makes a ton of sense to me as a basis for future attempts to regulate AI. Versus, in my layperson's interpretation of the SB 1047 approach, saying "if your tech is used for really really bad stuff, you're liable."
The former gives a clear roadmap for a more positive use of AI and regulates the uses that are sketchy from the jump. The latter could mean the bad stuff happens, and then you get punished after the fact. And what about all the harm that might occur under the $500M mark?
Guess you convinced me, Lewis!
(Also, shoutout to my friends at MSR Communications for helping snag another great media placement! So fun to get the crew on TV)
ICYMI: VP of Legal and General Counsel Lewis Barr was on ABC7 News Bay Area Tuesday to discuss California #SB1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which is currently on Governor Newsom's desk, waiting for a signature.
Watch his segment to get caught up on what this new piece of AI regulation might mean for businesses and consumers. And don't miss Lewis's latest blog on efforts to regulate AI risk in the EU and the US: https://lnkd.in/gUcPuyUT #AIregulation #ResponsibleAI
🚨 Exciting update from the UK! A new Private Member's Bill on Public Authority Algorithmic and Automated Decision-Making Systems is in the works. If passed, this Bill could be a game-changer in aligning with the UK's commitments under the new Council of Europe AI treaty.
Key requirements include:
➡ Ex-ante impact and bias assessments
➡ Independent validation of system efficacy
➡ Transparency of assessments and logging
➡ Mandatory auditability and traceability (see the sketch after this list)
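To make these requirements a little more concrete, here is a minimal, hypothetical Python sketch of what an ex-ante bias check and a traceable decision log could look like in practice. The field names and the demographic-parity metric are illustrative assumptions on my part, not anything prescribed by the Bill.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class DecisionRecord:
    """One traceable, auditable log entry for an automated decision."""
    system_id: str        # which algorithmic system produced the decision
    model_version: str    # exact model version, so decisions can be traced back
    input_summary: dict   # the inputs the decision was based on
    outcome: str          # the decision itself
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def demographic_parity_gap(outcomes: list[tuple[str, bool]]) -> float:
    """Crude ex-ante bias check: the gap in positive-outcome rates
    across the groups present in (group, got_positive_outcome) pairs."""
    groups: dict[str, list[bool]] = {}
    for group, positive in outcomes:
        groups.setdefault(group, []).append(positive)
    rates = [sum(v) / len(v) for v in groups.values()]
    return max(rates) - min(rates)

# A gap near 0 suggests parity; a large gap would flag the system for review.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(f"parity gap: {demographic_parity_gap(sample):.2f}")  # 0.33

record = DecisionRecord(
    system_id="benefits-eligibility",   # hypothetical system name
    model_version="2024.03.1",
    input_summary={"income_band": "low", "household_size": 3},
    outcome="approved",
)
print(record)
```

A real assessment would of course use established fairness toolkits and legal review; the point is only that "auditability and traceability" ultimately cash out as structured, logged records like this.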
Stay up to date on the latest developments in #AI regulation around the world with our global Holistic AI Tracker. https://lnkd.in/dB5tix94
Don't miss out! #Regulation #EthicalAI
This goes for all the people touting the benefits of AI without fully grasping the costs associated with its implementation and the ongoing support and maintenance required to mitigate hallucinations. The majority of organisations would struggle to produce unbiased data to train models, let alone have the ongoing budget to keep iterating to remove bias.
The European Union's Artificial Intelligence Act (EU AI Act) has a transformative impact not only on the European Union but on the world.
Our latest blog article explores the EU AI Act, its complexities, and how New Zealand measures up.
Curious to learn more? Dive deeper into the discussion by visiting the blog: https://lnkd.in/giraisM9 #EUAIAct #AIRegulation #TechEthics #ArtificialIntelligence #Regulation #Compliance
🔍 The EU's new AI Act is here! 📜 UK firms should note that this landmark legislation aims to ensure AI safety and trustworthiness, impacting any business using AI with EU consumers.
Key takeaways:
1) Different risk categories for #AI systems ⚖️ (see the sketch after this list)
2) Transparency and #compliance requirements 📑
3) Broad extraterritorial impact 🌍
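For illustration only, here is a tiny Python sketch of what a first-pass risk-tier triage inspired by the Act's categories might look like. The keyword lists are my own toy assumptions, not the Act's actual annexes; real classification turns on the legal text, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations (conformity assessment, logging, oversight)"
    LIMITED = "transparency duties (e.g. disclose AI interaction)"
    MINIMAL = "no new obligations"

# Hypothetical keyword triage, loosely inspired by the Act's risk categories.
PROHIBITED = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"recruitment", "credit scoring", "medical device", "law enforcement"}
LIMITED_RISK = {"chatbot", "deepfake"}

def triage(use_case: str) -> RiskTier:
    """Map a plain-English use-case description to a rough risk tier."""
    text = use_case.lower()
    if any(k in text for k in PROHIBITED):
        return RiskTier.UNACCEPTABLE
    if any(k in text for k in HIGH_RISK):
        return RiskTier.HIGH
    if any(k in text for k in LIMITED_RISK):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

for case in ("CV screening for recruitment",
             "customer service chatbot",
             "spam filter"):
    print(f"{case}: {triage(case).name} -> {triage(case).value}")
```

Even a rough inventory like this helps with the auditing point above: you cannot assess your obligations until you know which tier each of your systems plausibly falls into.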
Stay ahead by #auditing your AI systems and keeping up with #regulatory changes. Let's embrace innovation while adhering to #ethical standards! 💡🔒
Read more below.
#iwork4dell
The EU's Code of Practice for General-Purpose AI marks a key step in balancing innovation and regulation, with leading academics shaping its framework. However, tensions between AI providers and other stakeholders over data transparency and accountability highlight the challenges ahead for achieving consensus. Read the summary from the Digital Watch Observatory.
Interestingly, the EU is making rules for a game in which it is not even involved.
With today's decision by the EU Parliament to implement the AI Act, Europe will become an even less attractive location for the development of artificial intelligence.
#CongratulationEurope
The UK’s Financial Conduct Authority (FCA) has recently published an update on its approach to artificial intelligence (AI). Key points include:
- Continuing to further the FCA’s understanding of how AI is deployed in UK financial markets.
- Building on the existing UK regulatory framework that covers firms’ use of technology, including AI.
- Continuing to collaborate closely with the BoE, PSR, and with other regulators through its Digital Regulation Cooperation Forum (DRCF) membership.
- Prioritising its international engagement on AI in line with recent developments such as the AI Safety Summit.
- Working with DRCF member regulators to deliver the pilot AI and Digital Hubs.
- Assessing opportunities to pilot new types of regulatory engagement and environments in which the design and impact of AI on consumers and markets can be tested and assessed without harm materialising.
- Investing further into AI technologies in order for the FCA itself to proactively monitor markets, including for market surveillance purposes.
For more AI news please go to https://lnkd.in/etb5N9dU #AI #regulation #future