This is a really interesting take on California's AI bill that just passed the legislature, and one I hadn't thought much about: regulating based on a certain level of damage (i.e., after the fact, with a focus on major societal impacts) versus managing the risk to the consumer (i.e., putting guardrails in place up front for risky uses to protect individuals' rights).
The risk-based framework the EU put together for AI use cases makes a ton of sense to me as a basis for future attempts to regulate AI. Compare that to SB 1047, which, in my layperson's interpretation, essentially says "if your tech is used for really, really bad stuff, you're liable."
The former gives a clear roadmap for more positive uses of AI and regulates the uses that are sketchy from the jump. The latter could mean the bad stuff happens first, and you only get punished after the fact. And what about all the harm that might occur under the $500M mark?
Guess you convinced me, Lewis!
(Also, shoutout to my friends at MSR Communications for helping snag another great media placement! So fun to get the crew on TV)
ICYMI: VP of Legal and General Counsel Lewis Barr was on ABC7 News Bay Area on Tuesday to discuss California's #SB1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which is currently on Governor Newsom's desk awaiting his signature.
Watch his segment to get caught up on what this new piece of AI regulation might mean for businesses and consumers. And don't miss Lewis's latest blog on efforts to regulate AI risk in the EU and the US: https://lnkd.in/gUcPuyUT
#AIregulation #ResponsibleAI