Christopher Burgess
Contributing Writer

Clear as mud: global rules around AI are starting to take shape but remain a little fuzzy

Opinion
Sep 23, 2024 | 7 mins
CSO and CISO | IT Leadership | Regulation

While the UN, the EU, and the US and its individual states all push to place limits and restrictions on AI, what’s emerging is a patchwork quilt that security leaders need to stay on top of.


The state of AI legislation, rules, and regulations around the world is clear as mud.

That’s not surprising, given that dozens, if not hundreds, of governments are all trying to find their footing in the fastest-growing technological advancement around. The United States is pushing for an international consensus on the rules of the road for AI, while its individual states push for their own solutions. Meanwhile, the EU is moving ahead with its own legislation on “worldwide rules” concerning AI.

The global patchwork of regulation makes it essential for those working in cybersecurity, especially CSOs and CISOs, to ensure that their general counsel (or outside counsel) signs off on how and where AI is being used within the enterprise and that its use meets the requirements of every geographic area in which they do business.

If they don’t, there is a real risk of running afoul of these efforts to govern AI, which remain disconnected and disjointed in spots. After all, when what’s allowed in California may differ from what works in Colorado or Connecticut, how can we be expected to know what’s acceptable in Copenhagen or Canberra?

Who’s doing what to regulate AI on a global scale?

On the international stage, the United States has been advancing the case for heightened security surrounding the use of AI while ensuring the technology is available to all nations, not just the technologically advanced ones.

The US effort has been led by Secretary of State Antony Blinken, as he works to transform American foreign policy with respect to AI. Blinken has consistently highlighted throughout 2024 how technology is “at the heart of our competition with geopolitical rivals.”

Earlier in the year, at the United Nations General Assembly, the US delegation, led by Ambassador Linda Thomas-Greenfield, pushed for and built a coalition of more than 50 nations in support of ensuring equal access to AI. The result was the formulation of the first UN resolution on AI, which highlights “the urgency of achieving global consensus on safe, secure and trustworthy artificial intelligence systems.”

The EU’s Artificial Intelligence Act, while not all-encompassing, is also a good start. The highlight of the Act is its categorization of AI systems, sorting them into high- and low-risk tiers and banning from the EU those focused on cognitive behavioral manipulation or social scoring (the hallmark of China’s effort to monitor its own population).

In addition, the Act prohibits “predictive policing based on profiling and systems that use biometric data to categorize people according to specific categories such as race, religion, or sexual orientation.”

There are still some areas of fuzziness in AI rules

There is some subjectivity within the EU’s effort, as “high risk” is defined as the ability to cause harm to society, a standard that could receive wildly different interpretations. That said, the effort comes from the right place, which is to protect and ensure the “fundamental rights of EU citizens.”

The EU Council views the act as designed to stimulate investment and innovation, while at the same time, carving out exceptions for “military and defense as well as research purposes.”

This perspective is not much different from the one the industry offered the US Senate in 2022 during discussions on the challenges of cybersecurity in the age of AI. At that hearing, two years ago, the Senate was urged not to stifle innovation, as adversaries and economic competitors in other nations were not going to slow theirs.

What we need is a strategy that not only protects against potential misuse but also promotes the development of a stronger and more sustainable global AI ecosystem, according to Mike Price, chief technology officer at external cybersecurity provider ZeroFox.

When I asked Price for his thoughts on the US position around global AI that many nations should work together to ensure safety without hampering evolution, he agreed that “security considerations must remain at the forefront of these discussions to ensure that widespread AI adoption does not inadvertently amplify cybersecurity risks.”

Regulations are all fine and good, but they fall flat without education

But Price brought up a very important point: what good is all of this regulatory effort without an accompanying push for more education around AI?

“If AI capabilities become a linchpin for accelerating our SDGs (sustainable development goals), then the stakes are elevated to ensure that AI is used safely and securely,” Price told me.

“The truth remains that as it stands now, AI (particularly generative AI) capabilities favor bad actors, leaving the uneducated on AI vulnerable to attack.”

“For this reason, enhancing AI awareness and education is especially critical in sectors experiencing rapid AI deployment, where security measures may not be sufficiently robust or are at a bare minimum,” Price said. “In short, as AI grows in use and becomes more relied upon as a societal driver, so too must efforts in AI education and security to ensure long-term safety.”

The fragmented nature of AI regulation in the US

As is the case with privacy, the lack of federal governance is driving individual states to write their own regulations and put guardrails in place. California’s legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which is now on the governor’s desk for signature or veto.

The bill mandates safety testing, requires the ability to shut off an AI model if it goes wonky, and empowers the attorney general to take companies to task if their AI creates a threat (taking over critical infrastructure, for example).

California, however, is not alone: 33 other states have proposed or enacted legislation setting frameworks for AI usage. This highlights the very real possibility that the US will end up with 50 different sets of rules and regulations concerning AI, which may or may not fit into the framework the State Department is proposing for a global strategy.

Turning back to Blinken, it is worth highlighting that he calls for solidarity in the governance of AI, not sovereignty. Furthermore, in June 2024, with transparency seemingly at the forefront, Blinken publicly shared how AI is being used to enhance the US State Department’s global diplomatic mission.

During that session he noted that AI training is mandatory for State Department employees, and that system testing and reviews of the queries department employees generate are among the security tactics in play.

This is the correct perspective. Training should be mandated for everyone using tools that have AI components, especially cybersecurity tools. On a more macro level, no one nation owns AI, yet every country will benefit from its power and could be challenged by its misuse.

Christopher Burgess
Contributing Writer

Christopher Burgess is a writer, speaker and commentator on security issues. He is a former senior security advisor to Cisco and has also been a CEO/COO with various startups in the data and security spaces. He served 30+ years within the CIA, which awarded him the Distinguished Career Intelligence Medal upon his retirement. Cisco gave him a stetson and a bottle of single-barrel Jack upon his retirement. Christopher co-authored the book “Secrets Stolen, Fortunes Lost: Preventing Intellectual Property Theft and Economic Espionage in the 21st Century.” He also founded the non-profit Senior Online Safety.
