What is AI sovereignty? And why should it be the highest priority?

Definition of Enterprise AI sovereignty

a.     Freedom to govern and control one’s own enterprise AI (EAI) systems and data

b.     Autonomous ability to craft and execute AI strategy to survive and thrive

c.     Freedom from the negative influences and strategic conflicts of vendors

Concept of sovereignty

The concept of regional and tribal sovereignty dates back at least several thousand years. In the modern era, national sovereignty was affirmed in the United Nations' Declaration on Principles of International Law, which states that no state or international organization may intervene in matters that fall within the domestic jurisdiction of another state. The concept of business sovereignty or technical sovereignty is similar.

We’ve recently heard quite a bit about AI sovereignty for nations, particularly from Jensen Huang, CEO of Nvidia. Jensen talks about the need for nations to develop their own supercomputing infrastructure so they can develop their own industries, presumably to avoid becoming dependent on others. The combination of cloud providers and their LLM firm partners is obviously the greatest risk to AI sovereignty.

Overview of EAI sovereignty

Full authority should rest with the board of directors, the chief executive, and/or their assigns to create, change, and adapt the EAI strategy as deemed necessary, including configuration, applications, data sources, hosting, components, and personnel, so they can effectively manage the enterprise and execute the mission of the organization.

Bylaws and organizational structures vary. In many companies the CEO leads EAI with the support of the board; in others the CIO or CAIO leads; and some have a committee that reports to the CEO and the board. Many companies have subsidiaries that may be partially integrated with the parent company, which also requires integration of the EAI strategy.

Business leaders, consumers, and regulators are facing an unprecedented threat to sovereignty from big tech companies and their LLM partners. I’ve chronicled the costs and risks previously in this newsletter (see my paper and video on SPEAR AI systems from January 2024). In bulleted form, those costs and risks relating to sovereignty include, but are not limited to, the following:

  • The combination of web-scale data scraping and the interactive nature of LLM chatbots represents the largest form of data intelligence collection in history. Every academic paper, multimedia post, podcast, blog, press release, video interview, product description, address, and email—every piece of digital information produced by individuals or organizations and shared on the Internet—is likely to be included in the data store of current LLM chatbots. That digital content is at risk of being recompiled in interactive format and presented to any inquirer, including state actors, competitive intelligence professionals, attorneys, private investigators, regulators, and anyone else.
  • Mass transfer of knowledge capital. The data scraped and licensed by LLM bots includes the majority of knowledge capital that has been published and can be accessed via the Internet: essentially the sum of published knowledge bases, intellectual property, and the recorded know-how of knowledge workers, which represents the majority of value in the knowledge economy. As LLM bots improve, they will be able to glean value from their data stores, recompile it in countless combinations, and create products from that knowledge base that compete directly with the authentic sources of the knowledge, or make it available to others. The cost of creating the combined published knowledge bases runs to tens of trillions of dollars. So far, LLM bots are exploiting them for free or nearly free.
  • Ultra-consolidation of markets, wealth, and the economy. The goal of AGI (artificial general intelligence) is by definition generalized knowledge on all topics, as deep as the data will allow. The scale necessary to collect and train on the data and make it available for interactive prompts to anyone is currently believed to be possible only for a few big tech cloud arms, including AWS, GCP, Azure, IBM, Oracle, Apple, and Meta in the U.S., and Alibaba, Huawei, Baidu, Tencent, and Kingsoft Cloud in China.

Much more efficient methods are in R&D and will mature rapidly, but in the interim, if the courts and Congress fail to intervene and stop what I and many others believe is an illegal transfer of knowledge capital, it’s difficult to overstate the likely damage to the economy. A collapse of the knowledge economy seems highly probable, which would include an extinction event in knowledge-based and creative industries, starting with publishing and entertainment in the near term, followed in the mid term by pharmaceuticals and large swaths of healthcare and education. All knowledge-based industries would be at risk.

Such a scenario for the modern economy would be comparable to the Great Depression, and perhaps worse.

Sovereignty ≠ self-sufficiency

Sovereignty ultimately requires competitiveness, and competitiveness requires many pieces of an elegant puzzle that should be customized for each organization and individual, as evidenced by our KOS. Nvidia and others promoting AI sovereignty are also promoting their own products and systems, presumably because they perceive an advantage over others, so enterprise leaders need to be careful about NIH (Not Invented Here) syndrome in AI. Attempting to go it alone, or becoming dependent on one or two vendors, is very unwise.

There is a false impression in some cultures, especially at very large companies, that everything beyond chips and vast cloud infrastructure can be done in-house. For example, the chairman of an AI committee at a Fortune 50 company told me, after months of discussions, that they had decided not to work with any external EAI firms. They were rightfully concerned about sovereignty and developing internal resources, but I think they were also concerned about turf protection, which I passed on to their CEO, who was my main contact.

The company has invested billions of dollars in AI, has a lot of talent, and has achieved some success in important niche products, but it is not the leader in AI it claims to be. It’s the market power of the company and its strengths in business that enable the vast spend on AI, not the other way around.

No country or business that comes to mind is fully self-sustaining. The closest examples are nation-states like North Korea, which is cut off from most of the world and suffers frequent famines despite assistance from China, or something like subsistence farming. Both of these models result in poverty. Many empires have fallen due to failure to secure competitive technology, as have countless businesses during my 40+ year career. The challenge is to establish a combination of vendor relationships that optimizes opportunities while mitigating risks for a specific company, or even for individuals.

The past few years have painfully reinforced the lesson of how important trustworthy and secure supply chains are, most notably Europe's over-reliance on energy from Russia and the U.S.'s reliance on China for much of its supply chain. However, the lesson from those experiences (and from big tech dominance) shouldn't translate into attempting to do everything in-house. Internal conflicts can be as problematic as external ones. Specialty talent is key in AI, and no single company has a monopoly on talent. Some vendors are far ahead of everyone else in important areas, and they include quite a few small companies (see the previous edition of this newsletter, “Wisdom is all you need”).

Our approach to AI sovereignty at KYield

We took a completely different approach than most to AI systems development at KYield. Beginning with a theorem on yield management of knowledge in 1997, developed while operating the leading learning network at the time (GWIN, popular with thought leaders), we’ve always placed the interests of humans and organizations, as in customers, as our top priority, not the strategic desires of a few big tech companies. All of the leading LLM firms have been funded by strategic partners in big tech, and they appear to be reliant on big tech's vast cloud computing infrastructure. That dependency translates to control.

In contrast, our human-centered approach in the KOS required an end-to-end, data-centric architecture, system-wide rules-based governance, and strong security. We’ve embraced safety-critical engineering and the protection of intellectual property and knowledge work as core principles (see our 15 EAI principles). Since the KOS is focused on high-quality data rather than scale, as LLMs are, it’s far more accurate, and it is tailored to each enterprise by the people in the organization, and by each individual for their digital assistant (DANA).

The combination of the KOS (EAI OS) and DANA (DA) provides functionality that LLM bots can’t perform, including:

1.     Rules-based governance across the entire human enterprise

2.     Multiple types of security (embedded from inception)

3.     Accurate, precision data management, verified from the source

4.     Prescient search tailored to each entity

5.     Data valves to manage quality and quantity

6.     Prevention of crises—large and small

7.     Secure knowledge networks complete with tailored graphs

8.     Personalized learning for every individual

9.     Personalized analytics to help each individual achieve goals

10.  Generative AI trained on secure data owned or licensed by the organization

As you can see, the KOS protects and strengthens enterprise sovereignty whereas DANA empowers and enhances individual sovereignty. All data is owned and controlled by the customer at all times.

Bottom line: AI sovereignty is absolutely critical, and the future survival of the organization may very well depend on it.

Excellent article. I, too, am concerned about dominance and control. And although I am all for innovation freedom, I still can't yet get behind complete carte blanche with AI sovereignty. Why? Dominance and control... but I want to learn more.

Mark Montgomery

Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

10mo

I thought I would share LinkedIn's bot response to the linked question premium subscribers see under the post: "How does AI sovereignty impact business?" Response below --> AI sovereignty impacts businesses by ensuring they can autonomously craft and execute AI strategies, free from external influences. It emphasizes the importance of having full authority over AI systems and data governance, crucial for thriving in a competitive landscape. This autonomy helps mitigate risks associated with dependency on external cloud providers and partners, safeguarding business and technical sovereignty. ~~~~~ MM: pretty good, apparently gleaned from multiple sources, including my newsletter and Bing's sources.

Charles Barber

Collaboration specialist at Eva Mechler Master Furniture Maker

10mo

Mark Montgomery, as well as individual copyright infringement (art, design, craft), which I believe is happening already; should we be concerned about the prospect of AI killing originality, and should we be concerned about AI copying and reforming AI (while we end up with no flavour except vanilla)?

CHESTER SWANSON SR.

Realtor Associate @ Next Trend Realty LLC | HAR REALTOR, IRS Tax Preparer

10mo

Thanks for sharing.
