Some thoughts on adopting AI in your organization

I spend a lot of time learning about generative AI from various perspectives. I also spend some time talking about AI, for example with decision-makers. Today, I was prompted to write about strategies for introducing generative AI into organizations and ensuring its widespread adoption. What pitfalls are there, and what is a good strategy?

Please take what I write with an appropriate grain of salt. I have worked in a number of fields, with different titles, but I've never worked with change management. And when it comes to generative AI, well, we are all novices. Learning together is a good thing.

Allow Mistakes, Avoid the Worst Risks

My first thought on adopting generative AI has to do with experimentation, learning and risks. Did I mention that we're all novices? That means that if we want to make use of generative AI in a good way, we'll have to experiment and learn.

From a management perspective, it makes sense to look at the risks of generative AI. There are risks of leaking sensitive data, broadcasting embarrassing errors, introducing AI-induced biases, or deploying a customer-facing chatbot that makes promises your organization can't keep.

However, I think it is important to recognize that there is a scale of risks. Posting meeting notes with somewhat business-sensitive content to a chatbot is bad, but hosting a chatbot that occasionally provides harmful messages to kids is worse.

If you want to stimulate use of generative AI, you must give space to learn how to use it. Learning means that we make mistakes. Find out what the worst risks are, and make sure that everyone knows how to avoid them. But also make sure that everyone knows that it is okay to make small mistakes, as long as we learn from them.

Your Employees Are Already Using Generative AI

Studies on the use of generative AI in workplaces reveal that employees often start using AI before official guidelines are established, sometimes even in defiance of existing policies. This means that "introducing" generative AI at a workplace is the wrong framing. Adoption has already started. If you compare the workplace to a garden, there are probably seeds all over the place – and some have already grown quite a bit.

From a management perspective, it makes more sense to investigate questions like these:

  • How is AI already being used?
  • How widespread is the adoption?
  • Why are some (or many) not using AI? (Is the gap concentrated in certain parts of the organization? Is it due to uncertainty about organizational policies? Lack of access? Or just generally slow adoption?)
  • How can I create an environment where new users are helped onboard?
  • How can I create an environment where early adopters are encouraged to experiment and share experiences?

Helping People Onboard

I believe that there are some clear actions that can be taken to help people start using generative AI.

  • Provide access. This could be standard or enterprise accounts, which is quick and easy but costs a bit. Another approach is to use tools like Chatbot UI to set up a chatbot accessible only from the organization's network, for a fraction of the cost of individual subscriptions.
  • Make it clear that it is okay to use chatbots and other generative AI. For some people, AI can have an aura of being dangerous or prohibited. As noted above: when making clear that using AI is okay, also make clear which big risks must be avoided.
  • Provide a few well-chosen use cases. These should be relevant, easy to get started with, and provide clear time savings. Short screencasts are probably better than examples in written form. Examples could include turning quick notes into drafts, translating from German to English, asking for editorial feedback on a text, or getting unstuck on a problem.
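The cost argument in the first bullet can be made concrete with a rough back-of-the-envelope comparison. All figures below are illustrative assumptions, not actual vendor pricing – check current prices for your own estimate:

```python
# Rough, illustrative comparison: per-seat chatbot subscriptions versus
# one shared, API-backed chatbot (e.g. a self-hosted Chatbot UI).
# Every number here is an assumption for the sake of the sketch.

SEATS = 50                        # employees who want access
SEAT_PRICE_PER_MONTH = 20.0       # assumed per-seat subscription (USD)

# API-backed alternative: pay per token instead of per seat.
TOKENS_PER_USER_PER_MONTH = 300_000   # assumed light-to-moderate use
PRICE_PER_MILLION_TOKENS = 5.0        # assumed blended input/output price

def subscription_cost(seats: int) -> float:
    """Monthly cost if every user gets an individual subscription."""
    return seats * SEAT_PRICE_PER_MONTH

def api_cost(seats: int) -> float:
    """Monthly cost if all users share one API-backed chatbot."""
    total_tokens = seats * TOKENS_PER_USER_PER_MONTH
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

if __name__ == "__main__":
    print(f"Subscriptions: ${subscription_cost(SEATS):,.2f}/month")
    print(f"Shared API:    ${api_cost(SEATS):,.2f}/month")
```

Under these assumptions the shared setup costs $75 per month against $1,000 for individual seats – the gap shrinks with heavier use, since API costs scale with tokens rather than headcount.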

Support Power Users

Power users – people who actively experiment and have built a lot of knowledge – can play an important role in getting widespread adoption of AI in the organization. These are the people who have the best chance of finding new use cases, both simple and complex, and they could also be a formal or informal support for others.

Power users are also the ones trying out new tools, keeping track of AI development, and in general connecting technology with field expertise relevant to the organization. Providing support for this, without letting power users become detached from the rest of the organization, could provide a lot of value in the long run.

How to support power users probably depends a lot on the organization and the individuals – I am no expert here. Simply showing appreciation and initiating an informal group for sharing experiences could go a long way in some cases. In my experience, power users are motivated by their own will to learn and explore, so building on that is probably a good idea.

In Short

  • Encourage learning and experimenting, but make sure that you avoid the biggest risks. Not all risks are equal.
  • Don't "introduce" AI; instead, help spread the use of AI that is already present.
  • Help new users with easy access and use cases with easy wins.
  • Support power users by giving them more opportunities to learn and explore.

Feel free to share your own thoughts and experiences. We're all novices, and learning together is a good thing.

Johan Falk

AI Strategist | Expert in AI Policy, Regulation, and Risk Management | Former Head of AI and Education at Skolverket


Here is a podcast episode (in Swedish) on the same theme, but with a clearly broader angle. Worth a listen! https://pca.st/episode/3a7b8bb5-5f36-44bf-88a6-c48537a93d2f Ping Jörgen Larsson


Jörgen Larsson

Change Management in the Era of AI and Data Governance @ Jaxbird AB | Leadership, Operations Development


That's a good angle on the challenge. It definitely broadens the picture. But what do you think about automation of routine work and simpler decision-making that might instead free up employees for more important tasks?
