AI has a trust issue
Illustrations by Yodai Yasunaga

Let's face it: AI, and generative AI in particular, is everywhere these days. From LinkedIn caption suggestions to the friendly chatbots answering your customer service questions, AI is silently shaping our daily lives, and soon enough it will be built into every personal device (see Apple's recent Apple Intelligence integration).

While the benefits of AI are undeniable, there's a growing elephant in the room: trust. Our recent research at Smart Design shows that as users gain experience with AI's capabilities, concerns about data privacy and security, bias, and larger societal risks such as job displacement become top of mind.

On top of that, a recent survey of AI experts paints a concerning picture: nearly half of respondents put at least a 10% chance on AI leading to human extinction. What other technology, so widely adopted, offers such immense potential while simultaneously posing such significant threats?

The bottom line: AI has a trust problem.

For tech companies and businesses looking to integrate AI, designing for user trust is paramount.


So how do we do that?

At Smart Design, we have deep experience with people, products, and user trust in emerging technologies. Drawing on that experience, and on our recent research with more than 50 GenAI users on the risks, benefits, and safety of GenAI, we've arrived at a set of universal needs that products and technologies must meet to earn user trust. Those are:

  1. Feeling empowered: Users must find the tool useful. It lets them do more: save time, offload menial tasks, unlock a new skill, or be more creative.
  2. Feeling safe and secure: Users need reassurance that their data is secure, that they are protected from viruses, bugs, or hacks, and that they won't come across anything threatening or disturbing.
  3. Feeling knowledgeable: Users want a solid understanding of how the product or service works so they can have confidence in the validity of its outputs.
  4. Feeling in control: Users want full control and ownership of the tool so they can wield it as they intend and shape its outputs as they see fit.
  5. Feeling respected: Users' ideas, beliefs, and values need to be represented and included, and their relationship with the product must be consensual, never exploited or taken advantage of.
  6. Feeling a sense of purpose: Users want to feel that supporting or using the product or service not only brings them value but also benefits their community, society, or the planet.


Common Generative AI tools don't meet these needs

GenAI tools are great at delivering on the first need: users tend to feel overwhelmingly empowered by faster workflows, offloaded tedium, and new ways to create. But these tools fall well short on the other five trust needs.

  • Safety and security: Unclear data practices, lack of regulation, fear of security breaches, disturbing content
  • Knowledge: Limited explanations or guidance, inaccurate mental models of how the AI or LLMs work
  • Control: Output limitations, lack of precision, lack of autonomy
  • Respect: Biased outputs, content theft from creators
  • Purpose: Job displacement, environmental impact

Some of these trust needs are starting to be addressed, but product makers still have a long way to go to close this trust gap.


What can you do?

Evaluate your AI tool or feature against these trust needs using the following cards. Does it address all aspects, or does it have a trust gap?


Interested in learning more?

At Smart Design, we help companies such as Google and Meta navigate the evolving AI landscape. We uncover user needs to design AI products that are safe, responsible, and trustworthy. One recent example is a partnership with Google: we interviewed professional creatives to understand their concerns about GenAI and to outline ways of designing products that support their needs.
