The AI Trust Fall

Suppose I asked you: “When someone turns in a work assignment, how accurate is it? 80%, 90%, 95%, or perhaps 100%?”

We don’t think this way about coworkers’ spreadsheets. But we will probably think this way about AI & this will very likely change the way product managers on-board users.

When was the last time you signed up for a SaaS & wondered: Would the data be accurate? Would the database corrupt my data? Would the report be correct?

But today, with every AI product tucking a disclaimer at the bottom of the page, we do wonder. “Gemini may display inaccurate info, including about people, so double-check its responses” & “ChatGPT/Claude can make mistakes. Check important info” are two examples.

In the early days of this epoch, mistakes will be common. Over time, less so, as accuracy improves.

The more important the work, the greater people’s need to be confident the AI is correct. We will demand much better than human error rates. Self-driving cars provide an extreme example of this trust fall. Waymo & Cruise have published data arguing self-driving cars are 65-94% safer than human drivers.

Yet two-thirds of Americans surveyed by the AAA fear them.

We suffer from a cognitive bias: work performed by a human feels more trustworthy because we understand its biases & limitations. AIs are a Schrödinger’s cat stuffed in a black box. We don’t comprehend how the box works (yet), nor can we quite believe our eyes when we see whether the feline is dead or alive.

New product on-boarding will need to mitigate this bias.

One path may be to start with low-value tasks where the software-maker has exhaustively tested the potential inputs & outputs. Another tactic may be to provide a human-in-the-loop to check the AI’s work. Citations, references, & other forms of fact-checking will be a core part of the product experience. Independent testing might be another path.
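
One concrete shape for that human-in-the-loop tactic: route every AI draft through a reviewer before it reaches the user, and only let well-tested, low-stakes answers skip the queue. The sketch below is a minimal, hypothetical illustration in Python - `generate`, `review`, the confidence score, & the threshold are assumptions for the example, not any particular product's API.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class Draft:
    """An AI-produced answer awaiting approval."""
    answer: str
    citations: list[str] = field(default_factory=list)  # sources the model claims to rely on
    confidence: float = 0.0                              # heuristic or model-reported score in [0, 1]


def answer_with_review(
    question: str,
    generate: Callable[[str], Draft],            # hypothetical call into the AI system
    review: Callable[[Draft], Optional[Draft]],  # human approves, edits, or rejects (returns None)
    auto_approve_threshold: float = 0.99,        # only well-tested, low-stakes paths skip review
) -> Optional[Draft]:
    draft = generate(question)
    # Ship directly only when the draft is high-confidence *and* carries citations a user can check.
    if draft.citations and draft.confidence >= auto_approve_threshold:
        return draft
    # Everything else waits for a person: the human-in-the-loop catches the trust fall.
    return review(draft)
```

How strict the gate is - a very high confidence bar, or simply routing everything to a reviewer - is a product decision; the point is that the approval step & the citations a user can check are built into the experience rather than bolted on.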

As with any new colleague, first impressions & a series of small wins will determine how much trust users extend. Severe errors will erode that confidence, which must then be rebuilt - likely with the help of human support teams who explain what happened, develop tests to prevent a recurrence, & reassure users.

I recently asked a financial LLM to analyze NVIDIA’s annual report. Its answer to a question about the company’s dividend increase vaporized its credibility, raising the question: is it less work to do the analysis myself than to check the AI’s work?

That will be the trust fall for AI. Will the software catch us if we trust it?

Jason M. Lemkin

SaaStr Annual 2025 is May 13-15 in SF Bay!! See You There!!

It, in essence, passes the test by default for jobs no one is willing to do anymore.

Kirill Soloviev

LLM quality evaluation made easy for non-developers · AI LQA SaaS & translator feedback automation for Localization teams · Linguistic Quality Management expert · 20yrs in tech & int'l

“Trust but verify” - this is what we always say at ContentQuo! We’ve built the company to help buyers of AI-powered services develop trust in their vendors based on easy-to-understand & openly shared quality metrics. In our first vertical, this really helped alleviate trust issues over time and move to constructive, data-based vendor performance conversations and procurement processes. Expecting to see more of this as the LLM space grows!

Konstantin Ristl

CEO @ Histack.io - Use AI Outreach to Drive Sales | SaaS Founder | AI Enthusiast | Mentor

I love your post! I’m a big advocate for pointing out that not only AI but also humans make mistakes. When trusting someone with a task for the first time, I’m more skeptical than after seeing them perform it multiple times. As for people: not every task fits every person, just like not every task fits AI.

Jeff Dodge

Chief Revenue Officer at Fog Solutions - Building AI Foundations and Solutions on Azure

Great piece. Your point about humans trusting self-driving cars is particularly poignant, as the data is pretty overwhelming that lives would be saved if self-driving cars were broadly adopted in suitable circumstances. Ultimately, humans want to know "who is accountable" and "who will be punished if something goes wrong," even if there are fewer occurrences of something going wrong. When a human is at fault, society punishes that human and it 'feels right', i.e. justice is served. When AI is wrong, the only 'person' to hold accountable in most circumstances is a corporation, generally a trillion-dollar corporation with a lot to lose financially, so the risk/reward for those corporations is tricky, and the legal ramifications even more so.

Luke Shalom

CEO @ Grow Solo | I help the C-suite turn industry expertise into predictable pipeline

Trusting AI without fully understanding it can be worrying.

