The Call for “Responsible” on Top of Ethical and Trustworthy AI
It’s not just ethics and trustworthiness that matter in deploying AI; Responsible AI is a major concern too. — (Source: RaffMaster via Shutterstock)


By: Ryan Jacobs, Solutions Architect, Apps, Insight Canada


In the era of AI, words matter

I’m not talking about prompt engineering — the skill of properly prompting generative AI to ensure it delivers the desired outputs. I’m talking about the individual governance frameworks employed in further developing and using the emerging (maybe “skyrocketing” is more accurate) technology. Words like “trustworthy” and “ethical” are often used interchangeably to define the correct approach. Add in “responsible” and lines begin to blur. However, there is a noteworthy difference among the three.

Ethical, responsible and trustworthy AI (oh my)

How should these be viewed? There isn’t a consensus. In my opinion, the issue is with trying to “humanize” AI. It doesn’t help that we use terms like “personality” to describe it and “hallucinate” when it responds nonsensically. Ethics are rooted in morality, but AI has no morality of its own to act on; its ethics come from its training data. And whose morals are we following to determine those ethics?

That’s a question people have been asking for millennia: “What is right?” That question is too broad for this article, but we should differentiate between ethical, trustworthy and responsible AI. How do we develop and use AI to reduce harm but still achieve the outcomes we want? That’s why responsible AI is a north star here.

We start by looking at the data used to train the model. This is why you often hear about “bias in the model.” Policing is a good example: facial recognition software trained on skewed data can disproportionately, and mistakenly, flag minorities as would-be criminals. Ethically, a model trained on that data is flawed, intentionally or not, and so is its use. So, it isn’t trustworthy.

Consider another example: A University of Chicago study determined that models trained on patient data to identify cancer could also ascertain which medical institutions new sets of images came from. The models were taking shortcuts, predicting outcomes from the originating institution rather than from biology and genetics. Deployed properly, an ethical model of this kind should help doctors determine trustworthy treatment options. However, if the AI learns a patient is at a specific hospital, one with less access to quality care that serves a more disadvantaged population, it could predict a worse prognosis, conceivably leading to missed treatment opportunities.

It’s probably impossible to completely eliminate bias, but there are ways to minimize it. In the University of Chicago example, developers could distribute disease outcomes evenly across each included institution. The rub is, even if a model is retrained to remove the bias, it won’t be trusted by the end users — doctors here. It’s been tainted, serving as an example of an untrustworthy but ethical model.
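
To make that mitigation concrete, here is a minimal sketch of what “distributing disease outcomes evenly across each institution” could look like in practice. It is not the study’s actual method; the field names, the 50/50 target and the toy records are illustrative assumptions.

```python
# A minimal sketch, not the study's actual method: downsample each hospital's
# records so every site shows the same outcome mix, removing the "which
# hospital is this?" shortcut. Field names and toy data are assumptions.
import random
from collections import defaultdict

def rebalance_by_site(records, seed=42):
    """Keep an equal number of positive and negative cases per site."""
    rng = random.Random(seed)
    by_site = defaultdict(lambda: {"pos": [], "neg": []})
    for r in records:
        by_site[r["site"]]["pos" if r["label"] == 1 else "neg"].append(r)

    balanced = []
    for groups in by_site.values():
        n = min(len(groups["pos"]), len(groups["neg"]))  # simple 50/50 target
        balanced += rng.sample(groups["pos"], n) + rng.sample(groups["neg"], n)
    rng.shuffle(balanced)
    return balanced

# Toy data: site B is heavily skewed toward positive outcomes.
records = (
    [{"site": "A", "label": 1}] * 40 + [{"site": "A", "label": 0}] * 60
    + [{"site": "B", "label": 1}] * 80 + [{"site": "B", "label": 0}] * 20
)
for site in ("A", "B"):
    subset = [r for r in rebalance_by_site(records) if r["site"] == site]
    print(site, "positive rate:", sum(r["label"] for r in subset) / len(subset))
```

Even with a rebalanced training set, though, the trust problem described above remains: the retrained model still has to win back its end users.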

The opposite can be true too; AI can be incredibly effective at intrusively targeting you with personalized ads online. All things considered, that’s an example of AI you can trust. However, it can be used unethically.

The ultimate goal is to say a given model is ethical to the point we trust we can use it in a responsible manner. That responsibility falls squarely on humans… even from the get-go when developing the ethical model in the first place.

Placing your trust as a client in Insight

Trust is an issue we run into when deploying apps for clients that leverage AI.

In fact, Insight Canada was recently announced as Microsoft Americas Partner of the Year for AI & Copilot Innovation. So, let’s look at Microsoft Copilot. As with any gen AI app, responsible deployment requires setting up security boundaries so that, for example, human resources data doesn’t become available to just anyone.

Along with proper security-verification processes, a big document-management effort may be needed. AI provides shockingly effective access to all data in an organization. An initial guardrail setup ensures users only get access to the data they need.

This type of safeguard fits under “responsible” more than “ethical” or “trustworthy.” Which boundaries are we going to put up to make sure the right data is accessible? The correct boundaries permit access for those who need it while limiting or prohibiting unethical use. With every client I’ve talked to about AI, the first question is, “Can it do this?” The second is, “Can we ensure our data is kept secure?” The answer to both is usually, “Yes!” But only as long as it follows a responsible approach.
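
In practice, a guardrail like that often comes down to one rule: filter what the model can retrieve by who is asking, before anything reaches the prompt. Here is a minimal sketch; the roles, sensitivity labels and in-memory “index” are illustrative assumptions rather than any specific product’s API.

```python
# A minimal sketch of the guardrail described above: filter the document set
# by the requesting user's role *before* anything is handed to the model.
# Roles, sensitivity labels and the in-memory "index" are illustrative
# assumptions, not any specific product's API.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    sensitivity: str  # e.g. "public", "internal", "hr-restricted"
    text: str

# Which sensitivity labels each role is allowed to see.
ROLE_ACCESS = {
    "employee": {"public", "internal"},
    "hr": {"public", "internal", "hr-restricted"},
}

def retrievable_documents(user_role, documents):
    """Return only what this role may see; the AI never receives anything
    outside this set, no matter what the prompt asks for."""
    allowed = ROLE_ACCESS.get(user_role, {"public"})
    return [d for d in documents if d.sensitivity in allowed]

index = [
    Document("handbook", "public", "Vacation policy overview..."),
    Document("salaries", "hr-restricted", "Compensation bands..."),
]
print([d.doc_id for d in retrievable_documents("employee", index)])  # ['handbook']
print([d.doc_id for d in retrievable_documents("hr", index)])        # ['handbook', 'salaries']
```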

Insight recently integrated gen AI into a solutions provider’s legal software to streamline contract creation for their law-firm customers. To create the contracts, the app had to semantically interpret interview transcripts, finding keywords and applying Natural Language Processing (NLP) to them. We delivered something that differentiates the client from competitors, boosting the productivity of its customer base: more than 60,000 law firms around the world.

The idea is for law firms to input transcripts of customer interviews to create drafts of contracts, like wills. These transcripts form the training data, but ethical use of that training data means the output absolutely should not include personal information.
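
As a simple illustration of that principle, the sketch below strips obvious personal details from a transcript before it is used for anything downstream. Production systems would rely on a proper PII-detection service; the regular expressions and the sample transcript here are assumptions for illustration only.

```python
# A minimal sketch, under assumed patterns, of stripping obvious personal
# details from an interview transcript before it goes anywhere near a model.
# Real redaction would use a dedicated PII-detection service; these regexes
# and the sample transcript are illustrative only.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SIN":   re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Canadian SIN-like
}

def redact(transcript: str) -> str:
    """Replace matches with a placeholder so drafts never echo them back."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{label} REDACTED]", transcript)
    return transcript

sample = "Client: reach me at jane.doe@example.com or 416-555-0142 about the will."
print(redact(sample))
# Client: reach me at [EMAIL REDACTED] or [PHONE REDACTED] about the will.
```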

Today’s gen AI is disruptive technology.

The law firm example is just the beginning of where the need for responsibility comes into play. Since the AI is trained to draft contracts that will eventually be legally binding, trustworthiness in the output is paramount, but verification is essential. Therefore, there’s a necessary guardrail in place to ensure the solution doesn’t say the generated output is legally binding: a lawyer needs to look at it. You must keep the human in the mix. That’s the responsible thing to do.
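
A guardrail like that can be enforced in code rather than left to convention. The sketch below is a hypothetical, simplified version of the idea, not the actual solution’s design: a generated draft carries a disclaimer and cannot be finalized until a named lawyer has signed off.

```python
# A hypothetical, simplified sketch of "keep the human in the mix": an
# AI-generated draft carries a disclaimer and cannot be finalized until a
# named lawyer signs off. Class and field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContractDraft:
    body: str
    disclaimer: str = "DRAFT: AI-generated. Not legally binding until reviewed by a lawyer."
    reviewed_by: Optional[str] = None

    def approve(self, lawyer: str) -> None:
        self.reviewed_by = lawyer

    def finalize(self) -> str:
        if self.reviewed_by is None:
            # Refuse to present unreviewed output as a finished contract.
            raise RuntimeError("A lawyer must review this draft before it is finalized.")
        return f"{self.body}\n\nReviewed by: {self.reviewed_by}"

draft = ContractDraft(body="Last Will and Testament of ...")
print(draft.disclaimer)           # shown alongside every generated draft
draft.approve("J. Smith, LL.B.")
print(draft.finalize())           # only possible after human review
```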

The use of humans to verify AI-generated output is an example of a control to ensure responsible use. — (Source: Aree_S via Shutterstock)

Gen AI’s ability to produce the unexpected is what makes it the disruptive technology it is today. For that reason, it must be controlled to ensure responsible use. It’s a balancing act, because if you restrict it too much, it’s little better than a search engine. There’s still value in that: You can interact with it in a human, natural-language way that assists people who may otherwise struggle to get good search results. Still, responsible AI use means you need to be careful. That’s why, time and again, you hear: “This was AI-generated. Make sure a human checks it.”

One of the challenges we face with governance in general is that policies are written down for people to follow. That can be effective when the policies have some teeth. In the legal solutions provider’s case, the policy is effectively set by the bar association: if a lawyer signs something without verifying it and something is wrong, they face ramifications.

Technical controls to back the written policies are even better. Many of the gen AI tools we develop for clients start with something as simple as ensuring the AI’s personality (there’s that word again) stays within specifications. To make this work broadly, society needs far more controls in place. Otherwise, the challenges we’ve faced with gen AI to date will seem like a drinking fountain compared to the tsunami of what’s to come.
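
What does such a technical control look like? At its simplest, something like the sketch below: a fixed system prompt sets the AI’s personality, and every response is checked against basic rules before the user sees it. The prompt text, the banned phrases and the call_model() stub are illustrative assumptions, not any particular vendor’s API.

```python
# A minimal sketch of a technical control backing a written policy: wrap every
# request in a fixed system prompt, then check the response against simple
# rules before it reaches the user. Prompt text, banned phrases and the
# call_model() stub are illustrative assumptions.
SYSTEM_PROMPT = (
    "You are a contract-drafting assistant. Stay professional, and never "
    "claim your output is legally binding or that no lawyer review is needed."
)

BANNED_PHRASES = ["legally binding", "no lawyer review needed"]

def call_model(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for whatever gen AI service the solution actually uses."""
    return "Here is a draft clause for your review."

def guarded_completion(user_prompt: str) -> str:
    reply = call_model(SYSTEM_PROMPT, user_prompt)
    if any(phrase in reply.lower() for phrase in BANNED_PHRASES):
        # Policy violation: fall back rather than pass the claim through.
        return "This draft needs human review before it can be relied on."
    return reply

print(guarded_completion("Draft a confidentiality clause."))
```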

Keeping up with generative AI

I can’t remember anything that’s taken off like this in my 20+ years working in IT. The internet itself took decades. Facebook took four and a half years to reach 100 million users, which seemed lightning-fast then. There are a lot of examples of things that grew quickly, but ChatGPT blows them all away, having reached that same 100 million-user milestone in two months.

Personal privacy on the internet still lacks any real regulation globally, despite longtime demands for it. Now the rampant use of gen AI for things like deepfakes has sparked the beginnings of legislation to regulate responsible use. Speaking from a strictly legal perspective, the world is very far behind where it needs to be to support responsible AI in light of how ubiquitous it has become.

For AI to bring value to our clients, the only approach is a responsible one. Everything must be properly vetted, from the training data up to and including how use of the model delivers that value. Just look at all the legal battles we’re seeing over copyrights, which have led in part to Microsoft’s commitments to assist clients in deploying AI tools responsibly. The Microsoft Customer Copyright Commitment, which requires customers to implement Microsoft’s commitments-and-mitigations framework (something Insight can help with), is one more sign we’ve stepped into a new era.

Needless to say, there will be a lot of movement in this space. To throw one final wrench into the conversation, there is the climate change issue: AI consumes enormous amounts of power, with some companies saying they’ll build nuclear reactors to power their data centres.

These global concerns are a part of responsible AI. We need to make sure our use of it is responsible on a grand scale, not just the individual scale of each prompt we send. The governance landscape will need to adapt as quickly as our AI use evolves. Actions must speak louder than words … and, as has been established, words can be pretty powerful.
