Removing AI Bias for Persons with Disabilities (second in series)

For many of us, artificial intelligence (AI) no longer needs to be explained, whether we interact with it in our personal lives through a smart assistant app or in the workplace through data-based insights.

We are seeing machine learning (ML) and deep learning (DL) adoption in many more businesses today. In its 2017 report “The State of Artificial Intelligence for Enterprises,” Teradata found that 80% of enterprises are investing in some form of AI technology, and according to forecasts from Gartner, the global AI market is projected to reach USD 169.41 billion by 2025, an impressive compound annual growth rate (CAGR) of 55.6% from 2016 to 2025.

Capabilities such as automated writing, prediction, pattern recognition, and advanced analytics can now match human behaviours and thinking almost instantly. Machines continue to learn and improve their own algorithms, helping to increase our digital intelligence. How smart are we?

While some organizations are still in the early stages of investigating risk, others have moved further along the maturity curve, and are adopting AI technology to help streamline and reengineer routine business processes and tasks. Organizational goals include reducing costs, improving productivity, and making faster business decisions.

However, deploying and scaling AI effectively remains a concern. Barriers still need to be overcome, including data silos, a lack of IT infrastructure, interoperability issues, budget constraints, and a shortage of in-house skills.

AI defined

There are two common definitions of AI: (1) Narrow AI (Weak AI) is a specific type of artificial intelligence in which a technology outperforms humans in a narrowly defined task. Siri, for example, operates within a pre-defined range of functionality. (2) Artificial General Intelligence (AGI, or Strong AI) refers to a machine with the capacity to understand or learn any task that a human being can, often doing a superior job. In healthcare, for example, some AI systems have proven better at detecting certain cancers than human doctors.

Big Data is fueling AI

We all leave digital footprints. If you are on social media, every interaction you make generates data that is collected and analyzed by marketers. Companies want to better understand consumer behaviour and preferences, sifting through mountains of data to design more personalized offerings.

Internet of Things (IoT)

Telecom provider Ericsson estimates there will be roughly 29 billion connected devices by 2022, of which around 18 billion will be collecting and sharing data as Internet of Things (IoT) devices. Think of the many security systems, thermostats, and lights all around us.

"Conversational AI" from IBM Watson

AI is also making great strides in customer interactions across a multitude of communication channels, including phone, website, online chat, email, and social media. AI-powered chatbots enable conversations, via auditory or textual methods, with a virtually unlimited number of customers at the same time, a feat simply unattainable by humans.

IBM Watson is one of the most recognizable brands in the growing field of “conversational AI.” Juniper Research predicts that by 2022, 85% of all customer interactions will be handled without a human agent, and that chatbots will save companies USD 8 billion per year.

Facebook’s recent acquisition of Israeli startup Servicefriend is indicative of the great potential of chatbots. Customer service bots are being geared up for the 2020 launch of Facebook’s cryptocurrency, Libra. An unprecedented efficiency level of 18.3 agent hours for every 1,000 interactions is now possible.

AI needs to augment human capabilities

"Businesses can be more successful when machines and humans work together, according to a 2018 Accenture study."

Businesses can be more successful when machines and humans work together, according to a 2018 Accenture study. The authors, Paul Daugherty and Jim Wilson, conclude that AI needs to augment human capabilities rather than replace humans with machines. Machine-learning algorithms must be taught how to perform the work they’re designed to do, with humans and machines participating in “collaborative intelligence.”

But what happens if the algorithms we design are unfair and incorporate bias?

More and more decisions are being delegated to machines: authorizing access, recognizing illnesses, predicting risks, and determining loan and credit worthiness, academic potential, and future employment performance, to name but a few. However, our most vulnerable populations, including persons with disabilities (PWDs), are at great risk.

Stop teaching machines our stereotypes and presumptions

From an accessibility data-collection perspective, there is growing concern about how AI will handle non-traditional digital and human patterns, and whether it will exclude the exceptions: the experiences of our most marginalized and underserved populations and of people with diverse abilities.

At the 2017 DEEP conference, Jutta Treviranus, Professor and accessibility advocate at OCADU in Toronto, expressed this well: “This trend has the potential to amplify existing inequities. Whether we welcome or fear machine intelligence, it is important that we attend to what we teach machines. Do our machines understand and serve individuals that are different, or fail to recognize and ignore anyone that does not conform to the model of an average human?” 
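To make this concrete, consider a small, purely hypothetical sketch (not drawn from any real product) of how a system built around the “average” user can fail someone who is different. A naive legitimacy check that flags any interaction speed far from the average of its training data will reject a person using a screen reader or switch access who simply needs more time per form field. The timing values, the z_cutoff threshold, and the looks_legitimate function are all invented for illustration.

```python
# Purely illustrative: a naive check that treats "far from average" as "not legitimate".
# Interaction times (seconds per form field) are invented numbers.
from statistics import mean, stdev

typical_users = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7]  # the "average human" training data
avg, sd = mean(typical_users), stdev(typical_users)

def looks_legitimate(seconds_per_field, z_cutoff=2.0):
    """Accept only users within z_cutoff standard deviations of the average speed."""
    return abs(seconds_per_field - avg) <= z_cutoff * sd

# A screen-reader or switch-access user may simply need more time per field.
print(looks_legitimate(2.0))   # True: fits the model of an "average" user
print(looks_legitimate(9.5))   # False: a real person, rejected as an outlier
```

The point is not that any real system works exactly this way, but that a model calibrated only to a statistical “average human” quietly treats difference as error.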

Interest in algorithm transparency is growing. Fiddler, a Mountain View, California-based startup, recently raised USD 10.2 million for its “explainable AI” engine, used for managing, analyzing, and validating data.

Having an explanation, however, does not always solve the problem. When variables in an algorithm are changed and proxies are used instead, the same biases may still recur. We need to examine algorithm outputs and begin incorporating variables that reflect diversity. Let’s stop teaching machines our stereotypes and presumptions.
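As an illustration of what examining algorithm outputs can look like in practice, here is a minimal sketch, not tied to any specific tool or dataset, that compares a model's approval rate for applicants who identify as having a disability with the rate for those who do not, using the common “four-fifths” (0.8) rule of thumb as a rough screening threshold. The records, the has_disability and model_approved fields, and the threshold itself are illustrative assumptions.

```python
# Illustrative sketch only: compare a model's approval rates for applicants
# with and without disabilities, using the rough "four-fifths" (0.8) heuristic.
# The records and field names are hypothetical.

def selection_rate(records, has_disability):
    """Share of records in the given group that received a positive outcome."""
    group = [r for r in records if r["has_disability"] == has_disability]
    if not group:
        return None
    return sum(r["model_approved"] for r in group) / len(group)

def disparate_impact_ratio(records):
    """Ratio of the PWD group's approval rate to the reference group's rate."""
    rate_pwd = selection_rate(records, True)
    rate_ref = selection_rate(records, False)
    if rate_pwd is None or not rate_ref:
        return None
    return rate_pwd / rate_ref

if __name__ == "__main__":
    # Hypothetical model outputs: 1 = approved, 0 = rejected.
    outputs = [
        {"has_disability": True,  "model_approved": 0},
        {"has_disability": True,  "model_approved": 1},
        {"has_disability": True,  "model_approved": 0},
        {"has_disability": False, "model_approved": 1},
        {"has_disability": False, "model_approved": 1},
        {"has_disability": False, "model_approved": 0},
    ]
    ratio = disparate_impact_ratio(outputs)
    if ratio is not None:
        print(f"Disparate impact ratio: {ratio:.2f}")
        if ratio < 0.8:
            print("Flag for review: approvals for persons with disabilities fall below 0.8x the reference rate.")
```

A check like this does not certify fairness on its own, but it is the kind of routine, disaggregated review of algorithm outputs that accessibility stakeholders can ask for.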

I invite all accessibility stakeholders to engage in our series, and help debunk the many myths that have created accessibility barriers. Please comment below.

Stay up to date on the entire series by searching for #AequumTrends2020, and clicking the "Follow" button. Thanks! Pina 

About Pina D'Intino

Pina D’Intino is an internationally recognized speaker on accessibility, strategist, integration specialist, and entrepreneur who helps businesses achieve inclusion through diversity, accessibility, and leadership. Well versed in project and organizational frameworks, Pina confidently navigates the complexity of business processes, environments, and different business cultures.

To learn more visit: www.aequumglobalaccess.com or contact Pina at Pina@Aequumaccess.com

David Wysocki, M.Sc., OT (Reg) (Ont.)

Accessibility Professional & Occupational Therapist

Agree with  Claudio Luis Vera.  The article (and Pina) deserve a thumbs-up like, lightbulb of insight, applause and heart.  Having only to choose 1, I'd say the insight is most illuminating.

David Wysocki, M.Sc., OT (Reg) (Ont.)

Accessibility Professional & Occupational Therapist

Excellent coverage of insightful concerns, and well written!  I'd say it represents the advocacy of the future, but the future is obviously now and more important than ever.   Thanks much Pina!

Claudio Luis Vera, MBA, CPWA

Senior Design Lead, Certified Accessibility Professional, experienced in data analytics and AI

If I could like, heart, applaud and give this article a lightbulb for insight, I would. We really need to educate ourselves on how algorithms work if there’s ever going to be any real oversight on AI. I look forward to seeing the rest of this series.

Steven McNeil, MCPM-T

Open-minded Accessibility (A11y) Champion & Advocate; Fact-based Storyteller; Ex-CIBC UX Accessibility Analyst; Professional with Project Management & Business Analysis skill sets, preceded with I.T. background

Pina, Keep this up!  Get the word out!  We are not machine generated algorithms.

Florence Chapman

Diversity, Equity, Inclusion and Accessibility, Human Resources and Business Process Manager at Sobeys Inc.

Thanks for articulating so well both the accelerating opportunities and greater challenges faced by people with disabilities in greater business adoption of AI!
