Bias - from society to AI

Hi! I’m Tytti, Saidot’s Communications Manager. I am a digital marketing and communications professional with a curiosity for new innovations and challenges. I’ve recently returned to Finland after spending 10 years abroad. I am very excited about what Saidot is doing in the AI transparency space, and I am taking in all the learnings from the team and the industry with open arms.

I’ve always been interested in how human factors impact our work. Last year, I wrote an article on Human Factors in the Health & Safety Industry, which was published by Power Magazine in the US. In that article, I explored how human characteristics and psychology explain our behaviour. In a similar manner, I am interested in the human element in AI.

I also have a personal fascination with different cultures and world views (a fascination that took me travelling around the world), and with how these make us see the world through different lenses. I believe that if we are able to recognise and study our societal biases (both positive and negative), we should also be able to better tackle bias in AI.

Human element of AI

What is sometimes forgotten is that behind every piece of code, every AI application and every robot lies a human being: a human being with individual life experiences, education, cultural background and opinions. These characteristics shape us, and no matter how much we want to think that we can stay neutral, they influence the way we handle and analyse data, and even the type of data we collect in the first place. We must therefore be careful not to transfer our negative personal, societal and cultural biases to AI.

For instance, when training an intelligent system, we use various types of data to teach it. If this training input is non-inclusive, what we get is a biased system that, through no fault of its own, will discriminate.
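
To make that concrete, below is a minimal sketch, with entirely made-up numbers, of how a historically skewed dataset gets baked into a model. The hiring scenario, the group and skill variables and all the figures are illustrative assumptions (the sketch assumes numpy and scikit-learn are available):

```python
# A minimal sketch of training-data bias; every number here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical" hiring data: group 0 is over-represented and
# was favoured in past decisions, independent of the skill score.
n = 1000
group = rng.choice([0, 1], size=n, p=[0.9, 0.1])
skill = rng.normal(0.0, 1.0, size=n)
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, size=n)) > 0.5

# Train a simple classifier on the biased history.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill who differ only in group
# membership get different predicted hiring probabilities:
for g in (0, 1):
    p = model.predict_proba([[0.5, g]])[0, 1]
    print(f"group {g}: P(hired) = {p:.2f}")
```

Through no fault of its own, the model simply reproduces the preference present in the data it was given.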

A number of big companies have detected bias in their live AI systems; Amazon’s experimental recruitment engine, discussed below, is one well-known example.

In such cases, the data used and collected for the development of the system leads the AI to discriminate against segments of the population. When this goes unrecognised, such a system can have a significant impact on people’s lives, particularly with deep-learning systems that find patterns, learn and make decisions based on the data they’ve been fed.

Hence, by being exposed to imperfect data, automated systems and learning algorithms inherit and reflect the existing prejudices and biases of our society. What’s worse, these societal biases can be amplified by algorithms. AI is increasingly involved in important decision-making that has an impact on people’s lives, e.g. credit applications, recruitment and applications for social benefits, which makes tackling bias in AI ever more important:

“In the end, an AI algorithm that makes a bad prediction of what movie we should stream next has little consequence. An AI algorithm that recommends what treatment we should receive has our life resting on its accuracy.” - Kalev Leetaru

In search of the origins of AI bias

Bias is something intangible, a state of mind, and depending on the type - explicit vs. implicit (unconscious) bias - it can be challenging to detect. The obstacle here is that bias in AI systems comes from the same source (humans) and is equally challenging (though not impossible) to detect. What’s more, bias can creep in at many stages, and it can perpetuate itself in a system unintentionally.

Since biases are nothing new in our society, data collection can also become demanding. Existing data used for training can already be biased due to past discrimination, and particular sets of data may simply be missing. In the Amazon recruitment case, a lack of data from female applicants led the recruitment engine to disfavour women.

According to my colleague Christian Fricke (Lead Back-End Developer at Saidot):

“The root of bias in automated systems and learning algorithms is located in our societal structures: (1) inherently biased humans create the data we use in machine learning to (2) generalise over a very large hypothesis space.”

So, to make AI the “better version of us”, we should keep discussing and looking for ways to tackle discrimination and prejudice in both our society and AI, especially when we’re building intelligent systems to which we want to give more power. Kalev Leetaru explains this handover from humans to AI well:

At the early stages, when algorithms are adopted for a function, a significant amount of human review is involved. Over time, however, scrutiny of the algorithm decreases and trust increases, meaning that an analyst, for example, will be given more content to verify, leading to less time spent with each algorithm. As this pattern of decreasing caution repeats itself, the human steadily comes to trust the machine, sometimes over their own expertise.

Lack of diversity

Although bias itself can be challenging to detect at times, some of its sources are plain to see. Within the AI industry, one such source is the lack of diversity. According to a report by the AI Now Institute, “More than 80% of AI professors are men, and only 15% of AI researchers at Facebook and 10% of AI researchers at Google are women.” The same report reveals that only 2.5% of Google’s workforce is black, with the same figure for Microsoft and Facebook standing at 4%.

Another study found that, in 2018, women made up only 25% of the workforce in computer occupations.

In addition to homogeneity in the industry, the motive behind a system may heavily impact bias in AI: what is it developed for, and how is it programmed to do it? A learning system will actively find better ways to reach the goal it has been set, but it will do so within the patterns it was taught. We must therefore make sure that the way the system is coded doesn’t perpetuate bias.

Tackling bias

So, what is being done, and what could be done, to minimise bias in AI? (It is not AI per se that is biased; the issues arise from the implicit biases we pass on to it.)

Tackling, mitigating, avoiding - whichever term suits best - bias in AI is not easy. This goes back to our society and to the fact that all humans carry biases which have existed for a long time. The information we have fed and will feed to AI systems comes from society, and the patterns that learning systems recognise are based on existing (imperfect) societal structures - unless we’re able to teach the system differently, of course. The issue is that we cannot “fix” an existing system by simply feeding new types of data to it. We must reprogram, validate, test and train for better results.
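
As one small piece of that validation work, a team might compare a system’s positive-decision rate across groups. The sketch below shows what such a check could look like; the data is invented, and the four-fifths threshold is a common rule of thumb rather than a universal standard:

```python
# A minimal, self-contained sketch of a demographic-parity check.
# The decisions, groups and threshold below are illustrative assumptions.
import numpy as np

def selection_rates(predictions, groups):
    """Return the fraction of positive decisions for each group."""
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    return {str(g): float(predictions[groups == g].mean())
            for g in np.unique(groups)}

# Made-up decisions from some automated system (1 = approved).
preds  = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

rates = selection_rates(preds, groups)
print(rates)  # {'a': 0.67, 'b': 0.33} (rounded)

# Four-fifths rule of thumb: investigate if one group's rate falls
# below 80% of another's.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f} -> "
      f"{'investigate' if ratio < 0.8 else 'ok'}")
```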

So, to use AI to help us tackle bias, we should build systems that are less biased in the first place: make sure data representation is as accurate as possible, ensure programs do not perpetuate bias, and validate results, inter alia. In addition, we must recognise bias, inequality and discrimination in our own societies in order to diversify our understanding and opinions:

“We must make sure our house is in order – we can’t expect an AI algorithm that has been trained on data that comes from society to be better than society – unless we’ve explicitly designed it to be.” - Bernard Marr

Regulations and legislation would further motivate organisations to be more transparent, to proactively seek bias in their systems, and to create bias-mitigating tools. In the US, for example, a proposed bill, the Algorithmic Accountability Act, would require companies to check their algorithms for bias, while in Canada the Algorithmic Impact Assessment helps organisations assess and mitigate the risks associated with deploying an automated decision system.

Education plays a vital role, too. Understanding data evaluation and having the tools to recognise AI bias during training could have a widespread impact on the industry. And finally, let’s not forget how embracing and improving diversity amongst developers, and having multidisciplinary teams at some stage of AI algorithm creation, would help us take different perspectives into account and prevent implicit biases from making their way into AI.

“The broader your base of people developing the product,” Vogel said, “the more you ensure that it’s a workable product and that it’s more globally applicable.” - Joan Michelson

AI has so much potential to help us become not only less biased (in decision-making) but also more productive and precise in our work. AI can mitigate human error and increase our productivity, but for humans and AI to work well together, we need to use the technology for good. #AIforgood
