The Dark Side of AI: Bias, Discrimination and Manipulation

AI has made great strides in recent years, offering many benefits and the potential to change our lives. But as we explore AI further, we need to confront the dark side that accompanies its development and deployment.

One major concern is the presence of bias and discrimination in AI decision-making processes.

AI systems are designed to learn from extensive datasets and make decisions based on patterns and correlations. However, these decisions often reflect the biases and prejudices present in society, perpetuating systemic inequalities and discriminatory practices.

Beyond the issue of bias lies another problem: AI's potential to manipulate human behaviour. By using complex algorithms and collecting data, AI systems can tailor information and experiences to individuals, influencing their opinions, preferences, and actions.

The manipulation of human behaviour raises concerns about privacy, autonomy, and the erosion of free will. AI's influence can be seen in targeted advertisements, filter bubbles, and personalised content feeds, subtly shaping our thoughts and decisions.

It's not an exaggeration to say that popular platforms like Google and Facebook know their users better than their own families and friends do.

Many companies collect massive amounts of data as input for their AI algorithms.

For instance, Facebook Likes can accurately predict various characteristics of users, such as sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, substance use, parental separation, age, and gender.

If something as simple as a 'like' button can reveal so much, imagine the information that can be extracted from search keywords, online clicks, posts, and reviews.

This issue extends beyond the digital giants. Placing comprehensive AI algorithms at the core of individuals' digital lives comes with risks. Let's consider some examples:

Target, a US retail chain, used AI and data analytics to predict whether customers were pregnant and sent them targeted ads for baby products.

In a particular case, a father became extremely upset and lodged a complaint with Target regarding the email ads his teenage daughter was receiving. He accused Target of trying to encourage her to engage in inappropriate behaviour.

However, he later discovered that his daughter was indeed pregnant, and Target had deduced this from analysing her online activities even before the father himself was aware of the situation.

Read more on that here.

In another instance, Uber users raised concerns that they were being charged higher fares when their smartphone battery was low, even though the official pricing model of Uber does not consider the battery level as a determining factor.

A small study conducted by the Belgian newspaper Dernière Heure found that Uber charged 6 percent more for a ride when it was requested from a smartphone with only 12 percent battery remaining, compared to the same ride requested from a phone with 84 percent battery.

Big tech companies have faced allegations of manipulating search result rankings to their own advantage.

Facebook was recently fined a record-breaking amount by the US Federal Trade Commission for violating its users' privacy rights, a practice that ultimately degraded the quality of its service.

The penalty imposed on Facebook amounts to $5 billion, making it the largest fine ever imposed on a company for violating consumer privacy.

This fine is nearly 20 times larger than any previous penalty imposed worldwide for privacy or data security violations. In fact, it stands as one of the largest penalties ever enforced by the US government for any violation.

Sayonara

Vusi Thembekwayo



