Dealing with AI Biases Part 1: Acknowledging the Bias

CMS Wire

The post-pandemic world has paved the way for rapid digital adoption, and we are now being ushered into a new age of AI and the metaverse. Although the technologies of the metaverse are still in their infancy, with many focused on gaming, the adoption of AI in consumer applications is widespread (thanks to ChatGPT). However, business adoption of AI has been slower due to two major areas of concern that are under constant scrutiny:

  1. The black box problem: AI’s decisions are uninterpretable
  2. The AI bias problem: AI’s decisions may be biased

The black box problem arises when it is unclear how an AI reaches its decisions. This worries decision-makers because they cannot trace how the AI comes up with its recommendations. However, not all business AI solutions are black boxes, and business leaders have long relied on many other technologies they cannot fully explain. The black box “problem” is therefore often a convenient scapegoat for decision-makers who are afraid of losing control. Moreover, the explainable AI (XAI) community has developed many technologies that bring far greater transparency to black box algorithms, so the black box problem is largely a problem of the past.

However, the bias problem is a legitimate concern that deserves closer examination. It is only a matter of time before AI applications become more prevalent in high-stakes industries, where businesses may rely on AI to help review resumes, automate job interviews, or even determine someone’s creditworthiness. Biased decisions from AI could lead to discriminatory business practices that ultimately result in a PR crisis and/or huge fines. Such risks could prevent businesses from even considering the use of such AI, as these industries may have compliance requirements.

How Does an AI Become Biased?

In order to address the AI bias problem, we must first understand where these biases come from. Here, we will use the simple definition that AI is machine mimicry of human behavior, with two important characteristics:

  1. The ability to automate decisions and subsequent actions.
  2. The ability to learn and improve future decision-making with usage.

Since AI must automate “human” decisions, it must mimic our decision-making processes. AI does this through machine learning (ML), which attempts to recreate our mental model of how the world operates. Biases in AI are therefore the result of biased models created by ML.

Now, because all ML models must be trained on data to fix their model parameters, an important reason ML creates biased models is that the data used to train them is biased. And most of the data used in ML training is generated by humans, from the decisions they have made in the past. AI bias is therefore simply a reflection of the inherent biases in our own decision-making processes.
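To make this concrete, below is a minimal sketch in Python (all data, group labels, and numbers are synthetic and purely illustrative) of how a model trained on data that under-represents one group can end up far less accurate for that group:

```python
# Minimal, hypothetical sketch: a classifier trained on skewed data inherits that skew.
# All data here is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two groups whose feature/label relationship differs slightly (controlled by "shift").
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: accuracy is typically
# much lower for the under-represented group B.
for name, (X_test, y_test) in {"A": make_group(2000, 0.0), "B": make_group(2000, 1.5)}.items():
    print(name, round(model.score(X_test, y_test), 3))
```

Running a sketch like this typically shows high accuracy for the well-represented group and something closer to a coin flip for the other, which is exactly the kind of inherited bias discussed below.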

How Can We Combat Bias in AI?

Knowing this chain of causality is important because it gives us multiple points of attack on the AI bias problem. Today, most AI practitioners in the industry treat AI bias as a data problem: to a large extent, if we can fix the biased data, we fix the AI bias problem. In fact, many technologies and startups have been created to discover, monitor, and potentially correct the biases in the data. However, fixing the biased data is not the only way to address the problem.

There are many ways to deal with biases in AI, depending on the context and use case of the AI in question. Here are 3 general approaches to the AI bias problem:

  1. Simply acknowledging the biases
  2. Correcting the biases:
    • Inherited biases – from bias in training data
    • Emergent biases – from bias in the machine learning process
  3. Fixing the root cause of biases

Since a detailed exposition of all 3 would make this article too long, we will cover the first approach today and save the others for future entries in this mini-series on the AI bias problem.

Acknowledging Your AI Is Biased

What Does Acknowledging the Biases Mean?

The first step in acknowledging the biases is to understand what types of biases exist in the AI under scrutiny. This requires a detailed analysis of both the input and the output of the training process before the AI is deployed to a production environment. It is important to note that these are not the input data and the output decisions of the AI under normal operation.

The main input to the training process is the training data used to determine the parameters of the model in the AI. Training data may be biased because it is typically collected from a much smaller sample of the target population, which includes all potential users and anyone who might be affected by the AI’s decisions. It is crucial to ensure the target population is well represented in the training data; otherwise, the AI will inherit the bias in that data and will not function properly for groups that are under-represented.
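As a simple illustration of this input-side check, the sketch below compares group shares in a training set against assumed target-population shares. The file name, column name, group labels, reference proportions, and the flagging rule are all hypothetical choices, not a standard:

```python
# Hypothetical representation check: compare group shares in the training data
# against assumed target-population shares. All names and numbers are illustrative.
import pandas as pd

# Assumed make-up of the target population (in practice, from census or customer data).
population_share = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}

train = pd.read_csv("training_data.csv")  # hypothetical file
train_share = train["demographic_group"].value_counts(normalize=True)

for group, expected in population_share.items():
    observed = train_share.get(group, 0.0)
    flag = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{group}: train={observed:.2%} vs population={expected:.2%} -> {flag}")
```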

The output of the training process is the model, so we must establish continuous monitoring of the model in operation. The AI’s decisions must be analyzed to see whether all of the biased decisions can be explained by the biased training data. Emergent biases that are not expected from the training data must be analyzed further to understand how they arose.
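A minimal output-side monitoring sketch, assuming the AI’s decisions are logged along with the group each one affects (the log file, column names, and alert threshold are illustrative assumptions), might look like this:

```python
# Hypothetical output-side monitoring: compare the AI's decision rates across groups.
# The decision log, column names, and alert threshold are illustrative assumptions.
import pandas as pd

decisions = pd.read_csv("decision_log.csv")  # one row per AI decision (hypothetical log)

# Share of favourable decisions per group, e.g. loan approvals or interview passes.
approval_rate = decisions.groupby("demographic_group")["approved"].mean()
overall_rate = decisions["approved"].mean()

for group, rate in approval_rate.items():
    # Flag groups whose approval rate drifts far from the overall rate; such gaps
    # should be checked against what the known biases in the training data predict.
    if abs(rate - overall_rate) > 0.10:
        print(f"Possible emergent bias for {group}: {rate:.1%} vs overall {overall_rate:.1%}")
```

Gaps that the training data cannot explain are the candidates for emergent bias that warrant deeper analysis.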

Prove Your Acknowledgement with Actions

Once we have a good understanding of both the inherited and emergent biases, the next question is: what do we do about them? The simplest thing we can do is nothing at all. So how can we know whether someone is really acknowledging the biases? They must prove it with actions.

Since all the known biases will still be in the AI, the best way to acknowledge their existence is to ensure the AI is used appropriately and responsibly. That means we must not over-generalize the AI’s applicability beyond its training data. If the AI exhibits biases against certain groups, then it should not be used for those groups. This essentially prevents biased decisions from manifesting and limits their negative impact.
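In practice, this can be enforced with a simple guard in front of the model that routes anyone outside the AI’s validated scope to a human reviewer. The group names, field names, and fallback mechanism in this sketch are hypothetical:

```python
# Hypothetical guard that keeps the AI inside its known generalization boundary.
# Group names, field names, and the human-review fallback are illustrative assumptions.
SUPPORTED_GROUPS = {"group_a", "group_b"}  # groups the model was validated on

def decide(applicant, model, human_review_queue):
    """Use the AI only for groups it was validated on; escalate everything else."""
    if applicant["demographic_group"] not in SUPPORTED_GROUPS:
        # Outside the validated scope: do not let a known-biased model decide.
        human_review_queue.append(applicant)
        return "escalated_to_human_review"
    return model.predict([applicant["features"]])[0]
```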

Therefore, acknowledging the biases is more than just passively admitting the existence of biases. It takes effort to identify the biases, and it requires action to ensure the AI is not used outside of its generalization boundary.

Conclusion

Bias in AI is an emerging issue that is hindering AI adoption in certain high-stakes industries. The first step to combating bias in AI is to acknowledge that biases exist, and to take steps to identify them and understand their scope and impact. Finally, to truly acknowledge these biases, we must use the AI carefully and make sure its usage is not over-generalized.

Stay tuned. We will discuss how we can correct the biases in the next 2 installments of this mini-series.

DISCLAIMER: The content of this article was first published by CMS Wire. Here is the latest revision and a current rendition of the raw manuscript submitted to the publisher. It differs slightly from the originally published version.