Explainable AI: The Key to Opening the Black Box

Last week I posted about the AI black box problem. So this week, I want to offer a solution to that problem. In fact, this week at the INFORMS Business Analytics Conference, I’ll be showing you how we overcome the black box problem with Explainable AI (XAI) at the Executive Insights Keynote. If you are at INFORMS, I hope to see you there!

Since many skeptics still reject AI-based technologies on the grounds of the “black box” problem, it’s apparent that transparency into how the AI arrives at a particular decision is crucial to AI adoption. However, the black box problem is just a convenient scapegoat. Many business applications of AI use simple models that are fully interpretable. Some AI systems do use highly complex models whose interpretation can be challenging, yet there is nothing inherently uninterpretable about any of the commonly accepted black box models.

Although black box models have a large number of model parameters (as opposed to a simple linear model, which has only two: a slope and a y-intercept), each and every one of those parameters is accessible and can be examined. Moreover, these parameters can be collected, tabulated, and analyzed just like any other data set. It is even possible to trace every single input through the model and watch how the model and its parameters transform it, step by step, into the output.
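To make this concrete, here is a minimal sketch (assuming scikit-learn, NumPy, and pandas; the small network and synthetic data are purely illustrative) that pulls every parameter out of a trained neural network, tabulates them, and traces a single input through the layers by hand:

```python
# A toy MLP (hypothetical architecture and synthetic data) whose parameters we
# pull out, tabulate, and use to trace one input all the way to the output.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    max_iter=1000, random_state=0).fit(X, y)

# 1) Every parameter is accessible and can be tabulated like any other data set.
rows = [{"layer": i, "weights": W.shape, "biases": b.shape, "n_params": W.size + b.size}
        for i, (W, b) in enumerate(zip(mlp.coefs_, mlp.intercepts_))]
print(pd.DataFrame(rows))

# 2) Trace a single input through the network, layer by layer.
a = X[:1]
for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
    a = np.maximum(a @ W + b, 0)                  # hidden layers: ReLU(x @ W + b)
z = a @ mlp.coefs_[-1] + mlp.intercepts_[-1]      # output layer (binary: logistic)
print(1 / (1 + np.exp(-z)), mlp.predict_proba(X[:1]))   # hand-traced output matches
```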

So in reality, black box models are merely hard to interpret – and while it may take some (or in some cases a lot of) work to make sense of what the model is doing, it’s definitely not impossible. The black box problem is inherently just a complexity problem.

Interpretability vs Explainability: There is a Difference

Simple models (e.g. linear, logistic, decision trees, additive models, etc.) are interpretable because we can directly examine the model parameters and infer how these models transform their inputs into outputs. Therefore, these models are self-explanatory and do not need to be further explained. In short, interpretability implies self-explainability.
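For example, here is a minimal sketch (assuming scikit-learn, with a made-up toy data set) of reading a logistic regression’s handful of parameters directly:

```python
# A self-explanatory model: each coefficient is the change in log-odds of the
# positive class per unit change in that feature.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

for i, w in enumerate(model.coef_[0]):
    print(f"feature {i}: coefficient {w:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```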

In contrast, black box models (e.g. deep neural networks, random forests, generative adversarial networks, gradient boosted machines, etc.) are not interpretable (not self-explanatory). These models require explainability, meaning they need to be explained retrospectively, after the model is trained. The degree of explainability is directly related to where a black box sits on the complexity spectrum: more complex models are less explainable (they are harder to explain and require more work to decipher).

The bottom line is, the boundaries between what is labeled interpretable and uninterpretable are fuzzy. In my previous article, I mentioned that one of the necessary criteria for a model to be considered a black box is the number of model parameters it has. But we can’t just draw a line and say that models with more than a million parameters are uninterpretable, because models with 999,999 parameters don’t suddenly become interpretable. 

Therefore, model interpretability and model complexity are actually two directly related continuums. And black boxes are more like gray boxes with different shades of gray. The important point is that, with enough work, all black boxes (even those with stochasticity and randomness) are explainable!

Black box interpretability and explainability aren’t the same

When are Interpretability and Explainability Required?

If you have business problems that require the learning capacity of a black box, you must meet two criteria in order to utilize the black box effectively. 

The first is training data volume, because black box models require huge amounts of training data. There are no hard and fast rules for how much data is needed to train a black box model, because the precise training data volume required is not only data dependent but also problem specific. However, a general rule of thumb is that the number of data samples should exceed the number of model parameters. Remember, black box models have a huge number of parameters that can easily run into the millions or billions.
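As a back-of-the-envelope check (the layer widths and sample count below are made-up numbers, not a recommendation), you can compare a candidate network’s parameter count against your training-set size:

```python
# Rough rule-of-thumb check: does the number of samples exceed the number of parameters?
layer_sizes = [10_000, 512, 256, 1]   # hypothetical input width and hidden layers
n_params = sum(w_in * w_out + w_out   # weights + biases per fully connected layer
               for w_in, w_out in zip(layer_sizes[:-1], layer_sizes[1:]))
n_samples = 50_000                    # hypothetical training-set size
print(f"{n_params:,} parameters vs {n_samples:,} samples -> "
      f"{'OK' if n_samples > n_params else 'probably not enough data'}")
```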

The second criterion to consider is computing resources. Due to the large number of parameters they have, black box models also take a long time to train. In practice, distributed computing resources (often with GPUs) are required to make using black boxes feasible.

If you can meet these two criteria, then you should leverage the power and benefits that come with the black boxes. One should never shy away from using black box models just because they are “uninterpretable”. If a magic black box is able to tell you which stock to buy and its recommendations are consistently beating the market, it shouldn’t matter whether it’s interpretable or not. In many situations, it is only when the model’s performance is inconsistent (e.g. when it fails or behaves unexpectedly) that businesses become interested in interpreting the model to understand what it’s doing. Don’t let the lack of interpretability stand in the way of making good business decisions.

With that said, however, some industries and business problems require explainability for compliance and regulatory obligations. For example, the lending industry has strict requirements for explanations when a loan applicant is denied a loan. Telling the applicant that they didn’t score high enough by some mystery black box is not sufficient, and it’s a good way to get your business operation shut down. This is when we need to explain the uninterpretable.

Explaining the Uninterpretable

Although humans are superior in handling many cognitive tasks, including critical thinking, creativity, empathy, and subjectivity, we are not great at handling complexity. Psychologists have found that humans can only keep track of about 7±2 things in our working memory. But machines (e.g. a computer) can keep track of millions and billions of items (limited only by the size of their RAM) and still operate with little performance degradation. Since the black box problem is merely a complexity problem, we can use machine-aided analysis or other machine learning (ML) algorithms to explain the black boxes. 

Because many black box models have been well-established statistical tools for decades, their interpretability and explainability are not new problems. Today, we’ve merely scaled up many of these models, thanks to the availability of data and the computing resources to train them, making them even more complex. This has created a need for explainable AI (XAI).

Explainable AI (XAI) is key to opening the black box

Although XAI is an active area of research today, the XAI community has been around for as long as the black boxes themselves; the neural network (NN) was invented back in the 1950s. In fact, I was a part of this XAI community, and I’ve developed a technique that explains NNs trained to mimic the visual processing within the human brain. Today, there exists a myriad of methods developed in specialized domains to explain different types of black boxes. There are even open-source, domain-agnostic and model-agnostic black box explanation packages (e.g. LIME and Shapley values). Both of these are popular XAI techniques, and my teams are currently using them in our R&D work.
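For illustration, here is a minimal sketch of the Shapley-value route (assuming the shap and scikit-learn packages; the random forest and synthetic data are stand-ins for a real black box). LIME follows a similar fit-then-explain pattern for tabular data.

```python
# Post hoc, model-agnostic explanation: attribute each prediction of a black box
# to its input features using Shapley values.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(rf)            # efficient Shapley values for tree ensembles
shap_values = explainer.shap_values(X[:200])  # one attribution per feature, per prediction

# Rank features by their average absolute contribution across the explained predictions.
importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(importance)[::-1]:
    print(f"feature {i}: mean |SHAP| = {importance[i]:.3f}")
```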

Glass Box ML Frameworks

The XAI methods above are all post hoc analyses. They are extra steps performed after the model is trained (i.e. when all the model parameters have been determined). When XAI methods are applied to black box models, the black boxes are turned into glass boxes (models that have greater transparency and interpretability). However, since these methods are applied post hoc, often through another ML algorithm, the explanations are approximations at best.

In an attempt to obtain more precise explanations of the black boxes more directly, several glass box ML approaches have been developed recently. From the theory of ensemble learning, it’s been proven that simple models can be combined in a specific way to produce arbitrarily complex models that can fit any data. Glass box ML leverages this principle to construct complex models by ensembling a series of simple, interpretable models. Currently, we are also using the glass box approach to develop a multi-stage model for demand forecasting.
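Here is a minimal sketch of that ensembling principle (not our forecasting model; assuming scikit-learn and NumPy, with synthetic data): each stage is a single depth-1 tree, i.e. one human-readable rule, fit to the residuals of the stages before it, so the combined model stays decomposable.

```python
# Glass box idea in miniature: an additive ensemble of depth-1 regression trees,
# where every stage is a single interpretable if/else rule fit to the residuals.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=800, n_features=5, noise=5.0, random_state=0)

learning_rate, stages = 0.1, []
pred = np.full(len(y), y.mean())                     # stage 0: just the mean
for _ in range(200):
    stump = DecisionTreeRegressor(max_depth=1).fit(X, y - pred)
    stages.append(stump)
    pred += learning_rate * stump.predict(X)

# Any stage can be read off as a plain rule, so the whole model stays decomposable.
t = stages[0].tree_
left, right = t.children_left[0], t.children_right[0]
print(f"stage 1: if x[{t.feature[0]}] <= {t.threshold[0]:.3f} "
      f"add {learning_rate * t.value[left][0][0]:+.3f}, "
      f"else add {learning_rate * t.value[right][0][0]:+.3f}")
```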

Conclusion

Ultimately, while black box models are difficult for the human brain to interpret, they are all explainable with the help of sophisticated analytics and algorithms. The growing number of methods and ML frameworks developed by the XAI community is allowing us to look inside the black boxes, turning them into glass boxes. Today, XAI is proving that the infamous “black box problem” is really not a problem after all. Business leaders who continue to use the black-box nature of AI as a scapegoat are essentially forgoing an efficient and reliable way to optimize their business decisions over something that is only a problem of the past.

DISCLAIMER: The content of this article was first published by AITHORITY. Here is the latest revision and a current rendition of the raw manuscript submitted to the publisher. It differs slightly from the originally published version.