The Black Box Problem: A Convenient Scapegoat


In the increasingly digital world of the post-pandemic age, companies are facing a customer experience (CX) challenge. Because CX is determined by the difference between what businesses deliver and what their customers expect, it’s summarized in a simple equation:

CX = brand delivery – customer expectation

Hence, a neutral CX results when brands deliver exactly what their customers expect. A positive CX (a delighted customer) results when brands deliver more than the customer expects, while a negative CX (a disappointed customer) ensues when brands deliver less than the customer expects.
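The equation above can be sketched as a trivial function. The 0-to-10 scoring scale here is a hypothetical choice for illustration, not something prescribed by the article:

```python
def cx_score(brand_delivery: float, customer_expectation: float) -> float:
    """CX = brand delivery - customer expectation (scores on a 0-10 scale)."""
    return brand_delivery - customer_expectation

# Delivery matches expectation -> neutral CX
assert cx_score(8.0, 8.0) == 0.0
# Delivery exceeds expectation -> positive CX (delighted customer)
assert cx_score(9.0, 7.0) > 0
# Delivery falls short -> negative CX (disappointed customer)
assert cx_score(6.0, 8.0) < 0
```

The hard part, as the next paragraphs argue, is not the subtraction but estimating the second term.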

Although companies have largely perfected the delivery of their products and services, customer expectation is very difficult to gauge without the tacit intelligence of a human being. Customer expectations are volatile and change rapidly depending on what customers have seen in their digital environment.

Fortunately, artificial intelligence (AI) is able to mimic human decisions with high accuracy when it's trained with sufficient data. Moreover, digital channels are well suited to capturing the massive amounts of data needed to train an AI. Therefore, AI is becoming a crucially important tool for businesses to gauge customer expectations online, where the customer is invisible. This, in turn, is a prerequisite to providing a consistently good CX.

Not Everything You Can’t Explain is a Black Box

Despite the importance of AI for CX, many companies are still reluctant to use it, especially for direct customer engagements. The ultimate cause of this reluctance is fear: fear of an unfamiliar new technology, of an uncertain ROI due to its high total cost of ownership, and of the uninterpretable nature of many AI solutions (a.k.a. the "black box" problem).

The AI black box problem is just a convenient scapegoat

As AI becomes more pervasive in consumer technologies and its performance constantly improves, its business application is also gaining traction. This has led to a growing number of AI SaaS vendors, which keeps the total cost of ownership for business AI very affordable. However, the black box problem continues to be a convenient scapegoat for those who really have no excuse other than the fear of ceding control to something they don't understand, and who are ultimately afraid of change.

In reality, businesses use many technologies that they don't fully understand and can't explain. It's safe to say that every company uses a computer today, but it's doubtful that anyone at that company can explain how a computer works, which requires esoteric knowledge of semiconductor physics. Yet no one is claiming that the computer is a black box and therefore we shouldn't use it. Every business also uses the internet. But can anyone explain how each of the seven layers of the OSI networking model works? Doubtful! Yet the internet is not considered a black box. The list of examples goes on.

Not all AI-based solutions are black boxes

Even though every business currently uses some "black box-ish" technology that it can't explain, many business applications of AI are not black boxes. In fact, many AI solutions use fully interpretable models (e.g. linear models, logistic regression, decision trees, additive models, etc.). Whether a system is AI depends less on what kind of model it uses than on 2 criteria:

  1. Automation: the ability to make proper decisions and/or subsequent actions autonomously.
  2. Learning: the ability to learn and adapt to improve its performance over time.

Therefore, even a simple linear model can be the core model of an AI system, provided that it is re-trained frequently enough to capture the essential dynamics of the data it's modeling, so that it can automate decisions and actions based on the continuing influx of data.
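As a minimal sketch of this idea, the toy class below (entirely hypothetical, not from the article) wraps a one-variable linear model that refits itself on a sliding window of recent observations. Refitting on fresh data is the "learning" criterion; acting on the model's prediction is the "automation" criterion:

```python
from collections import deque

class TinyLinearAI:
    """A 1-D linear model (y = a*x + b), refit by least squares on a
    sliding window of recent observations so it adapts as data drifts."""

    def __init__(self, window: int = 50):
        self.data = deque(maxlen=window)   # keep only recent samples
        self.a, self.b = 0.0, 0.0

    def observe(self, x: float, y: float) -> None:
        self.data.append((x, y))
        self._refit()                      # "learning": re-train on new data

    def _refit(self) -> None:
        n = len(self.data)
        if n < 2:
            return
        sx = sum(x for x, _ in self.data)
        sy = sum(y for _, y in self.data)
        sxx = sum(x * x for x, _ in self.data)
        sxy = sum(x * y for x, y in self.data)
        denom = n * sxx - sx * sx
        if denom == 0:
            return
        self.a = (n * sxy - sx * sy) / denom
        self.b = (sy - self.a * sx) / n

    def decide(self, x: float) -> float:
        return self.a * x + self.b         # "automation": act on the prediction

model = TinyLinearAI(window=20)
for x in range(10):                        # old regime: y = 2x + 1
    model.observe(x, 2 * x + 1)
assert abs(model.decide(5) - 11) < 1e-9

for x in range(20):                        # regime shift: y = 3x
    model.observe(x, 3 * x)
assert abs(model.decide(5) - 15) < 1e-9    # the model has adapted
```

The model itself is perfectly interpretable (two parameters, `a` and `b`), yet the system as a whole automates decisions and keeps learning, which is the point being made above.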

In practice, many businesses and industries have limited data availability, computing resource constraints, response time requirements, or compliance obligations that preclude them from using more complex models. So they can only use simple interpretable models when developing their AI solutions. This, however, doesn’t make them any less qualified as an AI solution.

What Makes a Black Box… “Black”?

So what kinds of AI systems are truly black boxes? It is generally accepted that very complex models (e.g. deep neural networks, random forests, generative adversarial networks, gradient boosted machines, etc.) are essentially black boxes. These models are considered black boxes because they all have 2 properties:

  1. High dimensionality: they have huge numbers of model parameters
  2. Nonlinearity: they have nonlinear input-output relationships

Black box models are difficult to interpret because they have vast numbers (thousands to millions, or more) of model parameters. An extreme example is GPT-3, a deep learning transformer-based language model with 175 billion parameters; its recent successor, GPT-4, reportedly has over a trillion. Moreover, the relationships between the inputs and outputs are nonlinear. This means a fixed change in any input can translate to an arbitrary change in the output: sometimes huge, sometimes tiny.

[Figure: linear vs. nonlinear input-output relationship]

Note that having bazillions of parameters alone does not make a model uninterpretable if the input-output relationships are linear. In that case, any change in any number of inputs always translates to a proportional change in the output in a consistent manner, because sums, scalar multiples, and compositions of linear functions are still linear. Likewise, nonlinearity alone is not a problem if there are only a few nonlinear relationships to track. However, our brains are simply not capable of keeping track of a large number of nonlinear relationships at once.
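The contrast can be seen with two toy functions (chosen here purely for illustration): for a linear function, the same input step always produces the same output change, no matter where you start; for a nonlinear one, the same step can produce wildly different changes:

```python
import math

def linear(x: float) -> float:
    return 3.0 * x + 2.0          # slope is constant everywhere

def nonlinear(x: float) -> float:
    return math.exp(x)            # slope depends on where you are

dx = 1.0
# Linear: a step of dx changes the output by exactly 3*dx, anywhere.
assert linear(1 + dx) - linear(1) == linear(100 + dx) - linear(100)

# Nonlinear: the same step of dx produces vastly different changes.
small = nonlinear(0 + dx) - nonlinear(0)      # change near x = 0
large = nonlinear(10 + dx) - nonlinear(10)    # change near x = 10
assert large / small > 10_000
```

Scale this up to millions of parameters and the inconsistency compounds, which is why complexity plus nonlinearity defeats human interpretation.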

[Figure: a large number of linear vs. nonlinear relationships]


What makes a black box model uninterpretable is complexity, nothing more. The black box problem is our human brain's inherent inability to distill such complex models down to something simple enough to explain in words. But not everything you don't understand or can't explain is a black box, and that includes many AI technologies. In fact, many business AI solutions use simple models and are perfectly interpretable. So the next time you hear naysayers rejecting AI because of the black box problem, think twice and ask yourself: are you willing to risk an inferior CX for your customers over the immaterial fear of a convenient scapegoat?

DISCLAIMER: The content of this article was first published by AITHORITY. Here is the latest revision and a current rendition of the raw manuscript submitted to the publisher. It differs slightly from the originally published version.