Customizing the Infinitely Configurable AI: Your Data as Configuration Parameters
In today’s highly volatile business environment, adopting artificial intelligence (AI) has become indispensable to business success. AI offers innovative solutions for streamlining operations, enhancing customer experiences, and driving better decision-making. By leveraging AI technologies, companies can stay competitive, adapt to rapid market changes, and unlock new growth opportunities. AI is not only a tool for ensuring a business’s survival under uncertainty; it’s a means for businesses to thrive in an increasingly dynamic landscape.
Despite this, many enterprises remain reluctant to adopt AI because of a perception that out-of-the-box AI tools are not quite production-ready, and that a company must invest significant time and resources in training the algorithms before its AI performs well. Although this perception is not entirely wrong, companies do need to change their mindset when evaluating and adopting AI solutions. Simply waiting for an algorithm with more acceptable performance, or waiting for the AI industry to mature, is the wrong approach.
Evaluation of AI Requires a Mental Shift
IT leaders often evaluate technologies against a set of business requirements. Many out-of-the-box solutions must be configured and customized to meet the needs of a business, so there are typically thousands of available configurations and settings in any software solution. This is the case for many technologies, from backend ERP and CRM systems to consumer-facing e-commerce platforms. This highly configurable nature is what allows these solutions to meet the requirements of a wide variety of businesses. I call this “meeting the requirement through configuration.”
When a business requirement cannot be met via configuration, the typical conclusion is that the technology is not ready for the business, and the normative response is to hold off on the implementation and wait. This is the right decision to make, until AI comes along. When evaluating an AI-based technology, “meeting the requirement through configuration” doesn’t work, because AI is a fundamentally different kind of technology: it’s able to learn and adapt to a business’s operations, and this learning and adaptation process takes time.
AI solutions are, by design, far more versatile than traditional software solutions, so it’s natural to think they would require more configuration and customization to meet the specific needs of any one business. If one adopts this perspective, then AI is essentially an infinitely configurable technology that can adapt to every plausible business scenario. How can one configure and customize such a technology? Wouldn’t it take so many configuration parameters that using AI becomes impractical? Indeed, manually configuring an AI-based technology to suit every company would be infeasible, let alone practical.
The beauty of AI-based technology is that we don’t need to manually configure it for every possible use case. We can think of the AI configuration and customization process in two phases:
- Phase 1 is the conventional manual configuration. This is the same process every technology deployment goes through: configuring and customizing the technology to meet the needs of a specific business, executed manually by the implementation team.
- Phase 2 is essentially a self-configuration process, because it happens automatically in the background as people use the AI and generate usage and feedback data. Learning from this data (which also happens automatically) configures the AI to meet the specific business needs more and more effectively, as sketched below. I call this “meeting the requirement through learning.”
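To make the two phases concrete, here is a minimal, purely illustrative sketch. None of the names below come from an actual product; `ManualConfig`, `record_feedback`, and `retrain` are hypothetical stand-ins for a one-time manual setup and the automatic learning that follows.

```python
# Purely illustrative: the names here (ManualConfig, AIAssistant,
# record_feedback, retrain) are hypothetical, not from any real product.

from dataclasses import dataclass, field

@dataclass
class ManualConfig:
    """Phase 1: settings chosen once by the implementation team."""
    language: str = "en"
    approval_threshold: float = 0.8   # e.g., confidence needed to auto-approve

@dataclass
class AIAssistant:
    config: ManualConfig
    # Phase 2 "configuration" is just accumulated feedback data.
    feedback: list = field(default_factory=list)

    def record_feedback(self, prediction: str, correct_answer: str) -> None:
        # Every user correction becomes, in effect, another configuration parameter.
        self.feedback.append((prediction, correct_answer))

    def retrain(self) -> None:
        # In a real system this would update model weights from self.feedback;
        # here it simply stands in for the automatic learning step.
        print(f"Re-tuning on {len(self.feedback)} feedback records...")

# Phase 1: one-time manual setup.
assistant = AIAssistant(config=ManualConfig(approval_threshold=0.9))

# Phase 2: everyday usage quietly supplies the rest of the "configuration".
assistant.record_feedback(prediction="approve", correct_answer="reject")
assistant.retrain()
```

The point of the sketch is that the implementation team touches only the first block; everything after it is “configuration” supplied automatically by ordinary usage.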
Manual Configuration vs. Automated Learning
Since AI has the capacity to learn, it will adapt to a particular business. This doesn’t mean that AI solutions don’t require any manual configuration. As with any technology, AI solutions are configurable. But relative to all the possible scenarios and use cases that AI is able to handle, the amount of manual configuration required is minimal.
So what’s the tradeoff? The key is learning! From this perspective, the huge amount of feedback data required for learning can be viewed simply as configuration parameters that tune the AI and optimize it to handle a specific set of business requirements. Since the collection of feedback and the learning from that feedback are both automated, the only price we pay is time: the time required to collect enough data for the AI to learn how it should behave in a specific business context. For an infinitely configurable AI that can adapt to practically any business situation, this is a small price to pay.
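As a purely illustrative example of paying that price in time, the toy loop below (with made-up numbers, not drawn from any real system) shows an estimate drifting from a generic out-of-the-box default toward the business-specific behavior as feedback accumulates week after week.

```python
# Illustrative only: a toy online learner showing how accumulated feedback
# ("configuration by data") gradually tunes behavior over time.

import random

random.seed(0)
true_rate = 0.7          # the business-specific behavior the AI must learn
estimate = 0.5           # generic out-of-the-box default before any feedback

for week, n_feedback in enumerate([10, 50, 200, 1000], start=1):
    # Each piece of feedback is one observed outcome (1 = positive, 0 = negative).
    outcomes = [1 if random.random() < true_rate else 0 for _ in range(n_feedback)]
    # Blend the new evidence into the running estimate -- the "learning" step.
    estimate = (estimate + sum(outcomes)) / (1 + n_feedback)
    print(f"week {week}: {n_feedback:>4} feedback records -> estimate {estimate:.2f}")
```

Nothing about the model is configured by hand after the default is set; only the passage of time and the accumulation of feedback move it toward the behavior the business actually needs.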
Although we don’t foresee an AI tool that requires absolutely zero manual configuration in the near future, the amount of manual configuration (in phase 1) will probably continue to decrease while the amount of automatic learning (in phase 2) increases. This is what allows AI to adapt to an ever wider variety of companies across more diverse industries with less and less manual configuration. It also means that many out-of-the-box AI solutions will not perform well against any particular set of business requirements even after all the manual configuration has been exhausted. What should we do then?
Doing the Counterintuitive
When evaluating an AI-based technology, don’t forget the phase 2 self-configuration process. This takes time, but it allows the AI to configure itself through feedback and learning to suit the business better. Slowing down or stopping the deployment is unwise, because doing so prevents the AI from learning how to adapt. And the longer one waits, the less manual configuration will be available in the future, because the trend is to reduce the amount of manual configuration (i.e., phase 1) and push more and more of the customization process into self-configuration through learning (i.e., phase 2). How an AI performs will depend less on how it’s configured and more on the data and feedback that were generated and used to train it. Two AI systems that are manually configured in exactly the same way may behave very differently because of the different feedback data used to train each system.
A smarter move is to deploy the AI within a limited scope and let people use it, so the AI can collect feedback data and learn from it. This is how the AI learns the way human users make decisions in a specific business context. So give the AI a chance to learn and adapt.
With ordinary technology that has no capacity to learn, if it doesn’t work, it shouldn’t be used; using it more is simply a waste of time, since nothing will change as a result. But with AI, one must do the counterintuitive: people need to use the AI more frequently, especially when it doesn’t work. The worse the AI performs, the more we need to use it within that limited scope. Dedicate a small team to test and use the AI in real business situations and push the limits of the tool under evaluation, because this allows the AI to identify problematic scenarios (where it doesn’t perform as expected) faster and to collect more feedback data in less time, so it can learn faster. Working with an ill-performing AI may be slower initially, but this investment of time will pay off later as a huge efficiency gain once the AI has learned.
Conclusion
When we are dealing with a technology that can learn, we must not think of it as a static machine that simply can or cannot perform certain tasks. Instead, treat it more like a new intern or employee who has the potential to learn and improve. Working with them may be slower initially, but once they’ve learned what’s required of them, they can help grow and scale an organization. So when an AI solution fails to make the cut, what it needs is a patient mentor. Those who invest the time to coach this new tool will reap the benefits of AI automation, where the dramatic gains in scale and velocity will outmatch any human employee.
DISCLAIMER: The content of this article was first published by CMS Wire. This is the latest revision of the raw manuscript submitted to the publisher, and it differs slightly from the originally published version.