What do you do when the model is underperforming? When a model's performance does not meet our expectations, we usually spend time searching for flaws: selecting and analyzing the cases where it failed, trying to understand why. Then we apply more robust solutions, train, test, and repeat. Sometimes we succeed, but in other cases the model's performance does not improve, no matter how hard we try.
What to do? The temptation to give up grows with the number of failed attempts. Since trying to fix the model's defects didn't lead to success, why not try the opposite? Focus on the cases where your model succeeds. Select those cases, analyze them, and measure the value they contain. Do not toss your model into the garbage bin because it misses some cases. Instead, take advantage of the ones it gets right!
First of all, analyze your predictions! When the model is underperforming, the predictions might be distributed in two ways:
Completely random: no matter the range of scores you are selecting, the percentage of positives and negatives is similar to the global population. When this happens, the model didn’t learn anything and you need to rethink the learning strategy.
Accurate at the tails and random at the center: this is the most common case. The model correctly predicts the instances that are strong positives or strong negatives, but it fails to categorize the instances whose features are correlated with both classes. In other words, the model struggles to find a good boundary between the classes. If that's the case, here is the solution: instead of defining a single boundary, define two, one for the positive tail and another for the negative tail. If the prediction score is under the negative boundary or over the positive boundary, leave the decision with the model; otherwise, add a human in the loop and pass the case to them.
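The two-boundary routing can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation; the boundary values and the label names are hypothetical and would be tuned on a validation set:

```python
# Hypothetical boundaries; in practice, tune them on a validation set
# so that each tail meets your target precision.
NEG_BOUNDARY = 0.2
POS_BOUNDARY = 0.8

def route_prediction(score):
    """Keep confident tail predictions with the model and escalate
    the uncertain middle to a human reviewer."""
    if score <= NEG_BOUNDARY:
        return "auto_negative"   # model decides: negative
    if score >= POS_BOUNDARY:
        return "auto_positive"   # model decides: positive
    return "human_review"        # uncertain middle: pass to a human
```

A score of 0.05 stays with the model as a negative, 0.95 as a positive, and 0.5 is escalated to the human in the loop.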
The tails analysis should focus on two main factors:
the performance of the model on the tails – extract the visualization of the Sidekick KPI over the inclusion rate (the inclusion rate decreases when the positive and negative boundaries are pushed to the extremes)
the opportunity size on the tails – extract the visualization of the Hero KPI over the inclusion rate.
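As a minimal sketch of how the x-axis of both visualizations is computed, here is an illustrative inclusion-rate function; the scores and boundaries are made-up assumptions:

```python
def inclusion_rate(scores, neg_boundary, pos_boundary):
    """Fraction of cases left with the model, i.e. falling in the tails.
    Pushing the boundaries toward the extremes shrinks this rate."""
    in_tails = [s for s in scores if s <= neg_boundary or s >= pos_boundary]
    return len(in_tails) / len(scores)

scores = [0.05, 0.10, 0.40, 0.50, 0.60, 0.90, 0.95, 0.99]
print(inclusion_rate(scores, 0.2, 0.8))  # 0.625: 5 of the 8 scores are in the tails
```

You would then plot the Sidekick KPI and the Hero KPI against this rate for a sweep of boundary pairs, and pick the pair with the best trade-off.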
If Hero and Sidekick KPIs are new concepts to you, check our Data Ignite Course.
The viability of this approach depends on how you are going to integrate AI into your business. In a filtering integration, where the AI is used to reduce the workload passed to a human, any inclusion rate higher than 0% can be profitable. However, a replacing integration, where the aim is to replace an existing process with an AI system, might require a higher inclusion rate to become profitable. But when can we use it in practice?
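As a back-of-the-envelope illustration of why a replacing integration needs a minimum inclusion rate, here is a hypothetical break-even check; the function name and all costs and volumes are made-up assumptions, not figures from any real deployment:

```python
def breakeven_inclusion_rate(ai_fixed_cost, human_cost_per_case, n_cases):
    """Minimum fraction of cases the AI must handle so that the human
    labor it replaces covers the AI system's fixed cost."""
    return ai_fixed_cost / (human_cost_per_case * n_cases)

# e.g. a system costing 5,000/month, human review at 2.00 per case,
# and 10,000 cases per month:
print(breakeven_inclusion_rate(5000, 2.0, 10000))  # 0.25
```

Under these toy numbers the AI must confidently handle at least 25% of cases to pay for itself, whereas a filtering integration has no such floor.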
In Healthcare, most diagnosis-support use cases are of the filtering type. Building a fully autonomous AI system for disease diagnosis can be very challenging, or even unrealistic, since in many cases specialists' opinions are not unanimous. With filtering, the AI screens a small segment of patients with high confidence in the decision, while the remaining patients are forwarded to a doctor.
Hot and cold leads are a type of use case where the hottest and coldest leads are identified so that further action can be taken on them, or on the rest. For example, if you are confident that a segment of leads is going to churn no matter what, you might avoid investing customer-service effort in those leads or clients. On the other hand, if you are confident that a lead is going to convert, you don't need to invest more resources in convincing them, and you could already plan a strategy to upsell other products. Since these use cases depend on identifying segments, identifying the tails is a useful and profitable strategy.
Recommendation systems are another type of use case that can benefit from this approach. The models usually perform well when they have a history for the client, but they tend to fail on new customers without an action history (the cold-start problem). When this happens, select the customer segment where the model performs well and start there.
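That segment selection can be sketched as a simple fallback rule. The threshold and the strategy names here are hypothetical; a popularity baseline is just one common cold-start fallback, assumed for illustration:

```python
MIN_HISTORY = 5  # hypothetical cutoff, chosen from the performance analysis

def choose_strategy(n_past_interactions):
    """Serve the personalized model only where it is reliable; fall back
    to a simple popularity baseline for cold-start customers."""
    if n_past_interactions >= MIN_HISTORY:
        return "personalized_model"
    return "popularity_baseline"
```

A customer with ten past interactions gets the model; a brand-new customer gets the baseline until enough history accumulates.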
If you think this solution is not profitable enough for your business, don't see it as the end of the road but as the road itself. After selecting the segment where your AI is reliable, you can put the solution into production and use the cash flow it generates to invest in data acquisition, data labeling, and deeper model exploration. This way, you only need an initial investment to create a simple solution, and the rest of the research process can pay for itself.
So, keep in mind:
Bad models don’t have to be useless models – explore their potential before flushing away all the work you invested in them
Hero KPIs are much better indicators than Sidekick KPIs
If it’s still not a good end, see it as the means.