In medio stat virtus? Not always!

How to create value from mediocre models

The Problem

What do you do when the model is underperforming? When a model’s performance does not meet expectations, we usually spend time searching for its flaws, selecting and analyzing the cases where it failed to understand why it happened. Then we apply more robust solutions, train, test, and repeat. Sometimes we succeed, but in other cases the model’s performance does not improve, no matter how hard we try.

What to do? The temptation to give up grows with the number of failed attempts. Since trying to fix the model’s defects didn’t lead to success, why not do the opposite? Focus on the cases where your model succeeds. Select those cases, analyze them, and measure the value they contain. Do not toss your model into the garbage bin because it misses some cases. Instead, take advantage when it gets them right!


The Solution

First of all, analyze your predictions! When the model is underperforming, the predictions might be distributed in two ways:

  • Completely random: no matter which range of scores you select, the percentage of positives and negatives is similar to that of the global population. When this happens, the model didn’t learn anything and you need to rethink the learning strategy.
  • Accurate at the tails and random at the center: this is the most common case. The model correctly predicts the instances that are strong positives or strong negatives, but fails to categorize instances whose features are correlated with both classes. In other words, the model struggles to find a good boundary between the classes. If that’s the case, here is the solution: instead of defining a single boundary, define two, one for the positive tail and another for the negative tail. If the prediction score is below the negative boundary or above the positive boundary, leave the decision to the model; otherwise, add a human in the loop and pass the case to them.
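As a minimal sketch, the two-boundary routing rule could look like the function below. The boundary values and return labels are illustrative choices, not from the original:

```python
def route_prediction(score, neg_boundary, pos_boundary):
    """Route a prediction based on two decision boundaries.

    Scores below neg_boundary or above pos_boundary are confident
    enough to automate; everything in between goes to a human.
    """
    if score <= neg_boundary:
        return "model:negative"
    if score >= pos_boundary:
        return "model:positive"
    return "human"

# Example: with boundaries at 0.2 and 0.8, only the tails are automated.
decisions = [route_prediction(s, 0.2, 0.8) for s in (0.05, 0.5, 0.95)]
```

The boundaries themselves are tuning knobs: pushing them toward the extremes automates fewer cases, but with higher confidence on the ones that remain.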

The tails analysis should focus on two main factors:

  • the performance of the model on the tails – plot the Sidekick KPI over the inclusion rate (the inclusion rate decreases as the positive and negative boundaries are pushed to the extremes)
  • the opportunity size on the tails – plot the Hero KPI over the inclusion rate.
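To make the trade-off concrete, here is a hedged sketch of how a point on those curves could be computed, using accuracy as a stand-in for the Sidekick KPI and toy scores and labels invented for the example:

```python
def tail_metrics(scores, labels, neg_boundary, pos_boundary):
    """Inclusion rate and accuracy of the automated (tail) segment.

    scores: predicted probabilities; labels: true classes (0/1).
    """
    included = [(s, y) for s, y in zip(scores, labels)
                if s <= neg_boundary or s >= pos_boundary]
    inclusion_rate = len(included) / len(scores)
    if not included:
        return inclusion_rate, None
    correct = sum((s >= pos_boundary) == (y == 1) for s, y in included)
    return inclusion_rate, correct / len(included)

# Sweeping the boundaries toward the extremes: the inclusion rate
# drops while the tail accuracy typically rises.
scores = [0.05, 0.15, 0.25, 0.45, 0.55, 0.75, 0.85, 0.95]
labels = [0, 0, 1, 0, 1, 0, 1, 1]
for lo, hi in [(0.3, 0.7), (0.2, 0.8), (0.1, 0.9)]:
    rate, acc = tail_metrics(scores, labels, lo, hi)
```

On this toy data, widening the human-in-the-loop band from (0.3, 0.7) to (0.1, 0.9) cuts the inclusion rate from 0.75 to 0.25 but lifts tail accuracy to 1.0, which is exactly the curve the analysis above asks you to plot.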

If Hero and Sidekick KPIs are new concepts to you, check our Data Ignite Course.


The Use Cases

The viability of this approach depends on how you are going to integrate AI into your business. In a filtering integration, where the AI is used to reduce the workload passed to a human, any inclusion rate higher than 0% can be profitable. In a replacing integration, where the aim is to replace an existing process with an AI system, a higher inclusion rate may be required to reach profitability. But when can we use it in practice?
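As a rough illustration of the filtering economics, with entirely made-up numbers:

```python
# Illustrative break-even check for a filtering integration.
# All figures below are assumptions for the sake of the example.
cases_per_month = 10_000
cost_per_manual_review = 2.0    # cost of a human handling one case
monthly_system_cost = 3_000.0   # hosting, monitoring, maintenance

def monthly_savings(inclusion_rate):
    """Net savings when the model automates `inclusion_rate` of cases."""
    automated = cases_per_month * inclusion_rate
    return automated * cost_per_manual_review - monthly_system_cost

# The system breaks even once the automated volume covers its running cost.
break_even_rate = monthly_system_cost / (cases_per_month * cost_per_manual_review)
```

With these assumed numbers, any inclusion rate above 15% is profitable for filtering; a replacing integration would add the full cost of the legacy process it removes, shifting that threshold upward.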

In healthcare, most diagnosis-support use cases are of the filtering type. Building an autonomous AI system for disease diagnosis can be very challenging or even unrealistic, since in many cases specialists’ opinions are not unanimous. With filtering, the AI screens a small segment of patients with high confidence in the decision, while the remaining patients are forwarded to a doctor.

Hot and cold leads are a type of use case where the hottest and coldest leads are identified so you can act on them, or on the remaining ones. For example, if you are confident that a segment of leads is going to churn no matter what, you might avoid investing customer-service effort in those leads or clients. On the other hand, if you are confident that a lead is going to convert, you don’t need to invest more resources to convince them, and you can already plan a strategy to upsell other products. Since these use cases depend on segment identification, identifying the tails is a useful and profitable strategy.

Recommendation systems are another type of use case that can benefit from this approach. The models usually perform well when they have a history for the client, but they tend to fail on new customers without an action history – the cold-start problem. When this happens, select the customer segment for which the model performs well and start there.
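One possible sketch of that segment selection, using a hypothetical `n_past_actions` field and an assumed history threshold (both invented for illustration):

```python
# Assumed threshold: customers with fewer past actions are treated as
# cold-start cases and excluded from the initial rollout.
MIN_HISTORY = 5

customers = [
    {"id": "a", "n_past_actions": 12},
    {"id": "b", "n_past_actions": 0},   # cold start
    {"id": "c", "n_past_actions": 7},
]

def reliable_segment(customers, min_history=MIN_HISTORY):
    """Customers with enough history for the recommender to work well."""
    return [c for c in customers if c["n_past_actions"] >= min_history]

segment = reliable_segment(customers)  # put the model in production here first
```

The excluded cold-start customers can fall back to a non-personalized strategy (for example, popularity-based recommendations) until enough history accumulates.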

If you think this solution is not profitable enough for your business, don’t think of it as the end of the road but as the road itself. After selecting the segment where your AI is reliable, you can put the solution into production and use the cash flow it returns to invest in data acquisition, data labeling, and deeper model exploration. This way, you’ll only need an initial investment to create a simple solution, and the rest of the investigation process can fund itself.

So, keep in mind:

  • Bad models don’t have to be useless models – explore their potential before flushing all the work you invested in them
  • Hero KPIs are much better indicators than Sidekick KPIs
  • If it’s still not a good end, see it as the means.

