Reject Option: Your AI Model Has the Right to Remain Silent

Smart Silence, Safer AI

When it comes to AI models, we often expect them to always provide an answer. But what if we could trust them more when they choose to remain silent? This concept, known as the ‘reject option’, allows AI models to abstain from answering when they are not confident, opening up many applications in your business and reducing risk.

Why Should AI Models Abstain?

Just like human experts, AI models can gain our trust when they know when to say, “I don’t know”. This capacity to abstain from answering, known as the ‘reject option’, can be a game-changer for many applications.

Consider a model that diagnoses whether a patient has cancer. Would you trust a model that always gives a definitive ‘yes’ or ‘no’, or one that sometimes says, “I don’t know, let’s get a second opinion”? The same applies to a model monitoring a manufacturing pipeline for quality assurance. A model that can say, “I don’t know, let’s get a technician to check this” can be more reliable than one that always guesses.

How Can You Use the Reject Option in Business?

There are two main strategies for using the reject option in business: filtering and incremental deployment.

Filtering

In the filtering strategy, the AI model automates most of the cases and asks for help with the rest. This help could come from a second AI model or a human expert. This strategy essentially adds an extra stage to your workflow: a fallback option for when the AI model cannot provide an answer.
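As a minimal sketch of this routing logic, assuming a binary classifier that outputs a probability: cases where the model is confident enough are handled automatically, and everything else goes to a fallback. The threshold value here is purely illustrative and should be tuned for your use case.

```python
# Hypothetical filtering workflow: confident cases are automated,
# the rest are routed to a fallback (a second model or a human expert).
# CONFIDENCE_THRESHOLD is an illustrative value, not a recommendation.
CONFIDENCE_THRESHOLD = 0.9

def route_case(probability: float) -> str:
    """Decide who handles a case, given the model's predicted
    probability of the positive class."""
    # Confidence is how far the score is from total uncertainty (0.5)
    confidence = max(probability, 1 - probability)
    if confidence >= CONFIDENCE_THRESHOLD:
        return "ai"        # model is confident enough to decide alone
    return "fallback"      # send to a human expert or a second model
```

For example, `route_case(0.97)` and `route_case(0.03)` are both handled by the AI (high confidence in either direction), while `route_case(0.55)` falls back to a human.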

Incremental Deployment

In the incremental deployment strategy, you start by only covering the top cases where the model is most confident. As you gain trust in the model, you start giving it more and more cases, even if the AI is not that confident about them. This strategy allows for a safer deployment and is particularly useful for initial pilots.
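One simple way to sketch this ramp-up, assuming again a binary classifier with probability outputs: rank all cases by confidence and let the model cover only the top fraction, increasing that fraction as trust grows. The numbers below are illustrative.

```python
import numpy as np

def covered_indices(probs: np.ndarray, coverage: float) -> np.ndarray:
    """Indices of the `coverage` fraction of cases where the model is
    most confident; the rest stay with the current (human) process."""
    confidence = np.maximum(probs, 1 - probs)
    n_covered = int(round(coverage * len(probs)))
    # Sort by descending confidence and keep the top slice
    return np.argsort(-confidence)[:n_covered]

probs = np.array([0.95, 0.60, 0.10, 0.51])
covered = covered_indices(probs, 0.5)  # the two most confident cases
```

Starting a pilot at, say, 20% coverage and raising it in steps gives you a controlled way to expand the model's responsibility.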

How to Compute the Expected Impact?

To compute the expected impact of using the reject option, you need to understand the trade-off between the percentage of times the model abstains and its predictive performance.

You start by computing how much benefit you would get from the model predicting 100% of cases and how much you would lose from the errors. Then, for any given percentage of abstentions, you calculate how much you would gain from the cases the AI automates and how much you would spend on the cases it doesn’t.

Remember, there is always a fixed cost for both the AI (infrastructure) and the human component (salaries), plus a variable cost per case that depends on how often the AI abstains or predicts.

How to Get Models That Can Abstain?

The easiest way to have models that can abstain is by looking at the tails of your predictions and only predicting for those cases. This approach involves setting two thresholds, one for the lower bound and one for the upper bound, and choosing these thresholds to maximize your return.
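A minimal sketch of this two-threshold approach, assuming a binary classifier: predict only in the tails of the score distribution and abstain in the uncertain middle band. The threshold values below are placeholders; in practice you would choose them to maximize your expected return.

```python
from typing import Optional

def predict_with_reject(probability: float,
                        lower: float = 0.1,
                        upper: float = 0.9) -> Optional[int]:
    """Predict only in the tails of the score distribution; abstain
    (return None) in the uncertain middle band."""
    if probability >= upper:
        return 1       # confident positive
    if probability <= lower:
        return 0       # confident negative
    return None        # abstain: score is too close to the middle
```

Here `predict_with_reject(0.95)` returns 1, `predict_with_reject(0.05)` returns 0, and `predict_with_reject(0.5)` abstains.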

However, this approach may not be enough. A more advanced approach involves allowing your model to predict which cases it wants to answer and what the answer should be for those cases. This can be achieved by having two outputs from your model: one indicating whether the model will abstain or not, and the other providing the answer if it’s not abstaining.
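A toy sketch of this two-headed idea, assuming a linear model over three features: a "selection" head decides whether to answer, and a "prediction" head produces the answer when it does. The fixed weights here are purely illustrative; in a real system both heads would be trained jointly (e.g. with a selective-risk objective).

```python
import numpy as np
from typing import Optional

# Illustrative fixed weights; in practice both heads are learned.
W_SELECT = np.array([1.0, -1.0, 0.5])   # head 1: should we answer?
W_PREDICT = np.array([0.5, 2.0, -1.0])  # head 2: what is the answer?

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def selective_predict(x: np.ndarray,
                      select_threshold: float = 0.5) -> Optional[int]:
    """Return the class prediction, or None when the selection head
    decides the model should abstain on this input."""
    p_answer = sigmoid(W_SELECT @ x)     # probability the model answers
    if p_answer < select_threshold:
        return None                      # abstain on this case
    p_positive = sigmoid(W_PREDICT @ x)  # answer from prediction head
    return int(p_positive >= 0.5)
```

With these weights, an input like `[2.0, 0.0, 0.0]` is answered (positive), while `[-2.0, 0.0, 0.0]` triggers an abstention.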


Allowing your AI models to abstain when they are not confident can open up many applications in your business and reduce risk. It’s a strategy worth considering as you plan your AI deployment. If you want to embrace smarter AI decisions, book a strategic meeting with our experts and transform risk into reliability today!

