Stop removing outliers just because!

Learn how to tell your model to pay attention to outliers


Outliers are data points that stand out for being different from the remaining data distribution. An outlier can be:

  • An odd value in a feature
  • A data point distant from the centroid of the data
  • A data point in a region of low density, but between areas of high density

If you have been working in data science, you are already familiar with the concept, and you have probably integrated different methods into your pipelines to detect, transform, or even remove outliers from your data.

If so, be careful! Outliers do not follow the data distribution, so you should not pretend they do by removing inconvenient data points or transforming their features to bring them closer to the rest of the data. You do need to handle them so that nonsensical values don't disturb your pipeline. Still, instead of throwing that information away, you can let the model know those data points are outliers. The two main questions here are: Why? and How?

Why? Because you never know the cause of an outlier. It can represent either an error in the data acquisition process or a real anomaly in your population. This is highly relevant when dealing with Fraud Detection, Predictive Maintenance, or Compliance Validation situations. In these use cases, you want to detect the odd values (outliers) to prevent further risks.

How? By creating new features that represent how odd a data point is. If you make this information clear to the model, it will be able to detect the outliers by itself. But if outliers can take different forms (as seen in the figure above), what features can represent their oddness? Find the answer in the next section!


Features for Outliers

Mahalanobis Distance

You can use different distance metrics to compute how far a data point is from the centroid. We recommend the Mahalanobis distance since it is better at handling multivariate outliers, which result from unusual combinations of multiple variables. For example, consider these three variables: weight, height, and gender. A height of 150 cm is not that unusual in the Portuguese female population, and a weight of 90 kg is not uncommon in the Portuguese male population. However, a Portuguese female measuring 150 cm and weighing 90 kg would be very unusual.
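As a minimal sketch of this idea, the snippet below builds a synthetic two-feature dataset (standing in for height and weight, with invented means and covariance) and computes each point's Mahalanobis distance to the centroid as a new feature. A point whose individual values are plausible but whose combination is not gets a large distance:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
# Toy dataset: two positively correlated features (think height in cm, weight in kg)
X = rng.multivariate_normal(mean=[165, 70], cov=[[100, 40], [40, 90]], size=500)

centroid = X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))

# New feature: Mahalanobis distance of each point to the centroid
md = np.array([mahalanobis(x, centroid, inv_cov) for x in X])

# A short, heavy individual: each value alone is plausible,
# but the combination goes against the correlation in the data.
odd_point = np.array([150.0, 95.0])
odd_md = mahalanobis(odd_point, centroid, inv_cov)
```

Unlike the plain Euclidean distance, the Mahalanobis distance accounts for the covariance between variables, so it flags unusual combinations even when each value is individually common.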

Density Estimation


Previously, we saw a feature that measures the distance to the centroid, but what if the data distribution has a shape like the one in the figure below?

In this case, an outlier can be a data point located in the regions where the density distribution has a depression, no matter the distance to the centroid. So a good feature to represent it would be the data density in the neighborhood of the data point.

You can use methods like KDE (Kernel Density Estimation) to estimate the density. However, this method can be too computationally expensive. So we propose a more straightforward and cheaper method: binning. 

There are two ways of using binning to estimate density distribution:

  • Use bins with equal widths: Split the data into equal-width bins and compute the density of each bin. The fewer points the bin has, the more abnormal the data points inside it are.
  • Use bins with equal frequencies: Split the data into equal-frequency bins and compute the width of each bin. The wider the bin, the more abnormal the data points inside it are.


Autoencoders Reconstruction

Training an autoencoder with your data lets it learn the distribution of the different variables and the relationships between them. Then, when the autoencoder receives a data point that deviates from the remaining data, it won't be able to reconstruct that data point correctly.

A good feature to represent outliers would be the distance between the input, X, and the output, X’ (e.g., cosine distance). Higher distances will be correlated with odder data points.
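As a minimal sketch, the snippet below uses scikit-learn's `MLPRegressor` trained to output its own input, so the narrow hidden layer acts as an autoencoder bottleneck. This is an illustration on synthetic data, not a production setup; a real deployment would typically use a deep-learning framework. The per-row cosine distance between the input and its reconstruction is the outlier feature:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics.pairwise import paired_cosine_distances

rng = np.random.default_rng(0)
# Synthetic data: 4 features that actually live on a 2-dimensional structure
Z = rng.normal(size=(500, 2))
X = np.hstack([Z, Z @ np.array([[1.0, -0.5], [0.5, 1.0]])])

X_scaled = StandardScaler().fit_transform(X)

# A tiny "autoencoder": the network is trained to reproduce its own input,
# forcing the 2-unit bottleneck layer to capture the data structure.
ae = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(X_scaled, X_scaled)

X_rec = ae.predict(X_scaled)
# New feature: reconstruction distance between input X and output X'
recon_dist = paired_cosine_distances(X_scaled, X_rec)
```

Points the model reconstructs poorly (large `recon_dist`) deviate from the structure the autoencoder learned, which is exactly the oddness signal you want to expose.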


Final Remarks

Now that you know how to build features that expose outliers, you have a new trick for detecting possible frauds, anomalies, or errors without needing to collect data for all those exceptions. Here, we presented three different ways to do so.

For more ideas on how to get the most out of your data, subscribe to our newsletter below and stay tuned.

