Category: Machine Learning

You Have the Right to Remain Silent

The Miranda warning protects us from self-incrimination. You have the right to remain silent. Anything you say can and will be used against you. If we hold ML models accountable for their predictions, shouldn’t we at least grant them that right? Can we expect ML models to know everything? Surely not! Moreover, it would be […]

Written by on Aug 2, 2021

An Introduction to Multiple Instance Learning

Multiple Instance Learning (MIL) is a form of weakly supervised learning in which training instances are arranged in sets, called bags, and a label is provided for the entire bag rather than for the individual instances. This makes it possible to leverage weakly labeled data, which arises in many business problems because labeling data is often costly: Medical […]
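The bag-level setup can be sketched with a toy example under the standard MIL assumption (a bag is positive iff at least one of its instances is positive). The instance scorer below is a hypothetical stand-in for a trained instance-level model, and the vectors are made up for illustration:

```python
import numpy as np

# Two bags of instance feature vectors; labels exist only at the bag level.
bags = [
    np.array([[0.1, -0.2], [0.0, 0.3]]),             # all instances look negative
    np.array([[0.1, 0.0], [2.0, 1.0], [-0.5, 0.2]]), # one strongly positive instance
]
bag_labels = [0, 1]

def instance_scorer(bag):
    # Stand-in scoring function (an assumption, not a real model):
    # higher score means the instance looks more positive.
    return bag.sum(axis=1)

def predict_bag(bag, threshold=1.5):
    # Max-pooling over instance scores: under the standard MIL
    # assumption, the most positive-looking instance decides the bag.
    return int(instance_scorer(bag).max() > threshold)

preds = [predict_bag(b) for b in bags]
print(preds)  # [0, 1]
```

Max-pooling is only one of several pooling choices (mean and attention-based pooling are common alternatives), but it maps most directly onto the "positive iff any instance is positive" assumption.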

Written by on May 18, 2021

An AI-Based Image Content Retrieval System

Similarity measurement is the basis of any information retrieval, management, or data mining system. Both in industry and in the scientific community, similarity detection has proven extremely useful across many different use cases. Over time, the information available on the internet has grown exponentially, making it harder […]
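A minimal sketch of the retrieval idea: represent each item as an embedding vector and rank the database by cosine similarity to a query. The vectors here are invented for illustration; in a real content-retrieval system they would come from a trained model:

```python
import numpy as np

# Hypothetical database of item embeddings (one row per item).
database = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
])
query = np.array([0.1, 1.0, 0.0])

def cosine_similarity(q, db):
    # Normalize query and database rows, then take dot products:
    # cos(q, d) = (q . d) / (|q| |d|)
    q = q / np.linalg.norm(q)
    db = db / np.linalg.norm(db, axis=1, keepdims=True)
    return db @ q

scores = cosine_similarity(query, database)
ranking = np.argsort(-scores)  # indices of the best matches first
print(ranking.tolist())  # [2, 1, 0]
```

At scale, the exhaustive dot product would be replaced by an approximate nearest-neighbor index, but the similarity measure itself stays the same.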

Written by on Mar 7, 2021

Embedding Domain Knowledge

In the good old days, working as a Machine Learning Engineer meant spending 95% of the time on feature engineering and 5% on training models with the extracted features. This was a labor-intensive and time-consuming process that usually led to inflexible proofs of concept that could hardly be adapted to new settings. Fortunately, Deep […]

Written by on Feb 17, 2021

Difficult Targets to Optimize: the ROC AUC

In many binary classification problems, especially in domains with highly unbalanced classes (such as medicine and rare-event detection), we need to make sure our model does not become too biased toward the predominant class. Thus, you may have heard that accuracy is not a good metric for validating classifiers on unbalanced […]
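The problem with accuracy on unbalanced data can be shown in a few lines: a classifier that always predicts the majority class scores high accuracy but chance-level ROC AUC. The rank-based AUC below (probability that a random positive outscores a random negative, ties counting half) is a small hand-rolled sketch, not any particular library's implementation:

```python
def roc_auc(y_true, scores):
    # Rank-based AUC: fraction of (positive, negative) pairs where
    # the positive is scored higher; ties contribute 0.5.
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [0] * 95 + [1] * 5   # 95% negatives: a heavily unbalanced dataset
y_pred = [0] * 100            # degenerate model: always predict the majority class
scores = [0.5] * 100          # its uninformative scores

acc = sum(yt == yp for yt, yp in zip(y_true, y_pred)) / len(y_true)
auc = roc_auc(y_true, scores)
print(acc)  # 0.95 — looks great
print(auc)  # 0.5  — no discrimination at all
```

AUC is insensitive to class balance because it only compares the relative ranking of positives and negatives, which is exactly why it is a more honest target here, and also why it is awkward to optimize directly.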

Written by on Dec 18, 2020

Explainable AI in Healthcare

Transparency is of the utmost importance when AI is applied to high-stakes decision problems, where additional information on the underlying process, beyond the output of the model, may be required. Taking the automation of loan attribution as an example, a client whose loan is denied will surely want to know why that happened […]

Written by on Nov 24, 2020

Embedding Domain Knowledge for Estimating Customer Lifetime Value

With the rise of Deep Neural Networks in the ML community, we have observed an increasingly common fit-predict approach, where AI practitioners don’t take the time to think about the domain knowledge that is already available and how to embed that knowledge in their models. In this blog post, we will cover how we created […]

Written by on Apr 6, 2020

Appendix: Embedding Domain Knowledge for Estimating Customer Lifetime Value

This is an appendix to the blog post Embedding Domain Knowledge for Estimating Customer Lifetime Value. We describe some alternatives we considered for solving the proposed problem but that were not ultimately implemented. First, let’s assume we have a pre-trained model for estimating the probability of the target and . Estimating Lifetime Value using […]

Written by on Apr 6, 2020

Objectively Estimating Data Quality

In Artificial Intelligence, it is important to measure the quality of the data we intend to use. For instance, if we want to classify a cervix image according to the degree of cancer, how do we know whether that image follows the acquisition protocol and can be used for diagnosing the patient [1], so […]

Written by on Feb 27, 2020