Increasing Efficiency with Active Learning

How human-machine cooperation can help you do more with less.

The problem: Labeling data is boring (and expensive)

So there you are. You have collected your data, analyzed it, processed it, and built your sophisticated model architecture. After many hours of training and evaluating, you have come to a very unpleasant conclusion: you need more data. Before you readjust your budget to fit the extra data acquisition and labeling, let me introduce you to a way of increasing efficiency with Active Learning!

The Solution: Active Learning

So, what is this magical solution? Well, active learning is the idea that a machine learning algorithm can achieve greater accuracy with fewer training labels if it is allowed to choose the data from which it learns, i.e., if allowed to be curious (Settles, 2009).

In practice, we give a machine learning model our unlabeled data and, from its predictions, sample the data points the model found hardest to predict. Then it’s time to put the most elegant and important machine to work: the human brain. Once we have our sample, we hand it to a human annotator (or oracle), who decides whether the data is worth annotating and labels it. Finally, we retrain the model on the newly annotated sample. And voilà: we hope it works.
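The loop described above can be sketched in a few lines (a minimal pool-based example; the synthetic dataset, model choice, and batch size are illustrative assumptions, and revealing `y` stands in for the human oracle):

```python
# A minimal pool-based active learning loop (sketch, using scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression()
for _ in range(5):
    model.fit(X[labeled], y[labeled])
    # Score the unlabeled pool: lower max probability = less confident.
    probs = model.predict_proba(X[pool])
    uncertainty = 1 - probs.max(axis=1)
    # Pick the 10 hardest points and "ask the oracle" (here: reveal y).
    hardest = np.argsort(uncertainty)[-10:]
    for idx in sorted(hardest, reverse=True):
        labeled.append(pool.pop(idx))

print(f"Labeled {len(labeled)} of {len(X)} points")
```

After five rounds, only 70 of the 1,000 points have been labeled, and each batch was chosen where the model was least confident rather than at random.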

The intuition is that giving the model a small sample of hard-to-predict data can improve model performance almost as much as labeling the entire dataset. So far, the process is pretty straightforward. But you may be left wondering… how do you sample the data, then?

Sampling your data

Like anything in machine learning, there is no one-size-fits-all approach. When it comes to sampling, though, there are a few tried-and-tested strategies worth a go. For example, if you are building a probabilistic classification model, a good measure might be uncertainty. Uncertainty sampling is a very popular strategy based on evaluating how uncertain a model is when predicting a data point. A direct way to obtain this information is to apply the Least Confident method to the predictions or to calculate the prediction entropy. Other methods can also be useful, such as estimating how a given data point would change the model’s predictions (Expected Model Change) or how it would affect the prediction loss (Expected Error Reduction). These methods are trickier and more computationally heavy, so they are not as easy to apply.
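As a concrete illustration, both uncertainty scores can be computed directly from a model’s predicted class probabilities (a minimal sketch; the probability values below are made up):

```python
# Two common uncertainty scores computed from predicted class probabilities.
# `probs` is any (n_samples, n_classes) array, e.g. from predict_proba.
import numpy as np

probs = np.array([
    [0.95, 0.03, 0.02],  # confident prediction -> low uncertainty
    [0.40, 0.35, 0.25],  # ambiguous prediction -> high uncertainty
])

# Least Confident: 1 minus the probability of the most likely class.
least_confident = 1.0 - probs.max(axis=1)

# Entropy: spread of the full distribution (higher = more uncertain).
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

print(least_confident)  # the second point scores higher on both measures
print(entropy)
```

Both measures agree here, but they can rank points differently in the multi-class case: Least Confident looks only at the top class, while entropy accounts for the whole distribution.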

Suppose you have multiple models trained on the dataset you want to label. In that case, you can apply the Query-by-Committee method: for each model, you calculate the predictions and then select the cases where the models disagree the most, essentially allowing them to vote for the data to be labeled. A little democracy in your AI strategy can significantly improve your labeling efficiency. If you want to rig the election, you can always assign different voting weights to each model. We won’t judge.
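A minimal sketch of that voting scheme, using vote entropy as the disagreement measure (the committee predictions below are made up stand-ins for real model outputs):

```python
# Query-by-Committee sketch: select the point where committee members
# disagree the most, measured by the entropy of their votes.
import numpy as np

# Predicted labels from 3 hypothetical committee members for 4 data points.
votes = np.array([
    [0, 0, 0],  # unanimous -> no disagreement
    [0, 1, 0],
    [0, 1, 2],  # maximal disagreement
    [1, 1, 1],
])

def vote_entropy(row, n_classes=3):
    # Fraction of votes each class received, then the entropy of that spread.
    counts = np.bincount(row, minlength=n_classes) / len(row)
    nonzero = counts[counts > 0]
    return -(nonzero * np.log(nonzero)).sum()

disagreement = np.array([vote_entropy(row) for row in votes])
query_idx = disagreement.argmax()  # the point the committee votes to label
print(query_idx)  # -> 2, the three-way split
```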

Another aspect you might want to consider is representativeness. It is easy to see that, when your model gives you the data it finds most difficult to predict, it will probably pick some outliers. This again depends on your specific situation, but you will generally want to give your model data that is representative of the underlying distribution. For example, if you are working with an image dataset with millions of acquisitions, there is a chance that some of the images are pitch black or completely blurry. Your model will have difficulty labeling those examples, but they won’t help improve its performance.
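One common way to fold representativeness in is to weight each point’s uncertainty by its average similarity to the rest of the pool, so isolated outliers score lower even when the model is very unsure about them (an information-density sketch; the data and uncertainty values below are synthetic assumptions):

```python
# Density-weighted sampling sketch: combine uncertainty with similarity
# to the pool, so outliers (think blank or blurry images) lose priority.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(100, 5))
X_pool[0] = 50.0              # an extreme outlier in the pool
uncertainty = rng.uniform(size=100)
uncertainty[0] = 1.0          # the model is maximally unsure about it

# Density = average similarity of each point to every pool point.
density = rbf_kernel(X_pool).mean(axis=1)

# Weight uncertainty by density: informative *and* representative wins.
scores = uncertainty * density
print(scores.argmax())  # not index 0, despite its maximal uncertainty
```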


Practical Considerations

By now, you know what active learning is and how it works. However, there are aspects beyond the theory that you should understand: to take full advantage of this tool, you must know how to apply it and how external factors can affect it.

To err is human…

Active learning’s central tool is human intuition: the ability of the oracle (human annotator) to apply that intuition to a problem. But, as in any experiment, the tools might not behave as expected. Depending on the data being analyzed, the oracle may find some data points difficult to understand. Moreover, if the data consists of, for example, medical images, the oracle may not even have the knowledge to annotate it, since some medical images are difficult to interpret even for professionals. This means that annotations will vary from person to person.

Another important aspect of human nature is that people can be affected by distractions or fatigue. So annotations are subject to different annotators and are also impacted by the person’s surroundings and the time they have spent labeling. Even if the person is focused and knowledgeable, they might still misunderstand the task, which is why it is important to build proper user interfaces and labeling protocols that provide the required information.

Mind the costs!

One might think that reducing the amount of data required to train a model reduces the overall cost of training it. However, part of that cost is simply shifted to the oracle (and to whoever hired them) in the form of human effort, time, and money. Naturally, the oracle’s task should be as effortless as possible, so the objective should be not only to reduce the amount of data to annotate but also to reduce the effort required to annotate it. This is why, in some cases, it can be useful to let the model help by providing “pre-annotations”: its current predictions, which the oracle only needs to confirm or correct.

Knowing when to stop

When using active learning systems, it is important to understand at which point acquiring new data becomes more costly than the errors made by the current model. If it would require excessive resources (e.g., time, money) to generate relatively small gains, it may not be worth continuing. There is a point of diminishing returns, and knowing where it lies is important.
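As a rough sketch, a stopping rule can be as simple as halting when the per-round improvement falls below a threshold tied to your labeling cost (the accuracy numbers and threshold below are illustrative assumptions, not real results):

```python
# Simple stopping rule sketch: stop labeling when the accuracy gained
# per round no longer justifies the cost of annotating another batch.
accuracies = [0.70, 0.78, 0.83, 0.85, 0.858, 0.859]  # validation accuracy per round
min_gain = 0.005  # below this improvement, another batch isn't worth it

stop_round = None
for i in range(1, len(accuracies)):
    if accuracies[i] - accuracies[i - 1] < min_gain:
        stop_round = i
        break
print(stop_round)  # -> 5: the last round gained only 0.001
```

In practice the threshold should reflect what an error actually costs you relative to what an annotated batch costs, but the shape of the rule stays the same.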

It’s time to listen to your AI!

Now that you know about active learning, give it a try! Let the model choose the data for you, while you sit and relax. Then spend some time annotating that data while the model sits and relaxes. AI is a two-way street, and you’ll find that human-machine collaboration can significantly boost your project’s efficiency.

If you want to learn more about using model insights to improve your projects, feel free to contact me, and we can discuss what solution is best for you!

 
