So there you are. You have collected your data, analyzed it, processed it, and built your sophisticated model architecture. After many hours of training and evaluating, you have come to a very unpleasant conclusion: you need more data. Before you readjust your budget to fit the extra data acquisition and labeling, let me introduce you to a way of increasing efficiency with Active Learning!
So what is this magical solution?
Well, active learning is the idea that a machine learning algorithm can achieve greater accuracy with fewer training labels if it is allowed to choose the data from which it learns, i.e., if allowed to be curious (Settles, 2009).
In practice, we give a machine learning model our unlabeled data and, from its predictions, sample the data points it found hardest to predict. Then it’s time to put the most elegant and important machine to work: the human brain. Once we have our sample, we give it to a human annotator (or oracle), who decides whether or not each data point is worth annotating and labels it. Finally, we feed the newly annotated sample back to the model. And voilà: we hope it works.
The intuition is that giving the model a small sample of hard-to-predict data can improve model performance just as much as giving it the entire dataset. So far, the process is pretty straightforward. But you have been left wondering… how do you sample the data then?
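The loop described above fits in a few lines. Here is a minimal sketch using scikit-learn, where the held-back original labels stand in for the oracle; the names (`pool`, `labeled`, the batch size of 10) are illustrative choices, not a fixed recipe:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic "dataset": a small labeled seed plus a large unlabeled pool.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(range(20))        # indices we have labels for
pool = list(range(20, 500))      # indices still "unlabeled"

model = LogisticRegression(max_iter=1000)

for _ in range(5):               # five query rounds
    model.fit(X[labeled], y[labeled])
    # Score the pool: a lower max probability means a less confident prediction.
    proba = model.predict_proba(X[pool])
    confidence = proba.max(axis=1)
    # Query the 10 hardest points and "ask the oracle" (here: reuse y).
    hardest = np.argsort(confidence)[:10]
    queried = [pool[i] for i in hardest]
    labeled.extend(queried)
    pool = [i for i in pool if i not in queried]

print(len(labeled))  # 20 seed labels + 5 rounds * 10 queries = 70
```

In a real project, the line that reuses `y` would be replaced by sending the queried examples to your annotation tool.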
Sampling your data
Like anything in machine learning, there is no one-size-fits-all approach. When it comes to sampling, though, there are a few tried and tested solutions worth a go. For example, if you are building a probabilistic classification model, a good measure might be uncertainty. Uncertainty sampling is a very popular strategy based on evaluating how uncertain a model is when predicting a data point. Two direct ways to obtain this information are the least-confidence method (sampling the points whose most likely class has the lowest predicted probability) and prediction entropy. Other methods can also be useful, such as estimating how much a given data point would change the model if labeled (Expected Model Change) or how much it would reduce future prediction error (Expected Error Reduction). These methods are trickier and more computationally heavy, so they are harder to apply in practice.
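Both uncertainty scores mentioned here are short functions over a model’s predicted class probabilities. A minimal sketch (the function names are just illustrative):

```python
import numpy as np

def least_confidence(proba: np.ndarray) -> np.ndarray:
    """Uncertainty = 1 minus the probability of the most likely class."""
    return 1.0 - proba.max(axis=1)

def prediction_entropy(proba: np.ndarray) -> np.ndarray:
    """Shannon entropy of the predicted class distribution."""
    p = np.clip(proba, 1e-12, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=1)

# A confident prediction scores low; an uncertain one scores high.
proba = np.array([[0.95, 0.05],   # confident
                  [0.55, 0.45]])  # uncertain
print(least_confidence(proba))    # ~[0.05, 0.45]
print(prediction_entropy(proba))
```

Either score can be used to rank the unlabeled pool and query the highest-scoring points.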
Suppose you have multiple models trained on the dataset you want to label. In that case, you can apply the Query-by-Committee method, where, for each model, you calculate the predictions and then select the cases where the models disagree the most—essentially allowing them to vote for the data to be labeled. A little democracy in your AI strategy can significantly improve your labeling efficiency. If you want to rig the election, you can always attribute different voting weights for each model. We won’t judge.
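One common way to quantify how much the committee disagrees on a sample is vote entropy. A toy sketch, assuming hard class votes from each model (the helper name is hypothetical):

```python
import numpy as np

def vote_entropy(votes: np.ndarray, n_classes: int) -> np.ndarray:
    """Committee disagreement: entropy of the vote distribution per sample.

    `votes` has shape (n_models, n_samples); each entry is a predicted class.
    Unanimous samples score 0; evenly split samples score highest.
    """
    scores = np.zeros(votes.shape[1])
    for c in range(n_classes):
        frac = (votes == c).mean(axis=0)   # fraction of models voting class c
        nz = frac > 0
        scores[nz] -= frac[nz] * np.log(frac[nz])
    return scores

# Three models vote on four samples; sample 2 splits the committee.
votes = np.array([[0, 1, 0, 1],
                  [0, 1, 1, 1],
                  [0, 1, 0, 1]])
disagreement = vote_entropy(votes, n_classes=2)
query = int(disagreement.argmax())  # the most contested sample
print(query)
```

Weighted voting (the “rigged election”) would replace the `.mean(axis=0)` with a weighted average of the models’ votes.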
Another aspect you might want to consider is representativeness. When your model gives you the data it finds most difficult to predict, it will probably pick some outliers. This again depends on your specific situation, but you will generally want to give your model data that is representative of the underlying data distribution. For example, if you are working with image data with millions of acquisitions, chances are some of the images are pitch black or completely blurry. Your model will have difficulty predicting those examples, but labeling them won’t help improve its performance.
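One way to encode this is density-weighted sampling: multiply each point’s uncertainty by its average similarity to the rest of the pool, so isolated outliers (like those black or blurry images in feature space) score low. A toy sketch, assuming an RBF similarity over whatever features you use:

```python
import numpy as np

def density_weighted_scores(X_pool, uncertainty, beta=1.0):
    """Down-weight outliers by an average-similarity (density) term.

    Points far from everything else get a low density and therefore a
    low final score, even if the model is very unsure about them.
    """
    sq_dists = ((X_pool[:, None, :] - X_pool[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-sq_dists)          # RBF similarity to every other point
    density = sim.mean(axis=1)
    return uncertainty * density ** beta

# Four 2-D points: three clustered, one far-away outlier the model
# is most uncertain about.
X_pool = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
uncertainty = np.array([0.4, 0.5, 0.45, 0.9])
scores = density_weighted_scores(X_pool, uncertainty)
print(int(scores.argmax()))  # picks a cluster point, not the outlier
```

The `beta` parameter controls how aggressively density overrides raw uncertainty.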
So far, you already know what active learning is, and how it works. However, there are aspects beyond the theory that you should understand. To take full advantage of this tool, you must know how you should apply it and the ways it can be affected by external factors.
To err is human…
Active learning’s central tool is human intuition: the ability of the oracle (human annotator) to apply that intuition to a problem. But, as with any experimental tool, it might not work as expected. Depending on the data being analyzed, the oracle may find some data points difficult to understand. Moreover, if the data consists of, for example, medical images, the oracle may not even have the knowledge to annotate it, as some medical images are difficult to interpret even for professionals. This means that annotations will vary from person to person.
Another important aspect of human nature is that people can be affected by distractions or fatigue. So annotations are subject to different annotators and are also impacted by the person’s surroundings and the time they have spent labeling. Even if the person is focused and knowledgeable, they might still misunderstand the task, which is why it is important to build proper user interfaces and labeling protocols that provide the required information.
Mind the costs!
One might think that reducing the amount of data required to train a model reduces the overall cost of training that model. However, that cost is being paid by the oracle (and by the person that hired them), in the form of human effort, time (and money). Naturally, the task of the oracle should be as effortless as possible, so the objective should not only be to reduce the amount of data to annotate, but also to reduce the effort required to annotate it. This is why, in some cases, it can be useful to let the model help, by providing “pre-annotations”, or a prediction.
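A simple way to provide such pre-annotations is to suggest a label only when the model is confident enough, so the annotator confirms the easy cases and labels the hard ones from scratch. A sketch — the helper name and the 0.9 threshold are illustrative assumptions you would tune for your own tolerance of wrong suggestions:

```python
import numpy as np

def pre_annotate(proba, classes, threshold=0.9):
    """Suggest a label when the model is confident; leave blank otherwise.

    Confirming a suggestion is usually much faster for the annotator
    than producing a label from scratch.
    """
    top = proba.argmax(axis=1)
    confident = proba.max(axis=1) >= threshold
    return [classes[t] if ok else None for t, ok in zip(top, confident)]

proba = np.array([[0.97, 0.03],   # confident -> pre-filled suggestion
                  [0.55, 0.45]])  # uncertain -> left for the human
print(pre_annotate(proba, classes=["cat", "dog"]))  # ['cat', None]
```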
Knowing when to stop.
When using interactive learning systems, it is important to understand at which point acquiring new data becomes more costly than the errors made by the current model. If generating relatively small gains would require excessive resources (e.g., time or money), then in some instances it may not be worth using active learning at all. There is a line beyond which active learning stops paying off, and knowing where that line is matters.
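One deliberately simplistic way to make that line explicit is to compare the value of the latest accuracy gain against the cost of another annotation round. The function and the numbers below are placeholders, not a prescription — what a point of accuracy is worth is a business judgment, not something the algorithm can decide for you:

```python
def should_stop(accuracy_history, cost_per_round, value_per_point):
    """Stop when the value of the last accuracy gain no longer covers
    the cost of another annotation round.

    value_per_point: what one percentage point of accuracy is worth,
    in the same units (e.g. currency) as cost_per_round.
    """
    if len(accuracy_history) < 2:
        return False  # not enough history to estimate the gain
    gain = accuracy_history[-1] - accuracy_history[-2]
    return gain * 100 * value_per_point < cost_per_round

# Gains are flattening: the last round bought ~1 point of accuracy,
# worth 50 per point, while a round of labeling costs 120.
history = [0.80, 0.86, 0.89, 0.90]
print(should_stop(history, cost_per_round=120, value_per_point=50))
```

In practice you would smooth the gain over several rounds rather than trust a single delta.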
It’s time to listen to your AI!
Now that you know about active learning, give it a try! Let the model choose the data for you, while you sit and relax. Then spend some time annotating that data while the model sits and relaxes. AI is a two-way street, and you’ll find that human-machine collaboration can significantly boost your project’s efficiency.
If you want to learn more about using model insights to improve your projects, feel free to contact me, and we can discuss what solution is best for you!
Increasing Efficiency with Active Learning
Mar 3, 2023