Protect your AI Model from attackers!

Follow along with this blog post to learn how to prevent adversarial attacks on your model!

Machine learning models can achieve amazing results at the tasks they were designed for. They can also perform catastrophically when the data we feed the model does not follow the distribution of the data used to train it. This can be exploited as an adversarial attack on our model. Adversarial attacks are a common and growing problem in AI, where attackers use misleading inputs to degrade our model’s performance, hurting our product and reputation.

How can we protect our models from adversarial attacks?

One way to shield our models from adversarial attacks is to detect out-of-distribution samples before feeding them to the model.

The main idea is to create a pipeline that checks whether a new sample follows the same distribution as the data used to train our model. If it does, great – you’re good to go. Otherwise, reject the input sample.
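As a rough illustration of this gate, here is a minimal sketch in Python. Both `ood_score` and `classifier` are hypothetical placeholders for whatever detector and classifier you end up using, and the threshold value is purely illustrative.

```python
def ood_score(sample):
    # Placeholder: replace with a real detector, e.g. a fitted GMM's score_samples.
    return 0.0

def classifier(sample):
    # Placeholder: replace with the actual model being protected.
    return "prediction"

THRESHOLD = -50.0  # would be tuned on held-out in- and out-of-distribution data

def predict_with_guard(sample):
    """Run the classifier only if the sample looks in-distribution."""
    if ood_score(sample) < THRESHOLD:
        raise ValueError("Rejected: sample looks out-of-distribution")
    return classifier(sample)
```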

 

How can we distinguish an in-sample from an out-sample?

Thanks to their ability to fit a data distribution, generative models have become some of the best anomaly detection methods. A Gaussian Mixture Model (GMM) is a probabilistic clustering model that assumes all data was generated by a mixture of a finite number of Gaussian distributions. GMMs are handy for outlier detection because they can flag data samples that fall in low-density regions.
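As a minimal sketch of the idea (using scikit-learn and toy data rather than our real embeddings), a GMM fitted on in-distribution points assigns a log-likelihood to every new point via `score_samples`, and points scoring below a threshold can be flagged as outliers:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_data = rng.normal(size=(1000, 16))          # toy in-distribution data

# Fit a mixture of Gaussians to the in-distribution data only.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(train_data)

# score_samples returns the log-likelihood of each point under the mixture;
# low values mean the point falls in a low-density region.
scores = gmm.score_samples(train_data)
threshold = np.percentile(scores, 1)              # e.g. flag the lowest 1%

new_point = rng.normal(loc=5.0, size=(1, 16))     # clearly off-distribution
print(gmm.score_samples(new_point) < threshold)   # flagged as an outlier
```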

 

Let’s imagine we have built an Image Classification model, and we want to prevent users from inputting images that are not suitable to be processed by our model.

 

Firstly, we need to create a dataset with in-sample and out-sample data points and label them accordingly. The in-sample (positive) data points will be the validation and test data used for the Image Classification model. The out-sample (negative) data points could be images from the ImageNet or COCO datasets, for example. Ideally, we should also include images similar to our dataset that an uninformed user might submit by mistake. For example, if our model is trained to classify objects that are present in a bathroom (toilet, bathtub, sink, etc.), an out-sample image could be an object you can find inside your house but not in a bathroom (bed, couch, chairs, etc.).
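A small sketch of how this labelled dataset could be assembled, assuming hypothetical directories for the in-sample (bathroom) images and the out-sample images:

```python
from pathlib import Path

# Hypothetical locations: our classifier's validation/test images (in-sample)
# and a folder of unrelated images, e.g. COCO samples or other household objects.
in_sample_paths = sorted(Path("data/bathroom_val_test").glob("*.jpg"))
out_sample_paths = sorted(Path("data/out_of_domain").glob("*.jpg"))

image_paths = in_sample_paths + out_sample_paths
labels = [1] * len(in_sample_paths) + [0] * len(out_sample_paths)  # 1 = in-sample
```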

 

Secondly, we need to extract a numerical representation of the images – the embeddings. Since we are dealing with unstructured data, we need to extract the latent representation of each image in our dataset before fitting the data to the GMM.
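One possible way to do this (an assumption for this sketch, not the only option) is a ResNet-50 pretrained on ImageNet with its classification head removed, so that the 2048-dimensional output of the global pooling layer serves as the image embedding. It reuses the `image_paths` list from the previous sketch:

```python
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()        # drop the classifier, keep the features
backbone.eval()

preprocess = weights.transforms()        # the resize/crop/normalize pipeline the model expects

@torch.no_grad()
def embed(path):
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(image).squeeze(0)    # shape: (2048,)

embeddings = torch.stack([embed(p) for p in image_paths]).numpy()
```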

Now, we fit the in-sample data to the GMM and validate its performance at detecting outliers using both in-sample and out-of-sample data. At this stage, we might need to spend some time tweaking the parameters of the GMM. One important thing to remember is that the number of classes in our dataset is a good indication of how many Gaussian mixture components the GMM needs.
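A sketch of this step, continuing from the `embeddings` and `labels` above: fit the GMM on the in-sample embeddings only, then check how well its log-likelihood separates in-samples from out-samples (ROC AUC is one convenient metric), sweeping the number of components around the number of classes in our classifier.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_auc_score

labels = np.asarray(labels)                       # 1 = in-sample, 0 = out-sample
in_embeddings = embeddings[labels == 1]

for n_components in (4, 8, 12):                   # e.g. around the number of classes
    # Diagonal covariances keep the fit tractable in 2048 dimensions.
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=0)
    gmm.fit(in_embeddings)
    scores = gmm.score_samples(embeddings)        # log-likelihood of every sample
    print(n_components, "ROC AUC:", roc_auc_score(labels, scores))
```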

 

What if the GMM can’t distinguish well enough between real cases and adversarial attacks?

When a pre-trained model is used to extract the embeddings, the latent representations of in-samples and out-samples can still end up very similar, because both are narrow compared to the broad domain covered by the datasets those pre-trained models were trained on. ImageNet contains images of animals, nature, and objects of all kinds, so when we use a model pre-trained on ImageNet to extract embeddings from our domain-specific images, it is normal for those latent representations to be quite similar.

How can we help the GMM distinguish the in-sample and out-sample latent representations better? By using Principal Component Analysis (PCA), or any other dimensionality reduction technique, we should be able to remove the general similarities between the samples. At this stage, we might also have to spend some time fine-tuning the parameters and figuring out how much reduction to apply. After reducing the dimensionality of our latent representations, we fit them to the GMM.
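A sketch of this variant, again reusing `embeddings` and `labels` from above; the 64 principal components and 8 mixture components are assumptions to be tuned:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_auc_score

labels = np.asarray(labels)
in_mask = labels == 1

# Fit PCA on the in-sample embeddings (one possible choice) and project everything.
pca = PCA(n_components=64, random_state=0)
reduced_in = pca.fit_transform(embeddings[in_mask])
reduced_all = pca.transform(embeddings)

# In the lower-dimensional space a full-covariance GMM becomes affordable.
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(reduced_in)

scores = gmm.score_samples(reduced_all)
print("ROC AUC:", roc_auc_score(labels, scores))
```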

 

Conclusion

Using GMMs to detect whether a sample is too different from the training distribution is a great way of protecting our models from adversarial usage. By checking that a sample belongs to the training distribution, we make sure the model performs the task it was designed for, reducing the chances of it misbehaving and thus increasing the reliability and reputation of our product.
