Protect your AI Model from attackers!

Follow along with this blog post to learn how to prevent adversarial attacks on your model!

Machine learning models can achieve amazing results at the tasks they were designed for. They can also fail catastrophically when the data we feed them does not follow the same distribution as the data used to train them. This weakness can be exploited as an adversarial attack on our model. Adversarial attacks are a common and growing problem in AI: attackers feed crafted or misrepresentative inputs to degrade our model’s performance, hurting our product and reputation.

How can we protect our models from adversarial attacks?

One way to shield our models from adversarial attacks is to detect out-of-distribution samples before they ever reach the model.

The main idea is to create a pipeline that checks whether an incoming sample follows the same distribution as the data used to train our model. If it does, great – you’re good to go. Otherwise, reject the input sample.
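As a sketch, this gate can be a simple function that scores each incoming sample under the training distribution and rejects anything below a threshold. The `score_fn` and threshold here are placeholders for whatever density model you end up using (such as the GMM described below):

```python
def is_in_distribution(sample, score_fn, threshold):
    """Return True if `sample` looks like the training data.

    score_fn: maps a sample to a log-likelihood under the
              training distribution (e.g. a fitted GMM's score).
    threshold: log-likelihood below which we reject the sample.
    """
    return score_fn(sample) >= threshold


# Toy usage: pretend the "training distribution" is values near zero.
score = lambda x: -abs(x)  # higher score = closer to zero
print(is_in_distribution(0.1, score, threshold=-1.0))  # accepted
print(is_in_distribution(5.0, score, threshold=-1.0))  # rejected
```

In a real pipeline, the threshold would be calibrated on held-out in-sample and out-of-sample data, trading off false rejections against missed attacks.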


How can we distinguish an in-sample from an out-sample?

Because they can fit a data distribution, generative models have become some of the best anomaly detection methods. A Gaussian Mixture Model (GMM) is a probabilistic clustering model that assumes all data was generated by a mixture of a finite number of Gaussian distributions. GMMs are handy for outlier detection: samples that fall in low-density regions of the fitted mixture are likely anomalies.
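A minimal sketch of this idea, using scikit-learn’s `GaussianMixture` on synthetic 2D data (the cluster locations and test points are made up for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two in-distribution clusters in 2D.
cluster_a = rng.normal(loc=(0, 0), scale=0.5, size=(200, 2))
cluster_b = rng.normal(loc=(5, 5), scale=0.5, size=(200, 2))
X_train = np.vstack([cluster_a, cluster_b])

# Fit a 2-component GMM to the in-distribution data.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_train)

# score_samples returns per-sample log-likelihoods:
# low values mean low-density regions, i.e. likely outliers.
inlier = np.array([[0.1, -0.2]])
outlier = np.array([[10.0, -10.0]])
print(gmm.score_samples(inlier))   # relatively high log-likelihood
print(gmm.score_samples(outlier))  # much lower log-likelihood
```

The point far from both clusters receives a much lower log-likelihood, which is exactly the signal we threshold on to flag outliers.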


Let’s imagine we have built an Image Classification model, and we want to prevent users from submitting images that are not suitable to be processed by it.


Firstly, we need to create a dataset with in-sample and out-of-sample data points, and label them accordingly. The in-sample (positive) data points will be the validation and test data used in the Image Classification model. The out-of-sample data points could be images from the ImageNet or COCO datasets, for example. Ideally, we should also include images similar to our dataset that would plausibly come from an uninformed user’s mistakes. For example, if our model is trained to classify objects present in a bathroom (toilet, bathtub, sink, etc.), an out-of-sample image could be an object you can find elsewhere in the house (bed, couch, chair, etc.).


Secondly, we need to extract a numerical representation of the images – the embeddings. Since we are dealing with unstructured data, we need to extract the latent representation of each image in our dataset before fitting the GMM.
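In practice, the embeddings would come from a pretrained encoder (for example, a torchvision ResNet with its classification head removed). The stand-in below only makes the shape of this step concrete: `toy_embedding` is an illustrative placeholder that summarizes each image with per-channel statistics, not a real feature extractor:

```python
import numpy as np


def toy_embedding(image):
    """Placeholder for a pretrained encoder.

    image: H x W x C array. Returns a fixed-length vector of
    per-channel means and standard deviations. A real pipeline
    would use the penultimate layer of a pretrained CNN instead.
    """
    image = np.asarray(image, dtype=np.float64)
    means = image.mean(axis=(0, 1))  # one value per channel
    stds = image.std(axis=(0, 1))
    return np.concatenate([means, stds])


rng = np.random.default_rng(0)
fake_image = rng.random((32, 32, 3))  # stand-in for a real image
emb = toy_embedding(fake_image)
print(emb.shape)  # fixed-length vector: 3 means + 3 stds
```

Whatever encoder you choose, the key property is that every image maps to a fixed-length vector, so the whole dataset becomes a matrix the GMM can be fit on.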

Now, we fit the GMM on the in-sample data and validate its outlier-detection performance using both in-sample and out-of-sample data. At this stage, we might need to spend some time tweaking the GMM’s parameters. One useful rule of thumb: the number of classes in our dataset is a good starting point for the number of Gaussian mixture components.
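A hedged sketch of this fit-and-validate step, using synthetic vectors in place of real embeddings (the class count, dimensions, and score ranges are all invented for illustration). Separation between the two groups is measured with ROC AUC over the log-likelihoods:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins: 3 "classes" of in-sample embeddings, and
# out-of-sample embeddings drawn from a much wider region.
n_classes, dim = 3, 8
centers = rng.normal(scale=5.0, size=(n_classes, dim))
in_sample = np.vstack(
    [rng.normal(loc=c, scale=0.5, size=(100, dim)) for c in centers]
)
out_sample = rng.uniform(-15, 15, size=(150, dim))

# Rule of thumb: start with one mixture component per class.
gmm = GaussianMixture(n_components=n_classes, random_state=0)
gmm.fit(in_sample)

# Validate: log-likelihoods should separate the two groups.
scores = np.concatenate(
    [gmm.score_samples(in_sample), gmm.score_samples(out_sample)]
)
labels = np.concatenate(
    [np.ones(len(in_sample)), np.zeros(len(out_sample))]
)
auc = roc_auc_score(labels, scores)
print(f"ROC AUC: {auc:.3f}")  # closer to 1.0 = better separation
```

If the AUC is poor on your real embeddings, that is the signal to revisit the number of components, the covariance type, or the embeddings themselves, as discussed next.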


What if the GMM can’t distinguish well enough between real cases and adversarial attacks?

In cases where pre-trained models were used to extract the embeddings, the latent representations of in-sample and out-of-sample images can still end up very similar, because both occupy a narrow slice of the broad domain covered by the datasets used to train those models. ImageNet contains images of animals, nature, and objects of all kinds. So, when we use a model pre-trained on ImageNet to extract embeddings from our domain-specific images, it is normal for those latent representations to be quite similar.

How can we help the GMM distinguish the in-sample and out-of-sample latent representations better? By applying Principal Component Analysis (or any other dimensionality reduction technique), we should be able to remove the general similarities shared by all samples. Here, too, we might have to spend some time tuning parameters and figuring out how much reduction to apply. After reducing the dimensionality of our embeddings, we fit the GMM on them.
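A minimal sketch of the PCA-then-GMM variant, again on synthetic stand-in embeddings (the dimensions and the number of retained components are arbitrary choices for illustration, not recommendations):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in embeddings: 64-dimensional, where most dimensions carry
# generic variance and only a few actually separate in from out.
in_sample = rng.normal(loc=0.0, scale=1.0, size=(300, 64))
in_sample[:, :4] += 5.0  # the useful signal lives in a few dims
out_sample = rng.normal(loc=0.0, scale=1.0, size=(100, 64))

# Reduce dimensionality before fitting the GMM. The number of PCA
# components (10 here) is a tuning knob; inspecting
# pca.explained_variance_ratio_ helps pick it.
pca = PCA(n_components=10, random_state=0).fit(in_sample)
gmm = GaussianMixture(n_components=1, random_state=0)
gmm.fit(pca.transform(in_sample))

in_scores = gmm.score_samples(pca.transform(in_sample))
out_scores = gmm.score_samples(pca.transform(out_sample))
print(in_scores.mean(), out_scores.mean())
```

The in-sample scores should average clearly above the out-of-sample ones; on real data, the amount of reduction that achieves this is exactly the parameter worth sweeping.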



Using GMMs to detect whether a sample is too different from the training distribution is a great way to protect our models from adversarial usage. By checking that a sample belongs to the training distribution, we ensure the model performs the task it was designed for, reducing the chance of mispredictions and thus increasing the reliability and reputation of our product.
