Protect your AI Model from attackers!

Follow along with this blog post to learn how to prevent adversarial attacks on your model!

Machine learning models can achieve amazing results at the tasks they were designed for. They can also perform catastrophically when the data we feed them does not match the data used to train them. This mismatch can be exploited as an adversarial attack on our model. Adversarial attacks are a common and growing problem in AI, where attackers use misrepresentative data to degrade our model’s performance, harming our product and reputation.

How can we protect our models from adversarial attacks?

One way to shield our models from adversarial attacks is to detect out-of-distribution samples before they ever reach the model.

The main idea is to create a pipeline that checks whether a sample follows the same distribution as the data used to train our model. If it does, great – you’re good to go. Otherwise, reject the input sample.
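
As a rough sketch, the gate in front of the classifier could look like the snippet below. The `ood_score` function and the `THRESHOLD` value are placeholders that the rest of this post builds up.

```python
# Minimal sketch of the gating pipeline. `ood_score` (built later in this
# post) returns a log-likelihood under the training distribution, and
# THRESHOLD is a hypothetical cut-off tuned on validation data.

THRESHOLD = -50.0  # hypothetical value, picked on validation data

def classify_with_guard(image, classifier, ood_score, threshold=THRESHOLD):
    score = ood_score(image)
    if score < threshold:
        # The sample looks out-of-distribution: reject it instead of
        # returning an unreliable prediction.
        return {"status": "rejected", "reason": "out-of-distribution input"}
    return {"status": "ok", "prediction": classifier(image)}
```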


How can we distinguish an in-sample from an out-sample?

Thanks to their ability to fit a data distribution, generative models have become some of the best anomaly detection methods. A Gaussian Mixture Model (GMM) is a probabilistic clustering model that assumes all data was generated by a mixture of a finite number of Gaussian distributions. GMMs are handy for outlier detection because they can flag data samples that fall in low-density regions.
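
To make the idea concrete, here is a small, self-contained illustration (not the full pipeline yet) using scikit-learn’s `GaussianMixture`: its `score_samples` method returns the log-likelihood of each point, and points with very low log-likelihood sit in low-density regions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy 2-D data: a dense blob of inliers plus a few far-away points.
rng = np.random.default_rng(0)
inliers = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
outliers = rng.normal(loc=8.0, scale=1.0, size=(5, 2))

gmm = GaussianMixture(n_components=2, random_state=0).fit(inliers)

# Log-likelihood of each point under the fitted mixture; flag anything
# below the 1st percentile of the inlier scores as an outlier.
threshold = np.percentile(gmm.score_samples(inliers), 1)
print(gmm.score_samples(outliers) < threshold)  # expected: all True
```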


Let’s imagine we have built an Image Classification model, and we are trying to prevent users from submitting images that are not suitable for our model to process.


Firstly, we need to create a dataset with in-sample and out-sample data points, and label them accordingly. The in-sample (positive) data points will be the validation and test data used for the Image Classification model. The out-sample data points could be images from the ImageNet or COCO datasets, for example. Ideally, we should also include images similar to our dataset that an uninformed user would likely submit by mistake. For example, if our model is trained to classify objects that are present in a bathroom (toilet, bathtub, sink, etc.), an out-sample image could be an object you can find inside a house but not in a bathroom (bed, couch, chairs, etc.).
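
A minimal sketch of assembling such a labeled set is shown below; the folder names are purely illustrative.

```python
from pathlib import Path

# Hypothetical folders: adjust to wherever your images actually live.
IN_SAMPLE_DIRS = [Path("data/bathroom/val"), Path("data/bathroom/test")]
OUT_SAMPLE_DIRS = [Path("data/coco_subset"), Path("data/other_rooms")]

def collect(dirs, label):
    """Return (path, label) pairs for every .jpg found under `dirs`."""
    return [(p, label) for d in dirs for p in d.glob("**/*.jpg")]

# Label 1 = in-sample (positive), 0 = out-sample.
dataset = collect(IN_SAMPLE_DIRS, label=1) + collect(OUT_SAMPLE_DIRS, label=0)
```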


Secondly, we need to extract a numerical representation of the images – the embeddings. Since we are dealing with unstructured data, we need to extract the latent representation of each image in our dataset before we fit the GMM to our data.
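
One way to do this, sketched below, is to take a torchvision ResNet-50 pre-trained on ImageNet (any backbone would work), strip its classification head, and keep the pooled features as the embedding.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pre-trained backbone with the classification head replaced by an identity,
# so the forward pass returns the pooled 2048-d feature vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    """Return the embedding of one image as a 1-D numpy array."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(image).squeeze(0).numpy()
```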

Now, we fit the GMM to the in-sample data and validate its performance at detecting outliers using both in-sample and out-sample data. At this stage, we might need to spend some time tweaking the parameters of the GMM. One important thing to remember is that the number of classes in our dataset is a good starting point for the number of Gaussian mixture components in the GMM.
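
A sketch of that step, reusing `embed` and `dataset` from the snippets above: fit the mixture on the in-sample embeddings only, then check how well its log-likelihood separates the two groups. The number of components and the rejection threshold are the knobs to tune.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_auc_score

# Embed everything, but fit only on the in-sample (label 1) embeddings.
embeddings = np.stack([embed(path) for path, _ in dataset])
labels = np.array([label for _, label in dataset])
in_sample = embeddings[labels == 1]

# Starting point: one mixture component per class in the classifier.
N_CLASSES = 10  # hypothetical number of classes in your model
gmm = GaussianMixture(n_components=N_CLASSES, covariance_type="diag", random_state=0)
gmm.fit(in_sample)  # diagonal covariances keep the fit tractable in high dimensions

# Higher log-likelihood should mean "more in-distribution", so the ROC AUC
# tells us how well the score separates in-samples from out-samples.
scores = gmm.score_samples(embeddings)
print("ROC AUC:", roc_auc_score(labels, scores))

# Choose a threshold that keeps, say, 99% of genuine in-sample inputs.
THRESHOLD = np.percentile(gmm.score_samples(in_sample), 1)
```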


What if the GMM can’t distinguish well enough between real cases and adversarial attacks?

When pre-trained models are used to extract the embeddings, the latent representations of in-samples and out-samples can still be very similar, because both are narrow compared with the broad domain of the datasets used to train those pre-trained models. ImageNet contains images of animals, nature, and objects of all kinds. So, when we use a pre-trained model that was trained on ImageNet to extract embeddings from our domain-specific images, it is normal for those latent representations to be quite similar.

How can we help the GMM distinguish between the in-sample and out-sample latent representations better? By using Principal Component Analysis (or any other dimensionality reduction technique), we should be able to remove the general similarities between the samples. At this stage, too, we might have to spend some time fine-tuning the parameters and figuring out how much reduction to apply. After reducing the dimensionality of our latent representations, we fit the GMM to them.
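
A sketch of adding that reduction step in front of the GMM, reusing the arrays from the previous snippet; the 64 components kept here are just a starting point to tune.

```python
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_auc_score

# Fit the reduction on in-sample embeddings only, then project everything.
pca = PCA(n_components=64).fit(in_sample)
in_sample_reduced = pca.transform(in_sample)
all_reduced = pca.transform(embeddings)

# In the lower-dimensional space a full-covariance mixture becomes affordable.
gmm = GaussianMixture(n_components=N_CLASSES, covariance_type="full", random_state=0)
gmm.fit(in_sample_reduced)

print("ROC AUC after PCA:", roc_auc_score(labels, gmm.score_samples(all_reduced)))
```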


Conclusion

Using GMMs to detect whether a sample is too different from the training distribution is a great way of protecting our models from adversarial usage. By checking that a sample belongs to the training distribution, we help ensure that the model performs the task it was designed for, reducing the chance of mispredictions and thus increasing the reliability and reputation of our product.
