Duplicate detection in text data

An overview of my internship at NILG.AI

A common use case across several industries is building systems that detect the similarity between pairs of objects, such as images or texts. Marketplaces use them for duplicate detection, and recommendation systems use them to show items similar to the ones a user has searched for. They are also useful for detecting plagiarism in theses and articles, given the massive number of publications over the years. Since text is such a widespread data modality, duplicate detection in text is a critical task in Machine Learning.

But how can we build such systems? A human easily perceives the similarity between two sentences that say the same thing in different ways. For example, the sentences “She survived” and “She did not die” have the same meaning, so a text similarity algorithm is expected to return a very high similarity score. Machine Learning is the right path to get there, but it’s not that easy, due to the complexities of Natural Language Processing (NLP).

This article describes two tools I developed under the guidance of Pedro Dias, a Data Scientist at NILG.AI, during my curricular internship.

Natural Language Processing and text modeling

NLP is a subfield of artificial intelligence concerned with the interactions between computers and human language, particularly how to program computers to process and analyze large amounts of natural language data.

Text modeling was the main basis of my work. It consists of analyzing text data to find the group of words, from a collection of documents, that best represents the information in the collection.

Of course, there are many ways to perform feature extraction from text, but the path chosen here was to use Word2Vec and bag-of-N-grams.

Word2Vec is a method for obtaining word embeddings, vector-space representations of words used for text analysis. Once trained, such a model can detect synonymous words or suggest additional words for a partial sentence.
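The key property of embeddings is that words with similar meanings end up close together in the vector space. The sketch below illustrates this with tiny hand-made vectors (a real Word2Vec model, e.g. gensim's, learns vectors with hundreds of dimensions from a corpus; these 3-dimensional vectors are purely hypothetical):

```python
import math

# Hypothetical 3-dimensional embeddings for illustration only; a trained
# Word2Vec model would learn these automatically from a large corpus.
embeddings = {
    "survived": [0.9, 0.1, 0.3],
    "lived":    [0.8, 0.2, 0.4],
    "table":    [0.1, 0.9, 0.0],
}

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Synonyms sit close together in the space, unrelated words far apart.
print(cosine(embeddings["survived"], embeddings["lived"]))   # high
print(cosine(embeddings["survived"], embeddings["table"]))   # low
```

With real embeddings, the same comparison is what lets a model surface synonyms for a given word.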

Bag-of-N-grams is a technique that counts how many times each N-gram appears in a document. An N-gram is a sequence of N consecutive words, where N is any positive integer. For example, given the sentence “Hello neighbor next door,” “Hello neighbor” and “next door” are 2-grams, while “Hello neighbor next door” is a 4-gram.
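Extracting and counting N-grams needs nothing beyond the standard library. A minimal sketch:

```python
from collections import Counter

def bag_of_ngrams(text, n):
    """Count every contiguous sequence of n words in the text."""
    words = text.lower().split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return Counter(ngrams)

# The sentence from the example above has three 2-grams.
print(bag_of_ngrams("Hello neighbor next door", 2))
# Counter({'hello neighbor': 1, 'neighbor next': 1, 'next door': 1})
```

Stacking these counts for every N-gram in a vocabulary gives the sparse count vector that serves as the document's representation.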

Duplicate Detection in Text Using Machine Learning

One of the tools created had the objective of retrieving the similarity between two texts entered by the user. To do that, an abstract representation of each text was created with different methods. The next step was calculating the distance between these abstract representations, which yields the probability of the texts being similar. Below is a scheme to better understand these steps.
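The represent-then-compare pipeline can be sketched as follows. Here the abstract representation is a plain bag-of-words count vector and the distance is cosine similarity; the actual tools used richer representations (Word2Vec averages, bag-of-N-grams), so this is an illustrative simplification:

```python
from collections import Counter
import math

def represent(text):
    """Abstract representation: a bag-of-words count vector (simplified)."""
    return Counter(text.lower().split())

def similarity(text_a, text_b):
    """Cosine similarity between the two representations, in [0, 1]."""
    a, b = represent(text_a), represent(text_b)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(c * c for c in a.values()))
            * math.sqrt(sum(c * c for c in b.values())))
    return dot / norm if norm else 0.0

print(similarity("is this a duplicate question", "is this question a duplicate"))  # 1.0
print(similarity("is this a duplicate question", "how tall is the Eiffel Tower"))  # low
```

Word order is ignored here, which is exactly the limitation that motivates embedding-based representations for sentences like “She survived” vs. “She did not die”.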

The other tool created allowed the user to enter a text, and it returned the 10 most similar texts from a bank of texts. This bank of texts comes from the Quora Question Pairs dataset.
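Retrieval over a bank of texts is a matter of scoring every candidate against the query and keeping the top k. A self-contained sketch, using word-overlap (Jaccard) similarity as a stand-in for the Word2Vec / N-gram representations the real tool used:

```python
def jaccard(a, b):
    """Word-overlap similarity (illustrative stand-in for the real representations)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def top_k_similar(query, bank, k=10):
    """Return the k texts in the bank most similar to the query."""
    return sorted(bank, key=lambda t: jaccard(query, t), reverse=True)[:k]

bank = [
    "how do I learn python",
    "best way to learn python",
    "what is the capital of France",
]
print(top_k_similar("how to learn python", bank, k=2))
```

For a bank the size of Quora Question Pairs, precomputing the representations once (rather than per query) is the obvious optimization.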

Three approaches were taken in developing the product: an unsupervised one, using Word2Vec and bag-of-N-grams for the abstract representation; a supervised one, using Logistic Regression; and another that simulates real-life situations, explained further on.
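In the supervised approach, the idea is to turn each question pair into a feature vector and let Logistic Regression learn how those features predict the duplicate label. A minimal sketch with two hypothetical pair features and illustrative hand-set weights (in the real tool the weights are learned by fitting on the Quora Question Pairs labels):

```python
import math

def features(q1, q2):
    """Hypothetical pair features: word overlap and relative length difference."""
    w1, w2 = set(q1.lower().split()), set(q2.lower().split())
    overlap = len(w1 & w2) / len(w1 | w2)
    len_diff = abs(len(w1) - len(w2)) / max(len(w1), len(w2))
    return [overlap, len_diff]

def duplicate_probability(q1, q2, weights=(6.0, -2.0), bias=-2.5):
    # Illustrative weights only; a fitted LogisticRegression would learn them.
    z = bias + sum(w * x for w, x in zip(weights, features(q1, q2)))
    return 1 / (1 + math.exp(-z))  # sigmoid turns the score into a probability

print(duplicate_probability("how to learn python", "how to learn python"))  # near 1
print(duplicate_probability("how to learn python", "capital of France"))    # near 0
```

The advantage over the unsupervised approach is that the labels teach the model how much each kind of distance actually matters for duplication.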

All of these methods used the dataset mentioned above, represented in the figure below, to train the model.

Real-life simulation

In this approach, the initial dataset has a particularity: the second question is replaced by a synonymous phrase, but only in the rows that indicate the questions are duplicates. This simulates the case where two phrases with different but similar words have the same meaning, as well as a scenario where there is no annotated data for the duplicates but we can still train a model.
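The augmentation step can be sketched like this. The synonym table here is a toy placeholder (a real substitution could come from WordNet or embedding nearest neighbours), and the row format `(question1, question2, is_duplicate)` is assumed:

```python
# Toy synonym table for illustration; real substitutions would come from
# a thesaurus such as WordNet or from embedding nearest neighbours.
SYNONYMS = {"buy": "purchase", "big": "large", "fast": "quick"}

def paraphrase(sentence):
    """Replace known words with synonyms to simulate a reworded duplicate."""
    return " ".join(SYNONYMS.get(w, w) for w in sentence.split())

def augment(rows):
    # rows: (question1, question2, is_duplicate); only duplicate pairs
    # get their second question replaced by a synonymous phrase.
    return [(q1, paraphrase(q2) if dup else q2, dup) for q1, q2, dup in rows]

data = [
    ("where to buy a big car", "where to buy a big car", 1),
    ("is it fast", "what time is it", 0),
]
print(augment(data))
```

Non-duplicate rows are left untouched, so the model still sees genuinely different pairs as negatives.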

Development of a REST API and a Web App

Furthermore, two services were created: one for the tool that retrieves the similarity between two texts, and one for the tool that, given a text, returns the most similar texts from a bank of texts. These services were implemented for every method. The models were integrated into a REST API (backend) and a Dash interface (frontend); you can find the final result in this dashboard.

Automation of deployment

This was done to release new versions without much effort, a much more seamless way of delivering products in industry. To put the above-mentioned services into production using Terraform, an EC2 machine and an ECR repository were created, the latter storing the Docker images of each interface.

After generating the Docker images on a local computer and pushing them to the ECR repository, it is only necessary to access the generated EC2 instance and pull these images from the ECR repository. The image below clarifies this process further.


These steps are all aggregated in a deployment script to make the release easier for a possible client.
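The build-push-pull flow inside such a deployment script might look roughly like the fragment below. The account ID, region, repository name, and ports are all hypothetical placeholders, not the actual project configuration:

```shell
# Hypothetical account ID, region, and repository -- adjust to your setup.
REPO=123456789012.dkr.ecr.eu-west-1.amazonaws.com/text-similarity

# On the development machine: authenticate, build the image, push to ECR.
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin "$REPO"
docker build -t "$REPO:latest" .
docker push "$REPO:latest"

# On the EC2 instance: pull the image from ECR and run it.
docker pull "$REPO:latest"
docker run -d -p 80:8050 "$REPO:latest"
```

Wrapping these commands in one script is what lets a client redeploy without touching the AWS console.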

Conclusions on my internship: Duplicate detection in text data

I believe that the work on duplicate detection in text was carried out successfully, although I consider that, with more time, I could have increased the accuracy and reliability of the tools created.

This experience allowed me to be part of a project with many uses for companies and private entities, such as detecting duplicate data in order to delete it, or detecting plagiarism.

Overall, I felt that it was an internship that put me more in touch with the business world and gave me a lot of knowledge in the machine learning area, especially about NLP and supervised and unsupervised learning. It also taught me more about deployment, beyond the artificial intelligence area alone.

Ultimately, I would like to thank my excellent tutor from NILG.AI, who was always willing to help and teach me throughout the semester.
