Duplicate detection in text data

An overview of my internship at NILG.AI

A common use case across several industries is building systems that can detect the similarity between pairs of objects, such as images or texts. For example, duplicate detection in marketplaces, or recommendation systems that show items similar to the ones a user has searched for, can rely on such systems. They are also useful for detecting plagiarism in theses or articles, given the massive number of publications accumulated over the years. Since text is such a widespread data modality, duplicate detection in text is a critical task in Machine Learning.

But how can we build these systems? A human can easily perceive the similarity between two sentences that say the same thing in different words. For example, the sentences “She survived” and “She did not die” have the same meaning, so a text similarity algorithm is expected to return a very high similarity score for them. Machine Learning is the right path to achieve this, but it is not that easy, due to the complexities of Natural Language Processing (NLP).

This article describes two tools I developed, under the guidance of Pedro Dias, a Data Scientist at NILG.AI, during my curricular internship.

Natural Language Processing and text modeling

NLP is a subfield of artificial intelligence concerned with the interactions between computers and human language, particularly how to program computers to process and analyze large amounts of natural language data.

Text modeling was the foundation of my work. It consists of analyzing text data to find a group of words, from a collection of documents, that best represents the information in the collection.

Of course, there are many ways to perform feature extraction from text, but the approach chosen was to use Word2Vec and bag-of-N-grams.

Word2Vec is a method for obtaining word embeddings, i.e., representations of words in a vector space for text analysis. Once trained, it can detect synonymous words or suggest additional words for a partial sentence.
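As an illustration only (the corpus and hyperparameters used during the internship may differ), a minimal Word2Vec sketch with the gensim library could look like this:

```python
# Minimal Word2Vec sketch using gensim (illustrative only; the actual
# training corpus and hyperparameters used in the internship may differ).
from gensim.models import Word2Vec

# Toy corpus: each document is a list of lowercase tokens.
corpus = [
    ["she", "survived", "the", "accident"],
    ["she", "did", "not", "die", "in", "the", "accident"],
    ["hello", "neighbor", "next", "door"],
]

# Train a small embedding model.
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50)

# Each word now has a 50-dimensional vector.
vector = model.wv["survived"]

# Words that appear in similar contexts end up close in the vector space.
print(model.wv.most_similar("survived", topn=3))
```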

Bag-of-N-grams is a technique that counts how many times each N-gram appears in a document. An N-gram is a sequence of N consecutive words, where N is a positive integer. For example, given the sentence “Hello neighbor next door,” “Hello neighbor” and “next door” are 2-grams, while “Hello neighbor next door” is a 4-gram.
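A minimal sketch of a bag-of-N-grams representation, here using scikit-learn's CountVectorizer (an assumption on my part; the internship code may have counted N-grams differently):

```python
# Bag-of-N-grams sketch with scikit-learn (illustrative; the original tool
# may have implemented N-gram counting differently).
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "Hello neighbor next door",
    "Hello neighbor, how are you?",
]

# Count all 1-grams and 2-grams in each document.
vectorizer = CountVectorizer(ngram_range=(1, 2))
counts = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())  # the extracted N-grams
print(counts.toarray())                    # per-document N-gram counts
```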

Duplicate Detection in Text Using Machine Learning

One of the tools created had the objective of retrieving the similarity between two texts entered by the user. To do that, an abstract representation of each text was created using different methods. The next step was to calculate the distance between these abstract representations, producing a probability that the texts are similar. Below is a diagram that illustrates these steps, followed by a small code sketch.
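A minimal sketch of this pipeline, assuming the abstract representation is an average of Word2Vec word vectors and the distance is cosine-based (the representations and distance functions actually used may differ):

```python
# Sketch: represent each text as the average of its word vectors, then
# score similarity with cosine similarity. Assumes a trained gensim
# Word2Vec model named `model` (see the earlier sketch).
import numpy as np

def embed(text, model):
    """Average the Word2Vec vectors of the known words in `text`."""
    tokens = [t for t in text.lower().split() if t in model.wv]
    if not tokens:
        return np.zeros(model.vector_size)
    return np.mean([model.wv[t] for t in tokens], axis=0)

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Usage: higher score -> more likely duplicates.
score = cosine_similarity(embed("she survived", model),
                          embed("she did not die", model))
print(score)
```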

The other tool allowed the user to enter a text, and it returned the 10 most similar texts from a bank of texts. This bank of texts comes from the Quora Question Pairs dataset.
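A minimal sketch of this retrieval step, reusing the `embed` and `cosine_similarity` helpers from the previous sketch (the real tool may index the bank differently, for example with precomputed vectors):

```python
# Sketch: return the k most similar texts from a bank of texts.
# Reuses `embed`, `cosine_similarity`, and `model` from the sketch above.
def top_k_similar(query, bank, model, k=10):
    query_vec = embed(query, model)
    scored = [(cosine_similarity(query_vec, embed(text, model)), text)
              for text in bank]
    return sorted(scored, reverse=True)[:k]

# Usage with a tiny bank of questions.
bank = ["How can I learn Python?", "What is machine learning?",
        "How do I start learning Python?"]
print(top_k_similar("Best way to learn Python?", bank, model, k=2))
```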

There were three types of approaches in the development of the product: an unsupervised one, using Word2Vec and bag-of-N-grams for the abstract representation; a supervised one, using Logistic Regression; and another that simulates real-life situations, explained further on.
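A minimal sketch of the supervised approach, assuming a Logistic Regression trained on similarity features derived from each question pair (the exact features used during the internship are not detailed here, so the ones below are hypothetical):

```python
# Sketch: supervised duplicate detection with Logistic Regression.
# Reuses `embed`, `cosine_similarity`, and `model` from the earlier
# sketches; the features below are hypothetical examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(q1, q2, model):
    """Turn a question pair into a small feature vector."""
    sim = cosine_similarity(embed(q1, model), embed(q2, model))
    shared_words = len(set(q1.lower().split()) & set(q2.lower().split()))
    return [sim, shared_words]

# Toy training pairs and labels (1 = duplicate, 0 = not duplicate).
pairs = [("She survived", "She did not die"),
         ("She survived", "Hello neighbor next door")]
labels = [1, 0]

X = np.array([pair_features(q1, q2, model) for q1, q2 in pairs])
y = np.array(labels)

clf = LogisticRegression().fit(X, y)

# Probability that a new pair is a duplicate.
print(clf.predict_proba([pair_features("She survived", "She is alive", model)])[0, 1])
```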

All of these methods used the dataset mentioned above, represented in the figure below, to train the model.
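In practice, the real question pairs and labels would be read from the dataset rather than from toy examples. A minimal loading sketch, assuming the standard Kaggle CSV layout with `question1`, `question2`, and `is_duplicate` columns (the file path is hypothetical):

```python
# Sketch: load the Quora Question Pairs dataset (hypothetical path;
# column names follow the standard Kaggle release of this dataset).
import pandas as pd

df = pd.read_csv("quora_question_pairs.csv")

pairs = list(zip(df["question1"].astype(str), df["question2"].astype(str)))
labels = df["is_duplicate"].values  # 1 if the pair is a duplicate, else 0
```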

Real-life simulation

In this approach, the initial dataset has a particularity: the second question is replaced by a synonymous phrase, but only in the rows labeled as duplicates. This simulates the case where two phrases with different but similar words have the same meaning, as well as a scenario where there is no annotated data for the duplicates but we can still train a model.
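The article does not detail how the synonymous phrases were produced; purely as an illustration, one simple way to generate such a variant is to substitute words with WordNet synonyms via NLTK:

```python
# Illustrative only: one way to build a synonymous variant of a question by
# swapping words for WordNet synonyms (the internship may have generated
# the synonymous phrases differently).
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")

def synonym_variant(text):
    out = []
    for word in text.split():
        replacement = word
        # Take the first lemma that differs from the original word.
        for syn in wordnet.synsets(word):
            for lemma in syn.lemmas():
                name = lemma.name().replace("_", " ")
                if name.lower() != word.lower():
                    replacement = name
                    break
            if replacement != word:
                break
        out.append(replacement)
    return " ".join(out)

print(synonym_variant("She survived"))
```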

Development of a REST API and a Web App

Furthermore, two services were created: one for the tool that retrieves the similarity between two texts, and one for the tool that, given a text, returns the most similar texts from a bank of texts. These services were implemented for every method. The models were integrated into a REST API (backend) and a Dash interface (frontend); you can find the final result in this dashboard.
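The article does not name the backend framework, so as a sketch only, a similarity endpoint could be exposed with Flask along these lines (route name and payload format are assumptions):

```python
# Minimal sketch of a REST endpoint for text similarity using Flask.
# The actual framework, routes, and payload format of the internship's
# API are assumptions; reuses `embed`, `cosine_similarity`, and `model`
# from the earlier sketches.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/similarity", methods=["POST"])
def similarity():
    data = request.get_json()
    q1, q2 = data["text1"], data["text2"]
    score = cosine_similarity(embed(q1, model), embed(q2, model))
    return jsonify({"similarity": score})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```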

Automation of deployment

This was done to release new versions without much effort, which makes delivering the product much more seamless, as expected in industry. To put the above-mentioned services into production, Terraform was used to create an EC2 machine and an ECR repository that stores the Docker images of each interface.

After generating the Docker images on a local computer and pushing them to the ECR repository, it is only necessary to access the Terraform-created EC2 instance and pull those images from the ECR repository. The image below illustrates this process further.

 

These steps are all aggregated in a deployment script to make the release easier for a possible client.

Conclusions on my internship: Duplicate detection in text data

I believe that the work on duplicate detection in text was carried out successfully, although I consider that, with more time, I could have further increased the accuracy and reliability of the tools created.

This experience allowed me to be part of a project that is highly useful to companies and other organizations, for use cases such as detecting duplicate data in order to remove it, or detecting plagiarism.

Overall, I felt that this internship put me more in touch with the business world and gave me a lot of knowledge in the machine learning area, especially about NLP and about supervised and unsupervised learning. It also taught me more about how to deploy a solution, rather than focusing only on the artificial intelligence part.

Ultimately, I would like to thank my excellent tutor from NILG.AI, who was always willing to help and teach me throughout the semester.
