Classifying text using LLMs

Learn how to automatically tag text with Large Language Models, and understand the trade-offs between different methods!


Text classification is one of the most common use cases in Natural Language Processing, with numerous practical applications – now easier to access with Large Language Models. Companies use text classification in multiple scenarios to become more efficient:

  • Tagging large volumes of data: reducing manual labor with better filtering, automatically organizing large volumes of text.
  • Enhancing Search/Recommendation Systems: Search and recommendation can be enhanced by a better understanding of the searched queries.
  • Sentiment Analysis: Understanding public opinion or customer feedback by determining the emotion expressed in a text.
  • Customer Support: Facilitate ticket prioritization and routing to the correct team by categorizing customer support tickets.

All of these use cases were solvable in the past without LLMs. However, the rise of these models has reduced the amount of training data needed to obtain good results, has raised the average performance achievable, and has shortened the time it takes to get there!

In this blog post, we will cover several techniques for text classification, both before and after the rise of the most recent LLMs (OpenAI, LLaMA, Bing, …).


Most common techniques for Text Classification using Large Language Models

The most common techniques for text classification are:

  • Zero-Shot Classification: asking a model for a label directly, without giving any examples. Although it’s the simplest option and requires no data, performance is quite limited, and you can end up with an output that is not part of your fixed class list (a hallucination).
    • Pre-LLMs: Using open-source models such as TARS 
    • Post-LLMs: Directly asking an LLM to generate a label, specifying the expected output structure in the prompt. This approach is slower than the pre-LLM option, although much more accurate.
  • Few-Shot Classification: you pass a few examples per class in the prompt, so only a small amount of annotated data is required.
    • Pre-LLMs: Using open-source models such as TARS
    • Post-LLMs: Using LLMs by passing samples of each class in the prompt’s context. This will be more accurate than the zero-shot approach.
  • Raw embedding feature extraction: we convert the text into a numerical representation (an embedding) and train a model on top of it that outputs a probability score, which can be used for making decisions. However, this requires a larger amount of annotated data.
    • Pre-LLMs: Using open-source embeddings such as GloVe.
    • Post-LLMs: Using OpenAI embeddings, which are trained on larger amounts of data and typically outperform other embedding methods. This is a paid option, so you need to consider the trade-offs compared to using an open-source solution.
  • Embeddings of enriched text: Before extracting the embeddings, we try to uncover more information about the text, “enriching it”. 
    • Pre-LLMs: Not frequently used. 
    • Post-LLMs: ask the LLM for more information about the text: for example, if the text is a Google search query, the LLM can explain what that search encompasses. It’s a slower approach than the pre-LLM options, but it’s the technique with the highest scores we’ve seen so far.
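The zero- and few-shot options above boil down to prompt construction plus a guard against hallucinated labels. Below is a minimal sketch; the label list, function names, and prompt wording are illustrative, and the LLM call itself is provider-specific and omitted (any chat-style API would slot in):

```python
# Sketch of zero-/few-shot classification prompts. The labels and wording are
# hypothetical examples; `parse_label` guards against the model answering with
# a label outside the fixed class list (a hallucination).
LABELS = ["billing", "technical issue", "account", "other"]

def build_zero_shot_prompt(text: str, labels: list[str]) -> str:
    """Ask for exactly one label from the fixed list, with no examples."""
    options = ", ".join(labels)
    return (
        f"Classify the following customer support ticket.\n"
        f"Answer with exactly one of: {options}.\n\n"
        f"Ticket: {text}\nLabel:"
    )

def build_few_shot_prompt(text: str, examples: list[tuple[str, str]],
                          labels: list[str]) -> str:
    """Same request, but with a few labeled (ticket, label) examples in context."""
    shots = "\n\n".join(f"Ticket: {t}\nLabel: {l}" for t, l in examples)
    return (
        f"Classify customer support tickets. "
        f"Answer with exactly one of: {', '.join(labels)}.\n\n"
        f"{shots}\n\nTicket: {text}\nLabel:"
    )

def parse_label(raw_answer: str, labels: list[str], fallback: str = "other") -> str:
    """Map the raw model output back onto the fixed class list."""
    answer = raw_answer.strip().lower()
    for label in labels:
        if label in answer:
            return label
    return fallback  # the model answered outside the class list
```

Mapping the raw answer back onto the class list (with a fallback class) is what keeps hallucinated labels out of your downstream pipeline.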

For example, an enrichment prompt could look like this:

“Let’s assume you’re an Encyclopedia, and you have to define the concepts I’m providing. Your explanation must be succinct (a couple of paragraphs), like the summary section of a Wikipedia article talking about the concept. (…)”
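Whichever embeddings you use (GloVe, OpenAI, or embeddings of LLM-enriched text), the classifier on top can be simple. Here is a minimal sketch using a nearest-centroid classifier, with toy 2-D vectors standing in for real embeddings; the labels and data are invented for illustration:

```python
import numpy as np

# Sketch of "embedding + classifier": assume each text has already been turned
# into a vector (e.g. GloVe, or an embeddings API). Toy 2-D vectors stand in
# for real embeddings here.

def centroids(X: np.ndarray, y: list[str]) -> dict[str, np.ndarray]:
    """Average the embeddings of each class into one centroid per label."""
    return {label: X[[i for i, l in enumerate(y) if l == label]].mean(axis=0)
            for label in set(y)}

def classify(x: np.ndarray, cents: dict[str, np.ndarray]) -> str:
    """Assign the label of the most similar centroid (cosine similarity)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(cents, key=lambda label: cos(x, cents[label]))

# Toy training data: two classes living in different regions of the space.
X_train = np.array([[1.0, 0.1], [0.9, 0.0], [0.0, 1.0], [0.1, 0.9]])
y_train = ["sports", "sports", "politics", "politics"]
cents = centroids(X_train, y_train)
print(classify(np.array([0.95, 0.05]), cents))  # prints "sports"
```

In practice you would replace the centroid step with a model such as logistic regression to get the probability scores mentioned above, but the pipeline shape (embed, then train a light model) stays the same.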

Below is a comparative chart, summarizing the trade-offs of the methods in terms of required data, speed and accuracy.



We showed you several ways of doing text classification using Large Language Models. LLMs let you reach acceptable performance in a few hours of work and make a good initial benchmark. Still, don’t forget about the older methods: they can be a fallback when you need faster responses or when paying for LLM requests is not feasible at the scale of your use case.

Want to revolutionize the way you do text classification? Learn more by contacting us!

