SetFit: Production-Grade Few-Shot Text Classification Pipelines for Business Data
Unlock the full potential of few-shot learning with this guide to SetFit, a prompt-free architecture for building production-grade text classification pipelines for business data.
As few-shot learning with pretrained models continues to rise as a way to work with data that has few labels, companies are looking for ways to optimize the process by increasing accuracy and lowering runtime, deployment, and training costs. Let's take a look at SetFit, a new architecture that shows serious results compared to other few-shot learning models such as GPT-3 and T-Few.
Before looking into Sentence Transformer fine-tuning (SetFit), we need to quickly review an important area of Natural Language Processing (NLP) called few-shot learning.
Few-shot learning focuses on showing a model how to complete a machine learning task with only a few examples so that it can complete the same task on new inputs. These methods aim for good performance when very limited data is available by leveraging pretrained large language models (LLMs) and prompt-based programming.
This style of learning is highly sought after because, although huge amounts of text data are freely and easily available, annotating that data with high-quality labels requires significant human effort and carries high costs in both time and money.
Text classification with few-shot learning classifies text into any number of categories with very few labeled examples per class (or none at all for some classes). This is especially useful in resource-constrained domains or in use cases where the manual data labelling process is extremely expensive, such as legal documents, financial documents, or long-form documents.
Few-shot learning is a great way to get started with NLP tasks if quality data is a bottleneck. With that being said, the current few-shot learning workflows have a number of drawbacks that can make them difficult to use in a production setting.
There are two key hurdles that hinder the adoption of few-shot techniques in production-level systems: the cost of training and running the very large models these techniques rely on, and the variability introduced by manual prompt engineering.
The first hurdle is cost. Some of the popular large neural models and their estimated training costs are shown below:
- GPT-3: ~$12 Million
- DALL-E: ~$500k to $1 million
- Stable Diffusion: ~$600k
In few-shot learning (FSL) based text classification, production results depend mostly on how much data variance we can cover with our prompt examples. As data variance grows through new training data or user inputs, the model's ability to leverage the few-shot examples to correctly complete the task degrades sharply. Techniques such as prompt optimization frameworks that turn baseline GPT-3 into in-context learning models (as shown in our other articles) can help, but they don't always solve the underlying problem.
In general, manual prompt engineering introduces a lot of uncertainty into the results of most current few-shot learning techniques. Moreover, generating handcrafted prompts requires a lot of effort on its own, to the extent that some FSL models like T-Few even rely on "dataset-specific prompts" to perform well in practice.
To overcome these two bottlenecks, this article covers SetFit, a new architecture proposed by research teams at Hugging Face, Intel Labs, and the UKP Lab. SetFit tackles both hurdles in the way of adopting FSL techniques and holds manifold advantages over existing few-shot approaches such as prompt-based GPT-3.
SetFit uses rich text embeddings, which removes the need for prompts entirely (meaning more reliability and less variability in the predicted results, solving the second bottleneck), and it is a very efficient framework that requires much less labeled training data than existing methods for the same tasks.
The diagram below compares SetFit with other methods for classification using few-shot learning. SetFit requires significantly fewer training samples and still achieves better accuracy than the other few-shot techniques.
Using only 8 labeled examples per class on the Customer Reviews (CR) sentiment dataset, SetFit gives results comparable to fine-tuning RoBERTa Large on the full training set of three thousand examples! It's important to note that the fine-tuned RoBERTa is 3 times larger than the SetFit model used. Along with this, the uncertainty bands show better reliability (less variability) in SetFit's results. All of this with comparatively much smaller and more cost-effective models!
The original SetFit research paper unveiled many benefits over current few-shot learning techniques. These benefits make SetFit a viable model for businesses and startups to deploy across applications in multiple languages.
SetFit is built on the Sentence Transformers (ST) architecture, which allows it to dispense with prompts altogether by virtue of rich text embeddings generated by fine-tuned Sentence Transformers. These rich sentence embedding vectors are then fed into a classifier head that assigns the text to one of the classes. This is a key benefit of SetFit: although prompt engineering has its own advantages, such as leveraging masked language modelling as the training goal for all tasks, experiments have shown that prompt quality plays a significant role in model performance, leading to very high variability in the predictions generated.
That variability is challenging in a high data variance environment. SetFit overcomes it by leveraging dense vector embeddings of whole sentences generated by fine-tuned STs, so its results show less variability and higher accuracy with less data. Beyond the metrics, this also saves the cost and labour involved in crafting high-quality or model-specific prompts.
SetFit does not require a pretrained LLM to achieve high accuracy. It works with sentence transformers and needs far fewer data examples to reach prediction accuracies comparable to large-scale models like T5 and GPT-3. For a similar (and in some cases better) level of performance, SetFit's training time is an order of magnitude shorter, making it far less computationally expensive than GPT-3. To put GPT-3 vs SetFit into perspective: SetFit is 1,600 times smaller than GPT-3 and, as we'll see shortly, beats it by impressive margins on multiple popular use cases. The same speed benefits apply at inference time, making it much easier for companies to get LLM-level few-shot learning in production.
SetFit can be used with any of the Sentence Transformers models available on the Hugging Face Hub. This means SetFit supports text classification not just in English but in multiple languages; all that is required on our end is to fine-tune a multilingual checkpoint from the hub.
The original paper runs experiments with SetFit over a range of datasets and scenarios, including distillation and non-English data. SetFit is compared to standard pretrained language model fine-tuning, to state-of-the-art PET- and PEFT-based methods such as ADAPET (2021) and T-Few (2022), and, even more interestingly, to recent prompt-free techniques such as PERFECT (2022).
To summarize the results from these experiments:
SetFit's performance across a number of few-shot text classification tasks shows that it outperforms the state-of-the-art prompt-free method and ranks alongside much larger prompt-based few-shot models. As an added benefit, SetFit now also supports multi-label text classification, i.e. classification problems where a data point can belong to more than one class at a time.
Let's take a look at some architectural details of SetFit and highlight the prime features that differentiate it from other few-shot learning methods. Since SetFit works with a different type of transformer model called Sentence Transformers, we'll first cover how sentence transformers are designed and trained.
SetFit is based on the Sentence Transformers architecture rather than on vanilla transformers that use token embeddings. Sentence Transformers use Siamese and triplet network structures to modify pretrained transformer models so that they generate "semantically meaningful sentence embeddings". A Sentence Transformer produces an embedding for a sentence as a whole (rather than per-token embeddings, as vanilla transformer models do), and the embeddings are semantically informed: the distance between pairs of semantically similar sentences is minimized, while the distance between semantically distant sentence pairs is maximized. This is why Sentence Transformers are a way to generate dense vector representations of textual data.
As a mathematical implication of this kind of embedding, two sentences can be compared using cosine similarity to get a quantitative measure of how similar they are. These sentence embeddings perform impressively on tasks involving the semantic properties of sentences, like semantic textual similarity, semantic search, or paraphrase mining, and SetFit leverages them to represent the input data. It is important to note that the construction of vanilla transformer-based models like BERT and RoBERTa makes them difficult to use for semantic similarity search and for unsupervised tasks like clustering; STs solve the challenge of exploiting the semantic properties of sentences.
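To make that concrete, here is a minimal sketch of comparing sentences with the sentence-transformers library (the checkpoint name is a common example, not a requirement of the paper):

```python
from sentence_transformers import SentenceTransformer, util

# Load a pretrained Sentence Transformer from the Hugging Face Hub.
model = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")

sentences = [
    "The invoice was paid on time.",
    "Payment for the invoice arrived before the due date.",
    "Our warehouse robots need maintenance.",
]

# Encode whole sentences into dense vectors.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarity: semantically close sentences score higher.
print(util.cos_sim(embeddings, embeddings))
```

The first two sentences should score close to each other and far from the third, which is exactly the property SetFit exploits.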
The training design for SetFit exhibits a simple two-step strategy as is demonstrated in the figure below:
1. Sentence transformer fine-tuning
2. Classification head training
Since the available data is limited, this step uses contrastive training to generate a much larger dataset of triplets spanning all classes, where positive and negative pairs are created through in-class and out-of-class selection strategies, and the third entry in each triplet is the corresponding label. For in-class pairs, the label is set to 1, producing a triplet of sentences drawn from the same class; for out-of-class pairs, the label is set to 0, producing a triplet of sentences drawn from different classes.
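Here is a minimal sketch of that pair-generation step, assuming a tiny labeled dataset (all data here is illustrative):

```python
import itertools
import random

# A tiny labeled dataset: (sentence, class) pairs.
labeled = [
    ("Great product, works perfectly", "positive"),
    ("Absolutely love this phone", "positive"),
    ("Broke after two days", "negative"),
    ("Terrible customer service", "negative"),
]

# Build contrastive triplets: in-class pairs get label 1.0,
# out-of-class pairs get label 0.0.
pairs = [
    (text_a, text_b, 1.0 if cls_a == cls_b else 0.0)
    for (text_a, cls_a), (text_b, cls_b) in itertools.combinations(labeled, 2)
]
random.shuffle(pairs)

print(pairs[0])  # e.g. ("Great product, works perfectly", "Broke after two days", 0.0)
```

Even 8 examples per class blow up into a far larger set of pairs, which is what lets the Sentence Transformer fine-tune effectively on so little original data.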
After this, a Sentence Transformer model is fine-tuned on these sentence pairs (or triplets) to generate dense vector representations.
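A hedged sketch of that fine-tuning step using the classic sentence-transformers fit API (the checkpoint and hyperparameters are assumptions, not the paper's exact settings):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")

# Contrastive triplets from the pair-generation step above.
pairs = [
    ("Great product, works perfectly", "Absolutely love this phone", 1.0),
    ("Great product, works perfectly", "Broke after two days", 0.0),
]
train_examples = [InputExample(texts=[a, b], label=label) for a, b, label in pairs]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Cosine similarity loss pulls in-class pairs together and pushes
# out-of-class pairs apart in the embedding space.
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_loader, train_loss)], epochs=1, warmup_steps=10)
```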
After the Sentence Transformer is fine-tuned, we take the original (limited) training data and generate one sentence embedding per training sample. These embeddings (as inputs) together with the original class labels form the dataset used to train the classification head in the second step. The original research paper and architecture design use a logistic regression model as the text classification head.
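A minimal sketch of the head-training step with scikit-learn (the hub checkpoint stands in for the fine-tuned encoder from step one):

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Stand-in for the fine-tuned Sentence Transformer from step one.
encoder = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")

train_texts = ["Great product, works perfectly", "Broke after two days"]
train_labels = [1, 0]  # 1 = positive, 0 = negative

# One dense embedding per original training sample.
X_train = encoder.encode(train_texts)

# Logistic regression head trained on the embeddings, as in the paper.
head = LogisticRegression()
head.fit(X_train, train_labels)
```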
After training, SetFit is used for inference, or generating predictions for a text classification task, in pretty much the same way. The fine-tuned Sentence Transformer first encodes an unseen input sentence to produce a sentence embedding. The logistic regression head trained in the second step then classifies the input text by producing a class prediction from that embedding.
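Continuing the sketch above, inference reuses the same two components:

```python
# `encoder` and `head` come from the training sketch above.
new_texts = ["This exceeded my expectations", "Stopped working after a week"]
X_new = encoder.encode(new_texts)

print(head.predict(X_new))        # predicted class per input
print(head.predict_proba(X_new))  # class probabilities per input
```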
By just changing the underlying Sentence Transformer in the fine-tuning step, SetFit can be used for text classification in multiple languages, including German, Japanese, Mandarin, French, and Spanish, in both in-language and cross-lingual settings.
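In practice, the open-source setfit library wraps both training steps behind one trainer. Here is a hedged sketch of swapping in a multilingual checkpoint, assuming the pre-1.0 SetFitTrainer API (the dataset and hyperparameters are illustrative):

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# A multilingual Sentence Transformer checkpoint from the Hugging Face Hub.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
)

# The paper's few-shot setting uses ~8 labeled examples per class;
# two are shown here to keep the sketch short.
train_dataset = Dataset.from_dict({
    "text": ["Das Produkt ist ausgezeichnet", "Service client épouvantable"],
    "label": [1, 0],
})

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # contrastive fine-tuning objective
    num_iterations=20,                # contrastive pairs generated per sample
    batch_size=16,
)
trainer.train()

# The multilingual encoder lets the classifier generalize across languages.
print(model.predict(["El producto dejó de funcionar", "Amazing quality"]))
```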
In this section, we will demonstrate the practical value of using SetFit for business use cases by answering the following question:
How does SetFit fare in comparison to other few-shot learning methods?
First of all, let us compare SetFit with other models available for few-shot learning tasks to get an idea of how cost- and compute-effective SetFit can be:
The table shows the model sizes in decreasing order of parameter count. SetFit is tiny compared to the biggest model, GPT-3, which contains a massive 175 billion parameters. More parameters mean a larger model, which makes deployment, inference, and training more expensive and time-consuming. While the tradeoff for larger models is normally higher accuracy and better data variance coverage, we can see below that SetFit performs exceptionally well on a number of tasks relative to its larger competitors in a few-shot environment.
Let us first compare the performance of SetFit with other models on some standard datasets; we will compare SetFit against baselines like PERFECT and ADAPET, and against fine-tuning of standard transformers.
Each table uses a different training set size (N=): in the first two, N is the number of training samples per class, and in the last, it is the entire training data set. SetFit achieves results comparable to the leader on each test dataset despite being significantly smaller. SetFit performs on par with T-Few 3B in most categories while being 27x smaller!
A box plot comparing SetFit's performance against other methods on 3 classification datasets is also shown below:
While the results so far cover standard datasets that are not specifically targeted at few-shot learning, it is worth looking at the RAFT benchmark, which is specifically designed for benchmarking few-shot methods. The following table shows the leaderboard performance of the different few-shot learning methods on the RAFT benchmark (results as of September 2022).
SetFit also supports zero-shot text classification and performs competitively with (and sometimes outperforms) the zero-shot pipeline in Transformers while being an impressive 5-15 times faster! Here is a comparison of accuracy scores for zero-shot classification using SetFit vs zero-shot classification using Transformers.
While inference performance is often the most important metric businesses evaluate when designing production-level systems, cost and latency matter most when resources are limited. Slow inference times hinder the adoption of state-of-the-art models into a production environment for most companies. SetFit's training and inference costs are dramatically lower than those of other models in the few-shot space. While this can partially be attributed to the reduced size of the model, we've seen above that the smaller size does not mean lower accuracy in many use cases.
SetFit is extremely fast and efficient to train, with performance competitive with larger models like GPT-3 and T-Few. As we can see above, the accuracy drop is only 1.7% with 28 times faster training. This means you can train SetFit on an Nvidia V100 with 8 labelled examples per class in 30 seconds, for a cost of $0.025. Training T-Few 3B on the exact same experiment requires an NVIDIA A100, takes 11 minutes, and costs around $0.70: 28 times more! SetFit can even run on a single GPU like the ones found in Google Colab notebooks, and you can even train it on CPU in just a few minutes. This is a huge boost for companies with limited resources that still want to push out model versions in a fast-paced AI development environment.
As we've seen, SetFit achieves up-to-the-mark performance while being much smaller in size, with much faster training and inference times. Through model distillation, we can get even smaller SetFit models for low-resource scenarios.
Deep learning research has shown model distillation to be an effective technique for reducing computational load while preserving much of the original model's performance. The SetFit paper accordingly compares SetFit as a student model vs a vanilla transformer as a student model in a few-shot distillation setup with a limited amount of unlabelled training data at hand. Both student models are nearly the same size (15M parameters) for a fair comparison in few-shot learning scenarios.
The graph below compares SetFit and transformer student models in terms of accuracy vs the size of the unlabeled training data on the AG News, Emotion, and SST-5 datasets. The SetFit student clearly outperforms the transformer-based student on smaller datasets, showing that even with distillation, SetFit is a very efficient and high-performing few-shot learning technique across a range of environments and resource constraints.
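The distillation recipe itself is straightforward: a trained SetFit teacher pseudo-labels the unlabeled pool, and a smaller student learns from those predictions. A minimal sketch under that assumption (the checkpoints and data are illustrative, not the paper's exact setup):

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Teacher: in practice this would be the fine-tuned SetFit encoder + head.
teacher_encoder = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")
teacher_head = LogisticRegression().fit(
    teacher_encoder.encode(["Great product", "Broke after two days"]), [1, 0]
)

# Pool of unlabeled in-domain text.
unlabeled_texts = [
    "Works exactly as advertised",
    "Five stars, would buy again",
    "Arrived damaged and unusable",
    "Waste of money, avoid this",
]

# 1. The teacher pseudo-labels the unlabeled pool.
pseudo_labels = teacher_head.predict(teacher_encoder.encode(unlabeled_texts))

# 2. A much smaller student encoder + head trains on the pseudo-labels.
student_encoder = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L3-v2")
student_head = LogisticRegression().fit(
    student_encoder.encode(unlabeled_texts), pseudo_labels
)
```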
Width.ai builds custom NLP software (just like this SetFit model) for businesses to leverage internally or as part of their product. Schedule a call today and let's talk about whether SetFit is right for your use case or if a pretrained LLM would work better. Let's Talk!