Building Production-Grade spaCy Text Classification Pipelines for Business Data
Unlock the full potential of spaCy with this guide to building production-grade text classification pipelines for business data.
You have probably heard of OpenAI's GPT-3 and ChatGPT by now. They're at the heart of all the news about artificial intelligence (AI) becoming sentient and taking over everyone's jobs. They're quite amazing, generating poems on the fly and spitting out software code. But, being cloud-based services, they have disadvantages that make them impractical for some businesses and industries.
Imagine the possibilities for your business if you could, instead, use your own private AI that compares well to GPT-3 on some tasks and is completely under your control. In this article, you'll learn about this versatile AI model and how to use it for language tasks like document summarization.
Summarization produces a shorter version of a document, called a summary, that ideally shows the following characteristics:
In natural language processing (NLP), we use two types of summarization:
In the real world, often both are used together. You may want abstractive summarization for some sections of the document and extractive summarization for others.
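To make the extractive approach concrete, here is a minimal frequency-based extractive summarizer. This is a toy sketch for illustration only: the naive sentence splitting and word-frequency scoring are assumptions of this example, not the method used by any of the models evaluated below.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Score each sentence by the average document-wide frequency of its
    words, then return the top-scoring sentences in their original order."""
    # Naive sentence splitting on ., ! and ? (real systems use a tokenizer).
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Preserve document order so the summary reads naturally.
    return " ".join(s for s in sentences if s in top)
```

An abstractive model, by contrast, would generate new sentences rather than select existing ones.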
In the next section, you'll learn about a deep neural network that can do abstractive summarization.
BART stands for Bidirectional and Auto-Regressive Transformers, a reference to its neural network architecture. BART pairs this architecture with a pre-training strategy that makes it useful as a sequence-to-sequence model (seq2seq model) for many NLP tasks, like summarization, machine translation, categorizing input text sentences, or question answering under real-world conditions. In this article, we'll focus on its summarization capabilities.
BART is just a standard encoder-decoder transformer model. Its power comes from three ideas:
During pre-training, BART learns a language model that contains all the complexities and nuances of a real-world natural language. By deliberately introducing all kinds of noise, deletions, and modifications in its input data to make it difficult to learn, BART gains the ability to generate linguistically correct sequences even when input text is noisy, erroneous, or missing.
This denoising language model is versatile and can be adapted for any downstream text generation or understanding tasks like summarization, question-answering, machine translation, or text classification.
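The noising described above can be sketched as a function applied to training text. This toy version implements one of BART's noise types, text infilling, by replacing a contiguous span of tokens with a single mask token; the fixed span length here is a simplification (the BART paper samples span lengths from a Poisson distribution).

```python
import random

def text_infill(tokens, mask_token="<mask>", span_len=2, seed=0):
    """Replace one contiguous span of `span_len` tokens with a single
    mask token, in the spirit of BART's text-infilling noise. During
    pre-training, the model learns to reconstruct the original tokens."""
    rng = random.Random(seed)
    if len(tokens) <= span_len:
        return [mask_token]
    start = rng.randrange(len(tokens) - span_len + 1)
    return tokens[:start] + [mask_token] + tokens[start + span_len:]
```

Because a single mask can hide several tokens, the model must also learn how much text is missing, which is part of what makes the objective hard and the learned language model robust.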
To learn to summarize at a high level, a pre-trained BART model is fine-tuned on a summarization dataset. Such a dataset consists of pairs of input documents and their manually created summaries, often written by subject matter experts who have a clear idea of the ideal "goal state" summary for the use case. Summaries are inherently "relative": what makes a summary good depends largely on the user and the information they expect to see in it.
Fine-tuning refines the pre-trained language model's network weights to learn summarization-specific concepts like paraphrasing, saliency, generalization, hypernymy, and more. A fine-tuned BART model has learned the underlying relationships between various patterns in the input text and the goal output text.
How does the BART summarization model compare to other summarization models out there? Research groups still compare these models using the recall-oriented understudy for gisting evaluation (ROUGE) metrics. But ROUGE simply counts the words and n-grams shared between the generated and reference summaries: the more there are, the higher the score. Since abstractive models paraphrase the text, they may not score well, and high scores may not translate into good summaries under real-world conditions.
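To see why ROUGE rewards surface overlap, here is a minimal ROUGE-1 recall score (unigram overlap against the reference). Real evaluations use libraries such as `rouge-score` with stemming and precision/recall/F-measures, so treat this as a sketch of the idea only.

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """Fraction of the reference summary's unigrams that also appear
    in the candidate summary, with counts clipped per word."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], n) for w, n in ref.items())
    return overlap / max(sum(ref.values()), 1)
```

Note that a perfectly good paraphrase like "a feline rested" scores 0.0 against the reference "the cat sat", which is exactly why abstractive models can be underrated by this metric.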
So, we compared these models qualitatively by reviewing their summaries, as readers in the real world are likely to. This is the preferred method of evaluating all types of natural language generation. We evaluated them in four industries where long-form texts are common and summaries are useful:
We also evaluated them on longer documents like research papers and books, a real-world phenomenon seen in all four industries.
We evaluated 12 models: four state-of-the-art (SOTA) abstractive models that can be run locally, four SOTA extractive models that can be run locally, and the four famous GPT-3 models running on OpenAI's cloud. Here are the 12:
All eight local models were run on Kaggle's CPUs without any GPUs. For the four GPT-3 models, the prompt was "Summarize this in 4 sentences:" followed by the document text.
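For reference, the GPT-3 calls looked roughly like the sketch below, using OpenAI's legacy Python completions client. The model name, `max_tokens`, and `temperature` values are illustrative assumptions, not the exact settings of our experiments; only the prompt wording comes from the text above.

```python
def build_prompt(document: str) -> str:
    """The instruction prepended to each document for the GPT-3 models."""
    return "Summarize this in 4 sentences:\n\n" + document

def summarize_with_gpt3(document: str, model: str = "text-davinci-002") -> str:
    """Call OpenAI's completions endpoint. Requires the `openai` package
    and an API key; parameter values here are assumptions for illustration."""
    import openai  # imported lazily so the sketch loads without the package
    resp = openai.Completion.create(
        model=model,
        prompt=build_prompt(document),
        max_tokens=200,
        temperature=0.3,
    )
    return resp.choices[0].text.strip()
```

Swapping the `model` argument among the Davinci, Curie, Babbage, and Ada variants is all that changes between the four cloud runs.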
In the sections below, we analyze the summaries generated by these models in each industry.
We started with a relatively simple use case of summarizing essays. Both students and teachers may use such a feature to improve their understanding and writing. It's useful outside the education sector too — to summarize news articles, for example.
We evaluated the models on this student essay from Kaggle's automated essay scoring dataset:
As you can see, the essay has some spelling mistakes, grammar errors, and structural weaknesses. We can consider it a rather noisy input, the kind that's quite common in the real world. The student has used the first-person point of view.
How do the models fare?
The four GPT-3 abstractive models fare well, with all of them changing the perspective to the third person and getting rid of the spurious text:
The four extractive models, though SOTA, retain too many errors and spurious text to be useful in practice. Use them only if your documents are of good quality.
From simple general essays, we move to the highly specialized domain of medical reports. Their summaries are useful to doctors, nurses, researchers, and insurers.
We use a medical report from the Kaggle medical transcriptions dataset as the input document.
The report contains multiple sections that may be important to different medical specialists. An ideal summary must retain all the sections and produce a summary per section. However, due to limits imposed by the models on the input lengths, we had to crop out the last 10% or so of the input document. These limits are not inherent and can be increased by retraining the models.
The first three GPT-3 models — Davinci, Curie, and Babbage — retain and summarize much of the relevant detail (along with some irrelevant detail), though they discard the sections. Considering the size differences between these models, the similarity of their outputs is interesting.
In contrast, the Ada model reproduces all the sections and also summarizes each one to some extent, but it does feel like it went off-track and retained too much.
The four extractive models discard too many details. But this is not inherent and can be overcome by increasing parameters like the number of sentences to retain.
We now move to another specialized domain: legal documents like contracts, agreements, and clauses.
For our tests, we use a rather lengthy legal contract from the Contract Understanding Atticus Dataset (CUAD).
It has about 23 clauses covering all aspects of a business relationship. Again, due to model limitations (which can be increased), we had to crop the document to about 40%.
The Davinci GPT-3 model produces a very high-quality abstractive summary with agreement details. Babbage generates a rather extractive summary, while Curie fails on this document, possibly due to its odd format.
All four extractive models do well at identifying the signing parties, but the usefulness of the other details they retain is debatable, as it is for most summarization of information-dense documents.
The final comparison is on the challenge of summarizing scientific research papers. We use an astronomy research paper from the arXiv summarization dataset as our test sample.
It's a lengthy document full of jargon, numbers, and noisy text like LaTeX symbols. As in the previous two domains, we had to crop it, retaining just about 12% of the full document, though that includes the introduction and overview sections of a typical research paper.
All four GPT-3 models do a fairly good job of summarizing the paper's chosen topic. Davinci takes the task quite literally and summarizes everything, but leaves out the most important one-line summary. Curie, Babbage, and Ada include a one-line summary of the paper, and the first two even retain considerable detail.
All four extractive models fare very poorly. It looks like they are tripping over some spurious text.
Three of the four analyses above used long documents that went beyond the limits imposed by the models. For example, the text supplied to BART must not exceed 1,024 subword tokens, so longer documents have to be cut. Important context and details that the summarizers could have used are lost.
In the real world, most documents like contracts and research papers breach these rather small limits. So, what can you do to retain all the information without cutting out anything? There are a couple of approaches you can take:
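One common workaround is to chunk the document to fit the model's limit, summarize each chunk independently, and concatenate the partial summaries. The sketch below illustrates the idea; `summarize` stands in for any model call, and the 1,024-token budget is approximated by whitespace-separated words rather than BART's subword tokens, which a real pipeline would count with the model's own tokenizer.

```python
def chunk_words(text, max_tokens=1024):
    """Split `text` into chunks of at most `max_tokens` whitespace-separated
    words. Production code should split on sentence boundaries and count
    subword tokens, not words."""
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

def summarize_long_document(text, summarize, max_tokens=1024):
    """Summarize each chunk independently and join the partial summaries."""
    return " ".join(summarize(chunk) for chunk in chunk_words(text, max_tokens))
```

The trade-off is that cross-chunk context is lost at chunk boundaries, which is why splitting on natural section breaks tends to work better than splitting at arbitrary word counts.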
Based on the comparisons above, we can observe many benefits (and potential benefits) of using BART for your summarization needs.
BART's not the only model that learns to generate text when text is noisy or missing. BERT deliberately masks tokens, and PEGASUS masks entire sentences. But what sets BART apart is that it explicitly uses not just one but multiple noisy transformations. So, BART learns to generate grammatically correct sentences even when supplied with noisy or missing text. That's why it's the only model in our experiments that manages to get rid of most of the spurious text in every domain.
The BART paper demonstrates this by scoring better than other comparable models on most benchmarks:
As we saw above, BART produces good summaries across a range of input types. The other models are more finicky, faring very well sometimes and very poorly at other times. Such inconsistency generally requires more post-processing of the generated text using complex rules. BART doesn't suffer from those problems as much.
Related to the previous point, the grammatical errors of the other models may be particularly difficult to eliminate during post-processing. BART manages to generate grammatically correct text almost every time, most probably because it explicitly learns to handle noisy, erroneous, or spurious text.
As we saw, BART's summaries are often comparable to GPT-3's Curie and Babbage models. So, businesses can get close to GPT-3's quality and versatility but with lower costs and more flexibility regarding fine-tuning and deployments, since BART can run locally on private servers and GPUs.
Some industries are just not comfortable about uploading their internal sensitive documents to a cloud provider like OpenAI or AWS. BART enables them to summarize their documents securely with good quality on their private servers.
GPT-3 is an amazing service but also has some limitations:
A model like BART that you can fine-tune and run on your servers doesn't suffer these problems as badly.
The HuggingFace library provides excellent pre-trained models and scripts to help you fine-tune BART on your custom data.
BART and PEGASUS are the most common backbones on which more advanced models implement their improvements. Advanced models like MoCa, BRIO, and SimCLS, which head the SOTA leaderboards on various datasets, use BART as their backbone.
Among the abstractive non-cloud models, BART often takes less time than T5 or PEGASUS. Summarization is near real-time.
BART inference runs fine on CPUs without requiring any GPUs. For training, of course, you'd want a GPU.
If we compare model file sizes (as a proxy to the number of parameters), we find that BART-large sits in a sweet spot that isn't too heavy on the hardware but also not too light to be useless:
A versatile language model like BART is essentially a private, self-managed GPT-3-level model for your company's needs. You can use it for intelligent document understanding, machine translation, summarization, document classification, and much more while keeping all your sensitive business information secure and private. Contact us for expertise in building up your AI and machine learning capabilities!