A Guide to spaCy's Hidden Benefits for Real-World NLP
A technical guide to spaCy's unique under-the-hood advantages, illustrated with text classification use cases
SpaCy describes itself as industrial-strength natural language processing (NLP). But what makes it better than other Python frameworks like the natural language toolkit (NLTK), scikit-learn, TensorFlow, or PyTorch? Unfortunately, many of its unique capabilities are under the hood and easy to miss. Its documentation prefers technical details and is rather modest about marketing its big-picture benefits.
In this article, we dive into spaCy's unique but hidden benefits. We’ll illustrate them by using spaCy text classification for some interesting use cases.
In our article NLP with spaCy, we explored spaCy's features and common use cases like sentiment analysis in detail. So here, we'll just do a quick demo of spaCy for you to get some hands-on familiarity with it. All six steps are combined into a single runnable sketch after the list:
1. Load a pre-trained pipeline:
2. Apply the pipeline to your text:
3. Examine universal and language-specific part-of-speech tags:
4. Get named entities:
5. Visualize named entities:
6. Examine token embeddings:
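Here's a minimal sketch of all six steps, assuming spaCy is installed and the small English pipeline has been downloaded with `python -m spacy download en_core_web_sm`:

```python
import spacy
from spacy import displacy

# 1. Load a pre-trained pipeline
nlp = spacy.load("en_core_web_sm")

# 2. Apply the pipeline to your text
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# 3. Examine universal and language-specific part-of-speech tags
for token in doc:
    print(token.text, token.pos_, token.tag_)

# 4. Get named entities
for ent in doc.ents:
    print(ent.text, ent.label_)

# 5. Visualize named entities (returns HTML when run outside a notebook)
html = displacy.render(doc, style="ent")

# 6. Examine token embeddings (small pipelines approximate vectors from
#    context; use the md/lg pipelines for true word vectors)
print(doc[0].vector.shape)
print(doc[0].similarity(doc[-2]))
```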
Notice that with just one call on the pipeline, we got out so much useful information — tokens, part-of-speech tags, parse tree, word roots, named entities, word embeddings, spans, noun phrases, and more. Plus, you can apply algorithms like vector norms and cosine similarities to get basic document search features in your application.
The demo above is probably the extent to which most developers know and use spaCy. But it has a lot more to offer than you’d think. In the next few sections, we'll explore spaCy's unique advantages and underlying technical designs.
SpaCy's top-level application programming interface (API) is already exceptionally developer-friendly. Just call "nlp('your text data')" and you're done. It automatically tokenizes, adds part-of-speech annotations, finds named entities, guesses text categories, and more. If you need to customize a few steps, that's simple too.
But sometimes, that may not be enough for real-world requirements. Your data may need special tokenization. Or your language may not be supported by spaCy. Or you may want to use a domain-specific pre-trained model.
In such situations, developers tend to string together multiple frameworks using simplistic glue code. They may still use spaCy just to get the tokens. Then they pass those tokens through a PyTorch model to get high-quality embeddings. Next, they pass those embeddings through a sklearn random forest for text classification. And perhaps apply some custom rules at the end to handle any edge cases.
However, such glued pipelines break often and need constant code changes to keep up with new requirements. Plus, the simplistic glue code often becomes an information bottleneck that fails to pass on important knowledge from one stage to the next. Not every application developer will know concepts like forward and backward propagation, and even if they do, designing knowledge flow between components is non-trivial.
SpaCy solves all this elegantly. Instead of treating a pipeline as a mere idea, spaCy provides concrete APIs to construct pipelines and insert custom components, including machine learning models or rule-based models. SpaCy also has built-in support for the forward and backward flow of knowledge through all the components of a pipeline. SpaCy thus gives a solid scaffolding for any real-world text processing scenario while avoiding brittle glue code.
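As a small illustration (the component name and rule here are made up), adding a rule-based stage to a pre-trained pipeline takes just a decorator and an add_pipe call:

```python
import spacy
from spacy.language import Language

@Language.component("refund_rules")
def refund_rules(doc):
    # hypothetical rule-based edge-case handler
    doc.user_data["mentions_refund"] = any(t.lower_ == "refund" for t in doc)
    return doc

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("refund_rules", after="ner")  # runs right after named entity recognition
doc = nlp("I was double-charged and I want a refund.")
print(doc.user_data["mentions_refund"])  # True
```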
At the time of writing this in January 2023, spaCy fully supports 24 languages and another 48 partially. Full support for a language means it typically has at least three pre-trained pipelines, ranging from an efficient small model to a more accurate large model. A pre-trained spaCy model comes with built-in support for common tasks like tokenization, tagging, parsing, lemmatization, and named entity recognition. Partial support means just basic tokenization features that work for most text.
Some major languages also have transformer-based pre-trained pipelines that improve accuracy further.
SpaCy allows you to create custom components and plug them into any spaCy pipeline. This is how specialized libraries like scispacy work. You can also create an entire custom pipeline implementation if your design requires special dependencies between different components.
For example, you can seamlessly combine machine learning components with rule-based post-processing components to handle real-life edge cases.
SpaCy abstracts its token-to-embedding conversion through the Tok2Vec, or token-to-vector, interface. Any implementation with this interface can be plugged into any spaCy pipeline. The default implementation is a convolutional model, HashEmbedCNN, that defaults to 96-dimensional embeddings but can go up to 300 dimensions.
By default, spaCy shares a single Tok2Vec embedding model (and its weights) across multiple components. It generates embeddings for them whenever they ask and receives gradient updates (via Tok2VecListener) from them during fine-tuning. Alternatively, you can configure every component to have its own independent Tok2Vec embedding layer. Both wirings are sketched in the config excerpt below.
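As an abridged config sketch (spaCy v3 architecture names; required width and size settings omitted for brevity), the shared wiring registers one tok2vec component that downstream components listen to, while the independent wiring gives a component its own embedding layer:

```ini
# Shared: one embedding component, downstream components attach listeners
[components.tok2vec]
factory = "tok2vec"

[components.ner.model.tok2vec]
@architectures = "spacy.Tok2VecListener.v1"

# Independent: the component owns its embedding layer outright
[components.textcat.model.tok2vec]
@architectures = "spacy.HashEmbedCNN.v2"
```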
Whether it's shared or multiple, the Tok2Vec abstraction allows you to easily swap out the default convolutional model with any other embedding model, including highly capable transformer models like BERT, RoBERTa, BART, and similar.
Thanks to their higher capacity, with embedding dimensions of 768, 1,024, or more, transformer-based embedding models can produce higher accuracy than the default models. Their high learning capacity also means that in the shared layer mode, the same transformer model can get updates from all the components and achieve accurate multi-task learning.
SpaCy supports any pre-trained transformer model from the HuggingFace repository or the PyTorch hub.
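A quick sketch of both routes, assuming the spacy-transformers extension and the en_core_web_trf pipeline are installed:

```python
import spacy

# Route 1: a ready-made transformer-based pipeline
# (pip install spacy[transformers]; python -m spacy download en_core_web_trf)
nlp = spacy.load("en_core_web_trf")

# Route 2: a blank pipeline with a transformer component backed by any
# HuggingFace model (the partial config merges into the factory defaults)
nlp2 = spacy.blank("en")
nlp2.add_pipe("transformer", config={"model": {"name": "roberta-base"}})
```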
SpaCy's deep learning capabilities come from its use of a standalone deep learning library called thinc (pronounced "think"). Not only does this library have its own data structures and logic for standalone deep learning, but it also provides the ability to wrap and reuse models and layers from other frameworks like PyTorch and TensorFlow. This way, spaCy's able to integrate absolutely any PyTorch, TensorFlow, or MXNet model or layer out there, including all the latest state-of-the-art models.
SpaCy's architecture is unique because it has multiple planes of abstraction. On one plane, you can see it as just another natural language toolkit without ever knowing about any of its deep learning capabilities.
But underneath that, it's also a full-fledged, end-to-end deep learning framework similar to PyTorch or TensorFlow.
SpaCy manages to do this without compromising on its pipeline abstraction. Essentially, every component in the pipeline is trainable and uses deep neural networks under the hood. Through a clever design involving listeners, it allows gradient updates to backpropagate through all the components during fine-tuning and achieves end-to-end multi-task learning.
If for some reason you don't want end-to-end learning, or want to keep one intermediate component out of it, spaCy supports independent per-component training too. We've explained these design details in the transformers section above.
Like PyTorch and TensorFlow, spaCy's deep learning library, thinc, uses Cython components under the hood for speed and efficiency on a single machine. At the same time, spaCy also supports parallel and distributed model training on multiple machines. You can use the same training code transparently for both local and distributed training, and keep all aspects of distributed training outside your code in a config file.
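For instance (assuming the optional spacy-ray extension), the same config-driven training run can be fanned out across workers with one command:

```bash
pip install spacy-ray
# same config file as local training, now distributed across two workers
python -m spacy ray train config.cfg --n-workers 2
```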
Its ability to transparently switch to distributed training for large models makes spaCy unique compared to other frameworks like NLTK and scikit-learn.
Its use of Cython data structures and operations enables spaCy to run efficiently, use less memory, exploit multicore capabilities of CPUs, and even run on GPUs.
SpaCy comes with a powerful declarative configuration system that allows you to build custom pipelines that can be finely configured at every level, from the bottom-most helper to the top-level pipeline components.
This enables applications to keep pipeline setups outside their code and swap pipelines on demand by simply loading a different configuration file without any code modification. You can easily switch your application to a prototype, development, staging, or production machine learning model at runtime.
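A minimal sketch of what this enables (the model paths and environment variable here are hypothetical):

```python
import os
import spacy

# Pick the pipeline per environment: no code changes, just a different
# packaged model or config on disk.
model_path = os.environ.get("NLP_MODEL", "models/prod/model-best")
nlp = spacy.load(model_path)
```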
In the following sections, you'll learn in depth about two applications of text classification and their implementation using spaCy.
What is intent classification? In applications like customer service chatbots and ticketing systems, understanding the customer's intention is critical. Are they complaining about the functioning of your product or service? Or is it a billing problem? Or do they want to upgrade or downgrade their subscription but aren't using the right words?
Intent classification understands what the customer wants and applies an accurate intention label for filing, routing, and resolving the issue properly.
For intent classification with spaCy, you'd use spaCy's text categorization component. Let's first understand its architecture and capabilities.
SpaCy's TextCategorizer, or textcat, is a trainable pipeline component for any type of single-label or multilabel text categorization task, including whole-document classification, intent classification, or sentiment analysis. You must explicitly append it to a pipeline using add_pipe because it's not included by default.
Internally, TextCategorizer can use any thinc model that accepts a list of documents and produces a matrix of class probabilities.
SpaCy's default text classification model is an ensemble of two models — a traditional bag-of-words model and a neural model. Standalone TextCatBOW and TextCatCNN architectures are also available. Unlike most other spaCy components, none of these — the ensemble, TextCatBOW, or TextCatCNN — comes pre-trained, and it's up to you to train them from scratch on your training data.
There are a couple of ways to customize this text categorization architecture. You can either keep the ensemble model and plug a custom embedding model into its Tok2Vec attribute, or you can throw away the ensemble entirely and use a custom thinc model.
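A sketch of both options (the label names come from the banking77 example below, and the TextCatBOW settings shown are illustrative defaults):

```python
import spacy

# Option 1: keep the default ensemble architecture
nlp = spacy.blank("en")
textcat = nlp.add_pipe("textcat")
textcat.add_label("card_arrival")
textcat.add_label("card_linking")

# Option 2: swap in a different registered architecture via config
bow_config = {
    "model": {
        "@architectures": "spacy.TextCatBOW.v2",
        "exclusive_classes": True,
        "ngram_size": 1,
        "no_output_layer": False,
    }
}
nlp2 = spacy.blank("en")
textcat_bow = nlp2.add_pipe("textcat", config=bow_config)
```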
For this example, we'll use the banking77 intent classification dataset to train the textcat component as a banking intent classifier. The dataset contains customer queries received by a bank. They're categorized into 77 intents like card_arrival, card_linking, card_payment_wrong_exchange_rate, and so on.
Instead of writing training code, the easiest way to train a textcat or any other custom spaCy model is spaCy projects, which comes with a bunch of templates that we can simply customize for our data. We first clone the repository, create a copy of the textcat_demo pipeline directory, and call it textcat_intent. The full command sequence is sketched after the steps below.
1. First, we replace the data files under "assets/" with the training, test, and evaluation files from our dataset and modify project.yml suitably.
2. We modify the convert.py script to read the dataset under "assets/" into DocBin files that spaCy can read efficiently while training.
3. Then we run the data converter. The files it creates are used directly for training.
4. We train the model.
5. We evaluate the metrics.
6. We package the trained pipeline as an installable pip package.
7. We install this custom pipeline package.
8. We load it like any other pipeline.
9. And finally, we do a quick inference check to compare the ground truth label with the predicted label.
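Putting the steps together, the command sequence looks roughly like this (the workflow names follow the textcat_demo template's project.yml, and the package name and version are placeholders we'd configure ourselves):

```bash
python -m spacy project clone pipelines/textcat_demo
# ... copy to textcat_intent, swap in the banking77 assets, edit project.yml ...
python -m spacy project run convert    # step 3: build DocBin training files
python -m spacy project run train      # step 4: train the textcat model
python -m spacy project run evaluate   # step 5: metrics on the test set
python -m spacy package training/model-best packages --build wheel   # step 6
pip install packages/en_textcat_intent-0.0.0/dist/en_textcat_intent-0.0.0-py3-none-any.whl   # step 7
```

And the inference check (steps 8 and 9), with a hypothetical package name:

```python
import spacy

nlp = spacy.load("en_textcat_intent")  # hypothetical package name
doc = nlp("I ordered a card two weeks ago and it still hasn't arrived.")
predicted = max(doc.cats, key=doc.cats.get)
print(predicted)  # expected ground truth: card_arrival
```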
Your custom intent classifier is now ready for production deployment on your server!
In spaCy, a span is any contiguous slice of a document and has a start and end position. A span may align with a well-defined unit like a word, sentence, or paragraph, or it may cross multiple words, sentences, and paragraph boundaries with ad hoc start and end positions.
Span categorization examines arbitrary slices, which may even overlap with one another, and assigns labels to them. Applications of span categorization include extracting key phrases, labeling clauses in long documents, and detecting health-effect mentions in product reviews (as in the Healthsea demo below).
Let's learn about spaCy's span categorization architecture next.
SpaCy's SpanCategorizer, or spancat, is the pipeline component responsible for span categorization. It consists of a suggester component, typically an n-gram extractor, and a classifier model.
The classifier model can be any thinc Model that accepts a list of documents and their spans and produces a probability matrix. SpaCy's default model is called SpanCategorizer.v1. It consists of a token-to-vector embedding layer, a reducer that pools each span's token vectors into a single span vector, and a scorer that turns that vector into per-class probabilities.
You can customize spancat too just like textcat above, either by using your own embedding model or replacing the entire "SpanCategorizer.v1" model with your custom logic.
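A minimal setup sketch (the spans key and label are made up; the n-gram suggester is one of spaCy's registered suggester functions):

```python
import spacy

nlp = spacy.blank("en")
spancat = nlp.add_pipe("spancat", config={
    "spans_key": "health_effects",  # hypothetical span group name
    "suggester": {"@misc": "spacy.ngram_suggester.v1", "sizes": [1, 2, 3]},
})
spancat.add_label("POSITIVE_EFFECT")
# After training, predictions land in doc.spans["health_effects"].
```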
For span categorization, we follow a similar workflow as above, but this time we'll just experiment with one of the spancat demos that ship with spaCy projects. It's called Healthsea, and its objective is to analyze user reviews of health supplements to detect any health impacts — each review is annotated with spans marking health aspects (like "joint pain") and the effect the product had on them. The commands are sketched after the steps below.
1. Like before, we start by running a preprocessing script to convert the raw data to a binary format suitable for spaCy.
2. Then we train the spancat component on this data.
3. We create an installable package for the pipeline.
4. Finally, we install it using pip, load the model, and apply it to our text.
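Sketching those four steps (the workflow names assume the Healthsea project's defaults, and the package and spans-key names below are illustrative):

```bash
# clone the Healthsea demo (the exact template path depends on the projects repo)
python -m spacy project clone tutorials/healthsea
python -m spacy project run preprocess   # step 1: raw data -> spaCy binary format
python -m spacy project run train        # step 2: train the spancat component
python -m spacy project run package      # step 3: build an installable package
```

```python
import spacy

nlp = spacy.load("en_healthsea")  # hypothetical package name
doc = nlp("This supplement really helped my joint pain!")
for span in doc.spans["health_aspects"]:  # hypothetical spans key
    print(span.text, span.label_)
```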
As you saw in this article, spaCy offers elegant, and a tad opinionated, programming models for any kind of text-processing workflow. Though there's a bit of a learning curve to understanding its internals, we have found that the effort pays off in terms of accuracy, flexibility, and maintainability compared to stringing together disparate frameworks with glue code.
We have multiple years of experience in building production-grade data science, text, and natural language processing systems. Get in touch with us for expert guidance on automating text processing and data science in your business operations.