Leveraging BERT for Extractive Text Summarization on Lectures
Learn how BERT sentence embeddings and clustering enable extractive summarization of lectures and other long documents, and how to improve on the stock approach.
GPT-4 is gaining hype for its abilities in zero-shot and few-shot settings across many different tasks. The ability to get good results over a wide range of use cases in a very low-data environment is a game changer for anyone who wants to get something off the ground quickly.
But for use cases where you want fine-grained control, or where you have a great dataset covering a wide variance of data, you should opt for fine-tuning custom language models like bidirectional encoder representations from transformers (BERT). Fine-tuning models for task-specific use is still the best way to achieve high-accuracy results in a production environment, even when the models are much smaller (like SetFit!). In this deep dive, we talk about effectively leveraging BERT for extractive text summarization on lectures, as well as any other documents that need precise control.
Below, we explore how to use a pre-trained BERT model for summarization by studying the methods described in a 2019 paper by Derek Miller, "Leveraging BERT for Extractive Text Summarization on Lectures," and its accompanying software, lecture-summarizer.
The main intuition is that groups of sentences will have some shared themes. You can implement grouping of similar things using machine learning techniques, such as clustering. But how do you apply a geometric operation like clustering to sentences?
One way is to imagine language as an N-dimensional universe where each of the N axes represents a linguistic aspect. Some aspects may be intuitive, like the number of nouns, the level of sarcasm, or the degree of relevance to medicine. Other aspects may be completely unintuitive, like the third power of the product of the vowel count and pronoun density. Nevertheless, all are useful relationships that a neural network has inferred from patterns in real-world text corpora.
You can think of every possible sentence as a point somewhere in this N-dimensional aspect universe. Every sentence can be represented by a list of N coordinates that position it in this space. By converting sentences into these geometric representations, you can easily apply clustering to identify similarities. Two sentences with nearly equal coordinates on every axis are very likely to be conceptually similar.
Such a representation for a sentence, using an N-dimensional coordinate vector in an N-dimensional space of linguistic aspects, is called an "embedding" for that sentence. BERT's job is to identify the linguistic aspects and produce embeddings for any sentence it's given.
The inputs to lecture-summarizer are lecture transcripts, either from subtitle files (.srt files) or from online sources.
Given a transcript, it first uses the Natural Language Toolkit (NLTK) to split the text into sentences. Next, it cleans up the list of sentences to improve the quality of summaries.
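As a rough illustration, here's a minimal sketch of that splitting-and-cleanup step using NLTK's sentence tokenizer (the exact filtering rules in lecture-summarizer may differ, and the minimum-length rule below is just a hypothetical example):

```python
# Split a transcript into sentences with NLTK, then drop tiny fragments
# that would only add noise to the summary.
import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt", quiet=True)  # one-time download of the tokenizer models

transcript = "Welcome to the lecture. Today we cover clustering. OK. Let's begin with an example."
sentences = [s.strip() for s in sent_tokenize(transcript)]

# Hypothetical cleanup rule: keep only sentences with at least four words.
sentences = [s for s in sentences if len(s.split()) >= 4]
print(sentences)
```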
The next step is to get embeddings for the sentences. The software uses a PyTorch implementation of BERT for that.
So far, we've implied that BERT produces one embedding per sentence, but that's not true! It produces an embedding for every token in a sentence. So, two problems must be solved: which tokens' embeddings should represent the whole sentence, and which of BERT's layers should they come from?
One answer is the [CLS] token's embedding from the last layer. At its input, BERT inserts a special token called [CLS] at the start of every sentence it processes. Because of the way BERT's attention mechanism works, the [CLS] token's embedding can be treated as representing the entire sentence.
However, the 2019 Sentence-BERT (SBERT) paper demonstrated that [CLS] embeddings don't work well for sentence-level semantic similarity tasks like clustering. Instead, manual inspection showed that the quality of clustering is best for embeddings from the second-to-last layer. Lecture-summarizer extracts and averages them to get a sentence-level embedding.
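Here's a minimal sketch of that embedding step using the Hugging Face transformers library — an assumption on our part, not the exact code inside lecture-summarizer — that mean-pools the second-to-last layer's token embeddings into one vector per sentence:

```python
# Sentence embedding = mean of the token embeddings from BERT's second-to-last layer.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModel.from_pretrained("bert-large-uncased", output_hidden_states=True)
model.eval()

def embed_sentence(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    second_to_last = outputs.hidden_states[-2]    # shape: (1, num_tokens, 1024)
    return second_to_last.mean(dim=1).squeeze(0)  # mean-pool across tokens

print(embed_sentence("Clustering groups sentences that share a theme.").shape)  # torch.Size([1024])
```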
Another factor that affects quality is the model size. BERT has two variants — BERT-base produces 768-dimensional embeddings, while BERT-large produces 1,024-dimensional embeddings. The software allows the user to select the preferred model. We strongly recommend BERT-large because its extra layers and parameters make its embeddings more expressive.
Lecture-summarizer uses K-means clustering to find distinct clusters. You can think of each cluster as a topic or theme present in the lecture.
A major drawback of K-means is that it doesn't automatically determine the number of clusters; the user must specify it. But lecture-summarizer doesn't directly ask for the number of clusters either. Instead, it gets that number indirectly by asking for a ratio of summarization. If a lecture has 100 sentences and the ratio is 0.1, the software endeavors to select 10 (= 0.1 x 100) sentences for the summary by identifying 10 clusters and picking the best sentence from each cluster.
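A minimal sketch of that ratio-to-clusters logic, assuming scikit-learn's KMeans over the sentence embeddings from the previous step:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_sentences(embeddings: np.ndarray, ratio: float = 0.1) -> KMeans:
    # Turn the summarization ratio into a cluster count, with at least one cluster.
    n_clusters = max(1, int(round(embeddings.shape[0] * ratio)))
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
```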
Another potential problem with K-means is its assumption that the N-dimensional embedding space is Euclidean. It does so for mathematical convenience, but in reality, the embedding space produced by BERT is most probably not Euclidean. So, the clusters it identifies may not be the best possible ones.
K-means identifies clusters of related sentences and their centers in the N-dimensional embedding space. You can think of each cluster center as an imaginary sentence that perfectly conveys its cluster's theme.
The next step then — and this is where the extractive nature of this software comes from — is to select one actual sentence from the lecture that is closest to that imaginary ideal sentence at the center of each cluster. The software does this by calculating the distance between each sentence and its respective cluster center and selecting the nearest sentence.
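A minimal sketch of this selection step, continuing the scikit-learn-based example above:

```python
import numpy as np

def pick_summary_sentences(sentences, embeddings, kmeans):
    # For each cluster center, find the real sentence whose embedding is closest to it.
    picked = set()
    for center in kmeans.cluster_centers_:
        distances = np.linalg.norm(embeddings - center, axis=1)
        picked.add(int(np.argmin(distances)))
    # Return the picks in their original lecture order so the summary reads naturally.
    return [sentences[i] for i in sorted(picked)]
```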
The final summary is simply the list of all sentences closest to the cluster centers from the previous step. In a long lecture, they can come from locations scattered across wide gaps and may sound disconnected. We'll explain some ideas to improve this aspect in later sections.
We evaluated the quality of the summaries on the TED talks dataset because of its lengthy, lecture-like transcripts. Since expert summaries aren't available for quantitative evaluation using metrics like the recall-oriented understudy for gisting evaluation (ROUGE), we analyzed the results qualitatively.
The first example is a talk on the philosophy and foundations of modern science. Throughout the talk, the speaker uses anecdotes and references to historical events to explain how science progressed. The software extracted this summary using BERT-large:
The second talk is by a famous architect on his achievements in his youth. Here's the summary generated using BERT-large:
Given the nature of the talk, the summary does well at capturing the topic, his achievements, and the thinking behind them. The context is missing in some sentences, but we'll address that in a later section.
This last example is a talk on the psychology of happiness with surprising results, experiments, and observations. Here's the summary generated with BERT-large:
While stock BERT does a reasonable job, the quality of summaries can be easily improved by applying some modern techniques to the summarization model.
As we explained before, BERT produces token embeddings, and deriving good sentence embeddings from them isn't simple. A simple improvement is to use a model like Sentence-BERT that's optimized for sentence-level similarity. The sentence-transformers Python package makes implementation simple and comes with a large number of high-quality official models, plus many more community models on the Hugging Face Model Hub. We're huge fans of SBERT and use it for operations in chatbots, GPT-3 prompts, data matching, and more. It's easy to fine-tune these embeddings to move the accuracy from generic to extremely task-specific — we've taken data-matching use cases from around 60% to 97% accuracy by fine-tuning this model.
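Here's a minimal sketch of getting SBERT-style embeddings with sentence-transformers; the resulting vectors drop straight into the same clustering code (the model name below is just one of the general-purpose options on the Hub):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")  # a general-purpose SBERT-style model

sentences = [
    "Happiness can be synthesized.",
    "We overestimate how long future events will affect us.",
]
embeddings = model.encode(sentences)  # shape: (num_sentences, embedding_dim)
print(embeddings.shape)
```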
For example, observe the improvements it brought to the psychology talk's summary:
One of the best ways of improving the quality of summaries is by giving users more control over the process. We do this in two ways.
First, a major weakness of the software is that it picks just one sentence per centroid. If a lecture has 100 sentences and the user specifies a summary ratio of 0.2, it'll look for 20 centroids. But most lectures are likely to have just a handful of themes — maybe three to five at max, not dozens. So one enhancement is to allow users to specify the number of centroids and the number of summary sentences independent of each other.
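One way to implement that decoupling — a sketch under the same scikit-learn assumptions as before, not the stock software's behavior — is to cluster into a handful of themes and then take the k closest sentences per theme:

```python
import numpy as np
from sklearn.cluster import KMeans

def summarize(sentences, embeddings, n_themes: int = 4, sentences_per_theme: int = 3):
    kmeans = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit(embeddings)
    picked = set()
    for center in kmeans.cluster_centers_:
        distances = np.linalg.norm(embeddings - center, axis=1)
        picked.update(int(i) for i in np.argsort(distances)[:sentences_per_theme])
    return [sentences[i] for i in sorted(picked)]
```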
The other improvement is related to cluster initialization. Instead of random starts, we allow users to specify a couple of high-level topics. The embeddings for these topics become the seed centroids around which clustering begins. This makes the final summary much more likely to be oriented around the user-specified topics, as shown below:
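A minimal sketch of that topic seeding, assuming scikit-learn's KMeans (which accepts an array of initial centroids) and SBERT embeddings; the topic strings and sentences are illustrative only:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("all-mpnet-base-v2")

sentences = [
    "We ran the experiment with two groups of students.",
    "People who lost the lottery were just as happy a year later.",
    "Synthetic happiness is as real as the natural kind.",
    "The prefrontal cortex lets us simulate future experiences.",
]
user_topics = ["psychology experiments", "synthetic happiness"]

sentence_embeddings = model.encode(sentences)
seed_centroids = model.encode(user_topics)  # one seed centroid per user topic

# n_init=1 because we supply explicit initial centroids instead of random starts.
kmeans = KMeans(n_clusters=len(user_topics), init=seed_centroids, n_init=1)
kmeans.fit(sentence_embeddings)
print(kmeans.labels_)  # cluster assignment of each sentence
```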
A problem with extracted sentences is their use of pronouns like "it" or "he." Without the full context, it's not always clear what or who is being referenced. The summary from the first example has a few such ambiguous references (highlighted below):
References to some earlier entity are called anaphora. Coreferences are a similar concept — they're noun phrases that refer to the same entity, like "the United States" and "our country." If we can also include the referred entities in a summary, these ambiguities go away and the summary becomes clearer.
Luckily, the spaCy framework provides pipeline components like Coreferee to identify coreferences and anaphora. With that information, lecture-summarizer can include additional sentences in the summary that clarify such references, as shown by this version with referred concepts highlighted:
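Here's a minimal sketch of detecting such references with spaCy and Coreferee (this assumes the coreferee package, its English model, and a large spaCy pipeline are installed); the resolved antecedents can then be used to pull clarifying sentences into the summary:

```python
# pip install coreferee && python -m coreferee install en
# python -m spacy download en_core_web_lg
import coreferee  # noqa: F401  (registers the "coreferee" pipeline factory)
import spacy

nlp = spacy.load("en_core_web_lg")
nlp.add_pipe("coreferee")

doc = nlp("The architect finished the museum in 1997. It made him famous worldwide.")

doc._.coref_chains.print()  # show the detected coreference chains
for token in doc:
    antecedents = doc._.coref_chains.resolve(token)
    if antecedents:
        print(f"'{token.text}' refers to {[t.text for t in antecedents]}")
```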
The three improvement techniques shown here were each enabled independently. Enabling all of them together produces summaries of far higher quality than the stock software.
In addition to its core summarization capabilities, the lecture-summarizer software comes with additional features of practical use to students, teachers, educational institutions, and libraries.
The software ships with lecture management capabilities to store lecture content along with its name and course details in a database. It exposes these features through a representational state transfer application programming interface (REST API) endpoint for use by other applications.
Another feature of the software is a summary management API to create and store the generated summaries for use by other applications. However, lookup is only by an identifier and not by content similarity.
The third feature is a lecture searcher implementation to look up summaries by content similarity. It implements this by using Annoy, a persistent approximate-nearest-neighbor library, to store the embeddings and query their nearest neighbors in the embedding space.
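A minimal sketch of that lookup, assuming SBERT embeddings and Annoy's angular-distance index (the summary strings are placeholders):

```python
from annoy import AnnoyIndex
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")

summaries = [
    "A lecture summary about synthetic happiness and psychology experiments.",
    "A lecture summary about the history and philosophy of modern science.",
]
embeddings = model.encode(summaries)

index = AnnoyIndex(embeddings.shape[1], "angular")
for i, vector in enumerate(embeddings):
    index.add_item(i, vector)
index.build(10)              # 10 trees; more trees = better recall, bigger index
index.save("summaries.ann")  # persisted to disk for later queries

query = model.encode(["talks about happiness research"])[0]
nearest = index.get_nns_by_vector(query, 1)
print(summaries[nearest[0]])
```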
With large language models (LLMs) like ChatGPT, GPT-3, and others becoming so popular, some of you may be wondering how this BERT summarization approach fares against them. Here are our thoughts on this.
For any language task, supplying more context improves the quality of results. Bidirectional language models (LMs) like BERT include the context from both the left and right of a token to generate its embedding. In contrast, causal LMs like GPT-3 are trained to use only the context available so far, i.e., the context to their left.
So bidirectional models are superior for document-understanding tasks like summarization, where context already exists on both sides. Causal language models are better for conversational tasks like chatbots because relying on just the previous tokens forces them to generate language that's more human-like. The same idea applies to generation-focused tasks like marketing copy, where there isn't any context to leverage other than the previous tokens.
Of course, mere theoretical superiority doesn't necessarily generate better summaries. The number of parameters of the neural network and the volume and diversity of its training also matter. But all else being equal, bidirectional is the way to go for summarization.
LLMs excel at abstractive summarization, but extractive summaries are indispensable in some industries where exactness matters, such as:
Instructing LLMs on summarization tasks that require an abstract idea of the goal can be hit or miss. We've seen that GPT models do a great job on summarization tasks that have a clearly defined goal, such as keyword-based or sentence-based summarization. But as we've outlined in articles like Revolutionizing News Summarization, they still struggle with tasks like aspect-based summarization.
All LLMs of the current generation can accept only a limited number of input tokens, typically around 2,000 to 4,000. For longer documents, you have to use capable chunking strategies to divide the document semantically, have the LLM summarize the chunks, and then recombine the partial summaries. Even newer models like GPT-4 that support much larger prompts struggle to hold on to the main context as inputs grow; the model's focus on any given token diffuses as the prompt expands.
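A minimal sketch of that chunk-summarize-recombine pattern; `llm_summarize` here is a purely hypothetical stand-in for whichever LLM call you use, and the word budget is a rough proxy for tokens:

```python
def chunk_by_sentences(sentences, max_words: int = 1500):
    # Greedily pack sentences into chunks that stay under a rough word budget.
    chunks, current, current_len = [], [], 0
    for sentence in sentences:
        n_words = len(sentence.split())
        if current and current_len + n_words > max_words:
            chunks.append(" ".join(current))
            current, current_len = [], 0
        current.append(sentence)
        current_len += n_words
    if current:
        chunks.append(" ".join(current))
    return chunks

def summarize_long_document(sentences, llm_summarize):
    # Summarize each chunk, then summarize the concatenated partial summaries.
    partial = [llm_summarize(chunk) for chunk in chunk_by_sentences(sentences)]
    return llm_summarize(" ".join(partial))
```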
However, this isn't a problem at all for clustering-based approaches. Even if the number of sentences runs into the millions, the same model can embed and cluster them all to extract summaries.
Most LLMs are pay-per-use, charging for every 1,000 input and output tokens. Moreover, the more capable models, like GPT-3 Davinci, are an order of magnitude costlier than less capable ones, like GPT-3 Ada. If you're summarizing a lot of lectures and other teaching materials, you'll quickly rack up huge costs.
In contrast, you can run models like BERT on your own server or even laptop without even a GPU. They're also much more lightweight compared to self-hosted LLMs like GPT-J.
Although probably not applicable to most lectures, you may work in industries where you must keep your documents confidential. Since most LLMs are managed by third parties, you'll have to trust their information security practices. But with self-hosted models like BERT, you can keep all your documents on your own infrastructure and implement your standard access policies.
Using techniques like knowledge distillation, MobileBERT compresses BERT down to a size that runs on smartphones. That opens up possibilities for real-time, on-demand summarization of anything you see, simply by capturing it with your smartphone camera.
The techniques described here apply not just to lectures but to any domain or business that routinely processes large numbers of documents for summarization or other language understanding tasks. Contact us for expertise on your language processing needs.