
A Guide to Leveraging BERT for Extractive Text Summarization on Lectures

Karthik Shiraly
·
August 15, 2023

GPT-4 is gaining hype for its zero-shot and few-shot abilities across many different tasks. The ability to get pretty good results across a wide range of use cases in a very low-data environment is a game changer for anyone who wants to get something off the ground quickly.

But for use cases where you want fine-grained control, or where you have a great dataset with wide coverage of the data variance, you should opt for fine-tuning custom language models like bidirectional encoder representations from transformers (BERT). Fine-tuning models for task-specific uses is still the best way to achieve high-accuracy results in a production environment, even when the models are much smaller in size (like SetFit!). In this deep dive, we talk about effectively leveraging BERT for extractive text summarization on lectures, as well as any other documents that need precise control.

How to Use BERT for Extractive Summarization

Below, we explore how to use a pre-trained BERT model for summarization by studying the methods described in a 2019 paper by Derek Miller, "Leveraging BERT for Extractive Text Summarization on Lectures," and its accompanying software, lecture-summarizer.

1. Intuition Behind the Approach

Clusters and centroids (Source)

The main intuition is that groups of sentences will have some shared themes. You can group similar things using machine learning techniques such as clustering. But how do you apply a geometric operation like clustering to sentences?

One way is to imagine language as an N-dimensional universe where each of the N axes represents a linguistic aspect. Some aspects may be intuitive, like the number of nouns, level of sarcasm, or degree of relevance to the area of medicine. Other aspects may be something completely unintuitive, like the third power of the product of the vowel count and pronoun density. Nevertheless, all are useful relationships that a neural network has inferred from patterns in real-world text corpora.

You can think of every possible sentence as a point somewhere in this N-dimensional aspect universe. Every sentence can be represented by a list of N coordinates that position it in this space. By converting sentences into these geometric representations, you can easily apply clustering to identify similarities. Two sentences with nearly equal coordinates on every axis are very likely to be conceptually similar.

Such a representation for a sentence, using an N-dimensional coordinate vector in an N-dimensional space of linguistic aspects, is called an "embedding" for that sentence. BERT's job is to identify the linguistic aspects and produce embeddings for any sentence it's given.
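
To make this concrete, here's a toy illustration with made-up 3-dimensional "aspect" vectors. Real embeddings have hundreds of dimensions, but the distance logic is identical:

```python
# Toy illustration with hypothetical 3-dimensional "aspect" vectors.
import numpy as np

sent_a = np.array([0.9, 0.1, 0.4])    # "The experiment confirmed the theory."
sent_b = np.array([0.85, 0.15, 0.5])  # "The trial supported the hypothesis."
sent_c = np.array([0.1, 0.9, 0.2])    # "He painted the fence green."

# Sentences with nearly equal coordinates sit close together in the space.
print(np.linalg.norm(sent_a - sent_b))  # small distance: similar meaning
print(np.linalg.norm(sent_a - sent_c))  # large distance: unrelated meaning
```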

2. Input Data and Pre-Processing

The inputs to lecture-summarizer are lecture transcripts, either from subtitle files (.srt files) or from online sources.

Given a transcript, it first uses the natural language toolkit (NLTK) to split the text into sentences. Next, it cleans up the list of sentences to improve the quality of summaries (see the code sketch after this list):

  • It discards sentences shorter than 75 characters.
  • It removes sentences related to quizzes, which are a common feature of online courses.
  • It ignores sentences that are continuations of previous sentences, like ones starting with "or," "but," and similar words. This is a problem in many classroom lectures because they're usually delivered in an informal, conversational tone. It's less of a problem in formal talks or speeches.
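
Here's a minimal sketch of these pre-processing steps with NLTK. The quiz filter and continuation-word list are illustrative stand-ins, not lecture-summarizer's exact rules:

```python
import nltk

nltk.download("punkt", quiet=True)  # NLTK's sentence tokenizer models

CONTINUATION_STARTS = {"or", "but", "and", "so"}  # illustrative, not exhaustive

def preprocess(transcript: str, min_length: int = 75) -> list[str]:
    kept = []
    for sentence in nltk.sent_tokenize(transcript):
        if len(sentence) < min_length:
            continue  # too short to carry a distinct theme
        if "quiz" in sentence.lower():
            continue  # course housekeeping, not lecture content
        first_word = sentence.split(maxsplit=1)[0].lower().strip(",")
        if first_word in CONTINUATION_STARTS:
            continue  # likely a continuation of the previous sentence
        kept.append(sentence)
    return kept
```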

3. How to Obtain Good Embeddings from BERT

3-dimensional embedding spaces for illustrative purposes. Actual BERT embeddings are 768- or 1,024-dimensional, more than we can visualize (Source)

The next step is to get embeddings for the sentences. The software uses the PyTorch BERT model for that.

So far, we've implied that BERT produces one embedding per sentence, but that's not true! It produces an embedding for every token in a sentence. So, two problems must be solved:

  • From which part of BERT do you get the best embeddings? It has many layers, and every layer is a potential source.
  • How do you combine multiple token embeddings into a single embedding for the entire sentence?

One answer is the CLS token's embedding from the last layer. At its input, BERT inserts a special token called CLS at the start of every input sequence (here, each sentence). Because of the way BERT's attention mechanisms work, the CLS token's embedding can be considered a representation of the entire sequence.

However, the 2019 SBERT paper demonstrated that CLS embeddings don't work well for sentence-level semantic similarity tasks like clustering. Instead, manual inspection showed that clustering quality is best with embeddings from the second-to-last layer. Lecture-summarizer extracts the token embeddings from that layer and averages them to get a sentence-level embedding.
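
For illustration, here's a minimal sketch of this pooling strategy (not lecture-summarizer's actual code) using the Hugging Face transformers library:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def sentence_embedding(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states holds the embedding layer plus every encoder layer;
    # index -2 is the second-to-last encoder layer.
    second_to_last = outputs.hidden_states[-2]  # (1, seq_len, 768)
    # Average the token embeddings into one sentence-level vector.
    return second_to_last.mean(dim=1).squeeze(0)

print(sentence_embedding("Science advances one funeral at a time.").shape)
# torch.Size([768])
```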

Another factor that affects quality is the model size. BERT has two variants — BERT-base produces 768-dimensional embeddings, while BERT-large produces 1,024-dimensional embeddings. The software allows the user to select the preferred model. We strongly recommend BERT-large because it's a larger, more extensively trained network and its embeddings are more expressive.

4. How to Cluster the Embeddings

Lecture-summarizer uses K-means clustering to find distinct clusters. You can think of each cluster as a topic or theme present in the lecture.

A major drawback of K-means is that it doesn't automatically determine the number of clusters; the user must specify it. But lecture-summarizer doesn't directly ask for the number of clusters either. Instead, it gets that number indirectly by asking for a ratio of summarization. If a lecture has 100 sentences and the ratio is 0.1, the software endeavors to select 10 (= 0.1 x 100) sentences for the summary by identifying 10 clusters and picking the best sentence from each cluster.
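
In scikit-learn terms, this step might look like the following sketch, assuming `embeddings` is an (n_sentences x dim) array of sentence embeddings from the previous step:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_embeddings(embeddings: np.ndarray, ratio: float = 0.1) -> KMeans:
    # Derive the cluster count from the requested summarization ratio.
    n_clusters = max(1, int(len(embeddings) * ratio))
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    kmeans.fit(embeddings)
    return kmeans  # exposes .cluster_centers_ and .labels_
```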

Another potential problem with K-means is its assumption that the N-dimensional embedding space is Euclidean. It does so for mathematical convenience, but in reality, the embedding space produced by BERT is most probably not Euclidean. So, the clusters it identifies may not be the best possible ones.

5. Select the Sentences Closest to the Cluster Centers

K-means identifies related clusters of sentences and their centers in the N-dimensional embedding space. You can think of each cluster as a theme or topic in the lecture, and each cluster center as an imaginary sentence that perfectly conveys its theme.

The next step then — and this is where the extractive nature of this software comes from — is to select one actual sentence from the lecture that is closest to that imaginary ideal sentence at the center of each cluster. The software does this by calculating the distance between each sentence and its respective cluster center and selecting the nearest sentence.
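
Here's a minimal sketch of this selection step, reusing the `embeddings` array and the fitted `kmeans` object from the earlier sketches:

```python
import numpy as np

def nearest_sentence_indices(embeddings: np.ndarray,
                             centers: np.ndarray) -> list[int]:
    indices = set()
    for center in centers:
        # Euclidean distance from every sentence to this cluster center.
        distances = np.linalg.norm(embeddings - center, axis=1)
        indices.add(int(distances.argmin()))
    # Sorting restores the sentences' original order in the lecture.
    return sorted(indices)
```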

6. Final Step — Form the Summary

The final summary is simply the list of all sentences closest to the cluster centers from the previous step. In a long lecture, they can come from locations scattered across wide gaps and may sound disconnected. We'll explain some ideas to improve this aspect in later sections.
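
In code, using the helpers sketched above (with `sentences` being the pre-processed sentence list), the assembly is just a join:

```python
idxs = nearest_sentence_indices(embeddings, kmeans.cluster_centers_)
summary = " ".join(sentences[i] for i in idxs)
```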

7. Evaluating the Summaries

We evaluated the quality of the summaries on the TED talks dataset because of its lengthy, lecture-like transcripts. Since expert summaries aren't available for quantitative evaluation using metrics like the recall-oriented understudy for gisting evaluation (ROUGE), we analyzed the results qualitatively.

Example 1: A Lecture on Modern Science

The first example is a talk on the philosophy and foundations of modern science. Throughout the talk, the speaker uses anecdotes and references to historical events to explain how science progressed. The software extracted this summary using BERT-large:

This summary doesn't quite offer a solid introduction or conclusion, and some context is missing. But it does capture the format of the talk and its theme of scientific progress.

Example 2: A Lecture on Architecture

The second talk is by a famous architect on his achievements in his youth. Here's the summary generated using BERT-large:

Given the nature of the talk, the summary does well at capturing the topic, his achievements, and the thinking behind them. The context is missing in some sentences, but we'll address that in a later section.

Example 3: A Talk on Psychology

This last example is a talk on the psychology of happiness with surprising results, experiments, and observations. Here's the summary generated with BERT-large:

This summary seems to be missing some context. An average reader can't judge easily if the topic is psychology or personal finance.

How to Improve the Quality of Summaries

While stock BERT does a reasonable job, the quality of summaries can be easily improved by applying some modern techniques to the summarization model.

1. Use Sentence Transformers

As we explained before, BERT produces token embeddings, and deriving good sentence embeddings from them isn't simple. A simple improvement is to use a model like Sentence-BERT that's optimized for sentence-level similarity. The sentence-transformers Python package makes implementation simple and comes with a large number of high-quality official models, plus many more community models on the Hugging Face Model Hub. We're huge fans of SBERT and use it for operations in chatbots, GPT-3 prompts, data matching, and more. It's very easy to fine-tune these embeddings to move the accuracy from generic to extremely task-specific. We've taken data matching use cases from around 60% accuracy to 97% by fine-tuning this model.
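
As a quick illustration (the checkpoint named here is one of the official general-purpose sentence-transformers models, not necessarily the exact one we used):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")
sentences = [
    "Happiness can be synthesized.",
    "We overestimate the impact of future events on our well-being.",
]
# One embedding per sentence, already optimized for semantic similarity.
embeddings = model.encode(sentences)  # shape: (2, 768)
```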

For example, observe the improvements it brought to the psychology talk's summary:

Now it's much clearer to a reader that this talk is related to psychology.

2. User Guidance on Centroids and Multiple Sentences Per Centroid

One of the best ways of improving the quality of summaries is by giving users more control over the process. We do this in two ways.

First, a major weakness of the software is that it picks just one sentence per centroid. If a lecture has 100 sentences and the user specifies a summary ratio of 0.2, it'll look for 20 centroids. But most lectures are likely to have just a handful of themes — maybe three to five at most, not dozens. So one enhancement is to allow users to specify the number of centroids and the number of summary sentences independently of each other.

The other improvement is related to cluster initialization. Instead of random starts, we allow users to specify a couple of high-level topics. The embeddings for these topics become the seed centroids around which clustering begins. This makes the final summary much more likely to be oriented around the user-specified topics, as shown below:

This is a summary of the same talk on modern science as Example 1. This time, it was seeded with three topics — science, history, and theories. Instead of random, disconnected anecdotes, it offers some actual insights into science, experiments, and theories. There's also a bit of history thrown in.
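
For reference, here's a minimal sketch of such seeding with scikit-learn's K-means; the encoder and `embeddings` array carry over from the sentence-transformers sketch above:

```python
from sklearn.cluster import KMeans

topics = ["science", "history", "theories"]
# Embed the user's topics with the same encoder used for the sentences.
seed_centroids = model.encode(topics)

# Pass the topic embeddings as initial centroids instead of random starts.
kmeans = KMeans(n_clusters=len(topics), init=seed_centroids, n_init=1)
kmeans.fit(embeddings)
```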

3. Coreference and Anaphora Resolution With spaCy

A problem with extracted sentences is their use of pronouns like "it" or "he." Without the full context, it's not always clear what or who is being referenced. The first summary in Example 1 has a few such ambiguous references (highlighted below):

References to some earlier entity are called anaphora. Coreferences are a similar concept — they're noun phrases that refer to the same entity, like "the United States" and "our country." If we can also include the referred entities in a summary, these ambiguities go away and the summary becomes clearer.

Luckily, the spaCy framework provides pipeline components like Coreferee to identify coreferences and anaphora. With that information, lecture-summarizer can include additional sentences in the summary that clarify such references, as shown by this version with referred concepts highlighted:
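
Here's a minimal sketch of detecting such chains with spaCy and Coreferee; the example sentence is hypothetical, and wiring the output back into the summary is left out:

```python
import coreferee  # registers the "coreferee" pipeline factory
import spacy

nlp = spacy.load("en_core_web_lg")  # Coreferee supports the large English models
nlp.add_pipe("coreferee")

doc = nlp("The architect finished the tower early. He said it exceeded "
          "his own expectations.")
for chain in doc._.coref_chains:
    print(chain)  # each chain groups mentions of one entity (as token indices)
```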

The three improvement techniques shown here were each enabled independently. Enabled all together, they produce summaries of far higher quality than the stock software's.

Lecture Summarization System Architecture

In addition to its core summarization capabilities, the lecture-summarizer software comes with additional features of practical use to students, teachers, educational institutions, and libraries.

1. Lecture Management Service

The software ships with lecture management capabilities to store lecture content along with its name and course details in a database. It exposes these features through a representational state transfer application programming interface (REST API) endpoint for use by other applications.

2. Summary Management

Another feature of the software is a summary management API to create and store the generated summaries for use by other applications. However, lookup is only by an identifier and not by content similarity.

3. Summary Search Engine

The third feature is a lecture searcher implementation to look up summaries by content similarity. It implements this using Annoy, an approximate nearest-neighbor library that persists the embeddings to disk and queries their nearest neighbors in the embedding space.
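
For illustration, a minimal Annoy sketch might look like this; the dimensionality, file name, and query text are placeholders:

```python
from annoy import AnnoyIndex

dim = 768  # must match the embedding model's output size
index = AnnoyIndex(dim, "angular")  # angular distance approximates cosine
for i, emb in enumerate(embeddings):  # summary embeddings from earlier
    index.add_item(i, emb)
index.build(10)              # 10 trees: a speed/accuracy trade-off
index.save("summaries.ann")  # persisted to disk for later lookups

query = model.encode(["a talk about the psychology of happiness"])[0]
print(index.get_nns_by_vector(query, 5))  # IDs of the 5 nearest summaries
```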

BERT vs. GPT-3 or ChatGPT for Summarization

With large language models (LLMs) like ChatGPT, GPT-3, and others becoming so popular, some of you may be wondering how this BERT summarization approach fares against them. Here are our thoughts on this.

1. Bidirectional vs. Causal Models - Which Is Better?

For any language task, the more context you supply, the better the quality of the results. Bidirectional language models (LMs) like BERT include the context from both the left and right of a token to generate its embedding. In contrast, causal LMs like GPT-3 are trained to use only the context available so far, i.e., the context to their left.

So bidirectional models are superior for document-understanding tasks like summarization, where context already exists on both sides. Causal language models are better for conversational tasks like chatbots because relying on just the previous tokens forces them to generate language that's more human-like. The same idea applies to generation-focused tasks like marketing copy, where there isn't any context to leverage other than the previous tokens.

Of course, mere theoretical superiority doesn't necessarily generate better summaries. The number of parameters of the neural network and the volume and diversity of its training data also matter. But all else being equal, bidirectional is the way to go for summarization.

2. Extractive Summaries Are Critical for Some Use Cases

LLMs excel at abstractive summarization, but extractive summaries are indispensable in some industries where exactness matters, such as:

  • Legal industry: Legal phrases often have specific meanings and connotations that must be preserved intact in summaries to avoid risks of illegality or litigation.
  • Medical reports: Similarly, medical terms have specific meanings that must be retained because they may be critical for prognosis and treatment.
  • Contract management: If a contract is misunderstood or key information is missed, a business may face severe financial, legal, or reputational repercussions. So, summaries must present contractual terms intact.
  • Regulatory compliance: Every clause in financial or other regulations contains important conditions and details that businesses wishing to remain compliant cannot ignore.
  • News media: Exact quotes and verbatim reproduction of details are often required in news media and publications.

3. Uncertainties of GPT Prompt Engineering

Instructing LLMs on summarization tasks that require an abstract idea of the goal can be hit or miss. We've seen that GPT models do a great job on summarization tasks with a clearly defined goal, such as key-based or sentence-based summarization. But as we've outlined in articles like Revolutionizing News Summarization, they still struggle with tasks like aspect-based summarization.

Graphic from the original research paper that shows GPT struggling with aspect-based summarization (Source)

4. Long Documents Are Difficult for GPT-3

All LLMs of the current generation can accept only a limited number of input tokens, typically around 2,000 to 4,000. For longer documents, you have to use capable chunking strategies to divide the document semantically, have the LLM summarize the chunks, and then recombine the partial summaries. Even newer models like GPT-4 that support much larger prompts struggle to hold on to the main context as inputs grow: the model's focus on any given token weakens as the size of the input expands.
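
As a sketch of that pattern, with `llm_summarize` as a hypothetical stand-in for your LLM call and a rough word count standing in for tokens:

```python
def chunk_text(sentences: list[str], max_tokens: int = 2000) -> list[str]:
    chunks, current, count = [], [], 0
    for sentence in sentences:
        n_tokens = len(sentence.split())  # crude word-level token proxy
        if current and count + n_tokens > max_tokens:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(sentence)
        count += n_tokens
    if current:
        chunks.append(" ".join(current))
    return chunks

def summarize_long(sentences, llm_summarize):
    partials = [llm_summarize(chunk) for chunk in chunk_text(sentences)]
    return llm_summarize(" ".join(partials))  # recombine partial summaries
```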

However, this isn't a problem at all for clustering-based approaches. Even if the sentences number in the millions, the same model can embed and cluster them all to extract summaries.

5. Cost Benefits of Self-Hosted Models

Most LLMs are pay-per-use services that charge for every 1,000 input and output tokens. Moreover, the more capable models, like GPT-3 Davinci, are an order of magnitude costlier than less capable ones, like GPT-3 Ada. If you're summarizing a lot of lectures and other teaching materials, you'll quickly incur huge costs.

In contrast, you can run models like BERT on your own server, or even a laptop, without a GPU. They're also much more lightweight than self-hosted LLMs like GPT-J.

6. Better Data Security With Self-Hosted Models Like BERT

Although probably not applicable to most lectures, you may work in industries where you must keep your documents confidential. Since most LLMs are managed by third parties, you'll have to trust their information security practices. But with self-hosted models like BERT, you can keep all your documents on your own infrastructure and implement your standard access policies.

7. Run MobileBERT on Your Smartphone

Using techniques like knowledge distillation, MobileBERT compresses BERT down to a size that runs on smartphones. That opens up possibilities for real-time, on-demand summarization of anything you see, simply by capturing it with your smartphone camera.

Want to build your own summarization system?

The techniques described here apply not just to lectures but to any domain or business that routinely processes large numbers of documents for summarization or other language understanding tasks. Contact us for expertise on your language processing needs.

References

  • Miller, Derek. "Leveraging BERT for Extractive Text Summarization on Lectures." arXiv:1906.04165, 2019.
  • Reimers, Nils, and Iryna Gurevych. "Sentence-BERT: Sentence Embeddings Using Siamese BERT-Networks." Proceedings of EMNLP-IJCNLP, 2019. arXiv:1908.10084.