A Multi-Task GPT-3 Pipeline for Blended Summarization and Dynamic NER on Master Service Agreements
How we process master service agreements into blended summaries and dynamically extracted named entities with a production-level GPT-3 pipeline.
Most document processing tasks that use information extraction to automate a previously human-focused process are not black and white. Successful information extraction from these documents isn’t strictly an abstractive or extractive summary, named entities, or one-word topics. While research and pretrained models follow those clean-cut framings, production systems typically want a blended mix of them that best fits the use case and the exact input document.
We’re going to look at how we built a state-of-the-art NLP pipeline for blended summarization and NER that processes master service agreements (MSAs) and varies its outputs based on the input document and what is deemed important information. This mixes the popular NER and long-form summarization work we’ve done in the legal document domain with our key knowledge of production-level GPT-3 pipelines. You can read about our legal document cover sheet processing and long text summarization.
The goal of the pipeline is to take a master service agreement document and extract both summaries of its information and any specific points worth noting about the two parties. The readable output makes review of the document much quicker through the generated summaries, and provides specific extracted named entities to store in a database for reference. These are the two most common ways of processing long-form legal documents to solve a downstream use case. Let’s quickly look at why the blended nature of the summarization and dynamic NER is beneficial for solving the key problems.
Most summarization work today focuses on strictly abstractive or strictly extractive summarization for any input use case. Real-world use cases that actually deliver value need some blend of the two. The problem most companies run into is that pretrained models and datasets are set up as purely abstractive or purely extractive, which can lead to less-than-stellar results in the customer's eyes when they want even a small tweak toward the other side.
GPT-3’s ability to learn a relationship on the fly from small amounts of data lets us get blended summarization off the ground quicker, adjust the blend more easily, and leverage techniques like prompt optimization to create the flexibility this task requires.
Standard named entity recognition focuses on defining a set of entities we want to extract from documents, such as counterparty, address, service provider, and price, and finding them in the documents provided. We must define all entities we wish to extract upfront and label them in each document. If we want the model to find a new entity, we have to go back and retrain the entire model on labeled examples of that new entity.
What happens if the set of entities we care about is very flexible given the information available in a given example? What if the information we care about around a specific topic or entity changes from document to document? With standard NER we’d have to outline everything upfront and account for any additional information we could ever want to extract. If a new entity isn’t very common, or is very different from the other entities, it can be very hard to get the NER model to pick up on the new field.
Our multi-task model learns what an entity is instead of what information leads to a given entity. These entities are then recognized and extracted just like in a NER model, and become either part of the summary or their own separate output, depending on the use case and user preference.
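As a rough illustration of the difference, here is a minimal sketch (not our production prompt) of how a single multi-task request can ask for a blended summary plus whichever entities the model judges important, rather than a fixed upfront entity schema. The instruction wording and output header are assumptions for the example.

```python
# Hypothetical instruction text; our production prompts are built dynamically.
TASK_INSTRUCTION = (
    "Summarize the following master service agreement clause and list any "
    "named entities worth storing (party names, dates, amounts, terms). "
    "Only include entities that actually appear and matter."
)

def build_multitask_prompt(clause_text: str) -> str:
    """Combine the open-ended instruction and the clause into one prompt."""
    return (
        f"{TASK_INSTRUCTION}\n\n"
        f"Clause:\n{clause_text}\n\n"
        "Summary and entities:"
    )

prompt = build_multitask_prompt(
    "The Service Provider shall be paid $120,000 annually by Acme Corp."
)
```

Because the entity set is described in natural language instead of a label schema, adding a new field of interest is a prompt tweak rather than a retraining cycle.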
Let’s take a look at the essential parts of the pipeline and evaluate how they play a role in blended summarization with a multi-task NLP model.
Here we leverage a piece of our state-of-the-art document pipeline to extract the text and understand the positioning of specific text. We use this same pipeline for document processing tasks such as invoices and resumes, where we want to extract exact fields that are oftentimes split by headers.
This pipeline already contains high-powered OCR built for tough image scans of documents, plus a deep learning framework for understanding the positioning of key text, which in other use cases helps us map fields. Here we use it to recognize clause headers that help us keep specific context together. Considering the level of input variance and noise this pipeline is built to handle, these scanned MSA PDFs are a breeze.
As I’ve written about countless times in our other GPT-3 articles on Width.ai, the chunking algorithm is one of the two most important parts of a production-level GPT-3 system for long-form input tasks. This algorithm splits the long MSA documents into smaller chunks while doing a few important things:
1. Making sure context isn’t split between chunks. Split context often leads to the summary containing too much information about a single topic, or the topic being dropped from the summary completely. This is now even more important because lost context can also affect the dynamic NER part of our model.
2. Providing some standardization of the size of the input chunks sent to GPT-3.
3. Keeping costs down on longer documents.
4. Adjusting chunk size dynamically based on the number of pages in the document.
5. Using the previously recognized clause headers as guidance for part of the chunking algorithm.
We built a custom chunking algorithm for this task, leveraging models we’ve used for large summarization tasks such as topic extraction from 2-hour financial interviews.
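The steps above can be sketched in simplified form. In production the clause headers come from the OCR/layout model; here we approximate them with a regex for numbered headings, and the word budget stands in for the dynamic, page-count-based sizing. This is a toy version under those assumptions, not our actual algorithm.

```python
import re

# Approximate clause headers like "1. Termination" at the start of a line.
HEADER_RE = re.compile(r"^\d+\.\s+[A-Z]", re.MULTILINE)

def chunk_by_headers(text: str, max_words: int = 600) -> list[str]:
    """Split at detected clause headers so related context stays together,
    then merge small sections to keep chunk sizes roughly standardized."""
    starts = [m.start() for m in HEADER_RE.finditer(text)] or [0]
    if starts[0] != 0:
        starts.insert(0, 0)  # capture any preamble before the first header
    sections = [text[s:e].strip() for s, e in zip(starts, starts[1:] + [len(text)])]
    chunks, current = [], ""
    for sec in sections:
        # Start a new chunk only when adding this section would blow the budget.
        if current and len((current + " " + sec).split()) > max_words:
            chunks.append(current)
            current = sec
        else:
            current = (current + "\n" + sec).strip()
    if current:
        chunks.append(current)
    return chunks
```

Splitting at header boundaries first, then merging, means a clause is never cut in half just because a fixed-size window landed in the middle of it.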
We’ve built a custom multi-task GPT-3 pipeline to go from the input master service agreement text to a generated output that contains a blended summary and any entities the model deems valuable. The summary varies in length based on the amount of key information and the overall length of the input, and is a mix of abstractive and extractive summarization based on key information in the input document. The number of entities varies as well based on the amount of exact information worth noting.
The key to this GPT-3 pipeline's success is the work done on prompt optimization and fine-tuning. Prompt optimization is a custom framework we built for all of our GPT-3 products that dynamically builds the prompt at runtime based on the input. This lets us put relevant information in front of the input and guide GPT-3 to a solid generation. This algorithm has increased GPT-3’s accuracy by over 30% in many tasks and is a key component in increasing data variance coverage in production.
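A simplified sketch of the runtime idea: rank a bank of labeled examples by similarity to the incoming chunk and prepend the best matches as few-shot demonstrations. The word-overlap scorer here is a stand-in assumption; an embedding model is the more typical production choice, and our actual ranking framework is more involved.

```python
def overlap_score(a: str, b: str) -> float:
    """Toy similarity: Jaccard overlap of lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def select_examples(input_text: str, example_bank: list[dict], k: int = 2) -> list[dict]:
    """Return the k labeled examples most similar to the input chunk."""
    ranked = sorted(
        example_bank,
        key=lambda ex: overlap_score(input_text, ex["text"]),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(input_text: str, example_bank: list[dict], k: int = 2) -> str:
    """Dynamically assemble the prompt: relevant demonstrations, then input."""
    shots = select_examples(input_text, example_bank, k)
    demo = "\n\n".join(f"Input: {ex['text']}\nOutput: {ex['output']}" for ex in shots)
    return f"{demo}\n\nInput: {input_text}\nOutput:"
```

Because the demonstrations are chosen per input, the prompt adapts to document variance instead of relying on one static set of examples.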
Fine-tuning allows us to provide initial “steering” to GPT-3 for what a successful output looks like. We’ve fine-tuned the baseline davinci model on a number of examples to give our model baseline context of the task at hand. This becomes especially important when you consider we are asking GPT-3 to perform two tasks at once.
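For completion-style models like davinci, fine-tuning data is JSONL with one prompt/completion record per example. The sketch below shows one plausible shape in which a single completion carries both tasks (blended summary plus entities) so the model learns them jointly; the exact separators, entity formatting, and stop token are assumptions for illustration, not our production format.

```python
import json

def to_finetune_record(chunk_text: str, summary: str, entities: dict) -> str:
    """Serialize one training example where the completion holds both tasks."""
    completion = summary + "\nEntities: " + "; ".join(
        f"{k}: {v}" for k, v in entities.items()
    )
    return json.dumps({
        # Trailing separator marks where the prompt ends (assumed convention).
        "prompt": chunk_text + "\n\n###\n\n",
        # Leading space and an explicit stop token, per completion-model practice.
        "completion": " " + completion + " END",
    })
```

Training on many such records is what gives the model its baseline sense of what a "good" joint output looks like before prompt optimization refines it per input.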
The goal of this module is to combine our smaller chunk summaries into a new summary that condenses the information. This not only produces a more concise summary but also removes duplicate information. It's a trained model that provides much higher-quality output than simply concatenating the summaries, and it lets the end user tweak the size of the final summary.
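The combine step in our pipeline is a trained model; as a rough illustration only, this sketch deduplicates near-identical sentences across the chunk summaries and builds a prompt asking a model to condense the remainder to a user-set target length. The sentence splitting and prompt wording are assumptions.

```python
def merge_summaries(summaries: list[str], target_words: int = 150) -> str:
    """Drop exact-duplicate sentences, then build a condensation prompt."""
    seen, kept = set(), []
    for summary in summaries:
        for sent in summary.split(". "):  # naive sentence split for the sketch
            key = sent.strip().lower()
            if key and key not in seen:
                seen.add(key)
                kept.append(sent.strip())
    merged = ". ".join(kept)
    return (
        f"Condense the following notes into a single summary of about "
        f"{target_words} words, removing any repeated information.\n\n{merged}"
    )
```

The `target_words` knob is how the end user's preferred final summary size would flow into the combine call.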
The final piece of our multi-task NLP pipeline is a bit of output cleanup and product improvement. Considering the output from GPT-3 includes both our blended summary and named entities as one string, we built simple post-processing to do the following:
1. Split the two tasks apart
2. Convert the string NER fields into key-value pairs
3. Clean up any bad characters
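The three steps above can be sketched as one small function. It assumes the model emits the summary followed by an "Entities:" line of semicolon-separated "key: value" pairs; our actual delimiter and format are part of the fine-tune, so treat this layout as an assumption.

```python
import re

def split_output(raw: str) -> tuple[str, dict]:
    """Post-process one raw multi-task completion into (summary, entities)."""
    # 1. Split the two tasks apart at the assumed "Entities:" delimiter.
    summary, _, entity_block = raw.partition("Entities:")
    # 3. Bad character cleanup: keep printable ASCII and newlines.
    summary = re.sub(r"[^\x20-\x7E\n]", "", summary).strip()
    # 2. Convert the string NER fields into key-value pairs.
    entities = {}
    for pair in entity_block.split(";"):
        if ":" in pair:
            key, value = pair.split(":", 1)
            entities[key.strip()] = value.strip()
    return summary, entities
```

The key-value dict is what gets written to the contract review database, while the cleaned summary goes to the reviewer.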
Confidence metrics allow us to put a model confidence value on the outputs of the GPT-3 models in real time. This is incredibly useful: it lets you regenerate poor results, incorporate user feedback, analyze and optimize models, and understand exactly how well your model performs in production. This is a custom model that evaluates the output relative to high-quality token sequences for the same use case.
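Our confidence metric is a custom model, but a common lightweight proxy, shown here as an assumption rather than our method, is the average per-token log probability that the completions API can return via its logprobs option, mapped to a 0-1 score:

```python
import math

def confidence_from_logprobs(token_logprobs: list[float]) -> float:
    """Geometric-mean token probability of the generated output (0.0-1.0)."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))
```

A score near 1.0 means the model assigned high probability to every token it generated; low scores are candidates for regeneration or human review.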
We ended up with a document processing pipeline that processes master service agreements into a blended information summary and dynamic named entities. Most of the training-data master service agreements are between 20 and 30 pages, but we’ve processed much longer agreements into longer summaries and more entities.
There are a few immediate results that we achieved with this pipeline that are worth noting:
1. Reduction in service agreement review time. The goal of the blended summary and entities is to give a lawyer/reviewer all the important information they would want to see when reviewing a master service agreement. This means a broader, more abstractive summary of information that is simply worth mentioning, plus exact information for key points. The blended summary above includes both of those and two named entities we would want on hand when reviewing any service agreement. A top-level review of a master service agreement might take an hour, whereas generating this blended summary takes less than 30 seconds. With the hourly rate of a lawyer in the US around $300, this pipeline is an awesome time and cost reducer.
2. Quickly update an internal client database. These named entities can be extracted and stored in a contract review database for easy reference. Fields that appear in any MSA such as company name, contract value, payment terms, and others can be automatically stored away while the blended summary information is manually reviewed for risk.
3. Pipeline allows for easy summary optimization. As mentioned before, production-level summarization is almost never black and white between extractive and abstractive. A “goal” summary that provides a customer valuable information is normally some mix of the two, based on the exact document fed in. This makes it very difficult to evaluate the multi-task output using machine-based evaluation metrics: the accuracy of an output is relative to the reader, in what information they feel should be included in the summary and how it should be portrayed. This evaluation is critical to iterating on the pipeline and making improvements in the right direction for what the customer believes is a better summary. This pipeline allows for easy human-based evaluation and iteration through prompt optimization for the GPT-3 models. We use human feedback on outputs both during development and in production so the models are constantly improving.
4. Base pipeline can be retrained for different document types. One of the long-term results we achieved is a pipeline that can easily be retrained for documents other than MSAs. As long as the inputs and outputs stay the same, the middle models can be easily fine-tuned on the new document set.
A beautiful summary and NER of a few termination agreement clauses.
Schedule a call today to take a look at how we can process long-form documents into summaries, entities, or both! We’ve built these same pipelines for input documents up to 100 pages in length and a number of legal use cases. You can get started quickly leveraging one of these pipelines to automate legal document processing or any other long-form input.