Solving Modern Document Classification Challenges With Deep Learning | SOTA Document Classification Use Cases
Explore accurate classification algorithms using the latest innovations in deep learning, computer vision, and natural language processing.
Visually focused automation in business settings has increased dramatically in accuracy and speed with the growth of computer vision architectures. This allows manually focused industries to leverage computer vision through everyday devices such as phones and ceiling cameras, without expensive hardware or vendor-locked APIs. We’ve built a number of these systems for various use cases, centered around how computer vision architectures built on PyTorch can be leveraged to automate business processes.
One such use case is an image recognition system built with PyTorch that recognizes product SKUs, gathers information from the classified image, and feeds that information into an inventory management system, CMS, or wherever it is needed. Let’s talk a bit about performing image classification with PyTorch and walk through an example use case.
Image classification is the process of taking an image as input and outputting a class label that best describes the image. Depending on the labels chosen during training, this can range in granularity from detecting the class of a product (beverage, meat, flour) in an image to detecting specific SKUs (7oz flank steak, Coke Zero 12oz, Coke Glass Bottle 7oz). The labels can only be as precise as the information we give the model during training.
Image classification in single-view, multi-object settings often requires a prior step of image segmentation or object detection. The upstream model recognizes the instances of products; the products are then cropped, and we perform image classification on each cropped product.
PyTorch is an open source deep learning framework used for developing and training custom neural networks. It is widely used by researchers and developers due to its flexibility and ease of use. PyTorch also has a number of unique features that make it well suited for image classification tasks. We’ll dive into these themes in the product recognition walkthrough below.
There are a few reasons why we prefer PyTorch over TensorFlow for this image classification system. The biggest reason is that PyTorch is much easier to use and debug than TensorFlow for custom architectures, which speeds up development and reduces the number of iteration cycles required to reach production. Its greater flexibility allows much more customization in how the model works and lets us fit a model to the given task. This has also led to PyTorch becoming the favored framework for research, with new projects and pre-trained models coming out regularly, which we can use to build better AI systems faster on the popular architectures built into PyTorch.
Some of the most popular architectures that are included in PyTorch are AlexNet, VGG, Inception, and ResNet. Each of these architectures has been pre-trained on large datasets and is able to achieve state-of-the-art performance on many image classification tasks. Depending on the individual needs, the models often need to be fine-tuned or customized for the specific application to get the best results.
Since PyTorch has become the gold standard in research in recent years, using it we can apply cutting edge techniques being developed and published by researchers around the world.
Product recognition systems are trained to detect and classify objects in images, just like any other image classification model. The main difference is the type of data used to train the model. Product recognition models are trained on a dataset of product images, which can be gathered from online resources or taken from a company's product catalog. They can then be used to automate sorting and organizing products, building a catalog, taking inventory, or tracking product information. Computer vision product recognition combined with text analysis can also be used to automatically generate targeted marketing content or product recommendations.
There are a few steps that need to be followed in order to deploy a product classification model. We’ll walk through each of these steps in detail in the next section.
We will go over these steps in a use case example for food retail shelves, with the goal of automating parts of the inventory management process. We want to be able to take a photo of a product shelf and see which products appear on it. We’ll focus on a two-model architecture for finding local features in the full-view image and classifying the exact product SKUs.
Our use case has the additional constraint that we would like to train on single white-background product images, as these images are generally available for most products, saving us the step of having to create training images by cropping each product from full shelf photos.
The issues here normally come from the fact that these images are much cleaner than what you will have for each product on the full shelf. Product images are centered, straight-facing, and much clearer than what you are left with after cropping. From experience we’ve seen that adding preprocessing to the training workflow is the only way to replicate the high level of noise you will have in the production setting.
This constraint will influence our training strategy and AI model choice. We settle on a network and training approach with single-shot capabilities, meaning we will not require many images of the same product from many angles for the model to recognize it. This sets our choice of model on a pipeline consisting of a feature extractor pretrained on ImageNet and a TransformNet architecture. The neural networks perform separate tasks and feed into each other to learn to recognize products from a single image in many contexts. We can construct the networks using PyTorch's torch.nn module, which allows for creating arbitrarily large neural networks with a few lines of code.
We need relevant data both for training the model and evaluating its performance. For our use case example, our relevant datasets would be a product database, such as the one used for the client's online store, or internal CMS with images of the products.
In our case, the target data of stocked shelves with products will need to be collected and labeled. How much data is needed depends largely on how much variety there is in the data (store decoration, different shelving types, how much the lighting differs between stores) and how well existing datasets and pre-trained models can be leveraged. Good planning and use of existing labeled datasets can save thousands of dollars in data labeling work. For the supermarket product recognition model, a public research database exists that we can use to build a proof-of-concept to see if our idea is feasible.
Using a training dataset with a high variation of views of the products is key to be able to operate in a production environment. Pictures of the retail shelves will never be perfectly straight on and the front side of products won’t always be easy to see. If you don’t include variations of the visible product your training accuracy will be much higher than that of your test set, and model optimizations alone will not greatly improve the results.
Labeling the data is often the most time-intensive part of any project; smart planning can help reduce both time and cost.
The goal of the application also shapes the approach to pre-processing. Common techniques include changing the colors, size, angle, and cropping of images to extract as much useful data from the training set as possible. PyTorch offers a variety of built-in methods for augmenting training image data, such as flips, offsets, and color shifts, through its torchvision.transforms module.
For our retail product recognition tool, overly aggressive data augmentation would be counterproductive: products can be very similar to each other, and small changes in color might actually make results worse. We settle on simple cropping and normalization, which aligns the image values around the mean of the training data, a technique proven to boost neural network performance.
It is often possible to save time and resources by using a pre-trained model and performing so-called "fine-tuning", which adapts an existing model to a new purpose. Equally important is the choice of an appropriate objective function to calculate the model loss, which determines how the model evaluates its performance during training.
Since our model will both classify and localize the product in the image, we require two loss functions, a localization loss measure that gives the location error and a prediction loss that measures classification error.
We use nn.HingeEmbeddingLoss and nn.SmoothL1Loss from PyTorch's built-in loss functions for classification and localization respectively. For activations, ReLU is the most common choice, as it adds non-linearity to the network.
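A minimal sketch of combining these two loss terms. The tensor shapes, the dummy values, and the 0.5 weighting are illustrative assumptions, not tuned values from this project:

```python
import torch
import torch.nn as nn

loc_loss_fn = nn.SmoothL1Loss()        # localization (box regression) error
cls_loss_fn = nn.HingeEmbeddingLoss()  # classification / embedding error

# Dummy tensors standing in for model outputs and labels.
pred_boxes = torch.randn(8, 4)               # predicted box offsets
true_boxes = torch.randn(8, 4)               # ground-truth box offsets
distances = torch.randn(8).abs()             # embedding distances, >= 0
targets = torch.randint(0, 2, (8,)) * 2 - 1  # hinge labels in {-1, 1}

# Weighted sum of the two terms; the optimizer minimizes the total.
total_loss = loc_loss_fn(pred_boxes, true_boxes) \
    + 0.5 * cls_loss_fn(distances, targets)
```

In practice the relative weight between the two terms is a hyperparameter worth tuning, since it trades off localization precision against classification accuracy.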
After training we see the model is very good at recognizing arbitrary amounts of the products on the shelf, giving us not just the product presence, but also its location with satisfying accuracy, which we quantify through exhaustive experiments.
Once we have a model that produces satisfying results, it needs to be made robustly accessible to wherever it will be eventually used, be that in an app on a mobile device or the background of a large scale server.
Deployment and cloud infrastructure for image inputs can be more difficult than for other machine learning domains. Input images can be quite large, which causes slowdowns in batch scenarios, so building an asynchronous architecture for ingesting and processing these images is valuable here. Some managed deployment services don’t allow inputs larger than 5MB, which can be a serious problem for our images. The common tradeoff is reducing image quality, but as we’ve seen above, this can create issues in high-noise retail shelf environments.
The testing and maintenance workflow for computer vision based systems is much more rigorous than other similar machine learning systems for a few reasons:
- Image inputs generally have more background noise and less relevant features than text based problems
- Data variance beyond what is seen in the test environment grows quicker for computer vision problems, even without a domain change. Slight changes to angle, brightness, display sizes, and distance to products all affect the results.
- Massive task-agnostic pre-trained models (comparable to LLMs in NLP) are not as readily available in computer vision.
A robust, well-planned training and optimization workflow allows any needed changes to be made seamlessly, and lets the system improve as our understanding of how real users produce images grows. Regular testing, user experience gathering, and good model versioning allow the models to be continually improved. The models can be automatically tested each time a new release is pushed out or the training dataset changes.
We often deploy training pipelines that connect directly to product databases to allow quick, automated fine-tunes of our models. This data comes from real product use, which greatly improves our data variance coverage in production. Reducing the friction required to deploy new model versions is one of the best ways to shorten total development time and extend the lifecycle of AI products.
Width.ai builds custom NLP & computer vision software (just like this product classification model!) for businesses to leverage internally or as a part of their production. Schedule a call today and let’s talk about how we can help you deploy custom image classification models, or any other computer vision system.