Building Production-Grade spaCy Text Classification Pipelines for Business Data
Unlock the full potential of spaCy with this guide to building production-grade text classification pipelines for business data.
Data matching is the process of comparing data values, in structured or unstructured formats, based on similarity or a shared underlying entity. "Similarity" can mean whatever your use case requires, based on what you define as an "entity". Data matching delivers value in data cleansing, manual process automation, product market analysis, and sales intelligence. Companies with poor data workflows are estimated to lose about 12% of their revenue to data quality issues.
Old-school data matching tools rely on outdated methods such as rule-based approaches and fuzzy matching. We're going to look at how a custom machine learning pipeline achieves better results that keep improving as your data grows. Once you've used a machine-learning-based approach for data matching, you'll never go back to these old approaches.
At a high level, data matching compares data such as product data, user data, and customer data for similarity by some metric. Many different types of data can be compared for similarity, with product data matching being the most popular. Different data matching tools require different data points: some decide similarity from a product image plus a product title, while others work from a single data point such as the title alone. The main idea is to link sets of data points based on the matching identifiers you outline.
Data matching with machine learning is a powerful matching engine architecture built to leverage the learning capabilities of techniques such as natural language processing, image similarity, and linear combination models to match data on a deeper level. These pipelines learn the real relationship between the data you consider a match and the data you don't. We'll see how this greatly outperforms entity matching and fuzzy matching products, which constantly require tweaking and adjustment over time.
Data matching with machine learning gives you a whole new level of flexibility across a few key areas.
1 - We fine-tune the architecture for your specific use case, which lets you redefine what a "match" is. Since we're matching on similarity, this can mean many different things in your backend system: two matching SKUs, two UPC codes, integers that match exactly or fall within a range - it's completely up to you. We also have the flexibility to tune the architecture for a wide range of data types, including images, long integers, product descriptions, and SSNs. We have the machine learning algorithms in place to make the data matching pipeline fit your data, as it should.
2 - These machine learning models use training and fine-tuning to learn a deeper relationship between your data and what counts as a match in a specific instance. This produces matching that goes far beyond high-level entity pairing or fuzzy matching, neither of which is trained for your specific use case. Fine-tuning not only opens data matching to more use cases but reaches higher accuracy, because the backbone models are "focused" on your use case alone. That means fewer edge cases and false positives slip through, fewer adjustments are needed over the product's life cycle, and your model transitions to new data more easily.
What's the point of data matching software if you have to change coded rules and parameters every time your data shifts a bit? These pipelines learn a much stronger relationship with your exact data, and the backbone natural language processing or computer vision models are resilient enough to see through shifts in data over time. Let's look at a real example:
We're matching product UPC codes based on whether the underlying SKU is the same, in order to clean up sales records and understand the sales data for a specific SKU. The UPC codes arrive as either full or partial codes:
UPC code 1: 7-25272-73070-6
UPC code 2: 2527273070
These are both the same underlying UPC code and are a match. Rules and fuzzy matching can handle this one perfectly well.
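For a simple pair like this, a rules-based baseline really is enough. A minimal sketch (the function names and the containment rule here are illustrative assumptions, not our production logic) might strip the formatting and check whether one digit sequence contains the other:

```python
import re

def normalize_upc(code: str) -> str:
    """Keep only the digits of a UPC string, dropping dashes and other noise."""
    return re.sub(r"\D", "", code)

def rule_match(a: str, b: str) -> bool:
    """Match when one normalized digit sequence contains the other."""
    da, db = normalize_upc(a), normalize_upc(b)
    return da in db or db in da

# The partial code 2527273070 sits inside the full code 725272730706
print(rule_match("7-25272-73070-6", "2527273070"))  # True
```

Note the fragility, though: a short enough partial code will be "contained" in many unrelated full codes, and every new noise pattern means writing another rule.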
UPC code 1: 725272730706
UPC code 2: 2527273070
UPC code 3: 1-725272730706-1
UPC code 1: 864515000449
UPC code 2: 8-64515000449-AA
UPC code 3: 8-64515-00044-9–1*
UPC code 4: APX-8-64515-00044-
What happens as we increase the noise the model must decipher to find a match between the same underlying UPC code? As you can see, the original rules and fuzzy matching would have to be adjusted as our data variance grows. Machine-learning-based data matching learns the real relationship between the numbers and sequences, so it knows when to ignore noise while keeping the same baseline knowledge of what constitutes a UPC code match. Instances like the ones above come up all the time: new data sources get added, existing sources change format, the initial problem set evolves. We don't want to keep adjusting our matching algorithm every time that happens. On top of that, our data matching pipeline can use NLP to tell you why something is or isn't a match!
The data matching models have never seen these UPC codes before; they match strictly on an understanding of what makes up a UPC code. There are no rules or fuzzy matching involved - just natural language processing. We can feed the model any level of variation in the data and it will cover it. That's the power of machine-learning-based data matching.
Let's look at a few of the many ways you can use machine-learning-based data matching and the algorithms behind each.
Using just the title of an ecommerce or retail product, we can match data records that share the same underlying SKU. This architecture is trained to perform data matching with very high levels of noise, missing fields, and little standardization. Best of all, it lets you match an entity that could span multiple columns (an entire product!) from a single title.
Here we use GPT-3 as the baseline NLP model of the architecture, with a few other models sprinkled throughout the pipeline to improve accuracy.
The amount of noise in product titles that can be covered even at a baseline level is extremely high, and it gets even higher when the model is tuned for your use case. This example has no spaces between the name and bottle size, different bottle counts, and noise from another field on the end. As you can imagine, it's impossible to write rules that anticipate this level of noise. On top of that, GPT-3 gives us easy-to-evaluate confidence metrics, and this one came back 87.40% confident in the result.
Completely different domain? No problem - the architecture still works its magic and tells that these two pairs of Shein leggings are not the same SKU, this time with an even higher confidence score of 88.08%!
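The production pipeline described above layers GPT-3 with other models, but its core mechanic can be sketched as embed-and-threshold: represent each title as a vector, then declare a match when the vectors are close enough. In this sketch the embedding vectors are assumed to come from whatever model you use, and the 0.85 threshold is purely illustrative:

```python
import math

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def titles_match(emb_a: list[float], emb_b: list[float],
                 threshold: float = 0.85) -> tuple[bool, float]:
    """Declare a SKU match when similarity clears the (illustrative) threshold."""
    score = cosine_similarity(emb_a, emb_b)
    return score >= threshold, score
```

In practice the threshold itself would be tuned on labeled match/non-match pairs from your own data, which is where the fine-tuning described earlier comes in.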
By using deep learning based computer vision models we can directly compare database images for duplicate records. These images can be internal product images, competitor product images, or any other image data.
These computer vision models learn to recognize and extract features from images rather than memorize the specific entities in them. This lets you deploy the architecture on any image dataset without training on your specific data.
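A real pipeline would use learned CNN embeddings for this, but the compare-by-features idea can be shown with a much simpler stand-in, a perceptual average hash: reduce each image to a compact fingerprint, then compare fingerprints instead of raw pixels. The grayscale 2D-list input here is purely illustrative:

```python
def average_hash(pixels: list[list[int]]) -> list[int]:
    """Fingerprint a grayscale grid: bit is 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1: list[int], h2: list[int]) -> int:
    """Number of differing bits; small distances suggest near-duplicate images."""
    return sum(a != b for a, b in zip(h1, h2))
```

Because the hash depends on relative brightness rather than exact pixel values, near-duplicates with small edits still land close together, which is the same property the deep feature extractors provide at a far more semantic level.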
This image similarity architecture can be used for any image data matching use case, including the product and competitor image scenarios above.
No matter what type of data a text field holds, machine-learning-based data matching tools can match those fields and link similar entities together. Whether it's customer data as short as a name or long-form legal documents, we can use deep learning pipelines to match records to the same person or the same document type. Fine-tuning these text field matchers lets us quickly adjust to any use case and account for variation in string length.
We've matched documents over 25 pages long and data entities with over 25 fields; whichever you're looking for, our deep learning architectures can match them!
Using popular similarity models and large language models, we can compare long-form text fields to link exact or similar entities. This works on any long-form text; here we compare entire product descriptions to match similar products. It's also a great way to show how you decide what "similarity" means for your use case. Earlier we matched on the underlying SKU; here we use a looser "clothing type" match. This is a great way to compare products across competitors and understand the exact market.
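The production comparison uses similarity models and LLMs, but the simplest transparent baseline for "how similar are two descriptions" is word-set overlap (Jaccard similarity). This sketch, with invented example titles, shows the shape of the interface; it also shows why learned models are needed, since it scores "high waist" and "high-rise" as unrelated:

```python
def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Overlap of word sets: 1.0 for identical vocabularies, 0.0 for disjoint ones."""
    tokens_a = set(text_a.lower().split())
    tokens_b = set(text_b.lower().split())
    union = tokens_a | tokens_b
    return len(tokens_a & tokens_b) / len(union) if union else 0.0

score = jaccard_similarity(
    "high waist seamless leggings black",
    "seamless high-rise leggings in black",
)
print(round(score, 3))
```

A learned similarity model would recognize these as near-identical products; token overlap only credits the three literally shared words.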
The model can also take variables such as the price and size fields into account.
Using the same NLP models outlined above, we build an architecture that lets the models learn what makes two data points a match and what doesn't. You get a level of control over which specifics drive a match (name, price, keywords, etc.) that you can't get with more rigid approaches.
In many instances a single entity (or row) is made up of multiple data points from different fields. You can compare these individually against another set of data points (first name to first name), or compare an entire row to an entire row (first name, last name, and email against the same fields from another entity). Linking data fields together lets you create a new equation for record matches. These architectures are usually much larger and incorporate more supervised learning than the other data matching methods described.
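One common way to link fields into a record-level decision is to score each field pair separately and combine the scores with weights. In a real pipeline those weights would be learned from labeled match pairs (e.g., via logistic regression, the supervised learning mentioned above); the field names and weight values in this sketch are illustrative:

```python
def record_match_score(field_scores: dict[str, float],
                       weights: dict[str, float]) -> float:
    """Combine per-field similarity scores into one record-level match score."""
    return sum(weights.get(field, 0.0) * score
               for field, score in field_scores.items())

# Per-field similarities for a candidate pair (first/last name identical, emails partly alike)
scores = {"first_name": 1.0, "last_name": 1.0, "email": 0.5}
weights = {"first_name": 0.2, "last_name": 0.3, "email": 0.5}  # illustrative, learned in practice
print(record_match_score(scores, weights))
```

Thresholding this combined score gives the "new equation for record matches" the row-to-row comparison describes, with a discriminative field like email pulling more weight than a common first name.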
By using existing relationships between rows of customer data, we can fine-tune our NLP architecture to link records.
Sign up for a demo today to see how our data matching solution can work in your organization. No matter the use case or data type, we can use AI and deep learning to match records at the highest accuracy in the industry. Let's schedule a demo!