A Deep Guide to Context-Aware Recommender Systems for E-Commerce
Discover how context-aware recommender systems (CARS) use deep neural networks to hyper-personalize product recommendations based on the social, financial, and environmental context behind every purchase.
Equipping your e-commerce site with even the most basic recommender system delivers tremendous business benefits. You find your average order value shoots up because your customers add your suggested products to their carts. Your site's customer lifetime values and customer retention go up because your customers perceive that you somehow sense what they want. Click-through rates to your product ads and conversions from your marketing emails rise. You can upsell and cross-sell products to your customers based on their interests.
All of that comes from using just basic recommender systems, the ones that treat users and products as nameless cohorts. Now imagine just how much more beneficial hyper-personalized recommendations can be — all carefully selected according to every customer's individual desires. Context-aware recommender systems (CARS) hyper-personalize product suggestions by sensing and understanding all the social, financial, emotional, and environmental factors — collectively called the “context” — that influence your customers to make their purchase decisions.
Even inconspicuous metrics like how long your customer kept a product page open, how they moved their mouse over the page, and how quickly they scrolled add to the context.
With a context-aware recommender system, you can not only market and sell to your customers at the right place and right time, but also plan ways to recreate some of the contextual conditions that may persuade them to buy more from you.
A CARS brings context awareness and hyper-personalization into every step of your customer's shopping experience. Amazon was already getting as much as 35% of all sales through recommendations back in 2013 and the proportion is likely to be higher now.
You’ve probably seen Amazon’s persuasive on-site techniques for selling more — recently viewed items, “frequently bought together” sections, and “customers who bought this also bought” sections. They even serve you category recommendations through their “related to items viewed” and “best sellers in category” sections. Wherever you look on Amazon, you find recommendations in action. A CARS gives your site the same powers.
The products shown on your customer’s front page are selected based not just on what they bought in the past or what's sitting in their cart but also on factors like the time of day, the device they’re using, the environment they're in, or their current activity. You can provide your customers with personalized “top picks for you” pages like Amazon does.
When they search for a particular product, a CARS uses their search query to fine-tune the list of product suggestions in real time. It can even infer context from the patterns in their typing. Did they seem unsure when typing a word? That may be a hint to broaden the categories of products. Did they modify a search several times in a row? That may mean none of the top results were relevant to them until they finally clicked a product.
A CARS can learn that one of your customers prefers using your app on weekdays and your website on weekends. Context awareness can help you understand such inconspicuous customer behavior and tune product suggestions suitably.
You can send hyper-personalized suggestions through marketing emails customized to every one of your customers. Contextual factors like the month and the day of the week influence exactly what your customer wants at the time and thus influence your recommendations too. You can notify your customers about an upgrade or newer version of a product they bought in the past. You can tell them about the best-selling products by brand based on their category interests, just like Amazon does. Personalized marketing emails are a great way to upsell products and increase customer lifetime values.
A CARS can tune its suggestions based on what's sitting in your customer's cart and for how long. It can use socio-economic or environmental clues to make frequently-bought-together and other-customers-bought suggestions a lot more fine-grained. Two customers are no longer similar simply because they bought the same item but because they bought the same item in the same hour on a particular day of the month from the same city.
Before diving into CARS, it helps to understand the idea of context better. Context covers all the social, financial, psychological, and environmental factors that influence an interaction, such as a customer purchase or a product rating.
Such contextual information can be very complex and often domain-specific, covering diverse factors like:
A CARS uses knowledge discovery to uncover the semantic relationships between users, items, and context that influence a user’s interactions. Context awareness keeps your recommender systems adaptive to changes in the user’s environment.
In 2010, Gediminas Adomavicius and Alexander Tuzhilin proposed three paradigms for context awareness in their paper on context-aware recommender systems — contextual pre-filtering, contextual post-filtering, and contextual modeling. The two filtering approaches inject context either before or after a regular, context-unaware recommender system has produced its results. Because they merely wrap context around an otherwise context-blind algorithm, we’ll focus only on contextual modeling.
In contextual modeling, the core recommendation algorithm — like collaborative filtering — is modified for context awareness using a multidimensional approach, by adding contextual features to the normally two-dimensional set of user versus item data. We’ll explore these algorithms in detail next.
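One way to picture the difference, following Adomavicius and Tuzhilin's formulation: classic collaborative filtering learns a two-dimensional rating function, while contextual modeling learns a multidimensional one.

R: Users × Items → Ratings (traditional collaborative filtering)
R: Users × Items × Contexts → Ratings (contextual modeling)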
It helps to first understand the motivation behind using deep neural networks (DNNs) for recommender systems before exploring specific models.
Recommender systems use different approaches like collaborative filtering that recommends based on user similarities, or content-based filtering that recommends based on item similarities. Hybrid recommender systems combine multiple approaches.
An algorithm common to these recommendation techniques is matrix factorization (MF). The data you have is a user-item rating matrix derived from the interactions captured in your business databases. Given such a matrix, you want to derive the per-user and per-item latent weights whose product reproduces it, and matrix factorization is one of the easiest ways to do that.
Once you know those per-user and per-item weights, you can determine both user similarities and item similarities using clustering or Bayesian networks, and predict the rating that a new user is likely to give to an item.
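To make that concrete, here's a minimal matrix factorization sketch in plain NumPy on a hypothetical toy rating matrix. Production systems would use a tuned library implementation, but the idea is the same: learn per-user and per-item factor vectors whose dot products reproduce the observed ratings.

```python
import numpy as np

# Toy user-item rating matrix; 0 marks an unobserved rating.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

n_users, n_items = ratings.shape
n_factors = 2                                           # length of each latent factor vector
rng = np.random.default_rng(42)
P = rng.normal(scale=0.1, size=(n_users, n_factors))    # per-user weights
Q = rng.normal(scale=0.1, size=(n_items, n_factors))    # per-item weights

lr, reg = 0.01, 0.02
for epoch in range(2000):
    for u, i in zip(*ratings.nonzero()):                # train on observed entries only
        err = ratings[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])          # gradient step on user factors
        Q[i] += lr * (err * P[u] - reg * Q[i])          # gradient step on item factors

predicted = P @ Q.T                                     # fills in the missing ratings
print(np.round(predicted, 1))
```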
However, MF’s simplicity also limits its predictive capability. Because matrix factorization is a simple linear operation, it runs fine even on legacy hardware, but it can only capture linear user-item relationships. State-of-the-art DNNs running on modern hardware can model the complex, non-linear real-world relationships between users and items much better.
In 2019, Livne et al. proposed context awareness using deep neural networks to learn the complex relationships between users, items, and contextual factors. They used DNNs like long short-term memory (LSTM) and autoencoders for latent representation learning of the context.
The extracted latent context vectors were then integrated into the Neural Collaborative Filtering (NCF) architecture, which replaces linear matrix factorization with a DNN that learns more expressive, non-linear functions for collaborative filtering.
Let’s understand what latent context modeling means and how these models advanced the state of the art in CARS.
As you saw earlier, context can be complex, multi-faceted, and domain-specific. While your domain experts can guess some obvious and logical factors, the real world can be far more complex than they ever imagined. Can a music expert predict the influence of local weather, local news, or political beliefs on your customer’s music choices? These are the kind of real-world latent influences that deep neural networks can discover and model.
Because you don’t know all the factors that influence your customer’s decisions, the best approach is to avoid manual feature selection altogether. Just throw in every measurable factor you can think of — social, financial, political, environmental, social media sentiments, location, weather, local news, or smartphone sensor readings. Your deep neural networks can then figure out their individual and combined influences on your customers’ choices.
But such a hodgepodge of diverse features can easily number in the thousands. Even on modern hardware, this increased dimensionality and sparsity can cripple most recommender algorithms in production environments.
Latent context modeling reduces both dimensionality and sparsity by using shorter, dense vectors to express the same information. The next logical question is: Which neural network is good at this kind of information compression? The study compared context-aware recommender systems that used autoencoders and LSTMs for this task.
An autoencoder is a DNN trained to reproduce its input: its expected output vector has the same values and length as its input vector. But having significantly fewer neurons in its hidden layers forces it to learn to express the same input information using shorter vectors. You can use these shorter vectors in place of the high-dimensional input data because they contain the same information and can be processed far more efficiently. This is an example of representation learning for dimensionality reduction.
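Here's a minimal sketch of that idea in PyTorch, with illustrative dimensions rather than the paper's exact architecture: the raw context vector is squeezed through a narrow bottleneck, and the network is trained to reconstruct its own input.

```python
import torch
import torch.nn as nn

RAW_CONTEXT_DIM = 247      # e.g., weather, location, sensors, calendar (assumed size)
LATENT_DIM = 16            # length of the compressed latent context vector

class ContextAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(RAW_CONTEXT_DIM, 64), nn.ReLU(),
            nn.Linear(64, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, RAW_CONTEXT_DIM),
        )

    def forward(self, x):
        z = self.encoder(x)            # short latent context vector
        return self.decoder(z), z      # reconstruction + latent code

model = ContextAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

raw_context = torch.randn(32, RAW_CONTEXT_DIM)   # a dummy batch of raw context features
for _ in range(100):
    reconstruction, latent = model(raw_context)
    loss = loss_fn(reconstruction, raw_context)  # output should match input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```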
When used for latent context modeling, the autoencoder reduces the high-dimensional raw context features to shorter vectors. However, it doesn’t treat context as a first-class dynamic entity whose long-term changes and influences should be accurately modeled. Instead, it looks at just the current context and treats it like any other information that should be compressed. You can think of it as more of an optimization approach. The paper calls this model the Latent Neural Context Model (LNCM).
These latent context vectors are integrated into the NCF model by combining them with user and item vectors passed to its first multi-layer perceptron (MLP) layer.
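A simplified sketch of that integration is shown below; the layer sizes are our own illustrative assumptions, not the paper's exact configuration. The user embedding, item embedding, and latent context vector are concatenated and fed through an MLP that predicts the rating.

```python
import torch
import torch.nn as nn

N_USERS, N_ITEMS, EMB_DIM, CONTEXT_DIM = 1000, 500, 32, 16   # illustrative sizes

class ContextAwareNCF(nn.Module):
    def __init__(self):
        super().__init__()
        self.user_emb = nn.Embedding(N_USERS, EMB_DIM)
        self.item_emb = nn.Embedding(N_ITEMS, EMB_DIM)
        self.mlp = nn.Sequential(
            nn.Linear(2 * EMB_DIM + CONTEXT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, user_ids, item_ids, latent_context):
        x = torch.cat([
            self.user_emb(user_ids),
            self.item_emb(item_ids),
            latent_context,                 # from the autoencoder (or LSTM encoder below)
        ], dim=-1)
        return self.mlp(x).squeeze(-1)      # predicted rating / preference score

model = ContextAwareNCF()
users = torch.randint(0, N_USERS, (8,))
items = torch.randint(0, N_ITEMS, (8,))
context = torch.randn(8, CONTEXT_DIM)       # latent context vectors for the batch
scores = model(users, items, context)
```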
An LSTM is good at predicting sequences because it assumes the current value of a feature depends on all its previous values. Real-world phenomena like smartphone sensor readings and weather changes work exactly like that.
The paper proposes an LSTM as an alternative to an autoencoder and calls it the Sequential Latent Context Model (SLCM). It’s an LSTM encoder-decoder sequence-to-sequence (or seq2seq) model that predicts the next context values based on their previous values. Once trained to predict the context accurately, you can use the encoder block’s final outputs as reliable context vectors that contain all the same information as the raw context data but in a compressed efficient form. Moreover, these vectors will reflect the true nature of context better than the ones from an autoencoder.
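Here's a stripped-down sketch of the encoder half of that idea in PyTorch; the decoder that learns to predict the next context values is omitted, and all sizes are illustrative assumptions. The encoder reads a window of past context snapshots, and its final hidden state becomes the sequence-aware latent context vector.

```python
import torch
import torch.nn as nn

RAW_CONTEXT_DIM, LATENT_DIM, SEQ_LEN = 247, 16, 24   # illustrative sizes

class SequentialContextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(RAW_CONTEXT_DIM, LATENT_DIM, batch_first=True)

    def forward(self, context_sequence):
        # context_sequence: (batch, SEQ_LEN, RAW_CONTEXT_DIM)
        _, (hidden, _) = self.lstm(context_sequence)
        return hidden[-1]        # (batch, LATENT_DIM) sequential latent context vector

encoder = SequentialContextEncoder()
past_context = torch.randn(8, SEQ_LEN, RAW_CONTEXT_DIM)
latent_context = encoder(past_context)   # feed this into the NCF-style model above
```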
These sequential latent context vectors are combined with user and item vectors passed to the NCF model’s first MLP layer.
The study compared recommender systems on two datasets — CARS and Yelp.
The CARS dataset is a private dataset with about 21,397 user ratings for points of interest like restaurants and cafés. It contains as many as 247 contextual features collected from smartphone sensors and other data services, grouped into the following contextual groups:
The Yelp dataset is a public dataset of points of interest published by Yelp. Several contextual features are derived from its raw data — year, month, day of the week, city, holiday or weekend indicators, season, location, and more.
The study proposed three context-aware DNN models and compared them against four baseline models. The three context-aware DNN models were:
The four baseline models were a mix of traditional and neural collaborative filtering models, with some of the traditional ones being context-aware:
These seven models were compared on these metrics:
The results show all three context-aware DNN models outperforming the four baseline models on all metrics. Among them, the sequential latent context model has impressive top-k hits along with low error numbers. The study establishes the effectiveness of DNN context-aware recommender systems.
In 2020, the same team of Livne et al. proposed better context modeling approaches to improve explainability, privacy, trust, and data collection efficiency — the kind of improvements that can elevate your customers’ opinions about your business.
While latent context modeling improves the dimensionality of data and the quality of recommendations, it has these disadvantages:
To address all these, we need to improve both context modeling and data engineering.
Genetic algorithms (GA) are a neat solution to overcome all these disadvantages because they reduce dimensionality while retaining explainability and efficiency.
In computer science, a genetic algorithm is an optimization search method. It iteratively looks for the most optimized option in a set of possible solutions, using biology-inspired concepts like mutation, fitness, and crossover.
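Here's a minimal, self-contained sketch of genetic-algorithm feature selection on synthetic data. The fitness function, model choice, and GA settings are illustrative assumptions rather than the paper's setup: each individual is a binary mask over the context features, and its fitness is the cross-validated score of a simple model trained only on the selected features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                          # 40 hypothetical context features
y = (X[:, 3] + X[:, 7] - X[:, 12] > 0).astype(int)      # toy target

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask == 1], y, cv=3).mean()

POP, GENS, MUT = 20, 15, 0.05
population = rng.integers(0, 2, size=(POP, X.shape[1]))  # random binary feature masks

for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-POP // 2:]]  # keep the fittest half
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])                  # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < MUT                # random mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    population = np.vstack([parents, np.array(children)])

best = population[np.argmax([fitness(ind) for ind in population])]
print("Selected features:", np.where(best == 1)[0])
```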
In context-aware recommendation methods, the context features are selected using a genetic algorithm as follows:
The authors ingeniously addressed this intractability problem using a second neural network that we’ll understand in the next section.
But why do all this? Because it has several benefits:
The paper uses three different machine learning and deep learning models:
The study evaluated this approach on two datasets — CARS and Hearo. Both are private datasets with ratings for various points of interest (POIs). The CARS dataset is the same one described previously but used as many as 480 contextual features this time.
The Hearo dataset contains recommendations of POIs along with users’ binary ratings of those recommendations. The ratings come from 77 users and include 661 contextual features.
The study compared this GA approach against nine other models, including the three latent context DNN models we studied earlier.
They used two metrics for evaluation — area under ROC curve (AUC) and log loss. On both, the GA approach outperformed all the other models.
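For reference, both metrics are straightforward to compute from predicted probabilities and true binary ratings, for example with scikit-learn (the values below are made-up illustrative numbers):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, log_loss

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])              # true like/dislike ratings
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])  # predicted probabilities

print("AUC:", roc_auc_score(y_true, y_prob))       # higher is better (max 1.0)
print("Log loss:", log_loss(y_true, y_prob))       # lower is better
```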
Context awareness has been a game-changer for e-commerce recommender systems, thanks to its uncanny ability to provide highly personalized recommendations by understanding your customers’ minds. Given that transformer networks are raising the bar in other information systems, we can anticipate that future research in e-commerce recommender systems is headed that way too.
An e-commerce CARS of the future is likely to be even more context-aware. Family members of your customer and their interests can influence your product recommendations as much as they influence your customer’s purchase decisions.
Data from your customer’s banks or credit card companies — all acquired with their permission — can help your CARS tailor its product suggestions to your customer’s current financial situation and price sensitivity. The possibilities are limitless.
Contact us to equip your marketing and sales departments with context-aware recommender systems that personalize your products and services for each of your customers.