
How Context-Aware Recommender Systems Can Help You Increase Conversions Quickly

Karthik Shiraly · December 9, 2021

Equipping your e-commerce site with even the most basic recommender system delivers tremendous business benefits. You find your average order value shoots up because your customers add your suggested products to their carts. Your site's customer lifetime values and customer retention go up because your customers perceive that you somehow sense what they want. Click-through rates to your product ads and conversions from your marketing emails rise. You can upsell and cross-sell products to your customers based on their interests.

All of that comes from using just basic recommender systems, the ones that treat users and products as nameless cohorts. Now imagine just how much more beneficial hyper-personalized recommendations can be — all carefully selected according to every customer's individual desires. Context-aware recommender systems (CARS) hyper-personalize product suggestions by sensing and understanding all the social, financial, emotional, and environmental factors — collectively called the “context” — that influence your customers to make their purchase decisions.

Even inconspicuous metrics like how long your customer kept a product page open, how they moved their mouse over the page, and how quickly they scrolled add to the context.

With a context-aware recommender system, you can not only market and sell to your customers at the right place and right time, but also plan ways to recreate some of the contextual conditions that may persuade them to buy more from you.

E-Commerce Applications of Context-Aware Recommender Systems

A CARS brings context awareness and hyper-personalization into every step of your customer's shopping experience. Amazon was already getting as much as 35% of all sales through recommendations back in 2013, and the proportion is likely higher now.

You’ve probably seen Amazon’s persuasive on-site techniques for selling more: recently viewed items, “frequently bought together” sections, and “customers who bought this also bought” sections. They even serve you category recommendations through their “related to items viewed” and “best sellers in category” sections. Wherever you look on Amazon, you find recommendations in action. A CARS gives your site the same powers.

The products shown on your customer’s front page are selected based not just on what they bought in the past or what's sitting in their cart but also on factors like the time of the day, the device they’re using, the environment they're in, or their current activity. You can provide your customers with personalized “top picks for you” pages like Amazon does.

When they search for a particular product, a CARS uses their search query to fine-tune the list of product suggestions in real time. It can even infer context from the patterns in their typing. Did they seem unsure when typing a word? That may be a hint to broaden the categories of products shown. Did they modify a search multiple times in a row? That may mean none of the top results were relevant to them until they finally clicked a product.

A CARS can learn that one of your customers prefers using your app on weekdays and your website on weekends. Context awareness can help you understand such inconspicuous customer behavior and tune product suggestions suitably.

You can send hyper-personalized suggestions through marketing emails customized to every one of your customers. Contextual factors like the month and the day of the week influence exactly what your customer wants at the time and thus influence your recommendations too. You can notify your customers about an upgrade or newer version of a product they bought in the past. You can tell them about the best selling products by brand based on their category interests just like Amazon does. Personalized marketing emails are a great way to upsell products and increase customer lifetime values.

A CARS can tune its suggestions based on what's sitting in your customer's cart and for how long. It can use socio-economic or environmental clues to make frequently-bought-together and other-customers-bought suggestions a lot more fine-grained. Two customers are no longer similar simply because they bought the same item but because they bought the same item in the same hour on a particular day of the month from the same city.

Basics of Context-Aware Recommender Systems

Before diving into CARS, it helps to understand the idea of context better. Context covers all the social, financial, psychological, and environmental factors that influence an interaction, such as a customer purchase or a product rating.

Such contextual information can be very complex and often domain-specific, covering diverse factors like:

  • Weather and news
  • User context built from user modeling, user profiles, and user preferences
  • Interactions of users with user interfaces
  • Activity data collected from smartphones
  • Data from ubiquitous computing devices like IoT sensors
  • Implicit feedback, such as abandoning a cart before checkout
  • Information collected by data mining useful third-party web services

A CARS uses knowledge discovery to uncover the semantic relationships between users, items, and context that influence a user’s interactions. Context awareness keeps your recommender systems adaptive to changes in the user’s environment.

In 2011, Gediminas Adomavicius and Alexander Tuzhilin proposed three paradigms for context awareness in their handbook chapter on context-aware recommender systems: pre-filtering, post-filtering, and context modeling. The two filtering approaches inject context awareness either before or after a conventional recommender system has produced its results, treating context as a coarse filter rather than modeling its influence directly. For that reason, we’ll focus only on context modeling.

In contextual modeling, the core recommendation algorithm, such as collaborative filtering, is made context aware using a multidimensional approach: contextual features are added to the normally two-dimensional user-item data. We’ll explore these algorithms in detail next.
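
To make that representation concrete, here’s a purely illustrative Python sketch: each interaction row carries not just a user, an item, and a rating, but also contextual columns. The specific features shown (hour of day, weekend flag, device) are hypothetical examples, not a prescribed schema.

```python
import numpy as np

# Each row is one interaction: the 2-D user-item view gains context columns.
interactions = np.array([
    # user_id, item_id, hour, is_weekend, device (0=web, 1=app), rating
    [0, 10, 21, 1, 1, 5.0],
    [0, 11,  9, 0, 0, 3.0],
    [1, 10, 22, 1, 1, 4.5],
])
users, items = interactions[:, 0], interactions[:, 1]
context = interactions[:, 2:5]  # contextual features per interaction
ratings = interactions[:, 5]
```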

How Deep Neural Networks Improve Traditional Recommender Systems

It helps to first understand the motivation behind using deep neural networks (DNNs) for recommender systems before exploring specific models.

Recommender systems use different approaches like collaborative filtering that recommends based on user similarities, or content-based filtering that recommends based on item similarities. Hybrid recommender systems combine multiple approaches.

Matrix factorization of a user-item matrix into user and item latent matrices

An algorithm common to these recommendation techniques is matrix factorization (MF). The data you have is a user-item matrix derived from interactions captured in your business databases. Given such a matrix, you want to derive the per-user and per-item latent weights that, when multiplied together, reproduce that matrix. Factorizing the matrix is one of the easiest ways to get them.

Once you know those per-user and per-item weights, you can determine both user similarities and item similarities using clustering or Bayesian networks, and predict the rating that a new user is likely to give to an item.
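
Here’s a minimal matrix factorization sketch in Python, using plain gradient descent on a toy rating matrix. It’s illustrative only; production systems rely on optimized libraries, regularization schedules, and sparse data structures.

```python
import numpy as np

# Toy user-item rating matrix; 0 marks a missing rating.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0                    # fit only the observed entries
n_users, n_items, k = R.shape[0], R.shape[1], 2

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))  # per-user latent weights
V = rng.normal(scale=0.1, size=(n_items, k))  # per-item latent weights

for _ in range(2000):           # gradient descent on squared error
    E = mask * (R - U @ V.T)    # error on observed ratings only
    U += 0.01 * (E @ V - 0.02 * U)
    V += 0.01 * (E.T @ U - 0.02 * V)

print(np.round(U @ V.T, 1))     # predictions, including the missing cells
```

The filled-in cells of `U @ V.T` are the predicted ratings for user-item pairs that were never observed.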

However, MF’s simplicity also limits its predictive capability. Matrix factorization is a simple linear operation, so it runs well even on modest hardware but can only capture linear relationships. State-of-the-art DNNs running on modern hardware can model the complex real-world relationships between users and items much better using non-linearities.

Latent Context Modeling Using LSTMs and Autoencoders

In 2019, Livne et al. proposed context awareness using deep neural networks to learn the complex relationships between users, items, and contextual factors. They used DNNs like long short-term memory (LSTM) and autoencoders for latent representation learning of the context.

The extracted latent context vectors were then integrated into the Neural Collaborative Filtering (NCF) architecture that used a DNN for collaborative filtering using more expressive non-linear functions in place of linear matrix factorization.

Let’s understand what latent context modeling means and how these models advanced the state of the art in CARS.                   

Motivation for Latent Context Modeling

As you saw earlier, context can be complex, multi-faceted, and domain-specific. While your domain experts can guess some obvious and logical factors, the real world can be far more complex than they ever imagined. Can a music expert predict the influence of local weather, local news, or political beliefs on your customer’s music choices? These are the kind of real-world latent influences that deep neural networks can discover and model.

Because you don’t know all the factors that influence your customer’s decisions, the best approach is to avoid manual feature selection altogether. Just throw in every measurable factor you can think of — social, financial, political, environmental, social media sentiments, location, weather, local news, or smartphone sensor readings. Your deep neural networks can then figure out their individual and combined influences on your customers’ choices.

But such a hodgepodge of diverse features can easily number in the thousands. Even on modern hardware, this increased dimensionality and sparsity can cripple most recommender algorithms in production environments.

Latent context modeling tackles both dimensionality and sparsity by expressing the same information in much shorter, denser vectors. The next logical question is: Which neural network is good at this kind of information compression? The study compared context-aware recommender systems that used autoencoders and LSTMs for this task.

Autoencoder Deep Neural Networks for Latent Context Modeling

NCF DNN with autoencoder for latent context modeling (Source: Livne et al.)

An autoencoder is a DNN whose expected output vector should have the same values and length as its input vector. But having significantly fewer neurons in its hidden layers forces it to learn to express the same input information using shorter vectors. You can use these shorter vectors instead of the high-dimensional input data because they contain the same information and can also be processed efficiently. Such an approach is an example of representation learning for dimensionality reduction.
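
Here’s a minimal PyTorch sketch of such an autoencoder for context compression. The layer sizes and the assumed 247-dimensional input are illustrative choices, not the paper’s exact architecture.

```python
import torch
import torch.nn as nn

class ContextAutoencoder(nn.Module):
    def __init__(self, n_context=247, n_latent=16):
        super().__init__()
        # The bottleneck (n_latent) is far smaller than the input width.
        self.encoder = nn.Sequential(nn.Linear(n_context, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_context))

    def forward(self, x):
        z = self.encoder(x)         # short latent context vector
        return self.decoder(z), z   # reconstruction and latent code

model = ContextAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 247)            # a batch of raw context vectors
for _ in range(100):
    reconstruction, z = model(x)
    loss = nn.functional.mse_loss(reconstruction, x)  # output must match input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# After training, model.encoder(x) yields the compressed context vectors.
```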

When used for latent context modeling, the autoencoder reduces the high-dimensional raw context features to shorter vectors. However, it doesn’t treat context as a first-class dynamic entity whose long-term changes and influences should be accurately modeled. Instead, it looks at just the current context and treats it like any other information that should be compressed. You can think of it as more of an optimization approach. The paper calls this model the Latent Neural Context Model (LNCM).

These latent context vectors are integrated into the NCF model by combining them with user and item vectors passed to its first multi-layer perceptron (MLP) layer.
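
A minimal sketch of that integration step, with assumed embedding and context sizes (this mirrors the general NCF idea, not the authors’ exact layer configuration):

```python
import torch
import torch.nn as nn

class ContextAwareNCF(nn.Module):
    def __init__(self, n_users, n_items, emb_dim=16, ctx_dim=16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.item_emb = nn.Embedding(n_items, emb_dim)
        # The first MLP layer receives user, item, and context together.
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim + ctx_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, user_ids, item_ids, latent_context):
        x = torch.cat([self.user_emb(user_ids),
                       self.item_emb(item_ids),
                       latent_context], dim=-1)
        return self.mlp(x).squeeze(-1)   # predicted rating or score

model = ContextAwareNCF(n_users=1000, n_items=500)
score = model(torch.tensor([3]), torch.tensor([42]), torch.randn(1, 16))
```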

LSTM Deep Neural Networks for Sequential Latent Context Modeling

NCF DNN with LSTM for sequential latent context modeling (Source: Livne et al.)

An LSTM is good at predicting sequences because it assumes the current value of a feature depends on all its previous values. Real-world phenomena like smartphone sensor readings and weather changes work exactly like that.

The paper proposes an LSTM as an alternative to an autoencoder and calls it the Sequential Latent Context Model (SLCM). It’s an LSTM encoder-decoder sequence-to-sequence (or seq2seq) model that predicts the next context values based on their previous values. Once trained to predict the context accurately, you can use the encoder block’s final outputs as reliable context vectors that contain all the same information as the raw context data but in a compressed efficient form. Moreover, these vectors will reflect the true nature of context better than the ones from an autoencoder.
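
Here’s one way such an encoder-decoder could look in PyTorch. This is a hedged sketch of the idea, not the authors’ implementation: the decoder consumes the ground-truth future sequence for simplicity (real seq2seq training usually shifts the decoder inputs), and the encoder’s final hidden state doubles as the latent context vector.

```python
import torch
import torch.nn as nn

class SequentialContextModel(nn.Module):
    def __init__(self, ctx_dim=247, hidden=16):
        super().__init__()
        self.encoder = nn.LSTM(ctx_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(ctx_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, ctx_dim)

    def forward(self, past, future):
        _, state = self.encoder(past)      # state summarizes the history
        dec_out, _ = self.decoder(future, state)
        return self.out(dec_out)           # predicted next context values

    def latent_context(self, past):
        _, (h, _) = self.encoder(past)
        return h[-1]                       # compressed sequential context

model = SequentialContextModel()
past = torch.randn(8, 10, 247)    # 8 users, 10 past context readings each
future = torch.randn(8, 5, 247)   # the next 5 readings to predict
loss = nn.functional.mse_loss(model(past, future), future)
```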

These sequential latent context vectors are combined with user and item vectors passed to the NCF model’s first MLP layer.

Datasets

The study compared recommender systems on two datasets — CARS and Yelp.

The CARS dataset is a private dataset of 21,397 user ratings for points of interest like restaurants and cafés. It contains as many as 247 contextual features collected from smartphone sensors and other data services, grouped into the following contextual groups:

  • Environmental context: Factors include date and time, two temperatures from a weather service, ambient light levels, and ambient audio noise levels.
  • User activity context: Raw readings from the accelerometer, orientation, magnetic field, gravity, GPS location, and activity recognition service are aggregated over short time windows and converted to features.
  • Mobile state context: Features like the screen state, battery level, battery temperature, charging status, and ringer mode are included.
  • Behavioral context: User actions like calling or texting, the set of running applications, and network traffic statistics are included in this group.

The Yelp dataset is a public dataset of points of interest published by Yelp. Several contextual features are derived from the raw data: year, month, day of the week, city, holiday or weekend indicators, season, location, and more.

Comparisons and Results

The study proposed three context-aware DNN models and compared them against four baseline models. The three context-aware DNN models were:

  • Explicit neural context model (ENCM): Raw context data is passed directly to the NCF.
  • Latent neural context model (LNCM): Latent context data, extracted using an autoencoder, is passed to the NCF.
  • Sequential latent context model (SLCM): Sequential latent context data, extracted using an LSTM encoder-decoder, is passed to the NCF.

The four baseline models were a mix of traditional and neural collaborative filtering models with some of the traditional ones being context aware:

  • Traditional matrix factorization (MF): A model that uses traditional collaborative filtering using matrix factorization for the recommendation process.
  • Explicit context model (ECM): A model that extends traditional matrix factorization with explicit contextual features.
  • Latent context model (LCM): A model that extends traditional matrix factorization with latent context extracted using an autoencoder.
  • Neural matrix factorization (NCF or NeuMF): A non-context-aware collaborative filtering DNN that models user-item relationships using non-linear functions instead of matrix factorization.

These seven models were compared on these metrics:

  • RMSE: The root mean square error between actual and predicted ratings.
  • MAE: The mean absolute error between actual and predicted ratings.
  • Hit@k: The accuracy of the top-k recommendations, where k is 3, 5, or 10. A hit means an item rated positively by a user appears in their top-k recommendations (all three metrics are sketched below).
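
All three metrics are straightforward to compute. Here’s a small illustrative sketch with helper functions of our own, not code from the paper:

```python
import numpy as np

def rmse(actual, predicted):
    a, p = np.asarray(actual), np.asarray(predicted)
    return float(np.sqrt(np.mean((a - p) ** 2)))

def mae(actual, predicted):
    a, p = np.asarray(actual), np.asarray(predicted)
    return float(np.mean(np.abs(a - p)))

def hit_at_k(ranked_items, positive_item, k):
    # 1 if an item the user rated positively appears in the top-k list.
    return int(positive_item in ranked_items[:k])

print(rmse([4, 3, 5], [3.5, 3.2, 4.8]))   # ~0.33
print(mae([4, 3, 5], [3.5, 3.2, 4.8]))    # ~0.30
print(hit_at_k(ranked_items=[17, 4, 42, 8], positive_item=42, k=3))  # 1
```
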
Comparison of latent context modeling DNN models (Source: Livne et al.)

The results show all three context-aware DNN models outperforming the four baseline models on all metrics. Among them, the sequential latent context model has impressive top-k hits along with low error numbers. The study establishes the effectiveness of DNN context-aware recommender systems.

Improved Context Modeling Using Genetic Algorithms

In 2020, the same team of Livne et al. proposed better context modeling approaches to improve explainability, privacy, trust, and data collection efficiency — the kind of improvements that can elevate your customers’ opinions about your business.

Motivation for Improved Context Modeling

While latent context modeling improves the dimensionality of data and the quality of recommendations, it has these disadvantages:

  • A latent compressed representation of context severely reduces the explainability of your recommendations. You sometimes want to be able to tell your customer why your service is recommending a particular song or product. Perhaps it’s based on their current activity or the weather outside. Latent context can’t help you there.
  • Using all the sensors on your customers’ smartphones to build context can quickly drain battery levels, and along with them, the patience of your customers. 
  • Being tracked constantly can increase their anxiety over privacy and undermine their trust in your business.

To address all these, we need to improve both context modeling and data engineering.

Genetic Algorithms for Context Modeling

Genetic algorithms (GA) are a neat solution to overcome all these disadvantages because they reduce dimensionality while retaining explainability and efficiency.

In computer science, a genetic algorithm is an optimization search method. It iteratively looks for the most optimized option in a set of possible solutions, using biology-inspired concepts like mutation, fitness, and crossover.

In context-aware recommendation methods, the context features are selected using a genetic algorithm as follows:

  • The raw input data is the set of all user-item interactions and the context values at the time of the interaction.
  • For each interaction, a binary array called its “genome” is created. Its length is the total number of context features. Each bit indicates the presence or absence of the respective feature value in that interaction. For example, if the weather wasn’t recorded, the bit for the weather column is 0.
  • Stated differently, the genome is a binary mask over the list of contextual features: a context feature corresponding to a zero bit is not used when training the neural network. Feature dimensionality is reduced this way.
  • The genetic algorithm’s goal is to search for that genome — the binary array that tells you which features to retain and which ones to discard — that optimizes a fitness function.
  • Each step in this search, called a generation, produces a new population of candidate genomes, each telling you which features to use and which ones to discard.
  • Each generation may involve a mutation (with a specified mutation probability) where a random bit in the genome gets flipped. Each generation may also involve crossovers, or recombinations, that simulate biological reproduction by combining two parent genomes to create a child genome (see the sketch after this list).
  • The obvious fitness function is one based on the metrics of a collaborative filtering DNN that’s trained on the features switched on by a genome.
  • However, such a naive approach, training an entire DNN in each generation, is computationally intractable.
  • Don’t forget that many aspects of the GA — like the population size, number of generations, crossover probability, and mutation probability — are themselves parameters that you should choose through experiments.
  • So a three-stage naive approach that involves repeatedly setting the GA’s parameters, searching for an optimum genome with those parameters, and training the DNN for every combination of GA parameters and genome is pretty impractical.
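
Here’s a compact, self-contained sketch of this feature-selection loop. The fitness function is a toy stand-in (in the paper, fitness derives from a recommender model’s metrics), and the population size and probabilities are arbitrary illustrations.

```python
import random

random.seed(0)
N_FEATURES, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):
    # Toy stand-in: rewards a few "useful" features, penalizes genome size.
    useful = {1, 4, 7, 11, 15}
    return sum(1.0 for i in useful if genome[i]) - 0.05 * sum(genome)

def mutate(genome, p=0.05):
    return [bit ^ 1 if random.random() < p else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point recombination
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(N_FEATURES)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("features to keep:", [i for i, bit in enumerate(best) if bit])
```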

The authors ingeniously addressed this intractability problem using a second neural network that we’ll understand in the next section.

But why do all this? Because it has several benefits:

  • It improves the explainability of recommendations to your customers. The best genome tells you which raw context features are used by your recommender DNN. You can explain your results based on that.
  • It provides knobs to improve efficiency and privacy during context collection. The authors propose certain fitness functions that penalize the collection of battery-draining, privacy-destroying data like GPS locations. By using them, you can demonstrate your concern for the privacy and convenience of your customers, increasing their trust in your business.
  • It improves the overall performance of the recommender system. A genetic algorithm runs quickly and efficiently compared to deep models like LSTMs.

Machine Learning and Deep Learning in Improved Context Modeling

Boosted ensemble of multiple models (Source: Livne et al.)

The paper uses three different machine learning and deep learning models:

  1. The primary model is of course the neural collaborative filtering (NCF) DNN.
  2. A secondary small neural network addresses the problem of evaluating the fitness function efficiently without training an entire NCF DNN each time. It does so by learning to predict the area under the ROC curve (AUC) metric, given a context vector and its genome mask. Its small number of layers and neurons means it calculates the value within seconds, enabling a large number of candidate genomes to be evaluated (a minimal sketch follows this list).
  3. A third model is a boosted ensemble of multiple neural collaborative filtering DNNs, where each DNN is trained on a different set of switched-on context features. In machine learning, a boosted ensemble combines a large number of weak models using a process called boosting so that the overall combination acts as a strong model.
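
A minimal sketch of the second model’s idea, with assumed dimensions and layer sizes throughout (the 480-feature width matches the CARS dataset described below; everything else is illustrative):

```python
import torch
import torch.nn as nn

class FitnessPredictor(nn.Module):
    """Small surrogate network: predicts the AUC an NCF would achieve,
    given a genome mask and a context vector, in moments instead of a
    full training run."""
    def __init__(self, n_features=480):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_features, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),   # AUC lies in [0, 1]
        )

    def forward(self, genome_mask, context):
        return self.net(torch.cat([genome_mask, context], dim=-1)).squeeze(-1)

predictor = FitnessPredictor()
genome = torch.randint(0, 2, (1, 480)).float()  # a candidate feature mask
context = torch.randn(1, 480)                   # a raw context vector
estimated_auc = predictor(genome, context)      # near-instant fitness estimate
```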

Datasets

The study evaluated this approach on two datasets — CARS and Hearo. Both are private datasets with ratings for various points of interest (POIs). The CARS dataset is the same one described previously but used as many as 480 contextual features this time.

The Hearo dataset provides recommendations of POIs and users’ binary ratings on those recommendations. The ratings come from 77 users, and the dataset has 661 contextual features.

Comparisons and Results

The study compared this GA approach against nine other models, including the three latent context DNN models we studied earlier.

Comparison of genetic algorithm feature selection with other deep models (Source: Livne et al.)

They used two metrics for evaluation — area under ROC curve (AUC) and log loss. On both, the GA approach outperformed all the other models.

What’s Next for Context-Aware Recommender Systems?

Context awareness has been a game-changer for e-commerce recommender systems, thanks to its uncanny ability to provide highly personalized recommendations by understanding your customers’ minds. Given that transformer networks are raising the bar in other information systems, we can anticipate that future research on e-commerce recommender systems is headed that way too.

An e-commerce CARS of the future is likely to be even more context-aware. Family members of your customer and their interests can influence your product recommendations as much as they influence your customer’s purchase decisions. 

Data from your customer’s banks or credit card companies — all acquired with their permission — can help your CARS tailor its product suggestions to your customer’s current financial situation and price sensitivity. The possibilities are limitless.

Contact us to equip your marketing and sales departments with context-aware recommender systems that personalize your products and services for each of your customers.

Citations

  • Amit Livne, Moshe Unger, Bracha Shapira, Lior Rokach (2019). “Deep Context-Aware Recommender System Utilizing Sequential Latent Context”. arXiv:1909.03999 [cs.LG]. https://arxiv.org/abs/1909.03999
  • Amit Livne, Eliad Shem Tov, Adir Solomon, Achiya Elyasaf, Bracha Shapira, Lior Rokach (2020). “Evolving Context-Aware Recommender Systems With Users in Mind”. arXiv:2007.15409 [cs.LG]. https://arxiv.org/abs/2007.15409
  • Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, Tat-Seng Chua (2017). “Neural Collaborative Filtering”. arXiv:1708.05031 [cs.IR]. https://arxiv.org/abs/1708.05031
  • Gediminas Adomavicius, Alexander Tuzhilin (2011). “Context-Aware Recommender Systems”. In: Ricci F., Rokach L., Shapira B., Kantor P. (eds) Recommender Systems Handbook. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-85820-3_7
  • Moshe Unger, Bracha Shapira, Lior Rokach, Amit Livne (2018). “Inferring contextual preferences using deep encoder-decoder learners”. New Review of Hypermedia and Multimedia. https://doi.org/10.1080/13614568.2018.1524934