How to Deploy Custom WordPress Chatbots for Happier Customers

Matt Payne · March 5, 2024

Powering around 40% of the world's websites, WordPress is one of the most popular and easy-to-use content management systems (CMS). Its flexibility and capabilities enable businesses to build entire e-commerce ecosystems like WooCommerce and customer helpdesk services like Awesome Support on top of it.

You can further improve its capabilities by deploying AI-powered custom chatbots that can answer complex customer and employee queries using natural language processing (NLP). In this article, find out about WordPress's capabilities and learn to integrate smart chatbots with it.

WordPress Ecosystem and Capabilities

WordPress is a general-purpose CMS with an extensive ecosystem of plugins and customization options that enable you to transform it into any type of website or web application you desire.

In the following sections, we look at three types of WordPress websites that can be enhanced with custom AI-powered chatbots:

  1. E-commerce websites
  2. Customer helpdesk websites
  3. Websites of small businesses like primary care clinics and legal firms

WordPress E-Commerce

WooCommerce is hands-down the most popular e-commerce add-on for WordPress. When you activate WooCommerce, your website is turned into a typical online shop like this:

You get a ready-made storefront with drag-and-drop user interface widgets for product search, shopping cart, checkout, and chat, plus payment gateway integration out of the box.

You can add new products either manually via the WordPress dashboard or using WordPress's application programming interfaces (APIs). The screenshot below is the product management page in the administration dashboard:

WooCommerce provides a second development platform layered on top of WordPress. It enables an extensive ecosystem of WooCommerce-specific free and commercial add-ons for essential features like customer service chat:

Businesses interested in improving metrics like customer loyalty and lifetime value can enable advanced add-ons like customer relationship management:

Overall, WordPress and WooCommerce provide a compelling solution for small and medium businesses to run online shops that sell physical products, digital products, or services.

WordPress Customer Helpdesk

If you're a business owner looking to set up a customer relationship management (CRM) system or a customer support portal for troubleshooting problems, WordPress has you covered.

Add-ons like Awesome Support provide basic customer ticketing for your support teams and website chatbot features for your customers as shown below:

Awesome Support helpdesk (Source: Awesome Support)

WordPress Small Business Websites

What if you have an existing brick-and-mortar business and are trying to establish an online presence, whether for lead generation, for converting website visitors into potential customers, or for streamlining your business? WordPress provides options in this case as well.

For health care clinics, for example, WordPress sites can use add-ons that manage clinics, patients, and electronic health records.

Next, we'll explore current artificial intelligence (AI) integrations of WordPress.

WordPress AI Integrations and Their Drawbacks

As you saw in the screenshots above, many plugins already have different levels of chatbot integrations. In addition, there are standalone WordPress chatbot plugins that integrate with ChatGPT or other large language model (LLM) services via their APIs:

Plus, most WordPress plugins are open-source, which means you can examine them and customize them to your business needs.

Despite these pros, there are some lingering cons across the ecosystem that justify the development of custom WordPress chatbots:

  • Lack of advanced prompting and LLM workflows: Most of the existing AI add-ons are thin wrappers around LLM APIs and offer only basic LLM capabilities. The built-in models either lack, or don't expose, advanced techniques like retrieval augmented generation, prompting techniques like PEARL, agents, function calling, and multimodal capabilities that combine language and image generation. These techniques are critical for moving chatbots to production and getting the most out of the available options. Control over prompting is required to build chatbots that help both your customers and employees get or give better information faster, yet these advanced features are generally missing from the WordPress ecosystem.
  • Limited customizability: You'll eventually run into new customer use cases that require special workflows, particular features, and tailored results from your chatbots. However, these managed, built-in chatbots just aren't sufficiently customizable to implement them easily.
  • Limited language support: Many current built-in LLMs are English-only chatbots. If your business has non-English-speaking customers or employees, you'll need custom chatbots.
  • Inconsistent user experiences across channels: People expect consistent customer experiences across all your channels, from website chat to social media channels like Facebook Messenger and WhatsApp. They also expect consistent behavior across commonly used devices like iOS and Android smartphones. However, available plugins are rather lacking in these consistency aspects.
  • PHP doesn't have good AI frameworks: Another drawback is that PHP — the language that WordPress and all its plugins are developed in — doesn't have a mature AI ecosystem like Python or Node.js do. If you want to implement advanced AI features or optimize AI operations in the backend, you typically can't stick with PHP and have to implement those features as an external service using Python or Node.js.
  • Plugins can be quite opinionated: WordPress's architecture encourages tight coupling of the user interface and the backend. So most plugins are quite opinionated about their user interfaces, user experiences, and backend designs. That means even if their source code is available, customizing it to your needs may often require more effort than simply developing your own chatbot from scratch.

A key architectural challenge is that as you look to improve your chatbot's usefulness by deeply integrating it with your business tools and data sources, the “managed service” architecture of prebuilt chatbots makes this incredibly complex. In some instances, license or API restrictions make it impossible to incorporate specific tools you might need to get the most out of the chatbot. Why limit what your chatbot can do and what you can use it for?

Width.ai Chatbot Framework for WordPress

In this section, we explain our field-tested framework and architecture for integrating custom large language model (LLM) chatbots with WordPress.

Our architecture can handle a variety of complex integrations that enable chatbots to include real-time insights from your business systems, as well as a variety of internal and external data sources, when answering queries. This approach isn't limited by WordPress's design or API restrictions. Instead, we control how we interact with WordPress and take full advantage of all the data and structure inside it.

Our framework and chatbots support the following capabilities out of the box:

  • Using reasoning and dynamic data fetching, answer complex business queries from your customers as well as internal stakeholders like sales executives and customer service agents.
  • Answer queries based on your customer-facing and internal static documents, such as knowledge bases and standard operating procedures (SOPs). This is achieved with retrieval augmented generation (RAG).
  • Include real-time dynamic data like product prices and availability in the answers. Such data are fetched from your enterprise resource planning (ERP) and other business systems.
  • Provide a workflow for adding or modifying the static content used for RAG.

How Our Framework Works and the Custom Architecture Under Its Hood

Our chatbot framework is a versatile system that combines:

  • Powerful prompting frameworks
  • Connections to external tools
  • Direct connections to WordPress APIs
  • Built-in feedback loops to improve the chatbot

We deploy this system in cloud services that provide access to the WordPress API to easily integrate into the backend.

The architecture that enables all of this is shown below:

Width.ai WordPress chatbot architecture

In the following sections, we go into more detail on the components shown above.

Intent Classification

Intent classification enables us to reuse the same chatbot for multiple WordPress workflows. We can provide the models with different instructions and goal state outputs based on what operations and data sources are being used. This greatly improves the chatbot's ability and accuracy: we can use task-specific prompts instead of generic ones, and we don't have to build a new chatbot architecture for each use case.

This is one of the most overlooked parts of successful chatbots. The amount of customization we can provide to each chatbot greatly improves the results of the system. We can access specific data sources, APIs, and knowledge each time the chatbot is used. This isn’t easily available in “managed service” chatbots.
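As a small illustration, here's a minimal sketch of LLM-based intent routing using the OpenAI Python client. The intent names and the fallback choice are hypothetical, not our production values:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

INTENTS = ["product_search", "order_status", "inventory_check", "support_ticket"]

def classify_intent(user_query: str) -> str:
    # Ask the LLM to route the query to one of our WordPress workflows.
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the user's query into exactly one of these intents: "
                        + ", ".join(INTENTS) + ". Reply with the intent name only."},
            {"role": "user", "content": user_query},
        ],
    )
    intent = response.choices[0].message.content.strip()
    return intent if intent in INTENTS else "product_search"  # safe default

Each classified intent then selects its own task-specific prompt, tools, and data sources downstream.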

Our Advanced Prompting Framework for Reasoning, Problem-Solving, Response Generation, and Conversation Management

The prompting framework used with the LLM is the key driver behind the chatbot's quality of responses. Our production chatbots are not basic "send a prompt and get a response" type of chatbots like ChatGPT.

Instead, the framework uses multiple complex prompts and reasoning steps to guide the conversation toward a response that accurately and comprehensively aligns with the user's intent. Using features like custom tools and actions, the framework actively manages the context provided to the LLM at every conversation step by deciding what data from which data sources is relevant, fetching it, and supplying it to the LLM. Our prompting framework includes enhancements around proven reason-and-act prompting techniques like ReAct.

Example workflow for a mortgage chatbot

This is a key benefit of building custom chatbots with a custom architecture that you have full control over. We can adjust how the LLM is used and what the process looks like each time a user query is provided.

In the above workflow, customization allows us to dynamically decide, at runtime, which WordPress APIs and knowledge bases to consult based on the user's query. As we’ll show later, we have a new level of flexibility for what we can integrate into this chatbot.

External API Services and Knowledge Base Access

In this section, we explain how we integrate chatbot-specific knowledge bases and external APIs.

Our prompting framework provides hooks to specify the APIs and external tools we want to fetch data from. This data is inserted into the LLM's chat context using function-calling that connects to the WordPress API and other external APIs for dynamic information. It also fetches static information from your knowledge bases, product documents, manuals, reviews, and other static sources that are stored in a vector database.

The idea is to use APIs to interact with data that changes often, such as customer relationship data, inventory levels, or website analytics. Data must be vectorized before our system can use it, and constantly re-vectorizing fast-changing data is wasteful, so we simply fetch it on demand through these APIs.

Our framework allows the chatbot to integrate with your other business systems like your enterprise resource planning (ERP) system, Salesforce CRM, Zendesk, Zoho, and others.

Data that doesn't change often, like product data, sales data, documents, or marketing campaign lists, is stored in a vector DB that is connected to the underlying WordPress database. We set up an automatic system to regularly fetch the data from WordPress and reindex it in the vector DB. We're huge fans of Qdrant for our vector DB and have scaled it to 10 million records in a product database for search.
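As a rough sketch of what that sync can look like, assuming the WooCommerce REST API, the qdrant-client and sentence-transformers packages, and example credentials and collection names:

import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings
qdrant = QdrantClient(url="http://localhost:6333")
qdrant.recreate_collection(  # (re)create the product collection
    collection_name="products",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

# Pull the product catalog through the WooCommerce REST API.
resp = requests.get(
    "https://shop.example.com/wp-json/wc/v3/products",
    auth=("ck_example_key", "cs_example_secret"),
    params={"per_page": 100},
)

points = [
    PointStruct(
        id=product["id"],
        vector=encoder.encode(f"{product['name']}. {product['description']}").tolist(),
        payload={"name": product["name"], "price": product["price"]},
    )
    for product in resp.json()
]
qdrant.upsert(collection_name="products", points=points)

In production this runs on a schedule, so the vector DB stays a fresh mirror of the WordPress data without blocking live conversations.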

We build these various options into our prompting logic so the LLM knows that all this dynamic and static information is available to it when deciding what context to pull into the prompt for a given query.

Note: Using vector DBs or generated API queries does not prevent us from using complex filtering criteria when accessing WordPress data. We’ve built this into these conversational systems a number of times and it works perfectly. We define the schema we need to take advantage of the specific filters and include it in the search knowledge. We’ve used this to search on specific customers or client IDs, access specific documents, limit access to documents, and a bunch of other operations.
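For example, a filtered semantic search against Qdrant might look like this sketch; the collection name, field name, and customer ID are hypothetical:

from qdrant_client import QdrantClient
from qdrant_client.models import FieldCondition, Filter, MatchValue
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
qdrant = QdrantClient(url="http://localhost:6333")

# Semantic search restricted to a single customer's documents.
hits = qdrant.search(
    collection_name="documents",
    query_vector=encoder.encode("warranty terms for the X200 analyzer").tolist(),
    query_filter=Filter(
        must=[FieldCondition(key="customer_id", match=MatchValue(value=4217))]
    ),
    limit=5,
)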

Use of Embedded Vectors

In our system, we employ embedded vectors at two critical junctures: during the initial data processing phase and at the moment of query execution.

The first use involves the static data that is uploaded to our vector database. As part of an offline procedure, we extract textual content from the uploaded documents, format it appropriately, and then convert it into embedded vectors. These vectors are stored in our database, ready to be accessed through our "Lookup()" function when needed.

The second use of embeddings occurs during live conversations when we need to retrieve relevant information from our vector database in response to user queries. To do this, we must convert the query, which is generated by our query formulation model, into an embedded vector in real time. This conversion allows us to match the query with the most relevant information in our database.

We currently have a product, which is in the private beta stage with select clients, designed to enhance the performance of our embedded vector generation model. The aim is to refine the model's ability to discern semantic similarities between the database content and user queries. This is particularly important in large vector databases where data can be densely packed and highly similar, making it challenging to retrieve the precise document needed. The improved model is being trained to better recognize the nuances in the way queries are phrased and how they relate to the segments of text within documents, thereby redefining the concept of "similarity" for more effective information retrieval.

LLMs With Reasoning and Problem-Solving Capabilities

The LLM powering our chatbots must be able to:

  • Reason about a query step by step
  • Find relevant static information in documents like knowledge bases, SOPs, frequently asked questions (FAQs), manuals, discussions, and datasheets
  • Fetch relevant real-time dynamic information from WordPress, ERP, and other business systems
  • Combine all that static and dynamic information into a coherent, accurate answer

We equip the GPT-4 LLM to do all this using the ReAct strategy described below.

ReAct Overview

ReAct is a prompting strategy to make an LLM plan on how to answer a complex question by combining reasoning with any necessary actions. In the example above, the LLM reasons that in order to answer a complex question comprehensively and accurately, it must break down the question into distinct actions that require interfacing with external systems.

The full list of available actions is domain-specific and supplied via few-shot examples to the LLM. It picks the actions that are most helpful for answering the question and acts on them.

Some few-shot examples for a question-answering task are shown below:

Few-shot examples for ReAct (Source: Yao et al.)

ReAct helps use cases like complex product search with multiple criteria:

  • It starts by executing a high-level product search.
  • It then uses reasoning to examine each product against the user's criteria.
  • It eliminates any products that don't match some of the criteria.
  • It verifies if the remaining products satisfy each of the other criteria.
  • It homes in on the product that best satisfies all the criteria.

Let's look at ReAct in the context of WordPress chatbots.
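Here's a minimal sketch of what such a reason-and-act loop can look like in code, assuming the OpenAI Python client. The prompt format, the search action, and its canned return value are illustrative stand-ins, not our production framework:

from openai import OpenAI

client = OpenAI()

def search(query: str) -> str:
    # Stand-in for the RAG lookup described in the next section.
    return "Demo Analyzer X200: 50-100 MHz, $949, 62 calibrated units in stock."

PROMPT = """Answer the question by interleaving Thought, Action, and Observation steps.
Available actions: search[query], finish[answer].
Question: {question}
{scratchpad}"""

def react(question: str, max_steps: int = 5) -> str:
    scratchpad = ""
    for _ in range(max_steps):
        resp = client.chat.completions.create(
            model="gpt-4",
            temperature=0,
            stop=["Observation:"],  # we execute the action ourselves
            messages=[{"role": "user",
                       "content": PROMPT.format(question=question, scratchpad=scratchpad)}],
        )
        step = resp.choices[0].message.content
        scratchpad += step
        if "finish[" in step:  # the model has enough information to answer
            return step.split("finish[", 1)[1].rsplit("]", 1)[0]
        if "search[" in step:  # run the requested action and feed back the result
            arg = step.split("search[", 1)[1].split("]", 1)[0]
            scratchpad += f"\nObservation: {search(arg)}\n"
    return "I couldn't find a confident answer."

The key design point is that the model never invents observations: generation stops at each action, our code executes it, and the real result is appended to the scratchpad before reasoning resumes.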

Document Search using Retrieval Augmented Generation (RAG)

For finding information relevant to a query in static documents — like FAQs, product descriptions, customer emails, or discussion forum posts — we implement RAG via a ReAct action named "search(query)." It does the following operations:

  • Chunking: Documents are split into chunks of text so that information can be retrieved in chunks. The splitting is done on meaningful semantic boundaries like headers, sections, posts, or comments.
  • Document embeddings: An embedding generator produces an embedding for each chunk. Our framework defaults to a SentenceTransformer model tuned for asymmetric search since we find that it works well for most use cases. However, the framework can also be configured to use OpenAI's Embeddings API or any alternative embedding API.
  • Vector database storage: The embeddings are stored in the configured vector database, like Qdrant, Weaviate, Pinecone, or others.
  • Semantic search: When a customer or internal query is received, the framework looks for chunks that are relevant to the query. It first converts the query to a query embedding using the embedding generator. It then searches the vector database for chunk embeddings that are nearest neighbors of the query embedding. Such chunks are likely to be the most relevant to the query and are provided to the framework for further processing.

The framework also supports adding new documents or updating existing ones. After such changes, the document embeddings are recalculated and updated in the vector database.
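A minimal sketch of that chunk-embed-store pipeline, again assuming Qdrant and a SentenceTransformer model; the header-based splitting rule is illustrative, and we assume the "documents" collection already exists:

import re
import uuid

from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
qdrant = QdrantClient(url="http://localhost:6333")

def index_document(doc_id: str, text: str) -> None:
    # Split on markdown-style headers so each chunk is a coherent section.
    chunks = [c.strip() for c in re.split(r"\n(?=#+ )", text) if c.strip()]
    points = [
        PointStruct(
            id=str(uuid.uuid4()),
            vector=encoder.encode(chunk).tolist(),
            payload={"doc_id": doc_id, "text": chunk},
        )
        for chunk in chunks
    ]
    qdrant.upsert(collection_name="documents", points=points)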

WordPress Dynamic Data Retrieval as ReAct Tools

Relevant static information is just one part of the final answer. The real value and productivity gains of our custom chatbots lie in their ability to weave in real-time dynamic information fetched from other business systems like WordPress or ERP systems.

For obtaining product and customer details from WordPress and WooCommerce, we implement the following dynamic data retrieval operations as ReAct actions (a sketch follows the list):

  • Customer details: Use the Customers API endpoint to retrieve information about your customers.
  • Product details: Call the Products API to query your product inventory.
  • WordPress data: Call the methods of the wpdb global object to fetch any data stored in the WordPress database.
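For illustration, the first two of these can be sketched as thin Python wrappers over the WooCommerce REST API; the store URL and keys are examples, and wpdb access happens on the PHP side rather than here:

import requests

BASE = "https://shop.example.com/wp-json/wc/v3"
AUTH = ("ck_example_key", "cs_example_secret")

def get_customer(customer_id: int) -> dict:
    # ReAct action: fetch one customer's details via the Customers endpoint.
    return requests.get(f"{BASE}/customers/{customer_id}", auth=AUTH).json()

def search_products(term: str) -> list:
    # ReAct action: query the product inventory via the Products endpoint.
    return requests.get(
        f"{BASE}/products", auth=AUTH, params={"search": term, "per_page": 10}
    ).json()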

Ability-Trained LLM

The idea behind ability-trained LLMs vs. knowledge-trained LLMs comes down to what the LLM's goal is in RAG. We want the model to understand and contextualize the information provided to it for generation, not pull information from its training data. These are two very different use cases with two very different prompting structures. Here's a better way to think about it.

In the ability-trained setup, all understanding of the task, and of the information available to perform it, comes from what is provided in the prompt. The model generates responses based only on this specific information and none of its underlying knowledge, which can lead to hallucinations. This generally requires some level of extraction by the model to understand what is relevant; even if you don't perform an explicit extraction step, the model has to do this internally when the context gets large.

By contrast, this is what it looks like when we rely on the LLM itself for the knowledge used to answer the query. Everything hinges on the prompt and the knowledge the model was trained on.

This means the key focus of the LLM in RAG is understanding the provided context and how it correlates with the query. For fine-tuning, the focus should therefore be on improving the LLM's ability to extract and understand provided context, not on expanding its knowledge. This is how we best improve RAG systems: by minimizing the data variance that causes hallucinations or poor responses. An LLM that better understands how to handle context from multiple sources and of varying sizes, which becomes more common as these systems move to production, lets us spend less time over-optimizing chunking algorithms and preprocessing to fit a specific data variance.
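As a small illustration, a context-grounded prompt of this kind might look like the following sketch; the wording and placeholders are illustrative, not our production prompt:

# A prompt template that exercises the model's ability to use supplied
# context instead of its trained-in knowledge.
GROUNDED_PROMPT = """You are a customer support assistant.
Answer the question using ONLY the context below.
If the context does not contain the answer, reply: "I don't have that information."

Context:
{retrieved_chunks}

Question: {question}
Answer:"""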

Implementing ReAct Tools With GPT-4

We can implement ReAct actions using GPT-4's function-calling capability. The advantage of the GPT-4 API is that it provides a much more structured way to specify actions than the plain-text prompting proposed in the original ReAct paper.

OpenAI's API enables you to specify actions and their information in great detail as shown below:

OpenAI function calling API example (Source: OpenAI)

GPT-4 uses that semantic information and metadata — the function description, parameter property descriptions, parameter units, and so on — to make informed decisions on when to invoke your actions based on your queries.

GPT-4 also enables the structured invocation of actions and transmission of action responses back to the LLM.
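Here's a minimal sketch of this using the OpenAI Python client; the search_products function and its parameters are hypothetical examples of a ReAct action description:

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "search_products",
        "description": "Search the WooCommerce catalog for matching products.",
        "parameters": {
            "type": "object",
            "properties": {
                "term": {"type": "string", "description": "Free-text search term."},
                "max_price": {"type": "number", "description": "Upper price bound in USD."},
            },
            "required": ["term"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Find spectrum analyzers under $1,000"}],
    tools=tools,
    tool_choice="auto",
)
tool_call = response.choices[0].message.tool_calls[0]  # structured invocation
print(tool_call.function.name, tool_call.function.arguments)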

LLM Answers to Queries

The framework combines all the retrieved document chunks and the relevant dynamic data into a coherent answer by appropriately prompting the LLM.

All that remains now is to get the user query via WordPress and show the answer to the customer or internal stakeholder via WordPress' front end. That's explained in the next step.

Chatbot Integration With WordPress

Unlike many managed platforms, WordPress has a very simple, flexible, and rather generic API because it's designed to be highly extensible and customizable.

WordPress has a core, a set of files that contain all the essential logic that makes WordPress tick. For example, when a user visits a page on your website, WordPress puts together the HTML of that page dynamically. It processes the request, decides which piece of content it must render, compiles all the frontend elements and backend data that are necessary to render that content, and finally sends back the HTML it generated on the fly in response.

When this core is executing its logic, it triggers well-defined extension points called hooks. Hooks act like notifications about what the core is currently doing or preparing to do.

Other components like plugins can subscribe to these hooks and execute their logic in response. For example, most plugins subscribe to the init hook and initialize themselves when the core invokes it after readying the essential page elements.

So a basic custom chatbot plugin typically implements the following at a minimum:

  1. Register with a hook to get notified when a page has loaded, and render the chatbot's user interface if it's the right page.
  2. Call the methods of the wpdb global object to extract data stored in the WordPress database. For example, product and customer details can be retrieved from there.
  3. Call the wp_remote_post function or other wp_remote functions to send prompts to the external LLM API and get back responses.
  4. Render the response on the plugin's HTML.

As you can see, integration with WordPress can be quite simple.
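To show the other side of that wp_remote_post call, here's a minimal sketch of the external Python backend endpoint it could hit (the plugin itself is PHP; as noted earlier, the AI logic typically lives outside WordPress). The /chat route and payload shape are hypothetical, and react() refers to the ReAct loop sketched earlier:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def chat():
    payload = request.get_json()      # e.g., {"query": "...", "session_id": "..."}
    answer = react(payload["query"])  # the ReAct loop from the earlier sketch
    return jsonify({"answer": answer})

if __name__ == "__main__":
    app.run(port=8000)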

Custom WordPress AI Chatbot Demo for B2B Inquiries

In this section, we demonstrate a common e-commerce scenario. Your customers may ask difficult questions about your products or attach demanding prerequisites to a purchase. Since you or your staff may not have all the required data and answers at hand, you promise to get back to them soon.

At this point, you risk losing a customer who may shop around for other vendors and go with someone who has all the answers at hand.

The problem here is that compiling the required data and answers can take time. But if you have a smart chatbot that's plugged into all your databases and live operational data, it can help you answer even complex questions, as demonstrated next.

The scenario is an electronics equipment retail business. A customer asks you this question: "I need 50 spectrum analyzers with 50-100 MHz capacity sub-$1,000 pricing ASAP, all calibrated and good to go."

To answer it accurately, you must have the following information ready:

  • Which of your spectrum analyzers satisfies the 50-100 MHz requirement?
  • How many do you have in your inventory and distribution centers?
  • How many of them have recent calibration certificates?
  • How many of them satisfy the user's pricing conditions?

That kind of information surely takes time and a few phone calls to piece together.

But with a smart chatbot that's already plugged into all those databases, you just have to ask it whatever the customer asks you:

Notice that the query is a complex one with several criteria: product quantity, product type, very specific product requirements, and real-time status.

Just embed the chatbot widget in your product and search landing pages or page templates with a shortcode.

Our chatbot architecture receives the user's query, applies our prompting framework to query GPT-4 as well as your internal product, inventory, and calibration databases it's plugged into, and comes up with an accurate real-time assessment within a few seconds:

Pulling together complex information from multiple sources and reasoning about it to provide an accurate, helpful answer gives your business a competitive advantage.

Custom WordPress AI Chatbot Demo for E-Commerce

Our second demo shows the value of our AI-powered chatbots for e-commerce stores based on WooCommerce or a similar framework.

Initial Product Search on User Query

When the user enters their first reply in your store's chatbot, the following steps run:

  • The query reaches the backend.
  • The backend processes the query through our LLM prompting pipeline.
  • Noticing that the query is looking for products, the LLM decides to invoke the appropriate product query action.

In the product query function callback, the backend queries the vector database for the nearest semantic match for the query.

When a customer or internal query is received, it's first converted to a query embedding using the same embedding generator that was used in the product import step.

It then searches the vector database for embeddings that are nearest neighbors of the query embedding.

The vector database returns a list of semantic matches, but they're often only loosely aligned with the user's shopping intent.

So we ask the LLM to reason about the matches and eliminate any product that doesn't match the criteria.

Reasoning About Product Matches Using the Buyer's Criteria

Our prompting framework now runs the reasoning step (an illustrative prompt follows the list):

  • For each product result, it verifies whether all the criteria match or whether there are any outright non-matches.
  • For abstract criteria, like "something stylish," nothing is typically eliminated unless multiple product reviews are emphatic that an item does not match those criteria ("not stylish" in this example).
  • But for more concrete criteria, like specifying the brand, any non-matches are immediately eliminated.
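An illustrative version of such an elimination prompt might look like this sketch; the wording and fields are hypothetical:

# Strict on concrete criteria, lenient on abstract ones.
ELIMINATION_PROMPT = """Buyer's criteria so far: {criteria}

Candidate products (with review snippets):
{candidates}

For each product, decide whether it clearly violates a concrete criterion
(brand, price, material, size). Eliminate those. Keep products that merely
lack evidence for abstract criteria like "stylish" unless multiple reviews
emphatically contradict them. Return a JSON list of product IDs to keep."""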

The result of this reasoning step is a shorter list of better product recommendations that satisfy all the criteria so far, as shown below:

Repeating the Search and Reasoning Steps

After each reply from the user, the product querying and reasoning steps are repeated. Each reply from the user is treated as an additional set of criteria by the reasoning step in order to eliminate any non-matches, as shown in the customer interactions below.

When the user specifies "winter-time" clothes as their criteria, the chatbot displays a range of winter wear and also guides the user toward what criteria to consider next; in this case, colors:

When the user types "black," the shortlisted products change to black items while all past criteria, like winter wear, stylish, and Ralph Lauren, are respected:

When the user adds an additional criterion of cotton apparel, the chatbot updates the results as shown below. It then guides the user to the next criterion to consider. In this case, it's a very abstract one: "vibe."

Traditional shopping user interfaces and chatbots can't really process criteria like "stylish" or "vibe." But LLMs, thanks to their powerful semantic capabilities, allow such abstract criteria and also provide relevant results for them as shown below:

Whenever the user modifies previous criteria or adds new ones, the list of products shown is updated as shown above and below.

Fetching Order and Inventory Details

Our prompting framework is designed to classify a query into product search queries, order status queries, inventory queries, or possibly some combination of them.

If the framework determines that order status information is required, it requests a custom action to query the WooCommerce Orders API and send back the information to the LLM.

Similarly, if the query requires current inventory details, the prompting framework requests a custom action to fetch the details from whichever WooCommerce inventory plugin you're using and send back the details to the LLM.

Custom WordPress Chatbots for Your Web Properties

In this article, we showed how to deploy custom AI-powered chatbots on the WordPress platform for e-commerce, helpdesk, and small business use cases. Using these chatbots, you can convert more visitors into potential customers, generate leads, and troubleshoot problems.

Contact us to discuss custom chatbots or other types of AI-powered workflow automation tuned to your business needs.

References

  • Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao (2022). "ReAct: Synergizing Reasoning and Acting in Language Models." arXiv:2210.03629 [cs.CL]. https://arxiv.org/abs/2210.03629