Width.ai

How to Build a Custom Salesforce Chatbot with our Powerful Framework

Matt Payne | Patrick Hennis
·
January 22, 2024
Chatbot workflow at a high level for Salesforce

Your customer account managers or customer service executives have probably fielded difficult customer queries that require in-depth knowledge of your products or production processes. If such knowledge isn't readily available at their fingertips, customers may form a negative image of your company and products.

Problems like these can be solved by deploying smart custom Salesforce chatbots that are aware of your customers' preferences. In this article, learn how to implement and deploy custom chatbots powered by large language models (LLMs) to help all the Salesforce users in your company.

Salesforce Chatbot Use Cases with our Framework

Salesforce provides a mind-bogglingly vast set of features for a large number of business functions and verticals. The business functions include sales, marketing, customer service, e-commerce, and data analytics. The verticals include financial services, manufacturing, and health care among others.

Salesforce calls each product suite, either targeting a business function or an industry, a “cloud.” There's a sales cloud, a marketing cloud, a health care cloud, and so on. However, they're far from being silos; they just prove convenient for organizing features and resources. The user experiences across all the clouds are consistent and interoperable.

Three common chatbot-related use cases in different Salesforce clouds are explained in the sections below.

1. B2B Product Procurement Assistant

Account managers in product companies often get inquiries from their customers' procurement counterparts. They may specify a complex list of product criteria and want to know prices and availability.

Having a chatbot on the account manager's sales cloud homepage that can answer such complex questions quickly and correctly not only improves productivity at both ends but also creates a positive image of the company.

2. Salesforce Customer Service Chatbot

The Salesforce service cloud helps to create a good customer experience. A chatbot that customer service employees can use to comprehensively answer difficult questions and resolve complex issues helps with customer retention and creates a positive image of the company.

3. Sales and Marketing Intelligence Chatbots

Salesforce's analytics and insights are some of its biggest draws for businesses. However, their user experiences aren't the greatest and require several manual steps.

Chatbots that can accept complex analytics queries and high-level questions about insights in natural language and answer them comprehensively, preferably with visual aids like charts, will be game changers in terms of productivity and user experience.

Salesforce's Built-In AI Capabilities

Salesforce supports several generative AI and LLM use cases in each cloud using its built-in AI and LLM platform, Einstein. These capabilities include:

  • Sales AI: These features include a buyer assistant, the generation of context-aware and relationship-aware sales emails, automatic summarization of customer calls, and extracting insights from conversations with customers.
  • Marketing AI: Its capabilities include content creation for different marketing channels and insights from customer messaging.
  • Customer service AI: The service cloud Einstein platform generates service replies, work summaries, and knowledge articles. It enables customer service chatbots as well as the bot application programming interface (API) and a custom bot builder.
  • Commerce AI: These features for e-commerce businesses include shopping assistance chatbots, generative page designer tools, generative product descriptions, and custom Einstein chatbots.

Why Build Custom Chatbots Instead

If Salesforce already has built-in generative AI and chatbots, why should you even consider custom self-hosted or third-party chatbots? Here are some of Salesforce AI's drawbacks that custom chatbots can solve:

  • Limited customizability: You'll eventually run into new customer use cases that require special workflows, particular features, and tailored results from your chatbots. However, these managed, built-in chatbots just aren't sufficiently customizable to implement them easily.
  • Subscription restrictions: Salesforce has several subscription plans, called editions, with different features. Unfortunately, most of the built-in AI capabilities are only available in the high-end enterprise and unlimited editions, and may incur additional subscription costs.
  • Prototyping restrictions: Some products, like the commerce cloud for e-commerce, aren't even available for trialing and require special licenses. Exploratory development work like prototypes and proofs of concept may be impossible for small businesses.
  • Limited language support: Salesforce's current built-in LLMs are restricted to English. If your business has non-English-speaking customers or employees, you'll need custom chatbots.
  • Lacking advanced LLM capabilities: The built-in models either lack, or don't expose, capabilities like advanced prompting techniques, agents, and function calling. These are critical for moving chatbots to production and getting the most out of the options available. Control over prompting techniques is required to build the best chatbots.

The key idea is that as you look to extend your chatbot and integrate it with other tools and data sources, the “managed service” architecture of prebuilt chatbots makes this incredibly complex. In some instances, license or API restrictions make it impossible to incorporate specific tools you might need to get the most out of the chatbot. Why limit what your chatbot can do and what you can use it for?

Now that you know the benefits of custom chatbots, let's explore the essential first step: fetching data from Salesforce.

Getting Data From Salesforce

Salesforce provides multiple APIs to access data depending on your application's requirements and restrictions. Here are a few of the important APIs that applications can use to fetch data:

  • Salesforce REST API: This is the primary API for data retrieval.
  • Salesforce GraphQL API: Unlike the REST API, the GraphQL API only gets the exact objects and fields you're interested in. Since Salesforce object models can be many levels deep, GraphQL helps reduce network and data processing loads.
  • Bulk API: Use the bulk API for fetching large batches of data or files.

When implementing custom chatbots, user queries are converted to one or more calls to these APIs to fetch the relevant data and files.
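
For illustration, here's a minimal sketch of pulling product records through the REST API, assuming the open-source simple-salesforce Python client and placeholder credentials:

    from simple_salesforce import Salesforce

    # Placeholder credentials; in production these come from a secrets manager.
    sf = Salesforce(
        username="user@example.com",
        password="password",
        security_token="token",
    )

    # SOQL query over the REST API: pull product records the chatbot can reason over.
    results = sf.query("SELECT Id, Name, ProductCode, Description FROM Product2 LIMIT 50")
    products = results["records"]

    # The chatbot pipeline formats these records into the LLM's context window.
    for product in products:
        print(product["Name"], product["ProductCode"])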

Width.ai Chatbot Framework for Salesforce

Width.ai chatbot framework for Salesforce

This first pipeline shows our chatbot framework for handling complex integrations and data sources. It lets us take an approach that isn't limited to anything inside Salesforce: we control how we interact with Salesforce and take advantage of all our data and structure inside it.

How it works and the custom structure that makes this viable

Our chatbot framework combines our favorite prompting framework, connections to external tools, a direct connection to the Salesforce API, and built-in feedback loops that improve the chatbot over time.

We deploy this system on cloud services that provide access to the Salesforce API, so it integrates easily with the backend.

Intent Classification

We use intent classification in our more complex chatbots so the same chatbot can serve different Salesforce workflows. This means we can give the models different instructions and goal-state outputs based on which operations and data sources are being used. It greatly improves the ability and accuracy of the chatbot: we can use task-specific prompts instead of generic ones, and we don't have to build a new chatbot architecture for each Salesforce use case. This is one of the most overlooked parts of successful chatbots. The amount of customization we can provide on each chatbot run greatly improves the results of the system. We can access specific data sources, APIs, and knowledge each time the chatbot is used, something that isn't easily available in “managed service” chatbots.
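
A minimal sketch of that routing step, assuming the OpenAI Python SDK; the intent labels and instructions here are illustrative placeholders, not our production prompts:

    from openai import OpenAI

    client = OpenAI()

    INTENTS = ["product_procurement", "customer_service", "sales_analytics"]  # placeholder labels

    # Task-specific instructions selected per intent; the wording is illustrative only.
    WORKFLOW_PROMPTS = {
        "product_procurement": "Help account managers check specs, pricing, and inventory...",
        "customer_service": "Help service agents resolve issues using case and order data...",
        "sales_analytics": "Answer analytics questions using Salesforce report data...",
    }

    def classify_intent(user_query: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Classify the Salesforce user query into exactly one of: "
                            + ", ".join(INTENTS) + ". Reply with the label only."},
                {"role": "user", "content": user_query},
            ],
        )
        label = response.choices[0].message.content.strip()
        return label if label in INTENTS else "customer_service"  # simple fallback

    intent = classify_intent("Do we have 500 oscilloscopes ready for dispatch this month?")
    system_prompt = WORKFLOW_PROMPTS[intent]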

Prompting Framework for response generation and conversation management

Example workflow for a Mortgage Chatbot

The prompting framework used with the LLM is the key driver behind the chatbot. Production chatbots are not just a single prompt that generates a response. These frameworks guide the conversation toward a final response that fits the user's intent and manage the context provided to the LLM by deciding which data sources are used at each conversation step. I highly recommend reading this article where we go much deeper into one of our prompting frameworks for these chatbots, ReAct. We also talk briefly about it in a later section.

This is again one of the key benefits of using a custom architecture that you have full control over. We can adjust how the LLM is used and what the process looks like each time a user query comes in. In the workflow above, this customization lets us vary which Salesforce APIs and knowledge bases are used based on the user query provided at runtime. As we'll see below, this gives us a new level of flexibility in what we can integrate into the chatbot and in how we manage those connections.

External API Services & Knowledge Base Access

With our prompting framework deciding which APIs and external tools to pull data from into the context window, we're ready to leverage that data. This is where we set up code functions that connect to the Salesforce API and to a vector DB that serves as a knowledge base.

The idea is that we use the API connection to interact with data that changes often, such as inventory levels or website analytics. Because data has to be vectorized before our system can use it, and constantly changing data would quickly go stale in an index, we might as well fetch it only when we need it. Data that doesn't change often, like product data, sales data, or marketing campaign lists, can be stored in a vector DB connected to the offline database in Salesforce. We set up an automatic system to reindex that data periodically and interact directly with the vector DB. We're huge fans of Qdrant for our DB and have scaled it up to 10 million records in a product database for search.
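
A rough sketch of that periodic reindexing job, assuming the Qdrant Python client and an OpenAI embedding model; the collection name and record fields are placeholders:

    from openai import OpenAI
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, VectorParams, PointStruct

    openai_client = OpenAI()
    qdrant = QdrantClient(url="http://localhost:6333")
    COLLECTION = "salesforce_products"  # placeholder collection name

    def reindex(records: list[dict]) -> None:
        # Rebuild the collection, then embed and upsert each Salesforce record.
        qdrant.recreate_collection(
            collection_name=COLLECTION,
            vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
        )
        texts = [f"{r['Name']}: {r.get('Description', '')}" for r in records]
        embeddings = openai_client.embeddings.create(model="text-embedding-ada-002", input=texts)
        points = [
            PointStruct(id=i, vector=item.embedding, payload=records[i])
            for i, item in enumerate(embeddings.data)
        ]
        qdrant.upsert(collection_name=COLLECTION, points=points)

    # A scheduler (cron, Celery beat, etc.) calls reindex() with fresh records from Salesforce.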

We build these various options into our prompt logic so the model knows they're available when deciding what context to pull into the prompt for a given query.

A key note: using vector DBs or generated API queries does not mean we can't add complex filtering when accessing data in Salesforce. We've built this into these conversational systems a number of times and it works well. We define the schema we need to take advantage of the specific filters and include it in the search knowledge. We've used this to search on specific customer or client IDs, access specific documents, limit access to documents, and handle a number of other operations.
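
As an example of that kind of filtering, a customer-scoped lookup might look like the sketch below; the collection name and the customer_id payload field are hypothetical:

    from qdrant_client import QdrantClient
    from qdrant_client.models import Filter, FieldCondition, MatchValue

    qdrant = QdrantClient(url="http://localhost:6333")

    def search_for_customer(query_vector: list[float], customer_id: str, top_k: int = 5):
        # Vector similarity search restricted to one customer's records via a payload filter.
        return qdrant.search(
            collection_name="salesforce_products",  # placeholder collection name
            query_vector=query_vector,
            query_filter=Filter(
                must=[FieldCondition(key="customer_id", match=MatchValue(value=customer_id))]
            ),
            limit=top_k,
        )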

Use of Embedded Vectors

In our system, we employ embedded vectors at two critical junctures: during the initial data processing phase and at the moment of query execution.

The first application involves the static data that is uploaded to our vector database. As part of an offline procedure, we extract textual content from the uploaded documents, format it appropriately, and then convert it into embedded vectors. These vectors are then stored in our database, ready to be accessed through our Lookup[] function when needed.

The second application occurs during live operations when we need to retrieve information from our vector database in response to user queries. To do this, we must convert the query, which is generated by our query formulation model, into an embedded vector in real time. This conversion allows us to match the query with the most relevant information in our database.
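
A minimal sketch of that real-time conversion, assuming an OpenAI embedding model; the reformulated query string is only an example:

    from openai import OpenAI

    client = OpenAI()

    def embed_query(query: str) -> list[float]:
        # Convert the reformulated query into an embedded vector for the vector DB search.
        response = client.embeddings.create(model="text-embedding-ada-002", input=query)
        return response.data[0].embedding

    query_vector = embed_query("oscilloscopes with 100 MHz bandwidth or better in stock")
    # query_vector is then passed to the vector DB lookup (e.g., the Lookup[] action above).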

We currently have a product, which is in the private beta stage with select clients, designed to enhance the performance of our embedded vector generation model. The aim is to refine the model's ability to discern semantic similarities between the database content and user queries. This is particularly important in large vector databases where data can be densely packed and highly similar, making it challenging to retrieve the precise document needed. The improved model is being trained to better recognize the nuances in the way queries are phrased and how they relate to the segments of text within documents, thereby redefining the concept of "similarity" for more effective information retrieval.

Another Option - ReAct Chatbot Using GPT-4

In this section, we explain how we use the OpenAI Assistants API and function calling to equip chatbots with the ReAct prompting framework. As touched on above, ReAct enables chatbots to reason about complex questions and execute suitable actions; we give a fuller overview of it below.

The goals we're talking about are ones like searching for the closest product that satisfies all the complex criteria specified by a customer in an inventory with millions of products, exactly the kind of goals that a B2B procurement manager or account executive would want in their Salesforce workspaces.

The Assistants API

The OpenAI Assistants API enables the use of specialized agents and actions while a user chats with GPT-4.

For a long time, GPT-3 and GPT-4 could only generate replies based on their training data. They weren't capable of things like:

  • Looking up information on the web
  • Fetching relevant data on demand from public or private databases
  • Doing any custom data processing

The Assistants API solves such drawbacks by letting chatbot client applications define custom agents and their semantics. These work as follows:

  • Based on the provided agent descriptions, GPT-4 decides if a custom agent should run on demand.
  • If it decides that an agent must run, it sends a request to the application to run the selected agent with the supplied arguments.
  • The application executes the agent and sends back its results to GPT-4. One agent may look up information on the web. Another agent may retrieve relevant documents that contain additional information. Yet another may talk to an external subsystem like Salesforce to get useful customer data. The possibilities of what agents can do are endless.
  • GPT-4 treats the results from agents as additional context for use in its subsequent plans and replies.

As of December 2023, two predefined agents are provided by the Assistants API:

  • Knowledge retrieval: This agent is a general-purpose implementation of retrieval-augmented generation. It searches your file systems and data repositories for documents containing information relevant to the conversation. PDFs, Microsoft Office documents, and many other formats are supported.
  • Code interpreter: This is a specialized agent that generates Python code to solve complex math and coding problems and refines it iteratively by running it in a sandboxed environment until it produces useful answers.

In addition, the Assistants API comes with a generic function-calling framework that enables client applications to implement any type of custom agent.
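
A condensed sketch of this flow, assuming the OpenAI Python SDK as of December 2023; the check_inventory function and its placeholder result stand in for a real Salesforce lookup:

    import json
    import time
    from openai import OpenAI

    client = OpenAI()

    assistant = client.beta.assistants.create(
        model="gpt-4-1106-preview",
        instructions="You are a Salesforce product assistant for account executives.",
        tools=[{
            "type": "function",
            "function": {
                "name": "check_inventory",  # hypothetical custom agent
                "description": "Check available stock for a product specification.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "spec": {"type": "string", "description": "Product spec, e.g. '100 MHz oscilloscope'"},
                        "quantity": {"type": "integer", "description": "Units needed"},
                    },
                    "required": ["spec", "quantity"],
                },
            },
        }],
    )

    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user",
        content="Do we have 500 oscilloscopes at 100 MHz or better for dispatch this month?",
    )
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

    while run.status not in ("completed", "failed", "expired", "cancelled"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
        if run.status == "requires_action":
            # GPT-4 asked the application to run our custom agent; execute it and return the result.
            outputs = []
            for call in run.required_action.submit_tool_outputs.tool_calls:
                args = json.loads(call.function.arguments)
                result = {"spec": args.get("spec"), "available": 520, "warehouse": "DC-3"}  # placeholder lookup
                outputs.append({"tool_call_id": call.id, "output": json.dumps(result)})
            run = client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread.id, run_id=run.id, tool_outputs=outputs,
            )

    print(client.beta.threads.messages.list(thread_id=thread.id).data[0].content[0].text.value)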

ReAct Overview for Function Calling

ReAct prompting framework

ReAct is a prompting strategy to make an LLM plan on how to answer a complex question by combining reasoning with any necessary actions. In the example above, the LLM reasons that in order to answer a complex question comprehensively and accurately, it must break down the question into distinct actions that require interfacing with external systems.

The full list of available actions is domain-specific and supplied via few-shot examples to the LLM. It picks the actions that are most helpful for answering the question and acts on them.

Some few-shot examples for a question-answering task are shown below:

ReAct prompt examples

ReAct helps use cases like complex product search with multiple criteria (a hypothetical trace follows this list):

  • It starts by executing a high-level product search.
  • It then uses reasoning to examine each product against the user's criteria.
  • It eliminates any products that don't match some of the criteria.
  • It verifies if the remaining products satisfy each of the other criteria.
  • It homes in on the product that best satisfies all the criteria.
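
A hypothetical trace for such a search is shown below; the Search[] and Lookup[] action names, SKUs, and numbers are all made up for illustration:

    Question: Customer needs 500 oscilloscopes, 100 MHz or better, deliverable this month.
    Thought 1: Search the catalog for oscilloscopes with bandwidth of 100 MHz or more.
    Action 1: Search[category=oscilloscope, min_bandwidth=100MHz]
    Observation 1: Three matching SKUs: OSC-100A, OSC-150B, OSC-200C.
    Thought 2: Check current stock and lead times for these SKUs.
    Action 2: Lookup[inventory for OSC-100A, OSC-150B, OSC-200C]
    Observation 2: OSC-150B has 520 units in DC-3, shippable within two weeks; the others are below 500 units.
    Thought 3: OSC-150B satisfies the quantity, bandwidth, and timing criteria.
    Action 3: Finish[Recommend OSC-150B: 520 units in DC-3, dispatch within two weeks.]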

Implementing ReAct With GPT-4

We can implement ReAct as an assistant using the Assistants API or through its function-calling capability. The advantage of the GPT-4 API is that it provides a much more structured way to specify actions than the plain-text prompting proposed in the original ReAct paper.

OpenAI's API enables you to specify actions and their information in great detail as shown below:

OpenAI function calling API example (Source: OpenAI)

GPT-4 uses that semantic information and metadata — the function description, parameter property descriptions, parameter units, and so on — to make informed decisions on when to invoke your actions based on your queries.
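
The shape of such a definition, adapted as a sketch from OpenAI's well-known get_current_weather documentation example, looks roughly like this:

    # Sketch of a function definition with rich semantic metadata, adapted from
    # OpenAI's documentation example; descriptions, enums, and units guide GPT-4's decisions.
    weather_function = {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit to use in the reply",
                },
            },
            "required": ["location"],
        },
    }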

GPT-4 also enables the structured invocation of actions and transmission of action responses back to the LLM.

In summary, ReAct is a powerful, yet simple, approach to equip GPT-4 with attention to detail, reasoning, and interfacing with external systems via actions.

Chatbot and Salesforce Integration

Salesforce dashboard

In this section, we explain how to deploy a custom chatbot and integrate it seamlessly with Salesforce's front end, back end, and security settings.

Decide How to Deploy the Chatbot Backend

You have two choices when deploying the chatbot backend:

  1. Web service API outside Salesforce: In this deployment, your chatbot is a web service with an API running on your self-hosted or managed infrastructure (a minimal sketch of such a service follows this list).
  2. Apex service inside Salesforce: Apex is Salesforce's domain-specific programming language for backend components and services. You can implement the chatbot's prompting framework logic and GPT-4 API calls as an Apex service.
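
A minimal sketch of option 1, assuming FastAPI; answer_query() is a placeholder for the prompting-framework pipeline described earlier:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class ChatRequest(BaseModel):
        user_id: str
        message: str

    class ChatResponse(BaseModel):
        reply: str

    def answer_query(user_id: str, message: str) -> str:
        # Placeholder: run intent classification, the prompting framework, and data retrieval here.
        return "..."

    @app.post("/chat", response_model=ChatResponse)
    def chat(request: ChatRequest) -> ChatResponse:
        # The Lightning web component calls this endpoint with fetch(); add authentication in production.
        return ChatResponse(reply=answer_query(request.user_id, request.message))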

Chatbot User Interface

Implement the chatbot user interface as a Lightning web component that users can add to their home pages. Lightning is the Salesforce user interface (UI) framework. The chatbot is just a simple component implemented with standard HTML and JavaScript on top of the Lightning or Aura UI components.

The UI sends user prompts to the custom chatbot based on how the latter is deployed:

  • If it's deployed as an external web service API, use the standard JavaScript fetch() function to communicate with the API.
  • If it's deployed as an Apex service, use Lightning APIs to communicate with the service.

Security Settings and Other Considerations

Salesforce is very strict about security settings. Whether talking to the chatbot API from the UI or talking to the GPT-4 API from the Apex backend, ensure that the external API URL is added as a trusted site in remote site settings. You may also need to configure named credentials if your chatbot API requires authentication.

Apart from security, Salesforce imposes performance and latency limits on components. Ensure that the chatbot sends a response within those limits, or alternatively, implement asynchronous replies. The latter is probably the better option since a multi-step prompting framework like ReAct can take a long time to finish.

Salesforce Chatbot Demos

In this section, we demonstrate a Salesforce chatbot that:

  • Assists sales and account executives in answering questions from their B2B customer liaisons that may come up during sales negotiations or routine operations, such as availability of products
  • Helps customer service personnel in answering customer queries, typically in B2C scenarios

The image below shows the Sales app home page of a sales executive at a large electronic equipment wholesaler. The custom chatbot has access to their SKUs and current inventory data and is available in the right sidebar:

Salesforce dashboard

While the exec is on a call with a valuable client, the client asks a difficult question about the availability of a large number of devices: "We need 500 100 MHz or better oscilloscopes within the month. Do you have any products for immediate dispatch?"

In the past, a question like that would necessitate a promise to get back soon with the information followed by internal communications with multiple departments to check if anything was available.

However, the product assistant chatbot, with access to real-time inventory and product information, cuts this time down to just a few seconds. The exec just asks it the same question as the client: "Customer needs 500 100MHz or better oscilloscopes within the month. Do we have any products in enough numbers in any of our distribution centers?"

Product assistant chatbot

Using its real-time information retrieval and reasoning capabilities, the chatbot provides a comprehensive answer along with product details:

Product search with AI

The answer has all the product availability information the exec needs to answer the question on the spot, impress the client, and retain a sale that might have gone to a competitor.

In the same company's customer service department, a support executive gets a query from a retail customer: "Is my Keysight O9025 oscilloscope bought in June 2022 still under calibration warranty?"

With access to all the details of the sale and the product, the chatbot answers this immediately and suggests additional information to request from the customer for a more accurate answer:

Ready to Build a Custom Salesforce Chatbot for Your Organization?

In this article, we showed that custom Salesforce chatbots can offer enormous value and productivity gains to your employees. Contact us to learn more and discuss custom chatbots tuned to your business needs.

References

  • Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao (2022). "ReAct: Synergizing Reasoning and Acting in Language Models." arXiv:2210.03629 [cs.CL]. https://arxiv.org/abs/2210.03629