Width.ai

Building A GPT-3 Twitter Sentiment Analysis Product

Matt Payne
·
August 21, 2023


We’re going to walk through building a production-level Twitter sentiment analysis classifier using GPT-3 and the popular tweet dataset Sentiment140.

Social media sentiment analysis has a ton of powerful business use cases across many different industries. Understanding the sentiment behind a tweet aimed at your brand or business is one of the quickest ways to gauge how people feel about your business, product, competing products, or your marketing strategy. Case studies such as this one show how you can analyze your interactions and use dashboards to create deep marketing analytics around product launches. We’ve laid out use cases for sentiment analysis in this article, such as:


  • Improving system down-time reactions
  • Generating extremely warm leads with competition spying
  • Chatbot funnel automation


Gone are the days of relying on outdated sentiment analysis models: naive Bayes classifiers, NLTK, and the bag-of-words models that paved the way in the early days. GPT-3 is a powerful NLP model capable of a wide variety of natural language processing tasks through few-shot learning. If you’re unfamiliar with the capabilities of GPT-3, I suggest you read our business applications with GPT-3 post.


Getting Started

The first thing we’ll need to do is pick a dataset of labeled tweets that we can use for sentiment analysis. There are a ton of these datasets out there, and you can use any of them as long as they have some form of sentiment label. Most commonly the label will have three options: “Positive”, “Negative”, “Neutral”. You’ll notice when scanning the datasets that they are usually based around a specific company's tweets, such as tweets aimed at Apple, Best Buy, or others. These projects are normally done with a specific company's use case in mind, and you can certainly build your own dataset based on your own tweets. Keep in mind that you’ll have to label the tweets yourself. Since we’re going to try to take advantage of GPT-3’s few-shot learning abilities we’ll only use a few tweets in the prompt for now, but we’ll use something trickier later down the line.


For this example I’ll move between a few datasets, but we’ll start with Sentiment140.


Example of the Sentiment140 dataset for sentiment analysis


This dataset has a ton of tweets and contains the same three labels shown above, marked with “4” = positive, “2” = neutral, “0” = negative. The only downside to this dataset is that it is pretty outdated, with most of the tweets coming from 2009. The language used in tweets has probably shifted quite a bit between 2009 and 2021, but we’ll start with this for now. For real-time use of the machine learning model we’ll want to use the Twitter API to grab inputs for our text analysis.
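As a minimal sketch, here’s roughly how you might load the dataset and map those numeric codes to string labels with pandas. The column names are our own labels for the headerless CSV, and the filename is just the standard download name, so adjust both to your copy.

```python
import pandas as pd

# Sentiment140 ships as a headerless CSV; these column names are our own,
# and the filename is whatever you saved the download as.
columns = ["target", "id", "date", "flag", "user", "text"]
df = pd.read_csv(
    "training.1600000.processed.noemoticon.csv",
    encoding="latin-1",
    names=columns,
)

# Map the numeric sentiment codes to the string labels we'll use with GPT-3
label_map = {0: "Negative", 2: "Neutral", 4: "Positive"}
df["label"] = df["target"].map(label_map)

print(df[["text", "label"]].head())
```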


Diving Into GPT-3

twitter sentiment analysis using gpt-3 prompt


Oh look at that, GPT-3 already has a tweet classifier built in! Sorry to burst your bubble, but this model just does not work well on real tweets for sentiment analysis. If you look through the examples used in the prompt you’ll notice the language is very basic and does not match real tweets. All the given example tweets use language that is easy for the model to figure out and are never long enough to provide real variance. Words like “hate”, “loved”, and “amazing” are used throughout, and are always attached to clear and concise thoughts. Rarely do real life examples make sentiment analysis that easy.


Let’s Make A Quick Tweet Sentiment Analysis Prompt

We’re going to look at a bunch of ways to build this machine learning software, and we’re going to start with a simple prompt-based model. Using a few examples of positive, negative, and neutral tweets we can put together a quick model.


Example prompt for twitter sentiment analysis
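As a minimal sketch, here’s roughly what calling the completion endpoint with a few-shot prompt like this might look like. The example tweets below are placeholders rather than the exact ones from our prompt, and the call assumes the pre-1.0 openai Python client.

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Placeholder few-shot examples -- swap in your own labeled tweets
FEW_SHOT_PROMPT = """Tweet: just got the best seats for the show tonight!
Sentiment: Positive
###
Tweet: my flight got cancelled again. great.
Sentiment: Negative
###
Tweet: heading to the store, anyone need anything?
Sentiment: Neutral
###
Tweet: {tweet}
Sentiment:"""

def classify_tweet_few_shot(tweet: str) -> str:
    # Temperature 0 keeps the label deterministic; a few tokens cover the label
    response = openai.Completion.create(
        engine="davinci",
        prompt=FEW_SHOT_PROMPT.format(tweet=tweet),
        max_tokens=3,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()

print(classify_tweet_few_shot("the new update keeps crashing my phone"))
```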


After running a batch of tweet examples through, the accuracy came out to around 65% with 6 total examples in the prompt. Considering how small the example set is for each class, this is pretty good. Let’s look at a few important points we found when analyzing the results.


  • The model struggled mostly with tweets deemed to be neutral in sentiment. There are a couple of reasons for this that become pretty clear when diving into the language. For one, there isn’t a consistent set of language that is always deemed “neutral”. With positive and negative there are sets of words and phrases that pretty much always lead to one or the other. Neutral doesn’t really have this, or the set of language is too large to cover with just 2 examples in the prompt. The other spot where neutral can struggle is when the tweet contains language that is both positive and negative. If the tweet has two parts, whether sentences or phrases, with differing sentiment, it’s difficult for the model to decide what to do. In most cases GPT-3 uses recency bias to decide whether tweets like that should be positive or negative. An example of one of these tweets is: “UP! was sold out, so i'm seeing Night At The Museum 2. I'm __ years old.” The model actually thinks this is a positive tweet, and you could argue the text before the comma is negative and the rest is positive.
  • As with all GPT-3 based tasks, context matters a ton when determining sentiment. We only used 6 total examples for this first prompt, which makes it impossible to cover a wide range of variance. A few tweets in the test tweet sentiment analysis dataset were sports fans arguing about LeBron James and Kobe Bryant, and none of the prompt examples had anything to do with basketball. This large global variance can be accounted for with a few tools we’ll talk about below.
  • When your prompt is this small and covers this little variance and context, the length of the input tweet matters a lot. If your examples are mostly one line or one sentence, new input tweets that are much longer or much shorter can see varying accuracy.


Even as we dive into the more powerful machine learning models down the line, these same points will remain points of emphasis.


We added an extra 30 tweets to this beginner GPT-3 model’s prompt and were able to increase the accuracy by around 8 points instantly. Now let’s look at real production-level examples of sentiment analysis.


Tweet Sentiment Analysis Using GPT-3 Classify Text API


At the time of writing, the GPT-3 classifications API endpoint is still in beta. Given the huge success of the other task-specific API endpoints, and the success we’ve had with this specific endpoint, adding it to any guide is a no-brainer.


The classifications endpoint allows you to leverage a far greater number of labeled examples for your GPT-3 model than you could fit in the token-limited prompt. You can also avoid the need to fine-tune your model with the fine-tuning endpoint, which requires hyper-parameter tuning. Let’s look at the process to reach a high accuracy classifier.


Classifications Endpoint Process Overview

Layout of the Classifications API for sentiment analysis


Gather Required Inputs

Before running the model you have to upload labeled examples to the GPT-3 documents server. It’s a very simple process: you upload a jsonl file where each line is a single training example with a “text” field, which should be your tweet, and a “label” field containing one of our three labels as a string. Each line can also contain a “metadata” field, which does not affect the output. One interesting thing to note is that you can use this same API endpoint for a bunch of classification tasks depending on what your labels are for your examples.


JSONL example for classification
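As a rough sketch in Python, writing such a file might look like the following. The example tweets and the output filename are placeholders.

```python
import json

# Build the JSONL file of labeled examples to upload; the "metadata" field is
# optional and does not affect the classification output.
examples = [
    {"text": "my package finally arrived and it works perfectly", "label": "Positive"},
    {"text": "been on hold with support for two hours now", "label": "Negative"},
    {"text": "does anyone know when the new store opens?", "label": "Neutral"},
]

with open("tweet_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```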


You’ll have to upload this file via a query, which can be found here. To run the model on a new tweet via the API you can create a query in Python that looks like this:


openai classification api example
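As a rough sketch, assuming the pre-1.0 openai Python client and its beta Classification helper, the upload and query might look something like this. The purpose string, parameter values, and sample tweet are our assumptions, not values from the screenshot.

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Upload the labeled examples once; the response contains the file ID that
# the "file" field below refers to.
uploaded = openai.File.create(
    file=open("tweet_examples.jsonl", "rb"),
    purpose="classifications",
)

result = openai.Classification.create(
    file=uploaded["id"],            # location hash / ID of the uploaded file
    query="my timeline is nothing but good news today",  # tweet to classify
    model="davinci",                # engine that performs the final classification
    search_model="davinci",         # engine used to rank examples semantically
    max_examples=200,               # how many examples survive the keyword search
    labels=["Positive", "Negative", "Neutral"],
)

print(result["label"])
```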

The “file” field is a location hash of the file you uploaded; it’s provided as an output when you complete the upload. “query” will be your tweet and “model” will be the GPT-3 engine you use to classify. We’ll get to “max_examples” and “search_model” and their value shortly. The result you get back will look something like this.


Results from gpt-3 classification api for sentiment analysis


The key benefit of this API endpoint is that you can use an incredibly large number of labeled examples that cover a much larger set of variance and language domains. You can store up to 1 GB of files at once, which should be more than enough.


Narrow Examples Dataset

The process of narrowing the labeled tweet examples is a two-step process that starts with a conventional keyword search across the tweet examples, shrinking the pool down to the number set by the “max_examples” field we saw above. This is an integer that defaults to 200 and can be adjusted. The general rule with this GPT-3 feature is that a higher value leads to improved accuracy across a generalized test dataset, but has higher latency and cost. I’m a huge fan of going with a much larger “max_examples” value for twitter sentiment analysis. Let me explain why:


  • This API endpoint is set up to handle classification of any documents or text regardless of size, which is part of the reason for the keyword search as a way to reduce the pool of possible examples. The problem with our use case is that tweets are generally very short and do not provide many valuable keywords to begin with. This is not a semantic or contextual search, which would allow keywords like “basketball” to be aligned with similar sports keywords. Any given example will share far fewer highly matching keywords with our input tweet than you would see with longer example data points like resumes.
  • Based on what the next step in this pipeline does (we’ll get there), the labeled tweet examples are going to be reordered semantically against our input tweet’s text anyway. Labeled examples that barely match in terms of keywords won’t have a large impact on the input tweet’s classification result, since they will be ranked far down.


Just like any other machine learning model with hyper-parameters, you can use tools such as Bayesian optimization to find the optimal value for “max_examples” that produces the highest accuracy across a test dataset.
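As a minimal sketch of that idea, here’s a plain sweep rather than a full Bayesian optimizer; FILE_ID and the test set are placeholders, and the call mirrors the hedged Classification sketch above.

```python
import openai

openai.api_key = "YOUR_API_KEY"
FILE_ID = "YOUR_UPLOADED_FILE_ID"  # returned by the file upload step above

# Small held-out set of (tweet, label) pairs -- placeholders for illustration
test_set = [
    ("the update completely broke my app", "Negative"),
    ("loving the new interface so far", "Positive"),
]

def accuracy_for(max_examples: int) -> float:
    """Accuracy of the classifications endpoint for one max_examples setting."""
    correct = 0
    for tweet, label in test_set:
        result = openai.Classification.create(
            file=FILE_ID,
            query=tweet,
            model="davinci",
            search_model="davinci",
            max_examples=max_examples,
            labels=["Positive", "Negative", "Neutral"],
        )
        correct += result["label"] == label
    return correct / len(test_set)

# A Bayesian optimizer (e.g. scikit-optimize) could replace this simple grid
for candidate in [50, 100, 200, 400]:
    print(candidate, accuracy_for(candidate))
```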


Rank Tweet Examples On Semantic Relevance

Once we have narrowed down to a set of examples, we rank the labeled tweets by semantic search scores. The “search_model” parameter decides which GPT-3 model is used for this, and we recommend “davinci”. We’ve found that for tasks across the board, especially when the model is not fine-tuned, davinci significantly outperforms the other options. If you want to learn more about how search works, you can read the API guide on the search endpoint. Ranking labeled tweets by semantic relevance is a huge advantage of this API endpoint, as it greatly improves accuracy.


Tweet ranking when using classification api


You’ll notice this list in the output results, showing which tweet examples from the uploaded file were used. The “score” field is a similarity score between the example and the input tweet text. Higher scores indicate greater similarity, with most values falling between 0 and 300. Since there is no set range for the score, different search queries produce different distributions of scores. Considering the large dataset we want to use for tweet sentiment analysis, I think it's important to understand the mean and standard deviation of the scores on a test set of randomly sampled labeled tweets.
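As a rough sketch of how you might measure that, the snippet below collects scores across a handful of sample tweets. The "selected_examples" key is an assumption about the response shape, based on the example list and “score” field described above.

```python
from statistics import mean, stdev

import openai

openai.api_key = "YOUR_API_KEY"
FILE_ID = "YOUR_UPLOADED_FILE_ID"

# Placeholder sample of tweets to probe the score distribution
sample_tweets = [
    "the update completely broke my app",
    "loving the new interface so far",
    "anyone else watching the game tonight?",
]

# Collect the similarity scores reported for the examples GPT-3 selected
all_scores = []
for tweet in sample_tweets:
    result = openai.Classification.create(
        file=FILE_ID,
        query=tweet,
        model="davinci",
        search_model="davinci",
        max_examples=200,
        labels=["Positive", "Negative", "Neutral"],
    )
    all_scores.extend(example["score"] for example in result["selected_examples"])

print(f"mean score: {mean(all_scores):.1f}, std dev: {stdev(all_scores):.1f}")
```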


Classify Text

The final step simply returns what label the model believes best matches the tweet. 

Real example of classification api for twitter sentiment analysis


Final Thoughts On The Classification API

Classification api information from gpt-3

This is an extremely powerful way to perform twitter sentiment analysis regardless of what tools you’re considering. The data can be used quickly without having to worry about training a giant model like BERT, and you get the opportunity to use a huge amount of labeled tweets. You normally have to manage the trade-off between a model that requires training and the few-shot learning of GPT-3, but being able to use a huge amount of data while the API handles optimization tasks such as ranking and semantic relevance for you is a huge advantage.


We’ve used this API endpoint plus a custom prompt optimization pipeline that we built to get extremely high accuracy on both the Sentiment140 dataset and custom client data. Here are a few things to think about with twitter sentiment analysis:


  • The keyword narrowing is something you’ll have to manage quite a bit. 
  • Understand key data science principles around your labeled examples and their relevance to real world inputs. 
  • Build a probability estimation model to evaluate your classification labels. This helps you estimate the predicted probabilities of each classification label on an input tweet. This gives you deep insight into the relationship between language used in the tweet and the model's understanding.



Twitter Sentiment Analysis Using Prompt Optimization

We’re going to take a lot of the information we’ve already seen from the two implementations above to build a new prompt-based model like the first twitter sentiment analysis model shown, but with custom components similar to the second API endpoint tool.


The goal of this component is to reach a generalized accuracy as high as the second API endpoint tool, while giving us a level of customization that lets difficult sentiment analysis use cases optimize their accuracy. A popular example of a sentiment analysis use case that can struggle is tweets where much of the writer’s meaning is expected to be inferred by the reader. We’ve seen this come up before in case studies such as topic summarization for asset management, where the writer is directly interacting with someone, which can lead to points being implied rather than stated. Natural language processing models generally struggle with this.


Twitter Sentiment Analysis Prompt From Completion Api

Given the normal GPT-3 prompt and the number of examples we can fit into its 2048 token limit, you can imagine there’s no way to fit enough examples to cover a wide range of possible input tweets. When using the classification API above we didn’t have to worry about running out of tokens or trying to fit enough generalization into our prompt, as there is no token limit on the uploaded examples.


We’re going to build our own version of the processes in the classification API to give us more control over functionality and results. Just as before, we have our dataset of tweets labeled for sentiment analysis. Instead of uploading them to GPT-3 or hand-picking them to store in a GPT-3 prompt, we’re going to put them into a data structure of our choice.


Reading the Sentiment140 dataset into a pandas dataframe


This will act as our pool of possible tweets to use as examples for a given prompt and input tweet. When it's time to add tweets to our prompt and run our input tweet, we’ll format these example tweets and their labels the same way we saw in our first example.


GPT-3 playground example of prompt examples for sentiment analysis

We’ll want a simple script to take any given tweet and label from our dataset and put them in this format. The label is added to a line as “Sentiment: label” and the text is added to a line as “Tweet: text”. Don’t forget the “###” between examples.
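A minimal sketch of that formatting script might look like this; the helper names are our own.

```python
def format_example(tweet: str, label: str) -> str:
    """Format one labeled tweet in the layout shown in the playground example."""
    return f"Tweet: {tweet}\nSentiment: {label}\n###"

def build_prompt(examples: list[tuple[str, str]], input_tweet: str) -> str:
    """Stack the chosen (tweet, label) examples, then append the tweet to classify."""
    example_block = "\n".join(format_example(tweet, label) for tweet, label in examples)
    return f"{example_block}\nTweet: {input_tweet}\nSentiment:"
```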

Creating An Optimized Prompt Algorithm 

Now we need a way to choose which labeled tweets from our dataset are best to use as prompt examples in our 1500-token-limited GPT-3 prompt. As we saw in the first example, we can’t account for a ton of language variance and topics in the tweets used as examples, so we have to choose wisely which labeled examples give us the best chance of producing a correct sentiment analysis for a given input tweet.


SBERT sentence transformers for twitter sentiment analysis


SBERT allows us to compute semantic similarity between tweets in the same way the classification API did above. It’s no secret that semantically similar examples produce better results in GPT-3 for any given input tweet. SBERT creates text embeddings that can be compared with measures such as cosine similarity to determine which tweets are semantically similar to one another. We can compute our own text embeddings and build a prompt example generator optimized for our use case. Why would you want to use your own semantic similarity pipeline over the one offered with the classification API?


  • More flexibility over the model used to compute sentence embeddings. GPT-3 does not allow you to choose one, and more importantly you cannot retrain the model used. With this pipeline you can pick a pretrained model and fine-tune it for your specific use case.
  • True customization over what decides the examples used in the prompt. The classification API uses keyword comparison to first shrink the pool and then semantic similarity to rank examples, and that is your only option. You cannot remove either step in the process or reorder them.


How Should Our Basic Pipeline Work?

I highly recommend computing the text embeddings for your example tweet dataset ahead of time, as it's a long process that you won’t want to do at runtime. No matter how many steps you have in your pipeline, you’ll want to precompute anything that is not related to the input tweet. Here’s a simple example of running the completion API with a prompt optimization function that works on the fly and builds the prompt in the format shown in the playground model.


Simple framework of required steps


This is what our main function should look like. The “prompt” is filled in dynamically by the pipeline that selects examples.
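A minimal sketch of what that main function might look like: find_similar_examples is the SBERT ranking step sketched further below, build_prompt is the formatting helper from the previous section, and the completion call assumes the pre-1.0 openai client.

```python
import openai

openai.api_key = "YOUR_API_KEY"

def classify_tweet(input_tweet: str) -> str:
    # Pick the labeled examples most semantically similar to this tweet
    examples = find_similar_examples(input_tweet, top_k=5)

    # Build the few-shot prompt on the fly and run it through the completion endpoint
    prompt = build_prompt(examples, input_tweet)
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=3,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()
```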


Example of using SBERT to transform tweet dataset into embedded vectors
Here’s an example of the logic to generate embedding vectors from the dataset of tweets. model.encode takes in a list of strings and returns a list of embedding vectors.


You can use this same code to generate the embedding vector for our input tweet as well. Unless you change the model used or the words in a tweet, its embedding vector does not change.
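A minimal sketch of that encoding step, reusing the pandas dataframe from earlier; the SBERT model name is our own choice of a common lightweight pretrained model, not the one from the screenshot.

```python
from sentence_transformers import SentenceTransformer

# Any pretrained SBERT model works here; this one is a common lightweight choice
model = SentenceTransformer("all-MiniLM-L6-v2")

# Precompute embeddings for the whole example pool once, offline
example_tweets = df["text"].tolist()
example_labels = df["label"].tolist()
example_embeddings = model.encode(example_tweets, convert_to_tensor=True)

# The exact same call produces the embedding for a new input tweet at runtime
input_embedding = model.encode("my flight got delayed three hours", convert_to_tensor=True)
```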


SBERT example code to semantically compare new input tweets to dataset


This code example from SBERT shows how we can take our list of text embeddings from labeled tweets and compare them to a newly encoded input sentence (the tweet). Also shown is ranking the results and grabbing the top 5 most semantically similar tweets. This is how we can easily choose prompt examples in real time that are optimized for our input tweet.
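Here’s a minimal sketch of that ranking step, reusing the model and precomputed embeddings from the previous snippet; the helper name matches the one referenced in the main function sketch above.

```python
from sentence_transformers import util

def find_similar_examples(input_tweet: str, top_k: int = 5) -> list[tuple[str, str]]:
    """Return the (tweet, label) pairs most semantically similar to the input tweet."""
    input_embedding = model.encode(input_tweet, convert_to_tensor=True)
    # Cosine-similarity search over the precomputed example embeddings
    hits = util.semantic_search(input_embedding, example_embeddings, top_k=top_k)[0]
    return [
        (example_tweets[hit["corpus_id"]], example_labels[hit["corpus_id"]])
        for hit in hits
    ]
```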


Visualization of sentence embedding space with tweets

If we were to visualize our tweets in the embedding space, they would look something like this. Sentences (tweets) with similar semantic topics and information cluster together because their vectors are closer to each other.


Twitter API For Real Time Analysis


Using twitter api to create a real time tweet streamer for sentiment analysis

Using the Twitter API and this easy-to-follow guide, you can quickly create a deployed application to grab new tweets for sentiment analysis or to create new training data.
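As a minimal sketch, assuming Tweepy's v2 Client, the bearer token and search query below are placeholders and classify_tweet is the pipeline function sketched above.

```python
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

# Grab recent tweets mentioning the brand, excluding retweets
response = client.search_recent_tweets(
    query="@YourBrand -is:retweet lang:en",
    max_results=50,
)

for tweet in response.data or []:
    print(tweet.text, "->", classify_tweet(tweet.text))
```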


Final Thoughts

Twitter sentiment analysis using GPT-3 and other machine learning algorithms is a powerful application that is easy to build and can be customized to many different levels of sophistication. GPT-3 is an incredible tool that we’ve blended into all of our text analysis products, and it is clearly more than capable of producing high-quality results for twitter sentiment.


Build Your Next GPT-3 Application With Width.ai

We build custom GPT-3 and NLP products that fit a huge range of industries and use cases. We have GPT-3 case studies in asset management, SaaS, ecommerce and many more. Let’s talk today about GPT-3 in your industry. 

