Automated Issue Classification of Political Advertisements on Facebook

By Natalie Appel ‘23, Noah Cohen ‘22, Spencer Dean ‘21, Sam Feuer ‘23, Magda Kisielinska ‘22, and Brianna Mebane ‘22

Since 2010, the Wesleyan Media Project has hand-coded American political advertisements for an extensive list of variables relating to content and tone. The information collected through this process is insightful; however, collecting it is time-intensive. Thus, the initial question we sought to answer was how much, if any, of this process could be automated in order to keep up with the scale of digital advertising. Our goal was to train machine learning models on the text of the existing hand-coded ads in order to predict the characteristics of new ads. We tested a number of methods and ultimately found that our Random Forest models worked best for binary issue variables, while a DistilBERT neural network showed the most promise for multi-class variables, especially ad tone.

Data

We collected 2,865 unique Facebook ads from Facebook’s API that had been hand-coded by the Wesleyan Media Project; we used these ads to train and validate our models. To apply the models, we also collected a much larger set of uncoded Facebook ads: 604,723 ads run from October 1, 2020 through November 3, 2020 by advertisers who mentioned a federal candidate during the 2020 election cycle.

The set of hand-coded Facebook ads we worked with ran from January 1, 2020 through November 3, 2020. The characteristics of interest were (an example record appears after this list):

  • “Issue” variables: coded as yes or no to indicate whether the ad mentioned a given issue, such as COVID-19, health care, or the economy.
  • “Action” variables: donate, purchase, persuade, info, and other goal. “Donate” refers to ads asking viewers for donations, “purchase” refers to ads selling or promoting merchandise, “persuade” refers to ads trying to sway viewers in the advertiser’s favor, “info” refers to ads presenting information on the election or candidates, and “other goal” refers to ads with another perceived goal, like encouraging viewers to reach out to their representatives or register to vote. For each ad, these variables were coded as either primary goal, secondary goal, or not a goal.
  • The “ad tone” variable: a three-class variable that categorizes an ad as “attack,” “contrast,” or “promote,” depending on whether its perceived purpose was to attack a specific candidate, contrast candidates, or promote a specific candidate, respectively.
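
To make the coding scheme concrete, here is a hypothetical coded record; the field names and values are illustrative, not the project’s actual schema:

```python
# A hypothetical hand-coded ad record (field names are illustrative,
# not the Wesleyan Media Project's actual schema).
example_ad = {
    "ad_text": "Health care is on the ballot. Chip in $5 to protect coverage.",
    # Binary "issue" variables: 1 if the issue is mentioned, 0 otherwise.
    "issue_covid": 0,
    "issue_health_care": 1,
    "issue_economy": 0,
    # "Action" variables: 2 = primary goal, 1 = secondary goal, 0 = not a goal.
    "action_donate": 2,
    "action_purchase": 0,
    "action_persuade": 1,
    "action_info": 0,
    "action_other_goal": 0,
    # Three-class ad tone: "attack", "contrast", or "promote".
    "ad_tone": "promote",
}
```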

When applying our models, we also drew on other information collected by the Wesleyan Media Project and OpenSecrets, such as the political lean of Facebook advertisers, the dates ads ran, the approximate amount spent on each ad, and the platforms the ads appeared on.

Methods

We started by implementing a basic Random Forest model, focusing on predicting which ads mentioned certain issues, including COVID-19, employment, the economy, and health care. These models were around 80% accurate for the “issue” variables but struggled to identify positive cases, likely due to the low volume of positive cases in the training dataset. In an attempt to improve accuracy, we experimented with lemmatization and word embeddings and used ROC (receiver operating characteristic) curves to select classification thresholds. After tuning hyperparameters and making other improvements, the final Random Forest models had accuracy above 95% for these issue variables. We then expanded the list of variables we tested to include ones with fewer positive cases, and adjusted different iterations of the model to fit the five “action” variables.
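
As a rough illustration of this pipeline, the sketch below lemmatizes ad text, vectorizes it with TF-IDF (shown here in place of word embeddings for simplicity), fits a Random Forest to one binary issue label, and uses an ROC curve to pick a decision threshold. This is a minimal sketch of the general approach, not our exact code; the variable names (`ad_texts`, `labels`) and hyperparameters are placeholders.

```python
# Minimal sketch of a Random Forest issue classifier (illustrative, not our exact code).
import nltk
from nltk.stem import WordNetLemmatizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

nltk.download("wordnet", quiet=True)
lemmatizer = WordNetLemmatizer()

def lemmatize(text):
    """Lowercase and lemmatize each token in the ad text."""
    return " ".join(lemmatizer.lemmatize(tok) for tok in text.lower().split())

# `ad_texts` (list of ad strings) and `labels` (0/1 for one issue variable) are assumed.
X_train, X_test, y_train, y_test = train_test_split(
    [lemmatize(t) for t in ad_texts], labels, test_size=0.2, random_state=0
)

vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

forest = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
forest.fit(X_train_vec, y_train)

# Use the ROC curve to pick a probability threshold that balances
# sensitivity and specificity instead of the default 0.5 cutoff.
probs = forest.predict_proba(X_test_vec)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, probs)
best = (tpr - fpr).argmax()  # Youden's J statistic
preds = (probs >= thresholds[best]).astype(int)
```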

However, the Random Forest method continually struggled to predict the “ad tone” variable. Tuning hyperparameters and making other adjustments improved the model significantly, but we turned to other methods to see if any would outperform it. We experimented with an Extra Trees classifier and a Voting Classifier that combined Random Forest, Extra Trees, and AdaBoost. Running each of these models on several randomized splits of the data, we found that neither performed significantly better than the Random Forest, especially after adjusting for the class proportions. The Extra Trees method reached only 84% accuracy for ad tone, whereas we had achieved 87% accuracy with the Random Forest model.
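
A minimal sketch of this comparison, assuming scikit-learn and a TF-IDF feature matrix like the one above (the estimator settings are illustrative, not our tuned values):

```python
# Sketch of the ensemble comparison for ad tone (illustrative settings).
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score

# `X_vec` is a TF-IDF matrix of ad text; `y_tone` holds "attack"/"contrast"/"promote".
forest = RandomForestClassifier(n_estimators=500, random_state=0)
extra_trees = ExtraTreesClassifier(n_estimators=500, random_state=0)
adaboost = AdaBoostClassifier(n_estimators=200, random_state=0)

voter = VotingClassifier(
    estimators=[("rf", forest), ("et", extra_trees), ("ada", adaboost)],
    voting="soft",  # average predicted class probabilities across estimators
)

# Compare each model across several randomized splits of the data.
for name, model in [("Random Forest", forest), ("Extra Trees", extra_trees), ("Voting", voter)]:
    scores = cross_val_score(model, X_vec, y_tone, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```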

We also experimented with a keyword search approach. This involved creating a list of keywords and phrases for each variable and then searching the text of a given advertisement for matches. Some keywords were formulated intuitively, like “COVID” for the coronavirus variable; others were added by running relative frequency analyses to determine which words best differentiated classes within the hand-coded dataset. These lists were edited iteratively to optimize accuracy, sensitivity, and specificity.
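
A minimal sketch of the keyword search approach; the keyword lists here are short, illustrative stand-ins for the much longer lists we refined iteratively:

```python
# Sketch of the keyword search classifier (keyword lists are illustrative;
# the real lists were refined iteratively against the hand-coded data).
import re

KEYWORDS = {
    "covid": ["covid", "coronavirus", "pandemic", "vaccine"],
    "economy": ["economy", "jobs", "taxes", "inflation"],
    "health_care": ["health care", "healthcare", "medicare", "medicaid"],
}

def mentions_issue(ad_text, issue):
    """Return True if any keyword for the issue appears in the ad text."""
    text = ad_text.lower()
    return any(re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in KEYWORDS[issue])

print(mentions_issue("Protect Medicare for our seniors!", "health_care"))  # True
```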

We also thought that neural networks might be effective in classifying the ads. In recent years, many text classification and other natural language processing tasks have achieved breakthroughs in accuracy using a language model published by Google known as BERT (Bidirectional Encoder Representations from Transformers). Unlike so-called “bag-of-words” models, BERT uses the Transformer architecture’s attention mechanism to learn the contextual relationships between a text’s words and sub-words. For the purposes of this project, we applied DistilBERT, a lighter distilled variant of BERT, within a Keras-based deep learning pipeline.
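
A minimal sketch of such a fine-tuning pipeline, assuming the Hugging Face transformers library with its TensorFlow/Keras models (the checkpoint name and hyperparameters are placeholders, not our tuned values):

```python
# Sketch of fine-tuning DistilBERT for ad tone classification
# (hyperparameters are placeholders, not our tuned values).
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3  # attack / contrast / promote
)

# `ad_texts` is a list of ad strings; `tone_labels` holds integer classes 0-2.
encodings = tokenizer(ad_texts, truncation=True, padding=True,
                      max_length=128, return_tensors="tf")
dataset = (tf.data.Dataset.from_tensor_slices((dict(encodings), tone_labels))
           .shuffle(1000).batch(16))

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=3)
```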

Results

Ultimately, we were able to achieve upwards of 95% accuracy for most issue variables and some action variables, and approximately 87% accuracy for ad tone, using the Random Forest method. We also found that for many variables the keyword search model rivaled, and occasionally outperformed, the Random Forest models. This was especially true for issues like the economy or the coronavirus, for which certain words clearly indicate that the issue is being mentioned. Conversely, the keyword search model struggled with more ambiguous issues and did not perform well on ad tone or the action variables. The BERT classification method produced accuracy above 90% for most issue variables; however, it failed to outperform the Random Forest on ad tone, achieving only 76% accuracy. This model was still of high interest to us, as we believed it could be greatly improved; see here for the progress.

Figure 1: Accuracies, specificities, and sensitivities by variable and model

After developing these models, we used them to make predictions for the set of 604,723 Facebook ads run between October 1 and November 3, 2020 by advertisers who mentioned a federal candidate in the 2020 election cycle.

We first looked at the distribution of ad tone across Facebook and Instagram. Figure 2 shows that ads run only on Facebook were more likely to be attack ads, while ads run on both Facebook and Instagram were more likely to be contrast ads. We then investigated differences in spending across the action variables: Figures 3 and 4 show that ads aimed at getting viewers to donate were relatively expensive, while ads aimed at getting viewers to purchase items were relatively inexpensive.

Figure 2: Breakdown of ad tone across different social media platforms

Figure 3: Spend for Donate ads compared to ads in all other categories

Figure 4: Spend for Purchase ads compared to ads in all other categories
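
A sketch of how comparisons like these might be computed with pandas, assuming a dataframe `preds` that joins our predicted labels to the ad metadata (the column names are illustrative):

```python
# Sketch of the downstream comparisons (column names are illustrative).
import pandas as pd

# `preds` has one row per ad, e.g. columns: "platforms" ("facebook",
# "instagram", or "facebook,instagram"), "ad_tone", "spend",
# "action_donate", "action_purchase".

# Figure 2-style breakdown: ad tone proportions by platform combination.
tone_by_platform = (
    preds.groupby("platforms")["ad_tone"].value_counts(normalize=True).unstack()
)
print(tone_by_platform)

# Figure 3/4-style comparison: spend for donate/purchase ads vs. all others.
for action in ["action_donate", "action_purchase"]:
    is_goal = preds[action] > 0  # primary or secondary goal
    print(action,
          preds.loc[is_goal, "spend"].median(),
          preds.loc[~is_goal, "spend"].median())
```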

Conclusion

In conclusion, our team developed several machine learning models that make predictions about the content, sentiment, and intent of ads run on Facebook in the lead-up to the 2020 presidential election. The Random Forest and keyword search methods generally performed best, and we would like to continue exploring improvements to the BERT neural network, as it could eventually outperform the other methods on more complicated variables like ad tone. More information about class proportions in the entire corpus of Facebook ads could also prove instrumental in improving our ad tone predictions. As the Wesleyan Media Project gains a greater understanding of the entire universe of political advertisers on Facebook, we can begin to draw more general conclusions about political advertising on the platform. There are many applications of this research, and we look forward to continuing our analysis of these ads as we improve our models.