Candidates take different approaches to advertising on Facebook: how much they spend on each ad, what the ads contain, and how long each stays live on the platform. We wanted to investigate these trends and identify patterns in how candidates advertise themselves, particularly how political affiliation, incumbent status, or level of government shapes advertising strategy.
Due to persistent worries about gun violence and mass shootings, the question of gun regulation has become a highly divisive one in current American politics. In this regard, the goal of our research was to gain insight into how candidate traits, such as gender and partisanship, may affect whether and how candidates discuss and feature guns in political advertisements. We used deep learning models to detect weapons in political ad images and video frames. In particular, we evaluated the performance of SwinTransformer, ResNext, and RegNet_y_128f on gun detection, finding that RegNet_y_128f outperformed the other models, achieving an F1 score of 0.8 on our validation set.
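To make the evaluation metric concrete, here is a minimal sketch of computing precision, recall, and F1 for binary gun/no-gun predictions on a validation set. The labels below are illustrative, not our actual data.

```python
def f1_score(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (1 = gun present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative labels only: 1 = image contains a gun, 0 = no gun
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
precision, recall, f1 = f1_score(y_true, y_pred)
```

F1 balances precision (how many flagged images actually contain guns) against recall (how many gun images were found), which matters here because gun imagery is rare relative to the full ad corpus.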
Automatic speech recognition (ASR) models are key to our understanding of political communication. They allow us to convert audio data into text data, so they are vital to any of our projects that involve analyzing audio communication at scale. However, they are impeded by poor-quality audio, background music, or less common pronunciations of words, potentially producing unreliable transcriptions. Latine candidates are more likely than white, non-Latine candidates to have an accent or to have learned English as a second language. To gain confidence in the use of transcriptions for downstream tasks, we want to evaluate the performance of current models on these candidates.
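A standard way to evaluate ASR output against a human reference transcript is word error rate (WER): the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal sketch (the example sentences are hypothetical):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four -> WER of 0.25
wer = word_error_rate("vote for our candidate", "vote four our candidate")
```

Comparing WER across candidate groups is one way to quantify whether a model transcribes accented speech less reliably.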
Our goal was to use deep learning-based facial recognition algorithms to determine the appearances of political leaders, candidates, and opponents in political ad images. To do this, we ran a facial recognition algorithm in Python on Snapchat political campaign ad images. This algorithm uses the Histogram of Oriented Gradients (HOG) method (Dalal & Triggs, 2005) for face detection and a deep convolutional neural network (CNN) for face encoding.
We first compared the facial recognition results across different tolerance settings. Tolerance controls the strictness of matching: the lower the tolerance, the stricter the algorithm is when comparing faces. After testing tolerances of 0.5, 0.6, and 0.7, we compared the accuracy of the faces found against results from Amazon Web Services (AWS) that the Wesleyan Media Project (WMP) had previously analyzed. This comparison with the open-source facial recognition software showed that a tolerance of 0.5 was the most accurate at identifying faces in the Snapchat images and videos.
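Under the hood, this kind of tolerance check reduces to a Euclidean distance comparison between face encodings: two faces are declared a match when the distance between their encoding vectors falls at or below the tolerance. A minimal sketch of that logic, using toy 3-dimensional vectors as stand-ins for the 128-dimensional encodings the CNN actually produces:

```python
import math

def face_distance(known_encoding, candidate_encoding):
    """Euclidean distance between two face encoding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(known_encoding, candidate_encoding)))

def is_match(known_encoding, candidate_encoding, tolerance=0.6):
    """Lower tolerance = stricter matching (fewer false positives)."""
    return face_distance(known_encoding, candidate_encoding) <= tolerance

known = [0.1, 0.2, 0.3]       # toy stand-in for a known face's encoding
candidate = [0.1, 0.2, 0.85]  # distance 0.55 from `known`
strict = is_match(known, candidate, tolerance=0.5)   # rejected at 0.5
lenient = is_match(known, candidate, tolerance=0.6)  # accepted at 0.6
```

The same candidate face can be accepted or rejected depending on the threshold, which is why sweeping tolerances of 0.5, 0.6, and 0.7 and validating against a trusted benchmark is a sensible way to pick an operating point.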
This research aims to understand the role of station ownership in local news stations’ discussion of vaccinations during the ongoing COVID-19 pandemic. Past research tells us that a large proportion of the US population gets public health information from local news, with those who get COVID-19 vaccine information from local news expressing greater intent to get a COVID-19 vaccine than those who did not get their information from local TV, regardless of how much they trusted the vaccine information (Piltch-Loeb et al. 2021; Nagler et al. 2020; Hamel et al. 2021; Gollust, Fowler, and Niederdeppe 2019). Clearly, local news has power in sharing public health information.
Our goal was to develop a dataset of Snapchat ads to enable the Wesleyan Media Project to research an area of ads that impacts a younger demographic and hasn’t been explored as thoroughly as platforms like Facebook. The Snapchat political ads library offers an interesting look into how political ads operate on a primarily video-based platform with a unique user base. By investigating the Snapchat political ad library, we hope to recognize differences between ads on different platforms and discover underlying trends in political entities’ behavior across platforms. We extracted text from the speech in ad videos and the text in ad images, and generated facial recognition results from the Snapchat 2020 political ad dataset. Additionally, we developed a classifier that predicts the party lean of a Snapchat ad using the content data we gathered. As a result of our research, we have made it possible for the Wesleyan Media Project to analyze Snapchat ads, and have taken steps toward predicting the party of future Snapchat ads.
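As a simplified sketch of the party-lean classification step, one common approach is to vectorize the extracted ad text with TF-IDF and fit a linear classifier. The ad texts and labels below are illustrative stand-ins, not our actual training data, and the real classifier draws on richer content features:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy ad texts with illustrative party labels
ads = [
    "protect medicare expand healthcare for working families",
    "climate action now green jobs for our future",
    "cut taxes defend the second amendment secure the border",
    "back the blue stop socialism lower taxes",
]
labels = ["DEM", "DEM", "REP", "REP"]

# TF-IDF features + logistic regression in one pipeline
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(ads, labels)

predictions = model.predict(["expand healthcare and climate action"])
```

Text extracted from video speech and from on-image text can simply be concatenated before vectorization, which is one reason text extraction was a prerequisite for the classifier.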
This research centers around discussions of race and racial justice in Facebook campaign advertisements run during the 2020 election cycle. More specifically, this research analyzed Facebook campaign advertisements run by 2020 presidential and Georgia Senate special candidates in the state of Georgia. 2020 marked a watershed in the contemporary fight for racial justice in the United States following the highly publicized murders of innocent Black people like George Floyd, Breonna Taylor, and Ahmaud Arbery. The Georgia Senate special elections in particular were a major talking point due to national campaigns led by people like former Georgia State Rep. Stacey Abrams and activist LaTosha Brown to increase Black votership across the state. By reading this blog post, audiences will gain a better understanding of how politicians discussed race and racial justice during this major moment in contemporary American history.
Our goal was to analyze the multiple classifiers that the Wesleyan Media Project has run on political advertisements and uncover the patterns that each classifier identified and utilized to make its predictions. The ABSA classifier works by analyzing the text of an ad for mentions of Joe Biden and Donald Trump and using sentiment analysis to predict which party the ad supports, while the Party All classifier works by running a machine learning method that uses hand-coded party training data to predict ad lean. By investigating how the classifiers actually work, we hope to enable the Wesleyan Media Project to improve the classification of advertisements and, perhaps more importantly, understand what the algorithms we utilize do. In other words, we want to turn our classifiers into something we understand and can explain, instead of a “black box.” The classifiers were run on the WMP’s set of ads from the 2020 election cycle. We analyze the trends and distributions in the set of classified ads to see the underlying patterns our classification algorithm is capturing. With these analyses, we hope to find sources of classification bias and error, and to explain why the classifier assigns an ad to a specific party. As a result of our research, we were able to improve the classifier’s accuracy over the whole election cycle and uncover trends associated with regionally concentrated ads.
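A stripped-down sketch of the ABSA-style logic helps show why it is explainable: score the sentiment of sentences mentioning each candidate, then infer support from the difference. The tiny keyword lexicon here is hypothetical; the actual classifier uses a trained sentiment model rather than word lists:

```python
# Hypothetical sentiment lexicon for illustration only
POSITIVE = {"support", "strong", "leader", "great", "protect"}
NEGATIVE = {"failed", "weak", "dangerous", "corrupt", "wrong"}

def sentence_sentiment(sentence):
    """Count positive minus negative words in a sentence."""
    words = sentence.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def predict_party(ad_text):
    """Aggregate sentiment toward each candidate: positive Biden sentiment
    (or negative Trump sentiment) suggests a Democratic ad, and vice versa."""
    scores = {"biden": 0, "trump": 0}
    for sentence in ad_text.lower().split("."):
        for candidate in scores:
            if candidate in sentence:
                scores[candidate] += sentence_sentiment(sentence)
    signal = scores["biden"] - scores["trump"]
    if signal > 0:
        return "DEM"
    if signal < 0:
        return "REP"
    return "UNCLEAR"

party = predict_party("Biden is a strong leader. Trump failed our country.")
```

Because each prediction decomposes into per-candidate sentiment scores, an analyst can trace exactly which sentences drove a classification, which is the kind of transparency this project aimed for.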
Abortion has emerged as a key polarizing issue for voters over the last few decades. Attitudes toward abortion predict voters’ decisions across levels of government––presidential, congressional, gubernatorial, lower offices––making abortion a matter of issue ownership for political parties (Jelen & Wilcox, 2003). Since the pro-life movement gained political traction in the 1980s, media attention on pro-choice vs. anti-abortion interest groups has consistently (a) linked the groups to distinct parties and (b) amplified party-specific positions in the mind of the American electorate (Carmines & Wagner, 2010). As such, pro-choice has become synonymous with the Democratic Party and anti-abortion with the Republican Party. In addition, long-term exposure to Facebook political advertisements about abortion and women’s healthcare may impact voter turnout in competitive congressional districts, particularly among women voters (Haenschen, 2022). The national conversation on abortion has become increasingly heated in the past election cycle, and abortion will only become a bigger issue when the Supreme Court rules on challenges to the landmark 1973 Roe v. Wade decision during the upcoming 2022 midterm election cycle (Hulse, 2021).
Since 2010, the Wesleyan Media Project has hand coded American political advertisements for an extensive list of variables relating to content and tone. The information collected through this process is insightful; however, the coding is time-intensive. Thus, the initial question we sought to answer was how much, if any, of this process could be automated in order to keep up with the scale of digital advertising. Our goal was to do so by training machine learning models on the text of the existing hand-coded ads in order to predict the characteristics of new ads. We tested a number of methods and ultimately found that our Random Forest worked best for binary issue variables, while a DistilBERT neural network worked best for multi-class variables, especially ad tone.
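A minimal sketch of the random-forest approach for one binary issue variable, using scikit-learn on toy ad text. The ads and labels below are illustrative stand-ins for the hand-coded training data, and the real pipeline involves far more examples and tuning:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy hand-coded data: does the ad mention healthcare? (1 = yes, 0 = no)
ads = [
    "we will protect your healthcare and lower drug prices",
    "medicare for all affordable insurance coverage",
    "cut taxes and grow small businesses",
    "secure the border and fund our police",
]
mentions_healthcare = [1, 1, 0, 0]

# TF-IDF text features feeding a random forest
model = make_pipeline(
    TfidfVectorizer(),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(ads, mentions_healthcare)

pred = model.predict(["affordable healthcare coverage for every family"])
```

One such model can be trained per binary variable, with predictions on new ads flagging only the uncertain cases for human review, which is how automation can scale the existing hand-coding effort.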