My PhD

Title:
Far-Right Extremism, Online Propaganda, and Hybrid Human-Automated Content Removal

External Stakeholder:
Facebook

The Research:
Online propaganda and radicalisation are widely regarded as pressing security challenges. Following attacks such as those in Christchurch, New Zealand, attention is increasingly focusing on the threat from the far-right as well as the threat from violent jihadist groups, and on the role played by social media platforms. One important factor in the continued growth of far-right extremism has been its utilisation of the internet in general and of social media in particular. Given the sheer volume of content posted on social media platforms each day, the use of technology to block and remove terrorist content is essential if removal efforts are to be effective; yet despite the work done in this area, online extremists retain a presence on these platforms to varying extents. At the same time, automated decision-making has its limitations. In particular, machines work with data and code; they do not attribute meaning (Hildebrandt 2018). These challenges are exacerbated in the context of the far-right. Unlike content associated with the so-called Islamic State, most far-right content is not branded. Moreover, there has been a shift in the Overton window, such that some powerful actors – including heads of state, major political parties, some traditional media organisations, and broad swathes of Western publics – identify with this content (Conway, 2020). The key to effectiveness therefore lies in the ability to improve hybrid human-automated decision-making (van der Vegt et al., 2019). This research will accordingly draw on theoretical models of such decision-making to improve its effectiveness in this context.
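To make the idea of hybrid human-automated decision-making concrete, the following is a minimal sketch of a triage pipeline in which high-confidence automated decisions are executed directly while ambiguous cases are escalated to a human moderator. The classifier scores, thresholds, and routing labels are hypothetical illustrations, not any platform's actual system.

```python
# Hypothetical sketch: route content by a classifier's confidence that it
# is terrorist/extremist material. Thresholds are illustrative only.

def triage(score: float,
           remove_threshold: float = 0.95,
           review_threshold: float = 0.60) -> str:
    """Return a routing decision for one item of content.

    High-confidence items are removed automatically; ambiguous items --
    precisely where machines struggle to attribute meaning -- are
    escalated to a human moderator; the remainder are left up.
    """
    if score >= remove_threshold:
        return "auto_remove"
    if score >= review_threshold:
        return "human_review"
    return "keep"

# Example: three posts with hypothetical classifier scores.
posts = {"post_a": 0.98, "post_b": 0.72, "post_c": 0.10}
decisions = {post_id: triage(score) for post_id, score in posts.items()}
```

In this sketch, lowering `review_threshold` widens the band of content sent to humans, trading moderator workload against the risk of wrongly automated removals, which is the balance the hybrid approach aims to tune.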

This project will take a comparative approach, focusing on two collectives: first, the radical-right group known as the Proud Boys; and second, trolling culture more broadly. Comparing these two collectives shows promise for uncovering new insights into the ever-evolving narratives and tactics employed by radical-right groups.

Data (social media posts, including images, text, and user metrics) will be analysed using a content analysis methodology. Content analysis combines qualitative and quantitative methods to critically analyse audio and visual material (Finch and Fafinski, 2012). Specific ideas, concepts, terms, themes, and other image characteristics will be identified and compared, allowing a detailed description, explanation, and analysis of the material. These categories will be generated through a careful reading of the data, rather than being pre-defined, ensuring an inductive approach. The coding categories generated by the content analysis will be recorded in a coding manual: a document containing instructions for the coder, so that the process of data analysis is specific, consistent, and repeatable. As the coding categories are applied to the data, a coding schedule will be created, i.e. a document recording the findings for each item in the sample. This will yield a quantitative dataset, from which conclusions will be drawn and presented using qualitative methods.
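The coding-manual-to-coding-schedule workflow described above could be sketched as follows. The categories, keyword rules, and sample posts here are entirely hypothetical illustrations; in the actual study the categories would emerge inductively from careful reading of the data rather than from keyword matching.

```python
# Hypothetical sketch: applying a coding manual to sampled posts to
# produce a coding schedule and a quantitative dataset.

# A coding manual maps each category to its coding rule; here simplified
# to illustrative keyword lists (real coding would follow written
# instructions applied by a human coder).
coding_manual = {
    "violence": ["attack", "fight"],
    "in_group_identity": ["brotherhood", "western"],
}

# Hypothetical sampled posts.
sample = [
    {"id": 1, "text": "Stand with the brotherhood"},
    {"id": 2, "text": "Time to fight back"},
]

def code_item(item, manual):
    """Apply every coding category to one item, recording presence/absence."""
    text = item["text"].lower()
    return {cat: any(kw in text for kw in kws) for cat, kws in manual.items()}

# The coding schedule: one row of findings per item in the sample.
coding_schedule = [{"id": it["id"], **code_item(it, coding_manual)}
                   for it in sample]

# Aggregating the schedule yields a quantitative dataset:
# frequency counts per coding category.
counts = {cat: sum(row[cat] for row in coding_schedule)
          for cat in coding_manual}
```

Keeping the manual, schedule, and aggregation as separate steps mirrors the methodology's aim of making the analysis specific, consistent, and repeatable: a second coder given the same manual should reproduce the same schedule.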