
Artificial Intelligence (AI) has a crucial role to play in identifying terrorist content online, but human input remains essential to protect human rights and prevent legitimate groups from being wrongly targeted, according to a new report by a team including Swansea University experts.

Published by the Tech Against Terrorism Europe project, the report provides practical guidance for tackling terrorist content online fairly and responsibly while respecting human rights. 

The report underlines that AI tools are essential for detecting terrorist content amid the growing volume of material posted online every day. Every minute, on average, Facebook users share 694,000 stories, X (formerly Twitter) users publish 360,000 posts, Snapchat users send 2.7 million snaps and YouTube users upload over 500 hours of new content.

Platforms also now have a legal responsibility to take down terrorist content swiftly in order to comply with the EU's 2021 Terrorist Content Online Regulation.

Given this background, it is not surprising that many platforms are expanding their use of AI and automated approaches to detect terrorist content.

However, the report makes clear that relying on automated approaches alone has drawbacks.

For example, most automated content-based tools work either by matching images and videos against a database of known material or by using machine learning to classify content. Both approaches have limitations: it is difficult to compile suitable datasets for training the algorithms, and the resulting tools can lack cultural sensitivity, including an understanding of variations in dialect and language use across different groups of English speakers.
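To make the database-matching approach concrete, the sketch below shows a toy perceptual-hash matcher in Python. It is illustrative only and is not taken from the report: the average-hash function, the Hamming-distance threshold and the `known_hashes` set are simplifying assumptions, whereas production systems rely on robust hashing and shared industry resources such as the GIFCT hash-sharing database.

```python
# Illustrative only: a toy "match against a database of known material" check.
# Real platforms use robust perceptual hashes and shared hash databases;
# this sketch uses a simple 64-bit average hash for clarity.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size, greyscale, then set one bit per pixel
    that is brighter than the mean: a crude 64-bit fingerprint."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


def matches_known_content(path: str, known_hashes: set[int],
                          threshold: int = 5) -> bool:
    """Flag an upload whose fingerprint is within `threshold` bits of
    any entry in a (hypothetical) database of known terrorist material."""
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

A matcher of this kind can catch re-uploads of material already in the database but cannot recognise novel content, which is why platforms pair it with machine-learning classifiers and, as the report stresses, human review.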

The authors also warn that relying solely on automated approaches risks a disproportionate impact on marginalised groups and activists, whose content may be wrongly labelled as terrorist.

This is why the report calls for human oversight and appropriate accountability mechanisms, alongside the use of AI tools.  

Read the report: Using Artificial Intelligence and Machine Learning to Identify Terrorist Content Online 

Professor Stuart Macdonald of Swansea University School of Law, lead author of the report, said:

“Automated content moderation tools are essential, given the sheer volume of content posted online. But this must not be allowed to obscure the ongoing importance of human review and oversight. As well as ensuring a sufficient number of human reviewers, it is also crucial that reviewers have the necessary expertise and that companies make suitable provision to safeguard their wellbeing.”

The report’s overall recommendations are:   

  1. Augmented intelligence in content moderation: Develop minimum standards and promote AI tools to support content moderators' wellbeing. 
  2. Choose automation products wisely: Small platforms should carefully assess third-party solutions and explore collaborations to augment capacity.   
  3. Burden-share: Enable knowledge sharing and open development of automated moderation tools across the industry. 

Dr Ashley Mattheis of Dublin City University School of Law and Government, a co-author of the report, said: 

“Companies of all sizes lack the capacity needed to tackle terrorist content online effectively. It follows that collaboration and tool sharing—both peer to peer support and large platforms’ support of small to medium sized platforms—are essential components of developing robust content moderation, best practice responses to terrorist and extremist content, and industry-wide capacity-building.”

The Tech Against Terrorism Europe project, funded by the EU, is a consortium of organisations charged with raising awareness of and compliance with the EU’s Terrorist Content Online Regulation. 

As a member of the consortium, Swansea University’s Cyber Threats Research Centre (CYTREC) is conducting original research to identify online platforms that are being exploited by terrorist groups and their supporters, and developing recommendations to help boost the resilience of smaller companies that may currently lack the capacity to detect and remove terrorist content from their platforms.

The report was authored by Professor Stuart Macdonald of Swansea University, Dr Ashley Mattheis of Dublin City University and David Wells of Swansea University. 
