A new report provides the first comprehensive analysis of how AI chatbots are reshaping violence against women and girls (VAWG) in dangerous new ways.

The report strengthens calls for Government action on AI chatbots in the Policing and Crime Bill.

The report, Invisible No More: How AI Chatbots are Reshaping Violence Against Women and Girls, finds that AI chatbots are generating new forms of violence against women and girls, and - for the first time - shows how platforms are enabling and encouraging gender-based violence through deliberate design choices and failures in safety mechanisms.

It identifies significant gaps in regulation and platform governance, with recommendations for reform of the Online Safety Act, criminal law and product safety legislation, as well as a new AI Act.

The report finds:

  • AI chatbots allow roleplays of incest, child sexual abuse and rape with few safeguards, risking the normalisation and legitimisation of this abuse;
  • AI chatbots are creating new forms of violence and abuse, such as chatbot-driven abuse and simulations, requiring urgent action;
  • AI chatbots are intensifying abuse such as stalking with detailed and personalised guidance, likely to escalate offending;
  • AI platform design choices, policies and governance failures are encouraging and enabling violence against women and girls; these harms are not simply the result of user misuse;
  • Existing regulation is wholly inadequate to prevent and address chatbot-VAWG; and,
  • There is a shocking lack of research into how AI chatbots are implicated in violence against women and girls, raising significant concerns about the evidence base for future AI regulation.

The authors say the report makes visible the very real harms and threats to the freedom and safety of women and girls.

Leading expert on violence against women and girls Professor Clare McGlynn said: “Our report warns that chatbot-VAWG represents a rapidly escalating threat. Without early intervention, these harms risk becoming entrenched and scaling quickly, mirroring the trajectory of other forms of tech-facilitated abuse such as deepfake and nudify apps, where early warnings were largely ignored. We must not make the same mistakes again.”

Principal Investigator on the project Professor Yvonne McDermott Rees, from the Hillary Rodham Clinton School of Law at Swansea University, said: “Crucially, existing legal regulation is patchy in its application to chatbot-related VAWG. Our report recommends the adoption of a new AI Safety Act, the creation of an online safety regulator, and the establishment of a right of action for AI harms to ensure victims can get the justice they deserve.”

Her colleague Professor Stuart Macdonald, founder and co-director of Swansea University’s Cyber Threats Research Centre (CYTREC), added: “Existing criminal laws fail to cover the full range of chatbot-VAWG harms. We therefore recommend a new criminal offence of dangerous deployment of an AI chatbot.”

The research has been funded by UK Research and Innovation (UKRI) and is co-authored by Professor McGlynn (Durham University), Professor McDermott Rees, Professor Macdonald, Rüya Tuna Toparlak (Lucerne University), independent consultant Fabienne Tarrant and Dr Samantha Treacy (Swansea University).
