My PhD

Title:
Participatory Design Approach to Delivering a Fair, Human-in-the-Loop, Decision Support System in the Critical Care Hub of EMRTS

External Stakeholder: 
Emergency Medical Retrieval and Transfer Service (EMRTS)

The Emergency Medical Retrieval and Transfer Service (EMRTS) Cymru was established in April 2015 to provide rapid pre-hospital care across Wales. It has four bases operated by the Welsh Air Ambulance Charitable Trust (WAACT) at Caernarfon, Welshpool, Llanelli and Cardiff with a control centre (Critical Care Hub) co-located with the Welsh Ambulance Service NHS Trust (WAST) in Cwmbran. In addition to air assets, it controls deployment of a fleet of Rapid Response Vehicles (RRVs). The critical care service is delivered by Consultants (from Emergency Medicine, Anaesthesia and Intensive Care Medicine) and EMRTS Critical Care Practitioners (CCPs).

The Research:
This work will seek to design, deploy and test an Artificial Intelligence (AI) Machine Learning (ML) based clinical decision support system at EMRTS’s Critical Care Hub (CCH) to assist in task prioritisation – whether and how to respond with EMRTS resources. It aims to find out whether AI can provide a decision support tool that learns from previous taskings and reduces variability in practice and outcomes. Task allocations result from an assessment, filtering and prioritisation process that begins with ~300,000 calls; during 2019, this process tasked emergency crews to over 3,500 emergencies.

Machine Learning systems that provide decision support must be carefully designed, validated and monitored in order to avoid unintended harms. However, in reducing variability in practice and outcomes, there are clear complementary benefits to algorithmic decision support: machines do not become tired, hungry, bored or distracted, and can take into account factors that are orders of magnitude greater in volume than human experts can. They can also potentially capture and utilise the long-held experience of a multitude of experts, avoiding variations due to differences in experience or expertise between individuals, and the loss of on-hand experience when staff are unavailable or leave a team. On the other hand, like people, algorithms are vulnerable to biases that could render their decisions prejudicial or harmfully inaccurate.

Research into fair (accountable and transparent) AI/ML systems has attracted considerable attention in recent years. Whilst the ethics of fairness in healthcare is uncontroversial, practised by medics and embraced by society, it is not yet understood how to embed these values in AI/ML algorithms, despite a proliferation of frameworks and design principles. As with much of the research on intelligent systems, there has been insufficient focus on real-world implementation and the associated challenges to fairness, transparency and accountability.

In this project the decision support tool will need to be a socio-technical system, requiring integration into existing social, organisational and professional contexts. The primary aim of the proposed AI/ML tool will be to augment the decision-making of practitioners in the Critical Care Hub, complementing their intuition, skill and expertise.