My PhD

Title:
Developing Trust in Algorithmically Driven Services by Enhancing Explainable and Fair Machine Learning Through Co-Creation and Customer Interactions

External Stakeholder:
Starling Bank

The Research:
This project will consider ways of ensuring that end-users (the customers) understand, and are satisfied with, the reasons behind the way services are presented and offered to them. A particular focus will be ensuring that interactions between the customer and the service appear just, inclusive, transparent and fair, uncovering potentially nuanced biases along the way. To achieve this, the project will adopt an ambitious mixed-methods approach that connects the end-user to FairML algorithm development. Furthermore, it will challenge the unidirectional assumptions of current XAI research and embrace an interactive ‘conversation’ and co-creation of FairML to enhance trust.

First, using participatory design and co-creation methods, the research will challenge and shape what counts as acceptable fairness in algorithmic approaches, with a focus on how the customer can come to trust the service delivery (e.g. overdraft or loan approvals). For example, this may define fairness as parity of outcome (a customer focus: changing living conditions, supporting social mobility, etc.) rather than parity of opportunity (a bank focus: managing the financial risks of repayment). This co-design could also consider ethical design decisions in cases where there is a ‘real’ (statistically accurate) difference between groups, rather than a legacy or artefact of human bias, historical data or other sources of bias. For example, the choice to make insurance premiums independent of gender was an ethical one: it was factually accurate that, on average and from the insurance firm’s point of view, female drivers were lower risk.
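To make the contrast concrete, here is a minimal Python sketch on synthetic loan-approval data; the group labels, approval thresholds and gap functions are illustrative assumptions, not Starling's models. One common formalisation treats parity of outcome as demographic parity (equal approval rates across groups) and parity of opportunity as equal true-positive rates (equal approval among customers who would in fact have repaid):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: protected group, true repayment behaviour,
# and the model's approval decisions (deliberately skewed by group).
group = rng.integers(0, 2, size=1000)
repaid = rng.random(1000) < 0.7
approved = rng.random(1000) < np.where(group == 1, 0.55, 0.65)

def demographic_parity_gap(approved, group):
    """Parity of outcome: difference in approval rates between groups."""
    return abs(approved[group == 0].mean() - approved[group == 1].mean())

def equal_opportunity_gap(approved, repaid, group):
    """Parity of opportunity: difference in approval rates among
    customers who would in fact have repaid."""
    tpr = [approved[(group == g) & repaid].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

print(f"demographic parity gap: {demographic_parity_gap(approved, group):.3f}")
print(f"equal opportunity gap:  {equal_opportunity_gap(approved, repaid, group):.3f}")
```

Which of these gaps customers actually care about, and how large a gap they would tolerate, is exactly the kind of question the co-design work would put to them.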

Second, working with data scientists at Starling, the research will design, test and evaluate ways in which implicit or explicit interactions with customers (including AI explanation techniques) can drive the performance of fair algorithms that deliver services. This could include:

- development and adoption of specific fairness metrics (often supervised) with customer consent;
- unfairness discovery through unsupervised learning approaches, and how it relates to customers’ perception of fairness or trust in algorithmically driven service delivery (see the first sketch after this list);
- empirical assessment of whether proxy variables carry desirable discriminative value or undesirable bias (see the second sketch);
- considering averages versus full distributions, and the disproportionate impact on people in protected characteristic groups.
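As an illustration of the unsupervised route (the first sketch), the code below clusters customers on non-protected features and then audits approval rates and protected-group composition per cluster; the data, feature count and scikit-learn KMeans setup are hypothetical, not Starling's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n = 1000
features = rng.normal(size=(n, 4))   # non-protected features only
group = rng.integers(0, 2, size=n)   # protected attribute, held out of clustering
approved = rng.random(n) < 0.6       # the model's approval decisions

# Cluster without the protected attribute, then audit each cluster.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
for c in range(4):
    mask = clusters == c
    print(f"cluster {c}: approval rate {approved[mask].mean():.2f}, "
          f"share of group 1 {group[mask].mean():.2f}")
```

A cluster with a markedly lower approval rate and a skewed protected-group composition is a candidate site of unfairness to bring back to customers.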
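For the proxy-variable assessment (the second sketch), one simple empirical probe is to test how well a candidate proxy alone predicts the protected attribute; the income-index proxy below and its correlation with the group are fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, size=n)
# Hypothetical postcode-derived income index, correlated with the
# protected attribute by construction.
proxy = rng.normal(loc=group * 0.8, scale=1.0)

# If the proxy alone predicts group membership well above chance (AUC 0.5),
# removing the explicit attribute leaves the bias pathway open.
clf = LogisticRegression().fit(proxy.reshape(-1, 1), group)
auc = roc_auc_score(group, clf.predict_proba(proxy.reshape(-1, 1))[:, 1])
print(f"AUC of proxy predicting protected attribute: {auc:.2f}")
```

A high AUC indicates redundant encoding: dropping the explicit protected attribute would not remove the bias, so the ethical question shifts to whether the proxy's discriminative value is desirable.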