User-Centric Visualisation of the Top Activations in a Deep Neural Network for Explainability

Abstract:

This project uses a pre-trained model to investigate and implement a tool that helps give meaning to how a black-box machine learning neural network has made its decisions when trained on landscape mapping data. A Graphical User Interface (GUI) was implemented that visualises the steps taken by the neural network and shows how it arrives at its decisions. These steps are visualised at several points by running test data through the existing trained model, and the results are presented to the user to convey the meaning of the decisions the model has taken. The project also investigates high-dimensional datasets and experiments with how best to display them in a GUI, enabling user-friendly insight into how a neural network model makes its decisions. Giving meaning to how neural networks reach decisions at the node level could allow further optimisations of, and insights into, landscape datasets.
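As a minimal sketch of the core idea behind such a tool, the snippet below finds the most strongly activated units in one layer's activation vector, the kind of "top activations" a GUI might then render per node. It assumes the layer's activations for a test sample are already available as a plain array (the variable names and the helper `top_k_activations` are illustrative, not from the thesis):

```python
import numpy as np

def top_k_activations(activations, k=3):
    """Return (unit index, activation value) pairs for the k most
    strongly activated units in a single layer's activation vector."""
    acts = np.asarray(activations, dtype=float)
    # Sort indices by activation, descending, and keep the top k.
    top = np.argsort(acts)[::-1][:k]
    return [(int(i), float(acts[i])) for i in top]

# Hypothetical activations for one hidden layer on one test sample.
layer_acts = [0.1, 2.7, 0.0, 1.3, 0.9]
print(top_k_activations(layer_acts, k=2))  # -> [(1, 2.7), (3, 1.3)]
```

A visualisation front end would repeat this per layer and per test sample, mapping the returned unit indices back onto a diagram of the network so the user can see which nodes drove a given decision.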

Download Michael's Thesis