Introducing Explainability to Pointcloud Place Recognition for AI Assisted Shoreline Navigation

Abstract

Global localization via image-based querying and landmark detection has been a successful area of AI research for several years. However, one persistent issue in this field is the naturally large variance in imagery of a location over time, whether due to changes in weather, camera angle, or lighting (for example, an image taken at night rather than during the day). For this reason, many researchers have turned to localization based on LIDAR-gathered 3D pointcloud data: because these clouds capture the structure of a location rather than its appearance, they are far less affected by the factors above. We set out not only to identify a pointcloud-based place recognition model that can aid in the real-world problem of shoreline vessel navigation in the absence of GPS, but to do so in a way that provides a good deal of explainability to the user, so that they can interpret the machine's decision. Explainability is an important aspect of applying AI to real-world use cases: a model will often produce an uncertain result that, if wrong, can lead to a loss of trust, abandonment of the system, and potential accidents. Providing an explanation in these cases helps the user better understand the model's reasoning and reveals the cause of any fault.
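
To make the retrieval idea concrete, the sketch below shows pointcloud place recognition framed as nearest-neighbour search over global descriptors, with the similarity score exposed as a simple signal the user can inspect. This is a minimal illustration, not the thesis implementation: the toy range-histogram descriptor, the function names, and the 100 m range cap are all assumptions made for the example; a real system would use a learned encoder to produce the descriptors.

```python
# Minimal sketch (not the thesis implementation): place recognition as
# nearest-neighbour retrieval over global pointcloud descriptors.
import numpy as np

def global_descriptor(pointcloud: np.ndarray, bins: int = 32) -> np.ndarray:
    """Toy descriptor: L2-normalised histogram of point ranges.
    `pointcloud` is an (N, 3) array of x, y, z coordinates.
    A learned encoder would replace this in practice."""
    ranges = np.linalg.norm(pointcloud, axis=1)
    hist, _ = np.histogram(ranges, bins=bins, range=(0.0, 100.0))
    hist = hist.astype(np.float64)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def recognise_place(query_cloud: np.ndarray,
                    map_descriptors: np.ndarray,
                    map_locations: list):
    """Match a query scan against a database of map descriptors.
    Returns the best-matching location and a cosine-similarity score
    that can be surfaced to the user as a rough confidence estimate."""
    q = global_descriptor(query_cloud)
    similarities = map_descriptors @ q  # cosine similarity (unit vectors)
    best = int(np.argmax(similarities))
    return map_locations[best], float(similarities[best])
```

Surfacing the similarity score (and, in a fuller system, which parts of the cloud drove the match) is the kind of information an explainable interface would present so the user can judge whether to trust the result.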

Download Luke's Thesis

My MSc Project