Stay Local or Go Global – A Comparison of Preference of Explainability

Abstract:

Machine learning has been at the core of many recent technological advancements, yet the role of the human is sometimes overlooked. The immediate problem is that if people do not trust a model, they will not use it, and a large part of building that trust is understanding what the machine does and why it behaves the way it does. The technology acceptance model holds that a technology must be understood and accepted in households before it can be widely adopted across society. In recent years, demand for cloud-based services has risen sharply, driving a rapid increase in Internet traffic volumes and topological complexity and creating new requirements for the accurate classification of applications and Internet traffic. As machine learning continues to grow and take on socially important decision making, interpretability remains a critical concern, particularly for predictive models; a suboptimal model can have substantial societal implications [5]. This study employed a mixed-methods approach. Participants first completed a pre-study questionnaire to gather information about their attitudes towards technology. Three separate neural networks were then trained on the darknet dataset to classify traffic. Participants then watched a presentation video and completed a second questionnaire to establish whether global or local explanation methods are preferred when understanding what each model contributes.
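
The central contrast in the abstract is between global explanations (how the model behaves overall) and local explanations (why it made one particular prediction). The sketch below is only illustrative: it assumes a scikit-learn classifier and synthetic data in place of the thesis's three neural networks and darknet dataset, and compares permutation importance (a global method) with a simple per-instance perturbation attribution (a crude, model-agnostic stand-in for local methods such as LIME or SHAP).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for the darknet traffic features used in the thesis.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Global explanation: permutation importance summarises which features
# matter for the model's behaviour across the whole test set.
global_imp = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
print("Global feature importances:", global_imp.importances_mean.round(3))

# Local explanation: perturb one feature at a time for a single instance
# and measure how the predicted probability shifts.
instance = X_test[0:1]
baseline = model.predict_proba(instance)[0, 1]
local_effects = []
for j in range(X.shape[1]):
    perturbed = instance.copy()
    perturbed[0, j] = X_train[:, j].mean()  # swap in an "average" value
    local_effects.append(baseline - model.predict_proba(perturbed)[0, 1])
print("Local attributions for one instance:", np.round(local_effects, 3))
```

The global output ranks features by their importance to the model as a whole, while the local output attributes a single prediction to individual feature values, which is the distinction participants were asked to weigh in the second questionnaire.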
