[1911.12116] Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey

Deep Learning is a state-of-the-art technique for making inferences on extensive or complex data. Because of their multilayer nonlinear structure, Deep Neural Networks are black box models and are often criticized as non-transparent, with predictions that humans cannot trace. Furthermore, the models learn from artificial datasets, which often contain bias or contaminated, discriminating content. As decision-making algorithms become more widespread, they can contribute to promoting prejudice and unfairness, which is hard to notice due to this lack of transparency. Hence, scientists have developed several so-called explanators, or explainers, which try to point out the connection between input and output in order to represent the inner structure of machine learning black boxes in a simplified way. In this survey we differentiate the mechanisms and properties of explaining systems for Deep Neural Networks in Computer Vision tasks. We give a comprehensive overview of the taxonomy of related studies and compare several survey papers.
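To make the idea of "pointing out the connection between input and output" concrete, here is a minimal sketch (not from the paper) of one classic explainer family such surveys cover: a vanilla gradient saliency map. It assumes PyTorch and torchvision are available; the ResNet-18 model and the random stand-in image are illustrative choices only.

```python
# Illustrative sketch: vanilla gradient saliency for an image classifier.
# Assumes PyTorch + torchvision; model choice (ResNet-18) is hypothetical.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Random stand-in for a preprocessed 224x224 RGB image (batch of 1).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass; explain the top predicted class.
logits = model(image)
top_class = logits.argmax()

# Backpropagate the top logit down to the input pixels.
logits[0, top_class].backward()

# Saliency: per-pixel influence = max absolute gradient over color channels.
saliency = image.grad.abs().max(dim=1)[0].squeeze()  # shape: (224, 224)
print(saliency.shape)
```

High saliency values mark pixels whose small perturbations most change the predicted class score, which is the simplified input-to-output link that gradient-based explainers expose.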

