[2009.05041v1] Understanding the Role of Individual Units in a Deep Neural Network

Deep neural networks excel at finding hierarchical representations that solve complex tasks over large data sets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework for systematically identifying the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scenes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) trained to generate scenes. By analyzing the changes made when small sets of units are activated or deactivated, we find that objects can be added to and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.
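The core of the dissection step described above is scoring how well each unit's activation pattern overlaps a labeled concept, typically via intersection-over-union (IoU) between the thresholded activation map and a concept's segmentation mask. The following is a minimal sketch of that idea on toy data; the concept names, map sizes, and threshold here are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def unit_concept_iou(activation, concept_mask, threshold):
    """IoU between a unit's thresholded activation map and a binary concept mask."""
    unit_mask = activation > threshold
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

def best_concept(activation, concept_masks, threshold):
    """Label a unit with the concept whose mask best overlaps its activations."""
    scores = {name: unit_concept_iou(activation, mask, threshold)
              for name, mask in concept_masks.items()}
    return max(scores, key=scores.get), scores

# Toy example: an 8x8 "activation map" for one unit that fires on the
# top half of the image, and two hypothetical concept masks.
activation = np.zeros((8, 8))
activation[:4, :] = 1.0
concepts = {
    "sky":   np.zeros((8, 8), dtype=bool),
    "grass": np.zeros((8, 8), dtype=bool),
}
concepts["sky"][:4, :] = True     # sky occupies the top half
concepts["grass"][4:, :] = True   # grass occupies the bottom half

label, scores = best_concept(activation, concepts, threshold=0.5)
print(label)  # sky
```

A unit whose thresholded activations line up with the "sky" mask gets a high IoU for that concept and is labeled accordingly; in the GAN setting, the same per-unit view motivates the intervention experiments, where zeroing or boosting a labeled unit's channel removes or adds the corresponding object in the generated scene.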

Date: 2020/09/14 11:22
