PUBLICATIONS

Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations

Harradon, M., Druce, J., and Ruttenberg, B.

arXiv:1802.00541v1 (February 2018)

Deep neural networks are complex and opaque. As they are deployed in a variety of important and safety-critical domains, users need methods to explain their output predictions. We develop an approach to explaining deep neural networks by constructing causal models over salient concepts learned by a CNN. We extract these concepts throughout a target network using autoencoders trained to produce human-understandable representations of network activations. We then build a Bayesian causal model that treats the extracted concepts as variables in order to explain image classification. Finally, we use this causal model to identify and visualize the features with significant causal influence on the final classification.
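
The sketch below illustrates the first step of the approach under stated assumptions: an autoencoder is trained to reconstruct the activations of one intermediate CNN layer, so that its low-dimensional code serves as a set of candidate concepts. This is not the authors' implementation; the backbone (a ResNet-18), the layer choice, the code size, and the training loop are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): learn a concept code for one CNN layer.
import torch
import torch.nn as nn
import torchvision.models as models

cnn = models.resnet18(weights=None).eval()

# Capture the target layer's activations with a forward hook.
acts = {}
cnn.layer3.register_forward_hook(
    lambda module, inp, out: acts.update(feat=out.detach())
)

class ConceptAutoencoder(nn.Module):
    """Compress per-location activation vectors into a small concept code."""
    def __init__(self, channels=256, n_concepts=16):
        super().__init__()
        self.encode = nn.Conv2d(channels, n_concepts, kernel_size=1)
        self.decode = nn.Conv2d(n_concepts, channels, kernel_size=1)

    def forward(self, a):
        z = torch.relu(self.encode(a))  # concept activations
        return self.decode(z), z

ae = ConceptAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

# Reconstruction training; random images stand in for a real dataset.
for _ in range(100):
    x = torch.randn(8, 3, 224, 224)
    with torch.no_grad():
        cnn(x)  # populates acts["feat"] via the hook
    recon, _ = ae(acts["feat"])
    loss = nn.functional.mse_loss(recon, acts["feat"])
    opt.zero_grad()
    loss.backward()
    opt.step()
```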

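Continuing the sketch above, a simple interventional probe can stand in for the paper's Bayesian causal model: ablate one concept channel (a do-style intervention), decode back to activations, re-run the remainder of the network, and measure the shift in a class logit. The ablation and the decomposition of the network head are illustrative assumptions, not the authors' procedure.

```python
# Sketch: estimate one concept's causal influence on a class logit by ablation.
def concept_effect(x, concept_idx, class_idx):
    """Mean drop in the class logit when one concept is set to zero."""
    with torch.no_grad():
        cnn(x)  # populates acts["feat"] via the hook
        _, z = ae(acts["feat"])
        z_do = z.clone()
        z_do[:, concept_idx] = 0.0  # intervention: do(concept = 0)

        def head(a):  # the network after layer3, applied to edited activations
            h = cnn.avgpool(cnn.layer4(a)).flatten(1)
            return cnn.fc(h)

        baseline = head(ae.decode(z))
        intervened = head(ae.decode(z_do))
    return (baseline[:, class_idx] - intervened[:, class_idx]).mean().item()

# Example: influence of concept 3 on class 0 for a batch of random images.
effect = concept_effect(torch.randn(4, 3, 224, 224), concept_idx=3, class_idx=0)
```

Concepts whose ablation produces the largest logit shifts are the natural candidates for the visualization step described in the abstract.
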
For More Information

To learn more or request a copy of a paper (if available), contact Michael Harradon.

(Please include your name, address, organization, and the paper reference. Requests without this information will not be honored.)