Conference
Proceedings 2018 IEEE Pacific Visualization Symposium (PacVis 2018), 2018, pp. 180-189
APA
Nie, S., Healey, C. G., Padia, K., Leeman-Munk, S. P., Benson, J., Caira, D., … Devarajan, R. (2018). Visualizing deep neural networks for text analytics. In Proceedings 2018 IEEE Pacific Visualization Symposium (PacVis 2018) (pp. 180–189).
BibTeX
@conference{s2018a,
title = {Visualizing deep neural networks for text analytics},
year = {2018},
pages = {180-189},
author = {Nie, S. and Healey, C. G. and Padia, K. and Leeman-Munk, S. P. and Benson, J. and Caira, D. and Sethi, S. and Devarajan, R.},
booktitle = {Proceedings 2018 IEEE Pacific Visualization Symposium (PacVis 2018)}
}
Deep neural networks (DNNs) have made tremendous progress in many different areas in recent years. How these networks function internally, however, is often not well understood. A better understanding of DNNs would benefit and accelerate development in the field. We present TNNVis, a visualization system that supports the understanding of deep neural networks designed specifically for text analysis. TNNVis focuses on DNNs composed of fully connected and convolutional layers. It integrates visual encodings and interaction techniques chosen specifically for our tasks. The tool allows users to: (1) visually explore DNN models with arbitrary input using a combination of node–link diagrams and matrix representations; (2) quickly identify activation values, weights, and feature map patterns within a network; (3) flexibly focus on visual information of interest with threshold, inspection, insight query, and tooltip operations; (4) discover network activation and training patterns through animation; and (5) compare differences between internal activation patterns for different inputs to the DNN. These functions allow neural network researchers to examine their DNN models from new perspectives, producing insights into how these models function. Clustering and summarization techniques are employed to support large convolutional and fully connected layers. Based on several part-of-speech models of varying structure and size, we present multiple use cases in which visualization facilitates an understanding of the models.
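To make the visualized quantities concrete, the following is a minimal sketch (pure Python, with hypothetical weights and layer names, not the paper's actual implementation) of the kind of internal data such a tool renders: the per-layer activations of a small convolutional + fully connected text model, captured during a forward pass.

```python
# Hypothetical illustration: record every intermediate activation of a tiny
# text model (1D convolution over token embeddings, ReLU, max-over-time
# pooling, fully connected layer). These recorded values are the raw
# material for activation and feature-map visualizations.

def relu(xs):
    """Element-wise rectified linear unit."""
    return [max(0.0, v) for v in xs]

def conv1d(embeddings, kernels):
    """Slide each kernel (width x embedding_dim) over the token
    embeddings, producing one feature map per kernel."""
    maps = []
    for kernel in kernels:
        width = len(kernel)
        feature_map = []
        for i in range(len(embeddings) - width + 1):
            s = 0.0
            for j in range(width):
                for d in range(len(embeddings[0])):
                    s += kernel[j][d] * embeddings[i + j][d]
            feature_map.append(s)
        maps.append(feature_map)
    return maps

def dense(xs, weights):
    """Fully connected layer: one output per weight row."""
    return [sum(w * v for w, v in zip(row, xs)) for row in weights]

def forward_with_activations(embeddings, kernels, fc_weights):
    """Run the forward pass, keeping each layer's output by name."""
    acts = {}
    acts["conv"] = conv1d(embeddings, kernels)
    acts["relu"] = [relu(fm) for fm in acts["conv"]]
    acts["pool"] = [max(fm) for fm in acts["relu"]]  # max-over-time pooling
    acts["fc"] = dense(acts["pool"], fc_weights)
    return acts
```

For example, with four 2-dimensional token embeddings and two width-2 kernels, `forward_with_activations` yields two feature maps of length 3, a pooled vector of length 2, and the fully connected outputs; a visualization layer would then map these values to color, matrix cells, or node–link glyphs.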