Auditing visualizations: Transparency methods struggle to detect anomalous behavior

JS Denain, J Steinhardt - arXiv preprint arXiv:2206.13498, 2022 - arxiv.org
Model visualizations provide information that outputs alone might miss. But can we trust that model visualizations reflect model behavior? For instance, can they diagnose abnormal behavior such as planted backdoors or overregularization? To evaluate visualization methods, we test whether they assign different visualizations to anomalously trained models and normal models. We find that while existing methods can detect models with starkly anomalous behavior, they struggle to identify more subtle anomalies. Moreover, they often fail to recognize the inputs that induce anomalous behavior, e.g. images containing a spurious cue. These results reveal blind spots and limitations of some popular model visualizations. By introducing a novel evaluation framework for visualizations, our work paves the way for developing more reliable model transparency methods in the future.
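To make the evaluation idea concrete, here is a minimal sketch of how one might test whether a visualization method assigns different visualizations to a normal and an anomalously trained model. The visualization method (plain input-gradient saliency), the function names, and the fixed distance threshold are all illustrative assumptions, not the paper's actual metrics or code; the paper's framework compares against the variation among normally trained models rather than a hard cutoff.

```python
# Illustrative sketch, assuming PyTorch models and input-gradient
# saliency as the visualization method. All names and the threshold
# below are hypothetical stand-ins for the paper's evaluation setup.
import torch
import torch.nn.functional as F

def input_gradient_saliency(model, x, target):
    """Simple visualization: gradient of the target-class logit w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[torch.arange(len(x)), target].sum().backward()
    return x.grad.detach()

def visualization_distance(model_a, model_b, x, target):
    """Mean cosine distance between the two models' saliency maps."""
    sal_a = input_gradient_saliency(model_a, x, target).flatten(1)
    sal_b = input_gradient_saliency(model_b, x, target).flatten(1)
    return (1.0 - F.cosine_similarity(sal_a, sal_b, dim=1)).mean().item()

def looks_anomalous(reference_model, candidate_model, x, target, threshold=0.5):
    """Flag the candidate if its visualizations diverge from the reference.

    `threshold` is a made-up cutoff for illustration; in practice it would
    be calibrated against the spread of normally trained models.
    """
    return visualization_distance(reference_model, candidate_model, x, target) > threshold
```

Under this framing, a visualization method with the blind spots the paper describes would yield small distances even when the candidate model harbors a backdoor, so `looks_anomalous` would fail to flag it.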