
Amortized Bayesian Model Comparison with Evidential Deep Learning

Stefan T. Radev, Marco D’Alessandro, Ulf K. Mertens, Andreas Voss, Ullrich Köthe, Paul-Christian Bürkner

IEEE Transactions on Neural Networks and Learning Systems (TNNLS), pp. 1–12, 2021.


Abstract

Comparing competing mathematical models of complex processes is a shared goal among many branches of science. The Bayesian probabilistic framework offers a principled way to perform model comparison and extract useful metrics for guiding decisions. However, many interesting models are intractable with standard Bayesian methods, as they lack a closed-form likelihood function or the likelihood is computationally too expensive to evaluate. In this work, we propose a novel method for performing Bayesian model comparison using specialized deep learning architectures. Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset. Moreover, it requires no hand-crafted summary statistics of the data and is designed to amortize the cost of simulation over multiple models, datasets, and dataset sizes. This makes the method especially effective in scenarios where model fit needs to be assessed for a large number of datasets, so that case-based inference is practically infeasible. Finally, we propose a novel way to measure epistemic uncertainty in model comparison problems. We demonstrate the utility of our method on toy examples and simulated data from nontrivial models from cognitive science and single-cell neuroscience. We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work. We argue that our framework can enhance and enrich model-based analysis and inference in many fields dealing with computational models of natural processes. We further argue that the proposed measure of epistemic uncertainty provides a unique proxy to quantify absolute evidence even in a framework which assumes that the true data-generating model is within a finite set of candidate models.
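To make the idea concrete, below is a minimal, hedged sketch of the kind of evidential network the abstract describes. All specifics here are our own assumptions for illustration, not taken from the paper: the two toy candidate models (Gaussians with different scales), the Deep-Set style pooling, the network sizes, and the loss. The sketch trains a permutation-invariant network that maps a simulated dataset to Dirichlet concentration parameters over candidate models; the Dirichlet mean gives approximate posterior model probabilities, and M / sum(alpha) serves as a proxy for epistemic uncertainty, as in standard evidential deep learning.

# Hedged sketch of amortized Bayesian model comparison with an
# evidential network (PyTorch). Toy simulators, architecture, and
# hyperparameters are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

M = 2  # number of candidate models

class EvidentialNet(nn.Module):
    """Permutation-invariant net mapping a dataset to Dirichlet alphas."""
    def __init__(self, x_dim=1, hidden=64, n_models=M):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_models))

    def forward(self, x):            # x: (batch, n_obs, x_dim)
        h = self.phi(x).mean(dim=1)  # mean-pool over observations -> invariance
        return nn.functional.softplus(self.rho(h)) + 1.0  # alpha >= 1

def simulate_batch(batch_size=64, n_obs=50):
    """Draw a model index, then simulate a dataset from that model."""
    m = torch.randint(0, M, (batch_size,))
    scale = torch.where(m == 0, torch.tensor(1.0), torch.tensor(3.0))
    x = torch.randn(batch_size, n_obs, 1) * scale.view(-1, 1, 1)
    return x, m

net = EvidentialNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Simulation-based training: no model is ever fit to an observed dataset.
for step in range(2000):
    x, m = simulate_batch()
    alpha = net(x)                               # (batch, M)
    p = alpha / alpha.sum(dim=1, keepdim=True)   # mean of the Dirichlet
    loss = nn.functional.nll_loss(torch.log(p), m)  # logarithmic scoring rule
    opt.zero_grad(); loss.backward(); opt.step()

# Amortized inference: one forward pass per new dataset.
with torch.no_grad():
    x_obs, _ = simulate_batch(batch_size=1)
    alpha = net(x_obs)
    probs = alpha / alpha.sum()   # approximate posterior model probabilities
    uncertainty = M / alpha.sum() # epistemic-uncertainty proxy in (0, 1]
    print(probs, uncertainty.item())

Mean-pooling makes the output invariant to the ordering of observations, matching the exchangeable-data setting, and because training is purely simulation-based, the trained network scores any number of observed datasets at the cost of a forward pass each.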

BibTeX

@article{radev21_tnnls,
  title   = {Amortized Bayesian Model Comparison with Evidential Deep Learning},
  author  = {Radev, Stefan T. and D'Alessandro, Marco and Mertens, Ulf K. and Voss, Andreas and Köthe, Ullrich and Bürkner, Paul-Christian},
  year    = {2021},
  journal = {IEEE Transactions on Neural Networks and Learning Systems (TNNLS)},
  pages   = {1--12},
  doi     = {10.1109/TNNLS.2021.3124052}
}