CANSSI/SSC Cross-Country Tour with Trevor Campbell
Presentation abstract: Bayesian inference provides a coherent approach to learning from data and uncertainty assessment in complex, expressive statistical models. However, algorithms for performing inference have not yet caught up to the deluge of data in modern applications. One approach—Bayesian coresets—involves replacing the large dataset with a small, weighted, representative subset of data during inference. The coreset is designed to capture the information from the full dataset, but be much less computationally expensive to store in memory and iterate over. Although the methodology is sound in principle, efficiently constructing such a coreset in practice remains a significant challenge: current methods tend to be complicated to implement, slow, require a secondary inference step after coreset construction, and do not enable model selection. In this talk, I will introduce a new method—sparse Hamiltonian flows—that addresses all of these challenges. The method involves first subsampling the data uniformly, and then optimizing a Hamiltonian flow parametrized by coreset weights and including periodic momentum quasi-refreshment steps. I will present theoretical results demonstrating that the method enables an exponential compression of the dataset in representative models, and that the quasi-refreshment steps reduce the KL divergence to the target. Real and synthetic experiments demonstrate that sparse Hamiltonian flows provide accurate posterior approximations with significantly reduced runtime compared with competing dynamical-system-based inference methods.
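The core idea of a Bayesian coreset, as described above, is that a small, weighted, uniformly subsampled subset of the data can stand in for the full dataset inside the likelihood. The minimal sketch below (a hypothetical illustration, not the speaker's sparse-Hamiltonian-flows implementation) shows this for a unit-variance Gaussian model: after uniform subsampling, weights are simply set to N/M so the weighted coreset log-likelihood approximates the full-data one; the method in the talk would instead optimize these weights through a Hamiltonian flow.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 100_000, 1_000          # full dataset size, coreset size (illustrative)
x = rng.normal(loc=2.0, size=N)

def log_lik(theta, data, weights=None):
    """Weighted Gaussian log-likelihood (unit variance), up to an additive constant."""
    w = np.ones(len(data)) if weights is None else weights
    return -0.5 * np.sum(w * (data - theta) ** 2)

# Step 1: subsample the data uniformly without replacement.
idx = rng.choice(N, size=M, replace=False)
coreset = x[idx]

# Step 2 (naive stand-in): uniform weights summing to N. Sparse Hamiltonian
# flows would optimize these weights jointly with the flow instead.
w = np.full(M, N / M)

theta = 2.0
full_ll = log_lik(theta, x)            # expensive: touches all N points
coreset_ll = log_lik(theta, coreset, w)  # cheap: touches only M points
```

Even with uniform weights the coreset log-likelihood tracks the full-data value at a fraction of the cost; the point of optimizing the weights (as in the talk) is to make this approximation accurate with far fewer points.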

Jul 8, 2022 12:00 PM in Vancouver
