Scaling and Generalising Approximate Bayesian Inference
This event is part of the Melbourne Centre for Data Science’s Seminar Series
The Melbourne Centre for Data Science Seminar Series is a monthly virtual seminar series that hosts international experts to discuss current research across the focus areas of data science.
This month, the Centre is pleased to host David Blei, professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute.
A core problem in statistics and machine learning is to approximate difficult-to-compute probability distributions. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation about a conditional distribution. In this talk, Professor Blei will review and discuss innovations in variational inference (VI), a method that approximates probability distributions through optimisation. VI has been used in myriad applications in machine learning and Bayesian statistics, and it tends to be faster than more traditional methods, such as Markov chain Monte Carlo sampling.
After quickly reviewing the basics, he will discuss some recent research on VI. First he will describe stochastic variational inference, an approximate inference algorithm for handling massive data sets, and demonstrate its application to probabilistic topic models of millions of articles. Then he will discuss black box variational inference, a generic algorithm for approximating the posterior. Black box inference applies easily to many models and requires minimal mathematical work to implement. Professor Blei will then demonstrate black box inference on deep exponential families, a method for Bayesian deep learning, and describe how it enables powerful tools for probabilistic programming.
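To give a flavour of the ideas in the talk, the following is a minimal sketch of black box variational inference using score-function (REINFORCE) gradients. Everything here is illustrative: the target `log_p` is a toy unnormalised log posterior (a Normal with mean 2.0 and standard deviation 0.5), and the variational family is a one-dimensional Gaussian. The key property of the method is that only `log_p` is model-specific, and no gradients of the model are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalised log posterior: Normal(2.0, 0.5). In black box VI this
# function is the only model-specific piece; we never differentiate it.
def log_p(z):
    return -0.5 * ((z - 2.0) / 0.5) ** 2

# Log density of the variational family q = Normal(mu, sigma),
# parameterised by mu and log_sigma (up to an additive constant).
def log_q(z, mu, log_sigma):
    sigma = np.exp(log_sigma)
    return -0.5 * ((z - mu) / sigma) ** 2 - log_sigma

# Score function: gradients of log q with respect to (mu, log_sigma).
def score(z, mu, log_sigma):
    sigma = np.exp(log_sigma)
    d_mu = (z - mu) / sigma**2
    d_ls = ((z - mu) / sigma) ** 2 - 1.0
    return d_mu, d_ls

mu, log_sigma = 0.0, 0.0
lr, n_samples = 0.05, 200
for _ in range(2000):
    # Sample from the current q and form the per-sample ELBO term.
    z = mu + np.exp(log_sigma) * rng.standard_normal(n_samples)
    w = log_p(z) - log_q(z, mu, log_sigma)
    # Score-function gradient estimate: E_q[grad log q(z) * w].
    d_mu, d_ls = score(z, mu, log_sigma)
    mu += lr * np.mean(d_mu * w)             # stochastic ascent on the ELBO
    log_sigma += lr * np.mean(d_ls * w)

print(mu, np.exp(log_sigma))  # the target's mean and s.d. are 2.0 and 0.5
```

Because the target lies inside the variational family, the fitted parameters recover it; for real models the optimised q is only the closest member of the family in KL divergence, and in practice variance-reduction techniques (control variates, Rao-Blackwellisation) are used alongside this basic estimator.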
Learn more about David Blei.
David Blei is Professor of Statistics and Computer Science at Columbia University and a member of the Columbia Data Science Institute.
He studies probabilistic machine learning, including its theory, algorithms, and applications. David has received several awards for his research, including a Sloan Fellowship (2010), an Office of Naval Research Young Investigator Award (2011), a Presidential Early Career Award for Scientists and Engineers (2011), a Blavatnik Faculty Award (2013), the ACM-Infosys Foundation Award (2013), a Guggenheim fellowship (2017), and a Simons Investigator Award (2019). He is co-editor-in-chief of the Journal of Machine Learning Research and a fellow of the ACM and the IMS.