Deep learning has revealed some major surprises from the perspective of statistical complexity: even without any explicit effort to control model complexity, these methods find prediction rules that give a near-perfect fit to noisy training data and yet exhibit excellent prediction performance in practice. This talk reviews recent work on methods that predict accurately in probabilistic settings despite fitting the training data too well. We will see how benign overfitting can occur with sufficient overparameterization in regression and classification problems, but also how it can lead to sensitivity to adversarial examples.
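As a toy illustration of the benign overfitting phenomenon the abstract refers to (this sketch is not from the talk; the random-feature model and all parameter choices below are assumptions for illustration only), a minimum-norm least-squares fit in an overparameterized model can interpolate noisy training labels exactly while still predicting reasonably well on fresh data:

# Illustrative sketch only: minimum-norm interpolation in an overparameterized
# random-feature regression. With many more features than samples, the fit
# drives training error to (near) zero on noisy labels; we then check how it
# predicts on fresh data. Parameter choices are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d, p = 100, 1000, 10, 2000   # p >> n_train: overparameterized

def target(X):
    # A simple ground-truth signal; label noise is added to the training set below.
    return np.sin(X @ np.ones(d) / np.sqrt(d))

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = target(X_train) + 0.3 * rng.normal(size=n_train)   # noisy training labels
y_test = target(X_test)                                       # clean test targets

# Random ReLU features phi(x) = max(Wx, 0), a common overparameterized model.
W = rng.normal(size=(p, d)) / np.sqrt(d)
Phi_train = np.maximum(X_train @ W.T, 0.0)
Phi_test = np.maximum(X_test @ W.T, 0.0)

# Minimum-norm interpolating solution via the pseudoinverse.
theta = np.linalg.pinv(Phi_train) @ y_train

train_mse = np.mean((Phi_train @ theta - y_train) ** 2)
test_mse = np.mean((Phi_test @ theta - y_test) ** 2)
print(f"train MSE (vs noisy labels): {train_mse:.2e}")  # essentially zero: the fit interpolates
print(f"test MSE (vs clean signal):  {test_mse:.3f}")   # can stay small despite the perfect fit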
This talk is part of the AI@Melbourne Colloquium Series, a program of talks on the future of Artificial Intelligence at The University of Melbourne.
Stay and enjoy complimentary afternoon tea and networking following the event.
About the speaker
Peter Bartlett
Head of Google Research Australia and UC Berkeley Professor
Peter Bartlett is Professor of Statistics and Computer Science at UC Berkeley and Head of Google Research Australia. He has served as Associate Director of the Simons Institute for the Theory of Computing at UC Berkeley, and since 2020 he has been Director of the Foundations of Data Science Institute and Director of the Collaboration on the Theoretical Foundations of Deep Learning. He is President of the Association for Computational Learning, Honorary Professor of Mathematical Sciences at the Australian National University, and co-author, with Martin Anthony, of the book Neural Network Learning: Theoretical Foundations. He was awarded the Malcolm McIntosh Prize for Physical Scientist of the Year in Australia in 2001, was an Institute of Mathematical Statistics Medallion Lecturer in 2008, was named an IMS Fellow and Australian Laureate Fellow in 2011 and a Fellow of the ACM in 2018, and was elected to the Australian Academy of Science in 2015.