Overparametrization and the bias-variance dilemma
Speaker
Abstract
For several machine learning methods, such as neural networks, good generalization performance has been reported in the overparametrized regime. In view of the classical bias-variance trade-off, this behavior is highly counterintuitive. We will present a general framework to establish universal lower bounds for the bias-variance trade-off. This is joint work with Alexis Derumigny (Delft).
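To make the classical trade-off mentioned above concrete, the following is a minimal, self-contained sketch (not the speakers' lower-bound framework) that Monte Carlo estimates bias squared and variance for polynomial least-squares fits of increasing degree on a noisy sine target; the target function, degrees, sample sizes, and noise level are all illustrative assumptions.

```python
# Hypothetical illustration of the classical bias-variance trade-off:
# repeatedly refit polynomials of varying degree to fresh noisy samples
# and estimate bias^2 and variance of the predictions at test points.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Assumed ground-truth regression function (illustrative choice).
    return np.sin(2 * np.pi * x)

x_test = np.linspace(0.0, 1.0, 50)   # fixed evaluation grid
degrees = [1, 3, 9, 15]              # model complexity levels
n_train, n_rep, noise = 20, 200, 0.3

for deg in degrees:
    preds = np.empty((n_rep, x_test.size))
    for r in range(n_rep):
        # Draw a fresh training set for each repetition.
        x = rng.uniform(0.0, 1.0, n_train)
        y = target(x) + noise * rng.standard_normal(n_train)
        coef = np.polyfit(x, y, deg)          # least-squares polynomial fit
        preds[r] = np.polyval(coef, x_test)
    # Bias^2: squared gap between the average prediction and the truth;
    # variance: spread of predictions across training sets.
    bias2 = np.mean((preds.mean(axis=0) - target(x_test)) ** 2)
    var = preds.var(axis=0).mean()
    print(f"degree {deg:2d}: bias^2 = {bias2:.3f}, variance = {var:.3f}")
```

In this classical picture, bias squared decreases and variance increases with the degree; the counterintuitive point raised in the abstract is that heavily overparametrized models can nonetheless generalize well.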