I will explain how to think of infinitely wide neural networks both at initialization and during training, that is, how such a network behaves at its (random) initial value and how it evolves over the course of training. At initialization, I will show that these networks are equivalent to a Gaussian process. During training, I will show that their evolution is equivalent to an autonomous linear flow in the space of functions. This is related to a phenomenon called (the lack of) feature learning, which I intend to at least briefly explain.
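For something concrete before the lecture, here is a minimal numerical sketch of the first claim (my own illustration, not taken from the lecture; the input dimension, tanh activation, and 1/sqrt(width) scaling are arbitrary choices). The output of a randomly initialized one-hidden-layer network at a fixed input is a normalized sum of i.i.d. terms, so by the central limit theorem it becomes Gaussian as the width grows; jointly over several inputs, one obtains a Gaussian process.

    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(0)
    x = rng.standard_normal(5)          # one fixed input in R^5

    def network_output(width):
        # One-hidden-layer network f(x) = v . tanh(W x) / sqrt(width),
        # with all weights i.i.d. standard normal.
        W = rng.standard_normal((width, x.size))
        v = rng.standard_normal(width)
        return v @ np.tanh(W @ x) / np.sqrt(width)

    for width in (10, 100, 1000):
        samples = np.array([network_output(width) for _ in range(20_000)])
        # Excess kurtosis of a Gaussian is 0; here it should shrink
        # roughly like 1/width as the hidden layer gets wider.
        print(width, kurtosis(samples))

As for the second claim, in the usual neural-tangent-kernel formulation (my assumption about which linear flow is meant), gradient flow on the squared loss makes the outputs $f_t$ on the training inputs obey $\dot f_t = -\Theta\,(f_t - y)$, where the kernel $\Theta$ is frozen at its initial value in the infinite-width limit. The solution $f_t = y + e^{-\Theta t}(f_0 - y)$ is an explicit autonomous linear flow, and the frozen kernel is one way of expressing that no features are learned.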
Based on: