On December 9th, our team member Borjan Geshkovski, PhD student at the DyCon ERC Project (UAM), will give a talk at the “AG Mathematics of Deep Learning” seminar of Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany) on “The interplay of Deep Learning and Control Theory”.
Abstract. It is by now well known that practical deep supervised learning can roughly be cast as an optimal control problem for a specific discrete-time, nonlinear dynamical system called an artificial neural network. In this talk, we consider the continuous-time formulation of the deep supervised learning problem. Using an analytical approach, we will present the behavior of this problem as the final time horizon increases, which can be interpreted as increasing the number of layers in the neural network setting. We show qualitative and quantitative estimates of the convergence to the zero training error regime, depending on the functional to be minimised.
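As a rough sketch of this continuous-time formulation (the notation below, $\sigma$, $w$, $b$, $P$, $T$, $N$, is assumed for illustration and is not taken from the talk), a residual network can be viewed as a time discretisation of a neural ODE, and supervised learning becomes an optimal control problem over the time horizon $(0,T)$:

\[
\begin{aligned}
& \dot{x}_i(t) = \sigma\bigl(w(t)\,x_i(t) + b(t)\bigr), \qquad t \in (0,T), \qquad x_i(0) = \text{(data point } i\text{)},\\[2pt]
& \min_{(w,\,b)} \;\; \frac{1}{N}\sum_{i=1}^{N} \mathrm{loss}\bigl(P\,x_i(T),\, y_i\bigr) \;+\; \text{regularisation of } (w,b).
\end{aligned}
\]

In this picture, the discrete layer update $x^{k+1} = x^k + h\,\sigma(w^k x^k + b^k)$ corresponds to a forward-Euler step of size $h = T/L$ with $L$ layers, so letting the final time horizon $T$ grow plays the role of increasing the depth of the network.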
You can now watch the recording of this lecture:
You might be interested in the “Math & Research” post by Borjan:
Check this event on the CCM calendar