Tuesday, January 8th 10:00-10:30, 2019
Waste4Think Room at DeustoTech
DeustoTech, Bilbao, Basque Country, Spain
A fundamental issue in areas such as statistical estimation and machine learning is solving optimization problems whose objective is a finite sum of convex functions. The classical gradient descent algorithm can be applied to these problems, but it becomes inefficient when the amount of data is large, since every iteration requires evaluating the full gradient and is therefore expensive. To avoid this cost, the stochastic gradient descent (SGD) method can be used instead: each iteration computes an approximate gradient from only a small portion of the data. The main objective of this seminar is to show how to transfer this idea to optimal control problems for parameter-dependent systems. In particular, we consider two problems arising from the notions of simultaneous and averaged controllability.
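The finite-sum idea behind SGD can be illustrated on a toy problem. The sketch below is purely illustrative (it is not the seminar's control problem): the objective, the data values, and the step-size schedule are all assumptions chosen for the example. It minimizes f(x) = (1/n) Σᵢ (x − aᵢ)²/2, whose minimizer is the mean of the aᵢ, and contrasts the full gradient (cost proportional to n) with the single-sample stochastic gradient (cost independent of n).

```python
import random

# Hypothetical toy data: f(x) = (1/n) * sum_i (x - a_i)^2 / 2,
# whose unique minimizer is the mean of the a_i (here 3.0).
a = [1.0, 2.0, 3.0, 6.0]
n = len(a)

def full_gradient(x):
    # Evaluates all n terms: per-iteration cost grows with the data size.
    return sum(x - ai for ai in a) / n

def sgd(x0, steps=5000, lr=0.05, seed=0):
    # Each iteration samples a single data point, so the per-iteration
    # cost does not depend on n -- the key point made in the abstract.
    rng = random.Random(seed)
    x = x0
    for k in range(1, steps + 1):
        ai = rng.choice(a)
        # Stochastic gradient of one summand, with a decaying step size
        # (lr / sqrt(k)) so the iterates settle near the minimizer.
        x -= (lr / k ** 0.5) * (x - ai)
    return x

x_sgd = sgd(0.0)
print(x_sgd)  # should land close to the true minimizer 3.0
```

The decaying step size is one standard choice for SGD on strongly convex objectives; with a constant step size the iterates would instead fluctuate around the minimizer at a scale set by the gradient variance.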