A. Alvarez-Lopez, A. Hadj Slimane, E. Zuazua. Interplay between depth and width for interpolation in neural ODEs. M3AS (2024)
Abstract. Neural ordinary differential equations (neural ODEs) have emerged as a natural tool for supervised learning from a control perspective, yet a complete understanding of their optimal architecture remains elusive. In this work, we examine the interplay between their width $p$ and number of layer transitions $L$ (effectively the depth $L+1$). Specifically, we assess the model expressivity in terms of its capacity to interpolate either a finite dataset $\mathcal{D}$ comprising $N$ pairs of points or two probability measures in $\mathbb{R}^d$ within a Wasserstein error margin $\varepsilon > 0$. Our findings reveal a balancing trade-off between $p$ and $L$, with $L$ scaling as $O(1 + N/p)$ for dataset interpolation, and $L = O\bigl(1 + (p\varepsilon^d)^{-1}\bigr)$ for measure interpolation. In the autonomous case, where $L = 0$, a separate study is required, which we undertake, focusing on dataset interpolation. We address the relaxed problem of $\varepsilon$-approximate controllability and establish an error decay of $\varepsilon \sim O(\log(p)\, p^{-1/d})$. This decay rate is a consequence of applying a universal approximation theorem to a custom-built Lipschitz vector field that interpolates $\mathcal{D}$. In the high-dimensional setting, we further demonstrate that $p = O(N)$ neurons are likely sufficient to achieve exact control.
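For context, a minimal sketch of the control formulation such bounds refer to, assuming the standard width-$p$ neural ODE parametrization with piecewise-constant weights; the symbols $W$, $A$, $b$, $\sigma$, $T$, $x_0$ below are notational assumptions for illustration, not quoted from the paper:

% Assumed parametrization: the controls (W, A, b) are piecewise constant on
% [0, T] with L switches, so the flow concatenates L+1 autonomous blocks;
% p is the width and L the number of layer transitions from the abstract.
\begin{equation*}
  \dot{x}(t) = W(t)\,\sigma\bigl(A(t)\,x(t) + b(t)\bigr),
  \qquad t \in (0, T), \qquad x(0) = x_0 \in \mathbb{R}^d,
\end{equation*}
\begin{equation*}
  % Dimensions of the piecewise-constant controls:
  W(t) \in \mathbb{R}^{d \times p}, \qquad
  A(t) \in \mathbb{R}^{p \times d}, \qquad
  b(t) \in \mathbb{R}^{p}.
\end{equation*}

In these terms, dataset interpolation asks for controls whose flow map sends each input point of $\mathcal{D}$ to its target, while measure interpolation asks the flow map to push one probability measure to within Wasserstein distance $\varepsilon$ of the other.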