Differentiable Computational Fluid Dynamics as a Layer

With ever-accelerating advances in machine learning, fluid dynamicists have begun to integrate machine learning techniques into fluid dynamics simulations to achieve fine-grained control of the physical process, acceleration through dimensionality reduction, or enhanced active learning. In adjacent fields such as optimization, researchers have gone a step further: they have made their optimization problems differentiable and embedded them as layers in deep architectures. This has not only improved computational performance, but has also made previously hard-to-solve optimization problems tractable. In fluid dynamics, such approaches have so far been out of reach, because differentiability implied either hand-written adjoints or specialized software that is often unable to differentiate through hybrid parallelism and GPU computations, and that incurs significant performance degradation. In this lecture we will present how, enabled by compiler-based approaches to automatic differentiation, we are able to overcome these constraints and seamlessly embed simulations inside deep networks, with the simulation structure abstracted away as a layer in an end-to-end learned architecture. One example is Neural PDEs, a fully end-to-end learned system in which the PDE solver operates on a coarse-grained latent space. All presented approaches generalize directly to other simulation-based fields.
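As a minimal sketch of the core idea, the snippet below shows a 1D heat-equation solver written in JAX whose time-stepping loop is traced by automatic differentiation, so it can be treated like any other differentiable layer: a loss is taken over the solver's output and a gradient is obtained with respect to a physical parameter. This is an illustrative toy (the names `solve_heat` and `kappa` are assumptions, not from the lecture), not the lecture's actual system.

```python
# Hypothetical sketch: a PDE solver as a differentiable layer (JAX).
# solve_heat / kappa are illustrative names, not from the lecture.
import jax
import jax.numpy as jnp

def solve_heat(u0, kappa, dt=1e-3, steps=100):
    """Explicit-Euler 1D heat equation with periodic boundaries.
    Every step is traced by JAX, so gradients flow through the rollout."""
    def step(u, _):
        # Second-order central difference for the Laplacian.
        lap = jnp.roll(u, -1) - 2.0 * u + jnp.roll(u, 1)
        return u + dt * kappa * lap, None
    u, _ = jax.lax.scan(step, u0, None, length=steps)
    return u

# Treat the solver as a layer: loss over its output, gradient w.r.t. kappa.
u0 = jnp.sin(jnp.linspace(0.0, 2.0 * jnp.pi, 64, endpoint=False))
loss = lambda kappa: jnp.mean(solve_heat(u0, kappa) ** 2)
grad = jax.grad(loss)(0.5)  # d(loss)/d(kappa), via AD through the solver
```

In a larger architecture, `solve_heat` would simply sit between neural network components (e.g. an encoder producing `u0` and a decoder consuming the solution), and gradients from the downstream loss would propagate through the solver to all upstream parameters.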