Compiler-based Differentiable Programming for Accelerated Simulations

2022-11-05
1 min read

Abstract

With the ever-accelerating advances of modern machine learning techniques such as implicit differentiable layers and deep equilibrium models, the desire to extend these techniques to traditional simulations, whether for acceleration or for improving outer-loop applications, grows ever stronger. While in some areas it may be feasible to rewrite simulations in differentiable domain-specific languages such as JAX, PyTorch, or DiffTaichi, this is entirely infeasible for traditional simulations that have been built up and validated over the past decades. In this talk we build on our recent work on compiler-based automatic differentiation, which synthesizes gradients and vectorizes computation for simulations written in C/C++, Fortran, Julia, Rust, etc., to bring these modern techniques to traditional simulations and accelerate them.
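As a concrete illustration of what compiler-based gradient synthesis looks like from the user's side, the minimal sketch below differentiates a toy C simulation kernel through an Enzyme-style `__enzyme_autodiff` entry point. The tool, the symbol name, and the kernel itself are assumptions chosen for illustration; they are not details taken from the abstract.

```c
#include <stdio.h>

/* A toy "simulation" kernel: spring potential V(x) = 0.5 * k * x^2 with k = 3. */
double potential(double x) {
    return 0.5 * 3.0 * x * x;
}

/* Declared but never defined: a compiler plugin such as Enzyme recognizes this
   symbol during compilation and synthesizes the derivative of the callee at the
   IR level, so no hand-written or source-transformed adjoint code is needed. */
extern double __enzyme_autodiff(void *, ...);

int main(void) {
    double x = 2.0;
    /* Gradient of the kernel with respect to its scalar input: dV/dx = 3 * x. */
    double dVdx = __enzyme_autodiff((void *)potential, x);
    printf("dV/dx at x = %.1f: %f (expected %f)\n", x, dVdx, 3.0 * x);
    return 0;
}
```

Because the derivative is generated after the front end has lowered the program, the same mechanism applies to simulations written in the other languages the abstract mentions (Fortran, Julia, Rust, etc.) once they reach a common intermediate representation, which is what makes the approach viable for decades-old validated codebases.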