Parallelizable adjoint stencil computations using transposed forward-mode algorithmic differentiation

J. C. Hückelheim, P. D. Hovland, M. M. Strout, J. D. Müller

Research output: Contribution to journal › Article › peer-review



Algorithmic differentiation (AD) is a tool for generating discrete adjoint solvers, which efficiently compute gradients of functions with many inputs, for example for use in gradient-based optimization. AD is often applied to large computations such as stencil operators, which are an important part of most structured-mesh PDE solvers. Stencil computations are often parallelized, for example by using OpenMP, and optimized by using techniques such as cache-blocking and tiling to fully utilize multicore CPUs and many-core accelerators and GPUs. Differentiating these codes with conventional reverse-mode AD results in adjoint codes that cannot be expressed as stencil operations and may not be easily parallelizable. They thus leave most of the compute power of modern architectures unused. We present a method that combines forward-mode AD and loop transformation to generate adjoint solvers that use the same memory access pattern as the original computation that they are derived from and can benefit from the same optimization techniques. The effectiveness of this method is demonstrated by generating a scalable adjoint CFD solver for multicore CPUs and Xeon Phi accelerators.
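As a minimal sketch of the idea described in the abstract (not the authors' actual implementation, and in Python rather than a compiled OpenMP code): for a linear 3-point stencil, the conventional reverse-mode adjoint scatters contributions into neighboring entries, so parallel loop iterations would race on the same memory locations. The adjoint can instead be rewritten as the *transposed* stencil, a gather in which each iteration writes only its own output entry, matching the access pattern of the original computation. The function names and boundary handling below are illustrative assumptions.

```python
import numpy as np

def stencil(x, a, b, c):
    # Original 3-point stencil: y[i] = a*x[i-1] + b*x[i] + c*x[i+1],
    # computed for interior points only.
    y = np.zeros_like(x)
    for i in range(1, len(x) - 1):
        y[i] = a * x[i - 1] + b * x[i] + c * x[i + 1]
    return y

def adjoint_scatter(yb, a, b, c):
    # Conventional reverse-mode adjoint: each iteration SCATTERS into
    # xb[i-1], xb[i], xb[i+1], so naive parallelization of this loop
    # would produce write conflicts (data races).
    xb = np.zeros_like(yb)
    for i in range(1, len(yb) - 1):
        xb[i - 1] += a * yb[i]
        xb[i]     += b * yb[i]
        xb[i + 1] += c * yb[i]
    return xb

def adjoint_gather(yb, a, b, c):
    # Transposed stencil: each iteration GATHERS into xb[i] only, giving
    # the same conflict-free access pattern as the original stencil, so
    # the same parallelization and tiling strategies apply.
    n = len(yb)
    xb = np.zeros_like(yb)
    for i in range(n):
        s = 0.0
        if i + 1 <= n - 2:       # y[i+1] read x[i] with weight a
            s += a * yb[i + 1]
        if 1 <= i <= n - 2:      # y[i] read x[i] with weight b
            s += b * yb[i]
        if i - 1 >= 1:           # y[i-1] read x[i] with weight c
            s += c * yb[i - 1]
        xb[i] = s
    return xb
```

The two adjoint variants compute identical results; correctness can also be checked with the dot-product identity ȳ·(Ax) = (Aᵀȳ)·x, since for a linear stencil the adjoint is exactly the transposed operator.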

Original language: English (US)
Pages (from-to): 672-693
Number of pages: 22
Journal: Optimization Methods and Software
Issue number: 4-6
State: Published - Nov 2 2018
Externally published: Yes


Keywords

  • 65Y05
  • 68N20
  • algorithmic differentiation
  • discrete adjoints
  • OpenMP
  • reverse mode
  • shared-memory parallelism

ASJC Scopus subject areas

  • Software
  • Control and Optimization
  • Applied Mathematics


