
Reinforcement-Learning-Enhanced Model Predictive Control with Application to Autonomous Planetary Landing

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a reinforcement-learning (RL)-enhanced model predictive control (MPC) framework, referred to as RLE-MPC, for robust spacecraft guidance and control. The RL agent learns the parameters of a quadratic cost function, which is used to formulate MPC’s recursively solved optimization problem. By training in a perturbed environment, RL enhances robustness to uncertainties and learns the long-term effects of control actions, mitigating the limitations of nominal MPC under unmodeled dynamics. Conversely, MPC explicitly specifies and enforces the cost function and constraints, thus preserving within the control policy the constraint-awareness and optimality guarantees that are generally absent in standard RL methods. Numerical results for a soft pinpoint lunar landing scenario under uncertainty evaluate the performance and robustness of RLE-MPC in comparison to standalone MPC and RL, both in the perturbed environment used during training and in a more challenging one to assess generalization.
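The core idea described in the abstract — an RL agent supplying the parameters of the quadratic cost that the MPC then optimizes at each step — can be sketched as follows. This is an illustrative, unconstrained toy version, not the paper's actual formulation: the double-integrator dynamics, horizon length, and log-weight parameterization `theta` are assumptions, and the MPC is solved here by a backward Riccati recursion rather than a constrained optimizer.

```python
import numpy as np

def mpc_gain(A, B, Q, R, horizon):
    """First-step feedback gain of a finite-horizon LQ-MPC via backward
    Riccati recursion (unconstrained sketch of the receding-horizon solve)."""
    P = Q.copy()
    K = None
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# 1-D double-integrator "lander" (altitude, vertical rate), dt = 0.1 s
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

# In RLE-MPC, these log-weights would be produced by the trained RL agent;
# here `theta` is a fixed placeholder standing in for the policy output.
theta = np.array([2.0, 1.0, -1.0])
Q = np.diag(np.exp(theta[:2]))        # state penalty (altitude, rate)
R = np.exp(theta[2:]).reshape(1, 1)   # control-effort penalty

K = mpc_gain(A, B, Q, R, horizon=20)

# Receding-horizon loop: apply the first MPC action, then replan.
x = np.array([100.0, -5.0])           # 100 m altitude, descending at 5 m/s
for _ in range(600):
    u = -K @ x
    x = A @ x + B @ u                 # state is driven toward a soft touchdown
```

In the full framework the weights would be re-queried from the RL policy as the state evolves, and the inner problem would carry the thrust and glide-slope constraints that MPC enforces explicitly.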

Original language: English (US)
Pages (from-to): 788-805
Number of pages: 18
Journal: Journal of Guidance, Control, and Dynamics
Volume: 49
Issue number: 3
DOIs
State: Published - Mar 2026

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Aerospace Engineering
  • Space and Planetary Science
  • Applied Mathematics
  • Electrical and Electronic Engineering
