Improving reinforcement learning performance in spacecraft guidance and control through meta-learning: a comparison on planetary landing

Lorenzo Federici, Roberto Furfaro

Research output: Contribution to journal › Article › peer-review


Abstract

This paper investigates the performance and computational complexity of recurrent neural networks (RNNs) trained via meta-reinforcement learning (meta-RL) as onboard spacecraft guidance and control systems. The paper first presents the theoretical background behind meta-RL with RNNs, highlighting the features that make it suitable for real-world spacecraft guidance and control applications. A thorough comparison of meta-RL with a standard RL approach that uses fully connected neural networks is then carried out on a benchmark spacecraft guidance and control problem, namely a pinpoint planetary landing. The focus is on evaluating the optimality of the control policy, the ability to handle constraints, and the robustness of the approach to different kinds and levels of uncertainty, such as unmodeled dynamics, navigation uncertainties, control errors, and engine failures, highlighting the superiority of meta-RL in both nominal and off-nominal operating conditions.
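The key architectural distinction the abstract draws, between a memoryless fully connected policy and a recurrent policy whose hidden state can adapt online to uncertainties, can be illustrated with a minimal sketch. The code below is a toy, hand-weighted example (not the authors' networks or training setup): it shows that a feedforward policy maps the same observation to the same action regardless of history, while a recurrent policy's action depends on the observation history accumulated in its hidden state, the mechanism meta-RL exploits to adapt within an episode.

```python
import math

class FeedforwardPolicy:
    """Memoryless policy: action depends only on the current observation."""
    def __init__(self, w=0.5):
        self.w = w  # illustrative scalar weight

    def act(self, obs):
        return math.tanh(self.w * obs)

class RecurrentPolicy:
    """Recurrent policy: a hidden state carries information across steps,
    so the same current observation can yield different actions
    depending on the history seen so far."""
    def __init__(self, w_in=0.5, w_h=0.8, w_out=1.0):
        self.w_in, self.w_h, self.w_out = w_in, w_h, w_out
        self.h = 0.0  # hidden state, reset at the start of each episode

    def reset(self):
        self.h = 0.0

    def act(self, obs):
        # Elman-style recurrence: new hidden state mixes input and memory
        self.h = math.tanh(self.w_in * obs + self.w_h * self.h)
        return math.tanh(self.w_out * self.h)

ff = FeedforwardPolicy()
rnn = RecurrentPolicy()

# Feedforward: identical observations always give identical actions.
a_ff = ff.act(0.2)

# Recurrent: act on a short history ending in the same observation 0.2 ...
for obs in [1.0, -0.5, 0.2]:
    a_rnn_with_history = rnn.act(obs)

# ... versus seeing 0.2 with a fresh hidden state.
rnn.reset()
a_rnn_no_history = rnn.act(0.2)
```

In this sketch the recurrent policy produces different actions for the same final observation depending on what preceded it, which is the property that lets a meta-RL-trained RNN compensate for unmodeled dynamics or actuator failures at run time without retraining.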

Original language: English (US)
Article number: 105746
Journal: Neural Computing and Applications
DOIs
State: Accepted/In press - 2024
Externally published: Yes

Keywords

  • Meta-reinforcement learning
  • Planetary landing
  • Recurrent neural network
  • Spacecraft guidance and control

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

