Abstract
This paper investigates the use of reinforcement learning for the fuel-optimal guidance of a spacecraft during a time-free low-thrust transfer between two libration point orbits in the cislunar environment. To this end, a deep neural network is trained via proximal policy optimization to map any spacecraft state to the optimal control action. A general-purpose reward is used to guide the network toward a fuel-optimal control law, regardless of the specific pair of libration point orbits considered and without the use of any ad hoc reward-shaping technique. Finally, the learned control policies are compared with the optimal solutions provided by a direct method in two different mission scenarios, and Monte Carlo simulations are used to assess the policies' robustness to navigation uncertainties.
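As a rough illustration of the guidance architecture described above (a policy network mapping spacecraft state to thrust action, trained with a fuel-penalizing reward), the sketch below shows one possible setup in PyTorch. It is not the authors' implementation: the state/action dimensions, network sizes, and reward weights are assumptions made for illustration only.

```python
# Minimal sketch (not the paper's code): a Gaussian policy that maps a
# spacecraft state (position, velocity, mass) to a thrust action, as would
# be trained with proximal policy optimization (PPO).
import torch
import torch.nn as nn

STATE_DIM = 7    # assumed: 3 position + 3 velocity + 1 mass components
ACTION_DIM = 3   # assumed: thrust vector components, bounded by max thrust


class GaussianPolicy(nn.Module):
    """Maps a spacecraft state to a Gaussian distribution over thrust actions."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, ACTION_DIM), nn.Tanh(),  # bounded thrust output
        )
        self.log_std = nn.Parameter(torch.zeros(ACTION_DIM))

    def forward(self, state: torch.Tensor) -> torch.distributions.Normal:
        mean = self.net(state)
        return torch.distributions.Normal(mean, self.log_std.exp())


def step_reward(delta_m: float, constraint_violation: float,
                w_fuel: float = 1.0, w_constraint: float = 10.0) -> float:
    """Assumed general-purpose reward: penalize propellant consumption and
    terminal constraint violation, with no scenario-specific shaping."""
    return -w_fuel * delta_m - w_constraint * constraint_violation


if __name__ == "__main__":
    policy = GaussianPolicy()
    state = torch.zeros(STATE_DIM)       # placeholder spacecraft state
    action = policy(state).sample()      # stochastic thrust action during training
    print(action)
```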
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1954-1965 |
| Number of pages | 12 |
| Journal | Journal of Spacecraft and Rockets |
| Volume | 60 |
| Issue number | 6 |
| DOIs | |
| State | Published - 2023 |
| Externally published | Yes |
ASJC Scopus subject areas
- Aerospace Engineering
- Space and Planetary Science