Maneuvering detection and prediction using inverse reinforcement learning for space situational awareness

Richard Linares, Roberto Furfaro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper uses inverse Reinforcement Learning (RL) to determine the behavior of Space Objects (SOs) by estimating the reward function that an SO is using for control. The approach discussed in this work can be used to analyze the maneuvering of SOs from observational data. The inverse RL problem is solved using the feature matching approach, which determines the optimal reward function that an SO is using while maneuvering by assuming that the observed trajectories are optimal with respect to the SO's own reward function. This paper utilizes estimated orbital element data to determine the behavior of SOs in a data-driven fashion. Simple proof-of-concept results are shown for a simulation example.
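For context, the sketch below outlines one common feature-matching formulation of inverse RL, projection-based apprenticeship learning (Abbeel and Ng, 2004), in which reward weights w are adjusted until a policy that is optimal for r(s) = w·φ(s) reproduces the feature expectations of the observed trajectories. This is an illustrative sketch under stated assumptions, not the authors' implementation; the feature map phi (e.g., over estimated orbital elements) and the solve_policy/rollout helpers are hypothetical placeholders.

```python
import numpy as np

def feature_expectations(trajectories, phi, gamma=0.99):
    """Discounted average feature counts over a set of trajectories.
    phi maps a state (e.g., estimated orbital elements) to a feature vector."""
    mu = None
    for traj in trajectories:
        counts = sum((gamma ** t) * phi(s) for t, s in enumerate(traj))
        mu = counts if mu is None else mu + counts
    return mu / len(trajectories)

def feature_matching_irl(expert_trajs, phi, solve_policy, rollout,
                         n_iters=20, gamma=0.99, tol=1e-3):
    """Projection-style apprenticeship learning: find reward weights w such
    that a policy optimal for r(s) = w . phi(s) matches the feature
    expectations of the observed (expert) trajectories.

    solve_policy(w) -> policy optimal under reward w . phi(s)  [placeholder]
    rollout(policy) -> list of trajectories generated by that policy [placeholder]
    """
    mu_expert = feature_expectations(expert_trajs, phi, gamma)
    # Arbitrary initial reward direction; plays the role of the initial policy.
    w = np.random.randn(mu_expert.shape[0])
    mu_bar = None
    for _ in range(n_iters):
        policy = solve_policy(w)                       # RL / optimal-control step
        mu = feature_expectations(rollout(policy), phi, gamma)
        if mu_bar is None:
            mu_bar = mu
        else:
            # Project mu_expert onto the line through mu_bar and mu.
            d = mu - mu_bar
            mu_bar = mu_bar + (d @ (mu_expert - mu_bar)) / (d @ d) * d
        w = mu_expert - mu_bar                         # updated reward direction
        if np.linalg.norm(w) < tol:                    # observed behavior matched
            break
    return w
```

In a maneuvering-detection setting, the recovered weights w summarize the objective the SO appears to be optimizing, and a significant change in w (or a poor feature-expectation match) can flag a maneuver.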

Original language: English (US)
Title of host publication: ASTRODYNAMICS 2017
Editors: John H. Seago, Nathan J. Strange, Daniel J. Scheeres, Jeffrey S. Parker
Publisher: Univelt Inc.
Pages: 527-536
Number of pages: 10
ISBN (Print): 9780877036456
State: Published - 2018
Event: AAS/AIAA Astrodynamics Specialist Conference, 2017 - Stevenson, United States
Duration: Aug 20 2017 - Aug 24 2017

Publication series

Name: Advances in the Astronautical Sciences
Volume: 162
ISSN (Print): 0065-3438

Other

Other: AAS/AIAA Astrodynamics Specialist Conference, 2017
Country/Territory: United States
City: Stevenson
Period: 8/20/17 - 8/24/17

ASJC Scopus subject areas

  • Aerospace Engineering
  • Space and Planetary Science
