Space objects maneuvering prediction via maximum causal entropy inverse reinforcement learning

Bryce Doerr, Richard Linares, Roberto Furfaro

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

Abstract

Inverse Reinforcement Learning (IRL) can be used to determine the behavior of Space Objects (SOs) by estimating the reward function that an SO is using for control. The approach discussed in this work can be used to analyze the maneuvering of SOs from observational data. The inverse RL problem is solved using maximum causal entropy. This approach determines the optimal reward function that an SO is using while maneuvering under random disturbances, by assuming that the observed trajectories are optimal with respect to the SO's own reward function. Lastly, this paper develops results for scenarios involving Low Earth Orbit (LEO) station-keeping and Geostationary Orbit (GEO) station-keeping.
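The core idea described in the abstract (estimate a reward function such that the observed trajectories are soft-optimal under maximum causal entropy) can be sketched on a toy finite MDP. This is an illustrative sketch only: the random MDP, the one-hot features, and the hidden "true" reward below are all hypothetical stand-ins, not the paper's orbital station-keeping dynamics.

```python
import numpy as np

# Toy sketch of maximum causal entropy IRL on a random finite-horizon MDP.
# Everything here is illustrative; the paper applies the idea to SO
# station-keeping trajectories, not to this toy model.
rng = np.random.default_rng(0)
n_states, n_actions, horizon = 4, 2, 10
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
phi = np.eye(n_states)                  # one-hot state features, r(s) = phi(s) @ theta
start = np.array([1.0, 0.0, 0.0, 0.0])  # initial state distribution

def soft_policies(theta):
    """Backward pass: soft (log-sum-exp) Bellman recursion -> stochastic policies."""
    r = phi @ theta
    V = np.zeros(n_states)
    pis = []
    for _ in range(horizon):
        Q = r[:, None] + P @ V              # Q[s, a] with soft next-step value
        V = np.logaddexp.reduce(Q, axis=1)  # soft value function
        pis.append(np.exp(Q - V[:, None]))  # pi(a | s), maximum-entropy policy
    return pis[::-1]                        # reorder policies by time step

def expected_features(theta):
    """Forward pass: expected feature counts under the soft-optimal policy."""
    d, mu = start.copy(), np.zeros(n_states)
    for pi in soft_policies(theta):
        mu += d @ phi
        d = np.einsum('s,sa,sat->t', d, pi, P)  # propagate state distribution
    return mu

# "Demonstrations": feature counts generated from a hidden reward favoring
# state 2 (standing in for observed SO maneuver trajectories).
theta_true = np.array([0.0, 0.0, 1.0, 0.0])
mu_demo = expected_features(theta_true)

# Gradient ascent on the causal-entropy likelihood:
# gradient = empirical feature counts - expected feature counts under theta.
theta = np.zeros(n_states)
for _ in range(300):
    theta += 0.1 * (mu_demo - expected_features(theta))
```

After training, the learned reward weights should reproduce the demonstrated feature counts, which is the feature-matching condition at the heart of maximum-entropy IRL.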

Original language: English (US)
Title of host publication: AIAA Scitech 2020 Forum
Publisher: American Institute of Aeronautics and Astronautics Inc. (AIAA)
ISBN (Print): 9781624105951
DOIs
State: Published - 2020
Event: AIAA Scitech Forum, 2020 - Orlando, United States
Duration: Jan 6 2020 - Jan 10 2020

Publication series

Name: AIAA Scitech 2020 Forum
Volume: 1 PartF

Conference

Conference: AIAA Scitech Forum, 2020
Country/Territory: United States
City: Orlando
Period: 1/6/20 - 1/10/20

ASJC Scopus subject areas

  • Aerospace Engineering

