Space objects maneuvering prediction via maximum causal entropy inverse reinforcement learning

Bryce Doerr, Richard Linares, Roberto Furfaro

Research output: Contribution to journal › Article › peer-review

Abstract

Inverse Reinforcement Learning (RL) can be used to determine the behavior of Space Objects (SOs) by estimating the reward function that an SO is using for control. The approach discussed in this work can be used to analyze the maneuvering of SOs from observational data. The inverse RL problem is solved using maximum causal entropy. This approach determines the optimal reward function that an SO is using while maneuvering with random disturbances by assuming that the observed trajectories are optimal with respect to the SO's own reward function. Lastly, this paper develops results for scenarios involving Low Earth Orbit (LEO) station-keeping and Geostationary Orbit (GEO) station-keeping.
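As context for the abstract, the following is a minimal sketch of the standard maximum causal entropy inverse RL formulation (following Ziebart et al.), under the common assumption of a linearly parameterized reward $r_\theta(s,a) = \theta^{\top}\phi(s,a)$ over features $\phi$ and a discrete action set; the exact parameterization, dynamics model, and disturbance handling used in the paper may differ.

\begin{align}
  \max_{\pi} \quad & H(A_{1:T} \,\|\, S_{1:T}) = \mathbb{E}_{\pi}\Big[-\sum_{t=1}^{T} \log \pi(a_t \mid s_t)\Big] \\
  \text{s.t.} \quad & \mathbb{E}_{\pi}\Big[\sum_{t=1}^{T} \phi(s_t, a_t)\Big] = \tilde{\mathbb{E}}\Big[\sum_{t=1}^{T} \phi(s_t, a_t)\Big],
\end{align}

where the right-hand side is the empirical feature expectation of the observed (assumed-optimal) trajectories. The solution takes the soft-optimal form

\begin{align}
  \pi_{\theta}(a_t \mid s_t) &= \exp\big(Q^{\mathrm{soft}}_{\theta}(s_t, a_t) - V^{\mathrm{soft}}_{\theta}(s_t)\big), \\
  Q^{\mathrm{soft}}_{\theta}(s_t, a_t) &= \theta^{\top}\phi(s_t, a_t) + \mathbb{E}_{s_{t+1} \sim P(\cdot \mid s_t, a_t)}\big[V^{\mathrm{soft}}_{\theta}(s_{t+1})\big], \\
  V^{\mathrm{soft}}_{\theta}(s_t) &= \log \sum_{a} \exp Q^{\mathrm{soft}}_{\theta}(s_t, a),
\end{align}

with $\theta$ fit by ascending the demonstration likelihood, whose gradient is the difference between the empirical and policy-induced feature expectations.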

Original language: English (US)
Journal: Unknown Journal
State: Published - Nov 1 2019

ASJC Scopus subject areas

  • General
