Reinforcement learning for accurate two-stage pointing control of CubeSats in photometric research

Authors

  • Bárbara Nitsche Leidens
  • Matheus Inoue
  • Fábio Arbach Fernandes de Oliveira
  • João Francisco Süssekind Junqueira

DOI:

https://doi.org/10.54021/sesv4n1-009

Keywords:

CubeSat, pointing control, reinforcement learning

Abstract

Star photometry performed on board satellites requires high pointing accuracy and stability. Pointing control on CubeSats, however, is especially affected by the spacecraft's own motion and vibration. The goal of this work is therefore to explore the application of Reinforcement Learning to the pointing control of CubeSats that perform photometry, using a two-stage approach. A simulated environment was developed to model staged pointing control, and this work investigated which parameters should be taken into account. The model achieved sub-pixel accuracy and stability, and the results demonstrated the strong impact of centroid observations on learning speed and precision. In addition, this work provided evidence of the need to consider parameters from multiple subsystems and from different stages of information processing in a spacecraft.
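
For illustration, the sketch below shows how one stage of such a pointing problem might be framed as a reinforcement-learning environment whose observation includes a star-centroid measurement, in the spirit of the approach summarized above. It is a minimal sketch only: the environment name (PointingStageEnv), the single-axis dynamics, the pixel scale, noise levels, actuator authority, and the reward shaping are illustrative assumptions and do not reproduce the authors' simulator; the Gymnasium interface and the Stable-Baselines3 PPO agent are likewise assumed off-the-shelf choices, not the specific tooling of the paper.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class PointingStageEnv(gym.Env):
    """Toy single-axis pointing stage: the agent applies a corrective command and
    observes the attitude error, its rate, and a noisy star-centroid offset (pixels),
    echoing the use of centroid observations in the state (all values assumed)."""

    def __init__(self, pixel_scale_arcsec=7.0, centroid_noise_px=0.05):
        super().__init__()
        self.pixel_scale = pixel_scale_arcsec      # arcsec per pixel (assumed)
        self.centroid_noise = centroid_noise_px    # 1-sigma centroid noise (assumed)
        self.dt = 0.1                              # control step [s]
        # Observation: [attitude error (arcsec), rate (arcsec/s), centroid offset (px)]
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)
        # Action: normalized corrective command in [-1, 1]
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.state = None

    def _obs(self):
        err, rate = self.state
        centroid_px = err / self.pixel_scale + self.np_random.normal(0.0, self.centroid_noise)
        return np.array([err, rate, centroid_px], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        err = self.np_random.uniform(-30.0, 30.0)   # initial pointing error [arcsec]
        rate = self.np_random.uniform(-1.0, 1.0)    # initial rate [arcsec/s]
        self.state = np.array([err, rate], dtype=np.float64)
        return self._obs(), {}

    def step(self, action):
        err, rate = self.state
        accel = 5.0 * float(action[0])              # arcsec/s^2, assumed actuator authority
        rate += accel * self.dt + self.np_random.normal(0.0, 0.01)  # jitter disturbance
        err += rate * self.dt
        self.state = np.array([err, rate], dtype=np.float64)
        # Reward favors a sub-pixel residual error and penalizes large commands
        reward = -abs(err) / self.pixel_scale - 0.01 * float(action[0]) ** 2
        terminated = bool(abs(err) > 200.0)
        return self._obs(), reward, terminated, False, {}


if __name__ == "__main__":
    # Train one pointing stage with an off-the-shelf agent.
    from stable_baselines3 import PPO

    env = PointingStageEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)
```

Including the centroid offset alongside the attitude state mirrors the abstract's finding that centroid observations speed up learning and improve precision; in a two-stage setting, a coarse stage and a fine stage could each be modeled as a separate environment of this kind.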


Published

2023-04-03

How to Cite

Leidens, B. N., Inoue, M., de Oliveira, F. A. F., & Junqueira, J. F. S. (2023). Reinforcement learning for accurate two-stage pointing control of CubeSats in photometric research. STUDIES IN ENGINEERING AND EXACT SCIENCES, 4(1), 126–142. https://doi.org/10.54021/sesv4n1-009