Image-Based Lunar Hazard Detection in Low Illumination Simulated Conditions via Vision Transformers

Luca Ghilardi, Roberto Furfaro

Research output: Contribution to journal › Article › peer-review


Abstract

Hazard detection is fundamental for a safe lunar landing. State-of-the-art autonomous lunar hazard detection relies on 2D image-based and 3D LiDAR systems. The lunar south pole is challenging for vision-based methods: the low sun elevation and the topographically rich terrain create large shadowed areas that hide terrain features. The proposed method employs a vision transformer (ViT), a deep learning architecture built from the transformer blocks used in natural language processing, to address this problem. Our goal is to train the ViT model to extract terrain feature information from low-light RGB images. The results show good performance, especially at high altitudes, outperforming UNet, one of the most popular convolutional neural networks, in every scenario.
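The abstract describes ViT-based semantic segmentation of low-light RGB images into hazard maps, compared against a UNet baseline. The sketch below is a minimal, illustrative PyTorch example of that general approach (patch embedding, transformer encoder, per-pixel hazard prediction); the patch size, depth, embedding width, and class count are assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a ViT-style segmentation model for binary hazard maps.
# Hyperparameters are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn


class ViTHazardSegmenter(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_ch=3,
                 embed_dim=256, depth=6, heads=8, num_classes=2):
        super().__init__()
        n_patches = (img_size // patch_size) ** 2
        # Patch embedding: split the RGB image into non-overlapping patches
        # and project each patch to a token vector.
        self.patch_embed = nn.Conv2d(in_ch, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, embed_dim))
        # Standard transformer encoder blocks (self-attention + MLP).
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=heads, dim_feedforward=4 * embed_dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Per-token classifier; tokens are reshaped back to a grid and
        # upsampled to the input resolution to form the hazard map.
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                            # x: (B, 3, H, W)
        B, _, H, W = x.shape
        tokens = self.patch_embed(x)                 # (B, D, H/p, W/p)
        h, w = tokens.shape[-2:]
        tokens = tokens.flatten(2).transpose(1, 2)   # (B, N, D)
        tokens = tokens + self.pos_embed[:, :tokens.shape[1]]
        tokens = self.encoder(tokens)
        logits = self.head(tokens)                   # (B, N, num_classes)
        logits = logits.transpose(1, 2).reshape(B, -1, h, w)
        # Upsample the coarse patch-level prediction to a per-pixel map.
        return nn.functional.interpolate(
            logits, size=(H, W), mode='bilinear', align_corners=False)


# Example: one low-light RGB frame produces a 2-class (safe/hazard) map.
model = ViTHazardSegmenter()
out = model(torch.randn(1, 3, 224, 224))             # out: (1, 2, 224, 224)
```

A UNet baseline for comparison, as in the paper, would replace the transformer encoder with a convolutional encoder-decoder while keeping the same per-pixel hazard-map output.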

Original language: English (US)
Article number: 7844
Journal: Sensors
Volume: 23
Issue number: 18
DOIs
State: Published - Sep 2023

Keywords

  • deep neural network
  • hazard detection
  • image segmentation
  • lunar hazard detection
  • lunar south pole
  • supervised learning
  • vision transformer
  • vision-based hazard detection

ASJC Scopus subject areas

  • Analytical Chemistry
  • Information Systems
  • Atomic and Molecular Physics, and Optics
  • Biochemistry
  • Instrumentation
  • Electrical and Electronic Engineering
