Improving Visual Feature Extraction in Glacial Environments

Steven D. Morad, Jeremy Nash, Shoya Higa, Russell Smith, Aaron Parness, Kobus Barnard

Research output: Contribution to journal › Article

Abstract

Glacial science could benefit tremendously from autonomous robots, but previous glacial robots have had perception issues in these colorless and featureless environments, specifically with visual feature extraction. This translates to failures in visual odometry and visual navigation. Glaciologists use near-infrared imagery to reveal the underlying heterogeneous spatial structure of snow and ice, and we theorize that this hidden near-infrared structure could produce more and higher-quality features than are available in visible light. We took a custom camera rig to Igloo Cave at Mt. St. Helens to test our theory. The rig contains two identical machine vision cameras, one of which was outfitted with multiple filters so that it sees only near-infrared light. We extracted features from short video clips taken inside the cave using three popular feature extractors (FAST, SIFT, and SURF). We quantified the number of features and their quality for visual navigation by comparing the resulting orientation estimates to ground truth. Our main contribution is the use of NIR longpass filters to improve the quantity and quality of visual features in icy terrain, irrespective of the feature extractor used.
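To illustrate why featureless snow defeats the extractors named above, the following is a minimal, pure-NumPy sketch of the FAST segment test (not the authors' pipeline; the paper uses the standard optimized FAST/SIFT/SURF implementations). The "speckle" model of NIR-revealed grain structure is an assumption for illustration only: a uniform "visible-light" frame yields no corners, while the same frame with visible sub-surface texture yields many.

```python
import numpy as np

# 16-pixel Bresenham circle of radius 3 used by the FAST segment test
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def fast_corners(img, t=20, n=12):
    """Count FAST-style corners: a pixel is a corner if at least n contiguous
    circle pixels are all brighter than center+t or all darker than center-t."""
    h, w = img.shape
    count = 0
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            c = int(img[y, x])
            ring = [int(img[y + dy, x + dx]) for dy, dx in CIRCLE]
            for sign in (1, -1):                      # brighter arc, darker arc
                flags = [sign * (p - c) > t for p in ring]
                doubled = flags + flags               # handle wrap-around runs
                run = best = 0
                for f in doubled:
                    run = run + 1 if f else 0
                    best = max(best, run)
                if best >= n:
                    count += 1
                    break
    return count

# "Visible-light" snow: nearly uniform, almost no contrast anywhere
flat = np.full((64, 64), 128, dtype=np.uint8)

# "NIR" view of the same scene: hidden grain structure becomes visible
# (modeled here, as an assumption, by isolated bright speckles)
textured = flat.copy()
rng = np.random.default_rng(0)
for y, x in rng.integers(8, 56, size=(20, 2)):
    textured[y, x] = 255

print(fast_corners(flat))      # 0: nothing for the segment test to find
print(fast_corners(textured))  # many corners once structure is visible
```

Any intensity-contrast detector behaves the same way here: with no texture there is nothing to track, which is why odometry fails in visible light and improves once the longpass filter exposes the snow's spatial structure.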

Original language: English (US)
Article number: 8931615
Pages (from-to): 385-390
Number of pages: 6
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 2
DOIs
State: Published - Apr 2020

Keywords

  • computer vision for automation
  • Field robots
  • SLAM
  • visual-based navigation

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
