Time-to-collision estimation from motion based on primate visual processing

John M. Galbraith, Garrett T. Kenyon, Richard W. Ziolkowski

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

A population-coded algorithm, built on established models of motion processing in the primate visual system, computes the time-to-collision of a mobile robot to real-world environmental objects from video imagery. A set of four transformations starts with motion energy, a spatiotemporal-frequency-based computation of motion features. The subsequent processing stages extract image velocity features that are similar to, but distinct from, optic flow; compute "translation" features, which correct velocity errors including those arising from the aperture problem; and finally estimate the time-to-collision. Biologically motivated population coding distinguishes this approach from previous methods based on optic flow. A comparison of the population-coded approach with the popular optic flow algorithm of Lucas and Kanade on three types of approaching objects shows that the proposed method produces more robust time-to-collision information from a real-world input stimulus in the presence of the aperture problem and other noise sources. The improved performance comes at increased computational cost, which would ideally be mitigated by special-purpose hardware architectures.
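For context, the sketch below illustrates the classical relation underlying optic-flow-based approaches such as the Lucas-and-Kanade baseline the paper compares against: for a frontoparallel surface approached head-on, the flow field expands radially and the time-to-collision equals twice the reciprocal of the flow divergence. This is not the paper's population-coded method; the function name ttc_from_flow and the synthetic expanding flow field are illustrative assumptions, given here as a minimal Python sketch.

    import numpy as np

    def ttc_from_flow(u, v, dt=1.0):
        # For a frontoparallel surface approached head-on, the flow field is a
        # radial expansion about the focus of expansion: u = x / tau, v = y / tau.
        # Its divergence is du/dx + dv/dy = 2 / tau, so tau = 2 / (mean divergence).
        du_dx = np.gradient(u, axis=1)
        dv_dy = np.gradient(v, axis=0)
        divergence = np.mean(du_dx + dv_dy)
        return 2.0 * dt / divergence

    # Synthetic check: radial expansion whose true time-to-collision is 40 frames.
    h, w = 64, 64
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= w / 2.0
    ys -= h / 2.0
    tau_true = 40.0
    u = xs / tau_true           # horizontal flow, pixels per frame
    v = ys / tau_true           # vertical flow, pixels per frame
    print(ttc_from_flow(u, v))  # prints approximately 40.0

In practice the flow field would be estimated from image pairs (e.g., by Lucas-Kanade) rather than constructed synthetically, and noise and the aperture problem corrupt the divergence estimate; those error sources are what the paper's population-coded "translation" features are designed to handle.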

Original language: English (US)
Pages (from-to): 1279-1291
Number of pages: 13
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 27
Issue number: 8
DOIs
State: Published - Aug 2005

Keywords

  • Autonomous robotics
  • Computer vision
  • Depth cues
  • Motion processing
  • Neuromorphic computing
  • Optic flow
  • Time-to-collision

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics
