Abstract
A population-coded algorithm, built on established models of motion processing in the primate visual system, computes the time-to-collision between a mobile robot and real-world environmental objects from video imagery. A set of four transformations starts with motion energy, a spatiotemporal-frequency-based computation of motion features. Subsequent processing stages extract image velocity features that are similar to, but distinct from, optic flow; "translation" features, which correct velocity errors including those arising from the aperture problem; and, finally, an estimate of the time-to-collision. Biologically motivated population coding distinguishes this approach from previous methods based on optic flow. A comparison of the population-coded approach with the popular optic flow algorithm of Lucas and Kanade on three types of approaching objects shows that the proposed method produces more robust time-to-collision information from a real-world input stimulus in the presence of the aperture problem and other noise sources. The improved performance comes at increased computational cost, which would ideally be mitigated by special-purpose hardware architectures.
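The first stage of the pipeline, motion energy, lends itself to a short illustration. The sketch below is not the paper's implementation; it is a minimal NumPy/SciPy example, assuming grayscale frames stacked as a (T, H, W) array and illustrative Gabor parameters, of phase-invariant motion energy computed from a quadrature pair of spatiotemporal filters in the spirit of the Adelson-Bergen energy model.

```python
# Minimal sketch of spatiotemporal motion-energy filtering (assumed
# parameters, not the paper's implementation).
import numpy as np
from scipy.ndimage import convolve

def gabor_3d(f_x, f_t, sigma_s=2.0, sigma_t=2.0, size_s=9, size_t=7):
    """Quadrature pair of spatiotemporal Gabor filters tuned to a
    grating with spatial frequency f_x drifting at temporal frequency f_t."""
    xs = np.arange(size_s) - size_s // 2
    ts = np.arange(size_t) - size_t // 2
    t, y, x = np.meshgrid(ts, xs, xs, indexing="ij")  # (T, H, W) grids
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_s**2) - t**2 / (2 * sigma_t**2))
    phase = 2 * np.pi * (f_x * x + f_t * t)
    return envelope * np.cos(phase), envelope * np.sin(phase)

def motion_energy(frames, f_x=0.1, f_t=0.1):
    """Phase-invariant motion energy: squared responses of a quadrature
    pair, summed, as in the classic spatiotemporal energy model."""
    even, odd = gabor_3d(f_x, f_t)
    r_even = convolve(frames.astype(float), even, mode="nearest")
    r_odd = convolve(frames.astype(float), odd, mode="nearest")
    return r_even**2 + r_odd**2

# Usage (hypothetical): energy = motion_energy(frames) for a (T, H, W)
# grayscale sequence; peaks indicate motion matched to the filter tuning.
```

In the pipeline described in the abstract, a bank of such filters tuned to different orientations, spatial frequencies, and speeds would feed the subsequent velocity, translation, and time-to-collision stages.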
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1279-1291 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 27 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 2005 |
Keywords
- Autonomous robotics
- Computer vision
- Depth cues
- Motion processing
- Neuromorphic computing
- Optic flow
- Time-to-collision
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition
- Computational Theory and Mathematics
- Artificial Intelligence
- Applied Mathematics