TY - JOUR
T1 - An insect-inspired model for visual binding II: functional analysis and visual attention
AU - Northcutt, Brandon D.
AU - Higgins, Charles M.
N1 - Funding Information:
The authors would like to thank the Air Force Office of Scientific Research for early support of the modeling research on this project with Grant Number FA9550-07-1-0165, and the Air Force Research Laboratories for supporting this research to maturity with STTR Phase I Award Number FA8651-13-M-0085 and Phase II Award Number FA8651-14-C-0108, both in collaboration with Spectral Imaging Laboratory (Pasadena, CA). We would also like to thank the reviewers, whose input greatly enhanced this manuscript.
Publisher Copyright:
© 2017, Springer-Verlag Berlin Heidelberg.
PY - 2017/4/1
Y1 - 2017/4/1
AB - We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features—such as color, motion, and orientation—by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.
KW - Artificial intelligence
KW - Blind source separation
KW - Neural networks
KW - Object perception
KW - Visual attention
KW - Visual binding
UR - http://www.scopus.com/inward/record.url?scp=85015712005&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85015712005&partnerID=8YFLogxK
U2 - 10.1007/s00422-017-0716-z
DO - 10.1007/s00422-017-0716-z
M3 - Article
C2 - 28303334
AN - SCOPUS:85015712005
SN - 0340-1200
VL - 111
SP - 207
EP - 227
JO - Biological Cybernetics
JF - Biological Cybernetics
IS - 2
ER -