Research article (Open Access)

A neural computational model for bottom-up attention with invariant and overcomplete representation

Zou Qi1*, Zhao Songnian2, Wang Zhe1,3 and Huang Yaping1

Author affiliations

1 Department of Computer Science, Beijing Jiaotong University, Beijing, 100044, China

2 LAPC, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing, 100029, China

3 Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, 100049, China


Citation

BMC Neuroscience 2012, 13:145. doi:10.1186/1471-2202-13-145

Published: 29 November 2012

Abstract

Background

An important problem in selective attention is determining how the primary visual cortex contributes to the encoding of bottom-up saliency and which types of neural computation are effective for modeling this process. To address this problem, we constructed a two-layered network that satisfies the neurobiological constraints of the primary visual cortex to detect salient objects. We carried out experiments on both synthetic and natural images to explore how different factors, such as network structure, the size of each layer, the type of suppression and the combination strategy, influence saliency detection performance.
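
As a concrete illustration of the combination step that these experiments vary, the toy Python/NumPy sketch below merges a set of feature maps into a single saliency map with either a sum rule or a max rule, after a crude uniqueness weighting that stands in for iso-feature suppression. It is purely illustrative; the function names and parameter choices are ours, not the paper's implementation.

import numpy as np

def normalize(fmap, eps=1e-8):
    # Rescale a feature map to the range [0, 1].
    fmap = fmap - fmap.min()
    return fmap / (fmap.max() + eps)

def weight_by_uniqueness(fmap):
    # Crude surrogate for iso-feature suppression: maps whose activity is
    # nearly uniform (mean close to max) are down-weighted, so only maps
    # with a few strong peaks contribute much to the saliency map.
    return fmap * (fmap.max() - fmap.mean()) ** 2

def combine(feature_maps, strategy="max"):
    # Combine per-feature maps into one saliency map via sum or max.
    maps = [weight_by_uniqueness(normalize(m)) for m in feature_maps]
    stack = np.stack(maps, axis=0)
    combined = stack.sum(axis=0) if strategy == "sum" else stack.max(axis=0)
    return normalize(combined)

rng = np.random.default_rng(0)
feature_maps = [rng.random((64, 64)) for _ in range(4)]   # stand-ins for filter responses
saliency = combine(feature_maps, strategy="max")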

Results

The experimental results statistically demonstrated that the type and scale of filters contribute greatly to the encoding of bottom-up saliency. These two factors correspond to the mechanisms of invariant encoding and overcomplete representation in the primary visual cortex.

Conclusions

(1) Instead of constructing Gabor functions or Gaussian pyramid filters for feature extraction, as traditional attention models do, we learn overcomplete basis sets from natural images to extract features for saliency detection. Experiments show that, given a proper layer size and a robust combination strategy, the learned overcomplete basis set outperforms a complete set and Gabor pyramids in visual saliency detection. This finding can potentially be applied to task-dependent and supervised object detection (a sparse-coding sketch of this basis learning appears after this list).

(2) A hierarchical coding model that can represent invariant features is designed for the pre-attentive stage of bottom-up attention. This coding model improves robustness to noise and distractors and improves the ability to detect salient structures, such as collinear and co-circular structures and several composite stimuli. This result indicates that invariant representation contributes to saliency detection (pop-out) in bottom-up attention (an energy-model sketch of such invariant coding appears after this list).
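
For conclusion (1), the snippet below is a minimal sketch of how an overcomplete basis can be learned from image patches with sparse coding, assuming scikit-learn is available. The patch size, the threefold overcompleteness, and the random toy input are illustrative assumptions, not values taken from the paper; a real run should use grayscale natural images.

import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_overcomplete_basis(img, patch_size=(8, 8), n_components=192):
    # 192 atoms for 64-dimensional patches, i.e. a 3x overcomplete basis.
    patches = extract_patches_2d(img, patch_size, max_patches=20000, random_state=0)
    X = patches.reshape(len(patches), -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)        # remove each patch's DC component
    dico = MiniBatchDictionaryLearning(n_components=n_components, alpha=1.0,
                                       batch_size=256, random_state=0)
    dico.fit(X)
    return dico                               # dico.components_ holds the basis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))              # stand-in; use a natural image
    basis = learn_overcomplete_basis(img)
    print(basis.components_.shape)            # (192, 64)
    # Sparse codes (the features fed to saliency detection) come from
    # basis.transform(new_patches_reshaped_to_rows).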
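
For conclusion (2), one standard way to obtain phase- and position-invariant responses in a simple-to-complex (two-stage) hierarchy is the energy model followed by local max pooling. The SciPy-based sketch below illustrates that kind of invariant representation rather than the authors' specific coding model; the Gabor parameters and pooling size are assumptions.

import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import maximum_filter

def gabor_pair(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    # Even/odd (cosine/sine) quadrature pair of oriented Gabor filters.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    even = envelope * np.cos(2 * np.pi * xr / wavelength)
    odd = envelope * np.sin(2 * np.pi * xr / wavelength)
    return even, odd

def invariant_response(img, theta, pool=7):
    # Layer 1: oriented filtering (simple-cell-like responses).
    even, odd = gabor_pair(theta=theta)
    e = convolve2d(img, even, mode="same", boundary="symm")
    o = convolve2d(img, odd, mode="same", boundary="symm")
    # Layer 2: phase invariance via the energy model, then local
    # position invariance via max pooling (complex-cell-like responses).
    energy = np.sqrt(e ** 2 + o ** 2)
    return maximum_filter(energy, size=pool)

rng = np.random.default_rng(0)
img = rng.random((128, 128))                  # stand-in for a test image
responses = [invariant_response(img, th)
             for th in np.linspace(0, np.pi, 4, endpoint=False)]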

These findings contribute to an in-depth understanding of the information-processing mechanism of the primary visual system.