  • Poster presentation
  • Open access

Action recognition using Natural Action Structures

Humans can detect, recognize, and classify natural actions in a very short time. How the visual system achieves this, and how machines can be made to understand human actions, have been the focus of neuroscientific studies and computational modeling over the last several decades. A key issue is which spatial-temporal features should be encoded and what characterizes their occurrences in natural actions. We propose a novel model in which Natural Action Structures (NASs) (see Figure 1), i.e., multi-size, multi-scale, spatial-temporal concatenations of local features, serve as the basic encoding units of natural actions. In this view, any action is a spatial-temporal concatenation of a set of NASs, which together convey a full range of information about natural actions. We took several steps to extract and identify these structures and selected a set of informative NASs to classify a range of human actions. We found that the NASs obtained in this way achieved significantly better recognition performance than low-level features [1], and that the performance was better than or comparable to that of the best current models [2, 3] (see Table 1).
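To make the pipeline concrete, the minimal Python sketch below builds NAS-like units by (i) densely sampling local spatial-temporal patches, (ii) quantizing them against a learned codebook, and (iii) concatenating neighboring codewords into composite structures that are histogrammed and fed to a linear classifier. This is only an illustrative sketch under stated assumptions: the abstract does not specify the feature extractor, the multi-size/multi-scale concatenation scheme, or the informative-structure selection step, so the grid sampling, k-means codebook, single concatenation scale, and hashing used here are stand-ins, not the authors' method.

```python
# Hypothetical sketch of a NAS-style encoding pipeline. All design choices
# below (dense grid sampling, k-means codebook, fixed group size, hashed
# histogram bins) are illustrative assumptions, not the published method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def local_features(video, patch=(4, 8, 8)):
    """Densely sample local spatial-temporal patches and flatten them.

    `video` is a (T, H, W) array; `patch` is (pt, ph, pw).
    Returns an (n_patches, patch_volume) array."""
    T, H, W = video.shape
    pt, ph, pw = patch
    feats = []
    for t in range(0, T - pt + 1, pt):
        for y in range(0, H - ph + 1, ph):
            for x in range(0, W - pw + 1, pw):
                feats.append(video[t:t+pt, y:y+ph, x:x+pw].ravel())
    return np.array(feats)

def nas_histogram(video, codebook, group=4, n_bins=512):
    """Encode a video as a histogram over NAS-like units: concatenations
    of `group` neighboring local-feature codewords (one scale only)."""
    words = codebook.predict(local_features(video))
    hist = np.zeros(n_bins)
    for i in range(0, len(words) - group + 1, group):
        # Hash each codeword concatenation into a fixed-size histogram bin.
        hist[hash(tuple(words[i:i+group])) % n_bins] += 1
    return hist / max(hist.sum(), 1)

# Toy data standing in for action clips (e.g., KTH/Weizmann videos).
videos = [rng.random((16, 32, 32)) for _ in range(20)]
labels = rng.integers(0, 4, size=20)  # 4 hypothetical action classes

codebook = KMeans(n_clusters=32, n_init=3, random_state=0)
codebook.fit(np.vstack([local_features(v) for v in videos]))

X = np.array([nas_histogram(v, codebook) for v in videos])
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In the full model, concatenations would be formed at multiple spatial-temporal sizes and scales, and only the most informative structures would be retained before classification; both steps are omitted here for brevity.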

Figure 1

Examples of NASs. Six frequent NASs compiled from each of four actions in the KTH and Weizmann datasets. Each NAS and its locations in the videos are indicated by the same color.

Table 1

Conclusions

NASs convey a variety of information about human actions and, because they are concatenations of features at multiple spatial-temporal scales, are robust against variations due to noise, occlusion, changes in scale, and a range of structural changes. The results suggest that NASs can serve as the basic encoding units of human actions and activities and may hold the key to understanding the human ability to recognize actions.

References

  1. Dollár P, Rabaud V, Cottrell G, Belongie S: Behavior recognition via sparse spatio-temporal features. IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS). 2005, 65-72.

  2. Yao A, Gall J, Van Gool LJ: A Hough transform-based voting framework for action recognition. IEEE Conference on Computer Vision and Pattern Recognition. 2010, 2061-2068.

  3. Niebles JC, Wang HC, Li FF: Unsupervised learning of human action categories using spatial-temporal words. International Journal of Computer Vision. 2008, 79: 299-318. 10.1007/s11263-007-0122-4.


Author information

Corresponding author

Correspondence to Xiaoyuan Zhu.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhu, X., Yang, Z. & Tsien, J.Z. Action recognition using Natural Action Structures. BMC Neurosci 13 (Suppl 1), P18 (2012). https://doi.org/10.1186/1471-2202-13-S1-P18
