
This article is part of the supplement: Sixteenth Annual Computational Neuroscience Meeting: CNS*2007

Open Access Poster presentation

Functional mechanisms of motor skill acquisition

Ashvin Shah1* and Andrew G Barto2

Author Affiliations

1 Neuroscience and Behavior Program, University of Massachusetts Amherst, Amherst, MA 01003, USA

2 Department of Computer Science, University of Massachusetts Amherst, Amherst, MA 01003, USA


BMC Neuroscience 2007, 8(Suppl 2):P203  doi:10.1186/1471-2202-8-S2-P203

Published: 6 July 2007

© 2007 Shah and Barto; licensee BioMed Central Ltd.

Poster presentation

As a motor skill is learned, behavior progresses from the execution of movements that appear to be separately generated to the recruitment of the movement sequence as a single entity. Movements come to be executed more quickly and require less attention, but behavior loses flexibility. Neural activity also changes: task-related neuron activity during a movement executed as part of a motor skill differs from activity during the same movement executed alone. In addition, cortical planning areas (e.g., frontal and prefrontal cortices) dominate control early in learning, while less cognitive areas (e.g., the striatum) dominate later. These changes in behavior and neural activity suggest that different control strategies and systems are employed as the motor skill develops.

We propose that this behavioral and neural progression is due to a transfer of control among three types of controllers: an explicit planner, which selects movements by considering the goal; a value-based controller, which selects movements based on estimated values of each choice; and a static-policy controller, in which a sensory cue directly elicits a movement, so no decision is made. Explicit planners require much computation (and thus time and attention) and pre-existing knowledge, but can make reasonable decisions with little experience and are flexible to changes in task and environment. Static-policy controllers require little computation and knowledge, but must be trained with experience and are inflexible. Value-based controllers have intermediate characteristics. Neural systems can implement these mechanisms: frontal cortices conduct planning, the striatum and prefrontal cortex estimate values, and the static-policy controller can be implemented by a direct mapping, such as thalamus (sensory) to striatum (motor). The progression of the behavior and neural systems associated with the progression of the controllers is similar to that seen in motor skill development.
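The three controller types can be sketched as minimal Python classes. This is an illustrative assumption about their interfaces, not the authors' implementation; the class names, the `latency` values, and the toy task model are all hypothetical.

```python
class ExplicitPlanner:
    """Selects a movement by searching a task model toward the goal.
    Computationally costly (high latency) but needs no training."""
    latency = 10  # decision time, arbitrary units (hypothetical)

    def __init__(self, model, goal):
        self.model = model  # dict: (state, action) -> next state
        self.goal = goal

    def select(self, state):
        # Breadth-first search; return the first action on a path to the goal.
        frontier = [(state, None)]
        seen = {state}
        while frontier:
            s, first = frontier.pop(0)
            if s == self.goal:
                return first
            for (s0, a), s1 in self.model.items():
                if s0 == s and s1 not in seen:
                    seen.add(s1)
                    frontier.append((s1, a if first is None else first))
        return None


class ValueBasedController:
    """Selects the movement with the highest learned value (here, Q-learning)."""
    latency = 5

    def __init__(self, actions, alpha=0.2, gamma=0.9):
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.q = {}  # (state, action) -> estimated value

    def select(self, state):
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, s, a, reward, s_next):
        # Standard one-step Q-learning update toward the bootstrapped target.
        best_next = max(self.q.get((s_next, a2), 0.0) for a2 in self.actions)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)


class StaticPolicyController:
    """A sensory cue directly elicits a movement: no decision is computed."""
    latency = 1

    def __init__(self):
        self.policy = {}  # cue -> movement

    def select(self, cue):
        return self.policy.get(cue)  # None until trained

    def update(self, cue, movement):
        self.policy[cue] = movement
```

The decreasing `latency` values encode the trade-off the abstract describes: planning demands the most computation, a cached cue-to-movement mapping the least.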

We test the validity of this scheme with computational models, based on biologically plausible mechanisms and architecture, in which an agent must execute a series of actions (analogous to movements), elicited by the controllers, to solve tasks. As each succeeding controller is trained, it comes to select a movement faster than the preceding controller, which then relinquishes control. By comparing model behavior to human and animal behavior in analogous tasks, we show that the model exhibits qualities indicative of motor skill acquisition. We also investigate how task specification and environmental conditions affect motor skill development and strategy, how the presence of existing motor skills affects the agent's strategy in solving other tasks, and the parallels between the resulting model behavior and human and animal behavior.
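One way to read the transfer of control described above is as a race: each trained controller proposes a movement, and the proposal from the controller with the lowest decision latency wins. The following self-contained sketch is a hypothetical illustration of that arbitration rule; the latencies, the cue, and the movement names are assumptions, not the authors' model.

```python
# Transfer of control as a latency race (illustrative assumption).
# Untrained controllers return None and are excluded from the race.

def run_trial(cue, planner, value_ctrl, static_ctrl):
    """Return (movement, controller_name) for one decision."""
    candidates = []
    for latency, name, ctrl in [(1, "static", static_ctrl),
                                (5, "value", value_ctrl),
                                (10, "planner", planner)]:
        move = ctrl(cue)
        if move is not None:
            candidates.append((latency, name, move))
    latency, name, move = min(candidates)  # fastest responder takes control
    return move, name

# Early in learning only the planner can produce a movement.
planner = lambda cue: "press"       # planning always yields an answer
value_ctrl = lambda cue: None       # value estimates not yet learned
static_policy = {}                  # cue -> movement, initially empty
static_ctrl = static_policy.get

move, who = run_trial("light_on", planner, value_ctrl, static_ctrl)
print(who)  # "planner": slow, deliberative control dominates at first

# After training, the static policy responds faster and takes over.
static_policy["light_on"] = "press"
move, who = run_trial("light_on", planner, value_ctrl, static_ctrl)
print(who)  # "static": the cue now directly elicits the movement
```

The same movement is produced in both trials; only the controlling system changes, mirroring the behavioral and neural progression the abstract describes.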

Previous models have investigated how different controllers participate in biological decision making [1] and motor control [2-4]. While each model has unique properties, they all show that the availability of different controllers improves learning and behavior.

References

  1. Daw ND, Niv Y, Dayan P: Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat Neurosci 2005, 8:1704-1711.

  2. Kawato M: Feedback-error-learning neural network for supervised motor learning. In Advanced Neural Computers. Edited by Eckmiller R. Elsevier, North-Holland; 1990:365-372.

  3. Hikosaka O, Nakahara H, Rand MK, Sakai K, Lu X, Nakamura K, Miyachi S, Doya K: Parallel neural networks for learning sequential procedures. Trends Neurosci 1999, 22:464-471.

  4. Rosenstein MT, Barto AG: Supervised actor-critic reinforcement learning. In Handbook of Learning and Approximate Dynamic Programming. Edited by Si J, Barto AG, Powell WB, Wunsch D. Wiley-IEEE Press, Piscataway, NJ; 2004:359-380.