Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning
Reza Shadmehr and Steven P. Wise
MIT Press, 544 pages, 165 illustrations
Web resources: muscle models, limb stiffness, kinematics, dynamics, and control policies.
Chapter 1: Introduction
Overview:
Understanding reaching and pointing movements depends on knowledge of physics,
biology, mathematics, robotics, and computer science. Physics plays a
fundamental role because reaching and pointing require your central nervous
system (CNS) to solve difficult mechanical problems: it must learn to control a
limb that consists of linked segments, which interact with each other as well
as with external objects as they accelerate in a gravitational field.
1-1 Why motor learning?
1-2 Why now?
1-3 Why a theoretical study?
1-4 Why a computational theory?
1-5 Why vertebrates, why primates, and why a two-joint arm?
Chapter 2: Our Moving History: The Evolution of the Vertebrate CNS
Overview:
The vertebrate CNS originated approximately 500–550 million years ago, in
surprisingly recognizable form. In large part, what it did for those animals
then, it does for you and other vertebrates today. A major role of the early
vertebrate CNS involved the guidance of swimming based on receptors that
accumulate information from a relatively large distance, mainly those for
vision and olfaction. The original vertebrate motor system later adapted into
the one that controls your reaching and pointing movements.
2-1 Birth of the motor system
2-2 Components of the motor system
2-3 A brief history of the motor system
2-4 First steps: inventing the vertebrate brain
2-5 More recent steps: cerebellum and motor cortex
2-6 Summary
Chapter 3: Burdens of History: Control Problems that Reach from the Past
Overview:
The evolutionary history of the CNS accounts for many of the problems that the
motor system must overcome in order to control reaching and pointing movements.
In learning to control such movements, the CNS must generate force slowly with
spring-like actuators (muscles) that act against a skeleton. It also must
analyze inputs from sensory transducers that provide feedback, but only after a
relatively long delay.
3-1 Limbs
3-2 Muscles
3-3 Nerves
Chapter 4: What Motor Learning Is, What Motor Learning Does
Overview:
Your evolutionary history has given you a motor system that learns, and motor
learning plays a fundamental role in reaching and pointing movements. Motor
learning takes many forms, including: (1) learning over generations that
becomes encoded in the genome, is epigenetically expressed as instincts and
reflexes, and contributes to learned (conditioned) reflexes; (2) learning new
skills to augment your inherited motor repertoire, and adapting those skills to
maintain performance at a given level; and (3) learning what movements to make
and when to make them. Motor learning allows you and other animals to achieve
appetitive goals and avoid harm. Reaching extends the range of goals available,
and pointing has special importance for communication.
4-1 Motor learning undefined
4-2 Motor learning over generations: Links to instincts and reflexes
4-3 Learning new skills and maintaining performance
4-4 Making decisions adaptively
4-5 Summary
Chapter 5: What Does the Motor Learning I: Spinal Cord and Brainstem
Overview:
All levels of your CNS contribute to motor learning, including those lowest in
its hierarchy: the spinal cord and brainstem. One highly specialized part of
the brainstem, the cerebellum, plays a particularly important role in learning
to reach and point, among other aspects of motor learning.
5-1 Spinal cord
5-2 Hindbrain
5-3 Cerebellum
5-4 Red nucleus
5-5 Superior colliculus
Chapter 6: What Does the Motor Learning II: Forebrain
Overview:
The forebrain comprises the diencephalon and telencephalon. The basal ganglia
play an important, but enigmatic, role in motor control and learning, including
reaching and pointing movements. The thalamus acts as a key node in recurrent,
distributed modules—often known as “loops”—which integrate the cerebral cortex
into subcortical motor-control systems. As in other advanced mammals, your
cerebral cortex makes up most of your CNS, and your neocortex makes up most of
your brain. Two large parts of it, the motor cortex and the posterior parietal
cortex (PPC), make important contributions to reaching and pointing.
6-1 Basal ganglia
6-2 Thalamus
6-3 Cortical organization I: General considerations
6-4 Cortical organization II: Cortical fields for reaching and pointing
Chapter 7: What Generates Force and Feedback
Overview:
Muscles convert chemical energy into force and act like an integrated system of
springs, dampers, and force generators. This chapter describes the relationship
between linear forces, as produced by muscles, and torques generated in a
two-joint arm. Muscle fibers not only generate force, but also give rise to
feedback signals that convey information about forces and muscle lengths to the
CNS.
7-1 Biological versus mechanical actuators
7-2 Muscle mechanisms
7-3 Motor units
7-4 A muscle model
7-5 Converting force to torque
7-6 Muscle afferents
7-7 Muscle afferents in action
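The overview describes a muscle as an integrated system of springs, dampers, and force generators. A minimal linear sketch of that idea follows; the rest length, stiffness, damping, and force constants are illustrative assumptions, not values from the book's muscle model:

```python
def muscle_force(length, velocity, activation,
                 rest_length=0.10, stiffness=500.0,
                 damping=20.0, max_active_force=100.0):
    """Force (N) from a muscle modeled as a spring, a damper, and an
    activation-dependent force generator acting in parallel.

    length, velocity: muscle length (m) and lengthening velocity (m/s)
    activation: neural drive, scaled 0 to 1
    """
    spring_force = stiffness * (rest_length - length)   # restoring, spring-like
    damper_force = -damping * velocity                  # opposes lengthening
    active_force = max_active_force * activation        # scaled by neural drive
    return spring_force + damper_force + active_force
```

At rest length with no motion, force is set entirely by activation; stretching the muscle adds a restoring component, and lengthening velocity adds damping.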
Chapter 8: What Maintains Limb Stability
Overview:
Pairs of muscles act against each other. This antagonist architecture produces
an equilibrium point—a balance of forces—which helps stabilize the limb. The
passive, spring-like properties of your limb promote its stability, but your
CNS also uses reflexes to stabilize the limb. These mechanisms maintain your
hand at a reach target or in a given direction of pointing.
8-1 Equilibrium points from antagonist muscle activity
8-2 Restoring torques from length–tension properties
8-3 Stiffness from coactivation
8-4 Reaching without feedback in monkeys
8-5 Equilibrium points from artificial stimulation
8-6 Rapid movements from sequential muscle activation
8-7 Passive properties produce stability
8-8 Reflexes produce stability
8-9 Reaching without feedback in people
8-10 Passive properties and reflexes combined
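The balance of forces described in the overview can be illustrated with two opposing spring-like muscles acting at one joint. A sketch assuming linear springs, with stiffnesses k and rest angles in arbitrary units:

```python
def equilibrium_angle(k_flexor, rest_flexor, k_extensor, rest_extensor):
    """Joint angle at which the torques of two opposing spring-like
    muscles cancel: the stiffness-weighted average of their rest angles."""
    return ((k_flexor * rest_flexor + k_extensor * rest_extensor)
            / (k_flexor + k_extensor))
```

Coactivation (scaling both stiffnesses together) stiffens the joint without moving the equilibrium point, while an imbalance between the two muscles shifts the equilibrium point toward the stiffer muscle's rest angle.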
Part II. Computing Locations and Displacements
Chapter 9: Computing End-Effector Location I: Theory
Overview:
Collectively, your hand and other things controlled by it are end effectors. In
order to control a reaching movement, the CNS computes the difference between
the location of a target and the current location of the end effector. This
chapter considers the problem of computing end-effector location from sensors
that measure muscle lengths or joint angles, a computation called forward
kinematics.
9-1 Reaching and pointing require sensory feedback
9-2 Kinematics and dynamics
9-3 Degrees of freedom and coordinate frames
9-4 End effectors and adaptive mapping
9-5 Predicting the location of an end effector in visual coordinates
9-6 Predicting end-effector location with proprioception: Virtual robotics
9-7 Predicting end-effector location with proprioception: Computations
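The forward-kinematics computation named in the overview has a simple closed form for a planar two-joint arm. A sketch; the segment lengths are illustrative assumptions:

```python
import math

def forward_kinematics(shoulder_angle, elbow_angle,
                       upper_arm=0.30, forearm=0.33):
    """Hand location (x, y) from joint angles for a planar two-joint arm.
    The forearm's direction in space is the sum of the shoulder and
    elbow angles; segment lengths are in meters."""
    x = (upper_arm * math.cos(shoulder_angle)
         + forearm * math.cos(shoulder_angle + elbow_angle))
    y = (upper_arm * math.sin(shoulder_angle)
         + forearm * math.sin(shoulder_angle + elbow_angle))
    return x, y
```

With both angles at zero the arm is fully extended along the x-axis; flexing the elbow by 90° swings only the forearm.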
Chapter 10: Computing End-Effector Location II: Experiment
Overview:
The CNS computes an estimate of limb configuration through an alignment of
information from various sensory modalities, including proprioception and
vision. This computation appears to rely on neurons in which discharge varies
monotonically and approximately linearly with location of the end effector in
the workspace.
10-1 Role of proprioceptive signals in end-effector localization
10-2 Introduction to frontal and parietal neurophysiology
10-3 Encoding of limb configuration in the CNS
10-4 Errors in reaching due to lesions of the PPC
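The monotonic, approximately linear encoding described in the overview can be caricatured in a few lines. The baseline rate and slopes below are invented for illustration only:

```python
def discharge_rate(hand_x, hand_y,
                   baseline=20.0, slope_x=0.5, slope_y=-0.3):
    """Toy neuron whose firing rate (spikes/s) varies monotonically and
    approximately linearly with end-effector location (cm) in the
    workspace; rates cannot fall below zero."""
    return max(0.0, baseline + slope_x * hand_x + slope_y * hand_y)
```

The rectification at zero is one reason the relationship is only approximately linear over the workspace.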
Chapter 11: Computing Target Location
Overview:
In order to control a reaching or pointing movement, your CNS computes the
difference between the location of a target and the current location of the end
effector. In computing target location, your CNS combines information about the
location of the target on the retina with information about eye and head orientation.
Neurons in the PPC encode this information in a multiplicative way.
11-1 Computing target and end-effector locations in a common frame
11-2 Computing target location in a vision-based frame
11-3 Combining retinal location with eye orientation through gain fields
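The multiplicative combination named in the overview, a gain field, can be sketched as a retinal tuning curve whose amplitude is scaled by eye orientation. All parameter values here are illustrative assumptions:

```python
import math

def gain_field_rate(retinal_loc, eye_orientation,
                    preferred_loc=0.0, tuning_width=10.0,
                    gain_slope=0.05, gain_offset=1.0):
    """Toy PPC neuron: Gaussian tuning for target location on the retina
    (degrees), multiplied by a linear gain that depends on eye
    orientation (degrees)."""
    tuning = math.exp(-(retinal_loc - preferred_loc) ** 2
                      / (2.0 * tuning_width ** 2))
    gain = max(0.0, gain_offset + gain_slope * eye_orientation)
    return tuning * gain
```

Eye orientation scales the response without shifting the preferred retinal location, which is the signature of a gain field rather than an additive combination.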
Chapter 12: Computing Difference Vectors I: Fixation-Centered Coordinates
Overview:
In order to control a reaching or pointing movement, your CNS compares an
estimate of end-effector location to an estimate of target location. Neural
networks subtract these estimates to represent a difference vector for the end
effector. The difference vector represents a movement plan for reaching the
target. For reaching and pointing movements in primates, the CNS represents
both targets and end effectors in a visual coordinate frame, with the fovea as
its origin, termed fixation-centered.
12-1 Planning reaching and pointing with difference vectors
12-2 Shoulder-centered versus fixation-centered coordinates
12-3 Planning in fixation-centered coordinates: experiment
12-4 Planning in fixation-centered coordinates: theory
12-5 Localizing an end-effector in fixation-centered coordinates
12-6 Encoding end-effector location in fixation-centered coordinates
12-7 Issues concerning fixation-centered coordinates
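The subtraction described in the overview is simple to state explicitly. A sketch, with locations represented as 2-D fixation-centered coordinates (the tuple representation is an assumption for illustration):

```python
def difference_vector(target, end_effector):
    """High-level movement plan: the displacement that would carry the
    end effector to the target, with both locations expressed in the
    same fixation-centered coordinate frame."""
    return tuple(t - e for t, e in zip(target, end_effector))
```

A reach ends when this vector is driven to zero; if the two locations were expressed in different frames, the subtraction would be meaningless, which is why a common frame matters.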
Chapter 13: Computing Difference Vectors II: Parietal and Frontal Cortex
Overview:
The difference vector represents a high-level plan for movement, which
specifies a displacement of an end effector from its current location to the
target’s location. However, several questions remain about the nature of this
plan. Does it correspond to a movement that your CNS will make with the hand,
with the eye, or with some other part of your body? Does it reflect a movement
the CNS might make or definitely will make? And what parts of the CNS play the
most direct role in formulating this plan? This chapter presents some evidence
that areas in the PPC, acting in close concert with the frontal motor areas,
participate in computing the motor plan.
13-1 Computing a movement plan
13-2 Planning potential movements but not executing them
13-3 Planning the next movement in a sequence
Chapter 14: Planning Displacements and Forces
Overview:
It seems likely that motor areas of the frontal cortex—functioning as parts of
cortical–cerebellar and cortical–basal ganglionic modules—transform the
high-level motor plan for reaching and pointing, corresponding to a difference
vector, into representations of movement in terms of a low-level motor plan (in
terms of joint-angle changes and force commands).
14-1 Representing the difference vector in the motor areas of the frontal lobe
14-2 Population vectors, force coding, and coordinate frames in M1
Part III. Skills, Adaptation, and Trajectories
Chapter 15: Aligning Vision and Proprioception I: Adaptation and Context
Overview:
To represent end-effector location in fixation-centered coordinates, your CNS
must align proprioceptive inputs about joint angles and muscle lengths with
visual inputs about where the end effector appears in fixation-centered space
for that pattern of proprioceptive inputs. The CNS needs to recalibrate these
computational maps whenever something alters the visual feedback of
end-effector location.
15-1 Newts cannot adapt to rotation of their eyes
15-2 Primates adapt to rotation of the visual field
15-3 Prism adaptation requires modification of both location and displacement maps
15-4 Long-term memories and learning to switch on context
15-5 Prism adaptation in virtual robotics
15-6 Consequences of planning in vision-based coordinates
15-7 Moving an end-effector attached to the hand
15-8 Internal models of kinematics
Chapter 16: Aligning Vision and Proprioception II: Mechanisms and Generalization
Overview:
This chapter examines some of the neural systems involved in adapting and
learning the alignments between visual and proprioceptive information and some
of the consequences of their mechanisms.
16-1 Neural systems involved in adapting alignments between proprioception and vision
16-2 Generalization of adaptation to altered visual feedback
Chapter 17: Remapping, Predictive Updating, and Autopilot Control
Overview:
Reaching and pointing movements involve continuous monitoring of target- and
end-effector location in fixation-centered coordinates with the goal of reducing
the difference vector to zero. Your CNS recomputes the kinematic maps that
estimate target- and end-effector location as the eyes, targets and end
effector move. Because this remapping depends on a copy of motor commands to
the eyes, the head, and the arm, your CNS can update these estimates
predictively. Systems that predict consequences of motor commands in sensory
coordinates are called forward models. Forward models may also underlie
your ability to imagine movements.
17-1 Remapping target location
17-2 Predictive remapping of target and end-effector location with efference copy
17-3 Remapping end-effector location
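The predictive updating described in the overview can be sketched for the simplest case, a planned eye movement: an efference copy of the saccade command lets the CNS shift a target's eye-centered location before any new visual feedback arrives. The vector representation is an assumption for illustration:

```python
def remap_for_saccade(location_eye_centered, planned_saccade):
    """Forward-model update: when the eye is about to move by
    planned_saccade, every location held in eye-centered coordinates
    will shift by the opposite amount, so it can be updated in advance
    from the efference copy alone."""
    return tuple(loc - move for loc, move in
                 zip(location_eye_centered, planned_saccade))
```

A saccade made directly to the target predicts a post-saccadic eye-centered location of zero, i.e., on the fovea.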
Chapter 18: Planning to Reach or Point I: Smoothness in Visual Coordinates
Overview:
A reaching or pointing movement can entail an infinite number of trajectories
from the end effector’s starting location to the target. However, for most
reaching and pointing movements, your CNS plans the movement so that the end
effector moves along just one of these trajectories: an approximately straight
path with a smooth, unimodal velocity profile. Given the choice between a
trajectory that looks straight in visual coordinates and one that is straight
in reality, your CNS generates a visually straight trajectory.
18-1 Regularity in reaching and pointing
18-2 Description of trajectory smoothness: minimum jerk
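The smooth, unimodal velocity profile described in the overview is captured by the minimum-jerk trajectory, whose closed form for a point-to-point movement is standard:

```python
def min_jerk_position(t, duration, start, goal):
    """Position along a minimum-jerk trajectory:
        x(t) = x0 + (xg - x0) * (10 s^3 - 15 s^4 + 6 s^5),  s = t/duration.
    Velocity and acceleration are zero at both ends, and velocity
    peaks once, at mid-movement."""
    s = t / duration
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5
    return start + (goal - start) * shape
```

The same scalar shape function applies to each coordinate of a straight path, which is why the resulting trajectory is both straight and smooth.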
Chapter 19: Planning to Reach or Point II: A Next-State Planner
Overview:
Smooth hand trajectories may be an emergent property of a feedback control
system that plans for a desired change in the limb’s state based on an estimate
of its current location and goal. Called a next-state planner, such a system
allows the CNS to respond smoothly, as if on autopilot control, to unexpected
changes in goals or perturbations to the limb. Evidence indicates that people
carrying the gene for Huntington’s disease, a disorder primarily of the basal
ganglia, do not make these computations efficiently.
19-1 The problem of planning
19-2 Transforming a displacement vector into a trajectory
19-3 The next-state planner
19-4 Minimizing the effects of signal-dependent noise
19-5 Online correction of self-generated and imposed errors in Huntington’s disease
19-6 Transforming plans into trajectories: the problem of redundancy
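The next-state planner described in the overview can be sketched as a feedback rule: at each control step, plan a small displacement proportional to the remaining difference vector. The gain and step count below are illustrative assumptions, and the state is reduced to one dimension for clarity:

```python
def plan_next_state(current, target, gain=0.2):
    """One step of a next-state planner: plan a displacement that is a
    fixed fraction of the remaining distance to the goal."""
    return current + gain * (target - current)

def simulate_reach(start, target, gain=0.2, steps=30):
    """Iterate the planner. Smoothness and online correction emerge
    because every step replans from the current state estimate."""
    state, path = start, [start]
    for _ in range(steps):
        state = plan_next_state(state, target, gain)
        path.append(state)
    return path
```

If the target jumps mid-movement, simply changing `target` on later steps blends the correction into the ongoing trajectory, the "autopilot" behavior the chapter describes.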
Part IV. Predictions, Decisions, and Flexibility
Chapter 20: Predicting Force I: Internal Models of Dynamics
Overview:
In planning a reaching or pointing movement, your CNS relies on an internal
model that predicts the forces needed to reach the target. This internal model
maps desired limb states—for example, the limb’s configuration and the rate at
which that configuration changes—to forces.
20-1 Internal models of dynamics
20-2 Correlates of adapting to altered dynamics
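The state-to-force map described in the overview can be sketched as a linear model adapted by an error-driven (delta) rule. The environment below, a viscous force proportional to velocity, stands in for force-field experiments and is an illustrative assumption, as is the single training state:

```python
def predicted_force(state, weights):
    """Internal model of dynamics: a linear map from the limb state
    (position, velocity) to the force expected during movement."""
    return sum(w * s for w, s in zip(weights, state))

def adapt(weights, state, experienced_force, rate=0.1):
    """After each movement, adjust the map in proportion to the
    prediction error and the state that produced it (a delta rule)."""
    error = experienced_force - predicted_force(state, weights)
    return [w + rate * error * s for w, s in zip(weights, state)]

# Repeated movements through a viscous field F = 2 * velocity
# (one training state, for simplicity)
weights = [0.0, 0.0]
for _ in range(200):
    state = (0.3, 1.0)                     # position (m), velocity (m/s)
    weights = adapt(weights, state, 2.0 * state[1])
```

After enough movements the model's force prediction at the trained state matches the field, so the perturbing force can be anticipated rather than corrected after the fact.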
Chapter 21: Predicting Force II: Representation and Generalization
Overview:
In computing an internal model of dynamics, your CNS maps limb states to
forces. The patterns of generalization for this kind of learning suggest that
in computing this map, your CNS represents limb states in intrinsic coordinates
such as joint angles or muscle lengths.
21-1 The coordinate system of the internal model of dynamics
21-2 Computing an internal model with a population code
21-3 Estimating generalization functions from trial-to-trial changes in movement
21-4 A not-so-invariant desired trajectory
Chapter 22: Predicting Force III: Consolidating a Motor Skill
Overview:
Passage of time alters the representation of internal models. With sleep and
with passage of time, the functional properties of motor skills change.
22-1 Consolidation
22-2 A role for time and sleep in consolidation of motor memories
Chapter 23: Predicting Inputs and Correcting Errors I: Filtering and Teaching
Overview:
The neural mechanisms for predicting inputs and correcting errors play a
central role in motor learning. Although relatively little is known about the
mechanisms of motor learning for reaching and pointing, more is known about
those for pavlovian and instrumental learning. These forms of learning depend
on the cerebellum and basal ganglia, and they can serve as models for other
forms of motor learning. The cerebellum and basal ganglia both function to
correct errors, but of different kinds. The cerebellum functions to correct
errors in the prediction of sensory signals and perhaps neural signals, more
generally. One consequence of these predictions is the production of motor
commands that anticipate and ameliorate the potentially damaging effects of
those stimuli. Learning in the basal ganglia, on the other hand, is driven by
an error in predicted biological value. Associating biological value with the
state of the system aids in deciding what to do in a given context, e.g., the
selection of feedback control policies for performing an action.
23-1 Cancellation of predicted signals by adaptive filtering
23-2 Predicting and responding to a stimulus
23-3 Similar learning mechanisms in basal ganglia and cerebellum
23-4 A training signal for the basal ganglia
23-5 Why does Huntington’s disease result in disorders in reaching?
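The error in predicted biological value that the overview attributes to basal-ganglia learning can be sketched with a classic error-correcting update (a Rescorla-Wagner-style rule; the learning rate is an illustrative assumption):

```python
def update_value(value, reward, learning_rate=0.1):
    """Error-correcting learning: move the predicted value of a state
    toward the obtained reward, in proportion to the prediction error."""
    prediction_error = reward - value
    return value + learning_rate * prediction_error

# Repeated pairings of a state with a reward of 1.0
value = 0.0
for _ in range(100):
    value = update_value(value, 1.0)
```

As the value estimate converges, the prediction error shrinks toward zero, so the training signal marks unexpected value rather than value itself.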
Chapter 24: Predicting Inputs and Correcting Errors II: Learning from Reflexes
Overview:
When stimuli engage your reflexes, your CNS generates signals that guide
learning in the cerebellum. In some cases, these stimuli are externally
generated, like an air puff to the eye, which produces the eye-blink reflex
discussed in Chapter 23. In many cases, however, the inputs result from your
own actions. For example, motor commands for moving your forearm around
the elbow produce torques that, due to inertial properties of the arm, also
move your upper arm around the shoulder. If your goal is to move only your
forearm, the movements of your upper arm are motor errors. The cerebellum
learns to predict these errors and to produce motor commands that compensate
for them. The cerebellum plays an essential role in learning internal models of
dynamics.
24-1 Climbing fibers encode a signal that represents motor error
24-2 Predictively correcting motor commands
Chapter 25: Deciding Flexibly on Goals, Actions, and Sequences
Overview:
There is more to life than reaching and pointing directly to stimuli, one at a
time. Your reaching and pointing movements require decisions and choices. You
must compare and contrast alternative targets and control policies (goals), and
you must evaluate potential goals—or a sequence of goals—among several
possibilities. You must decide, choose, and then act, all the while suppressing
rejected alternatives and those held in abeyance. In addition, your reaching
and pointing movements need not be aimed directly at the stimuli that instruct
and guide your action. Your CNS can guide those and other movements by external
cues, by internal cues, and by combinations of both. And you can learn to reach
and point places because of rules, strategies, and abstract goals.
25-1 Deciding on a target
25-2 Choosing among multiple potential targets of movement
25-3 Deciding on multiple movements
25-4 Action selection based on estimates of state
25-5 Moving to places other than a stimulus: Standard mapping vs. nonstandard mapping
25-6 Summary
Part V. Glossary and Appendices
Glossary
Appendix A. Biology Refresher
Appendix B. Anatomy Refresher
Appendix C. Mathematics Refresher
Appendix D. Physics Refresher
Appendix E. Neurophysiology Refresher