Two visual systems re-viewed
To be able to grasp an object successfully, for example, it is essential that the brain compute the actual size of the object, and its orientation and position with respect to the observer (i.e. in egocentric coordinates). We also argued that the time at which these computations are performed is equally critical. Observers and goal objects rarely stay in a static relationship with one another and, as a consequence, the egocentric coordinates of a target object can often change radically from moment to moment. For these reasons, it is essential that the required coordinates for action be computed in an egocentric framework at the very moment the movements are to be performed.
Perceptual processing needs to proceed in a quite different way. Vision for perception does not require the absolute size of objects or their egocentric locations to be computed. In fact, such computations would be counter-productive. It would be better to encode the size, orientation, and location of objects relative to the other, preferably larger, objects that are present. Such a scene-based frame of reference permits a perceptual representation of objects that transcends particular viewpoints, while preserving information about spatial relationships (as well as relative size and orientation) as the observer moves around.
These considerations led us to predict that normal observers would show, under appropriate conditions, clear differences between perceptual reports and object-directed actions when interacting with pictorial illusions, particularly size-contrast illusions. This counter-intuitive prediction was initially based on the simple assumption that the perceptual system could not avoid computing the size of a target object in relation to the size of neighbouring objects, whereas visuomotor networks would need to compute the true size of the object. This prediction was confirmed in a study by Aglioti, Goodale, and DeSouza (1995), which showed that the scaling of grip aperture in-flight was remarkably insensitive to the Ebbinghaus illusion, in which a target disc surrounded by smaller circles appears to be larger than the same disc surrounded by larger circles. In short, maximum grip aperture was scaled to the real, not the apparent, size of the target disc.
According to our two visual systems model, vision for action works only in real time and is not normally engaged unless the target object is visible during the programming phase, that is, when bottom-up visual information is being converted into the appropriate motor commands. When there is a delay between stimulus offset and the initiation of the grasping movement, the programming of the grip would be driven by a memory of the target object that was originally derived from a perceptual representation of the scene, created moments earlier by mechanisms in the ventral stream. Thus, we would predict that memory-guided grasping would be affected by the illusory display, because the stored information about the target's dimensions would reflect the earlier perception of the illusion. In fact, a range of studies has shown that this is exactly the case. In the case of the dorsal stream this is not so: indeed, the coding of the target has to be as far as possible absolute, and needs to be referred to an egocentric rather than a scene-based framework. Non-target visual information needs to impact dorsal-stream processing dynamically, thereby influencing the moment-to-moment kinematics of the action. It seems likely that this happens without the visual coding of target information being itself modulated: in other words, both target and non-target information each modulate motor control directly and quasi-independently.
Matters are quite different in the dorsal stream, where the peripheral field is relatively well represented. Indeed, some dorsal-stream areas, such as the parieto-occipital area (PO), show almost no cortical magnification at all, with a large amount of neural tissue devoted to processing inputs from the peripheral visual fields.