Integration of information across the visual scene in the control of prehension

P. B. Hibbard, P. Scarfe, S. J. Watt

Research output: Contribution to journal › Abstract › peer-review

Abstract

In order to reach out and grasp an object, accurate information about its three-dimensional shape, size, and location is required. The accuracy with which this information can be recovered can, in principle, be improved by integrating across extended regions of the visual scene, so as to incorporate information from objects other than the one to be picked up. We used a simple prehension task to investigate whether the presence of another object altered the apparent shape and distance of an object to be grasped. Participants grasped a static, binocularly viewed elliptical cylinder, presented either alone or in the presence of a second cylinder. This flanking object was either static or rotating, and was viewed either monocularly or binocularly. The presence of the flanking object had a significant effect on both the grip apertures and the wrist velocities of reaches to the target object. These results demonstrate that, when controlling prehension, observers integrate information across objects, rather than programming their actions on the basis of their view of the target object alone. Such a strategy is optimal, since it makes use of all the available information and would therefore be expected to deliver more accurate and reliable estimates of the shape and location of the target.
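The abstract does not specify a quantitative model, but the sense in which this strategy is "optimal" is commonly formalised as reliability-weighted (maximum-likelihood) cue combination; the equations below are an illustrative sketch under that assumption, not part of the original report.

\[
\hat{S} = w_1 \hat{S}_1 + w_2 \hat{S}_2, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2},
\]
\[
\sigma_{\hat{S}}^2 = \frac{\sigma_1^2 \, \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \;\le\; \min\!\left(\sigma_1^2, \sigma_2^2\right),
\]

where \(\hat{S}_1\) and \(\hat{S}_2\) are hypothetical independent estimates of a scene property (for example, the target's distance) derived from the target and flanking objects, with variances \(\sigma_1^2\) and \(\sigma_2^2\). Under this model the combined estimate is never less reliable than the better single estimate, which is why integrating information across objects would be expected to yield more accurate and reliable estimates.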
Original language: English
Pages (from-to): 178-178
Number of pages: 1
Journal: Perception
Volume: 35
Issue number: ECVP Abstract Supplement
Publication status: Published - 2006
