Project: Standard

Project Details


Because our eyes are positioned side by side in our heads, a large portion of our visual field can be seen through both eyes at the same time. But each eye gets a slightly different view of the world, because it is in a slightly different location, and hence each eye's image is slightly different. These small differences between the eyes' images are called binocular disparities, and it has been known for about 150 years that our brains use disparity to help us see depth and shape in the world. In fact, our brains are exquisitely sensitive to disparity: we can detect depth differences as small as the thickness of a sheet of paper at twice arm's reach.

Binocular disparity is also potentially useful for perceiving how objects move in depth. During object motion in three dimensions (3-D), the binocular disparity of the object will change. However, there will also be differences in the motion signals in each eye. For example, if an object moves directly towards the nose, that object's image will move in one direction in one eye, and in the opposite direction in the other eye. If we want to understand how the brain sees 3-D object motion, we must first find out whether it uses binocular disparity, motion differences between the eyes, or both. Such knowledge is critically important for furthering our understanding of basic brain processes. It also has potential application to the enhancement of virtual reality (VR) technology: VR systems can be designed to exploit what the visual system is most sensitive to.

Over the last 10 years, several research groups have attempted to test which visual information is used by the brain, using simple visual tests that attempt to isolate the two sources of information. The results have been controversial. One problem is that complex tricks have to be employed to eliminate one of the sources of information (in the real world, both are always present).
These manipulations result in visual stimuli that are noisy and that may not have completely eliminated all the expected information. In this project we propose not just to test human vision with such stimuli, but also to design models called 'ideal observers'. These are mathematical models designed to use all the information available in a stimulus. For example, an ideal observer for detecting changing disparity will use all the disparity information within a visual stimulus; indeed, applying the model to a stimulus is a way of testing what information that stimulus contains. Using these models, we can design visual stimuli that contain both sources of information and calculate how well an ideal observer could use that information to see 3-D motion. We then compare human performance with the model, rather than simply testing whether people can see the motion or not. For the first time, we will be able to test the relative importance of binocular disparity and motion information for seeing 3-D object motion.
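To make the ideal-observer idea concrete, here is a minimal sketch. It is not the project's actual model: it assumes a generic two-interval detection task with additive Gaussian noise, where the ideal rule is simply to choose the interval with the larger measurement, and its accuracy can be checked against the textbook signal-detection prediction Phi(d'/sqrt(2)) with d' = signal/sigma. All numbers are illustrative assumptions.

```python
import math
import random

random.seed(1)

def ideal_observer_2afc(signal, sigma, n_trials=20000):
    """Simulated ideal observer for a two-interval detection task with
    additive Gaussian noise: one interval contains signal + noise, the
    other noise alone, and the ideal rule picks the larger measurement."""
    correct = 0
    for _ in range(n_trials):
        signal_interval = random.gauss(signal, sigma)
        noise_interval = random.gauss(0.0, sigma)
        correct += signal_interval > noise_interval
    return correct / n_trials

def predicted_pc(signal, sigma):
    """Analytic ideal performance: Phi(d'/sqrt(2)), where d' = signal/sigma
    and Phi is the standard normal CDF; this simplifies to 0.5*(1+erf(d'/2))."""
    d_prime = signal / sigma
    return 0.5 * (1.0 + math.erf(d_prime / 2.0))

pc_sim = ideal_observer_2afc(signal=1.0, sigma=1.0)
pc_theory = predicted_pc(1.0, 1.0)  # about 0.76 for d' = 1
print(abs(pc_sim - pc_theory) < 0.02)
```

Comparing human accuracy to such a ceiling (rather than to chance) is what lets one ask how much of the available information people actually use.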

Key findings

This project explored the importance of changes in binocular disparity (CD), inter-ocular velocity differences (IOVD) and eye-movement information (EM) in the perception of motion in three dimensions.
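The geometric relationship between the CD and IOVD cues can be sketched with a toy calculation. This is an illustration only, not the project's stimuli: it assumes a simplified pinhole-eye geometry and an interocular separation of 6.5 cm (an assumed, typical value), and shows that for an object approaching the nose the disparity grows (the CD cue) while the two eyes' images move in opposite directions (the IOVD cue).

```python
import math

I = 0.065  # assumed interocular separation in metres (illustrative)

def eye_angles(x, z):
    """Horizontal image angle (radians) of a point at (x, z), for eyes at
    x = -I/2 (left) and x = +I/2 (right), looking along the +z axis."""
    left = math.atan2(x + I / 2, z)
    right = math.atan2(x - I / 2, z)
    return left, right

# An object on the midline approaching the nose: z shrinks from 1.0 m to 0.9 m.
l1, r1 = eye_angles(0.0, 1.0)
l2, r2 = eye_angles(0.0, 0.9)

# CD cue: binocular disparity (left minus right angle) grows as the object nears.
disparity_grows = (l2 - r2) > (l1 - r1)

# IOVD cue: the per-eye image displacements have opposite signs.
dl, dr = l2 - l1, r2 - r1
opposite_motion = dl > 0 and dr < 0

print(disparity_grows, opposite_motion)  # True True
```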

We used two-frame motion and random-dot noise to deliver equivalent strengths of CD and IOVD information, to determine which source of information would be dominant under such sparse visual conditions. Whilst CD information could be used precisely, only one participant was able to consistently perceive the displacement for stimuli containing only IOVD. When stimuli are specifically designed to provide equivalent two-frame motion or disparity change, few participants can reliably detect displacement when IOVD is the only cue (Nefs & Harris, 2010, Perception, 39(6), 727-744). This challenges the notion that IOVD is involved in the discrimination of direction of displacement in two-frame motion displays.

We used a technique rare in human vision research, which exploits the natural variation between people's behaviour, and found that individual differences in perception suggested there are separate processing mechanisms for CD and IOVD information (Nefs, O'Hare & Harris, 2010, Front. Perc. Science, doi:10.3389/fpsyg.2010.00155).

Finally, this project also explored the role of non-visual information from vergence eye movements that occur when the eyes fixate a target moving in depth. We discovered that perception of objects moving in depth is altered when the eyes verge to follow a target (Nefs & Harris, 2007, Experimental Brain Research, 183, 313-322; Nefs & Harris, 2008, Journal of Vision, 8(3):8, 1-16), and that the eye-movement cue alone can be used to support the perception of motion in depth (Welchman, Harris & Brenner, 2009, Vision Research, 49, 782-789).
Effective start/end date: 1/10/05 – 30/09/09


  • EPSRC: £257,960.00

