Perceptual learning with spatial uncertainties

Thomas U. Otto*, Michael H. Herzog, Manfred Fahle, Li Zhaoping

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In perceptual learning, stimuli are usually assumed to be presented at a constant retinal location during training. However, due to tremor, drift, and microsaccades of the eyes, the same stimulus covers different retinal positions on sequential trials. Because of these variations, the mathematical decision problem changes from linear to non-linear (Zhaoping, Herzog, & Dayan, 2003). This non-linearity implies three predictions. First, varying the spatial position of a stimulus within a moderate range does not deteriorate perceptual learning. Second, improvement for one stimulus variant can yield negative transfer to other variants. Third, interleaved training with two stimulus variants yields no or strongly diminished learning. Using a bisection task, we found psychophysical evidence for the first and third predictions. However, contrary to the second prediction, no negative transfer was found. © 2006 Elsevier Ltd. All rights reserved.
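The linear-versus-non-linear distinction can be made concrete with a toy simulation. The sketch below is not taken from the paper: the stimulus encoding (Gaussian activation bumps on a 1-D grid), the offset and jitter parameters, and the two read-outs (a least-squares linear classifier versus a max-over-positions template match) are all illustrative assumptions. It merely shows the qualitative point that retinal jitter degrades a fixed linear read-out of a bisection offset, whereas a decision that searches over candidate positions, which is non-linear in the unit responses, remains usable.

```python
# Illustrative sketch only: toy bisection stimuli with positional jitter.
# All parameters and read-out choices are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
N = 64                      # number of "units" tiling space
x = np.arange(N)

def stimulus(offset, shift, noise=0.2):
    """Three Gaussian bumps: outer bars at +/-15, middle bar displaced by
    `offset`; the whole pattern is translated by `shift` (retinal jitter)."""
    centers = np.array([-15.0, offset, 15.0]) + N / 2 + shift
    r = sum(np.exp(-0.5 * ((x - c) / 2.0) ** 2) for c in centers)
    return r + noise * rng.standard_normal(N)

def make_set(n, jitter):
    """Random trials: label 0 = middle bar offset left, 1 = offset right."""
    X, y = [], []
    for _ in range(n):
        label = int(rng.integers(2))
        offset = (-1.0, 1.0)[label] * 2.0
        shift = rng.uniform(-jitter, jitter)
        X.append(stimulus(offset, shift))
        y.append(label)
    return np.array(X), np.array(y)

def linear_accuracy(jitter):
    """Fixed linear read-out (least squares), trained and tested with jitter."""
    Xtr, ytr = make_set(2000, jitter)
    Xte, yte = make_set(2000, jitter)
    w, *_ = np.linalg.lstsq(Xtr, 2.0 * ytr - 1.0, rcond=None)
    return np.mean((Xte @ w > 0) == yte)

def template_accuracy(jitter):
    """Non-linear decision: correlate with noise-free left/right templates at
    every candidate shift and take the best match (a max over positions)."""
    shifts = np.arange(-8, 9)
    templates = {lab: [stimulus((-1.0, 1.0)[lab] * 2.0, s, noise=0.0)
                       for s in shifts] for lab in (0, 1)}
    Xte, yte = make_set(2000, jitter)
    preds = [int(max(r @ t for t in templates[1]) >
                 max(r @ t for t in templates[0])) for r in Xte]
    return np.mean(np.array(preds) == yte)

for jitter in (0.0, 6.0):
    print(f"jitter={jitter:>3}: linear={linear_accuracy(jitter):.2f}  "
          f"max-over-shift={template_accuracy(jitter):.2f}")
```

With zero jitter both read-outs should perform well; with jitter, only the max-over-shift decision retains most of its accuracy, which is the sense in which positional uncertainty makes the optimal decision problem non-linear.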

Original language: English
Pages (from-to): 3223-3233
Number of pages: 11
Journal: Vision Research
Volume: 46
Issue number: 19
DOIs
Publication status: Published - Oct 2006

Keywords

  • ideal observer model
  • bisection task
  • recurrent networks
  • positional coding
  • stimulus uncertainty
  • primary visual cortex
  • fixational eye movements
  • contrast discrimination
  • texture discrimination
  • orientation
  • specificity
  • acuity
  • context
  • hyperacuity
  • connections

