Abstract
The visual system converts location information from retinal coordinates into several other, behaviourally more relevant coordinate systems. This mechanism has previously been modelled by neural networks trained with supervised and reinforcement learning. Here, we use a simple and biologically plausible trace-Hebbian unsupervised learning mechanism based on the principle of temporal coherence (Földiák, 1991, Neural Computation 3, 194-200). This algorithm has previously been used to learn invariance to positional shifts. Here we show that this simple unsupervised mechanism, presented with a static environment scanned by smooth and saccadic eye movements, can also learn eye-position invariance and coordinate transformations. As an example, a representation defined in retinal coordinates is transformed into one in head-centred coordinates. Neurons in area 7a show tuning modulated by both retinal position and eye position, which is consistent with the dual tuning properties of the input representation of our model. The network uses the constraint of temporal continuity, without explicit teaching or reinforcement, to learn the appropriate pooling connections to achieve tuning that depends only on head-centred position, independent of retinal and eye positions. The network is demonstrated on 1-D and 2-D input image sequences. The mechanism is sufficiently general to apply to other coordinate transformations as well.
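To illustrate the learning principle, the following is a minimal sketch of a trace-Hebbian (temporal coherence) update in the spirit of Földiák (1991), applied to a toy 1-D scanning sequence. The network sizes, learning rates, winner-take-all activation, and drifting-bar input are illustrative assumptions, not the actual simulation reported in this abstract.

```python
import numpy as np

# Sketch of a trace-Hebbian (temporal coherence) rule after Földiák (1991).
# All sizes, rates, and the toy stimulus below are illustrative assumptions.

rng = np.random.default_rng(0)

n_in, n_out = 20, 4          # input units, output (pooling) units
delta, alpha = 0.2, 0.05     # trace decay rate, learning rate

W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)
y_bar = np.zeros(n_out)      # low-pass-filtered activity trace

for epoch in range(200):
    pos = int(rng.integers(n_in))        # 'saccade' to a new position
    for t in range(10):                  # smooth drift between saccades
        x = np.zeros(n_in)
        x[(pos + t) % n_in] = 1.0        # moving 1-D 'bar' stimulus

        y = np.zeros(n_out)
        y[int(np.argmax(W @ x))] = 1.0   # winner-take-all activation

        # Temporal trace: activity persists across nearby time steps,
        # so inputs seen in quick succession drive the same output unit.
        y_bar = (1.0 - delta) * y_bar + delta * y

        # Trace-modulated Hebbian step: each unit's weights move toward
        # inputs that co-occur with its recently active trace.
        W += alpha * y_bar[:, None] * (x - W)
        W = np.clip(W, 0.0, None)
```

Because the trace ȳ decays slowly, inputs that follow one another closely in time (the same scene element seen across small eye movements) keep reinforcing the same output unit; this is how temporal continuity, with no teacher or reward signal, can yield pooling connections that are invariant to retinal shifts and, by extension, eye position.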
| Original language | English |
|---|---|
| Pages (from-to) | 45-45 |
| Journal | Perception |
| Volume | 33 |
| Issue number | ECVP Abstract Supplement |
| Publication status | Published - 2004 |