Unsupervised learning of coordinate transformations using temporal coherence

P Földiák, S Soloviev, L Vaina

Research output: Contribution to journal › Abstract › peer-review

Abstract

The visual system converts location information from retinal coordinates into several other, behaviourally more relevant coordinate systems. This mechanism has previously been modelled by neural networks trained with supervised and reinforcement learning. Here, we use a simple and biologically plausible trace-Hebbian unsupervised learning mechanism based on the principle of temporal coherence (Földiák, 1991 Neural Computation 3: 194-200). This algorithm had previously been used to learn invariance to positional shifts. Here we show that this simple unsupervised mechanism, presented with a static environment scanned by smooth and saccadic eye movements, can also learn eye-position invariance and coordinate transformations. As an example, a representation defined in retinal coordinates is transformed into one in head-centred coordinates. Neurons in area 7a show tuning modulated by both retinal position and eye position, which is consistent with the dual tuning properties of the input representation of our model. Using only the constraint of temporal continuity, without explicit teaching or reinforcement, the network learns the pooling connections needed to achieve tuning determined only by head-centred coordinates, independent of retinal and eye positions. The network is demonstrated on 1-D and 2-D input image sequences. The mechanism is sufficiently general to apply to other coordinate transformations as well.
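The trace-Hebbian idea underlying the model can be sketched as follows. This is a minimal illustrative sketch under our own assumptions, not the authors' implementation: the winner-take-all competition, the learning rate `alpha`, the trace decay `delta`, and the toy "eye-movement" sequence of shifted one-hot inputs are all choices made here for illustration. Each unit keeps a decaying trace of its recent activity, so successive retinal views of a static scene, linked in time by eye movements, strengthen connections onto the same unit.

```python
import numpy as np

def trace_hebbian_step(w, trace, x, alpha=0.05, delta=0.2):
    """One trace-Hebbian update (illustrative assumptions throughout).

    w     : (n_units, n_inputs) weight matrix
    trace : (n_units,) decaying activity trace, one per unit
    x     : (n_inputs,) current input pattern
    """
    y = w @ x                                  # linear unit activations
    winner = np.argmax(y)                      # simple winner-take-all (assumption)
    y_wta = np.zeros_like(trace)
    y_wta[winner] = 1.0
    # Temporal trace: a leaky average of each unit's own past activity
    trace = (1.0 - delta) * trace + delta * y_wta
    # Hebbian update gated by the trace; (x - w) keeps weights bounded
    w = w + alpha * trace[:, None] * (x[None, :] - w)
    return w, trace

rng = np.random.default_rng(0)
n_units, n_inputs = 4, 8
w = rng.random((n_units, n_inputs)) * 0.1      # small random initial weights
trace = np.zeros(n_units)

# Toy "temporal sequence": three shifted retinal views of one static pattern,
# mimicking a scene scanned by eye movements (our assumption).
pattern = np.eye(n_inputs)
for t in range(600):
    x = pattern[t % 3]
    w, trace = trace_hebbian_step(w, trace, x)
```

Because the trace of the winning unit persists across the shift from one view to the next, that unit's weights grow on all three input positions, i.e. it pools over the temporally linked views rather than responding to a single retinal location.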
Original language: English
Pages (from-to): 45-45
Journal: Perception
Volume: 33
Issue number: ECVP Abstract Supplement
Publication status: Published - 2004

