Learning invariance from transformation sequences

Peter Földiák

Research output: Contribution to journal › Article › peer-review


The visual system can reliably identify objects even when the retinal image is transformed considerably by commonly occurring changes in the environment. A local learning rule is proposed, which allows a network to learn to generalize across such transformations. During the learning phase, the network is exposed to temporal sequences of patterns undergoing the transformation. An application of the algorithm is presented in which the network learns invariance to shift in retinal position. Such a principle may be involved in the development of the characteristic shift invariance property of complex cells in the primary visual cortex, and also in the development of more complicated invariance properties of neurons in higher visual areas.
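The rule described above is, in broad strokes, Hebbian learning gated by a temporal trace of postsynaptic activity, so that inputs occurring close together in time (e.g. a pattern sliding across retinal positions) come to drive the same output unit. The sketch below is an illustrative reading only: the exponential-trace form, the threshold activation, and all parameter names and values are assumptions, not taken from the paper.

```python
import numpy as np

def train_trace_rule(patterns, n_out=4, lr=0.02, delta=0.2, seed=0):
    """Hebb-like updates gated by an exponential trace of postsynaptic
    activity (an illustrative sketch; details are assumptions)."""
    rng = np.random.default_rng(seed)
    n_in = patterns.shape[1]
    w = rng.normal(scale=0.1, size=(n_out, n_in))
    trace = np.zeros(n_out)
    for x in patterns:                      # temporal sequence of transformed patterns
        y = (w @ x > 0.5).astype(float)     # simple threshold output units
        trace = (1 - delta) * trace + delta * y        # decaying trace of activity
        w += lr * trace[:, None] * (x[None, :] - w)    # trace-gated Hebbian update
    return w

# Toy sequence: a single active input shifting across retinal positions,
# presented in temporal order.
seq = np.eye(8)
w = train_trace_rule(seq)
```

Because the trace persists across time steps, a unit active for one frame of the sequence keeps strengthening its weights toward the subsequent shifted frames, which is what yields shift-invariant responses.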
Original language: English
Pages (from-to): 194-200
Journal: Neural Computation
Issue number: 2
Publication status: Published - 1991

