Free-Lunch Learning: Modeling spontaneous recovery of memory

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

After a language has been learned and then forgotten, relearning some words appears to facilitate spontaneous recovery of other words. More generally, relearning partially forgotten associations induces recovery of other associations in humans, an effect we call free-lunch learning (FLL). Using neural network models, we prove that FLL is a necessary consequence of storing associations as distributed representations. Specifically, we prove that (1) FLL becomes increasingly likely as the number of synapses (connection weights) increases, suggesting that FLL contributes to memory in neurophysiological systems, and (2) the magnitude of FLL is greatest if inactive synapses are removed, suggesting a computational role for synaptic pruning in physiological systems. We also demonstrate that FLL is different from generalization effects conventionally associated with neural network models. As FLL is a generic property of distributed representations, it may constitute an important factor in human memory.
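The free-lunch effect described in the abstract can be illustrated with a minimal sketch (not the paper's own model): a linear network stores a set of input-output associations in a distributed weight matrix, all weights are perturbed to simulate partial forgetting, and only half of the associations are relearned. On average, the error on the *untrained* half also drops, because relearning removes the weight perturbation within the subspace spanned by the retrained patterns. All names, sizes, and the noise level here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 20, 10                      # units per layer, number of stored associations
X = rng.standard_normal((n, m))    # input patterns (one per column)
Y = rng.standard_normal((n, m))    # target output patterns

# Store all m associations as a distributed representation: Y = W0 @ X exactly,
# since X has full column rank (m < n).
W0 = Y @ np.linalg.pinv(X)

# Partial forgetting: perturb every connection weight.
Wf = W0 + 0.5 * rng.standard_normal(W0.shape)

def err(W, Xs, Ys):
    """Mean squared output error of network W on associations (Xs, Ys)."""
    return np.mean((W @ Xs - Ys) ** 2)

# Split the associations: relearn only A1, test spontaneous recovery on A2.
A1, A2 = slice(0, m // 2), slice(m // 2, m)
e_before = err(Wf, X[:, A2], Y[:, A2])

# Relearn A1 by gradient descent on its squared error only.
W = Wf.copy()
lr = 0.01
for _ in range(2000):
    grad = (W @ X[:, A1] - Y[:, A1]) @ X[:, A1].T
    W -= lr * grad

e_after = err(W, X[:, A2], Y[:, A2])
print(f"error on untrained associations A2: before {e_before:.3f}, after {e_after:.3f}")
```

Because gradient descent only changes `W` within the span of the retrained inputs, it converges to `W0 + N(I - P)`, where `N` is the forgetting noise and `P` projects onto the span of the `A1` patterns; the untrained patterns lose the share of their error that lay in that subspace, which is the free-lunch effect in miniature.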

Original language: English
Pages (from-to): 194-217
Number of pages: 24
Journal: Neural Computation
Volume: 19
Issue number: 1
Early online date: 29 Nov 2006
DOIs
Publication status: Published - Jan 2007

Keywords

  • CEREBELLUM
  • NETWORKS
