Abstract
After a language has been learned and then forgotten, relearning some words appears to facilitate spontaneous recovery of other words. More generally, relearning partially forgotten associations induces recovery of other associations in humans, an effect we call free-lunch learning (FLL). Using neural network models, we prove that FLL is a necessary consequence of storing associations as distributed representations. Specifically, we prove that (1) FLL becomes increasingly likely as the number of synapses (connection weights) increases, suggesting that FLL contributes to memory in neurophysiological systems, and (2) the magnitude of FLL is greatest if inactive synapses are removed, suggesting a computational role for synaptic pruning in physiological systems. We also demonstrate that FLL is different from generalization effects conventionally associated with neural network models. As FLL is a generic property of distributed representations, it may constitute an important factor in human memory.
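The effect can be illustrated in a small simulation. Below is a minimal sketch, not the authors' code: it assumes a single-layer linear network trained by gradient descent, stores two sets of random associations (A and B) in shared weights, perturbs every weight to simulate partial forgetting, then retrains on set A alone. Network sizes, the perturbation scale, and the training procedure are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): more synapses than stored
# associations, so each association is distributed across many weights.
n_in, n_out = 50, 10
n_assoc = 10  # associations per set

# Two sets of random input-output associations, A and B.
XA = rng.standard_normal((n_in, n_assoc))
YA = rng.standard_normal((n_out, n_assoc))
XB = rng.standard_normal((n_in, n_assoc))
YB = rng.standard_normal((n_out, n_assoc))

def train(W, X, Y, lr=0.01, epochs=3000):
    """Gradient descent on the mean squared error of W @ X - Y."""
    for _ in range(epochs):
        W = W - lr * (W @ X - Y) @ X.T / X.shape[1]
    return W

def mse(W, X, Y):
    return float(np.mean((W @ X - Y) ** 2))

# 1. Learn both association sets jointly.
W = train(np.zeros((n_out, n_in)),
          np.hstack([XA, XB]), np.hstack([YA, YB]))

# 2. Partial forgetting: perturb every weight.
W_forgot = W + 0.5 * rng.standard_normal(W.shape)
errB_before = mse(W_forgot, XB, YB)

# 3. Relearn set A only; set B is never retrained.
W_relearned = train(W_forgot, XA, YA)
errB_after = mse(W_relearned, XB, YB)

print(f"Error on B after forgetting:       {errB_before:.3f}")
print(f"Error on B after relearning A only: {errB_after:.3f}")
# Free-lunch learning: on average errB_after < errB_before,
# even though set B received no retraining.
```

In this linear setting the effect has a simple geometric reading: gradient updates during relearning change the weights only along directions spanned by set A's inputs, which removes the component of the forgetting perturbation shared with set B's inputs, so performance on B recovers on average. This is one way to picture why FLL falls out of distributed storage.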
| Original language | English |
| --- | --- |
| Pages (from-to) | 194-217 |
| Number of pages | 24 |
| Journal | Neural Computation |
| Volume | 19 |
| Issue number | 1 |
| Early online date | 29 Nov 2006 |
| DOIs | |
| Publication status | Published - Jan 2007 |
Keywords
- CEREBELLUM
- NETWORKS