Incremental learning of temporally-coherent Gaussian mixture models

Oggie Arandelovic, Roberto Cipolla

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

41 Citations (Scopus)


In this paper we address the problem of learning Gaussian Mixture Models (GMMs) incrementally. Unlike previous approaches, which universally assume that new data arrives in blocks representable by GMMs which are then merged with the current model estimate, our method works for the case when novel data points arrive one by one, while requiring little additional memory. We keep only two GMMs in memory and no historical data. The current fit is updated under the assumption that the number of components is fixed; this number is increased (or reduced) when enough evidence for a new component is seen. Such evidence is deduced from the change relative to the oldest fit of the same complexity, termed the Historical GMM, a concept central to our method. The performance of the proposed method is demonstrated qualitatively and quantitatively on several synthetic data sets and on video sequences of faces acquired in realistic imaging conditions.
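The abstract describes updating a fixed-size GMM one data point at a time without storing past samples. The sketch below illustrates the general idea of such one-point incremental updates using online sufficient-statistic (stochastic EM-style) rules; it is a hypothetical illustration of incremental GMM fitting in general, not the paper's algorithm, and omits the Historical GMM and the component addition/removal mechanism. The class name and update rules are assumptions for illustration.

```python
import numpy as np

class IncrementalGMM:
    """Minimal sketch of one-point incremental updating of a 1-D GMM
    with a fixed number of components (illustrative, not the paper's
    exact method). Each new point updates running sufficient statistics;
    no historical data points are kept."""

    def __init__(self, means, variances, weights):
        self.means = np.asarray(means, dtype=float)
        self.vars = np.asarray(variances, dtype=float)
        self.weights = np.asarray(weights, dtype=float)
        # Effective per-component sample counts (start at 1 to stabilise
        # the earliest updates).
        self.counts = np.ones_like(self.weights)

    def _responsibilities(self, x):
        # Posterior probability of each component having generated x.
        dens = np.exp(-0.5 * (x - self.means) ** 2 / self.vars) \
               / np.sqrt(2.0 * np.pi * self.vars)
        p = self.weights * dens
        return p / p.sum()

    def update(self, x):
        # E-step for the single new point, then an online M-step.
        r = self._responsibilities(x)
        self.counts += r
        eta = r / self.counts              # per-component learning rates
        delta = x - self.means
        self.means += eta * delta          # running responsibility-weighted mean
        self.vars += eta * (delta ** 2 - self.vars)
        self.weights = self.counts / self.counts.sum()
```

With data drawn from two well-separated clusters, the component means drift toward the cluster centres while the mixing weights remain a valid distribution; extending this to the paper's setting would add the complexity-change test against the Historical GMM.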

Original language: English
Title of host publication: BMVC 2005 - Proceedings of the British Machine Vision Conference 2005
Publisher: British Machine Vision Association, BMVA
Publication status: Published - 2005
Event: 2005 16th British Machine Vision Conference, BMVC 2005 - Oxford, United Kingdom
Duration: 5 Sept 2005 - 8 Sept 2005


Conference: 2005 16th British Machine Vision Conference, BMVC 2005
Country/Territory: United Kingdom

