Tracking of deformable objects using dynamically and robustly updating pictorial structures

Connor Ratcliffe, Oggie Arandelovic

Research output: Contribution to journal › Article › peer-review

Abstract

The problem posed by complex, articulated, or deformable objects has long been a focus of tracking research, yet it remains a major challenge fraught with numerous difficulties. The increasing ubiquity of technology in all realms of society has made the need for effective solutions all the more urgent. In this article, we describe a novel method which systematically addresses these difficulties and in practice outperforms the state of the art. Global spatial flexibility and robustness to deformations are achieved by adopting a pictorial-structure-based geometric model, and robustness to localized appearance changes by a subspace-based model of part appearance built on a gradient-based representation. In addition to one-off learning of both the geometric constraints and part appearances, we introduce a continual learning framework which implements information discounting, i.e. the discarding of historical appearances in favour of more recent ones. Moreover, to ensure robustness to transient occlusions (including self-occlusions), we propose a means of detecting unlikely appearance changes, allowing unreliable data to be rejected. A comprehensive evaluation of the proposed method, an analysis and discussion of the findings, and a comparison with several state-of-the-art methods demonstrate the clear superiority of our algorithm.
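
As a rough illustration of the ingredients named in the abstract, the sketch below pairs a generic tree-structured pictorial-structure cost (per-part appearance terms plus quadratic parent-child deformation penalties, in the spirit of classical pictorial structures) with an exponential-forgetting template update and a simple rejection gate standing in for the detection of unlikely appearance changes. Every function name, the Euclidean appearance cost, the quadratic deformation penalty, the forgetting factor, and the rejection threshold are illustrative assumptions; none of them is taken from the paper itself.

import numpy as np

def appearance_cost(template, patch):
    """Mismatch between a part's appearance template and a candidate image
    patch; both are assumed to be gradient-based descriptors flattened to
    vectors (an illustrative choice, not the paper's exact cost)."""
    return float(np.linalg.norm(template - patch))

def deformation_cost(offset, ideal_offset, weight=1.0):
    """Quadratic penalty on a child part's displacement relative to its
    parent, as in classical pictorial-structure models."""
    d = np.asarray(offset, dtype=float) - np.asarray(ideal_offset, dtype=float)
    return weight * float(d @ d)

def configuration_cost(templates, patches, edges, offsets, ideal_offsets):
    """Total cost of one candidate configuration: an appearance term per
    part plus a deformation term per parent-child edge of the tree."""
    cost = sum(appearance_cost(templates[p], patches[p]) for p in templates)
    cost += sum(deformation_cost(offsets[e], ideal_offsets[e]) for e in edges)
    return cost

def update_template(template, new_patch, forgetting=0.9, reject_above=5.0):
    """Information discounting with a crude outlier gate: exponentially
    down-weight historical appearance in favour of the newest observation,
    but skip the update when the change is implausibly large (a stand-in
    for the paper's rejection of unreliable data under occlusion)."""
    if appearance_cost(template, new_patch) > reject_above:
        return template  # likely occlusion: keep the existing model
    return forgetting * template + (1.0 - forgetting) * new_patch

# Toy usage: a two-part model (torso -> head) scored on random descriptors.
rng = np.random.default_rng(0)
templates = {"torso": rng.normal(size=16), "head": rng.normal(size=16)}
patches = {"torso": templates["torso"] + 0.1 * rng.normal(size=16),
           "head": templates["head"] + 0.1 * rng.normal(size=16)}
edges = [("torso", "head")]
offsets = {("torso", "head"): (0.0, -30.0)}
ideal = {("torso", "head"): (0.0, -32.0)}
print(configuration_cost(templates, patches, edges, offsets, ideal))
templates["head"] = update_template(templates["head"], patches["head"])

The gate in update_template is deliberately simple; the paper's own criterion for flagging unlikely appearance changes is richer, but the shape of the update (discount history, fold in the newest reliable observation, skip unreliable ones) is what the sketch is meant to convey.
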
Original language: English
Article number: 61
Number of pages: 19
Journal: Journal of Imaging
Volume: 6
Issue number: 7
DOIs
Publication status: Published - 2 Jul 2020

Keywords

  • Computer vision
  • Pose
  • BBC
  • Articulated
  • Motion
  • Video
