Abstract
Given the growing trend of continual learning techniques for deep neural networks focusing on the domain of computer vision, there is a need to identify which of these generalize well to other tasks such as human activity recognition (HAR). As recent methods have mostly been composed of loss regularization terms and memory replay, we provide a constituent-wise analysis of some prominent task-incremental learning techniques employing these on HAR datasets. We find that most regularization approaches have little substantial effect and provide an intuition for when they fail. Thus, we make the case that the development of continual learning algorithms should be motivated by a more diverse set of task domains.
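For readers unfamiliar with the loss-regularization family the abstract refers to, the sketch below shows a generic quadratic-penalty term in the style of EWC (Kirkpatrick et al., 2017), which penalizes drift away from the weights learned on a previous task. This is a minimal illustration under assumed names (`quadratic_penalty`, `anchor_params`, `importance`), not the implementation analyzed in the paper.

```python
# Sketch of a generic quadratic-penalty regularizer (EWC-style).
# Names and structure are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


def quadratic_penalty(model: nn.Module,
                      anchor_params: dict,
                      importance: dict,
                      strength: float = 1.0) -> torch.Tensor:
    """Compute strength * sum_i F_i * (theta_i - theta*_i)^2, where
    theta* are the weights after the previous task and F_i is a
    per-parameter importance estimate (e.g., diagonal Fisher)."""
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        if name in anchor_params:
            penalty = penalty + (importance[name]
                                 * (param - anchor_params[name]).pow(2)).sum()
    return strength * penalty


# Usage during training on a new task:
#   loss = criterion(model(x), y) + quadratic_penalty(model, theta_star, fisher)
# i.e., the task loss plus a regularizer anchored at the old weights.
```

In a task-incremental setting this penalty is added to the loss of each new task; the paper's finding is that, on HAR datasets, such terms contribute little compared with memory replay.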
Original language | English |
---|---|
Pages | 1-4 |
Number of pages | 4 |
Publication status | Published - 17 Jul 2020 |
Event | Thirty-seventh International Conference on Machine Learning (ICML 2020), ICML Workshop on Continual Learning; Conference number: 37; Duration: 13 Jul 2020 → 18 Jul 2020; https://icml.cc/Conferences/2020/ScheduleMultitrack?event=5743 (Link to Workshop) |
Workshop
Workshop | Thirty-seventh International Conference on Machine Learning, ICML 2020; ICML Workshop on Continual Learning |
---|---|
Abbreviated title | ICML 2020 |
Period | 13/07/20 → 18/07/20 |
Internet address | https://icml.cc/Conferences/2020/ScheduleMultitrack?event=5743 |