Abstract
When solving decision and optimisation problems, many competing algorithms have complementary strengths. Typically, no single algorithm works well across all instances of a problem. Automated algorithm selection has been shown to work very well for choosing a suitable algorithm for a given instance. However, the cost of training can be prohibitively large due to the need to run all candidate algorithms on a set of training instances. In this work, we explore reducing this cost by selecting specific instance/algorithm combinations to train on, rather than requiring results for all algorithms on all instances. We approach this problem in three ways: using active learning to decide which combinations to run based on prediction uncertainty, augmenting the algorithm predictors with a timeout predictor, and collecting training data using a progressively increasing timeout. We evaluate combinations of these approaches on six datasets from ASLib and report the resulting reduction in labelling cost.
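To make the first of the three approaches concrete, below is a minimal, self-contained sketch (not the authors' implementation) of uncertainty-driven selection of instance/algorithm pairs: a small bootstrap ensemble predicts runtimes, and the pair on which the ensemble disagrees most is queried (i.e. the algorithm is actually run on that instance) next. The synthetic runtime model, the feature, and all names here are illustrative assumptions, not details from the paper.

```python
import random

random.seed(0)

ALGOS = ["algo_a", "algo_b"]
# Hypothetical instances with a single numeric feature each.
INSTANCES = {i: {"feat": i / 10.0} for i in range(10)}

def true_runtime(inst, algo):
    # Synthetic ground truth, used only to simulate running an algorithm.
    f = INSTANCES[inst]["feat"]
    return f * 2.0 if algo == "algo_a" else (1.0 - f) * 2.0

labelled = {}  # (instance, algorithm) -> observed runtime

def predict(algo, sample):
    # Crude predictor: mean observed runtime of this algorithm in a
    # bootstrap sample of the labelled data; prior of 1.0 if unseen.
    obs = [rt for (_, a), rt in sample if a == algo]
    return sum(obs) / len(obs) if obs else 1.0

def uncertainty(inst, algo, n_boot=8):
    # Ensemble disagreement: variance of predictions across bootstrap
    # resamples of the labelled pairs.
    items = list(labelled.items())
    preds = []
    for _ in range(n_boot):
        sample = [random.choice(items) for _ in items] if items else []
        preds.append(predict(algo, sample))
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

budget = 8  # number of instance/algorithm runs we can afford
for _ in range(budget):
    pool = [(i, a) for i in INSTANCES for a in ALGOS if (i, a) not in labelled]
    # Query the pair with the highest prediction uncertainty.
    inst, algo = max(pool, key=lambda p: uncertainty(*p))
    labelled[(inst, algo)] = true_runtime(inst, algo)

print(f"labelled {len(labelled)} of {len(INSTANCES) * len(ALGOS)} pairs")
```

Under a labelling budget of 8 runs, only 8 of the 20 possible instance/algorithm pairs are ever executed; the same loop structure accommodates the paper's other two ideas (a timeout predictor, or capping each run with a progressively increasing timeout) by changing what is recorded in `labelled`.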
| Original language | English |
|---|---|
| Pages | 1-17 |
| Number of pages | 17 |
| Publication status | Published - 9 Sept 2024 |
| Event | International Conference on Automated Machine Learning, Sorbonne University, Paris, France. Duration: 9 Sept 2024 → 12 Sept 2024. https://2024.automl.cc/ |
Workshop
| Workshop | International Conference on Automated Machine Learning |
|---|---|
| Abbreviated title | AUTOML24 |
| Country/Territory | France |
| City | Paris |
| Period | 9/09/24 → 12/09/24 |
| Internet address | https://2024.automl.cc/ |
Keywords
- Automated Algorithm Selection
- Active Learning
- Constraint Programming
Datasets
- Cost-Efficient Training for Automated Algorithm Selection (dataset). Kus, E. (Creator), GitHub, 2024. https://github.com/stacs-cp/CP2024-Frugal