A framework for generating informative benchmark instances

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Benchmarking is an important tool for assessing the relative performance of alternative solving approaches. However, the utility of benchmarking is limited by the quantity and quality of the available problem instances. Modern constraint programming languages typically allow the specification of a class-level model that is parameterised over instance data. This separation presents an opportunity for automated approaches to generate instance data that define instances that are graded (solvable at a certain difficulty level for a solver) or that can discriminate between two solving approaches. In this paper, we introduce a framework that combines these two properties to generate a large number of benchmark instances purpose-built for effective and informative benchmarking. We use five problems from the MiniZinc competition to demonstrate the usage of our framework. In addition to producing a ranking among solvers, our framework gives a broader understanding of the behaviour of each solver across the whole instance space; for example, by finding subsets of instances where solver performance varies significantly from its average.
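The two instance properties named in the abstract can be illustrated with a minimal sketch. This is not the paper's method; the function names and thresholds below are invented for illustration: an instance is treated as "graded" when its solve time falls in a target difficulty window, and as "discriminating" when it separates two solvers by a large runtime gap.

```python
# Hypothetical illustration of the two properties described in the abstract.
# All names and thresholds are assumptions, not taken from the paper.

def is_graded(solve_time, lower=10.0, upper=300.0):
    """Graded: solvable, but neither trivial nor out of reach (seconds)."""
    return lower <= solve_time <= upper

def is_discriminating(time_a, time_b, ratio=10.0):
    """Discriminating: one solver is much faster than the other."""
    fast, slow = sorted((time_a, time_b))
    return fast > 0 and slow / fast >= ratio

# Example: solver A solves in 15 s, solver B hits a 600 s limit.
print(is_graded(15.0))                 # inside the difficulty window
print(is_discriminating(15.0, 600.0))  # 40x gap separates the solvers
```

A generator built on such predicates could search the parameter space of a class-level model for instance data satisfying either property, which is the separation of model and data the abstract highlights.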
Original language: English
Title of host publication: 28th International Conference on Principles and Practice of Constraint Programming (CP 2022)
Editors: Christine Solnon
Place of publication: Dagstuhl
Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
Number of pages: 18
ISBN (Electronic): 9783959772402
DOIs:
Publication status: Published - 23 Jul 2022

Publication series

Name: Leibniz International Proceedings in Informatics (LIPIcs)
Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Volume: 235
ISSN (Electronic): 1868-8969

Keywords

  • Instance generation
  • Benchmarking
  • Constraint programming
