Reducing Training Time in a One-shot Machine Learning-based Compiler

John Donald Thomson, Michael O'Boyle, Grigori Fursin, Björn Franke

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Iterative compilation of applications has proved a popular and successful approach to achieving high performance. This, however, comes at the cost of many runs of the application. Machine-learning-based approaches overcome this at the expense of a large off-line training cost. This paper presents a new approach that dramatically reduces the training time of a machine-learning-based compiler. This is achieved by focusing on the programs which best characterize the optimization space. By using unsupervised clustering in the program feature space, we are able to substantially reduce the amount of time required to train the compiler. Furthermore, we are able to learn a model which dispenses with iterative search completely, allowing integration within the normal program development cycle. We evaluated our clustering approach on the EEMBCv2 benchmark suite and show that we can reduce the number of training runs by more than a factor of 7. This translates into an average speedup of 1.14 across the benchmark suite compared to the default highest optimization level.
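The core idea in the abstract, selecting representative training programs by clustering in a program feature space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the benchmark names, the two-dimensional feature vectors, and the deterministic k-means initialization are all assumptions made for the example; in practice the feature vectors would be extracted from program characteristics and the cluster count tuned.

```python
import math


def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def kmeans(points, k, iters=20):
    """Plain k-means over feature vectors (lists of floats).

    Initializes centroids from the first k points for determinism;
    a real implementation would use random or k-means++ seeding.
    """
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: dist(centroids[c], p))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # average the members to get the new centroid
                centroids[i] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids


def representatives(programs, features, k):
    """Pick, per cluster, the program whose features lie closest to
    the centroid -- these are the programs used for training runs."""
    centroids = kmeans(features, k)
    reps = []
    for c in centroids:
        idx = min(range(len(features)), key=lambda i: dist(features[i], c))
        reps.append(programs[idx])
    return reps


# Hypothetical benchmarks forming two well-separated feature clusters.
programs = ["bench_a", "bench_b", "bench_c", "bench_d", "bench_e", "bench_f"]
features = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
            [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
reps = representatives(programs, features, 2)
```

Only the selected representatives then undergo the expensive iterative-compilation runs, which is how the training set, and hence training time, shrinks.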
Original language: English
Title of host publication: Languages and Compilers for Parallel Computing
Subtitle of host publication: 22nd International Workshop, LCPC 2009, Newark, DE, USA, October 8-10, 2009, Revised Selected Papers
Publisher: Springer-Verlag
Pages: 399-407
Number of pages: 9
ISBN (Print): 978-3-642-13373-2
DOIs
Publication status: Published - 2010

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer-Verlag
Volume: 5898
ISSN (Print): 0302-9743
