Studying programmer behaviour at scale: a case study using Amazon Mechanical Turk

Jason T. Jacques, Per Ola Kristensson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Developing and maintaining a correct and consistent model of how code will be executed is an ongoing challenge for software developers. However, validating the tools and techniques we develop to aid programmers can be a challenge plagued by small sample sizes, high costs, or poor generalisability. This paper serves as a case study using a web-based crowdsourcing approach to study programmer behaviour at scale. We demonstrate this method to create controlled coding experiments at modest cost, highlight the efficacy of this approach with objective validation, and comment on notable findings from our prototype experiment into one of the most ubiquitous, yet understudied, features of modern software development environments: syntax highlighting.

Original language: English
Title of host publication: Programming '21
Subtitle of host publication: Companion proceedings of the 5th International Conference on the Art, Science, and Engineering of Programming
Editors: Luke Church, Shigeru Chiba, Elisa Gonzalez Boix
Publisher: ACM
Pages: 36-48
Number of pages: 13
ISBN (Print): 9781450389860
DOIs
Publication status: Published - 22 Mar 2021
Event: 5th International Conference on the Art, Science, and Engineering of Programming, Programming 2021 - Virtual, Online, United Kingdom
Duration: 22 Mar 2021 - 26 Mar 2021

Conference

Conference: 5th International Conference on the Art, Science, and Engineering of Programming, Programming 2021
Country/Territory: United Kingdom
City: Virtual, Online
Period: 22/03/21 - 26/03/21

Keywords

  • Behaviour
  • Crowdsourcing
  • Programming
