1. the complete title of one (or more) paper(s) published in the open literature describing the work that the author claims describes a human-competitive result;

Using Genetic Improvement & Code Transplants to Specialise a C++ Program to a Problem Class

2. the name, complete physical mailing address, e-mail address, and phone number of EACH author of EACH paper(s);

Justyna Petke
University College London
Department of Computer Science
Gower Street
London WC1E 6BT
United Kingdom
e-mail: j.petke@ucl.ac.uk
tel: +44 (0)20 7679 7190

Mark Harman
University College London
Department of Computer Science
Gower Street
London WC1E 6BT
United Kingdom
e-mail: mark.harman@ucl.ac.uk
tel: +44 (0)20 7679 1305

William B. Langdon
University College London
Department of Computer Science
Gower Street
London WC1E 6BT
United Kingdom
e-mail: w.langdon@cs.ucl.ac.uk
tel: +44 (0)20 3108 4125

Westley Weimer
Department of Computer Science
School of Engineering and Applied Science
University of Virginia
85 Engineer's Way, P.O. Box 400740
Charlottesville, Virginia 22904-4740
United States
e-mail: weimer@cs.virginia.edu
tel: (434) 924-1021

3. the name of the corresponding author (i.e., the author to whom notices will be sent concerning the competition);

Justyna Petke

4. the abstract of the paper(s);

Genetic Improvement (GI) is a form of Genetic Programming that improves an existing program. We use GI to evolve a faster version of a C++ program, a Boolean satisfiability (SAT) solver called MiniSAT, specialising it for a particular problem class, namely Combinatorial Interaction Testing (CIT), using automated code transplantation. Our GI-evolved solver achieves an overall 17% improvement, making it comparable with average expert human performance. Additionally, this automatically evolved solver is faster than any of the human-improved solvers for the CIT problem.

5.
a list containing one or more of the eight letters (A, B, C, D, E, F, G, or H) that correspond to the criteria (see above) that the author claims that the work satisfies;

(D) The result is publishable in its own right as a new scientific result independent of the fact that the result was mechanically created.

(H) The result holds its own or wins a regulated competition involving human contestants (in the form of either live human players or human-written computer programs).

6. a statement stating why the result satisfies the criteria that the contestant claims (see examples of statements of human-competitiveness as a guide to aid in constructing this part of the submission);

(D) Boolean Satisfiability (SAT) solvers are used in a wide variety of application domains, including Model Checking, Software Verification, Planning, Combinatorial Design and Cryptography. Therefore, any improvement in SAT solver technology could have significant impact in many other domains. Moreover, although instances from the Combinatorial Interaction Testing field usually take hours or even days to run, SAT solving technology has not been widely used to solve this particular type of problem. We have evolved a version of a SAT solver that is 17% faster than the original and 4% faster than any of the human-developed versions of the solver as of 2009 on our benchmark set. This is therefore a significant first step in popularising SAT solver technology in the Combinatorial Interaction Testing field.

(H) In 2009 a formal competition was established to encourage developers to improve the performance of MiniSAT, a state-of-the-art SAT solver. This "MiniSAT hack track" has attracted 19 human submissions over five years, and programmers have been optimising this solver for over 10 years now. Our research addresses the very challenging task of automatically improving a solver that has been engineered by expert human developers for many years.
Our approach works by extending, rearranging and augmenting code that all participants have access to from earlier versions of MiniSAT. In addition, by specialising the solver to efficiently solve problems from a particular problem class, namely Combinatorial Interaction Testing, we were able to automatically evolve a version of MiniSAT that is 17% faster on our benchmark set and 4% faster than any of the human-developed versions of MiniSAT from the hack track competition we compared against. For comparison, in the MiniSAT hack track competition held in 2011 the maximum improvement achieved was only 8.74%.

7. a full citation of the paper (that is, author names; publication date; name of journal, conference, technical report, thesis, book, or book chapter; name of editors, if applicable, of the journal or edited book; publisher name; publisher city; page numbers, if applicable);

Justyna Petke, Mark Harman, William B. Langdon and Westley Weimer. Using Genetic Improvement & Code Transplants to Specialise a C++ Program to a Problem Class. The 17th European Conference on Genetic Programming (EuroGP 2014). To appear.

8. a statement either that "any prize money, if any, is to be divided equally among the co-authors" OR a specific percentage breakdown as to how the prize money, if any, is to be divided among the co-authors; and

Any prize money, if any, is to be awarded to Justyna Petke.

9. a statement stating why the judges should consider the entry as "best" in comparison to other entries that may also be "human-competitive;"

The use of genetic programming to improve software has recently had many successful applications, ranging from bug fixing and runtime improvement to porting old code to new hardware. This automated approach to software improvement could potentially have a huge impact in the coming years on the way people write and test software.
Genetic improvement can significantly reduce the engineering burden on human developers, allowing them to concentrate on the challenging functional properties of the system. The bug fixing work has already shown that testers could use GI as a first step in quickly finding errors in the system under test.

The core algorithm of SAT solvers was developed in the 1960s. Since then SAT experts have introduced improvements to this algorithm to speed up the SAT solving process. However, only a few improvements, including, most notably, conflict-driven clause learning with restarts, have had a significant impact on runtime. We have shown that genetic programming can automatically improve this general, widely-used and highly-optimised system. The 17% improvement we achieved is almost twice as high as the best improvement from the 2011 competition, devoted to improving the MiniSAT SAT solver by expert human developers. We have been able to achieve this by using a novel approach of transplanting code from multiple versions of the system and specialising it to a particular problem class. This is the first time genetic improvement has been used to insert code from other versions of the software to be improved.

10. An indication of the general type of genetic or evolutionary computation used, such as GA (genetic algorithms), GP (genetic programming), ES (evolution strategies), EP (evolutionary programming), LCS (learning classifier systems), GE (grammatical evolution), GEP (gene expression programming), DE (differential evolution), etc.

Genetic Improvement (GI), Genetic Programming (GP)
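For readers unfamiliar with GI, the transplant-based search described above can be caricatured as follows. This toy Python sketch is not the MiniSAT implementation from the paper: the host statements, donor pool, costs and "tests" are all invented for illustration. Real GI evaluates candidate C++ programs by compiling them and timing them on benchmark instances; here, fitness is simulated by summing per-statement costs, with an infinite penalty for programs that break the required behaviour.

```python
import random

# Host "program": an ordered list of statement identifiers (hypothetical).
HOST = ["init", "slow_propagate", "decide", "slow_restart"]

# Donor pool: statements "transplanted" from other versions of the program.
DONORS = ["fast_propagate", "fast_restart", "cache_clauses"]

# Simulated runtime cost per statement (lower is better; invented numbers).
COST = {
    "init": 1, "decide": 1,
    "slow_propagate": 10, "slow_restart": 5,
    "fast_propagate": 3, "fast_restart": 2, "cache_clauses": 4,
}

REQUIRED = {"init", "decide"}  # stand-in for the regression test suite


def fitness(prog):
    """Infinite penalty if the 'tests' fail; otherwise simulated runtime."""
    if not REQUIRED.issubset(prog):
        return float("inf")
    if not any(s.endswith("propagate") for s in prog):
        return float("inf")
    return sum(COST[s] for s in prog)


def mutate(prog, rng):
    """Delete a statement, transplant a donor statement, or copy a line."""
    prog = list(prog)
    i = rng.randrange(len(prog))
    op = rng.choice(["delete", "transplant", "copy"])
    if op == "delete" and len(prog) > 1:
        del prog[i]
    elif op == "transplant":
        prog[i] = rng.choice(DONORS)
    elif op == "copy":
        prog.insert(i, prog[i])
    return prog


def evolve(generations=300, pop_size=20, seed=1):
    """Elitist search: parents compete with their mutants each generation."""
    rng = random.Random(seed)
    pop = [list(HOST) for _ in range(pop_size)]
    for _ in range(generations):
        candidates = pop + [mutate(p, rng) for p in pop]
        candidates.sort(key=fitness)
        pop = candidates[:pop_size]
    return pop[0]


best = evolve()
print("best:", best, "cost:", fitness(best))
```

Because the unmodified host is in the initial population and survivors are chosen elitistically, the evolved program is never slower than the original; with the donor statements above, the search typically discovers a faster variant by swapping in transplanted code.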