SlowCoach: Mutating Code to Simulate Performance Bugs

Yiqun Chen, Oliver Schwahn, Roberto Natella, Matthew Bradbury, and Neeraj Suri. SlowCoach: Mutating Code to Simulate Performance Bugs. In The 33rd IEEE International Symposium on Software Reliability Engineering, ISSRE, 274–285. Charlotte, North Carolina, USA, 31 October – 3 November 2022. doi:10.1109/ISSRE55969.2022.00035.


It is important that software runs fast. Better performance means that software is more responsive, consumes less energy, and gives adversaries less scope to mount denial-of-service attacks that exploit slow code. To identify performance issues, developers rely on diagnosis tools (such as perf) to analyse software performance. Like any other software, these diagnostic tools need testing: we must be able to investigate how effective they actually are.

Mutation testing has long been used to assess a test suite's ability to detect bugs: mutations that cause incorrect behaviour are injected into the source code, and the test suite is evaluated on whether it catches each injected mutation. In this paper we extend the idea to performance mutation testing, injecting mutations that degrade performance rather than correctness, so that we can measure how well performance diagnosis tools detect performance bugs.
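To make the idea concrete, here is a hypothetical C++ sketch (not one of the paper's actual mutation operators) of the kind of mutant a performance mutation tool could generate: the mutant computes exactly the same result as the original, but strictly slower.

```cpp
#include <cstddef>
#include <cstring>

// Original code: the string length is computed once, outside the loop.
std::size_t count_matches(const char *text, char target) {
    std::size_t len = std::strlen(text);
    std::size_t matches = 0;
    for (std::size_t i = 0; i < len; ++i) {
        if (text[i] == target) {
            ++matches;
        }
    }
    return matches;
}

// Performance mutant: functionally identical, but strlen() is re-evaluated
// on every iteration, turning a linear pass into quadratic work. A functional
// test suite sees the same results; only a performance measurement differs.
std::size_t count_matches_mutant(const char *text, char target) {
    std::size_t matches = 0;
    for (std::size_t i = 0; i < std::strlen(text); ++i) {
        if (text[i] == target) {
            ++matches;
        }
    }
    return matches;
}
```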

[Figure] Workflow of performing performance mutation testing: a mutation tool is applied to the source code to generate mutants, which are compiled and benchmarked, and the results are compared against the performance of the unmutated executable.
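The final "benchmark and compare" step of this workflow can be sketched in a few lines of C++. This is a minimal, hypothetical harness rather than the paper's tooling: the workloads, iteration counts, and the 1.5x slowdown threshold are illustrative assumptions.

```cpp
#include <chrono>
#include <cstdio>

// Stands in for the workload of the unmutated build.
static void baseline_workload() {
    volatile unsigned long long acc = 0;
    for (unsigned long long i = 0; i < 20000000ULL; ++i) acc += i;
}

// Stands in for the same workload built from a performance mutant:
// it repeats the work to simulate an injected slowdown.
static void mutant_workload() {
    volatile unsigned long long acc = 0;
    for (int rep = 0; rep < 3; ++rep)
        for (unsigned long long i = 0; i < 20000000ULL; ++i) acc += i;
}

// Times a single run of a workload, in seconds.
template <typename F>
static double seconds(F &&workload) {
    auto start = std::chrono::steady_clock::now();
    workload();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(end - start).count();
}

int main() {
    double base = seconds(baseline_workload);
    double mutant = seconds(mutant_workload);
    double slowdown = mutant / base;
    std::printf("baseline %.3fs, mutant %.3fs, slowdown %.2fx\n",
                base, mutant, slowdown);
    if (slowdown > 1.5) {
        std::printf("the injected performance bug is observable\n");
    }
    return 0;
}
```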

Importance

As with all software, it is important that we can test performance diagnosis tools, and they are no different in this regard. Performance mutation testing is so valuable because it can automatically generate large sets of test cases for arbitrary programs (in a supported language) from a specific set of mutation rules. Without it, workloads and benchmarks containing known performance bugs would have to be created and maintained by hand.

Without a profiler, I would not have been able to contribute performance improvements to COOJA, which is used to simulate wireless sensor networks.

Perspectives

A downside of this approach is the amount of time it takes to generate and evaluate the mutants. However, it has the potential to uncover interesting performance issues. Future work should consider more complex performance mutations, including injecting mutations that replace an algorithm with one of worse computational or space complexity.
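As a hypothetical illustration of such a complexity-degrading mutation (the function names and data structure are assumptions, not taken from the paper), consider replacing a hash-table lookup with a linear scan that produces the same result.

```cpp
#include <string>
#include <unordered_map>

// Original: an average-case O(1) hash lookup.
int lookup_original(const std::unordered_map<std::string, int> &index,
                    const std::string &key) {
    auto it = index.find(key);
    return it != index.end() ? it->second : -1;
}

// Complexity-degrading mutant: scans the map linearly, degrading the lookup
// to O(n). It returns the same results as the original, so functional tests
// cannot distinguish the two; only performance measurements can.
int lookup_mutant(const std::unordered_map<std::string, int> &index,
                  const std::string &key) {
    for (const auto &entry : index) {   // visits every entry in the worst case
        if (entry.first == key) {
            return entry.second;
        }
    }
    return -1;
}
```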

Presentation