https://arxiv.org/pdf/2304.11686.pdf
Generating test cases is one of the newer areas where ChatGPT is gaining traction, and a welcome one: it lets software developers raise the quality of their software quickly.
This paper discusses the problem and challenges of finding failure-inducing test cases, the potential of using LLMs for software engineering tasks, and the limitations of ChatGPT in this context. It also explains how the task of finding a failure-inducing test case becomes easier once the program's intention is known, and how ChatGPT's weakness at recognizing subtle nuances (such as small bugs) can be turned into an advantage for inferring a program's intention.
The authors propose Differential Prompting as a new paradigm for finding failure-inducing test cases, which involves program intention inference, program generation, and differential testing. The evaluation of this technique on QuixBugs and Codeforces demonstrates its effectiveness, notably outperforming state-of-the-art baselines.
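To make the differential-testing step concrete, the following is a minimal, hypothetical Python sketch (not the authors' implementation): a program under test is compared against a reference program regenerated from the inferred intention, and any input on which the two disagree is reported as a failure-inducing test case. The function and variable names here are illustrative assumptions.

```python
def buggy_version(n: int) -> int:
    """Program under test: intended to sum 1..n, but has an off-by-one bug."""
    return sum(range(1, n))  # bug: omits n itself


def reference_version(n: int) -> int:
    """Reference program regenerated from the inferred intention: sum of 1..n."""
    return n * (n + 1) // 2


def find_failure_inducing_inputs(candidates):
    """Return the inputs on which the two versions disagree."""
    return [x for x in candidates if buggy_version(x) != reference_version(x)]


if __name__ == "__main__":
    # Any set of generated test inputs works here; a small range suffices
    # for illustration.
    print(find_failure_inducing_inputs(range(10)))  # e.g. [1, 2, 3, ...]
```

Any input in the printed list exposes the off-by-one bug, which is exactly what a failure-inducing test case is meant to do.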
The contributions of the paper include the original study of ChatGPT’s effectiveness in finding failure-inducing test cases, the proposal of the Differential Prompting technique, and the evaluation of this technique on standard benchmarks.
The paper also acknowledges that Differential Prompting works best for simple programs and discusses its potential benefits for software engineering education. Preliminaries and a methodology section illustrate the task of finding failure-inducing test cases and the workflow of Differential Prompting.
The authors conclude with promising application scenarios for Differential Prompting, suggesting that while it currently suits simple programs best, it is a step toward finding failure-inducing test cases in larger software. They also highlight its benefits for software engineering education.