Mitigating the impact of mislabeled data on deep predictive models: an empirical study of learning with noise approaches in software engineering tasks

Image by Michal Jarmoluk from Pixabay

Mitigating the impact of mislabeled data on deep predictive models: an empirical study of learning with noise approaches in software engineering tasks | Automated Software Engineering (springer.com)

Labelling data, whether annotating images or text, is really tedious work. I don’t do it often, but when I do, it takes time.

This paper presents a study of the extent to which mislabeled samples poison SE datasets and what that means for deep predictive models. The study also evaluates the effectiveness of current learning with noise (LwN) approaches, originally designed for general AI datasets, in the context of software engineering.
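To make the idea of “learning with noise” a bit more concrete, here is a minimal, hypothetical sketch of one common LwN strategy, small-loss sample selection (popularized by co-teaching): samples with unusually high loss are treated as likely mislabeled and excluded from the gradient update. The paper evaluates several LwN approaches; this snippet is only an illustration, and the function name, `keep_ratio`, and the assumed PyTorch setup are my own.

```python
import torch
import torch.nn.functional as F

def small_loss_update(model, optimizer, inputs, labels, keep_ratio=0.8):
    """One training step that ignores the highest-loss (likely mislabeled) samples.

    keep_ratio is roughly (1 - estimated noise rate); 0.8 here is just a guess.
    """
    model.train()
    logits = model(inputs)
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")

    # Keep only the samples with the smallest loss -- the "small-loss trick".
    n_keep = max(1, int(keep_ratio * labels.size(0)))
    keep_idx = torch.topk(per_sample_loss, n_keep, largest=False).indices

    loss = per_sample_loss[keep_idx].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the keep ratio is often scheduled (starting near 1.0 and decreasing), since early in training the network has not yet overfit the noisy labels.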

The core of their investigation revolves around two tasks representative of the SE landscape: Bug Report Classification (BRC) and Software Defect Prediction (SDP). In the associated datasets, mislabeled samples are not just present; they meaningfully distort the data, shifting the class distribution and degrading overall data quality.
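As a toy illustration of how mislabeling shifts a class distribution (not the paper’s measurement procedure), consider injecting random label flips into a binary bug-report dataset and comparing class frequencies before and after; the class split and noise rate below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary labels for a bug report classification set:
# 1 = "bug", 0 = "non-bug", with a 30/70 split.
labels = rng.choice([1, 0], size=10_000, p=[0.3, 0.7])

# Flip 15% of the labels uniformly at random to simulate mislabeling.
noise_rate = 0.15
flip_mask = rng.random(labels.size) < noise_rate
noisy_labels = np.where(flip_mask, 1 - labels, labels)

print("clean class balance:", labels.mean())        # ~0.30
print("noisy class balance:", noisy_labels.mean())  # ~0.36, drifting toward 0.5
```

Even this uniform noise changes the apparent prevalence of bugs; real mislabeling is rarely uniform, so the distortion can be worse and harder to detect.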

The implications of this study are interesting for developers and researchers alike: it offers a roadmap for navigating the challenges of data quality and model integrity in software engineering, ensuring that as we advance, our tools and models rest on a foundation of accurate and reliable data.