Machine learning testing… (review)

https://ieeexplore.ieee.org/abstract/document/9000651


Testing of machine learning systems is a tricky business. Not only are the algorithms based on statistics, they are also very complex and highly dependent on the data used for training and validation. Yet these algorithms are very important for our modern software systems, and therefore we need to make sure that they work as intended.

I’ve come across an article in which the authors review the literature on how machine learning systems are tested. The aspects that the paper looks into are:

How to test:

  • Test input generation
  • Test oracle generation
  • Test adequacy evaluation
  • Bug report analysis
  • Debug and repair

Where to test:

  • Data testing
  • Learning program testing
  • Framework testing

What to test for:

  • Correctness
  • Model relevance
  • Robustness & security (see the sketch after this list)
  • Efficiency
  • Fairness
  • Interpretability
  • Privacy
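
To make one of these categories a bit more concrete, here is a minimal sketch of my own (not taken from the paper) of a metamorphic-style robustness check: small perturbations of the input should not change the model’s predictions. The dataset, model and the 5% threshold are arbitrary placeholders, just to illustrate the kind of test the survey discusses under test oracle generation and robustness.

```python
# Illustrative sketch (my own, not from the paper): a metamorphic-style
# robustness check where small input perturbations should not flip predictions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
noise = rng.normal(scale=0.01, size=X.shape)   # small perturbation of the inputs

original = model.predict(X)
perturbed = model.predict(X + noise)

# Metamorphic relation: predictions should stay (almost) unchanged.
flip_rate = np.mean(original != perturbed)
assert flip_rate < 0.05, f"Robustness check failed: {flip_rate:.1%} of predictions flipped"
```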

The list is quite impressive, and so is the paper. For me, the most interesting category was data testing, where the paper reviews the challenges and also points to some solutions. For example, it lists frameworks used for testing data, such as ActiveClean and BoostClean. These frameworks look at the data and try to capture how valuable it is for the actual algorithm.
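
Just to illustrate the underlying idea (this is my own sketch, not the actual API of ActiveClean or BoostClean; the dataset, slice size and model are arbitrary choices): one simple way to estimate how valuable a slice of training data is for a model is to compare the validation accuracy with and without that slice.

```python
# Rough sketch of data valuation: how much does validation accuracy drop
# when a slice of the training data is left out? (Not ActiveClean/BoostClean code.)
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

def accuracy(train_idx):
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train[train_idx], y_train[train_idx])
    return model.score(X_val, y_val)

all_idx = np.arange(len(X_train))
baseline = accuracy(all_idx)

# "Value" of each slice = drop in validation accuracy when the slice is removed.
for start in range(0, len(all_idx), 100):
    slice_idx = all_idx[start:start + 100]
    rest = np.setdiff1d(all_idx, slice_idx)
    print(f"slice starting at {start:4d}: value = {baseline - accuracy(rest):+.4f}")
```

A slice whose removal barely changes (or even improves) the validation accuracy contributes little to the model and is a good candidate for inspection or cleaning, which is roughly the intuition behind these data-testing frameworks.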

Author: Miroslaw Staron

I’m a professor in Software Engineering at the IT Faculty. I usually blog about articles that I find interesting and my own reflections on the development of software engineering, AI, computer science and automotive software.