Comparing different security vulnerability detection techniques (article review)

Image by Reimund Bertrams from Pixabay

An Empirical Study of Rule-Based and Learning-Based Approaches for Static Application Security Testing (arxiv.org)

In recent weeks I’ve turned to a specific part of my work, i.e. security vulnerability detection. In many places, security work focuses on the entire chain, and that’s a good thing – we need to understand when and where we have a vulnerability. However, that’s not what I can help with, which has never really stopped me before.

So, I was looking for a more programmatic view on security. To be more exact, I wanted to know what we, as software engineers, need to focus on when it comes to cyber security. We can, naturally, measure it, but that’s probably not the only thing. We can analyze libraries from OSS communities to find which ones could be exploited. We can even program in a specific way to minimize the risk of exploitation.

In this paper, the authors compare two different techniques for finding software vulnerabilities – static application security testing (SAST) tools and software vulnerability prediction (SVP) models. They have identified 12 different findings, of which the following are the most interesting ones:

  • SVP models are generally a bit better when it comes to precision and overall performance.
  • SVP models flag fewer files to inspect, which reduces the inspection cost.
  • The two approaches lack synergy, and it’s difficult to use them together to increase their performance.

Since they compared only a few tools, I believe it’s important to do more experiments. It is also important to understand whether having fewer files to inspect is good or bad – I mean, one undetected vulnerability can be very costly…
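To make that trade-off concrete, here is a minimal sketch in Python – the file-level labels and the flags from the two approaches are made up by me, nothing here comes from the paper’s tooling:

```python
# Hypothetical comparison of a rule-based SAST tool and an SVP model on the
# same set of files. All labels and flags below are invented for illustration.
from sklearn.metrics import precision_score, recall_score

# 1 = file actually contains a vulnerability, 0 = clean
true_labels = [1, 0, 0, 1, 0, 1, 0, 0, 0, 1]

# Flags produced by each approach (invented numbers)
sast_flags = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]   # rule-based tool: flags many files
svp_flags  = [1, 0, 0, 1, 0, 1, 0, 0, 0, 0]   # prediction model: flags fewer files

for name, flags in [("SAST", sast_flags), ("SVP", svp_flags)]:
    print(
        f"{name}: files to inspect = {sum(flags)}, "
        f"precision = {precision_score(true_labels, flags):.2f}, "
        f"recall = {recall_score(true_labels, flags):.2f}"
    )
```

With numbers like these, the prediction model flags fewer files and gets better precision, but it misses one real vulnerability – which is exactly the trade-off I worry about above.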

autoML – let’s talk about it…

Image from Pixabay

AutoML – a promise of green pastures, less work, optimal results. So, is it really like that? In this post I share my view on this and my experience from running a first test with such a framework.

First of all, let’s be honest, there is no such thing as a free lunch. In the case of AutoML (auto-sklearn), the price tag comes first with the effort, skills and time needed to install it and make it work. The second is the performance… It’s painfully slow compared to your own models, simply because it tests a lot of models here and there. It also takes a lot of time to download and set up.

But, first things first, let me tell you where I started. I used the data from the MicroHRV project (3. MicroHRV: Recognizing Rare Events in Microwave Radio Links and Intensive Care Units using Machine Learning – Software Center (software-center.se)). The data comes from patients undergoing a procedure to remove blood clots from the brain (however dangerous it may sound, the actual procedure is planned and calm). I wanted to check whether AutoML can do better than what we have at the moment.

What we have at the moment (for that particular dataset) is: Accuracy: 0.98, Precision: 0.98, Recall: 0.98 – using a Random Forest classifier. So, this is actually already very good. For the medical domain, that’s in a class of its own, given that our previous studies ended up with ca. 0.7 in accuracy at best.
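For context, there is nothing exotic behind those numbers; a minimal sketch of that kind of pipeline looks roughly like this (the synthetic data, the 80/20 split and the 100 trees are placeholders of mine – the real MicroHRV features obviously cannot go into a blog post):

```python
# Rough sketch of the manual baseline: a plain Random Forest with a train/test split.
# The synthetic dataset stands in for the actual MicroHRV features and labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)  # placeholder data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)  # assumed hyperparameters
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("Accuracy: ", round(accuracy_score(y_test, y_pred), 2))
print("Precision:", round(precision_score(y_test, y_pred, average="weighted"), 2))
print("Recall:   ", round(recall_score(y_test, y_pred, average="weighted"), 2))
```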

When it comes to installing AutoML – if you like Stack Overflow, downgrading, upgrading, compiling, etc. and you run Windows 10, then it’s your heaven. If you run Linux – no problems. Otherwise – stick to manual analyses :)

After two days (and nights) of trying, the best configuration was:

  • WSL – Windows Subsystem for Linux
  • Ubuntu 20, and
  • countless OSS libraries

It takes a while to get it to work; the question is whether the results are good enough…
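Once it finally runs, the code itself is tiny. Here is a sketch using auto-sklearn’s AutoSklearnClassifier – the three-hour budget mirrors my run, while the per-model time limit and the placeholder data are my own illustrative choices:

```python
# Sketch of an auto-sklearn run; the total time budget matches the ~3 h I gave it,
# everything else (per-run limit, synthetic data) is illustrative only.
import autosklearn.classification
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)  # placeholder data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=3 * 3600,   # total search budget: three hours
    per_run_time_limit=300,             # cap for each candidate model (my guess)
)
automl.fit(X_train, y_train)
y_pred = automl.predict(X_test)

print("Accuracy: ", round(accuracy_score(y_test, y_pred), 2))
print("Precision:", round(precision_score(y_test, y_pred, average="weighted"), 2))
print("Recall:   ", round(recall_score(y_test, y_pred, average="weighted"), 2))
```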

After three hours of waiting, a lot of heat from my laptop, and over 1,000 models tested, the result was: Accuracy: 0.91, Precision: 0.94, Recall: 0.91.

So, worse than my manual selection of models. I include the confusion matrices below.

Confusion matrix – AutoML
Confusion matrix – Random Forest

The matrices are not that different, as the validation sets are not that large either. However, it seems that the Random Forest is still better than the best model from AutoML.
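For completeness, the matrices themselves are a one-liner each with scikit-learn; the labels and predictions below are toy values of mine, purely to show the call:

```python
# Toy labels/predictions just to show the call; the real matrices come from the
# validation split and the two classifiers discussed above.
from sklearn.metrics import confusion_matrix

y_true        = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred_rf     = [0, 0, 1, 1, 1, 0, 1, 0]   # hypothetical Random Forest output
y_pred_automl = [0, 1, 1, 1, 0, 0, 1, 0]   # hypothetical auto-sklearn output

for name, y_pred in [("Random Forest", y_pred_rf), ("AutoML", y_pred_automl)]:
    print(name)
    print(confusion_matrix(y_true, y_pred))
```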

I need to work more on this and see whether I did something wrong. However, I take it as a success – I’m better than AutoML (so there is still some use for an old professor) – rather than as a let-down for not getting better results.

At the end of the day, 0.98 in accuracy is still very good!

Reproducing AI models – a guideline

Image by Pete Linforth from Pixabay

2107.00821.pdf (arxiv.org)

Machine learning has become a great tool in software engineering, for both research and development. The fact that we have access to TensorFlow, PyTorch, and other toolkits provides almost endless possibilities. Combine that with the hundreds (if not thousands) of datasets from Zenodo and co., and you can train a model for almost anything.

So far, so good, I would say. Problems (yes, there are always some problems) appear when we want to reproduce the results of others. Training a model on your own dataset and making it available is easy. Trusting such a model in a new context is not.

Imagine an ML model trained on data from Company X. We have probably tuned its parameters a lot, so the model works great there – but does it work for Company Y? Most probably it will not. Well, it will run, but the performance of its predictions is not going to be great.

So, Google has partnered up with academic collaborators to set up SIGMODELS and the TensorFlow Garden, initiatives aimed at making ML models more portable, experiments more replicable, and all the other goodies.

In this paper, the authors provide a set of checks which we can use to make models more transparent, which is the first step towards reproducibility. In these guidelines, the authors advocate reporting the model’s architecture, its input and output structure, building blocks, loss functions, etc.
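To give a feel for what that could look like in practice – this is my own sketch, not a template from the paper – even a plain dictionary shipped next to the released model already covers most of the items on their list:

```python
# Hypothetical "model report" covering the items the guidelines ask for:
# architecture, input/output structure, building blocks and loss/criterion.
# The concrete values are invented for illustration.
model_report = {
    "architecture": "RandomForestClassifier (100 trees, default depth)",
    "input": "20 numeric features per sample",
    "output": "binary label: rare event / no rare event",
    "building_blocks": ["scikit-learn 1.x", "pandas preprocessing"],
    "loss_or_criterion": "Gini impurity (tree splitting criterion)",
    "training_data": "train/test split 80/20, fixed random seed",
}

for key, value in model_report.items():
    print(f"{key}: {value}")
```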

Naturally, they also recommend reporting the metrics that were used to optimize the models, e.g. accuracy, F1-score, MCC or others. I know, these are probably essentials, but you would be surprised how many authors do not really report them. If they are omitted, how do we know whether the values were so poor that the authors left them out (low performance of the model) or whether the metrics were simply not relevant (low relevance of the metrics – which would be a good thing)?
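Reporting them is also the cheapest part of the checklist; a sketch with scikit-learn (toy labels of mine, purely for illustration):

```python
# The metrics mentioned in the guidelines, computed with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

# Placeholder labels/predictions just to show the calls.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1-score:", f1_score(y_true, y_pred))
print("MCC:     ", matthews_corrcoef(y_true, y_pred))
```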

For now, these guidelines are only a draft, but I hope that they will become mainstream, just like the empirical guidelines from ACM (GitHub – acmsigsoft/EmpiricalStandards: Empirical standards for conducting and evaluating research in software engineering).