Vulnerability detection, a new article (highlight)

sec23summer_449-mirsky-prepub.pdf (usenix.org)

Cybersecurity has been, and will always be, a challenge for software systems. Security analysis (and exploitation, for that matter) is often perceived as an art. There is no single tool and no single method that will make our software secure.

This article is interesting because of the way the proposed approach works. Usually, security analyzers are token-based: they see programs as a sequence of instructions. They are very good, but they struggle with understanding the context of the analyzed program.

Let me give you an example. Say we are analyzing a program for SQL injections – a conceptually simple vulnerability. We can check whether the SQL statement in the code contains any parameters. If it does not, then it is safe – we know exactly what we send to the database – but such parameter-free statements are not very common (or even useful). So, most statements will have some sort of parameters, and this is where it gets tricky. These parameters need to be validated, and this validation can be done in the same function (just before the actual SQL statement) or somewhere in the calling function/method. The check in the calling function/method is where token-based security analyzers give up. The sketch below illustrates the two cases.
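To make it concrete, here is a minimal Python sketch of the two situations (the function names, the sanitizer and the query are mine, purely for illustration):

```python
import re
import sqlite3

def is_valid_username(value: str) -> bool:
    """Hypothetical sanitizer: accept only alphanumeric user names."""
    return re.fullmatch(r"[A-Za-z0-9_]+", value) is not None

# Case 1: the validation sits right next to the SQL statement.
# A token-based analyzer sees the check and the query in the same scope.
def fetch_user_checked(conn: sqlite3.Connection, username: str):
    if not is_valid_username(username):
        raise ValueError("invalid username")
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '%s'" % username
    ).fetchall()

# Case 2: the query function itself does not validate anything...
def fetch_user(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '%s'" % username
    ).fetchall()

# ...because the validation happens one level up, in the caller.
# Looked at in isolation, fetch_user() appears vulnerable.
def handle_request(conn: sqlite3.Connection, username: str):
    if not is_valid_username(username):
        raise ValueError("invalid username")
    return fetch_user(conn, username)
```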

Now, this paper presents an approach that works on a call graph, which allows for exactly this kind of cross-function check. I still need to understand the details myself, but I hope to do that quite soon. The full source code is available here: GitHub – ymirsky/VulChecker: A deep learning model for localizing bugs in C/C++ source code (USENIX’23)
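To give a feel for why a call graph helps, here is a toy Python sketch of the underlying idea – this is not VulChecker itself, which is a deep learning model, just the intuition that a graph over functions lets the analysis look beyond the function that contains the query:

```python
import ast
from collections import defaultdict

SOURCE = '''
def is_valid(name): ...

def fetch_user(conn, name):
    return conn.execute("SELECT * FROM users WHERE name = '%s'" % name)

def handle_request(conn, name):
    if is_valid(name):
        return fetch_user(conn, name)

def handle_request_unsafe(conn, name):
    return fetch_user(conn, name)
'''

def build_call_graph(tree: ast.Module) -> dict:
    """Map every function name to the set of functions it calls directly."""
    graph = defaultdict(set)
    for fn in ast.walk(tree):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                    graph[fn.name].add(node.func.id)
    return graph

graph = build_call_graph(ast.parse(SOURCE))

# Flag callers that reach the SQL sink without ever consulting the validator.
for caller, callees in graph.items():
    if "fetch_user" in callees and "is_valid" not in callees:
        print(f"potential unvalidated call to fetch_user() in {caller}()")
# -> potential unvalidated call to fetch_user() in handle_request_unsafe()
```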

Language models and security vulnerabilities – what works and what does not… (article review)

Image by Jan Alexander from Pixabay

1176898.pdf (hindawi.com)

Language models are powerful tools if you know how to use them. One of the areas where they can be applied is in recognizing security vulnerabilities. In this article, the authors look into six language models and test them for that task.
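To make the setting concrete, this is roughly what applying such a model to vulnerability detection looks like – a minimal sketch assuming a CodeBERT-style encoder with a classification head (the model choice, the snippet and the two-class setup are my illustration, not the paper’s exact pipeline):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical two-class setup: 0 = not vulnerable, 1 = vulnerable.
model_name = "microsoft/codebert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

snippet = 'strcpy(buffer, user_input);  /* classic overflow pattern */'
inputs = tokenizer(snippet, return_tensors="pt", truncation=True, max_length=256)

with torch.no_grad():
    logits = model(**inputs).logits

# The classification head is randomly initialized, so these scores are
# meaningless until the model is fine-tuned on labeled vulnerable/benign
# examples -- which is where good examples and ground truth come in.
print(logits.softmax(dim=-1))
```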

The results show that there are more challenges than solutions in this area. The models can be applied to programming languages, but the problems lie with the examples and the ground truth. What is good about the paper is that it provides a solid overview of the models and how they are used. The authors also dig a bit deeper into why the models’ limitations arise.

It’s something that our team has also observed in other contexts, but I will talk about that at some other event. Stay tuned.

50 Language/Code models, let’s talk…

Image by pencil parker from Pixabay

As you have probably observed, I’ve been into language models for code analysis, design and recognition. It’s a great way of spending your research time, as it gives you the possibility to understand how we program and how to model that. In my personal case, this is a great complement to the empirical software engineering research that I do otherwise.

Recently I got the feeling that I look into more and more of these models, all of them bearing a certain similarity to Google’s BERT model or Facebook’s TransCoder. So I set off to do a short review of the papers that actually talk about code models or, as they are often called, programming language models. I started from the paper describing CodeBERT ( [2002.08155] CodeBERT: A Pre-Trained Model for Programming and Natural Languages (arxiv.org) ) and looked at the 500 citations that the paper has. The list below is just the list of the models that are created based on CodeBERT. There are also models created based on AlphaCode or GitHub Copilot, but I leave these for another occasion.

I must admit that I did not read all of these papers and did not look at all of these models – far from it, I only looked at some of them. My conclusion is that we have a lot of models, but the quality of the results varies a lot. The best models provide good results in ca. 20% of cases. AlphaCode is an example of such a model – fantastic, but not super-accurate all the time. As the model is used for super-competitive tasks, 20% is actually very impressive – it’s difficult to say that I would do better in these programming competitions, so I’m not criticizing.

The best model I’ve seen so far, however, is GitHub Copilot, which is by far the best code-generating model the world has seen. Well, there may be models that the world has not seen, but then they do not count. If you would like to see a preview of how I use it (part I), you can take a look at this video:

I sincerely hope that you find this list useful and that you can help me to keep it updated – drop me an e-mail about the list if you want to:

  1. AlphaCode: https://www.deepmind.com/blog/competitive-programming-with-alphacode
  2. TransCoder: https://github.com/facebookresearch/TransCoder
  3. CodeT5: https://arxiv.org/pdf/2109.00859 
  4. CodeITT5: https://arxiv.org/pdf/2208.05446 
  5. ProphetNet: https://arxiv.org/pdf/2104.08006 
  6. Cotex: https://arxiv.org/pdf/2105.08645 
  7. Commit2vec: https://arxiv.org/pdf/1911.07605 
  8. CoreGen: https://www.sciencedirect.com/science/article/pii/S092523122100792X  
  9. SyncoBERT: https://arxiv.org/pdf/2108.04556 
  10. TreeBERT: https://proceedings.mlr.press/v161/jiang21a/jiang21a.pdf 
  11. FastSpec: https://ieeexplore.ieee.org/iel7/9581154/9581061/09581258.pdf 
  12. CVEFixes: https://dl.acm.org/doi/pdf/10.1145/3475960.3475985 
  13. CodeNet: https://arxiv.org/pdf/2105.12655
  14. Graph4Code: https://www.researchgate.net/profile/Jamie-Mccusker-2/publication/339445570_Graph4Code_A_Machine_Interpretable_Knowledge_Graph_for_Code/links/5fd2a29a45851568d154cfaa/Graph4Code-A-Machine-Interpretable-Knowledge-Graph-for-Code.pdf 
  15. DeGraphCE: https://dl.acm.org/doi/pdf/10.1145/3546066 
  16. VELVET: https://ieeexplore.ieee.org/iel7/9825713/9825693/09825786.pdf
  17. Code2Vec: https://uwspace.uwaterloo.ca/bitstream/handle/10012/15862/Arumugam_Lakshmanan.pdf?sequence=9&isAllowed=y 
  18. MulCode: https://ieeexplore.ieee.org/iel7/9425868/9425874/09426045.pdf 
  19. Flakify: https://ieeexplore.ieee.org/iel7/32/4359463/09866550.pdf 
  20. CoDesc: https://arxiv.org/pdf/2105.14220 
  21. NatGen: https://arxiv.org/pdf/2206.07585 
  22. Coctail: https://arxiv.org/pdf/2106.05345 
  23. MergeBERT: https://arxiv.org/pdf/2109.00084 
  24. SPTCode: https://dl.acm.org/doi/pdf/10.1145/3510003.3510096 
  25. InCoder: https://arxiv.org/pdf/2204.05999 
  26. JavaBERT: https://ieeexplore.ieee.org/iel7/9680270/9679822/09680322.pdf 
  27. BERT2Code: https://arxiv.org/pdf/2104.08017 
  28. NeuralCC: https://arxiv.org/pdf/2012.03225 
  29. LineVD: https://arxiv.org/pdf/2203.05181 
  30. GraphCode2Vec: https://arxiv.org/pdf/2112.01218 
  31. ASTBERT: https://arxiv.org/pdf/2201.07984 
  32. CodeRL: https://arxiv.org/pdf/2207.01780 
  33. CV4Code: https://arxiv.org/pdf/2205.08585 
  34. NaturalCC: https://xcodemind.github.io/papers/icse22_naturalcc_camera_submitted.pdf 
  35. StructCode: https://arxiv.org/pdf/2206.05239   
  36. VulBERT: https://arxiv.org/pdf/2205.12424 
  37. CodeMVP: https://arxiv.org/pdf/2205.02029 
  38. miBERT: https://ieeexplore.ieee.org/iel7/9787917/9787918/09787973.pdf?casa_token=rPNbu-k9Gh4AAAAA:3lkZVyUjnDP4Sp1UmmO9eVftsRaf1zAuw1YhHQogsyDBE2Y7992gBlhPb9jKVcI-5Q8tTv2JEyQ 
  39. LineVUL: https://www.researchgate.net/profile/Chakkrit-Tantithamthavorn/publication/359402890_LineVul_A_Transformer-based_Line-Level_Vulnerability_Prediction/links/623ee3d48068956f3c4cbede/LineVul-A-Transformer-based-Line-Level-Vulnerability-Prediction.pdf 
  40. CommitBART: https://arxiv.org/pdf/2208.08100 
  41. GAPGen: https://arxiv.org/pdf/2201.08810 
  42. El-CodeBERT: https://dl.acm.org/doi/pdf/10.1145/3545258.3545260?casa_token=DNyXQpkP69MAAAAA:y2iJC3RliEh7yJ6SzRpRRKrzPn2Q6w25vpm5vpoN0TksDh_SbmVfa_8JcDxvVN8FydOL_vTJqH-6OA 
  43. COCLUBERT: https://ieeexplore.ieee.org/iel7/9679834/9679948/09680081.pdf?casa_token=FtrqlHTmm74AAAAA:kkMyRsMl9xqPQQSBTRd6vFD-2-DyVSomYBYqm8u8aKs7B0_rkYYfL_OLVmOHgzn1-vqMF6W7pM8 
  44. Xcode: https://dl.acm.org/doi/pdf/10.1145/3506696?casa_token=5H8iW3e2GlYAAAAA:m2QA-DXSk5LZYazFxDPEVfLZcYREqDomXNg5YmkR-rPllHD37Qd8eLw_SCu6rbhNHZJ2Od24dvJt_Q 
  45. CobolBERT: https://arxiv.org/pdf/2201.09448 
  46. SiamBERT: https://melqkiades.github.io/files/download/papers/siambert-sais-2022.pdf 
  47. CodeReviewer: https://arxiv.org/pdf/2203.09095 
  48. CodeBERT-nt: https://arxiv.org/pdf/2208.06042 
  49. BashExplainer: https://arxiv.org/pdf/2206.13325

Code reviews and cybersecurity… (article highlight)

https://arxiv.org/pdf/2208.04261.pdf

So I find myself on the train again, this time rolling towards MDU for their cybersecurity workshop. Not that I am an expert on cybersecurity as such, but I know a bit about programming and design. I also know enough to see that a secure product needs to be designed for security, not only tested for it.

I stumbled upon this paper about a week ago, probably because it was submitted to some conference and the pre-print became available. It is a paper that interviews 10 developers and surveys over 180 professionals about how they work with finding security vulnerabilities during code reviews. I will not describe the entire article, although I wish I had the time to do that. Here are some of the highlights.

“Interviewees stated to disregard security aspects during code reviews due to their assumptions about the security dynamic of the application they develop.” – this is an interesting finding, as many companies see code reviews as a silver bullet of software quality assurance today. Yet, the developers do not review something they think “someone else” does…

When it comes to the survey, the results show that the majority of software developers think about security during their code reviews. The majority of the developers also admit that there are no security experts reviewing their code, which is probably not great. Maybe we should have some of the security experts do some code reviews? Maybe both the developers and the security specialists would learn something from one another?

Finally, I think that the survey puts a finger on one of the pain points in modern companies – support for specific aspects of code reviews. The respondents would like to see more support for developers in making better security evaluations. I can only speculate that this is about in-depth training.

Well, very interesting reading. Let me get back to the paper, looking at the beautiful landscapes of Östergötland…

What are code reviews really good for?

Visualization of the source code of one module from the Cloudera projects. The embeddings are taken from our team’s neural network; t-SNE is a visualization technique borrowed from bioinformatics.
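Since the caption mentions t-SNE, here is roughly how such a plot is produced – a minimal sketch with random stand-in vectors in place of the real embeddings from the network:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Stand-in for the real model output: one 768-dimensional vector per code fragment.
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(200, 768))

# t-SNE projects the high-dimensional vectors to 2D while trying to preserve
# local neighborhood structure, so similar fragments end up clustered together.
coords = TSNE(n_components=2, perplexity=30, init="pca", random_state=42).fit_transform(embeddings)

plt.scatter(coords[:, 0], coords[:, 1], s=8)
plt.title("t-SNE projection of code-fragment embeddings")
plt.show()
```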

Concerns identified in code review: A fine-grained, faceted classification – ScienceDirect

Code reviews are time consuming. And effort intensive. And boring. And needed. Depending on whom we ask, we get one of the above answers (well, 80% of the time). The reality is that code reviews are not the most productive activity. Reading the code and looking for defects is fine when we do it once, but when we need to do it continuously, as part of continuous integration, the story changes. It becomes like studying for an exam or doing homework – we do everything else to postpone it. Then someone waits longer, or the code quality suffers.

There has been a lot of work done to make this activity more fun – gamification, automated support, using machine learning to filter out the code that we can check automatically – just to name a few. As far as I know, though, there has not been much work on understanding what kind of problems code reviews really find.

In this article, the authors address that very question. Admittedly, they only analyzed 7 OSS projects, but their results are still interesting: “We identified 116 defect types that we grouped into 15 groups to create a defect classification. Additionally, 38% of these defects could be automatically detected accurately.”

So, that basically means that 38% of the defects could be identified by testing or static analysis (or some other fancy automation technique). This figure summarizes their results (this is a link to the figure on ScienceDirect): https://ars.els-cdn.com/content/image/1-s2.0-S0950584922001653-gr5_lrg.jpg
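To make “automatically detected” a bit more concrete: several of the recurring concern categories listed below are the kind of thing a simple rule can flag. The two checks here are my own toy illustrations, not the detectors evaluated in the paper:

```python
import re

def missing_header_comment(java_source: str) -> bool:
    """Toy rule for the 'header comments' category: the file should start with a block comment."""
    return not java_source.lstrip().startswith("/*")

def overly_long_identifiers(java_source: str, limit: int = 35) -> list:
    """Toy rule for the 'identifier naming' category: flag absurdly long identifiers."""
    return [name for name in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", java_source)
            if len(name) > limit]

example = "public class X { int thisIsAnAbsurdlyLongVariableNameThatNobodyWillEverRead = 0; }"
print(missing_header_comment(example))   # True - no header comment
print(overly_long_identifiers(example))  # ['thisIsAnAbsurdlyLongVariableNameThatNobodyWillEverRead']
```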

So, what are code reviews good for? Here is their list:

  • threads,
  • header comments,
  • errors, warnings and logging,
  • test cases,
  • annotations,
  • performance,
  • identifier naming,
  • modifiers,
  • comments,
  • javadoc,
  • design,
  • implementation, and
  • logic and functionality

The list is sorted from the least frequent to the most frequent – so logic and functionality is the category where code reviews are the most useful. I also need to say that the frequencies are not super high – threading has only 1 detected concern, while logic and functionality has 57. So, you know, it could be more, given how much time is spent on code reviews. I guess this is what quality costs nowadays, even though there is no real data on this.

Language models in Software Engineering (new paper review)

Image by Lorenzo Cafaro from Pixabay

Article available at: https://arxiv.org/pdf/2205.11739.pdf

It’s no secret that I’ve been fascinated by modern, BERT-like language models. I’ve seen what they can do and how they operate, and I use them in two of my research projects. So, when this paper came along, I read it right away.

It’s a paper that gives an overview of the kinds of tasks for which language models are used in software engineering today. The list is long and contains a variety of tasks, e.g., code-to-code retrieval, repair of source code, or bug finding/fixing. That is a lot of tasks in total, but, IMHO, rather low-level ones. There are no tasks that attempt to understand code at the design level, for example whether we can really see a specific design in the code.
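As a concrete example of one of these tasks, code-to-code retrieval essentially boils down to embedding snippets and ranking them by similarity – a sketch with placeholder vectors, since in practice each vector would come from a code model such as CodeBERT:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder embeddings; in a real system each vector would be produced by
# a code model (e.g., mean-pooled encoder outputs for the snippet).
corpus = {
    "quick_sort.py": rng.normal(size=768),
    "binary_search.py": rng.normal(size=768),
    "http_client.py": rng.normal(size=768),
}
query = rng.normal(size=768)  # embedding of the snippet we are searching with

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Code-to-code retrieval: return the most similar snippets first.
ranked = sorted(corpus, key=lambda name: cosine(query, corpus[name]), reverse=True)
print(ranked)
```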

The paper also shows which models are used and provides references to them. They list 20 models, together with the tasks for which they were trained and the datasets they were trained on. Fantastic!

I need to dive deeper into these models, but I’m super happy that there is now a list of them and that language technology forms a significant body of work in software engineering.

How good are language models for source code tasks?

https://ieeexplore-ieee-org.ezproxy.ub.gu.se/document/9653849

Using machine learning, and deep learning in particular, for software engineering tasks has exploded recently. I would say that it has exploded a bit too much. I am myself to blame here, as our team was one of the early adopters with the CCFlex model and source code analysis.

Well, this paper compares a number of modern deep learning models, so-called transformers, on various code and comment analysis tasks. The authors did a great job of collecting a set of models and datasets, training the models and critically evaluating their performance.

I recommend reading the entire paper, but what they found was a bit surprising to me. First of all, they found that the transformer models are better for natural language and not so great for source code analysis. The hypothesis is that the structure of programs matters here. They also found that pre-training is important, but not crucial – it contributes only a moderate effect in the end. The dataset, and its content, is much more important for the task at hand.

This is a great paper, and I hope that it can become essential reading for software engineers working on AI systems that support software engineering tasks.

Reviewing rounds prediction for code patches

Image by Pixabay

Reviewing rounds prediction for code patches | SpringerLink

Understanding how reviews of source code are done seems to have become one of my main interests recently. Partly because reviews are important for software quality while also taking time. Partly, also, because I think it is interesting to check whether we can quantify a good opinion from a good software developer.

In this paper, the authors study to which degree one can predict how many comments a given patch will receive. Now, this problem may not be the most exciting one, but it attracted my attention because the authors studied the same projects as we did. However, in contrast to our work, they also take into consideration features that characterize software developer networks – for example, experience in commenting on software patches, or the networking itself.

Now, to the results. The models presented in this paper seem to be quite good at quantifying and predicting patches within the same project – all kinds of predictions have pretty good F1-scores, above 65%. This means that we can train these models for our own projects and be able to predict whether a particular patch will be commented on once or twice, or even many times.

The performance of the models on the cross-project dataset is a different story. There, the performance is OK for predicting whether a particular patch will be commented on at all. Predicting how many times the patch will be commented on, or that it will be commented on many times, does not work very well. The magnitude of the performance measures oscillates close to the 0% mark, which means that the models are no better than just guessing. I guess you cannot have it all… from one model.

To sum up, the reason I read this article in more detail than others was essentially not the performance, but trying to understand the underlying techniques that they use. I’d like to say that they use a good set of features, which I recommend others to use (and will definitely use myself in the next studies), and I like the fact that they use simple language models, like word2vec, to capture the programming language. What I miss, though, is a check of whether there is a statistically significant dependency between the sentiment (or even its strength) and the length of the discussion.
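For reference, the word2vec ingredient is the simple part – something along these lines, where each “sentence” is the token stream of a patch (the tokenization and the exact feature construction in the paper are more involved; this is just the general idea):

```python
from gensim.models import Word2Vec

# Each "sentence" is the (simplified) token stream of one patch hunk.
patches = [
    "if ( user == null ) return false ;".split(),
    "return userRepository . findById ( userId ) ;".split(),
    "if ( userId == null ) log . warn ( msg ) ;".split(),
]

# Train a small word2vec model over the patch tokens.
model = Word2Vec(sentences=patches, vector_size=50, window=3, min_count=1, epochs=50)

# Token vectors can then be averaged per patch and combined with the
# developer-network features before training the prediction model.
print(model.wv["user"][:5])
print(model.wv.most_similar("user", topn=3))
```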

Comparing different security vulnerability detection techniques (article review)

Image by Reimund Bertrams from Pixabay

An Empirical Study of Rule-Based and Learning-Based Approaches for Static Application Security Testing (arxiv.org)

In recent weeks I’ve turned to a specific part of my work, i.e., security vulnerability detection. In many areas, working with security has focused on the entire chain. And that’s a good thing – we need to understand when and where we have a vulnerability. However, that’s not what I can help with, which has never really stopped me before.

So, I was looking for a more programmatic view on security. To be more exact, I wanted to know what we, as software engineers, need to focus on when it comes to cybersecurity. We can, naturally, measure it, but that’s probably not the only thing. We can analyze libraries from OSS communities to find which ones could be exploited. We can even program in a specific way to minimize the risk of exploitation.

In this paper, the authors compare two different techniques for finding software vulnerabilities – static analysis tools and software vulnerability prediction (SVP) models. They have identified 12 different findings, of which the following are the most interesting ones:

  • SVP models are generally a bit better when it comes to precision and overall performance.
  • SVP models provide fewer files to inspect as output, which saves inspection cost.
  • The two approaches lack synergy, and it’s difficult to use them together to increase their performance.

Since they have compared only a few tools, I believe it’s important to do more experiments. It is also important to understand whether it is good or bad to have fewer files to inspect – I mean, one undetected vulnerability can be very costly…
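For readers who have not seen a software vulnerability prediction model before, the general shape is: compute file-level features, label files by whether a vulnerability was later reported in them, and train a classifier. A minimal sketch with made-up features and data (the paper’s actual models and feature sets are more elaborate):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Made-up file-level features: lines of code, cyclomatic complexity,
# number of past changes, number of distinct authors.
X = np.array([
    [120,  8, 15, 3],
    [950, 42, 60, 9],
    [ 45,  2,  3, 1],
    [400, 25, 30, 5],
    [ 80,  5,  7, 2],
    [700, 33, 52, 8],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = a vulnerability was later reported in the file

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The model is then used to rank new files, so reviewers inspect the most
# suspicious ones first -- hence "fewer files to inspect".
print(clf.predict_proba(np.array([[300, 20, 25, 4]])))
```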

What makes a great code maintainer…

Image by Rudy and Peter Skitterians from Pixabay



ICSE2021_B.pdf (igor.pro.br)

For many of us, software engineering is about the possibility to create new projects, new products and cool services. We do that often, but we equally often forget about the maintenance. Well, maybe not forget, but we deliberately do not want to remember it. It’s natural, as maintaining old code is not really that interesting.

When reading this paper, I realized that my view of maintenance is a bit dated. In my time in industry, maintenance was mostly “bug-fixing”. Today, it is more about community work. As the abstract of this paper says: “Although Open Source Software (OSS) maintainers devote a significant proportion of their work to coding tasks, great maintainers must excel in many other activities beyond coding. Maintainers should care about fostering a community, helping new members to find their place, while also saying “no” to patches that although are well-coded and well-tested, do not contribute to the goal of the project.”

This paper conducts a series of interviews with software maintainers. In short, their results are that great software maintainers are:

  • Available (response time),
  • Disciplined (follows the process),
  • Has a global view of what to achieve with the review,
  • Communicative,
  • Empathetic,
  • Community building,
  • Technically excellent,
  • Quality aware,
  • Has domain experience,
  • Motivated,
  • Open minded,
  • Patient,
  • Diligent, and
  • Responsible

It’s a long list, and the priority of each of these characteristics differs from one maintainer to another. However, it’s important that we see the software maintainer as a social person who contributes to the community rather than someone who just sits in a dark office and reads code all day long. The maintainers are really the people who make software engineering groups work well.

After reading the paper, I’m more motivated to maintain the community of my students!