Language models in Software Engineering (new paper review)

Image by Lorenzo Cafaro from Pixabay

Article available at: https://arxiv.org/pdf/2205.11739.pdf

It’s no secret that I’ve been fascinated by modern, BERT-like language models. I’ve seen what they can do and how they operate, and I use them in two of my research projects. So, when this paper came around, I read it right away.

It’s a paper that provides an overview of the tasks for which language models are used in software engineering today. The list is long and varied, e.g., code-to-code retrieval, source code repair, or bug finding/fixing. A lot of tasks in total but, IMHO, rather low-level ones. There are no tasks that attempt to understand code at the design level, for example whether we can actually recognize a specific design in the code.

The paper also shows which models are used and provides references to them. The authors list 20 models, together with the tasks for which they were trained and the datasets they were trained on. Fantastic!

I need to dive deeper into these models, but I’m super happy that there is now a list of them and that language technology makes up a significant body of work in software engineering.

How good are language models for source code tasks?

https://ieeexplore-ieee-org.ezproxy.ub.gu.se/document/9653849

Using machine learning, and deep learning in particular, for software engineering tasks has exploded recently. I would say that it has exploded a bit too much. I’m partly to blame here, as our team was one of the early adopters with the CCFlex model and source code analysis.

Well, this paper compares a number of modern deep learning models, so-called transformers, on various code and comment analysis tasks. The authors did a great job of collecting a set of models and datasets, training the models, and critically evaluating their performance.

I recommend reading the entire paper, but what they found was a bit surprising to me. First of all, they found that transformer models are better suited for natural language than for source code analysis. The hypothesis is that the structure of programs matters here. They also found that pre-training is important, but not crucial: it accounts for only a moderate effect in the end. The dataset, and its content, is much more important for the task at hand.
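
To make the pre-training comparison concrete: below is a minimal sketch (my own illustration, not the authors’ setup) of how one could compare a pre-trained code transformer against the same architecture trained from scratch on the same data. The checkpoint name is real; fine-tuning and data loading are left out.

```python
# Sketch: isolate the effect of pre-training by fine-tuning two variants
# of the same architecture on the same dataset. Illustrative only.
from transformers import (AutoConfig, AutoModelForSequenceClassification,
                          AutoTokenizer)

checkpoint = "microsoft/codebert-base"  # a transformer pre-trained on code
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Variant 1: start from the pre-trained weights.
pretrained = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2)

# Variant 2: identical architecture, randomly initialized weights.
config = AutoConfig.from_pretrained(checkpoint, num_labels=2)
from_scratch = AutoModelForSequenceClassification.from_config(config)

# Fine-tuning both on the same task and comparing scores separates the
# contribution of pre-training from the contribution of the dataset.
```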

This is a great paper, and I hope it becomes essential reading for software engineers working on AI systems that support software engineering tasks.

Reviewing rounds prediction for code patches

Image by Pixabay

Reviewing rounds prediction for code patches | SpringerLink

Understanding how reviews of source code are done seems to be one of my main interests recently. Partly because reviews are important for software quality while also taking time. Partly because I think it is interesting to check whether we can quantify a good opinion from a good software developer.

In this paper, the authors study to what degree one can predict how many comments a given patch will receive. Now, this problem may not be the most exciting one, but it attracted my attention because the authors studied the same projects as we did. However, in contrast to our work, they also take into consideration features that characterize software developer networks – for example, experience in commenting on software patches, or networking.

Now, to the results. The models presented in this paper seem to be quite good at predicting review activity for patches within the same project – all kinds of predictions have pretty good F1-scores, above 65%. This means that we can train these models on our own projects and predict whether a particular patch will be commented on once, twice, or even many times.

The performance drops, however, on the cross-project dataset. There, the performance is acceptable for predicting whether a particular patch will be commented on at all. Predicting how many times the patch will be commented on, or that it will be commented on many times, does not work very well. The performance measures oscillate close to the 0% mark, which means that the models are no better than guessing. I guess you cannot have it all… from one model.

To sum up, the reason I read this article in more detail than others was essentially not the performance, but trying to understand the underlying techniques they use. I’d like to point out that they use a good set of features, which I recommend others to use (and will definitely use myself in future studies), and that they use simple language models, like word2vec, to understand the programming language. What I miss, though, is a check of whether there is a statistically significant dependency between the sentiment (or even its strength) and the length of the discussion.
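
As a side note, the word2vec part is easy to reproduce in spirit. A minimal sketch (my reconstruction with gensim, not the authors’ code) of turning patch token streams into vectors could look like this:

```python
# Sketch: word2vec over source code tokens, so that a patch can be
# represented as (averaged) token vectors for a downstream classifier.
from gensim.models import Word2Vec

# Each "sentence" is the token stream of one patch; a real pipeline
# would use a proper lexer instead of whitespace splitting.
patches = [
    "public void save ( User user )".split(),
    "public User load ( int id )".split(),
]

model = Word2Vec(sentences=patches, vector_size=100, window=5, min_count=1)
vector = model.wv["User"]  # 100-dimensional embedding of the token "User"
```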

Comparing different security vulnerability detection techniques (article review)

Image by Reimund Bertrams from Pixabay

An Empirical Study of Rule-Based and Learning-Based Approaches for Static Application Security Testing (arxiv.org)

In recent weeks I’ve turned to a specific part of my work, i.e., security vulnerability detection. In many areas, working with security has focused on the entire chain. And that’s a good thing – we need to understand when and where we have a vulnerability. However, that’s not what I can help with, which has never really stopped me before.

So, I was looking for a more programmatic view on security. To be more exact, I wanted to know what we, as software engineers, need to focus on when it comes to cybersecurity. We can, naturally, measure it, but that’s probably not the only thing. We can analyze libraries from OSS communities to find which ones could be exploited. We can even program in a specific way to minimize the risk of exploitation.

In this paper, the authors compare two different techniques for detecting software vulnerabilities – static application security testing (SAST) tools and software vulnerability prediction (SVP) models. They have identified 12 different findings, of which the following are the most interesting ones:

  • SVP models are generally a bit better when it comes to precision and overall performance.
  • SVP models provide fewer files to inspect as output, which saves cost.
  • The two approaches lack synergy, and it’s difficult to use them together to increase their performance.

Since they have compared only a few tools, I believe it’s important to do more experiments. It is also important to understand whether it is good or bad to have fewer files to inspect – I mean, one undetected vulnerability can be very costly…
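
For readers who have not met SVP models before, the core idea is quite simple: a classifier over per-file features that ranks files by their predicted vulnerability risk. A toy sketch (illustrative features and data, not the ones from the paper):

```python
# Toy SVP model: rank files by predicted vulnerability risk.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# One row per file: [lines of code, cyclomatic complexity, past changes]
X = [[120, 7, 3], [850, 25, 14], [60, 2, 1], [400, 18, 9]]
y = [0, 1, 0, 1]  # 1 = file turned out to contain a vulnerability

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Sorting files by predicted probability yields the short inspection
# list that the paper highlights as the main cost saving of SVP models.
print(clf.predict_proba(X_test)[:, 1])
```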

What makes a great code maintainer…

Image by Rudy and Peter Skitterians from Pixabay


ICSE2021_B.pdf (igor.pro.br)

For many of us, software engineering is the possibility to create new projects, new products, and cool services. We do that often, but we equally often forget about maintenance. Well, maybe not forget, but we deliberately do not want to remember it. It’s natural, as maintaining old code is not really that interesting.

When reading this paper, I realized that my view of maintenance is a bit dated. In my time in industry, maintenance was mostly “bug-fixing”. Today, it is more about community work. As the abstract of this paper says: “Although Open Source Software (OSS) maintainers devote a significant proportion of their work to coding tasks, great maintainers must excel in many other activities beyond coding. Maintainers should care about fostering a community, helping new members to find their place, while also saying “no” to patches that although are well-coded and well-tested, do not contribute to the goal of the project.”

This paper reports on a series of interviews with software maintainers. In short, the results are that great software maintainers are:

  • Available (response time),
  • Disciplined (follows the process),
  • Has a global view of what to achieve with the review,
  • Communicative,
  • Empathetic,
  • Community building,
  • Technically excellent,
  • Quality aware,
  • Has domain experience,
  • Motivated,
  • Open minded,
  • Patient,
  • Diligent, and
  • Responsible.

It’s a long list, and the priority of each of these characteristics differs from one reviewer to another. However, it’s important that we see the software maintainer as a social person who can contribute to the community, rather than someone who just sits in a dark office and reads code all day long. Maintainers are really the people who make software engineering groups work well.

After reading the paper, I’m more motivated to maintain the community of my students!

Siri, Write the Next Method… (article highlight)

Image by yangjiepsy01 from Pixabay

Wen2021a.pdf (usi.ch)

I came across this article by accident. Essentially, I do not even remember what I was looking for, but that’s maybe not so important. Either way, I really want to try this tool.

This research study is about designing a tool for code completion – not just completion of a word, statement, or variable, but a recommendation of the next complete method to implement.

From the abstract: “Code completion is one of the killer features of Integrated Development Environments (IDEs), and researchers have proposed different methods to improve its accuracy. While these techniques are valuable to speed up code writing, they are limited to recommendations related to the next few tokens a developer is likely to type given the current context. In the best case, they can recommend a few APIs that a developer is likely to use next. We present FeaRS, a novel retrieval-based approach that, given the current code a developer is writing in the IDE, can recommend the next complete method (i.e., signature and method body) that the developer is likely to implement. To do this, FeaRS exploits “implementation patterns” (i.e., groups of methods usually implemented within the same task) learned by mining thousands of open source projects. We instantiated our approach to the specific context of Android apps. A large-scale empirical evaluation we performed across more than 20k apps shows encouraging preliminary results, but also highlights future challenges to overcome.”

As far as I understand, this is a plug-in for Android Studio, so I will probably need to see if I can use it outside of this context. However, it seems to be very interesting…
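
Conceptually, the retrieval part is not magic. A toy version of the underlying idea – recommend methods that frequently co-occur with the ones already implemented – might look like the sketch below (my illustration; FeaRS mines real implementation patterns from thousands of apps):

```python
# Toy next-method recommender based on mined co-occurrence patterns.
from collections import Counter

# "Implementation patterns": sets of methods implemented together,
# as they would be mined from open source projects.
mined_classes = [
    {"onCreate", "onPause", "onResume"},
    {"onCreate", "onResume", "bindService"},
    {"onCreate", "onDestroy"},
]

def recommend(implemented, k=3):
    """Score candidates by how often they co-occur with the methods
    the developer has already written."""
    scores = Counter()
    for methods in mined_classes:
        if implemented <= methods:           # pattern covers current code
            for candidate in methods - implemented:
                scores[candidate] += 1
    return [m for m, _ in scores.most_common(k)]

print(recommend({"onCreate"}))  # 'onResume' scores highest here
```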

Exploring code weaknesses in StackOverflow

https://doi.ieeecomputersociety.org/10.1109/TSE.2021.3058985

Whether we like it or not, software designers, programmers, and architects use StackOverflow. Mostly because they want to be part of a community – to help others and to help themselves.

However, StackOverflow has become a de-facto go-to place to find programming answers. Oftentimes, these answers include usage of libraries or other solutions. These libraries solve the immediate problems, but they can also introduce vulnerabilities that the programmers are not aware of.

In this article, the authors review how C/C++ authors introduce and revise vulnerabilities in their code. From the introduction: “We scan 646,716 C/C++ code snippets from Stack Overflow answers. We observe that code weaknesses are detected in 2% of the C/C++ answers with code snippets; more specifically, there are 12,998 detected code weaknesses that fall into 36% (i.e., 32 out of 89) of all the existing C/C++ CWE types.”

I like that the paper presents a number of good examples, which can be used for training software engineers – both at the university level and later during their work. Some of them can even be used to create coding guidelines for companies, including good and bad examples.

The paper has a lot of great findings about the way in which weaknesses and vulnerabilities are introduced, for example: “92.6% (i.e., 10,884) of the 11,748 Codew has weaknesses introduced when their code snippets were initially created on Stack Overflow, and 69% (i.e., 8,103 out of 11,748) of the Codew has never been revised”.

I strongly recommend reading the paper and giving it to your software engineers to scan…
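
If you want to do that scanning literally, scripting a static analyzer over snippets is straightforward. A minimal sketch with the Cppcheck CLI (the paper’s exact toolchain may differ; this only illustrates the idea):

```python
# Sketch: run Cppcheck over a folder of extracted C/C++ snippets.
import pathlib
import subprocess

for snippet in sorted(pathlib.Path("snippets").glob("*.c")):
    # --enable=warning,style turns on checks beyond plain errors.
    result = subprocess.run(
        ["cppcheck", "--enable=warning,style", str(snippet)],
        capture_output=True, text=True)
    if result.stderr.strip():        # Cppcheck writes findings to stderr
        print(f"{snippet}:\n{result.stderr.strip()}\n")
```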

Consistency in code reviews (article review)

Image by press 👍 and ⭐ from Pixabay

tse2020_hirao.pdf (uwaterloo.ca)

In the last year, I’ve written a lot about code reviews, mostly because this is where I put my effort now and where I see that software engineers could improve.

Although there are a lot of studies about how good code reviews are and what kind of benefits they bring, there is no doubt that code reviews are a tiresome task. You read software code and try to improve it, but, let’s be honest – if it works, don’t break it, right?

In this paper, the authors study open source communities and check how often reviewers actually agree on the code review score. They find that divergence is not rare – up to 37% of patch revisions receive divergent scores. From the paper: “How often do patches receive divergent scores? Results: Divergent review scores are not rare. Indeed, 15%–37% of the studied patch revisions that receive review scores of opposing polarity…”

They also study how the divergence actually influences the patches – are they integrated or not: “Patches are integrated more often than they are abandoned. For example, patches that elicit positive and negative scores of equal strength are eventually integrated on average 71% of the time. The order in which review scores appear correlates with the integration rate, which tends to increase if negative scores precede positive ones.”

Finally, they study when the discussions/disagreements happen and how many reviewers there actually are: “Patches that are eventually integrated involve one or two more reviewers than patches without divergent scores on average. Moreover, positive scores appear before negative scores in 70% of patches with divergent scores. Reviewers may feel pressured to critique such patches before integration (e.g., due to lazy consensus). Finally, divergence tends to arise early, with 75% of them occurring by the third (QT) or fourth (OPENSTACK) revision.”
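
Checking the divergence rate on one’s own project is a small exercise. A back-of-the-envelope sketch, assuming review data as (patch_id, score) pairs with Gerrit-style scores from -2 to +2 (a hypothetical data format, not the authors’ dataset):

```python
# Sketch: fraction of patches that receive scores of opposing polarity.
from collections import defaultdict

reviews = [(1, +2), (1, -1), (2, +1), (2, +2), (3, -2), (3, +1)]

scores = defaultdict(list)
for patch_id, score in reviews:
    scores[patch_id].append(score)

divergent = [p for p, s in scores.items() if min(s) < 0 < max(s)]
print(len(divergent) / len(scores))  # here: 2 of 3 patches diverge
```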

I think that these results say something about our community – that we tend to disagree, but integrate the code anyway. What does that mean?

It could mean two things, which IMHO are equally valid:

  1. The review comments do not really touch upon crucial aspects and are therefore deemed not so important (e.g., whether we call a variable weatherType or typeOfWeather…).
  2. The reviewers’ reputation makes it difficult to get some of the comments through, e.g. when a junior reviewer is calling for a complete overhaul of the architecture.

Either way – I think that the modern code review field is quite active these days and I hope that we can get something done about the speed and quality of these long and tiresome code review processes.

Testing machine learning systems…

Image by Comfreak from Pixabay

https://rdcu.be/caKuc

Today, everybody is talking about machine learning and AI. Some talk about deterministic models, some about statistical ones, some about Bayesian ones, and some talk about X-mas 🙂

My experience with machine learning is that we need to be very careful about what we actually do. If we do machine learning in the classical sense, e.g., neural network models or decision trees, then we need to make sure that we test the system alongside the data – never together with the data. We need to prepare a dataset that we use as a reference and that we know well.

Testing, in that scenario, becomes just like the testing we know. We can make the calculations manually, or step by step, and check whether the algorithm behaves as expected.

Testing the system is also not difficult if we follow the principles of good engineering: separation of concerns, modularization, and observability.

At runtime, we need to make sure that we add mechanisms that handle such aspects as out-of-distribution inputs and safety cages around ML algorithms.
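
A safety cage can be as simple as refusing to predict on inputs outside the ranges seen during training. A minimal sketch (illustrative only; real cages use much richer out-of-distribution tests):

```python
# Sketch: a minimal "safety cage" that gates inputs before the model.
import numpy as np

class SafetyCage:
    def __init__(self, X_train):
        X = np.asarray(X_train, dtype=float)
        self.low, self.high = X.min(axis=0), X.max(axis=0)

    def accepts(self, x):
        """True if every feature lies within the training range."""
        x = np.asarray(x, dtype=float)
        return bool(np.all((self.low <= x) & (x <= self.high)))

cage = SafetyCage([[0.1, 10], [0.5, 25], [0.9, 40]])
print(cage.accepts([0.4, 30]))  # True: within training bounds
print(cage.accepts([2.0, 30]))  # False: out of distribution, fall back
```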

Either way, I recommend this article to all ML designers and product managers who want to know the state of the art in this field from the perspective of testing. A good overview, nice reading!

Who and when needs automated code reviews…

https://rdcu.be/caKsW

Image by Arek Socha from Pixabay

Having worked with code reviews for a while, I strongly sympathize with the thesis put forward by the authors of this paper – code review tools are still far from being truly supportive for software developers.

Yes, they do automate the process and organize it. Yes, they help to assure that all code is reviewed, and yes, they do help to capture problems in the code and to spread knowledge.

However, what I expect from such a tool is help in finding problems in the code. I would like a tool that helps me, as a designer, get better: avoid mistakes, use cool programming constructs, make better designs. None of the tools I know help with that.

This paper shows that my understanding is similar to that of the developers studied in it. Documentation – automatic fixing and suggestions – was a top priority. Renaming suggestions, commenting, and explaining were some others.

Detection of duplicated code, architectural analysis, and similar things were also mentioned as expectations. I cannot agree more! These things are priority one – I would also expect them to be there.

Now, some are more difficult than others – like analyzing the architecture. Not a trivial task at all, because what is the architecture? Where are the patterns? How do we find them in the code? How much can we rely on the tools that research provides? We’re not there yet.

Duplicated code, however, is something we should be able to fix. I’ve looked at a repository that listed over 200 papers about code clones, duplicates, and what have you. Are all these papers good? Probably not, but even if 10% are good, we have 20 tools to try.
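
And the core idea really is approachable. A toy clone detector that hashes normalized line windows fits in a few lines (a sketch of the principle only; real clone detectors are far more sophisticated):

```python
# Toy clone detector: report normalized 3-line windows that occur in
# more than one place across the analyzed files.
from collections import defaultdict

def windows(text, size=3):
    lines = [" ".join(l.split()) for l in text.splitlines() if l.strip()]
    for i in range(len(lines) - size + 1):
        yield i + 1, "\n".join(lines[i:i + size])

def find_clones(files):
    seen = defaultdict(list)                 # window text -> locations
    for path, text in files.items():
        for line_no, chunk in windows(text):
            seen[chunk].append((path, line_no))
    return [locs for locs in seen.values() if len(locs) > 1]

files = {
    "a.py": "total = 0\nfor v in values:\n    total += v\nprint(total)",
    "b.py": "total = 0\nfor v in values:\n    total += v\nlog(total)",
}
print(find_clones(files))  # [[('a.py', 1), ('b.py', 1)]]
```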

I agree, we do have SonarQube and similar tools, but they are not integrated with code review. I cannot just link to a report from SQ when writing a review comment. I cannot add a review comment to a technical debt item detected in SQ. So, no integration then?

Maybe it’s just a Friday afternoon thing, but I hope that we can get better at making the last mile with our tools. I hope that we can address the expectations that developers have…