Defect predictions – still valid in 2023…

Image by WikiImages from Pixabay

Industrial applications of software defect prediction using machine learning: A business-driven systematic literature review – ScienceDirect

Wow, when I look at the last entry, it was two months ago. Well, somewhere between the embedded systems course for my students, a delegation trip to Silicon Valley, and all kinds of other challenges, time seemed to slip through my fingers.

Well, nevertheless, I would like to highlight the article from our colleagues who specialize in defect prediction and systematic reviews. The article describes how companies use defect prediction models and when they do so.

It’s a nice Sunday read for those of you who are interested in the topic. It is a good source of best practices as well as a solid starting point for finding datasets for defect prediction.

Enjoy your reading!

GitHub Co-pilot and code generation

So, this week’s post is my reflection on the seminar that we hosted last week (the recording is above). It was an eye-opener for me in a few respects.

The first was the question of ownership. Since AI is not a legal subject, it cannot really own anything. I know, AI and computational models are not the same thing, but for the sake of the argument let’s assume that they are. At the end of the day, it is still a human being who presses the button and generates new source code, comments, or what have you. So, the responsibility is still very much on us when we use these tools.

The second was the question about the community and why we have open-source software. We certainly do not publish our source code openly for someone else to profit from it. Attribution and recognition are very important (if not the most important) aspects of any open-source community. So, using that code to create commercial models requires at least some attribution. Why not show which code was used to train these models and show how good the communities really are?

Finally, my main point still stands – we should use these models to become better. They make us so much more productive that we should not go back to the old ways of writing software. Providing suggestions and ideas to programmers can make our software better, shipped faster and potentially more reliable.

However, we need to make sure that we change the way we attribute the software. Personally, I will start adding “co-created by GitHub Co-pilot and the OSS communities” to my work when I use the tool. Maybe you can do that too? At least to give some attribution back to our countless colleagues who deserve it…

Christmas post from ChatGPT

Image by Manfred Steger from Pixabay

I think that almost everyone in the field of SE has heard about the new bot – ChatGPT from OpenAI. The bot is capable of writing programs, holding conversations, and even creating virtual machines.

So, this year I wanted to check what ChatGPT can do. Here is my conversation with it:

M: What are the most important machine learning papers from 2022?

C: I’m sorry, but I don’t have access to papers or other information from 2022 because my knowledge was cut off in 2021. I can tell you about some important machine learning papers from the past, or provide general information about machine learning and how it has evolved over time. Is there something specific you would like to know about machine learning?

M: Ok, how about the most important papers from 2021?

C: I’m sorry, but I am not able to browse the internet and do not have access to information about specific papers or other developments that have occurred since my knowledge cutoff in 2021. However, there are many resources available online that can help you learn about the latest developments in the field of machine learning. Some places you might look for information about the most important machine learning papers from 2021 include scientific journals and conference proceedings, online news sources, and blogs or websites that cover developments in the field of artificial intelligence and machine learning. Additionally, you can check out the proceedings of major machine learning conferences, such as the International Conference on Machine Learning (ICML) and the Conference on Neural Information Processing Systems (NeurIPS), which are typically held each year and showcase the latest research in the field.

M: What is your prediction for machine learning field for 2023?

C: One trend that has been especially prominent in the field of machine learning in recent years is the increasing use of deep learning techniques, which have led to significant advances in a wide range of applications, including image and speech recognition, natural language processing, and autonomous systems. It is likely that this trend will continue in the coming years, as researchers continue to develop and refine deep learning algorithms and apply them to new and challenging problems.

Another trend that is likely to continue in the field of machine learning is the increasing importance of interpretability and explainability. As machine learning models become more sophisticated and are applied to more complex and sensitive tasks, it will be important to understand how they make their predictions and decisions, and to be able to explain these to users and stakeholders. Researchers and practitioners will therefore continue to focus on developing techniques for interpreting and explaining the behavior of machine learning models.

Finally, the field of machine learning will continue to be driven by the availability of large amounts of data and the increasing power and efficiency of computing hardware. As data continues to be generated at an ever-increasing rate, and as hardware capabilities continue to improve, it is likely that machine learning techniques will continue to advance and be applied to a wide range of problems and domains.

M: Ok, then write a program that draws a Christmas tree.

C:

Here is a Python program that uses ASCII art to draw a simple Christmas tree:

def draw_tree():
  print("   /\\")
  print("  /  \\")
  print(" /    \\")
  print("/      \\")
  print("--------")

draw_tree()

This code will output the following tree to the console:

   /\
  /  \
 /    \
/      \
--------

You can customize the appearance of the tree by modifying the print statements. For example, you could add more branches to the tree by adding additional print statements, or you could use different characters to create a more detailed or decorative tree.

Here is an example of a more detailed tree that uses a combination of ASCII art and Unicode characters to create a more realistic-looking tree:

def draw_tree():
  print("    /\\")
  print("   /  \\")
  print("  /    \\")
  print(" /      \\")
  print("/        \\")
  print("----------")
  print("   |||")
  print("   |||")
  print("   |||")

draw_tree()

Summary: well, the conversation is not very deep, and not great either, which makes me sleep better at night, feeling that we (software engineers) are still needed in 2023. At least for the time being.

Have a wonderful holiday everyone!

How can AI see programming code… (article highlight)

Image by Willi Heidelbach from Pixabay

A systematic mapping study of source code representation for deep learning in software engineering – Samoaa – 2022 – IET Software – Wiley Online Library

Understanding program code is an important research topic in the area of programming language models. I’ve written before that there are ca. 50 programming language models that we can use in software engineering. OK, not all of them are equivalent and they are specific to their tasks, but they are available, so we can use and customize them.

Now, whether 50 models is a lot or not is debatable. Compared to natural language models, it is a small number. Even compared to the number of programming languages, it is not impressive. However, how many languages are widely used – 10-15? Java, C, C++, Python, JavaScript, Rust, Go, and their derivatives are the most common ones.

This article is a study done by our colleagues from the department. It’s too long to quote in detail, but there are a few things that I like. First, it gives a good overview of the types of source code representations used by these models:

  1. Token-based representation: the program code is treated essentially as a sequence of tokens/words; some can have a special meaning, but they are still just words (I’ve written about this before, and even worked with it: GitHub – mochodek/py-ccflex: py-ccflex – Python Flexible Code Classifier); a minimal tokenization sketch follows this list
  2. Tree-based representation: the program code is seen from the perspective of its abstract syntax tree (AST); an example is the code2vec model: code2vec
  3. Graph-based representation: the program code is seen as a directed graph, e.g., a control flow graph
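
To make the first category concrete, here is a minimal sketch of a token-based representation using only Python’s standard library (the snippet, variable names, and filtering choices are my own illustration, not taken from the paper); the tree- and graph-based views could be built analogously, e.g., with the ast module or a control-flow-graph builder:

import io
import tokenize

code = "def add(a, b):\n    return a + b\n"

# Flatten the source into a sequence of (token_type, token_string) pairs -
# this is essentially the view that token-based models consume. Real models
# add sub-word splitting, vocabularies, embeddings, etc. on top of this.
tokens = [
    (tokenize.tok_name[tok.type], tok.string)
    for tok in tokenize.generate_tokens(io.StringIO(code).readline)
    if tok.type not in (tokenize.NEWLINE, tokenize.NL, tokenize.ENDMARKER)
]

print(tokens)
# e.g. [('NAME', 'def'), ('NAME', 'add'), ('OP', '('), ('NAME', 'a'), ...]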

Although I like this classification, I see that it misses one of the most prominent and popular categories – the NLP-based models. These treat the program code as a set of sentences that carry some meaning. It is a derivative of the token-based representation, but it is much more than that. Codex from OpenAI is an example of such a model.

Nevertheless, this study provides a very interesting set of examples of models and their applications. I sincerely suggest taking a look at this paper to understand how the models work and where they are used best.

Inline tests – do we really need more testing?

Image by Gordon Johnson from Pixabay

Inline Tests (pengyunie.github.io)

Some of you may not know, but I started my career as a software tester, so I’ve done my share of defect tracking and fixing. Although it was a while ago (well, over 20 years ago to be frank), I still remember a thing or two. I guess it is like riding a bike. One thing that I remember is that we did not really need more tests, but smarter testing.

This paper, nevertheless, proposes a new type of testing – inline testing – which is meant to replace using printf(…) in code for debugging. Instead of printing the values of variables, we can use the new framework to create small inline tests and execute them. The idea is simple and contributes to the maturity of our discipline.
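
To make the idea concrete, here is a toy stand-in that I wrote myself – it is not the actual API of the framework from the paper – showing how a small check placed right next to the code it exercises can replace an ad-hoc print():

def inline_check(description, actual, expected):
    # Minimal inline assertion; a real framework would collect and report these.
    assert actual == expected, f"inline test failed: {description}"

def parse_version(tag):
    parts = tag.lstrip("v").split(".")   # e.g. "v1.2.3" -> ["1", "2", "3"]
    return [int(p) for p in parts]

# Placed next to the code it exercises, instead of print(parse_version("v1.2.3")):
inline_check("strip leading v and split", parse_version("v1.2.3"), [1, 2, 3])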

By using inline tests, we can track the progress of our software development and the evolution of its quality. Since we can generate reports and use asserts, we can communicate our progress to quality management in a much better way.

I need to test this framework, especially since it works with Python, my new language of choice…

CoditT5: Pretraining for Source Code and Natural Language Editing

CoditT5: Pretraining for Source Code and Natural Language Editing (pengyunie.github.io)

I’ve written about programming language models before, and it is no secret that I am very much into this topic. I like the way in which software engineering evolves – we become a more mature discipline and our tools become smarter by the hour (at least that’s how it feels).

This paper presents a new language model that is capable of making code edits, i.e., things such as bug fixes. The model is essentially a transformer with an architecture that has been published before. However, the strength of this model lies in the way it is trained. It uses so-called edit plans to train the model to change the input code rather than to complete it.

The difference may not sound like much, but it is significant. The existing models are trained to complete code sequences and are therefore very good at generating code. However, when given code that does not require any generation, they tend to copy the input sequence to the output sequence. Not very useful, that is.
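
A small, purely illustrative sketch of the difference (the data format below is my own simplification, not CoditT5’s actual training format):

# The buggy input and its fix.
buggy = "if (x = null) return;"
fixed = "if (x == null) return;"

# Completion-style training target: re-generate the whole corrected sequence.
completion_target = fixed

# Edit-style training target: describe only the change to apply to the input.
edit_plan_target = [("replace", "x = null", "x == null")]

def apply_edits(source, edits):
    # Apply a list of (op, old, new) edits to the source string.
    for op, old, new in edits:
        if op == "replace":
            source = source.replace(old, new)
    return source

assert apply_edits(buggy, edit_plan_target) == fixed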

Thanks to this new way of training, the model is able to edit code, remove defects, address review comments, and so on. Yes, address review comments – this is not a joke. I sincerely believe that we can use this in practice in our tools one day.

At the moment, you can find the code for this model here: GitHub – EngineeringSoftware/CoditT5: Code and data for “CoditT5: Pretraining for Source Code and Natural Language Editing” in ASE 2022.

Evaluating ML pipelines for real – spoiler alert: another pipeline (article review)

Evaluating classifiers in SE research: the ECSER pipeline and two replication studies (springer.com)

Image by paula bassi from Pixabay

One of the most prominent problems with using research results in practice is the lack of replication packages, but it is far from the only one. Another, perhaps equally important, problem is that studies report performance in many different ways.

Since I have had the chance to work with colleagues in medicine, I have gotten to know their publication culture. It is more advanced than ours (software engineering), but that is not the point. The main point is that they actually have guidelines on how to report ML studies. Here is an example of such a guideline: Clinician checklist for assessing suitability of machine learning applications in healthcare – PMC (nih.gov)

The paper that I wish to bring up tries to address a similar aspect of software engineering. It reviews existing studies that provide recommendations, e.g., to report confusion matrices or statistical significance tests. It then reviews some of the papers published in respected venues and provides actionable guidelines on how to evaluate the performance of machine learning models.
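
As a minimal illustration of two of those recommendations (this is my own sketch with made-up numbers, not the ECSER pipeline itself): report the raw confusion matrix, and back up comparisons between classifiers with a statistical test rather than a single point estimate.

import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import confusion_matrix

# Made-up predictions from a defect-prediction classifier on ten modules.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

# Recommendation 1: report the confusion matrix, not only accuracy or F1.
print(confusion_matrix(y_true, y_pred))

# Recommendation 2: compare classifiers over several folds with a statistical test.
scores_a = [0.81, 0.78, 0.84, 0.80, 0.79]   # per-fold F1 of classifier A (made up)
scores_b = [0.75, 0.77, 0.76, 0.78, 0.74]   # per-fold F1 of classifier B (made up)
print(wilcoxon(scores_a, scores_b))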

50 Language/Code models, let’s talk…

Image by pencil parker from Pixabay

As you have probably noticed, I’ve been into language models for code analysis, design, and recognition. It’s a great way of spending your research time, as it gives you the opportunity to understand how we program and how to model that. In my case, this is a great complement to the empirical software engineering research that I do otherwise.

Recently I got the feeling that I look into more and more of these models, all of them bearing a certain similarity to Google’s BERT model or Facebook’s TransCoder. So I set off to do a short review of the papers that actually talk about code models or, as they are often called, programming language models. I started from the paper describing CodeBERT ( [2002.08155] CodeBERT: A Pre-Trained Model for Programming and Natural Languages (arxiv.org) ) and looked at the 500 citations that the model has. The list below contains only the models that are built on top of CodeBERT. There are also models built on AlphaCode or GitHub Copilot, but I leave those for another occasion.

I must admit that I did not read all of these papers and did not look at all of these models – far from it, I only looked at some of them. My conclusion is that we have a lot of models, but the quality of the results varies a lot. The best models provide good results in ca. 20% of cases. AlphaCode is an example of such a model: fantastic, but not super-accurate all the time. Since the model is used for highly competitive programming tasks, 20% is actually very impressive – I can hardly claim that I would do better in these programming competitions, so I’m not criticizing.

The best model I’ve seen so far, however, is GitHub Copilot, which is by far the best code-generating model the world has seen. Well, there may be models that the world has not seen, but then they do not count. If you would like to see a preview of how I use it (part I), you can take a look at this video:

I sincerely hope that you find this list useful and that you can help me to keep it updated – drop me an e-mail about the list if you want to:

  1. AlphaCode: https://www.deepmind.com/blog/competitive-programming-with-alphacode
  2. TransCoder: https://github.com/facebookresearch/TransCoder
  3. CodeT5: https://arxiv.org/pdf/2109.00859 
  4. CoditT5: https://arxiv.org/pdf/2208.05446 
  5. ProphetNet: https://arxiv.org/pdf/2104.08006 
  6. CoTexT: https://arxiv.org/pdf/2105.08645 
  7. Commit2vec: https://arxiv.org/pdf/1911.07605 
  8. CoreGen: https://www.sciencedirect.com/science/article/pii/S092523122100792X  
  9. SyncoBERT: https://arxiv.org/pdf/2108.04556 
  10. TreeBERT: https://proceedings.mlr.press/v161/jiang21a/jiang21a.pdf 
  11. FastSpec: https://ieeexplore.ieee.org/iel7/9581154/9581061/09581258.pdf 
  12. CVEFixes: https://dl.acm.org/doi/pdf/10.1145/3475960.3475985 
  13. CodeNet: https://arxiv.org/pdf/2105.12655
  14. Graph4Code: https://www.researchgate.net/profile/Jamie-Mccusker-2/publication/339445570_Graph4Code_A_Machine_Interpretable_Knowledge_Graph_for_Code/links/5fd2a29a45851568d154cfaa/Graph4Code-A-Machine-Interpretable-Knowledge-Graph-for-Code.pdf 
  15. DeGraphCE: https://dl.acm.org/doi/pdf/10.1145/3546066 
  16. VELVET: https://ieeexplore.ieee.org/iel7/9825713/9825693/09825786.pdf
  17. Code2Vec: https://uwspace.uwaterloo.ca/bitstream/handle/10012/15862/Arumugam_Lakshmanan.pdf?sequence=9&isAllowed=y 
  18. MulCode: https://ieeexplore.ieee.org/iel7/9425868/9425874/09426045.pdf 
  19. Flakify: https://ieeexplore.ieee.org/iel7/32/4359463/09866550.pdf 
  20. CoDesc: https://arxiv.org/pdf/2105.14220 
  21. NatGen: https://arxiv.org/pdf/2206.07585 
  22. Coctail: https://arxiv.org/pdf/2106.05345 
  23. MergeBERT: https://arxiv.org/pdf/2109.00084 
  24. SPTCode: https://dl.acm.org/doi/pdf/10.1145/3510003.3510096 
  25. InCoder: https://arxiv.org/pdf/2204.05999 
  26. JavaBERT: https://ieeexplore.ieee.org/iel7/9680270/9679822/09680322.pdf 
  27. BERT2Code: https://arxiv.org/pdf/2104.08017 
  28. NeuralCC: https://arxiv.org/pdf/2012.03225 
  29. LineVD: https://arxiv.org/pdf/2203.05181 
  30. GraphCode2Vec: https://arxiv.org/pdf/2112.01218 
  31. ASTBERT: https://arxiv.org/pdf/2201.07984 
  32. CodeRL: https://arxiv.org/pdf/2207.01780 
  33. CV4Code: https://arxiv.org/pdf/2205.08585 
  34. NaturalCC: https://xcodemind.github.io/papers/icse22_naturalcc_camera_submitted.pdf 
  35. StructCode: https://arxiv.org/pdf/2206.05239   
  36. VulBERT: https://arxiv.org/pdf/2205.12424 
  37. CodeMVP: https://arxiv.org/pdf/2205.02029 
  38. miBERT: https://ieeexplore.ieee.org/iel7/9787917/9787918/09787973.pdf?casa_token=rPNbu-k9Gh4AAAAA:3lkZVyUjnDP4Sp1UmmO9eVftsRaf1zAuw1YhHQogsyDBE2Y7992gBlhPb9jKVcI-5Q8tTv2JEyQ 
  39. LineVUL: https://www.researchgate.net/profile/Chakkrit-Tantithamthavorn/publication/359402890_LineVul_A_Transformer-based_Line-Level_Vulnerability_Prediction/links/623ee3d48068956f3c4cbede/LineVul-A-Transformer-based-Line-Level-Vulnerability-Prediction.pdf 
  40. CommitBART: https://arxiv.org/pdf/2208.08100 
  41. GAPGen: https://arxiv.org/pdf/2201.08810 
  42. El-CodeBERT: https://dl.acm.org/doi/pdf/10.1145/3545258.3545260?casa_token=DNyXQpkP69MAAAAA:y2iJC3RliEh7yJ6SzRpRRKrzPn2Q6w25vpm5vpoN0TksDh_SbmVfa_8JcDxvVN8FydOL_vTJqH-6OA 
  43. COCLUBERT: https://ieeexplore.ieee.org/iel7/9679834/9679948/09680081.pdf?casa_token=FtrqlHTmm74AAAAA:kkMyRsMl9xqPQQSBTRd6vFD-2-DyVSomYBYqm8u8aKs7B0_rkYYfL_OLVmOHgzn1-vqMF6W7pM8 
  44. Xcode: https://dl.acm.org/doi/pdf/10.1145/3506696?casa_token=5H8iW3e2GlYAAAAA:m2QA-DXSk5LZYazFxDPEVfLZcYREqDomXNg5YmkR-rPllHD37Qd8eLw_SCu6rbhNHZJ2Od24dvJt_Q 
  45. CobolBERT: https://arxiv.org/pdf/2201.09448 
  46. SiamBERT: https://melqkiades.github.io/files/download/papers/siambert-sais-2022.pdf 
  47. CodeReviewer: https://arxiv.org/pdf/2203.09095 
  48. CodeBERT-nt: https://arxiv.org/pdf/2208.06042 
  49. BashExplainer: https://arxiv.org/pdf/2206.13325

So, you want to automate your security assessment (beyond pentesting)…

Image by Darwin Laganzon from Pixabay

Automatic Security Assessment of GitHub Actions Workflows (arxiv.org)

After my last post, and the visit to the workshop at MDU, I realized that there are a few tools that can already be used automatically today. This paper presents one of them.

What is interesting about this tool is that it works on GitHub Actions workflows, so it is compatible with many modern CI/CD pipelines. The tool analyzes workflows and looks for security weaknesses – for example, whether you keep sensitive information (secrets) in plain text in a file used by the workflow, or whether the workflow enforces the “least privilege” principle.
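
As a rough illustration of the first kind of check (a toy sketch of my own, not the actual implementation of the tool), one could scan a workflow file for values that look like hard-coded credentials instead of references to the encrypted secrets context:

import re
import sys

# Lines that assign something secret-looking to a literal value rather than
# referencing GitHub's encrypted secrets context (${{ secrets.NAME }}).
SUSPICIOUS = re.compile(r"(password|token|api[_-]?key)\s*:\s*\S+", re.IGNORECASE)

def scan_workflow(path):
    findings = []
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            if SUSPICIOUS.search(line) and "${{ secrets." not in line:
                findings.append((lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for lineno, line in scan_workflow(sys.argv[1]):
        print(f"possible plaintext secret at line {lineno}: {line}")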

The implementation of the tool is OSS and can be found on GitHub here: Mobile-IoT-Security-Lab/GHAST: GitHub Actions Security Tester

I need to test it as it looks very interesting. Maybe I can use this tool on some of the company’s workflows to test their exploitability score?

What are code reviews really good for?

Visualization of the source code of one module from the Cloudera projects. The embeddings are taken from our team’s neural network; t-SNE is a dimensionality-reduction technique widely used for visualization in bioinformatics.

Concerns identified in code review: A fine-grained, faceted classification – ScienceDirect

Code reviews are time consuming. And effort intensive. And boring. And needed. Depending on whom we ask, we get one of the above answers (well, 80% of the time). The reality is that code reviews are not the most productive activity. Reading code and looking for defects is fine when we do it once, but when we need to do it constantly during continuous integration, the story changes. It becomes like studying for an exam or doing homework – we do everything else to postpone it. Then someone waits longer, or the code quality suffers.

There has been a lot of work done to make this activity more fun – gamification, automated support, using machine learning to filter out the code that we can check automatically – just to name a few. As far as I know, however, there has not been much work on understanding what kinds of problems code reviews actually find.

In this article, the authors address that very question. Admittedly, they only analyzed 7 OSS projects, but their results are still interesting: “We identified 116 defect types that we grouped into 15 groups to create a defect classification. Additionally, 38% of these defects could be automatically detected accurately.”

So, that basically means that 38% of the defects could be identified by using testing or static analysis (or some other fancy automation technique). This figure summarizes their results (link to the figure on ScienceDirect): https://ars.els-cdn.com/content/image/1-s2.0-S0950584922001653-gr5_lrg.jpg

So, what are code reviews good for? Here is their list:

  • threads,
  • header comments,
  • errors, warnings and logging,
  • test cases,
  • annotations,
  • performance,
  • identifier naming,
  • modifiers,
  • comments,
  • javadoc,
  • design,
  • implementation, and
  • logic and functionality

The list is sorted from the least frequent to the most frequent category – so logic and functionality is the category for which code reviews are the most useful. I also need to say that the frequencies are not very high – threading had only 1 detected concern, while logic and functionality had 57. So, you know, it could be more, given how much time is spent on code reviews. I guess that is what quality costs nowadays, even though there is no hard data on this.