Technical debt from the perspective of practitioners – article review

Image by Steve Buissinne from Pixabay

https://link.springer.com/article/10.1007/s10664-020-09832-9

Technical debt is a great metaphor in software engineering. It gives software engineers a vocabulary to communicate how bad design can affect the product in the long run, and how much it can cost to fix these problems. The metaphor has been implemented in many static analysis tools, such as SonarQube.

Despite its communicative power, it is not clear whether the metaphor is actually useful. It has some dark sides that make it a bit tricky to use. For example, the “conversion” from a problem to a debt figure, e.g. from a lack of getter and setter methods to 0.5 days of debt, is one of these challenges. It is also not always clear which technical debt categories apply to which products.
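
To make the conversion problem concrete, here is a minimal sketch of such a mapping (my own toy illustration – the issue types and effort figures are invented, and this is not how SonarQube computes its remediation effort):

```python
# A toy illustration of the "problem -> debt" conversion, not SonarQube's real model.
# The issue types and effort values below are made up for the example.

REMEDIATION_EFFORT_HOURS = {
    "missing_accessor": 4.0,      # e.g. "lack of getters/setters" ~ half a working day
    "duplicated_block": 2.0,
    "overly_long_method": 1.5,
}

def estimate_debt_hours(issues):
    """Sum the assumed remediation effort for a list of detected issue types."""
    return sum(REMEDIATION_EFFORT_HOURS.get(issue, 0.0) for issue in issues)

if __name__ == "__main__":
    findings = ["missing_accessor", "duplicated_block", "duplicated_block"]
    print(f"Estimated technical debt: {estimate_debt_hours(findings):.1f} hours")
```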

What I like about this paper is that it presents a survey of technical debt. For example, it identifies the top causes of technical debt, such as:

  • deadlines,
  • inappropriate planning,
  • lack of knowledge, and
  • lack of well-defined process.

These challenges are present in most companies today, and the first two – deadlines and inappropriate planning – are often associated with start-ups and agile organizations. I recommend taking a closer look at the mind map in the paper (Fig. 5) to dive deeper into the causes.

Quote from the abstract: We identified a total of 78 causes and 66 effects, which confirm and also extend the current knowledge on causes and effects of TD. Then, we organized the identified set of causes and effects in probabilistic cause-effect diagrams. The proposed diagrams highlight the causes that can most contribute to the occurrence of TD as well as the most common effects that occur as a result of debt.

Finding lines of code that require review – my 100th blog post!

Image by skeeze from Pixabay

Working with continuous integration is an exciting new field. You get your code into the main branch directly. Well, that’s what the theory says. What you really get directly is feedback, at least from the automated checks for technical debt, testing and the like.

What you do not get quickly is the review of your code by your colleagues. In larger organizations, code reviews do not get prioritized, so they tend to slow down software development rather than speed it up!

In this paper, we set out to understand how to fix that. We used Gerrit to extract the lines of code that need review, instead of reviewing all of the lines. Here is a short video about it: https://play.gu.se/media/t/0_h7hx95d2

The abstract of the paper is included:

Code reviews are one of the first quality assurance tasks in continuous software integration and delivery. The goal of our work is to reduce the need for manual reviews by automatically identify which code fragments should be further reviewed manually. We conducted an action research study with two companies where we extracted code reviews and build machine learning classifiers (AdaBoost and Convolutional Neural Network — CNN). Our results show that the accuracy of recognizing code fragments that require manual review, measured with Matthews Correlation Coefficient, was 0.70 in the combination of our own feature extraction and CNN. We conclude that this way of combining automation with manual code reviews can improve the speed of reviews while providing organizations with the possibility to support knowledge transfer among the designers.
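
As a rough illustration of the overall idea (not the actual feature extraction or models from the paper), one can compute simple per-line features and train a classifier such as AdaBoost to flag lines that attracted review comments in the past. The features and training data below are invented for the sketch:

```python
# Sketch: flag code lines for manual review with a simple classifier.
# Features and labels are made up; the paper uses its own feature extraction
# and AdaBoost/CNN classifiers trained on real Gerrit review data.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import matthews_corrcoef

def line_features(line: str):
    """Very naive per-line features: length, indentation, count of digits."""
    stripped = line.rstrip("\n")
    return [
        len(stripped),
        len(stripped) - len(stripped.lstrip(" ")),   # indentation as a nesting proxy
        sum(ch.isdigit() for ch in stripped),        # crude "magic number" signal
    ]

# Toy training data: (line, was_commented_on_in_past_reviews)
history = [
    ("    if x == 42 and y == 17:", 1),
    ("    return result", 0),
    ("        value = value * 3600 * 24 * 7", 1),
    ("    logger.info('done')", 0),
]
X = [line_features(line) for line, _ in history]
y = [label for _, label in history]

clf = AdaBoostClassifier().fit(X, y)

new_lines = ["    total = price * 1.25", "    return total"]
flags = clf.predict([line_features(l) for l in new_lines])
print(list(zip(new_lines, flags)))                     # 1 = suggest manual review
print("MCC on training data:", matthews_corrcoef(y, clf.predict(X)))
```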

Recommending refactoring via commit message analysis

Image by annca from Pixabay

https://doi.org/10.1016/j.infsof.2020.106332

In the process of reviewing code, we can identify refactoring opportunities pretty easily. We read the code, try to understand it and provide comments. In the understanding phase we also get ideas about possible alternatives – why is this done this way?

Now, when writing the comments, we rarely have the time to refactor the code. In CI, the review happens when we commit the code to the main branch, and we expect that code to be delivered and used soon. So it is too late to refactor; we need to do it in the next iteration.

But the next iteration looks the same: we need to deliver new functions, not “gold plate” the existing code, deliver it to the main branch, and so on. When is the time for refactoring then? How do we document the opportunities and use them when we have a bit of time?

In this work, the authors look at commit messages and identify refactoring possibilities from them, complementing the static and dynamic analysis of code. The method presented in the paper is based on analysing code from open source projects, the refactorings applied to that code, and the QMOOD quality attributes related to these commits.

The following quote from the paper explains the gist of how the refactoring rationale is extracted:

Identifying refactoring rationale has two parts. The first part is the detection of the files that are refactored by developers in a commit. The second part is the identification of changes in the QMOOD quality attributes then comparing these changes with the information in the commit message.

For the first part, we used the GitHub API to identify the changed files in each commit. In the second part, we compared the QMOOD quality attribute values before and after the commit to capture the actual quality changes for each file. Once the changed files and quality attributes were identified, we checked if the developers intended to actually improve these files and quality attributes. In fact, we preprocessed the commit messages and we used the names of code elements in the changed files and the changed quality metrics as keywords to match with words in the commit message. Once the refactoring rationale is automatically detected using this procedure, we continue with the next step to find better refactoring recommendations that can fully meet the developer’s intentions and expectations. In case that no quality changes were identified at all then a warning will be generated to developers that the manually applied refactorings are not addressing the quality issues described in his commit message.
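
A much simplified sketch of that matching step could look as follows (the QMOOD values, file lists and keyword matching are placeholders of my own, not the authors’ implementation, which uses the GitHub API and real metric computations):

```python
# Simplified sketch of matching quality changes against a commit message.
# QMOOD values and file lists are hard-coded placeholders; the paper computes
# the QMOOD attributes before/after each commit and fetches changed files via
# the GitHub API.

QMOOD_ATTRIBUTES = ["reusability", "flexibility", "understandability",
                    "functionality", "extendibility", "effectiveness"]

def changed_attributes(before: dict, after: dict, eps: float = 1e-6):
    """Return the QMOOD attributes whose value changed between two snapshots."""
    return [a for a in QMOOD_ATTRIBUTES
            if abs(after.get(a, 0.0) - before.get(a, 0.0)) > eps]

def refactoring_rationale(commit_message, changed_files, before, after):
    """Match changed files and changed quality attributes against the commit message."""
    changed = changed_attributes(before, after)
    if not changed:
        return "warning: the refactoring did not change the measured quality attributes"
    message = commit_message.lower()
    mentioned_files = [f for f in changed_files
                       if f.split("/")[-1].split(".")[0].lower() in message]
    mentioned_attrs = [a for a in changed if a in message]
    return {"files": mentioned_files, "quality_attributes": mentioned_attrs}

print(refactoring_rationale(
    "Refactor OrderService to improve understandability and flexibility",
    changed_files=["src/main/java/shop/OrderService.java"],
    before={"understandability": 0.41, "flexibility": 0.30},
    after={"understandability": 0.55, "flexibility": 0.38},
))
```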

If a tool can automatically refactor our code – is it good or bad for us, programmers?

https://link.springer.com/article/10.1007/s10664-020-09826-7

Image by GimpWorkshop from Pixabay

Recently, I read an article in Empirical Software Engineering about automated code refactoring. I must admit that I refactor quite seldom. It is a tedious task and, for the software that I write, quite unnecessary. My software is often a set of scripts that solve a specific task, and then the key is to document it, not to refactor it. Good documentation helps me understand what I did in that code and how it works. Yes, I know it sounds like a cliché, but that’s how it is for me. I switch tasks so often that I forget what the code was doing.

Nevertheless, I recognize code that is nicely written, formatted and refactored. Therefore, I was on the lookout for a tool that could do something like that for me – suggest a refactoring that I could then implement.

So, this is a paper I found whose tool I would like to try out. The tool was evaluated with developers, through surveys and code comprehension tasks. Although the developers could sometimes recognize that the code had been refactored by a tool, they seemed to be happy with the result. So, I’m off to try out the tool :)

Abstract: Refactoring is a maintenance activity that aims to improve design quality while preserving the behavior of a system. Several (semi)automated approaches have been proposed to support developers in this maintenance activity, based on the correction of anti-patterns, which are “poor” solutions to recurring design problems. However, little quantitative evidence exists about the impact of automatically refactored code on program comprehension, and in which context automated refactoring can be as effective as manual refactoring. Leveraging RePOR, an automated refactoring approach based on partial order reduction techniques, we performed an empirical study to investigate whether automated refactoring code structure affects the understandability of systems during comprehension tasks. (1) We surveyed 80 developers, asking them to identify from a set of 20 refactoring changes if they were generated by developers or by a tool, and to rate the refactoring changes according to their design quality; (2) we asked 30 developers to complete code comprehension tasks on 10 systems that were refactored by either a freelancer or an automated refactoring tool. To make comparison fair, for a subset of refactoring actions that introduce new code entities, only synthetic identifiers were presented to practitioners. We measured developers’ performance using the NASA task load index for their effort, the time that they spent performing the tasks, and their percentages of correct answers. Our findings, despite current technology limitations, show that it is reasonable to expect a refactoring tools to match developer code. Indeed, results show that for 3 out of the 5 anti-pattern types studied, developers could not recognize the origin of the refactoring (i.e., whether it was performed by a human or an automatic tool). We also observed that developers do not prefer human refactorings over automated refactorings, except when refactoring Blob classes; and that there is no statistically significant difference between the impact on code understandability of human refactorings and automated refactorings. We conclude that automated refactorings can be as effective as manual refactorings. However, for complex anti-patterns types like the Blob, the perceived quality achieved by developers is slightly higher.

PHANTOM – finding well engineered software projects, fast…

https://link.springer.com/article/10.1007/s10664-020-09825-8

Image by 2427999 from Pixabay

I’ve worked with two great students – Peter and Joshua – who wanted to do something interesting. They developed a tool that could replicate a study by other researchers, but faster and with less data. We also managed to team up with Mirek from Poznan, who improved the classification algorithm and asked his colleagues for new, industrial data.

And this is the outcome – a tool that can connect to a git repository and recognise whether your project is well engineered or not. It helps companies to understand whether their teams are working in a structured manner or ad-hoc.

The tool also makes it possible to assess whether a specific repository is in need of maintenance or not.

Abstract:

Context: Within the field of Mining Software Repositories, there are numerous methods employed to filter datasets in order to avoid analysing low-quality projects. Unfortunately, the existing filtering methods have not kept up with the growth of existing data sources, such as GitHub, and researchers often rely on quick and dirty techniques to curate datasets.

Objective: The objective of this study is to develop a method capable of filtering large quantities of software projects in a resource-efficient way.

Method: This study follows the Design Science Research (DSR) methodology. The proposed method, PHANTOM, extracts five measures from Git logs. Each measure is transformed into a time-series, which is represented as a feature vector for clustering using the k-means algorithm.

Results: Using the ground truth from a previous study, PHANTOM was shown to be able to rediscover the ground truth on the training dataset, and was able to identify “engineered” projects with up to 0.87 Precision and 0.94 Recall on the validation dataset. PHANTOM downloaded and processed the metadata of 1,786,601 GitHub repositories in 21.5 days using a single personal computer, which is over 33% faster than the previous study which used a computer cluster of 200 nodes. The possibility of applying the method outside of the open-source community was investigated by curating 100 repositories owned by two companies.

Conclusions: It is possible to use an unsupervised approach to identify engineered projects. PHANTOM was shown to be competitive compared to the existing supervised approaches while reducing the hardware requirements by two orders of magnitude.
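
Based on the method description above, the core idea can be sketched roughly like this (the measures, the time-series summarisation and the toy repositories are simplified stand-ins, not PHANTOM’s actual feature extraction):

```python
# Rough sketch of the PHANTOM idea: summarise per-week activity from a git log
# into a fixed-length feature vector and cluster repositories with k-means.
# The single measure and the aggregation below are simplified stand-ins for
# PHANTOM's five measures and its time-series representation.
import numpy as np
from sklearn.cluster import KMeans

def time_series_features(weekly_counts):
    """Collapse a weekly time series (e.g. commits per week) into summary statistics."""
    ts = np.asarray(weekly_counts, dtype=float)
    return [ts.mean(), ts.std(), ts.max(), (ts > 0).mean()]  # level, variability, peak, activity ratio

# Toy data: weekly commit counts for four repositories (two steady, two ad hoc).
repos = {
    "repo_a": [12, 9, 15, 11, 13, 10, 14, 12],
    "repo_b": [8, 10, 9, 12, 11, 9, 10, 8],
    "repo_c": [0, 0, 25, 0, 0, 0, 1, 0],
    "repo_d": [1, 0, 0, 0, 30, 0, 0, 0],
}

X = np.array([time_series_features(ts) for ts in repos.values()])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(repos, labels)))   # repositories in the same cluster behave similarly
```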

What do elite software developers do in software projects?

https://dl.acm.org/doi/10.1145/3387111

Image by Jose B. Garcia Fernandez from Pixabay

A while back I read an article in ZDNet about Linus Torvalds, the creator of Linux, and his daily work. At the time of reading (about two years ago) he was still working on the code, but mostly on the design of the system, reviewing patches and supporting younger designers. I have also read a number of articles that stressed the importance of code reviews as a way of teaching younger designers about the product and the code base.

In this paper, I found that supporting younger designers is exactly what elite developers do a lot of. It seems that communication, organisation and support are the activities that elite developers find important. It is aligned with what we do at universities as well: the most accomplished professors work with students, showing them how to program and how to structure their code. It seems like a very good way of continuing your career – helping others become better.

I guess it’s time to change my wallpaper from “coding” to “teaching”….

Abstract: Open source developers, particularly the elite developers who own the administrative privileges for a project, maintain a diverse portfolio of contributing activities. They not only commit source code but also exert significant efforts on other communicative, organizational, and supportive activities. However, almost all prior research focuses on specific activities and fails to analyze elite developers’ activities in a comprehensive way. To bridge this gap, we conduct an empirical study with fine-grained event data from 20 large open source projects hosted on GITHUB. We investigate elite developers’ contributing activities and their impacts on project outcomes. Our analyses reveal three key findings: (1) elite developers participate in a variety of activities, of which technical contributions (e.g., coding) only account for a small proportion; (2) as the project grows, elite developers tend to put more effort into supportive and communicative activities and less effort into coding; and (3) elite developers’ efforts in nontechnical activities are negatively correlated with the project’s outcomes in terms of productivity and quality in general, except for a positive correlation with the bug fix rate (a quality indicator). These results provide an integrated view of elite developers’ activities and can inform an individual’s decision making about effort allocation, which could lead to improved project outcomes. The results also provide implications for supporting these elite developers.

Your code and AI – more than precision and recall!

Image by Daniel Hannah from Pixabay

Using machine learning and AI to improve coding is an important area of research. Together with colleagues, I work on these techniques to take them from open-source prototypes towards industrial quality.

There are two great tools that one can already use today. The first is a beta version of an add-in for Visual Studio that helps software engineers write code.

https://www.microsoft.com/en-us/ai/ai-lab-code-defect

Microsoft is very active in this area and has even released a set of tools that support the development of AI systems: https://www.microsoft.com/en-us/research/project/visual-studio-code-tools-ai/

Also:

https://techcommunity.microsoft.com/t5/educator-developer-blog/visual-studio-code-tools-for-ai-extension/ba-p/379420

What is great is that the tools are, naturally, freely available!

Another tool is DeepCode, which analyzes source code and provides suggestions to improve it – e.g. to apply a specific design pattern or refactoring.

https://www.deepcode.ai/

It is great that we get more and more tools and that AI engineering is maturing. We do not want precision and recall alone to steer our development. We want real testing and real systems. We also need to work with data quality to ensure that the systems are reliable.

The alternative is to use MCC, precision, recall and the F1-score to tell us how good a system is, but these numbers do not tell the whole story. They give no view of how well the system meets the requirements put on it. They allow us to compare different classifiers, but not systems.
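
To make this point concrete, here is a small example: the metrics below are trivial to compute for any classifier, yet none of them says anything about whether the surrounding system meets its requirements.

```python
# Classification metrics describe how well a classifier separates classes,
# not whether the system built around it fulfils its requirements.
from sklearn.metrics import precision_score, recall_score, f1_score, matthews_corrcoef

y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # toy ground truth
y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]   # toy predictions

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("MCC:      ", matthews_corrcoef(y_true, y_pred))
```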

I hope that we can focus more discussion on AI quality and not classification quality/accuracy.

Classifying code smells…

https://link.springer.com/article/10.1007/s11219-020-09498-y

Image by Comfreak from Pixabay

Code smells are quite interesting phenomena to study. They are not really defects, but they are not good code either. They exist, but people rarely want to admit to having them. There is also no consensus on how much effort it takes to remove them (or even whether they should be removed or just avoided).

In this paper, the authors study whether it is possible to use machine learning to find code smells. It turns out it is possible, and the accuracy is quite high (over 95%). The paper also shows that it is sometimes better to present several recommendations (e.g. two potential smells) rather than one – less accuracy is needed to make the recommendation, yet it still helps users narrow down their solution space.
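
The idea of presenting two candidates instead of one can be sketched as a generic top-k step (the classifier, metric features and smell labels below are placeholders, not the models evaluated in the paper):

```python
# Sketch: return the top-k most probable code smells instead of a single guess.
# The classifier, features and smell labels are placeholders for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SMELLS = ["god_class", "long_method", "feature_envy", "data_class"]

# Toy training data: a few metric vectors (e.g. LOC, number of methods, coupling) per smell.
X_train = np.array([[900, 45, 30], [850, 40, 25], [120, 1, 3], [110, 1, 4],
                    [200, 5, 20], [180, 6, 22], [300, 12, 2], [280, 10, 1]])
y_train = [0, 0, 1, 1, 2, 2, 3, 3]

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def top_k_smells(metrics, k=2):
    """Rank candidate smells by predicted probability and keep the k best."""
    probs = clf.predict_proba([metrics])[0]
    ranked = np.argsort(probs)[::-1][:k]
    return [(SMELLS[i], round(float(probs[i]), 2)) for i in ranked]

print(top_k_smells([820, 38, 28], k=2))   # two candidate smells instead of one
```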

Machine learning for source code suggestion, completion

Image by StockSnap from Pixabay

https://www.sciencedirect.com/science/article/pii/S0950584920300616

This is a great paper demonstrating the use of NLP techniques for source code completion. The approach uses recurrent networks and can reduce the vocabulary size compared to previous approaches.

As the authors say: “The CodeGRU introduces a novel approach which can correctly capture the source code context by leveraging the token type information.”

I like the approach because it can extract the information that is important for the analysis of source code – what kind of token is analysed and how it is used.
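
The token-type idea is easy to illustrate with Python’s standard tokenize module (this only shows what token type information looks like; CodeGRU uses its own tokenisation and representation):

```python
# Illustration of "token type information": every lexical token carries a type
# (name, number, operator, ...) in addition to its text. This sketch just uses
# Python's own tokenizer to show the idea.
import io
import tokenize

code = "total = price * quantity + 42\n"

for tok in tokenize.generate_tokens(io.StringIO(code).readline):
    print(f"{tokenize.tok_name[tok.type]:10} {tok.string!r}")
```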

Conclusions (quote from the abstract): “Our experiment confirms that the source code’s contextual information can be vital and can help improve the software language models. The extensive evaluation of CodeGRU shows that it outperforms the state-of-the-art models. The results further suggest that the proposed approach can help reduce the vocabulary size and is of practical use for software developers.”

I’m quite keen to try this approach in our work, to see if we can use it to improve the quality of source code.

Developing recommender systems – a framework which may just be the one (for us)…

https://rdcu.be/b3wgZ

Image by Gerd Altmann from Pixabay 

Creating recommendation systems is a tricky task. We need to add the temporal dimension to the data. In particular, we need to capture what was recommended to a specific user before and how the user reacted to it. We also need to capture how the users and the data evolve.
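
A minimal way to capture that temporal information (my own sketch, not RectoLibry’s data model) is to log every recommendation together with a timestamp and the user’s reaction:

```python
# Minimal sketch of keeping the temporal dimension of recommendations:
# what was recommended, when, and how the user reacted. Not RectoLibry's model.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class RecommendationEvent:
    user_id: str
    item_id: str
    recommended_at: datetime
    reaction: str = "ignored"          # e.g. "accepted", "dismissed", "ignored"

@dataclass
class RecommendationLog:
    events: List[RecommendationEvent] = field(default_factory=list)

    def record(self, user_id: str, item_id: str, reaction: str = "ignored"):
        self.events.append(RecommendationEvent(user_id, item_id, datetime.now(), reaction))

    def history(self, user_id: str):
        """Everything recommended to this user so far, oldest first."""
        return sorted((e for e in self.events if e.user_id == user_id),
                      key=lambda e: e.recommended_at)

log = RecommendationLog()
log.record("alice", "paper-123", reaction="accepted")
log.record("alice", "paper-456")
print([(e.item_id, e.reaction) for e in log.history("alice")])
```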

In this paper, the authors present a framework, RectoLibry, which helps to construct these kinds of systems. The framework covers both the development of the recommendations and their deployment.

The system is based on designing an ontology (yes, my good old friend, used since before Web 2.0, even in my own research: https://link.springer.com/chapter/10.1007/978-3-540-87875-9_60 , https://link.springer.com/chapter/10.1007/3-540-46102-7_20 ).

The ontology describes the relationships in the recommendation domain and provides support for the selections and feedback loops.

I recommend taking a look at the paper and the framework if you want to build a recommendation system. I will, when looking at the assignments from the software measurement PhD course.