I’ve written about ways of assessing how good software is. One of the modern approaches, which I talked about before, is the use of A/B testing and online experiments. Providing users with different versions of features/systems/use cases allows a company to understand which of the options gets the best response from the users.
However, there are a number of challenges with this approach – the most prominent being the potential existence of confounding factors. Even if the results show a positive/negative response, we cannot really be sure that the response is not caused by something else (for example, users being tired, changes in the environment, etc.).
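For completeness, the basic comparison behind such an experiment is straightforward – here is a minimal sketch (my own, with made-up numbers) of comparing the response rates of two variants; it of course says nothing about the confounding factors mentioned above:

```python
import math


def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Compare the response rates of variants A and B with a two-proportion z-test."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p_value


# Made-up numbers: 480 of 4000 users responded to variant A, 520 of 4000 to variant B.
print(two_proportion_ztest(480, 4000, 520, 4000))
```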
After using GitHub, both as a user and as a researcher, I have sometimes wondered whether the star system is actually the right one. I wondered whether we should use a sort of A/B testing system where we could check how often people actually access certain repositories.
In this paper, the authors take a look at different ways of assessing the popularity of repositories. The results show that regardless of the metric, the popular repositories are popular – i.e. popularity does not depend on the metric chosen.
Popularity metrics studied (a small collection sketch follows the list):
Total number of downloads of the package
Number of projects dependent on the package
Number of repositories dependent on the package
Source rank of the package
Number of forks
Number of watchers
Number of contributors
Number of stars
Number of open issues
Total number of tags
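As a small illustration (my own sketch, not part of the paper), a few of these metrics can be pulled directly from the GitHub REST API; downloads, dependents and SourceRank come from package registries and services such as Libraries.io and are not covered here:

```python
import requests  # assumption: the requests package is available


def popularity_snapshot(owner, repo, token=None):
    """Fetch a few of the popularity metrics listed above for one repository."""
    headers = {"Authorization": f"token {token}"} if token else {}
    url = f"https://api.github.com/repos/{owner}/{repo}"
    data = requests.get(url, headers=headers, timeout=10).json()
    return {
        "stars": data.get("stargazers_count"),
        "forks": data.get("forks_count"),
        "watchers": data.get("subscribers_count"),
        "open_issues": data.get("open_issues_count"),
    }


print(popularity_snapshot("torvalds", "linux"))
```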
The actual analysis is quite interesting, so I recommend taking a look at the paper directly.
Working with software requirements and metrics is an important part of research in modern software companies. Although many companies are Agile or post-Agile, claiming that they do not have requirements, they still capture user needs in textual form – for example, as user stories, epics, or use cases.
This paper is an interesting view on software requirements quality assessment. Instead of just calculating metrics and creating quality models, the authors use machine learning to mimic the way in which experts judge what is a good requirement and what is not. They use several quality functions to distinguish between good and bad requirements. Using multiple functions in a multidimensional space makes it possible to select groups of requirements that are separated by the other class – the figures in the paper show how this works in practice.
The gist of the paper is actually best presented in the introduction (quote): “Summing up, we can compute a set of quantitative metrics of textual requirements, and through them, we can assess the quality of requirements. However, the risk of this approach is to build assessment methods and tools that are both arbitrary in the parameterization of metrics and rigid in the combination of metrics to evaluate the different properties. This is why we propose in this work to develop a flexible assessment method that can be adapted to different contexts, with a high degree of automation. The method consists basically in the emulation of the experts’ judgment on quality through artificial intelligence techniques: first, obtain the expert’s implicit quality function through machine learning, and, second, apply this function to automatically assess the quality of textual requirements.
Our approach to emulate the experts’ judgment, as explained later in detail, is based on well-known machine learning techniques: we have a computer tool learn from a previous human-made classification of requirements according to their quality. Therefore, our work’s intent is not to improve machine learning techniques, but rather to devise a novel application to the field of requirements quality assessment.”
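To make this more tangible, here is a minimal sketch of that workflow – my own illustration under simplified assumptions, not the authors’ implementation: a few hypothetical quantitative metrics are computed per requirement, and a standard classifier learns the experts’ labels.

```python
# Minimal sketch: emulate expert judgment of requirement quality with a classifier.
# The metrics and the tiny training set below are hypothetical illustrations.
from sklearn.ensemble import RandomForestClassifier


def metrics(req):
    """Compute a few simple quantitative metrics of a textual requirement."""
    words = req.split()
    vague = {"fast", "easy", "user-friendly", "appropriate", "etc"}
    return [
        len(words),                                          # size of the requirement
        sum(w.lower().strip(",.") in vague for w in words),  # count of vague terms
        req.count(","),                                      # rough complexity proxy
    ]


requirements = [
    "The system shall respond to a search query within 2 seconds.",
    "The system should be fast and easy to use, etc.",
]
expert_labels = [1, 0]  # 1 = judged good by the experts, 0 = judged bad

clf = RandomForestClassifier(random_state=0)
clf.fit([metrics(r) for r in requirements], expert_labels)

# Apply the learned quality function to a new requirement.
print(clf.predict([metrics("The report shall be exported as PDF within 5 seconds.")]))
```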
I strongly recommend reading the paper as it provides very good methods for working with requirements quality in many modern organisations.
Deep learning models are often designed, trained and tested in Python. It is a language with a nice structure, quite straightforward syntax and a lot of libraries. However, very few tutorials about deep learning (or Python programming tutorials in general) discuss the quality of the code, e.g. its modularization, encapsulation, or naming consistency.
As a result, a lot of machine learning code written in Python is hard to read and hard to grasp. Even when used in Jupyter notebooks, the code is often not really commented.
The study behind the link above supports my long-standing gut feeling about this. The findings show that (quoting the abstract): “First, long lambda expression, long ternary conditional expression, and complex container comprehension smells are frequently found in deep learning projects. That is, deep learning code involves more complex or longer expressions than the traditional code does. Second, the number of code smells increases across the releases of deep learning applications. Third, we found that there is a co-existence between code smells and software bugs in the studied deep learning code, which confirms our conjecture on the degraded code quality of deep learning applications.”
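As an illustration of the container comprehension smell (my own example, not taken from the paper), compare a one-line comprehension that mixes filtering, a ternary conditional and an aggregation with the same logic unrolled into a plain loop:

```python
results = {"model_a": [0.91, 0.87, 0.93], "model_b": [], "baseline": [0.70, 0.71]}
excluded = {"baseline"}

# Smell: a complex container comprehension combining filtering, a ternary
# conditional and an aggregation in a single expression.
mean_scores = {name: (round(sum(vals) / len(vals), 2) if vals else 0.0)
               for name, vals in results.items() if name not in excluded}

# The same logic unrolled: longer, but each step can be read and commented.
mean_scores = {}
for name, vals in results.items():
    if name in excluded:
        continue
    mean_scores[name] = round(sum(vals) / len(vals), 2) if vals else 0.0

print(mean_scores)  # {'model_a': 0.9, 'model_b': 0.0}
```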
The paper is an experimental validation of whether requirement diagrams speed up the understanding of requirement specifications and whether they increase or decrease comprehension. The results show that comprehension increases while there is no change in reading time.
Darko Durisic has done interesting work on the evolution of industrial-class meta-models. The work has been accepted as a full paper at the SEAA (Software Engineering and Advanced Applications) Euromicro Conference.
Title: Evolution of Long-Term Industrial Meta-Models – A Case Study
Abstract: Meta-models in software engineering are used to define properties of models. Therefore the evolution of the meta-models influences the evolution of the models and the software instantiated from them. The evolution of the meta-models is particularly problematic if the software has to instantiate two versions of the same meta-model – a situation common for long-term software development projects such as car development projects. In this paper, we present a case study of the evolution of the standardized meta-model used in the development of the automotive software systems – the AUTOSAR meta-model – at Volvo Car Corporation. The objective of this study is to assist the automotive software designers in planning long-term development projects based on multiple AUTOSAR meta-model versions. We achieve this by visualizing the size and complexity increase between different versions of the AUTOSAR meta-model and by calculating the number of changes which need to be implemented in order to adopt a newer AUTOSAR meta-model version. The analysis is done for each major role in the automotive development process affected by the changes.
Stay tuned for the full version of the paper and congrats to Darko!
In our recent research we’ve looked at a number of ways to support software development companies in working with reliability modelling.
I have come across this article on how to choose a model – a systematic review. The authors look at a number of criteria and evaluate which ones are the most used when choosing models. Nice and interesting reading.
In our research work we stumbled upon the question of monitoring whether a product is ready to release (Staron et al., “Release Readiness Indicator for Mature Agile and Lean Software Development Projects”, XP 2012). We could identify indicators which show how many weeks remain until release, given the organization’s testing and development speed.
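To illustrate the idea, here is a deliberately simplified sketch – my own illustration with made-up numbers, not the indicator from the paper – where the weeks to release follow from the remaining work and the weekly speed of the organization:

```python
def weeks_to_release(open_defects, defects_closed_per_week,
                     remaining_test_cases, tests_executed_per_week):
    """Simplified release readiness indicator: the slowest activity sets the pace."""
    defect_weeks = open_defects / defects_closed_per_week
    test_weeks = remaining_test_cases / tests_executed_per_week
    return max(defect_weeks, test_weeks)


# Made-up numbers for a single iteration.
print(weeks_to_release(open_defects=42, defects_closed_per_week=15,
                       remaining_test_cases=300, tests_executed_per_week=120))
```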
In this article we could see a complement to our work since it presents a cost model for how much testing is needed to achieve a specific release pace. Interesting work, waiting to be validated in industrial contexts.
Predicting risk of pre-release code changes with Checkinmentor
This recently published paper shows a very nice approach to monitoring which kinds of patterns in pre-release code changes can be risky w.r.t. the fault proneness of software components. The paper reports experiences from analyzing Windows Phone software at Microsoft, done in one of my favorite places – Microsoft Research.
In the context of this work, a module is risky if it causes a bug fix after the release, and the metrics used are both source code metrics and metrics of the organization behind the product development. The authors found that the change size metrics are the most prominent ones. This means that the more code one checks in, the higher the risk of introducing a bug…
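As a rough illustration of that kind of risk model (my own sketch with made-up numbers, not Checkinmentor itself), one can train a simple classifier on change size metrics and use it to score new check-ins:

```python
# Hypothetical data: [lines added, lines deleted, files touched] per check-in,
# labelled 1 if the change later required a post-release bug fix.
from sklearn.linear_model import LogisticRegression

X = [[350, 40, 12], [20, 5, 1], [900, 120, 30], [15, 2, 1], [400, 60, 9], [8, 1, 1]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimated risk (probability of a post-release bug fix) for a new, large change.
print(model.predict_proba([[500, 80, 14]])[0][1])
```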