Measuring Agile Software Development

It’s been a while since I blogged last, but that does not mean that our team is not working:) Quite the contrary.

In the last few months we have been busy investigating how to measure agile software development and DevOps. We have looked at companies that are about to transform from waterfall and the V-model to Agile, at companies that have made that transformation recently, and at companies that did so a while back.

We found that information needs evolve rapidly as the companies themselves evolve.

Companies that are willing to transform, or are in the middle of a transformation, focus on measuring the improvement of their operations. They want to be faster, deliver more features in shorter time frames and increase quality. They also want to measure how far they have transformed.

Companies that have just transformed focus on following agile practices. They seek measurements that are “agile” (even though there is no such thing) and often end up with measures of velocity, backlogs and customer responsiveness. They are happy to be agile and move on.

However, after a while they discover that these measures (i) have little to do with their product and (ii) say nothing about the long-term sustainability of their business, so they look to the mature agile companies.

Mature agile companies, however, focus on their products and customers. They look at the stability of their products and at the development of their business models. They focus on architectural stability and automation rather than on velocity and story points.

I hope that you will enjoy the presentation on this topic that we will soon give at VESC in Gothenburg.

 

How good is your measurement program?

One of our contributions – the MESRAM model for assessing the quality of measurement programs – has been used by our colleagues to evaluate measurement programs at two different companies: https://doi.org/10.1016/j.infsof.2018.06.006

The paper shows how easy the model is to use and that it provides very good results, in the sense that they reflect the real quality of the measurement programs well.

If you are interested in these results from the Software Center metrics project, please also visit the original paper: https://doi.org/10.1016/j.jss.2015.10.051

There are also a few papers that help you assess the quality of your KPIs and metrics: https://doi.org/10.1109/IWSM-Mensura.2016.033

Trailer about the metrics project

Dissemination of research results in the age of YouTube is not easy – I would say it’s nearly impossible. That’s why I’ve tried to make it a bit more interesting and created this trailer using iMovie.

It’s my first edited video, so please be nice to it!

The link to the video at GU Play: https://play.gu.se/media/Metrics+theme+trailer/0_2f0dw0uz

Software Center Metrics Day – reflections…

This year, the Software Center Metrics Day took place at the end of October, just a few days before the autumn break. The program included a mix of talks from academia and industry, https://www.software-center.se/research-themes/technology-themes/development-metrics/metrics-day-2018-metrics-software-analytics-and-machine-learning/, and focused on recent developments in the metrics area.

What I’ve learned from the event is that it is extremely easy to work with deep learning models. Our colleagues from Microsoft Gothenburg showed us how easy it is to use Azure to create image recognition models – something that has evolved from a research playground into really easy-to-use, powerful machine learning.
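To give a feeling for how little code this takes nowadays, here is a minimal sketch of training an image classifier. It is not the Azure workflow shown at the event, just an illustrative local Keras example; the "data/" folder and the number of epochs are placeholders.

```python
# Illustrative sketch only (not the Azure workflow from the event):
# fine-tune a pre-trained MobileNetV2 on a local folder of labelled images.
# The "data/" folder (one sub-directory per class) is a placeholder.
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # reuse the pre-trained features as-is

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=3)
```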

I’ve also learned how performance measurement in the cloud works. Thanks to our colleague Philip Leitner and his team, we could learn how to best optimize performance.

We have also seen the latest and greatest from the Spotfire business analytics team, located just across the water (literally!), and learned how new car platforms are designed and what kind of metrics are used to drive their design.

Finally, we have seen how start-up companies reason about measurement and how their parent companies influence the way they measure.

Stay tuned for the next metrics day in 2019!

Using Deep Learning to Understand Code

One of our Software Center activities is focused on reducing the effort that designers spend on code analysis and quality assurance. In this project, we are looking at creating a general model of high- and low-quality code.

Now I’ve come across this nice paper about using deep learning to determine whether code is readable or not: https://doi.org/10.1016/j.infsof.2018.07.006

The paper, written by a research team from City University of Hong Kong and Beijing University of Technology, presents a method that has been evaluated against human reviewers and is based on techniques that require no feature engineering. The method outperforms previous approaches, yet requires less effort to set up.

The paper also makes it possible to reuse the code – great and very interesting reading.

In Software Center, we are creating a deep learning model that can learn code quality from code review tools and reduce the review effort by an order of magnitude. Please take a look at our presentation from the Software Center Metrics Day.
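To make the idea a bit more concrete, here is a minimal sketch of the general approach, not our actual model: treat code fragments as text and train a small classifier on labels harvested from historical code reviews. The fragments, labels and network size below are placeholders.

```python
# Illustrative sketch only: a tiny classifier that learns to flag code
# fragments as likely to attract a review remark. The training data here
# is a placeholder; in practice the labels come from code review tools.
import tensorflow as tf

fragments = tf.constant([
    "for i in range(len(xs)): total = total + xs[i]",
    "total = sum(xs)",
])
labels = tf.constant([1.0, 0.0])  # 1 = reviewers commented on this fragment

# Turn raw code text into integer token sequences
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=5000, output_sequence_length=50)
vectorizer.adapt(fragments)
tokens = vectorizer(fragments)

# Small recurrent classifier: probability that a fragment needs review attention
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(5000, 32),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tokens, labels, epochs=5)
```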

Stay tuned!

Software data fuels AI, ML and Software Analytics

I talked about software analytics in the previous post, in particular about the latest issue of IEEE Software. In this post, let me introduce an interesting book for software engineers and software engineering scientists interested in software analytics: Bird, C., Menzies, T., & Zimmermann, T. (Eds.). (2015). The Art and Science of Analyzing Software Data. Elsevier.

After reading a few chapters, one conclusion emerged: modern software analytics is not about algorithms; it’s about data and its collection. It’s about measurement, quantification and metrics. Even the analysis of qualitative data is often done using measurements in order to speed it up.

Harvard Business Review claimed that “Big Data is Not the New Oil”, as there are fundamental differences between the scarce fossil fuel and the abundant data from software projects (https://hbr.org/2012/11/data-humans-and-the-new-oil). However, even though data is not scarce, I believe that it will fuel the software industry for at least one more decade.

Therefore, we still need to teach our students how to work with data, how to collect and analyse it, and how to assess its value. We also need to understand how to monetise the data.

Software analytics, the next thing for software metrics in modern companies

The hot summer in Europe provided a lot of time for relaxation and contemplation:) I’ve spent some of the warm days reading articles for the upcoming SEAA session on software analytics, which is a follow-up to the special issue of IST: https://doi.org/10.1016/j.infsof.2018.03.001

Software analytics, simply put, is using data and its visualisation to make decisions about software development. The typical data sources, both in literature and observed in many companies, are:

  1. Source code measurements from Git
  2. Defect data from JIRA
  3. Requirements data
  4. Customer data, a.k.a. field data
  5. Performance/profiling data from running the system
  6. Process data from time reporting systems, Windows journals, etc.

These data sources allow us to find bottlenecks both in the performance of our software and in the progress of our development.
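To make this concrete, here is a minimal sketch of collecting two of these data sources, the Git measurements and the JIRA defect data. The repository path, JIRA URL, project key and credentials below are placeholders.

```python
# Minimal collection sketch; repository path, JIRA URL, project key and
# credentials below are placeholders.
import subprocess
import requests

# 1. Source code measurements from Git: commits per file as a crude churn measure
log = subprocess.run(
    ["git", "-C", "path/to/repo", "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True).stdout
churn = {}
for line in log.splitlines():
    if line.strip():
        churn[line] = churn.get(line, 0) + 1

# 2. Defect data from JIRA via its REST search endpoint
response = requests.get(
    "https://jira.example.com/rest/api/2/search",
    params={"jql": "project = PROJ AND issuetype = Bug", "maxResults": 100},
    auth=("user", "api-token"))
defects = response.json().get("issues", [])

print(f"{len(churn)} files touched, {len(defects)} defects fetched")
```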

Software analytics has been at the heart of such paradigms as the MVP from The Lean Start-Up, where it provides the ability to steer which features are developed further and which are abandoned.

Our experiences with software analytics are described in chapter 5 of the book Software Development Measurement Programs: https://www.springer.com/us/book/9783319918358

 

KPIs – what’s the major challenge in making them work in software organizations?

Our Software Center project has worked with a number of companies to increase the impact of KPIs in modern organizations. Although the concept of KPI has been around since the 90s, many organizations still struggle with making KPIs actionable.

In this post, I’ll show the results of one of our recent assessments of KPIs. To understand how the KPIs are used, I asked about 20 managers to assess some of the KPIs used in their organizations. We used a simplified model of KPI quality, developed last spring. The results are presented in the figure below.

The figure confirms what gut feeling would tell us – that the major quality problem with the KPIs is the lack of clear guidelines on how to react. The company has no problem with the mathematics, the quantification or even the presentation. The major challenge is the analysis model and the action model linked to it.

How to change this situation?

1. Create an action plan – what to check when the indicator shows red?

2. Find the stakeholder who has the right mandate to act.

3. Make sure that the stakeholder checks the status of the indicator regularly.

4. Make sure that the indicator stays updated and maintained.

If the above cannot be fulfilled, then it makes no sense to have the KPI: remove it, forget it and move forward with another business goal.
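As a small illustration of what an analysis model and an action model can look like when tied together, here is a toy sketch. The thresholds, stakeholder and action below are invented for the example, not taken from the assessed companies.

```python
# Toy sketch: an indicator with an analysis model (thresholds) and an action
# model (who acts and how). All numbers, names and actions are invented.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    value: float
    yellow_threshold: float   # above this: watch closely
    red_threshold: float      # above this: act now
    stakeholder: str          # who has the mandate to act
    action_when_red: str      # what to check/do when the indicator turns red

    def status(self) -> str:
        if self.value >= self.red_threshold:
            return "red"
        if self.value >= self.yellow_threshold:
            return "yellow"
        return "green"

defect_backlog = KPI(
    name="Open defects older than 30 days",
    value=42,
    yellow_threshold=25,
    red_threshold=40,
    stakeholder="Release manager",
    action_when_red="Review the oldest defects and re-plan the next sprint",
)

if defect_backlog.status() == "red":
    print(f"{defect_backlog.name} is RED -> {defect_backlog.stakeholder}: "
          f"{defect_backlog.action_when_red}")
```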

To read more about how we assess KPI quality, take a look at this paper:

Staron, Miroslaw, Wilhelm Meding, Kent Niesel, and Alain Abran. “A Key Performance Indicator quality model and its industrial evaluation.” In 2016 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA), pp. 170-179. IEEE, 2016.

Link: https://ieeexplore.ieee.org/abstract/document/7809605/

Measuring readability of code…

Recently, I had an interesting discussion about code qualities that are seldom part of software research. An example of such a quality is readability – the degree to which we can read the code correctly.

Low readability does not have to lead to defects in the code right away, but in the long run it does. In the context of software engineering of products that evolve over a long time, readability is dangerously close to understandability, and therefore also very close to modifiability and correctness.

I’ve come across the following paper recently:

Scalabrino, S., Linares-Vásquez, M., Oliveto, R. and Poshyvanyk, D., 2017. A comprehensive model for code readability. Journal of Software: Evolution and Process.

The paper designs a set of textual features that can help to quantify readability. Let me quote the abstract:

“…the models proposed to estimate code readability take into account only structural aspects and visual nuances of source code, such as line length and alignment of characters. In this paper, we extend our previous work in which we use textual features to improve code readability models. We introduce 2 new textual features, and we reassess the readability prediction power of readability models on more than 600 code snippets manually evaluated, in terms of readability, by 5K+ people. […] The results demonstrate that (1) textual features complement other features and (2) a model containing all the features achieves a significantly higher accuracy as compared with all the other state‐of‐the‐art models. Also, readability estimation resulting from a more accurate model, ie, the combined model, is able to predict more accurately FindBugs warnings.”
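For a flavour of what textual features can look like, here is a rough sketch that computes a few simple ones for a code snippet. These are illustrative stand-ins, not the exact features proposed in the paper.

```python
# Rough sketch of a few simple textual features for a code snippet.
# Illustrative stand-ins only, not the feature set from Scalabrino et al.
import re

def textual_features(snippet: str) -> dict:
    lines = snippet.splitlines() or [""]
    identifiers = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", snippet)
    comment_lines = [l for l in lines if l.strip().startswith(("#", "//"))]
    return {
        "avg_line_length": sum(len(l) for l in lines) / len(lines),
        "avg_identifier_length": (sum(map(len, identifiers)) / len(identifiers)
                                  if identifiers else 0.0),
        "comment_ratio": len(comment_lines) / len(lines),
        "distinct_terms": len({i.lower() for i in identifiers}),
    }

print(textual_features("total = sum(values)  # aggregate the values"))
```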

How to validate software measures – list of attributes from a systematic review

During the weekend I did some digging into the quality of measurement; in particular, I tried to answer a question from a colleague about the limits of measurement accuracy. Well, instead of digging into accuracy, I ended up looking at the validation of measures in general.

I’ve been searching for methods that people use to evaluate software measures, and I came across this nice paper by Laurie Williams and colleagues: https://dl.acm.org/citation.cfm?id=2377661

This systematic review lists 47 criteria used to evaluate software metrics, covering both empirical and theoretical validation. Here is the list of what they found:

  • A priori validity
  • Actionability
  • Appropriate Continuity
  • Appropriate Granularity
  • Association
  • Attribute validity
  • Causal model validity
  • Causal relationship validity
  • Content validity
  • Construct validity
  • Constructiveness
  • Definition validity
  • Discriminative power
  • Dimensional consistency
  • Economic productivity
  • Empirical validity
  • External validity
  • Factor independence
  • Improvement validity
  • Instrument validity
  • Increasing growth validity
  • Interaction sensitivity
  • Internal consistency
  • Internal validity
  • Monotonicity
  • Metric Reliability
  • Non-collinearity
  • Non-exploitability
  • Non-uniformity
  • Notation validity
  • Permutation validity
  • Predictability
  • Prediction system validity
  • Process or Product Relevance
  • Protocol validity
  • Rank Consistency
  • Renaming insensitivity
  • Repeatability
  • Representation condition
  • Scale validity
  • Stability
  • Theoretical validity
  • Trackability
  • Transformation invariance
  • Underlying theory validity
  • Unit validity
  • Usability

The list is really impressive, but not all attributes apply to all types of metrics. So, one should always consider the intended use of a metric and then seek the right type of validation for it. I recommend this article as great reading for those who are thinking about creating their own metrics:)
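As a small illustration of what checking one of these criteria can look like in practice, here is a toy sketch that tests monotonicity for a trivial size measure. The measure and the code snippets are invented for the example.

```python
# Toy sketch of checking one theoretical criterion, monotonicity, for a simple
# size measure (lines of code): measuring a combined module should never yield
# less than measuring either of its parts. The snippets are placeholders.
def loc(code: str) -> int:
    return sum(1 for line in code.splitlines() if line.strip())

part_a = "def f(x):\n    return x + 1\n"
part_b = "def g(x):\n    return 2 * x\n"
combined = part_a + part_b

assert loc(combined) >= max(loc(part_a), loc(part_b)), "monotonicity violated"
print(loc(part_a), loc(part_b), loc(combined))
```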