Motivation 3.0 in education

Quite recently I’ve read one of Daniel H. Pink’s books – “Drive” – which describes what motivates us in general, and in particular in areas that involve creativity and research.

In my opinion the ideas of motivation 3.0 are highly applicable to our students – in particular the possibility of helping our young colleagues become intrinsically motivated to gain knowledge. We need to understand how to provide them with “flow”-type tasks: something that lets the students feel that the task is challenging, but not too difficult.

Some of these ideas will come to life in the next “software quality” course.

Evidence of improvement using Agile…

Towards the end of the year I’d like to make a small reflection on Agile software development. It’s been discussed for a number of years now, yet the evidence of bringing measurable results is rather scarce. Here is one article from Åby Academy in Finland which studies a transformation of a large company to Agile: https://www.researchgate.net/profile/Marta_Olszewska_Plaska/publication/280711876_Did_it_actually_go_this_well_a_Large-Scale_Case_Study_on_an_Agile_Transformation/links/55c1d7ea08aeb28645819d3f.pdf

Studied case: Ericsson

Size: ca. 350 people

Product: roughly 10 years old

Languages: RoseRT, C++, Java

Summary of results: Agile software development delivered more features (5x) and delivered them faster (60%).

What I like about the paper is that it provides measurements before the transformation, DURING the transformation, and after. Very interesting reading!

Measurement-as-a-Service (MaaS)

In recent years we’ve seen a lot of discussion, and many good things, about cloud computing – sharing platforms (PaaS), services (SaaS) and software, thus optimizing the usage of computing resources.

This sharing of resources is important for making the software sustainable, and helps the companies to focus on what their business is about rather than on their IT infrastructure.

Measurement programs are no different – they often have strategic value for companies, but they are not really something companies want to spend their R&D budget on (at least not directly). So, how do we make it happen?

Well, we could use the same approach as in SaaS and PaaS and define MaaS (Measurement-as-a-Service), where we can reuse knowledge across organizations and minimize the cost of working with software measurement initiatives.
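To make the idea concrete, here is a minimal, purely hypothetical sketch of the MaaS concept: a shared service hosts the reusable measurement knowledge (metric definitions), and each organization only supplies its own raw data. The class name, API and metric are illustrative assumptions, not the actual system.

```python
# Hypothetical sketch of the MaaS idea: metric definitions are shared
# across organizations; each organization brings only its own data.
class MeasurementService:
    def __init__(self):
        self._metrics = {}  # shared, reusable metric definitions

    def register_metric(self, name, formula):
        self._metrics[name] = formula

    def measure(self, name, data):
        return self._metrics[name](data)

maas = MeasurementService()
# One definition, registered once, reused by everyone.
maas.register_metric("defect_density", lambda d: d["defects"] / d["kloc"])

# Two organizations apply the same definition to their own data.
print(maas.measure("defect_density", {"defects": 30, "kloc": 120}))  # 0.25
print(maas.measure("defect_density", {"defects": 12, "kloc": 60}))   # 0.2
```

The point of the sketch is the separation of concerns: the measurement knowledge lives in one place, so its cost is shared rather than re-developed per company.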

We’ve tried this concept with one of our industrial partners – Ericsson – and it seems that it works very well. You can read more about it in this article.

And the picture below explains a bit how this works.

How to choose the right dashboard?

Dashboards and all kinds of radiators are very popular in industry now. They allow the companies to disseminate the metrics information and to find the right way of visualizing the metrics.

In a recent article written together with Ericsson and Volvo Cars we have explored how to find the right visualization and we developed a model for choosing the dashboard – http://gup.ub.gu.se/records/fulltext/220504/220504.pdf.

The method quantifies a number of dimensions of a good dashboard and provides a simple set of sliders that can be used to select the right visualization. The companies in the study found it to be good input for understanding what stakeholders want when they say “dashboard”.
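The slider idea can be sketched in a few lines. Note that this is an illustrative toy, not the model from the paper: the dimension names and the scoring function are my assumptions.

```python
# Illustrative sketch of slider-based dashboard selection.
# Dimension names ("detail", "timeliness", "breadth") are assumed, not
# taken from the paper; both sliders and profiles use values in [0, 1].
def score_dashboard(sliders, dashboard):
    """Return a fit score in [0, 1]: 1 means a perfect match between
    stakeholder slider settings and a dashboard's profile."""
    diffs = sum(abs(sliders[d] - dashboard.get(d, 0.0)) for d in sliders)
    return 1 - diffs / len(sliders)

stakeholder = {"detail": 0.8, "timeliness": 0.9, "breadth": 0.3}
candidates = {
    "team_radiator": {"detail": 0.9, "timeliness": 1.0, "breadth": 0.2},
    "exec_overview": {"detail": 0.2, "timeliness": 0.4, "breadth": 0.9},
}

best = max(candidates, key=lambda name: score_dashboard(stakeholder, candidates[name]))
print(best)  # team_radiator
```

Even this crude distance-based matching shows the mechanism: the stakeholder moves sliders, and the closest dashboard profile wins.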

As a next step, we’re currently working on defining a quality model for KPIs – Key Performance Indicators. The first version has shown that it allows companies to reduce the number of indicators by as much as 90% by finding the ones which are not of good quality.


How robust is a measurement program?


In our recent work we have explored the possibility of validating that a measurement program is robust. We have worked with seven companies within the Software Center to establish a method and evaluate it. The results are presented in a newly accepted paper, “MeSRAM – A Method for Assessing Robustness of Measurement Programs in Large Software Development Organizations and Its Industrial Evaluation”, to appear in the Journal of Systems and Software.

In short, the method is based on collecting evidence that a measurement program contains elements which are important for the program’s ability to handle changes – for example, whether the measurement program has a dedicated organization working with it and whether the entire company is able to utilize the results from the measurement program.

The method is similar to the stress-testing of banks, so popular in the last decade.
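The evidence-collection step can be pictured as a simple checklist assessment. This is only a sketch of the general idea – the criteria below are paraphrased examples, not MeSRAM’s actual instrument.

```python
# Illustrative sketch (not MeSRAM's actual instrument): check which
# robustness criteria a measurement program has evidence for.
CRITERIA = [
    "dedicated measurement organization",
    "automated metrics infrastructure",
    "results used across the company",
    "metrics have identified stakeholders",
]

def assess(program_evidence):
    """Return (coverage ratio, list of criteria lacking evidence)."""
    missing = [c for c in CRITERIA if c not in program_evidence]
    coverage = 1 - len(missing) / len(CRITERIA)
    return coverage, missing

coverage, missing = assess({
    "dedicated measurement organization",
    "results used across the company",
})
print(coverage)  # 0.5
print(missing)   # the two criteria without evidence
```

As with bank stress-testing, the output is not a prediction but a gap list: the criteria for which the program cannot show evidence are the places where a change is likely to break it.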

The next step in our research is finding out which metrics companies should use to assure the long-term robustness of the measurement program. Stay tuned!


Does outsourcing/global software development deliver on the promise?

I’ve read a very interesting article in one of the recent IEEE Software magazines by Darja Smite, Fabio Calefato and Claes Wohlin: http://www.computer.org/cms/Computer.org/ComputingNow/issues/2015/08/mso2015040026.pdf

The authors look critically at the body of knowledge in the area, trying to find evidence of the cost savings. The result is that the evidence is not in the published articles. Does that mean that it is not possible to publish about it? Or does it mean that there is no real evidence, and the companies make decisions based on “gut feeling”?

It will be interesting to observe what happens with the body-of-knowledge on the topics in the longer run.

 

How do different mathematical aggregations impact defect predictions…

In our previous studies we’ve used simple summation when aggregating complexity measures. The complexity measures are usually calculated on function level and aggregated on file level. An example is the McCabe complexity.
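The aggregation step itself is simple enough to show in a few lines: per-function complexity values (as produced by any static analyzer) are rolled up to file level by summation. The file and function names are of course made up.

```python
# Minimal sketch of sum-based aggregation: McCabe complexity is measured
# per function and rolled up to file level by simple summation.
from collections import defaultdict

# (file, function) -> cyclomatic complexity, e.g. from a static analyzer
function_complexity = {
    ("parser.c", "parse_header"): 12,
    ("parser.c", "parse_body"): 7,
    ("util.c", "log_msg"): 2,
}

def aggregate_by_sum(measures):
    per_file = defaultdict(int)
    for (file_name, _func), value in measures.items():
        per_file[file_name] += value
    return dict(per_file)

print(aggregate_by_sum(function_complexity))
# {'parser.c': 19, 'util.c': 2}
```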

An example of our papers in this area is:

Antinyan, Vard, et al. “Identifying risky areas of software code in Agile/Lean software development: An industrial experience report.” 2014 Software Evolution Week – IEEE Conference on Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE). IEEE, 2014.

or this one:

Antinyan, Vard, et al. “Monitoring Evolution of Code Complexity and Magnitude of Changes.” Acta Cybernetica 21.3 (2014).

and this one:

Antinyan, Vard, et al. “Monitoring Evolution of Code Complexity in Agile/Lean Software Development.”

I was always wondering whether the results are biased by the mathematical operations, which might not always have a reflection in the empirical world – until I came across this article, which says that the summation is not that problematic after all.

Read the article at http://arxiv.org/pdf/1503.08504.pdf

Assi, Rawad Abou. “Investigating the Impact of Metric Aggregation Techniques on Defect Prediction.” arXiv preprint arXiv:1503.08504 (2015).

Since the sample of projects was very small, some replication is needed, but the results look quite promising and definitely interesting.
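The aggregation techniques compared in studies like this can be sketched as alternative roll-up functions over the same per-function measures. The complexity values below are invented; the point is only how differently the aggregates can rank the same file.

```python
# Comparing common aggregation techniques over one file's per-function
# complexity values (the values themselves are made up).
import statistics

values = [12, 7, 3, 3, 1]  # per-function cyclomatic complexities

aggregations = {
    "sum": sum(values),
    "mean": statistics.mean(values),
    "median": statistics.median(values),
    "max": max(values),
}
print(aggregations)
# {'sum': 26, 'mean': 5.2, 'median': 3, 'max': 12}
```

The worry about bias is visible even here: summation conflates size with complexity (a file with many trivial functions can out-score a file with one very complex one), while median and max tell a different story.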



Which metrics are used in Agile and Lean software development?

When working with companies in different projects, I often get the question of which metrics an Agile software development team should use. The answer is of course: it depends on what your team does… and then a set of questions from my side follows. These questions are designed to help me understand the activities the team does, the activities downstream of the process, the product, the process, etc.

I’ve recently looked into this article, in which the authors review metrics used in agile teams. Although I had high hopes for it, I was a bit disappointed – they were more or less the same metrics as any other team uses.

Review article: http://www.sciencedirect.com/science/article/pii/S095058491500035X

However, metrics like release readiness (see our previous article with Ericsson) were not found…

I guess I need to search on…

Staron, Miroslaw, Wilhelm Meding, and Klas Palm. “Release Readiness Indicator for Mature Agile and Lean Software Development Projects.” Agile Processes in Software Engineering and Extreme Programming. Springer Berlin Heidelberg, 2012. 93-107.

How many metrics is enough to get reliable defect predictions?

I’ve stumbled upon this paper from one of the latest issues of Information and Software Technology where the authors play around with the data from the PROMISE repository.

Here is the paper itself: http://www.sciencedirect.com/science/article/pii/S0950584914002523

The metrics evaluated in the study range from McCabe’s cyclomatic complexity, via the CK metrics suite, to the QMOOD suite. The results show that CBO, LOC and LCOM are the three metrics which are best for predicting defects in the studied open source projects.
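To illustrate what a CBO/LOC/LCOM-based flagging could look like in practice, here is a deliberately naive toy: synthetic data and hand-picked thresholds, not the prediction model from the paper.

```python
# Toy illustration (synthetic data, assumed thresholds - NOT the paper's
# model): flag likely defect-prone classes using CBO, LOC and LCOM, the
# three metrics the study found most predictive.
THRESHOLDS = {"cbo": 10, "loc": 500, "lcom": 0.7}  # assumed cut-offs

classes = [
    {"name": "OrderService", "cbo": 14, "loc": 820, "lcom": 0.9},
    {"name": "StringUtils",  "cbo": 3,  "loc": 150, "lcom": 0.2},
]

def risk_score(metrics):
    """Count how many of the three metrics exceed their threshold."""
    return sum(metrics[k] > t for k, t in THRESHOLDS.items())

# Flag a class when at least two of the three metrics are above threshold.
flagged = [c["name"] for c in classes if risk_score(c) >= 2]
print(flagged)  # ['OrderService']
```

A real study would of course learn the model from data rather than hand-pick thresholds, but the toy shows why these three metrics are attractive: they are cheap to compute and easy to act on.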

My sincere recommendation: take a look at the paper before predicting defects next time!
