Together with Prof. Hansson and Prof. Bosch from Chalmers University of Technology, we had the opportunity to guest-edit an issue of Information and Software Technology. We set out to compile interesting empirical work on how performance in software development is perceived and assessed.
The link to the full text is available at: http://www.sciencedirect.com/science/journal/09505849/56/5
The outcome was a selection of five articles:
- Analysing ISD performance using narrative networks, routines and mindfulness
- Systematic analyses and comparison of development performance and product quality of Incremental Process and Agile Process
- Performance appraisal of software testers
- Performance on agile teams: Relating iteration objectives and critical decisions to project management success factors
- Evaluating performance in the development of software-intensive products
Each of the articles discusses a different aspect of performance in software development – what is important for a team (4), which elements of performance are important for managers (5), or how to assess performance (3).
I’m looking forward to feedback on this special issue!
How much does it cost to be ready with testing?
Link to full text
In our research work we stumbled upon the question of monitoring whether a product is ready to release (Staron et al., “Release Readiness Indicator for Mature Agile and Lean Software Development Projects”, XP 2012). We identified indicators that can show how many weeks the organization needs until release, given its testing and development speed.
This article is a nice complement to our work, since it presents a cost model for how much testing is needed to achieve a specific release pace. Interesting work, waiting to be validated in industrial contexts.
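To make the idea concrete, below is a minimal sketch of such a release-readiness indicator. The function name, the simple linear backlog model and all the numbers are my own assumptions for illustration, not formulas from either paper:

```python
# Hypothetical release-readiness sketch: estimate weeks until release from the
# open defect backlog, the testing speed and the defect inflow from ongoing
# development. All names and the linear model are illustrative assumptions.

def weeks_to_release(open_defects, defect_removal_rate, defect_inflow_rate):
    """Estimate weeks until the defect backlog is empty.

    open_defects        -- defects currently open
    defect_removal_rate -- defects closed per week (testing speed)
    defect_inflow_rate  -- new defects reported per week (development speed side effect)
    """
    net_rate = defect_removal_rate - defect_inflow_rate
    if net_rate <= 0:
        return float("inf")  # backlog is not shrinking; no release date in sight
    return open_defects / net_rate

print(weeks_to_release(open_defects=120, defect_removal_rate=40, defect_inflow_rate=10))  # 4.0
```

The point of the sketch is only that both speeds matter: a high inflow of new defects can cancel out a fast test organization entirely.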
Predicting risk of pre-release code changes with Checkinmentor
This recently published paper presents a very nice approach to monitoring which kinds of patterns in pre-release code changes are risky with respect to the fault-proneness of software components. The paper reports experiences from analyzing Windows Phone software at Microsoft, done in one of my favorite places – Microsoft Research.
In the context of this work, a module is risky if it can cause a bug fix after the release, and the metrics used are both source-code metrics and metrics of the organization behind the product development. The authors found that change-size metrics are the most prominent ones. This means that the more code one checks in, the higher the risk of introducing a bug…
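As a toy illustration of that finding (my own example, not Microsoft’s actual model), one could rank pre-release changes by a simple change-size metric such as churn and flag the largest ones for extra review:

```python
# Illustrative sketch: rank pre-release changes by churn (lines added plus
# lines deleted), since change-size metrics were found to be the most
# prominent risk indicators. The data and field names are made up.

def churn(change):
    """Total churn of a change: lines added plus lines deleted."""
    return change["added"] + change["deleted"]

changes = [
    {"id": "c1", "added": 500, "deleted": 120},
    {"id": "c2", "added": 12, "deleted": 3},
    {"id": "c3", "added": 90, "deleted": 40},
]

# Largest changes first: candidates for extra review before the release.
risky = sorted(changes, key=churn, reverse=True)
print([c["id"] for c in risky])  # ['c1', 'c3', 'c2']
```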
Link to full text
Choosing a reliability growth model for open source software, online first from IEEE Computer
Link to full text at IEEE
Predicting the number of unknown defects has always been an important problem to solve. A lot has been done in the area and a lot will be done before the problem is solved.
This paper highlights different types of reliability models (e.g. convex, concave) and how to choose between them for open source projects. It’s a magazine article, so it reads nicely and gives useful pointers. Recommended as Friday-evening reading :)
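For readers who want to see what such model shapes look like, here is a small sketch using two classic mean value functions: the concave Goel–Okumoto model and the delayed S-shaped model. The parameter values are made up for illustration and are not taken from the article:

```python
import math

# Two classic reliability growth curves, evaluated with illustrative
# (made-up) parameters: a = total expected defects, b = detection rate.

def concave_mvf(t, a=100.0, b=0.3):
    """Goel-Okumoto mean value function: expected defects found by time t.
    Defect discovery slows down monotonically (concave curve)."""
    return a * (1.0 - math.exp(-b * t))

def s_shaped_mvf(t, a=100.0, b=0.3):
    """Delayed S-shaped mean value function: discovery ramps up first,
    then levels off (S-shaped curve)."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

# Compare how fast each curve approaches the total of 100 defects.
for t in (1, 5, 10, 20):
    print(t, round(concave_mvf(t), 1), round(s_shaped_mvf(t), 1))
```

Picking between such families is essentially asking whether a project’s defect-finding rate peaks immediately or only after the community has warmed up.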
Directing high-performance software development teams
Link to full text
Speed, speed, speed – who wouldn’t like their team to be fast, effective and high-performing? The only question is how to achieve this goal.
In this paper the author presents a method for identifying the capabilities of high-performance agile teams. For example, to be agile a team has to have a conscious sensitivity and responsiveness to customer and environment needs and changes. Sounds quite like one has to be on alert and flexible, ready to embrace change.
The entire method/analyzer consists of 6+1 questions that can help assess the team’s maturity. It has been tested in a number of organizations by the author. Sounds simple and nice. Can’t wait to start using it in practice…
Agile metrics in technology acquisition
Link to full text
Recently the Software Engineering Institute has published an interesting article on the use of Agile metrics in DoD contracts. They have defined a few metrics of interest:
- Velocity – volume of work accomplished in a specified period of time
- Sprint Burn-Down – progress for the development team during a sprint
- Release Burn-Up – release readiness metrics
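As a quick illustration of how these three metrics could be computed from sprint data, here is a minimal sketch with made-up numbers; it reflects only the one-line summaries above, not the report’s full definitions:

```python
# Made-up sprint data for illustration.
completed_points_per_sprint = [21, 18, 24]   # story points completed per sprint

# Velocity: volume of work accomplished in a specified period of time
# (here: average story points per sprint).
velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)

# Sprint burn-down: work remaining after each day of the current sprint.
sprint_commitment = 20
points_done_per_day = [3, 4, 2, 5, 6]
burn_down = []
remaining = sprint_commitment
for done in points_done_per_day:
    remaining -= done
    burn_down.append(remaining)

# Release burn-up: cumulative work completed towards the release scope.
burn_up = []
total = 0
for points in completed_points_per_sprint:
    total += points
    burn_up.append(total)

print(velocity)    # 21.0
print(burn_down)   # [17, 13, 11, 6, 0]
print(burn_up)     # [21, 39, 63]
```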
The report recognizes a number of advanced metrics and discusses their use and relation to DoD standards, which makes for nice reading.
During my development of course material for DIT595 (Industrial Best Practice) for the Bachelor Program in Software Engineering and Management, I got inspired by The Lean Startup by Eric Ries (Crown Business Publishing, theleanstartup.com). The book is very good material for entrepreneurs who want to start their own businesses. It is also very good for our students who want to do their bachelor and master theses in industry.
I also did some extra searching for more articles on how measurement should be done in lean start-ups. I found the following article with tips for creating metrics – 8 tips…. The article proposes the following:
- Be actionable
- Be understandable and trustworthy
- Measure results
- Understand the downside
- Understand the upside
Since the article is free I will not quote more – I recommend reading it and reflecting upon the metrics that we create.