Towards a decision-making structure for selecting a research design in empirical software engineering, by C. Wohlin and A. Aurum
Choosing a research design is a task which impacts the results, the ability to draw conclusions and, ultimately, the usefulness of an entire research study. It is not easy for senior researchers and can be outright painful for junior PhD students. Sometimes the choice is dictated by the set-up of the study (e.g. access to industrial practitioners, artefacts, etc.). However, sometimes we have the possibility to choose a design!
As the authors of the paper state: "The main objective of this article is to make researchers more aware of options in relation to the research design, and hence to support researchers in their selection of a research design."
The paper makes great reading and provides a useful research view on how to choose a design. It clearly describes the relevant decision points in choosing a design and outlines several potential building blocks for each of them.
I sincerely recommend this work to all empirical researchers – if nothing else, it raises our awareness of the options we have!
A colleague from my division has written an excellent piece of empirical work on how the transition towards continuous deployment can be done in an Australian company.
I sincerely recommend reading the article at http://www.sciencedirect.com/science/article/pii/S0950584914001694
Predicting defects has been on my mind for a while and I’ve been collecting evidence of good metrics which can improve the accuracy of predictions.
In this article, Madeyski and Jureczko identify one more metric – Number of Distinct Committers (NDC) – which seems to improve prediction models. The link to the full article is here: Which Process Metrics Can Significantly Improve Defect Prediction Models? An Empirical Study.
The empirical evaluation includes 27 open source projects and 6 industrial projects. It’s great that there is a growing body of evidence combining both open source and industrial projects – especially since the results seem to be consistent across the two.
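As a toy illustration of the metric itself (my own sketch, not the authors’ tooling), NDC can be computed from a version-control log by counting the distinct authors who touched each file; the commit data below is made up:

```python
from collections import defaultdict

# Hypothetical commit log as (file, committer) pairs -- illustrative data only.
commits = [
    ("core/parser.py", "alice"),
    ("core/parser.py", "bob"),
    ("core/parser.py", "alice"),
    ("core/parser.py", "carol"),
    ("util/io.py", "alice"),
    ("util/io.py", "alice"),
]

def ndc(commit_log):
    """Number of Distinct Committers (NDC) per file."""
    committers = defaultdict(set)
    for path, author in commit_log:
        committers[path].add(author)
    return {path: len(authors) for path, authors in committers.items()}

print(ndc(commits))  # {'core/parser.py': 3, 'util/io.py': 1}
```

In a prediction model, this per-file count would then be one feature alongside the usual product metrics.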
This week I had a chance to present our experiences from building a sustainable software engineering program (MSc) at University of Gothenburg.
The talk was given at the SANORD symposium at Karlstad University.
The link to the talk is here: Presentation (PDF)
Software Engineering is one of the newest engineering fields, with growing demand from society. The field develops rapidly, which poses challenges for developing sustainable software engineering education – education that allows the alumni to be effective in their work over a long period of time (long-term impact) while remaining attractive to prospective students and industry.
The objective of this presentation is to describe the experiences from using business intelligence methods to develop, profile and monitor software engineering education at the master level. In particular, we address the following research questions:
- Which data sources should be used in developing a profile of a master program?
- How to combine, prioritize and communicate the analyses of the data from the different sources?
- How to identify barriers and enablers of attractive, sustainable software engineering education?
The results are a set of experiences from using data from the national agencies in Sweden (e.g. the Swedish Council for Higher Education – UHR, the Swedish job agency – Arbetsförmedlingen, international master education portals – mastersportal.eu) as input in development and evaluation of a master program in Software Engineering at University of Gothenburg.
The conclusions show that using the available sources leads to creating sustainable programs, and we recommend using such data sources to a larger extent at the national and international levels.
Today I had the privilege of presenting a paper at EASE 2014, done in collaboration with the University of Basilicata in Italy.
Link to presentation
The paper is an experimental validation of whether requirement diagrams speed up the understanding of requirement specifications and whether they increase or decrease comprehension. The results show that comprehension increases while the time remains unchanged.
Darko Durisic has done interesting work on the evolution of industrial-class meta-models. The work has been accepted as a full paper at the SEAA (Software Engineering and Advanced Applications) Euromicro Conference.
Title: Evolution of Long-Term Industrial Meta-Models – A Case Study
Abstract: Meta-models in software engineering are used to define properties of models. Therefore, the evolution of the meta-models influences the evolution of the models and the software instantiated from them. The evolution of the meta-models is particularly problematic if the software has to instantiate two versions of the same meta-model – a situation common for long-term software development projects such as car development projects. In this paper, we present a case study of the evolution of the standardized meta-model used in the development of automotive software systems – the AUTOSAR meta-model – at Volvo Car Corporation. The objective of this study is to assist automotive software designers in planning long-term development projects based on multiple AUTOSAR meta-model versions. We achieve this by visualizing the size and complexity increase between different versions of the AUTOSAR meta-model and by calculating the number of changes which need to be implemented in order to adopt a newer AUTOSAR meta-model version. The analysis is done for each major role in the automotive development process affected by the changes.
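To make the change-counting idea concrete, here is a minimal sketch (my own, not the paper’s analysis) that diffs the sets of element names in two hypothetical meta-model versions; the element names are illustrative, not taken from any actual AUTOSAR release:

```python
# Hypothetical element sets for two meta-model versions (illustrative only).
v1 = {"SwComponentType", "PortPrototype", "RunnableEntity"}
v2 = {"SwComponentType", "PortPrototype", "RunnableEntity",
      "DataTransformation", "Trigger"}

added = v2 - v1      # elements a designer must learn when adopting v2
removed = v1 - v2    # elements that disappear and may break existing models

print(f"added: {sorted(added)}, removed: {sorted(removed)}")
```

A real analysis would of course walk the meta-model structure (classes, attributes, references) rather than flat names, and attribute each change to the development roles it affects.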
Stay tuned for the full version of the paper and congrats to Darko!
In our recent research we’ve looked at a number of ways to support software development companies in working with reliability modelling.
I have come across this article on how to choose a model – a systematic review. The authors look at a number of criteria and evaluate which ones are the most used in choosing models. Nice and interesting reading.
Link to full text
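For readers new to the area, one classic model that such reviews typically cover is the Goel–Okumoto NHPP model, whose mean value function m(t) = a(1 − e^(−bt)) gives the expected cumulative number of failures observed by time t. A minimal sketch, with purely illustrative parameter values:

```python
import math

def goel_okumoto(t, a, b):
    """Expected cumulative failures by time t: m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

# Illustrative parameters: a = total expected failures, b = failure detection rate.
a, b = 120.0, 0.05
for t in (0, 10, 50, 1000):
    print(t, round(goel_okumoto(t, a, b), 1))
```

In practice, a and b are fitted to a project’s failure data (e.g. by maximum likelihood), which is exactly where the selection criteria surveyed in the review come into play.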
Which metrics are used by Agile teams?
Link to full text
I was browsing for articles for my new manuscript and encountered this nice piece of work. The article provides an overview of which code metrics are used by agile teams and why. The needs are:
- Iteration planning
- Iteration tracking
- Motivating and improving
- Identifying process problems
- Pre-release quality
- Post-release quality
- Changes in processes and tools
The article of course mentions the metrics used in each category.
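As a toy example for the “iteration planning” need above (my own illustration, not from the article), a team’s velocity – the rolling average of story points completed in recent iterations – is a common planning input:

```python
# Toy illustration: velocity as a rolling average of completed story points.
def velocity(completed_points, window=3):
    """Average story points completed over the last `window` iterations."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

history = [21, 18, 25, 30, 27]  # made-up iteration history
print(velocity(history))        # average of the last three: 25, 30, 27
```

The other categories (tracking, quality, process problems) would each pull on different metrics, which the article walks through.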
Article highlight: Empirical evidence on the link between object-oriented measures and external quality attributes: a systematic literature review
Link to full text
This article presents an interesting systematic review where the authors set out to look for evidence of correlation between OO metrics and quality. What I like about this paper:
- nice overview of which metrics suites exist for OO programs
- nice overview of which external quality metrics are used
- essentially only 99 studies exist which have the right scope and quality
- the number of studies seems to be growing – even in the past 2-3 years
- the 20-year-old CK suite is still the most popular one
Recommended reading for those who want to see which metrics are the best predictors, when and why.
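For a feel of what the CK suite measures, here is a simplified sketch of two of its metrics computed for Python classes (real tools use language-specific parsers, and WMC normally weights methods by complexity rather than counting them with unit weights):

```python
import inspect

# Toy class hierarchy (illustrative only).
class Base:
    def load(self): ...
    def save(self): ...

class Child(Base):
    def load(self): ...
    def render(self): ...
    def validate(self): ...

def wmc(cls):
    """Weighted Methods per Class, unit weights: methods defined directly in cls."""
    return sum(1 for obj in vars(cls).values() if inspect.isfunction(obj))

def dit(cls):
    """Depth of Inheritance Tree (assumes single inheritance): distance to object."""
    return len(inspect.getmro(cls)) - 1

print(wmc(Child), dit(Child))  # methods defined in Child; its depth below object
```

Metrics like these are what the reviewed studies correlate with external quality attributes such as defect counts.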
Together with Prof. Hansson and Prof. Bosch from Chalmers University of Technology we had the opportunity to guest edit one of the issues of Information and Software Technology. We set out to compile interesting empirical work on how performance in software development is perceived and assessed.
The link to the full text is available at: http://www.sciencedirect.com/science/journal/09505849/56/5
The outcome was a choice of five articles:
- Analysing ISD performance using narrative networks, routines and mindfulness
- Systematic analyses and comparison of development performance and product quality of Incremental Process and Agile Process
- Performance appraisal of software testers
- Performance on agile teams: Relating iteration objectives and critical decisions to project management success factors
- Evaluating performance in the development of software-intensive products
Each of the articles discusses different aspects of performance of software development – what is important for a team (4), which elements of performance are important for the managers (5) or how to assess performance (3).
I’m looking forward to feedback on this special issue!