In the last few days I had the opportunity to read a great book about how to build scalable, high-capacity and reusable software. The book was “Release It! …”, https://pragprog.com/book/mnee/release-it.
As a “former” programmer, I realized that I should have adopted many of the ideas and good practices from the book a long time ago. Using examples from real industrial failures, Michael T. Nygard presents a set of patterns for designing highly reliable software.
Although the examples come from the domain of Java web applications, many of them apply to the automotive industry as well. The same problems will hit the automotive market very soon, and therefore I recommend this book to anyone who wants to start working with automotive software engineering.
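To give a flavour of the patterns: one of the stability patterns Nygard describes is the Circuit Breaker, which stops calling a failing dependency for a while instead of piling up timeouts. A minimal Python sketch (the class name, thresholds and error handling are my own simplifications, not code from the book):

```python
import time

class CircuitBreaker:
    """Wraps a callable; after repeated failures, fail fast for a while."""

    def __init__(self, func, max_failures=3, reset_timeout=30.0):
        self.func = func
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                # Circuit is open: do not even try the dependency.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = self.func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure counter
        return result
```

The point, in the automotive context too, is that a misbehaving component should not drag the rest of the system down with it.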
Metrics research has gained a lot of attention in Software Center, and the results have shown that tackling complexity requires a holistic approach. Recently I encountered a book that discusses the same principles, although at a more introductory level: “Your Code as a Crime Scene” by Adam Tornhill.
The book uses the metaphor of a crime scene investigation to describe troubleshooting code. I recommend it as a starting point. Readers who are interested in going deeper into this topic should look at one of the recent PhD theses from Software Center, by Dr. Vard Antinyan.
The thesis investigates the complexity of software code, requirements and test cases. The conclusion of the work is that we can monitor the evolution of complexity using very common measures, e.g. McCabe complexity combined with the number of changes in the code. Dr. Antinyan even provided a number of tools to monitor the complexity.
The thesis can be found here: http://web.student.chalmers.se/~vard/files/Thesis.pdf.
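To illustrate the idea of combining a complexity measure with churn, here is a toy Python sketch. The keyword-counting approximation of McCabe complexity and the multiplicative "hotspot" score are my own simplifications for illustration; the tools from the thesis are far more elaborate:

```python
import re

# Decision-point keywords; each occurrence adds one to a crude
# McCabe-style cyclomatic complexity estimate.
BRANCH_RE = re.compile(r"\b(if|elif|for|while|and|or|case|catch|except)\b")

def mccabe_estimate(source: str) -> int:
    """Crude cyclomatic complexity: 1 + number of decision points."""
    return 1 + sum(len(BRANCH_RE.findall(line))
                   for line in source.splitlines())

def hotspot_score(source: str, n_changes: int) -> int:
    """Combine complexity with churn: complex files that also change
    often are the ones worth monitoring first."""
    return mccabe_estimate(source) * n_changes
```

The number of changes per file would come from the version control history (e.g. counting commits that touch the file); ranking files by the combined score points to the hotspots.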
In our recent work we tackled the problem of spending far too much effort on maintaining measuring instruments (or metric tools). When the measured entity changes, you need to rewrite the script and keep two or three or five billion versions of it.
So we played with the idea of “teaching” an algorithm how to count, so that every time the entity changes we can “re-teach” the algorithm rather than re-program it.
Guess what – it worked! We played with the LOC metric and got over 90% accuracy on the first try. Cost of re-designing the measuring instrument to adjust to new information needs – almost 0 (null, nil).
Take a look at this paper of ours: https://gup.ub.gu.se/publication/249619. Abstract:
Background: The results of counting the size of programs in terms of Lines-of-Code (LOC) depend on the rules used for counting (i.e. the definition of which lines should be counted). In the majority of measurement tools, the rules are statically coded in the tool and the users of the measurement tools do not know which lines were counted and which were not.
Goal: The goal of our research is to investigate how to use machine learning to teach a measurement tool which lines should be counted and which should not. Our interest is to identify which parameters of the learning algorithm can be used to classify lines to be counted.
Method: Our research is based on the design science research methodology, where we construct a measurement tool based on machine learning and evaluate it on open source programs. As a training set, we use industry professionals to classify which lines should be counted.
Results: The results show that classifying the lines as counted or not has an average accuracy varying between 0.90 and 0.99 measured as Matthews correlation coefficient, and between 95% and nearly 100% measured as the percentage of correctly classified lines.
Conclusions: Based on the results we conclude that using machine learning algorithms as the core of modern measurement instruments has a large potential and should be explored further.
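To make the idea concrete, here is a toy Python sketch of “teaching” a line counter: each line is reduced to a couple of features, and the majority label from human-classified training lines decides whether similar lines are counted. The features and the majority-vote model are my own illustration; the actual study uses more elaborate features and learning algorithms:

```python
def features(line: str):
    """Reduce a line to a small feature tuple."""
    s = line.strip()
    return (s == "",                                   # blank line?
            s.startswith(("//", "#", "/*", "*")))      # looks like a comment?

def train(labeled_lines):
    """labeled_lines: iterable of (line, should_count) pairs from experts.
    Learns the majority decision per feature combination, so re-teaching
    means supplying new labeled examples, not rewriting the tool."""
    votes = {}
    for line, label in labeled_lines:
        votes.setdefault(features(line), [0, 0])[int(label)] += 1
    return {f: counts[1] >= counts[0] for f, counts in votes.items()}

def count_loc(model, source: str) -> int:
    """Count the lines the learned model says should be counted."""
    return sum(model.get(features(ln), False) for ln in source.splitlines())
```

Changing the counting rules then only requires re-labeling a handful of example lines, which is the near-zero re-design cost mentioned above.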
I get a lot of questions about the essential readings in the area of metrics. The area has been active since the 1950s, so the number of books is large and the number of articles is naturally even larger. Here is the list of books that I’ve compiled for my students and colleagues from industry:
- Norman Fenton and James Bieman: Software Metrics. This is a classic in the area of software metrics. It has been around since the 1990s and is perceived as providing the foundations of software metrics. The main audience of this book comprises software engineering students and researchers. If you want to start with the more theoretical aspects, closer to software product metrics, this is the perfect book for you.
- Alain Abran: Software Metrics and Software Metrology. This is the newest of these books in the discipline of software engineering. It provides a very good foundation in metrology and gives some examples of modern software measures. The major focus of the book is the COSMIC FP measure. If you want to get good foundations in metrology and then move towards estimation and measurement reference etalons, this is the perfect book for you.
- Cheryl Jones and Beth Layman: Practical Software Measurement. This is a very good book for practitioners who want to apply the ISO/IEC 15939 standard. It provides a solid description of the standard, which defines the measurement process. It’s a great book for everyone who wants to look into ISO/IEC 15939 and introduce it into their organization.
- Christof Ebert and Reiner Dumke: Software Measurement. This is a classic that provides solid foundations in measurement theory and estimation. The book is rather long and covers multiple aspects, but its audience seems to be mostly students.
- Christof Ebert et al.: Best Practices in Software Measurement. This book presents a number of measurement best practices. It is a very good book for practitioners, but it needs to be complemented with the previous one (Software Measurement).
For everyone who wants to get into the measurement area, these books are a good start. There are, of course, many other books dedicated to more specific areas, and I will get back to those soon.
Link to article: http://www.sciencedirect.com/science/article/pii/S0950584916301203
As an empirical researcher with close ties to industry (www.software-center.se), I find this kind of study to be of utmost importance. I believe that in software engineering, discussing, developing and evaluating methods, tools and techniques needs to be done in collaboration with industry. As the authors of this paper point out, such collaborative environments are still scarce, but they are appearing.
I recommend reading the paper and reflecting a bit on the way in which we conduct our studies and engage in collaboration. Close planning, personal chemistry and academic excellence combined with industrial impact should be our guiding principles!
Results (as written in the abstract by the authors): “Through thematic analysis we identified 10 challenge themes and 17 best practice themes. A key outcome was the inventory of best practices, the most common ones recommended in different contexts were to hold regular workshops and seminars with industry, assure continuous learning from industry and academic sides, ensure management engagement, the need for a champion, basing research on real-world problems, showing explicit benefits to the industry partner, be agile during the collaboration, and the co-location of the researcher on the industry side.”
Link to the book: https://www.amazon.co.uk/Creativity-Inc-Overcoming-Unseen-Inspiration/dp/0552167266/ref=sr_1_1/253-4676796-1009806?ie=UTF8&qid=1482399808&sr=8-1&keywords=creativity+inc and https://en.wikipedia.org/wiki/Creativity,_Inc.
Naturally, a lot has been written about the best academic environments and academic excellence, and while reading this book about Pixar Animation Studios, I kept reflecting on our own profession and environments. In the book, the author presents his experiences with the start-up of the studio and its later successes.
What struck me the most was the way in which the studio nurtures creativity. They acknowledge directly that creativity is not something that strikes one like lightning from a blue sky, but the result of a long process. It requires a number of diverse roles to come together: different roles, but all with an equal voice. Everyone has to have the right to offer an opinion and discuss it, and they should be able to openly and honestly question and discuss others’ opinions.
I see this as aligned with the academic spirit, and from my observations the best academic environments are the ones where teamwork and team spirit matter most.
I sincerely recommend this book as an inspiration for the creative process of research!
The figure is a link to: https://en.wikipedia.org/wiki/Creativity,_Inc.
Keeping Continuous Deliveries Safe, by S. Vöst and S. Wagner
Link to article: https://arxiv.org/pdf/1612.04164v1.pdf
One of the challenges in introducing Agile software development into safety-critical systems engineering is the ability to assure the safety properties. A number of solutions to that challenge exist, though, to the best of my knowledge, none of them has been successfully adopted in commercial product development.
The authors of this article propose to address this challenge with continuous safety builds. It is encouraging that this work is set in the context of automotive software development, although it is still in the idea phase. I hope to see more of this kind of research soon!
Abstract (of the article, quoted directly from the source):
Allowing swift release cycles, Continuous Delivery has become popular in application software development and is starting to be applied in safety-critical domains such as the automotive industry. These domains require thorough analysis regarding safety constraints, which can be achieved by formal verification and the execution of safety tests resulting from a safety analysis on the product. With continuous delivery in place, such tests need to be executed with every build to ensure the latest software still fulfills all safety requirements. Even more though, the safety analysis has to be updated with every change to ensure the safety test suite is still up-to-date.
We thus propose that a safety analysis should be treated no differently from other deliverables such as source-code and dependencies, formulate guidelines on how to achieve this and advert areas where future research is needed.
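A minimal sketch of what treating the safety analysis as a build deliverable could look like in practice: a build gate that fails whenever the safety analysis artefact is staler than any source change. This is my own illustration of the idea, not code from the paper; the file names and the modification-time-based staleness check are assumptions:

```python
import os

def safety_gate(source_files, safety_analysis_file):
    """Fail the build if any source file changed after the safety
    analysis was last updated, treating the analysis as a deliverable
    that must track every change."""
    analysis_mtime = os.path.getmtime(safety_analysis_file)
    stale = [f for f in source_files
             if os.path.getmtime(f) > analysis_mtime]
    if stale:
        raise SystemExit(
            f"safety analysis out of date with respect to: {stale}")
```

In a continuous delivery pipeline, a gate like this would run before the safety test suite, so that an outdated hazard analysis stops the build just like a failing unit test would.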
I’ve recently done a personal mobility project with Volvo Cars (www.volvocars.com), which was a fantastic experience. I was on Volvo’s site one day a week and developed a course for them on actionable dashboards :)
Here is a short movie about this collaboration, done with the colleagues at Volvo Cars, courtesy of Chalmers.
Link to vimeo
Quite recently I read one of the books by Daniel H. Pink, “Drive”, which describes what motivates us in general, and in particular in areas that require creativity, such as research.
In my opinion, the ideas of Motivation 3.0 are highly applicable to our students, in particular the possibility of helping our young colleagues become intrinsically motivated to gain knowledge. We need to understand how to provide them with “flow”-type tasks: something that makes the students feel that the task is challenging, but not too difficult.
In the next “software quality” course, some of these ideas will be put into practice.
We’re starting to record online trainings on measurement. The first one is about ISO/IEC 15939 and its measurement information model.
Go to the video at GU play: Measurement information model
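As a small teaser for the training, the measurement information model of ISO/IEC 15939 chains base measures (from a measurement method) into derived measures (via a measurement function) and finally into indicators (via an analysis model). A toy Python sketch of that pipeline; the defect-density example and the threshold value are mine, not prescribed by the standard:

```python
def base_measures(defects_found: int, kloc: float) -> dict:
    """Base measures come straight from the measurement method
    (here: simple counting of defects and thousands of lines of code)."""
    return {"defects": defects_found, "size_kloc": kloc}

def derived_measure(base: dict) -> float:
    """Measurement function combining two base measures: defect density."""
    return base["defects"] / base["size_kloc"]

def indicator(density: float, threshold: float = 5.0) -> str:
    """Analysis model turning the derived measure into an indicator
    that supports a decision (the threshold is an assumed example)."""
    return "red" if density > threshold else "green"
```

The training walks through each of these layers and how they map onto an organization’s information needs.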