Post-Corona branding…

Post Corona: From Crisis to Opportunity: Galloway, Scott: 9780593332214: Amazon.com: Books

A good holiday read is an essential addition to the time spent with family and friends. Every year I try to get hold of a book that gives me inspiration for the upcoming year. Last year I read “Grit”, which is about perseverance. This year, I noticed a book by NYU Stern professor Scott Galloway – “Post Corona: From Crisis to Opportunity”. Galloway is also the author of “The Four”, a book about Apple, Google, Facebook, and Amazon.

Now, to the topic of the day – the post-corona book. I read this book a bit slower than I usually do (which is a good thing). While reading it, I had Galloway’s voice in my head talking about the opportunities of the large companies – big tech, as he calls them. His thesis is that the pandemic has accelerated their growth to a size that makes them really hard to disrupt. Through mergers and acquisitions they can “cannibalize” their competition – unless the competition is one of them.

I thought that this book would be about Zoom – a company unheard of before the pandemic, now a synonym for a phone call. I thought that it would be about health services and telemedicine – another area that used to be small and is now big. Well, it was nothing like that. The book is about The Four and how they capitalize on their brands in times of a pandemic.

There is a thesis out there that if you are getting something for free, it’s not worth much. In this book, Galloway popularizes another thesis – if you are getting something for free, you are the product, not the consumer. He uses this to explain why Apple charges so much for its products – for not using our data – whereas Google and Facebook/Meta capitalize on our data. Apple collects about 200 data points per day from us, while Google collects about 2,000 data points per hour – a small difference.
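To put that “small difference” into perspective, here is a quick back-of-the-envelope calculation based on the figures quoted above (the numbers come from Galloway’s book and are only illustrative):

# Comparing the data-collection figures quoted from the book
apple_per_day = 200          # data points Apple reportedly collects per day
google_per_hour = 2000       # data points Google reportedly collects per hour

google_per_day = google_per_hour * 24
print(google_per_day)                    # 48000 data points per day
print(google_per_day / apple_per_day)    # 240.0 – roughly a 240x difference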

I’m not a privacy freak, but I do not want to be a product unless I choose to be. I do not want companies to monetize me, my behavior, and my family. But – and that’s the sad part – I do want great services for a reasonable price. I want my maps to work well – the one in the car’s built-in GPS simply does not cut it. I want to watch short tutorials on YouTube – Netflix does not produce tutorials about variational autoencoders (yet).

To sum up, I like the thesis posed by Galloway that the next big thing taken up by Amazon will probably be medical insurance or schools. It is not difficult to see that the telemedicine model is essentially mature enough to be disrupted. I really recommend this book as food for thought in the post-pandemic (or endemic) world of 2022.

Test prioritization – a systematic review (review)

Image source: pixabay

Test case selection and prioritization using machine learning: a systematic literature review (springer.com)

Testing is an important activity in every software engineering project. In professional organizations, the process is structured and well-organized. In smaller projects, start-up style organizations, or in research studies, the process is less organized.

There are different views on why we do testing. Some think that we test to find defects, some to prove that the software works correctly, and finally some think that we do it to waste time (well, maybe not so many). In my experience, it is a combination of the first and the second. We do testing to find defects and also to track how good our software gets over time (software reliability growth modelling).

This paper presents a systematic literature review on using machine learning to select and prioritize test cases. I think that the authors summarize their contribution in a very good way (quote):

  • The main ML techniques used for TSP are: supervised learning (ranking models), unsupervised learning (clustering), reinforcement learning, and natural language processing.
  • ML-based TSP techniques mainly rely on features that are easy to compute and based on data that are practical to collect in a CI context, including execution history, coverage information, code complexity, and textual data.
  • ML-based TSP techniques are evaluated using a variety of metrics that are, sometimes, calculated differently in TS and TP, making it difficult to compare their results. Most of the currently available subjects have extremely low failure rates, making them unsuitable for evaluating ML-based TSP techniques.
  • Comparing the performance of ML-based TSP techniques is challenging due to the variation of evaluation metrics, test suite sizes, and failure rates across studies. Reporting failure rates alongside performance values helps provide more interpretable results to the wider research community.
  • Only six out of the 29 selected studies (21%) can be considered reproducible, thus raising methodological issues in the studies and a lack of confidence in reported results.

I think the biggest surprise, for me, is that complexity-based metrics are still widely used in this context. I’m happy that there are new approaches on the rise, for example textual analyses. I guess there is a point in combining approaches, but complexity seems like a very coarse-grained instrument for this type of analysis. We know it correlates well with size, and the larger the test (or the UUT), the higher the probability of triggering a failure.
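To make this concrete, here is a minimal sketch of a learning-based prioritization step: rank the test cases by their predicted probability of failing, using the kinds of features the review lists (execution history, coverage, complexity). The feature values and the choice of a random forest are my own assumptions for illustration, not something prescribed by the surveyed papers.

# Hypothetical sketch: rank test cases by predicted failure probability
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per test case: [failures in last 10 runs, lines covered, complexity of the UUT]
X_train = np.array([
    [3, 120, 14],
    [0,  45,  3],
    [1, 200, 22],
    [0,  80,  7],
])
y_train = np.array([1, 0, 1, 0])  # 1 = the test failed in the following CI cycle

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# New CI cycle: predict failure probability and run the riskiest tests first
X_new = np.array([[2, 150, 18], [0, 60, 5], [1, 90, 9]])
failure_probability = model.predict_proba(X_new)[:, 1]
print("Execution order (most likely to fail first):", np.argsort(-failure_probability))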

Well, I guess I need to run more experiments myself to check whether I’m missing something.

Merry X-mas and the next year with AI

Image by Peter Pieras from Pixabay

Sparse reward for reinforcement learning-based continuous integration testing – Yang – Journal of Software: Evolution and Process – Wiley Online Library

This is the last post that I want to write in 2021. The year has been hectic and full of surprises. First, we got the news that the vaccine works for Covid-19. We all prepared for normalization – for being able to travel, to visit friends and family, and to attend conferences in person.

Then came the new variants, like Omicron, which seems to escape the vaccine, and countries are still not ready to open up. Conferences get postponed, trips get canceled. I hope this is just a temporary situation and that we will be able to get the situation under control again.

For the last post of 2021, I chose one of the articles that I have read recently – about the use of reinforcement learning in continuous integration testing. It is kind of a different approach compared to what we do in the Software Center project.

This paper tackles the problem of sparse rewards in the fitness function when using reinforcement learning for test selection. It proposes a combination of historical data and a reward function that assigns a higher reward for non-sparse data. The work looks very promising, as it has been evaluated on 14 different industrial data sets. I need to check it during the coming holidays – a project to do over X-mas.

With that, I would like to thank all of you for being here with me during 2021 and hope that we can continue in 2022. Wish you all great holidays and the best of luck in the coming 2022!

From the abstract:

“Reinforcement learning (RL) has been used to optimize the continuous integration (CI) testing, where the reward plays a key role in directing the adjustment of the test case prioritization (TCP) strategy. In CI testing, the frequency of integration is usually very high, while the failure rate of test cases is low. Consequently, RL will get scarce rewards in CI testing, which may lead to low learning efficiency of RL and even difficulty in convergence. This paper introduces three rewards to tackle the issue of sparse rewards of RL in CI testing. First, the historical failure density-based reward (HFD) is defined, which objectively represents the sparse reward problem. Second, the average failure position-based reward (AFP) is proposed to increase the reward value and reduce the impact of sparse rewards. Furthermore, a technique based on additional reward is proposed, which extracts the test occurrence frequency of passed test cases for additional rewards. Empirical studies are conducted on 14 real industry data sets. The experiment results are promising, especially the reward with additional reward can improve NAPFD (Normalized Average Percentage of Faults Detected) by up to 21.97%, enhance Recall with a maximum of 21.87%, and increase TTF (Test to Fail) by an average of 9.99 positions. “
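To illustrate the general idea (my own simplified interpretation, not the authors’ exact formulas from the abstract), a shaped reward could combine the historical failure density of a test case with a small additional reward for passed test cases, so that the agent rarely receives an all-zero signal:

# Simplified sketch of reward shaping for RL-based test prioritization in CI
def shaped_reward(test_case, history, base_reward=1.0, bonus_weight=0.1):
    """history maps a test case id to its past verdicts (True = failed)."""
    verdicts = history.get(test_case, [])
    failure_density = sum(verdicts) / len(verdicts) if verdicts else 0.0

    if verdicts and verdicts[-1]:        # the test failed in the current cycle
        return base_reward + failure_density
    # passed test cases still get a small reward based on how often they have run,
    # which reduces the sparsity of the reward signal
    return bonus_weight * min(len(verdicts), 10) / 10.0

history = {"test_login": [False, False, True, True], "test_export": [False] * 8}
print(shaped_reward("test_login", history))   # 1.5 – boosted by historical failures
print(shaped_reward("test_export", history))  # 0.08 – small but non-zero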

“That will never work” – A book about Netflix

That Will Never Work: The Birth of Netflix by the first CEO and co-founder Marc Randolph : Randolph, Marc: Amazon.se: Böcker

Building a successful start-up seems like a really cool idea – from a distance. I used to teach a course about entrepreneurship, start-ups, business models, and the like. Although it was nice, I always felt that I’m a person who knows absolutely nothing about this. At least not in practice…

In this book, the original founder of Netflix tells the story of how he took the idea and made it into a product. He tells how the idea hatched and how he and his team created a data-driven model for understanding their customers. The book is also about the struggles of start-ups – about taking on investments from the beginning and then being pushed out of the company. It’s about being able to understand what’s best for the company and what’s best for the individual.

I like the way the author describes the story and also shows a bit of himself: how he felt, how he wanted to build the company, and how he decided when to leave (with grace!). I also like his ending of the book – “Nobody knows anything!” – a saying that captures how you never really know what will and will not work in the end.

I recommend this as a Sunday reading to get inspired.

Are software architecture and code the same?

Image by Stefan Keller from Pixabay

Relationships between software architecture and source code in practice: An exploratory survey and interview – ScienceDirect

Software architecting is one of the crucial activities for the success of your product. There is the BAPO model, where B stands for Business and A for Architecture – and there is a good reason why architecture comes second. It should not dictate your business model, but it should support it.

Well, it is also good that the architecture comes before processes and organization. If software is your product, then it should dictate how you work and how you are organized.

But what about the code? For many software programmers and designers, the architecture is a set of diagrams that show logical blocks and software organization, but these diagrams are not the ACTUAL code, not the product itself. In one of our research projects, we study exactly that kind of problem – how to ensure that we keep both aligned, or, more accurately, how we can use machine learning to keep the code and the architecture synchronized.

Note that I use the word synchronized, not aligned or updated. This is to avoid one of the many misconceptions about software architectures – that they are set once and for all. Such an assumption may hold for the architecture of a building, but not for software. We are, and should be, more flexible than that.
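As a deliberately simple illustration of what synchronization can mean in practice – this is not the machine learning approach from our project – consider a conformance check that compares the dependencies intended by the architecture with the dependencies actually found in the code (the module names and rules below are made up):

# Toy architecture-conformance check: intended vs. actual module dependencies
intended = {                       # the architecture: who may depend on whom
    "ui": {"services"},
    "services": {"domain", "persistence"},
    "persistence": {"domain"},
    "domain": set(),
}
actual = {                         # extracted from the code, e.g. from import statements
    "ui": {"services", "persistence"},   # ui bypasses the service layer
    "services": {"domain"},
    "persistence": {"domain"},
    "domain": set(),
}

for module, deps in actual.items():
    violations = deps - intended.get(module, set())
    if violations:
        print(f"Architecture violation: {module} -> {sorted(violations)}")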

In one of the latest issues of Information and Software Technology, I found this interesting study about how architects and programmers perceive software architectures. It shows how architectures evolve and why they often become outdated. It is a survey, and I really like where it is going. I strongly recommend reading it if you are into software architecture, programming, and the technical side of software engineering.

Open or closed – how we can leverage innovation through collaboration (book review)

Open: The Story of Human Progress : Norberg, Johan: Amazon.se: Böcker

Progress and innovation are very important for the development of our societies. Software engineers focus on progress in technology, software, frameworks, and the ways we develop software.

This book is about openness and closedness in modern society. It is a story showing how we benefit from being open and collaborative. I could not stop myself from drawing parallels to the original work about open software – “The Cathedral and the Bazaar” by Eric Raymond. Although a bit dated, that book opened my eyes to the open source movement.

We take it for granted that we have Linux, GitHub, StackOverflow, and all the other tools for open collaboration, but it wasn’t always like that. The world used to be full of proprietary software, and software engineers were people who turned requirements into products. It was the mighty business analysts who provided the requirements.

Well, we know that it does not work like that. Software engineers often work on products – they take ownership of these products and feel proud to create them. It turns out that openness is the way to go here – when software engineers share code, they feel that they contribute to something bigger. When they keep the code to themselves, … well, I do not know what they feel. I like to create OSS products and Docker containers and distribute them. It kind of feels better that way!

How to get your pull request merged quickly… (article highlight)

Image source: pixabay.com

How Developers Modify Pull Requests in Code Review | IEEE Journals & Magazine | IEEE Xplore

I must admit that I’m not the greatest contributor to OSS projects. Yes, I have done a few of those and contributed to projects, but this is more of a hobby than real work. My goal for 2022 is to do better and even put together some Docker containers to make my scripts more reusable. I even bought a book about Docker, which I’ve read, so (theoretically) I’m good to go.

Anyway, I stumbled upon this work, which is about how developers make good pull requests. The paper examines OSS projects and finds that if you make a clear change as part of the pull request and classify that change clearly, you have a high chance that the pull request will be merged quickly.

Stay tuned for more on code reviews…

Guiding the selection of research methodologies (article highlight)

Image by Gerd Altmann from Pixabay

Guiding the selection of research methodology in industry–academia collaboration in software engineering – ScienceDirect

Research methodology is something that we must follow when conducting research studies. Without a research methodology, we just search for something, and if we find it, we do not know whether the finding is universal, true, or even whether it really exists…

In my early work, I got really interested in empirical software engineering, in particular in experimentation. One of the authors of this article was one of my supervisors, and I fell for his way of understanding and describing software engineering – as an applied area of research.

Over time, I realized that experimentation is great, but it is still not 100% of what I wanted. I understood that I would like to see more collaboration with software engineers in industry – those who make their living by programming, architecting, testing, and modifying code. I did a study at one of the vehicle manufacturers in Sweden, where I studied the complexity of the entire car project. There I understood that software engineering needs to be studied and practiced in industry. Academia is the place where we shape young minds, where we can gather multiple companies to share their experiences, and where we can turn findings from individual cases into universal laws.

In this article, the authors discuss research methodologies applicable to industrial, or industry-close, research. They even discuss one of the technology transfer models as a way of research co-production and co-validation.

The authors conclude this great overview in the following way (from the conclusions):

When it comes to differences, the three methodologies differ in their primary objective: DSM on acquiring design knowledge through the design of artifacts, AR on change in socio-technical systems, and TTRM on the transfer of research to industry. The primary objective of one methodology may be a secondary objective in another. Thus, the differences between them are more in their focus than in which activities they include.

In our analysis and comparison of their feasibility for industry–academia collaboration in software engineering research, the selection depends on the primary objective and scope of the research (RQ3). We, therefore, advice researchers to consider the objectives of their software engineering research endeavor and select an appropriate methodological frame accordingly. Furthermore, we recommend studying different sources of information concerning, in particular, the chosen research methodology to better understand the methodology before using it when conducting industry–academia collaborative research.

I will include this article as mandatory reading in my AR Ph.D. course in the future.

Comparing different security vulnerability detection techniques (article review)

Image by Reimund Bertrams from Pixabay

An Empirical Study of Rule-Based and Learning-Based Approaches for Static Application Security Testing (arxiv.org)

In recent weeks, I’ve turned to a specific part of my work, i.e., security vulnerability detection. In many areas, work on security has focused on the entire chain. And that’s a good thing – we need to understand when and where we have a vulnerability. However, that’s not where I can help – which has never really stopped me before.

So, I was looking for a more programmatic view on security. To be more exact, I wanted to know what we, as software engineers, need to focus on when it comes to cyber security. We can, naturally, measure it, but that’s probably not the only thing. We can analyze libraries from OSS communities to find which ones could be exploited. We can even program in a specific way to minimize the risk of exploitation.

In this paper, the authors compare two different techniques for detecting software vulnerabilities – static application security testing (SAST) and software vulnerability prediction (SVP) models. They have identified 12 different findings, of which the following are the most interesting:

  • SVP models are generally a bit better when it comes to precision and overall performance.
  • SVP models provide fewer files to inspect as their output, which saves inspection cost.
  • The two approaches lack synergy, and it is difficult to combine them to increase their performance.

Since they compared only a few tools, I believe it is important to do more experiments. It is also important to understand whether it is good or bad to have fewer files to inspect – I mean, one undetected vulnerability can be very costly…
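To illustrate that trade-off with made-up numbers: a technique that flags fewer files is cheaper to act on, but every vulnerable file it does not flag is a missed vulnerability.

# Hypothetical comparison of inspection effort vs. missed vulnerabilities
vulnerable = {"auth.c", "parser.c", "upload.c", "crypto.c"}     # ground truth
flagged = {
    "static_analysis": {"auth.c", "parser.c", "upload.c", "logger.c", "ui.c", "db.c"},
    "svp_model":       {"auth.c", "parser.c", "config.c"},
}

for tool, files in flagged.items():
    hits = files & vulnerable
    print(f"{tool}: files to inspect={len(files)}, "
          f"precision={len(hits) / len(files):.2f}, "
          f"recall={len(hits) / len(vulnerable):.2f}, "
          f"missed={sorted(vulnerable - files)}")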

Reproducing AI models – a guideline

Image by Pete Linforth from Pixabay

2107.00821.pdf (arxiv.org)

Machine learning has been used in software engineering as a great tool for both research and development. The fact that we have access to TensorFlow, PyTorch, and other toolkits provides almost endless possibilities. Combine that with the hundreds (if not thousands) of datasets from Zenodo and co., and you can train a model for almost anything.

So far, so good, I would say. Problems (yes, there are always some problems) appear when we want to reproduce the results of others. Training a model on your own dataset and making it available is easy. Trusting such a model in a new context is not.

Imagine an ML model trained on data from Company X. We have probably tuned the parameters a lot, so the model works great there, but does it work for Company Y? Most probably it will not. Well, it will work, but the performance of its predictions is not going to be great.
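A small synthetic illustration of that point (all data below is generated, not from any real company): train a model in one context and evaluate it in another whose data distribution, and decision boundary, has shifted.

# Synthetic example: a model tuned on Company X data degrades on Company Y data
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_company_data(n, shift):
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)   # the boundary depends on the context
    return X, y

X_a, y_a = make_company_data(500, shift=0.0)   # Company X (training context)
X_b, y_b = make_company_data(500, shift=1.5)   # Company Y (new context)

model = LogisticRegression().fit(X_a, y_a)
print("Accuracy on Company X:", accuracy_score(y_a, model.predict(X_a)))
print("Accuracy on Company Y:", accuracy_score(y_b, model.predict(X_b)))   # noticeably lower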

So, Google has partnered with academic institutions to set up SIGMODELS and the TensorFlow Garden, initiatives that are aimed at making ML models more portable, experiments more replicable, and all the other goodies.

In this paper, the authors provide a set of checks that we can use to make models more transparent, which is the first step towards reproducibility. In these guidelines, the authors advocate for reporting the models’ architecture, their input and output structure, building blocks, loss functions, etc.

Naturally, they also recommend reporting the metrics that were used to optimize the models, e.g., accuracy, F1-score, MCC, or others. I know, these are probably the essentials, but you would be surprised how many authors do not actually report these metrics. And if they are omitted, how do we know whether the metrics were simply so poor that the authors left them out (low performance of the model) or whether they are not relevant (low relevance of the metrics – which would be a good thing)?
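As a minimal example of reporting these metrics together (the predictions below are made up), scikit-learn provides all three out of the box; note how, on an imbalanced set like this one, accuracy alone looks better than the F1-score and MCC suggest:

# Reporting accuracy, F1-score, and MCC side by side
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 1, 0, 1]

print("Accuracy:", accuracy_score(y_true, y_pred))        # 0.80
print("F1-score:", f1_score(y_true, y_pred))              # 0.67
print("MCC:", matthews_corrcoef(y_true, y_pred))          # 0.52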

For now, these guidelines are only a draft, but I hope that they will become more mainstream, just like the empirical guidelines from ACM (GitHub – acmsigsoft/EmpiricalStandards: Empirical standards for conducting and evaluating research in software engineering).