Cybersecurity, security, and safety…


During the spring semester, my students did great work looking into the security of a car’s electrical system. They managed to decode signals, understand the high-level data, and make small changes to how the car functions.

It all sounds great as a thesis project. Both the students and the company loved it: it was challenging, it was new, it was useful. But this post is not about that. I want to write about what has happened, or rather not happened, since then.

In the months after the thesis, I decided to look into mechanisms for designing and implementing secure software. Being a programmer at heart, I turned to GitHub for help and searched for tools and libraries for secure software design. I know, I could have searched for something different, but let’s start there.

The results were:

Analysis frameworks:

There were more of these, but most were of the same kind. I was a bit amazed that there is so little outside of web development. I also looked at some of the research in this area (no systematic review; I promised myself not to do one). There I found all kinds of work, but mostly theoretical. The areas of interest were:

  • Cryptography: how to encode/decode information, keys, and passwords (see the small sketch after this list).
  • Secure software design: mostly analysis of vulnerabilities.
  • Secure systems: mostly about passwords and vulnerabilities.
  • Privacy: how to keep private information hidden from third parties (a kind of security, but mostly something else – I’m still waiting to understand what).
  • Legacy operations: how to make software long-lived and provide it with a secure infrastructure.
  • Infrastructure: security of cloud environments, end-to-end security.
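
To make the cryptography point concrete, here is a minimal sketch of symmetric encryption and decryption using the Python cryptography package. This is my own illustration, not one of the tools I found on GitHub:

```python
# Minimal sketch: symmetric encryption with the `cryptography` package
# (pip install cryptography). The payload is an illustrative example.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # secret key; store it safely
f = Fernet(key)

token = f.encrypt(b"sensitive CAN-bus log entry")  # encode
plain = f.decrypt(token)                           # decode

assert plain == b"sensitive CAN-bus log entry"
```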

Since I have worked with software safety, I thought that security would be very similar. However, it was not. The safety community discusses mostly standardization, hazards, and risks, and very little code analysis, finding unsafe code, and the like. So, mostly something different.

I’ll keep digging, and I will run a few experiments with some of my students to understand what the technology could look like. However, I’m not as optimistic as I was at the beginning of my search.

Predicting defects, but continuously (article highlight)

Continuous Software Bug Prediction (yorku.ca)

Although a lot has been written about predicting defects, the problem is still relevant. Some systems have more defects than others. In academia, we can do two things: educate young engineers in making better software, or construct models for predicting where and when to find defects.

A lot of work on defect prediction models focuses on more-or-less randomly chosen releases. However, software development is not random; it is structured and often continuous. This means that it’s important to understand that not all defects are found in the same release/patch/commit as the one where they were introduced (BTW: there is a lot of work on this aspect too).

In this work, the authors analyze 120 continuous releases of six software products and demonstrate the value of their prediction models. The novelty of the approach is a system that checks whether releases are similar to one another based on their distributional characteristics, and tunes the prediction models to each release accordingly. These characteristics are mostly well-known metrics, like the average cyclomatic complexity of a file or the MaxInheritanceTree of a class – easy to collect and analyze, and a lot of tools can be used for that.
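
As a rough sketch of the idea (my own illustration, not the authors’ code), one could compare a metric’s distribution in the new release against past releases with a two-sample Kolmogorov–Smirnov test and train on the closest match. The releases here are assumed to be pandas DataFrames, and the column names are placeholders:

```python
# Sketch: pick the most distributionally similar past release and
# train the defect predictor on it. Column names are illustrative.
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier

def most_similar_release(target, candidates, metric="avg_complexity"):
    # A smaller KS statistic means the metric distributions are closer.
    return min(candidates,
               key=lambda past: ks_2samp(target[metric], past[metric]).statistic)

def predict_defects(new_release, past_releases, features):
    # Train on the past release whose metric distribution matches best.
    source = most_similar_release(new_release, past_releases)
    model = RandomForestClassifier(random_state=0)
    model.fit(source[features], source["defective"])
    return model.predict(new_release[features])
```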

The results, in short, show that the new method is better than randomly choosing a release or bagging releases. The results differ per project, but the approach beats the other two across the board.

I like the approach and will try it the next time I get my hands on software defects, issues, and challenges. Let’s see when that happens :)

Thinking machines – summer reading

Tänkande maskiner: Den artificiella intelligensens genombrott: Häggström, Olle, Pettersson, Rasmus: Amazon.se: Books

Summer is around the corner; in some places it has actually arrived. This usually means relaxation and time for reflection.

I would like to recommend a book by fellow professors. The book is about how thinking machines are viewed today and about the potential of general AI (AGI). It is written in a popular-science style, but the examples and the research behind it are solid. The authors discuss both the technology and the society around it – the legislative aspects of AI and what it means to think, in general and in the case of specific tools.

Have a great summer, and stay tuned for new posts afterwards.

New from ML?

I seldom write about films and events (well, maybe never), but this year a lot has happened online.

What’s new in Machine Learning | Keynote – YouTube

The video above covers the news from Google about their TensorFlow library, which includes new ways of training models, compression and performance tuning, and more.

TensorFlow Lite and TensorFlow.js allow us to use the same models as on desktops, but on mobile devices and in the browser. Really impressive. I’ve caught myself wondering whether I’m more impressed by the hardware capabilities of small devices or by the capabilities of the software. Either way – super cool.
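
For example, converting a trained Keras model for mobile use takes only a few lines with the standard TensorFlow Lite converter. A minimal sketch, using an untrained toy model as a stand-in:

```python
import tensorflow as tf

# A small (untrained) Keras model as a stand-in for a real one.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert to TensorFlow Lite; the optimization flag enables the
# default compression/quantization behaviour.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as fh:
    fh.write(tflite_model)
```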

Google is not the only company announcing things. NVIDIA is also showing a lot of cool features for enterprises, with cloud access for rapid prototyping, model testing, and deployment at the center.

NVIDIA Executive Keynote for Enterprise AI at COMPUTEX 2021 – YouTube

I like gaming, so this is impressive, but even more impressive is last year’s DLSS technology, which still cannot be beaten by the competition. Really nice.

How much does testing cost and how to estimate it…


CSUR5403-53 (acm.org)

Testing software systems is a task that costs a lot. As a former tester, I see it as a never-ending story: you’re done with your testing, new code is added, you are not done anymore, you test more, you’re done, new code… and so on.

When I was a tester, there were no tools for automating the test process (we’re talking about the 1990s here). Well, OK, there was CppUnit, and it was a great help – I could create a suite and execute it. Then I needed to add new test cases, create functional tests, etc. It was fun, until it wasn’t anymore.

I would have given a lot for test orchestration tools back then. A lot has happened since. This paper presents a great overview of how testing cost is estimated – I know, it’s not orchestration, but hear me out. I like this paper because it also shows which tools are used, how test cost is estimated (e.g., based on metrics like coverage and effort), and how the tests are evaluated.
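
Just to make the idea tangible, here is a deliberately naive, toy estimate based on such metrics – my own simplification for illustration, not a formula from the survey:

```python
def estimate_test_effort(n_cases, avg_hours_per_case,
                         current_coverage, target_coverage):
    """Naive effort estimate: running the current suite plus a crude
    surcharge for new tests needed to close the coverage gap."""
    base = n_cases * avg_hours_per_case
    gap = max(0.0, target_coverage - current_coverage)
    # Toy assumption: coverage grows proportionally with test cases.
    extra_cases = n_cases * gap / current_coverage
    return base + extra_cases * avg_hours_per_case

# e.g., 400 tests, 30 minutes each, pushing coverage from 70% to 85%
print(estimate_test_effort(400, 0.5, 0.70, 0.85))
```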

I recommend this as an overview reading, a starting point for understanding today’s testing processes and, eventually, for optimizing them based on the right premises (not HiPPO – the highest-paid person’s opinion).

If you want to test your CI test prioritization


Jin_Servant_ICSE21_AE.pdf (vt.edu)

Many companies talk about using AI in their software engineering processes. However, they have problems sharing their data with researchers and students. The legal processes around open-sourcing data were, and are, scary, and setting up internal collaborations is time-consuming and therefore takes extra effort.

So, this is a great example of replicating industrial set-ups in the open-source community. I’ll use these data sets in my work, and I’d love to see more initiatives like this.

Our team is working on one of those at the moment…

AI for decision makers…


In the last post of 2020, I would like to wish everyone a Merry X-Mas and a fantastic 2021. Well, I guess a normal 2021 would also work.

I would like to thank all my collaborators so far. I hope that I have contributed to your work at least half as much as you have contributed to mine.

To end on a positive note, if you are interested in how to use AI for making decisions – here is the link to the seminar material that I developed together with GUSEE (GU executive education school): AI for Decision Makers – GU Play, Göteborgs universitet

Using my skillset to do something different – it helps me reinvent myself and have more fun…


2020 was a year like no other. Everyone can agree on that. The pandemic changed our lives a lot – the pace of digitalization has gone from a tortoise to a SpaceX rocket!

For me, this year has also changed a lot of things. I’ve moved into a new field: medical signal analysis using ML. I realized that my skillset can be used to help people. Maybe not the ones hit by the pandemic, but still people who need our help.

Together with a team of great specialists from Sahlgrenska University Hospital, we managed to create a set-up for collecting data in the operating room, tagging it, and, finally, applying ML.

In the last three months, we went from nothing to three articles in the making, data collected from several patients, fantastic accuracy, and a great deal of fun.

Here is the link to the movie that describes our work: CHAIR – GU Play, Göteborgs universitet

I’ve reflected on this project, and it’s probably the one I had the most fun with during 2020. It’s a completely new set-up, a great team, extreme energy in the work, and a great deal of meaning behind it.

The project was partially sponsored by the Chalmers CHAIR initiative. Thank you!

Is noise important in SE?

https://www.researchgate.net/profile/Khaled_Al-Sabbagh/publication/344190831_Improving_Data_Quality_for_Regression_Test_Selection_by_Reducing_Annotation_Noise/links/5f5a167aa6fdcc116404d72b/Improving-Data-Quality-for-Regression-Test-Selection-by-Reducing-Annotation-Noise.pdf


Machine learning and deep learning are only as good as the data used to train them. However, even the best data sources can yield data of non-optimal quality. Noise is one example of such data problems.

Our research team has studied the impact of noise on machine learning in software engineering, mostly on testing data. In this paper we present a technique to identify noise, measure it, and reduce it. There are several ways to handle noise; we use one of the more robust ones – removing the noisy instances.
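
For a flavour of how such a filter can work, here is a minimal sketch of a classification-filter style noise reducer – a generic baseline, not necessarily our exact algorithm. It assumes X and y are NumPy arrays:

```python
# Sketch: flag training instances whose label disagrees with a
# cross-validated prediction, and drop them as likely annotation noise.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def remove_label_noise(X, y, n_folds=5):
    # Predict each instance's label with a model that never saw it...
    predicted = cross_val_predict(
        RandomForestClassifier(random_state=0), X, y, cv=n_folds)
    # ...and keep only the instances where prediction and label agree.
    keep = predicted == y
    return X[keep], y[keep], np.where(~keep)[0]  # cleaned data + noisy ids
```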

I recommend taking a look at how the algorithms work – and let us know if you find it interesting!

Classifying code smells…

https://link-springer-com.ezproxy.ub.gu.se/article/10.1007%2Fs11219-020-09498-y


Code smells are quite an interesting phenomenon to study. They are not really defects, but they are not good code either. They exist, but people rarely want to admit to them. There is also no consensus on how much effort it takes to remove them (or even whether they should be removed or just avoided).

In this paper, the authors study whether it is possible to use ML to find code smells. It turns out it is, and the accuracy is quite high (over 95%). The paper also shows that it is sometimes better to present a couple of recommendations (e.g., two potential smells) rather than one: it requires less accuracy from the model, but helps the users narrow down their solution spaces.
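
Building such a top-2 recommender on top of any probabilistic classifier is straightforward. A minimal sketch, where the model, features, and smell labels are placeholders rather than the authors’ setup:

```python
import numpy as np

def top_two_smells(model, file_metrics):
    # Class probabilities for one file; model.classes_ holds the
    # smell labels the classifier was trained on.
    proba = model.predict_proba([file_metrics])[0]
    ranked = np.argsort(proba)[::-1][:2]  # two most likely smells
    return [(model.classes_[i], proba[i]) for i in ranked]

# e.g., with a scikit-learn classifier trained on labels like
# "god_class", "feature_envy", "none":
#   print(top_two_smells(clf, metrics_for_file))
```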