Thinking machines – summer reading

Tänkande maskiner: Den artificiella intelligensens genombrott: Häggström, Olle, Pettersson, Rasmus: Amazon.se: Books

Summer is around the corner; in some places it has actually arrived. This usually means relaxation and time for reflection.

I would like to recommend a book by fellow professors. The book is about how thinking machines are viewed today and about the potential of general AI (AGI). It is written in a popular science style, but the examples and the research behind them are solid. The authors discuss not only the technology but also society: the legislative aspects of AI and what it means to think in general, and, in some specific cases, what it means for a tool to think.

Have a great summer and stay tuned for new posts after the summer.

News from ML?

I seldom write about films and events (well, maybe actually never), but this year a lot has happened online.

What’s new in Machine Learning | Keynote – YouTube

The video above covers the news from Google about their TensorFlow library, which includes new ways of training models, compression, performance tuning and more.

TensorFlow Lite and TensorFlow.js allow us to use the same models as on desktops, but on mobile devices and in the browser. Really impressive. I’ve caught myself wondering whether I’m more impressed by the hardware capabilities of small devices or by the capabilities of the software. Either way – super cool.
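To make the mobile part concrete, here is a minimal sketch (my own illustration, not something from the keynote) of converting a Keras model to the TensorFlow Lite format:

```python
import tensorflow as tf

# A small stand-in Keras model (any trained model works the same way).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert to the TensorFlow Lite format used on mobile devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g. weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite file can then be loaded by the TensorFlow Lite interpreter on a phone or embedded device.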

Google is not the only company announcing something. NVIDIA is also showing a lot of cool features for enterprises. Cloud access for rapid prototyping, model testing and deployment is at the center of that.

NVIDIA Executive Keynote for Enterprise AI at COMPUTEX 2021 – YouTube

I like gaming, so this is impressive, but even more impressive is last year’s DLSS technology, which still cannot be beaten by the competition. Really nice.

How much does testing cost and how to estimate it…

Image by Armin Forster from Pixabay

CSUR5403-53 (acm.org)

Testing software systems is a task that costs a lot. As a former tester, I see this as a never-ending story: you’re done with your testing, new code is added, you are not done anymore, you test more, you’re done, new code… and so on.

When I was a tester, there were no tools for automating the test process (we’re talking about the 1990s here). Well, OK, there was CppUnit, and it was a great help: I could create a suite and execute it. Then I needed to add new test cases, create functional tests, etc. It was fun, until it wasn’t anymore.

I would have given a lot to have tools for test orchestration back then. A lot has happened since then. This paper presents a great overview of how testing cost is estimated. I know, it’s not orchestration, but hear me out. I like this paper because it also shows which tools are used, how test cost is estimated (e.g. based on metrics like coverage and effort) and how the tests are evaluated.
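As a toy illustration of metric-based estimation (entirely my own, not a model from the paper), a coverage-driven effort estimate could look like this:

```python
# Toy cost model: effort grows with the number of test cases, and pushing
# the coverage target toward 100% gets disproportionately expensive.
# All coefficients are made up for the example.

def estimate_test_effort(num_test_cases: int,
                         coverage_target: float,
                         hours_per_case: float = 0.5) -> float:
    """Estimate test effort in person-hours (toy model)."""
    base_effort = num_test_cases * hours_per_case
    coverage_penalty = 1.0 / (1.0 - min(coverage_target, 0.99))
    return base_effort * coverage_penalty

print(estimate_test_effort(num_test_cases=200, coverage_target=0.8))  # 500.0
```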

I recommend this reading as an overview, a starting point for understanding today’s testing processes and, eventually, for optimizing them based on the right premises (not the HiPPO, the highest-paid person’s opinion).

If you want to test your CI test prioritization

Image by Marc Pascual from Pixabay

Jin_Servant_ICSE21_AE.pdf (vt.edu)

Many companies talk about using AI in their software engineering processes. However, they have problems sharing their data with researchers and students. The legal processes around open-sourcing data were, and still are, scary. Setting up internal collaborations is time-consuming and therefore requires extra effort.

So, this is a great example of replicating industrial set-ups in the open source community. I’ll use these data sets in my work and I’d love to see more initiatives like this.

Our team is working on one of those at the moment…

AI for decision makers…

Image by Gerd Altmann from Pixabay

In the last post of 2020, I would like to wish everyone a Merry X-Mas and a fantastic 2021. Well, I guess a normal 2021 would also work.

I would like to thank all my collaborators so far. I hope that I have contributed to your work at least half as much as you have contributed to mine.

To end on a positive note, if you are interested in how to use AI for making decisions – here is the link to the seminar material that I developed together with GUSEE (GU executive education school): AI for Decision Makers – GU Play, Göteborgs universitet

Using my skillset to do something different helps me reinvent myself and have more fun…

Image by Pexels from Pixabay

2020 was a year like no other. Everyone can agree on that. The pandemic changed our lives a lot: the pace of digitalization has gone from a tortoise to a SpaceX rocket!

For me, this year has also changed a lot of things. I’ve moved into a new field: medical signal analysis using ML. I realized that my skillset can be used to help people. Maybe not the ones hit by the pandemic, but still people who need our help.

Together with a team of great specialists from the Sahlgrenska University Hospital, we managed to create a set-up for collecting data in the operating room, tagging it and then, finally, using ML.

In the last three months, we managed to go from zero to three articles in the making, data collected from several patients, fantastic accuracy and a great deal of fun.

Here is the link to the movie that describes our work: CHAIR – GU Play, Göteborgs universitet

I’ve reflected upon this project and it’s probably the project I had the most fun with during 2020. It’s a completely new set-up, a great team, extreme energy in the work and a great deal of meaning behind it.

The project was partially sponsored by the Chalmers CHAIR initiative. Thank you!

Is noise important in SE?

https://www.researchgate.net/profile/Khaled_Al-Sabbagh/publication/344190831_Improving_Data_Quality_for_Regression_Test_Selection_by_Reducing_Annotation_Noise/links/5f5a167aa6fdcc116404d72b/Improving-Data-Quality-for-Regression-Test-Selection-by-Reducing-Annotation-Noise.pdf

Image by F. Muhammad from Pixabay

Machine learning and deep learning are only as good as the data used to train them. However, even the best data sources can lead to data of non-optimal quality. Noise is one example of such data problems.

Our research team has studied the impact of noise on machine learning in software engineering, mostly on testing data. In this paper we present one technique to identify noise, measure it and reduce it. There are several ways to handle noise, but we use one of the more robust ones: removing the noisy examples.
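For a flavor of what noise removal can look like, here is a minimal sketch of one common strategy: dropping examples whose labels disagree with a cross-validated classifier. This is my own illustration, not the paper’s exact algorithm:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def remove_label_noise(X: np.ndarray, y: np.ndarray):
    """Drop examples whose labels disagree with cross-validated predictions."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    predicted = cross_val_predict(clf, X, y, cv=5)
    keep = predicted == y  # keep only examples the model agrees with
    return X[keep], y[keep]

# Usage with synthetic data: inject label noise, then filter it out.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
y[:10] = 1 - y[:10]  # flip 10 labels to simulate annotation noise
X_clean, y_clean = remove_label_noise(X, y)
print(len(y), "->", len(y_clean), "examples after filtering")
```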

I recommend taking a look at how the algorithms work, and let us know if you find them interesting!

Classifying code smells…

https://link-springer-com.ezproxy.ub.gu.se/article/10.1007%2Fs11219-020-09498-y

Image by Comfreak from Pixabay

Code smells are quite interesting phenomena to study. They are not really defects, but they are not good code either. They exist, but people rarely want to admit to them. There is also no consensus on how much effort it takes to remove them (or even whether they should be removed or just avoided).

In this paper, the authors study whether it is possible to use ML to find code smells. It turns out it is possible, and the accuracy is quite high (over 95%). The paper also shows that it is sometimes better to show several recommendations (e.g. two potential smells) rather than one: it requires less accuracy per recommendation, but helps the users narrow down their solution spaces.
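As a hedged sketch of the idea (synthetic data and made-up metric features, not the paper’s setup), a smell classifier that returns the top two recommendations could look like this:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for code metrics (e.g. size, complexity, coupling);
# labels are three made-up smell classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))
y = np.digitize(X[:, 0] + X[:, 1], bins=[-0.5, 0.5])  # classes 0, 1, 2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("top-1 accuracy:", clf.score(X_te, y_te))

# Top-2 recommendations: the true smell only needs to be among the two
# most probable classes, an easier target than a single guess.
proba = clf.predict_proba(X_te)
top2 = np.argsort(proba, axis=1)[:, -2:]
top2_hit = np.mean([y_te[i] in top2[i] for i in range(len(y_te))])
print("top-2 hit rate:", top2_hit)
```

The top-2 hit rate is typically higher than the top-1 accuracy, which is exactly why showing two candidates can be more helpful than one.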

The truth… or how things can be untrue

Data veracity is the degree to which data corresponds to the true values. The concept comes from the metrological notion of “measurement trueness”, which is the degree to which a measurement quantifies the value correctly.

Well, that sounds very simple, but it is in fact quite complex. In our previous work, we scrutinized what it means to have veracious data in transport systems (https://ieeexplore.ieee.org/abstract/document/7535482). It turns out that “lying” is not the only option here.

In this book, the author looks into the ways in which things can be untrue. Sometimes deliberately, by lying; sometimes by mistake. Sometimes, as we learn in the last chapter (about the Brazilian aardvark), a mistake can actually end up being accepted as truth over time.

I recommend the book as it is written in a fantastic manner, providing examples from the real world (e.g. the alleged drone sightings over Gatwick in 2018). It even goes a bit further and discusses the need for replication of studies, and for more funding to make scientific results more solid and robust.

How bugs are born: a model to identify how bugs are introduced in software components (review)

https://link-springer-com.ezproxy.ub.gu.se/content/pdf/10.1007/s10664-019-09781-y.pdf

Image by GLady from Pixabay 

I came across this article from Empirical Software Engineering and it caught my attention. It describes a study of how to identify where a bug was introduced.

The article accurately observes that defects are most often fixed in a place where they were NOT introduced. So, the question is whether we can find where the defects were introduced.

Several studies have focused on understanding which release/commit introduced a specific defect. This article describes how to find that particular release. It is based on a theoretical framework of perfect tests, i.e. tests which can capture defects in the releases where they were introduced. The authors evaluate four different algorithms on two different open source projects. Their findings show that it is possible, to some extent, to find the right release where the bug was introduced. Knowing the release, and knowing which changes were introduced into it, makes it possible to narrow down the piece of code that contains the bug.
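For context, here is a simplified sketch in the spirit of the classic SZZ approach (my own illustration, not this paper’s perfect-test framework): starting from a bug-fixing commit, blame the lines it changed to collect candidate bug-introducing commits.

```python
import re
import subprocess

def run_git(repo: str, *args: str) -> str:
    """Run a git command in the given repository and return its stdout."""
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True, check=True).stdout

def candidate_introducing_commits(repo: str, fix_commit: str) -> set[str]:
    """Blame the lines removed/modified by a bug fix to find candidates."""
    candidates: set[str] = set()
    files = run_git(repo, "diff", "--name-only",
                    f"{fix_commit}^", fix_commit).split()
    for path in files:
        diff = run_git(repo, "diff", "-U0",
                       f"{fix_commit}^", fix_commit, "--", path)
        # Hunk headers look like '@@ -12,3 +12,4 @@'; the '-' side gives
        # the pre-fix line range touched by the fix.
        for start, count in re.findall(r"^@@ -(\d+)(?:,(\d+))?", diff, re.M):
            n = int(count) if count else 1
            if n == 0:
                continue  # pure addition: no pre-fix lines to blame
            end = int(start) + n - 1
            blame = run_git(repo, "blame", "-l", "-L", f"{start},{end}",
                            f"{fix_commit}^", "--", path)
            for line in blame.splitlines():
                candidates.add(line.split()[0].lstrip("^"))
    return candidates
```

Real implementations also filter out cosmetic changes and track moved lines; this is only the basic flavor.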

Very interesting work and looking forward to more studies in this area, in particular in the area of proprietary software!