Automating the Measurement of Heterogeneous Chatbot Designs (paper review)

Image by NPXL_Studio from Pixabay

Paper from: http://miso.es/pubs/ACMSAC_2022.pdf

Chatbots have gained importance in recent years, which has resulted in the development of several chatbot platforms (like Amazon Lex, Google DialogFlow or IBM Watson). However, there are only a few studies related to the quality assurance of chatbots. The paper by Pablo C. Cañizares, Sara Pérez-Soler, Esther Guerra and Juan de Lara addresses just this problem – how to guide the testing of chatbots by using design metrics.

The paper proposes six global metrics (e.g., number of intents of the bot), eight intent metrics (e.g., number of training phrases per intent), three entity metrics (e.g., word length), and three flow metrics (e.g., conversation length). By measuring these metrics, chatbot designers can predict three aspects of usability – effectiveness, efficiency and satisfaction. To support the measurement process, the paper proposes a tool, available on GitHub, which practitioners can use. For some of the metrics, the tool employs machine learning and natural language processing. The metrics and the tool are evaluated on twelve chatbot designs, where the tool identified quality issues in terms of readability, conversation complexity, user experience and bot understanding. This demonstrates the usefulness of the tool in practice and shows how these metrics can help software developers design high-quality bots.

The metrics from the paper are:

  • INT – # intents
  • ENT – # user-defined entities
  • FLOW – # conversation entry points
  • PATH – # different conversation flow paths
  • CNF – # confusing phrases
  • SNT – # positive, neutral, negative output phrases
  • TPI – # training phrases per intent
  • WPTP – # words per training phrase
  • VPTP – # verbs per training phrase
  • PPTP – # parameters per training phrase
  • WPOP – # words per output phrase
  • VPOP – # verbs per output phrase
  • CPOP – # characters per output phrase
  • READ – reading time of the output phrases
  • LPE – # literals per entity
  • SPL – # synonyms per literal
  • WL – word length
  • FACT – # actions per flow
  • FPATH – # conversation flow paths
  • CL – conversation length

I will try to use these metrics if I ever write a chatbot 🙂
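
To get a feel for how a few of these could be computed, here is a minimal sketch in Python. It assumes a hypothetical, simplified chatbot design (a dictionary of intents with training phrases) – it is not the authors' tool from GitHub, just an illustration of the idea behind INT, TPI and WPTP.

```python
# Minimal sketch: computing a few design metrics (INT, TPI, WPTP)
# for a hypothetical, simplified chatbot design. Not the authors' tool.
from statistics import mean

# Hypothetical design: intent name -> list of training phrases
design = {
    "greeting": ["hi", "hello there", "good morning"],
    "order_pizza": ["I want a pizza", "order a large pepperoni pizza"],
}

int_metric = len(design)                                         # INT: number of intents
tpi = {name: len(phrases) for name, phrases in design.items()}   # TPI: training phrases per intent
wptp = {name: mean(len(p.split()) for p in phrases)              # WPTP: words per training phrase
        for name, phrases in design.items()}

print(f"INT = {int_metric}")
print(f"TPI = {tpi}")
print(f"WPTP = {wptp}")
```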

What will shape the future of automotive software (engineering)?

Image by Jordan Holiday from Pixabay

Based on the following article + my own thoughts: D08042936-with-cover-page-v2.pdf (d1wqtxts1xzle7.cloudfront.net)

It’s been a while since I’ve written about automotive software, but that does not mean that nothing has happened. During the pandemic, car manufacturers suffered great losses caused by the global semiconductor shortage, a lack of workforce due to lockdowns, and the overall slowdown of development due to working from home.

There are a few trends that shape and will continue to shape the automotive sector. The first one is electrification – as the world moves away from fossil fuels, more cars will run on electricity. For the software, this means fewer components steering the powertrain, fewer communication buses, and lower complexity. It also means that we have some spare computing power for more advanced functionality.

Now, this advanced functionality can come from autonomous driving, which is still an important trend. However, it may also come from increased connectivity and an increased number of smart functions (the ones using machine learning). The ability to develop software that utilizes this new power will decide whether a given car is popular or not. At the end of the day, consumers do not want boring cars with bare-minimum functionality. Cars are great, they need to be driven, and their driving needs to be fun!

The last trend is the ability to utilize cooperative driving (which the article above tackles). To make things work smoothly, we need to coordinate. We can save fuel/energy if we calculate the exact time for one bus to arrive and the next one to leave – that requires coordination. The same goes for trucks, taxis, etc. This increased cooperative driving will also increase the complexity of software and put more requirements on its dependability – as a single failure can propagate further than before.

Do explicit review strategies improve code review performance?

Image by Pixabay

Do explicit review strategies improve code review performance? Towards understanding the role of cognitive load | SpringerLink

I’ve written a lot about code reviews, and I’ve done my share of experimentation in software engineering. When I started my career, inspections (like Fagan-style code inspections) were the primary subject of experiments. It was how I learned to experiment, although I never ran experiments on code inspections myself.

So, when I got my hands on this article, I thought it would be just more of the same, but in a different context – whether guided reading actually improves the effectiveness and efficiency of code review processes. Effectiveness and efficiency are measured in the standard way – using defects as the output of the review process. But there is something new about this study.

First of all, this is a study done with professional developers. The authors designed an experiment and employed professional, though junior, developers to conduct it. Second, this is an experiment in the context of modern code reviews (Git, Gerrit, that sort of thing). Third, the results are not that convincing anymore.

I encourage you to read the entire paper, but let’s dive a bit deeper into some of the results. For example, the experiment found that guidance is not always better. It adds cognitive load (the reviewers need to understand the guidance as well as the code), and it can be downright misleading. It does pay off, however, for longer and more complex code fragments.

The experiment also found that the complexity of the actual guidance (the checklist) plays an important role – shorter, less cognitively demanding lists are preferred. This is an important finding because, to the best of my knowledge, no one has said that before. Checklists and perspective-based reading techniques assumed that more extra information equals better results. This experiment suggests that well-balanced information is better than more information. I know, it seems kind of obvious when you think about it, but it was not really considered until now.

Finally, the most significant factor found in this experiment was that it is the understanding of the code that makes a review better or worse, not the guidance. At least not guidance on a general level (like “Are all data types declared correctly?”).

What I make of that is that there is nothing that substitutes for knowledge. If you want to get something done, you need to put in the hours.

I know, kids may not like it….

Understanding anomalies in software data

Image by Pixabay

Identifying and classifying anomalies in software engineering data is a well-established field. Using ML to identify intrusion attacks, credit card fraud or defects in production systems – these are just a few examples of how broad the field is. Wherever we have data, we can have anomalies.

In our project together with Sahlgrenska University Hospital and the Chalmers AI Research Centre (CHAIR), anomalies come in two shapes. One type of anomaly is a disturbance in radio networks, such as rain or wind. The other type is a specific kind of event during surgeries, such as clamping of the carotid artery.

Both types of anomalies have similarities, but also differences, which gives us an opportunity to study which anomaly detection algorithms work best. We tried both generic ML algorithms and domain-specific ones. Well, spoiler alert – not much has actually worked.

What works, on the other hand, is when we pivot on the problem. Instead of identifying anomalies, we can search for anomalies of a specific type. Instead of defining an anomaly as something deviating from the normal operations, we can say that we look for specific, though rare, events.
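
Here is a minimal sketch of what this pivot can look like in code, assuming we have (hypothetical) labels for the specific event we care about. It contrasts a generic detector with a supervised rare-event classifier – an illustration, not our actual pipeline:

```python
# Sketch: generic anomaly detection vs. searching for a specific, rare event.
# X is a feature matrix, y marks the specific event we care about (synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))           # stand-in for radio-network or surgery features
y = (X[:, 0] > 2.5).astype(int)          # a rare event, well under 1% of the samples

# Generic approach: "anomaly = anything that deviates from normal operations"
generic = IsolationForest(contamination=0.01, random_state=0).fit(X)
generic_flags = generic.predict(X) == -1   # True where a point looks anomalous

# Pivoted approach: "look for this specific, though rare, event"
clf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X, y)
specific_flags = clf.predict(X) == 1

print("generic detector flagged:", generic_flags.sum(), "points")
print("specific classifier flagged:", specific_flags.sum(), "points")
```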

So far, we can identify anomalies pretty well, and we are working on getting better at classifying them automatically. So stay tuned if you would like to know more.

Testing of ML systems

Image by OpenClipart-Vectors from Pixabay

Smoke testing for machine learning: simple tests to discover severe bugs | SpringerLink

Machine learning systems are very popular today, at least when it comes to research applications. They are not as popular as one would wish (or like) in real applications. One of the reasons is that they are hard to test. We do not know how to check whether an algorithm will behave as expected in all similar situations – well, we do not even know which situations are similar for us and for the ML system.

This paper looks at the problem from a different angle. The research question is: “What are simple and generic software tests that are capable of finding bugs and improving the quality of machine learning algorithms?”

The authors developed a set of smoke tests, which they argue all ML algorithms should pass. The paper is quite exhaustive and, if you are interested, I recommend taking a look at this table:

Table 1 | Smoke testing for machine learning: simple tests to discover severe bugs | SpringerLink
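
To give a flavour of what such smoke tests can look like, here is a minimal sketch in pytest against a scikit-learn classifier. These are my own illustrative tests in the spirit of the paper, not the authors' exact suite:

```python
# Illustrative smoke tests in the spirit of the paper (not the authors' suite):
# feed an ML algorithm degenerate but legal inputs and check it does not blow up.
import numpy as np
import pytest
from sklearn.linear_model import LogisticRegression


def make_model():
    return LogisticRegression(max_iter=1000)


def test_handles_constant_feature():
    # Features with zero variance should not crash training.
    X = np.ones((20, 3))
    y = np.array([0, 1] * 10)
    make_model().fit(X, y)


def test_handles_single_sample_prediction():
    # Predicting on a single sample should return exactly one label.
    X = np.random.rand(50, 4)
    y = (X[:, 0] > 0.5).astype(int)
    model = make_model().fit(X, y)
    assert model.predict(X[:1]).shape == (1,)


def test_rejects_nan_input():
    # NaNs should be rejected loudly, not silently produce garbage.
    X = np.random.rand(20, 4)
    y = np.array([0, 1] * 10)
    X[0, 0] = np.nan
    with pytest.raises(ValueError):
        make_model().fit(X, y)
```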

I love the article. It is simple, to the point and very applied. I’m going to use that in my tests of ML algorithms in the future.

How good are language models for source code tasks?

https://ieeexplore-ieee-org.ezproxy.ub.gu.se/document/9653849

The use of machine learning, and deep learning in particular, for software engineering tasks has exploded recently. I would say that it has exploded a bit too much. I am partly to blame here myself, as our team was one of the early adopters with the CCFlex model and source code analysis.

Well, this paper compares a number of modern deep learning models, so-called transformers, on various code and comment analysis tasks. The authors did a great job of collecting a set of models and datasets, training them and critically evaluating their performance.

I recommend reading the entire paper, but what they found was a bit surprising to me. First of all, they found that transformer models are better at natural language than at source code analysis. The hypothesis is that the structure of programs matters here. They also found that pre-training is important, but not crucial – it contributes only a moderate effect in the end. The dataset, and its content, is much more important for the task at hand.
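
As a small illustration of the kind of models being compared, here is a sketch that embeds a code snippet and a comment with a pretrained transformer and compares them. The checkpoint is just one example of a code-and-text model; this is not the evaluation setup from the paper:

```python
# Sketch: embedding a code fragment and a comment with a pretrained transformer
# (checkpoint chosen as an example; not the paper's evaluation setup).
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "microsoft/codebert-base"   # example model pretrained on code + text
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

code = "def add(a, b):\n    return a + b"
comment = "Adds two numbers and returns the result."

def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)   # mean-pooled embedding

similarity = torch.cosine_similarity(embed(code), embed(comment), dim=0)
print(f"code-comment similarity: {similarity.item():.3f}")
```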

This is a great paper, and I hope it becomes essential reading for software engineers working with AI systems that support software engineering tasks.

Review4Repair – article review

https://www.sciencedirect.com/science/article/pii/S0950584921002111?casa_token=D32qAALSwPQAAAAA:2Vm5zjUzFncLhQ_eZTUueefRqllb8fwBEnfWJfbNO_TYMcIjuumTBL9wWCLkscBko53FPR5cRQ

Reviewing source code is something I have talked about a few times already. It is an activity that is almost as old as software engineering itself. Back in the beginning, it was done manually just before a release, as a complement to testing.

Then came Google with their software engineering practices, something that is known today as “shift left”, meaning that software quality assurance activities should happen close to when the source code is actually developed. Then came Microsoft with “modern code reviews”, which advocated reviewing code before it is committed to the main branch.

Now, we are pretty good at reviewing source code; it is an activity that is done rather quickly. As the amount of data from code reviews grows, we are getting more eager to use AI and ML for this task. This article is a very good example of that. The authors leverage seq2seq code-summarization techniques to match source code with review comments and then with code repair suggestions. The results are promising and show that they are able to provide a relevant suggestion in 1 out of 5 cases – and in 1 out of 3 if we consider the top 10 suggestions.
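
To make the framing concrete, here is a sketch of how such a seq2seq repair model could be queried at inference time: review comment plus code in, candidate repairs out. The checkpoint name below is hypothetical – the authors publish their own data and code, so treat this purely as an illustration of the input/output framing:

```python
# Sketch of the seq2seq framing: review comment + code in, repair suggestions out.
# The checkpoint name is hypothetical; see the paper's repository for the real models.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "your-org/code-repair-seq2seq"   # hypothetical fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

review_comment = "Use a named constant instead of the magic number."
code_fragment = "timeout = 86400"

# The review comment and the code are concatenated into one source sequence.
source = f"comment: {review_comment} code: {code_fragment}"
inputs = tokenizer(source, return_tensors="pt", truncation=True)

# Generate up to 10 candidate repairs (the paper reports top-10 accuracy).
outputs = model.generate(**inputs, num_beams=10, num_return_sequences=10, max_length=64)
for i, candidate in enumerate(tokenizer.batch_decode(outputs, skip_special_tokens=True), 1):
    print(f"{i:2d}: {candidate}")
```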

I’ve read this article with huge interest, and I will try this myself. All data and code are publicly available, which lets you play around with this technology on your own system. Spoiler alert, though: training takes one week per pass on a TPU.

Reviewing rounds prediction for code patches

Image by Pixabay

Reviewing rounds prediction for code patches | SpringerLink

Understanding how reviews of source code are done seems to be one of my main interests recently. Partly because reviews are important for software quality while also taking time. Partly, also, because I think it is interesting to check whether we can quantify a good opinion from a good software developer.

In this paper, the authors study to what degree one can predict how many comments a given patch will receive. Now, this problem may not be the most exciting one, but it attracted my attention because the authors studied the same projects as we did. However, in contrast to our work, they also take into account features that characterize developer networks – for example, experience in commenting on patches or how well-connected a developer is.

Now, to the results. The models presented in this paper seem to be quite good at predicting patches within the same project – all kinds of predictions have pretty good F1-scores, above 65%. This means that we can train these models on our own projects and predict whether a particular patch will be commented on once, twice, or even many times.

The cross-project dataset is a different story. There, the performance is OK for predicting whether a particular patch will be commented on at all. Predicting how many times the patch will be commented on, or whether it will be commented on many times, does not work very well. The performance measures oscillate close to the 0% mark, which means that the models are no better than guessing. I guess you cannot have it all… from one model.

To sum up, the reason I read this article in more detail than others was essentially not the performance, but trying to understand the underlying techniques they use. They use a good set of features, which I recommend others to use (and will definitely use myself in future studies), and they use simple language models, like word2vec, to understand the programming language. What I miss, though, is scrutiny of whether there is a statistically significant dependency between the sentiment (or even its strength) and the length of the discussion.
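
Here is a minimal sketch of how I imagine such a setup – word2vec embeddings of the patch text combined with simple developer features, fed into a classifier and evaluated with F1. The features and data are made up for illustration; this is not the authors' pipeline:

```python
# Sketch: word2vec patch embeddings + simple developer features -> predict review rounds.
# Features and data are made up for illustration; not the authors' pipeline.
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Toy corpus of tokenized patches (in reality: diffs/commit messages from the project).
patches = [["fix", "null", "pointer", "check"], ["add", "logging", "to", "service"],
           ["refactor", "database", "layer"], ["update", "dependency", "version"]] * 50
rounds = np.random.RandomState(0).randint(1, 4, size=len(patches))  # once, twice, or many

w2v = Word2Vec(sentences=patches, vector_size=32, window=3, min_count=1, seed=0)

def embed(tokens):
    # Average the word vectors of a patch into one fixed-size vector.
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

# Hypothetical developer-network features: reviewing experience, number of co-reviewers.
dev_features = np.random.RandomState(1).rand(len(patches), 2)
X = np.hstack([np.vstack([embed(p) for p in patches]), dev_features])

X_tr, X_te, y_tr, y_te = train_test_split(X, rounds, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("macro F1:", round(f1_score(y_te, clf.predict(X_te), average="macro"), 3))
```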

Rationality by Steven Pinker

Rationality: What It Is, Why It Seems Scarce, Why It Matters : Pinker, Steven: Amazon.se: Böcker

I’ve read this book because I wanted to get some inspiration from the social sciences. What I ended up with is a book about Bayesian statistics and its consequences. To some extent, I’m happy to have read it, because I got a better view of how to use Bayesian statistics in practice. Yet, I am a bit disappointed. Not with the statistics, but with the book.

A while back I read the book by Judea Pearl about Bayesian networks and their role in statistics. That book was about something new and felt fresh. It caught my interest, and I felt refreshed after reading it. This book was a bit too much of a repetition. Don’t get me wrong, I like repetition and I like the style of Steven Pinker (he is one of my role models when it comes to academic writing).

The book is about making inferences by calculating probabilities using Bayes’ theorem. It explains this very well and shows how such reasoning is used and misused in modern society – from politics to medical diagnoses. The book also shows a few tricks for making better decisions and inferences.
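
The classic example of this kind of reasoning is the base-rate problem in medical diagnosis. A quick sketch with illustrative numbers (mine, not the book’s):

```python
# Bayes' theorem on a classic base-rate example (illustrative numbers, not the book's).
# How likely is the disease given a positive test?
p_disease = 0.01              # prior: 1% of the population has the disease
p_pos_given_disease = 0.90    # sensitivity of the test
p_pos_given_healthy = 0.09    # false-positive rate

p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_positive

print(f"P(disease | positive test) = {p_disease_given_pos:.1%}")  # about 9%, not 90%
```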

I guess I need to read it once more in order to get a better taste of its distinct flavor.

A Thousand Brains theory, book review

Image by Pixabay

A Thousand Brains: A New Theory of Intelligence : Hawkins, Jeff: Amazon.se: Böcker

This is a new book, written by one of the people behind the Palm Pilot, who is both an engineer and a neuroscientist. The book proposes a theory of how to describe the neocortex and its functions.

Why is the neocortex so important, one may ask? It holds our intelligence and our consciousness. Some would say that it is the part that defines us as humans, the part that allows us to be aware and intelligent.

The interesting part of this book is that it attempts to provide guidelines on the future of machine intelligence and machine learning. It shows different paths to achieve AGI (Artificial General Intelligence): either by developing a lot of specialized models and wiring them together, or by making one large model for everything.

These two approaches are already present in the modern AI community. The latter one (large model for everything) can be seen in the work of OpenAI and GPT-3. The scientists behind that model train it on large corpora of text, hoping that it can understand our natural language and execute our commands. Well, for now it is mostly about creating programs.

The former approach is closer to the original idea of AI and ML – training models for specific tasks, such as image recognition, classification or text translation. This is where most of the current research lies and where we have observed the latest breakthroughs – AlphaGo, AlphaStar.

However, the thesis of the book is that the one-large-model approach is more natural, more similar to how our brain works. The theory of how the neocortex stores, frames and recalls information is the core of what we need to understand in order to make it work in practice.

Well, there is more to the book than I can write in a blog post, so I strongly recommend reading it and reflecting on how we use ML and AI today. I’m going to try a few of these ideas in 2022!