Language models in Software Engineering (new paper review)

Image by Lorenzo Cafaro from Pixabay

Article available at: https://arxiv.org/pdf/2205.11739.pdf

It’s no secret that I’ve been fascinated by modern, BERT-like language models. I’ve seen what they can do and how they operate, and I use them in two of my research projects. So, when this paper came around, I read it right away.

The paper gives an overview of the tasks for which language models are used in software engineering today. The list is long and contains a variety of tasks, e.g., code-to-code retrieval, source code repair, and bug finding/fixing. There are a lot of these tasks but, IMHO, they are rather low-level ones. There are no tasks that attempt to understand code at the design level, for example checking whether we can really see a specific design in the code.

The paper also shows which models are used and provides references to them. They list 20 models, together with the tasks for which they were trained and the datasets they were trained on. Fantastic!

I need to dive deeper into these models, but I’m super happy that such a list now exists and that language technology now constitutes a significant body of work in software engineering.

Automating the Measurement of Heterogeneous Chatbot Designs (paper review)

Image by NPXL_Studio from Pixabay

Paper from: http://miso.es/pubs/ACMSAC_2022.pdf

Using chatbots has gained importance in recent years, which has resulted in the development of several chatbot platforms (like Amazon Lex, Google DialogFlow or IBM Watson). However, there is a limited number of studies related to quality assurance of chatbots. The paper by Pablo C. Cañizares, Sara Pérez-Soler, Esther Guerra and Juan de Lara addresses just this problem – how to guide the testing of chatbots by using design metrics.

The paper proposes six global metrics (e.g., number of intents of the bot), eight intent metrics (e.g., number of training phrases per intent), three entity metrics (e.g., word length), and three flow metrics (e.g., conversation length). By measuring the values for these metrics, software designers of chatbots can predict three usability types – effectiveness, efficiency and satisfaction. To support the measurement process, the paper proposes a tool, available on GitHub, which can be used by practitioners. For some of the metrics, the tool employs machine learning and natural language processing. The metrics and the tool are evaluated on twelve chatbot designs. The tool could identify quality issues in terms of readability, conversation complexity, user experience and bot understanding. This demonstrates the usefulness of the tool in practice and how these metrics can help software developers in designing high-quality bots.

The metrics from the paper are:

  • INT – # intents
  • ENT – # user-defined entities
  • FLOW – # conversation entry points
  • PATH – # different conversation flow paths
  • CNF – # confusing phrases
  • SNT – # positive, neutral, negative output phrases
  • TPI – # training phrases per intent
  • WPTP – # words per training phrase
  • VPTP – # verbs per training phrase
  • PPTP – # parameters per training phrase
  • WPOP – # words per output phrase
  • VPOP – # verbs per output phrase
  • CPOP – # characters per output phrase
  • READ – reading time of the output phrases
  • LPE – # literals per entity
  • SPL – # synonyms per literal
  • WL – word length
  • FACT – # actions per flow
  • FPATH – # conversation flow paths
  • CL – conversation length

I will try to use these metrics if I ever write a chatbot 🙂
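Just to show how lightweight these measurements can be, here is a minimal sketch of how three of them (INT, TPI and WPTP) could be computed from a Dialogflow-style intent export. The data structure and the values are my own illustration, not the paper’s tool:

```python
# Minimal sketch: computing INT, TPI and WPTP from a Dialogflow-style export.
# The dictionary layout below is my own assumption for illustration,
# not the format used by the paper's tool.
from statistics import mean

chatbot = {
    "intents": [
        {"name": "order_pizza",
         "training_phrases": ["I want a pizza", "order a margherita please"]},
        {"name": "opening_hours",
         "training_phrases": ["when are you open", "opening hours", "are you open now"]},
    ]
}

INT = len(chatbot["intents"])                                   # number of intents
TPI = [len(i["training_phrases"]) for i in chatbot["intents"]]  # training phrases per intent
WPTP = [len(p.split()) for i in chatbot["intents"]              # words per training phrase
        for p in i["training_phrases"]]

print(f"INT = {INT}")
print(f"TPI (mean) = {mean(TPI):.1f}")
print(f"WPTP (mean) = {mean(WPTP):.1f}")
```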

Testing of ML systems

Image by OpenClipart-Vectors from Pixabay

Smoke testing for machine learning: simple tests to discover severe bugs | SpringerLink

Machine learning systems are very popular today, at least in research applications. They are not as popular as one would wish in real applications. One of the reasons is that they are hard to test. We do not know how to check whether an algorithm will behave as expected in all similar situations – well, we do not even know which situations are similar for us and for the ML system.

This paper looks at the problem from a different angle. The research question is: What are simple and generic software tests that are capable of finding bugs and improving the quality of machine learning algorithms?

The authors developed a set of smoke tests which, they argue, all ML algorithms should pass. The paper is quite exhaustive and, if you are interested, I recommend taking a look at this table:

Table 1 | Smoke testing for machine learning: simple tests to discover severe bugs | SpringerLink

I love the article. It is simple, to the point and very applied. I’m going to use that in my tests of ML algorithms in the future.
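To give a flavour of the idea, here is a sketch of one generic smoke test of my own making (not one taken from the paper’s catalogue): the classifier should train without crashing and return valid labels even on tiny, constant or extreme-valued inputs.

```python
# Sketch of a generic smoke test for a classifier: it should not crash and
# should return valid labels on tiny, constant and extreme-valued inputs.
# This is my own illustration, not one of the tests from the paper's table.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def smoke_test_classifier(make_clf):
    rng = np.random.default_rng(0)
    cases = {
        "tiny":     rng.normal(size=(4, 3)),
        "constant": np.zeros((20, 3)),
        "extreme":  rng.normal(size=(20, 3)) * 1e12,
    }
    for name, X in cases.items():
        y = rng.integers(0, 2, size=len(X))
        clf = make_clf()
        clf.fit(X, y)                      # must not raise
        pred = clf.predict(X)
        assert set(pred) <= set(y), name   # predictions must be known labels
    print("smoke tests passed")

smoke_test_classifier(lambda: RandomForestClassifier(n_estimators=10, random_state=0))
```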

How good are language models for source code tasks?

https://ieeexplore-ieee-org.ezproxy.ub.gu.se/document/9653849

Using machine learning, and deep learning in particular, for software engineering tasks has exploded recently. I would say that it has exploded a bit too much. I’m partly to blame here myself, as our team was one of the early adopters with the CCFlex model and source code analysis.

Well, this paper compares a number of modern deep learning models, so-called transformers, on various code and comment analysis tasks. The authors did a great job of collecting a set of models and datasets, training the models, and critically evaluating their performance.

I recommend reading the entire paper, but what they found was a bit surprising to me. First of all, they found that the transformer models are better suited for natural language than for source code analysis. The hypothesis is that the structure of programs matters here. They also found that pre-training is important, but not crucial; it contributes only a moderate effect in the end. The dataset, and its content, is much more important for the task at hand.

This is a great paper and I hope it becomes essential reading for software engineers working with AI systems that support software engineering tasks.

autoML – let’s talk about it…

Image from Pixabay

AutoML – a promise of green pastures, less work, optimal results. So, is it really like that? In this post I share my view on this and my experience from running a first test with it.

First of all, let’s be honest, there is no such thing as a free lunch. In the case of AutoML (auto-sklearn), the price tag comes first with the effort, skills and time needed to install it and make it work. The second is the performance… It’s painfully slow compared to your own models, simply because it tests a lot of models here and there. It also takes a lot of time to download and to get working.

But, first things first, let me tell you where I started. I used the data from the MicroHRV project (3. MicroHRV: Recognizing Rare Events in Microwave Radio Links and Intensive Care Units using Machine Learning – Software Center (software-center.se)). The data is from patients being operated on to remove blood clots from the brain (as dangerous as it may sound, the actual procedure is planned and calm). I wanted to check whether AutoML could do better than what we have at the moment.

What we have at the moment (for that particular dataset) is: Accuracy: 0.98, Precision: 0.98, Recall: 0.98 – using a Random Forest classifier. So, this is actually already very good. For the medical domain, it is in a class of its own, given that our previous studies ended up with ca. 0.7 in accuracy at best.

When it comes to installing AutoML – if you like Stack Overflow, downgrading, upgrading, compiling, etc. and run Windows 10, then it’s your heaven. If you run Linux – no problem. Otherwise – stick to manual analyses :)

After two days (and nights) of trying, the best configuration was:

  • WSL – Windows Subsystem for Linux
  • Ubuntu 20, and
  • countless OSS libraries

It takes a while to get it to work; the question is whether the results are good enough…
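For reference, the comparison itself boils down to a few lines. Here is a minimal sketch of the kind of run I did – the data generation is only a stand-in for the MicroHRV features, and the time budgets are illustrative, not tuned values:

```python
# Sketch of the AutoML vs. Random Forest comparison.
# The make_classification call is only a stand-in for the MicroHRV features.
import autosklearn.classification
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)  # stand-in data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Baseline: the manually chosen Random Forest.
rf = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# auto-sklearn: give it a three-hour budget and let it search through candidate pipelines.
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=3 * 3600,  # total search budget in seconds
    per_run_time_limit=300,            # cap for a single candidate model
)
automl.fit(X_train, y_train)

for name, model in [("Random Forest", rf), ("auto-sklearn", automl)]:
    pred = model.predict(X_test)
    print(f"{name}: "
          f"acc={accuracy_score(y_test, pred):.2f}, "
          f"prec={precision_score(y_test, pred, average='weighted'):.2f}, "
          f"rec={recall_score(y_test, pred, average='weighted'):.2f}")
```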

After three hours of waiting, a lot of heat from my laptop, and over 1,000 models tested, the result was: Accuracy: 0.91, Precision: 0.94, Recall: 0.91.

So, worse than my manual selection of models. I include the confusion matrices.

Confusion matrices: AutoML and Random Forest

The matrices are not that different, as the validation sets are not that large either. However, it seems that the RF is still better than the best model from autoML.

I need to work more on that and see if I did something wrong. However, I take this as a success – I’m still better than AutoML (so there is still some use for an old professor) – rather than a let-down of not getting better results.

At the end of the day, 0.98 in accuracy is still very good!

Reproducing AI models – a guideline

Image by Pete Linforth from Pixabay

2107.00821.pdf (arxiv.org)

Machine learning has been used in software engineering as a great tool for both research and development. The fact that we have access to TensorFlow, PyCharm, and other toolkits, provides almost endless possibilities. Combine that with the hundreds (if not thousands) of datasets from Zenodo and Co. and you can train a model for almost anything.

So far, so good, I would say. Problems (yes, there are always some problems) appear when we want to reproduce the results of others. Training a model on your own dataset and making it available is easy. Trusting such a model in a new context is not.

Imagine an ML model trained on data from Company X. We have probably tuned the parameters a lot, so the model works great there, but does it work for Company Y? Most probably it will not. Well, it will work, but the performance of its predictions is not going to be great.

So, Google has partnered with academic institutions to set up SIGMODELS and the TensorFlow Garden, initiatives that are aimed at making ML models more portable, experiments more replicable, and all the other goodies.

In this paper, the authors provide a set of checks which we can use to make the models more transparent, which is the first step towards reproducibility. In these guidelines, the authors advocate reporting the models’ architecture, their input and output structure, building blocks, loss functions, etc.
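Most of that structural information can be dumped straight from the model object. A minimal sketch with a toy Keras model (my own illustration, not the paper’s template):

```python
# Sketch: the structural information the guidelines ask for (architecture,
# input/output shapes, loss function) can be printed directly from the model.
# The model here is a toy, just to show the calls.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

model.summary()             # layers, shapes, parameter counts
print(model.loss)           # the loss function used
print(model.get_config())   # full building-block description
```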

Naturally, they also recommend reporting the metrics that were used to optimize the models, e.g., accuracy, F1-score, MCC or others. I know, these are probably essentials, but you would be surprised how many authors do not really report them. If they are omitted, how do we know whether the metrics were just so poor that the authors left them out (low performance of the model) or whether they are simply not relevant (low relevance of the metrics – which would be a good thing)?
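Reporting them costs next to nothing. A minimal sketch with scikit-learn, using made-up predictions just to show the calls:

```python
# Reporting the basic optimization metrics is a one-liner each (illustrative values).
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]

print(f"Accuracy: {accuracy_score(y_true, y_pred):.2f}")
print(f"F1-score: {f1_score(y_true, y_pred):.2f}")
print(f"MCC:      {matthews_corrcoef(y_true, y_pred):.2f}")
```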

For now, these guidelines are only a draft, but I hope that they will become more mainstream, just like the empirical guidelines from ACM (GitHub – acmsigsoft/EmpiricalStandards: Empirical standards for conducting and evaluating research in software engineering).

What will the future bring…

https://www.amazon.se/Brief-Answers-Big-Questions-Stephen/dp/1473695996/ref=sr_1_1?dchild=1&keywords=Hawking&qid=1625726909&sr=8-1

As my summer goes on, I’ve decided to take a look at a book by one of my favorite authors and scientists – Prof. Stephen Hawking. I loved his books when I was younger and I like the way he could bring difficult theories to the masses, like his famous “gray holes” as opposed to the black ones.

This book allowed me to reflect on some of the most common questions that people ask, even me. Like whether AI will take over or whether we should even invest in AI. Since I’m not a physicist, I cannot answer most of the questions, but I think the AI question is something that I can at least attempt.

So, will AI take over? Is GPT-3 something to worry about? Will we be out of work as programmers? Well, not really. I think that we live in a world that is very diverse and that we need human judgement to make sure that we can live on. Take the recent cyber attack on Kaseya, a US-based company with thousands of clients. The attack affected a minority of their clients, some 40 or so (if I remember the article correctly). However, it left the entire Coop grocery chain in Sweden stranded. Food was given away for free as there was no way to take payments. Other grocery shops bought the stock from the affected chain to make sure people had enough food. So, what would an AI do?

Let’s think statistically for a moment. 40 customers out of ca. 40,000 is about 0.1 percent. So, ignoring this event would still give the AI 99.9% accuracy. Is this good? Statistically, this is great! Almost perfect. For an AI, therefore, this would look like a great optimization: do not pay the ransom and treat the 0.1% as a negligible error somewhere.

Now, let’s think about the social value. Without knowing the rest of the customers affected, or even the ones that were not affected, I would say that the value of having food on your table trumps many other kinds of value. Well, maybe not your health, but definitely something like a car or a computer game. So, the societal impact of this is large. We could model that in the AI, but there is a programmatic problem: how do we calculate the value of diverse things, such as a car or food? There is the monetary value, of course, but it is not constant over time. For someone who has been hungry for days, the value of a sandwich is infinitely larger than the value of the same sandwich for someone who has just eaten a delicious steak. Another problem is that the value depends on the location (is there another grocery shop close by?), your stock at home (which is individual and hard for an AI to find out), or even the ability to use another system of payment (can I just get my groceries and pay later?).

This example shows an inherent problem in finding the right data to use for AI. I believe that this is a problem that will not really be solved. And if it cannot be solved, I think I would like to pay a few cents extra to have a human in the loop. I would like to know that there is an option, in the event of a hacker attack, to talk to a person who understands my needs and can help me. Give me food without paying, knowing where I live and that I will pay later.

Until there are models which understand us humans, we need to stick to having humans in the loop. Given that there are ca. 8 billion people in the world, potentially all different and with conflicting needs, I do not think AI will be able to help us in critical parts of society.

What’s new in ML?

I seldom write about films and events – well, maybe actually never – but this year a lot has happened online.

What’s new in Machine Learning | Keynote – YouTube

The video above covers the news from Google about their TensorFlow library, which includes new ways of training models, compression and performance tuning, and more.

TensorFlow Lite and TensorFlow.js allow us to use the same models as on desktops, but on mobile devices and in the browser. Really impressive. I’ve caught myself wondering whether I’m more impressed by the hardware capabilities of small devices or by the capabilities of the software. Either way – super cool.
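As a rough illustration of how little it takes to go from a desktop model to a mobile one, here is a sketch of the standard TensorFlow Lite conversion – the model itself is just a toy:

```python
# Sketch: converting a Keras model to TensorFlow Lite for use on a mobile device.
# The model is a toy; the conversion call is the standard TFLite API.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"TFLite model size: {len(tflite_model)} bytes")
```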

Google is not the only company announcing something. NVIDIA is also showing a lot of cool features for enterprises. Cloud access for rapid prototyping, model testing and deployment is at the center of that.

NVIDIA Executive Keynote for Enterprise AI at COMPUTEX 2021 – YouTube

I like gaming, so this is impressive, but even more impressive is last year’s DLSS technology, which still cannot be beaten by the competition. Really nice.

Challenges when using ML for SE (article review)

Image by Pexels from Pixabay

104294.pdf (scitepress.org)

Machine learning has been used in software engineering for a while now. It used to be called advanced statistics, but with the popularization of artificial intelligence, we use the term machine learning more often. I’m one of those who like to use ML. It’s actually a mesmerizing experience when you train neural networks – change one parameter, wait a bit, see how the network performs, and then do it again. Trust me, I’ve done it all too often.

I like this paper because it focuses on challenges for using ML, from the abstract:

In the past few years, software engineering has increasingly automating several tasks, and machine learning tools and techniques are among the main used strategies to assist in this process. However, there are still challenges to be overcome so that software engineering projects can increasingly benefit from machine learning. In this paper, we seek to understand the main challenges faced by people who use machine learning to assist in their software engineering tasks. To identify these challenges, we conducted a Systematic Review in eight online search engines to identify papers that present the challenges they faced when using machine learning techniques and tools to execute software engineering tasks. Therefore, this research focuses on the classification and discussion of eight groups of challenges: data labeling, data inconsistency, data costs, data complexity, lack of data, non-transferable results, parameterization of the models, and quality of the models. Our results can be used by people who intend to start using machine learning in their software engineering projects to be aware of the main issues they can face.

So, what are these challenges? Well, I’m not going to go into detail about all of them, but I’d like to focus on the one that is closest to my heart – data labelling. The process of labelling, or tagging, data is usually very time-consuming and error-prone. You need to be able to remember how you labelled the previous data points (consistency), but also understand how to think when encountering new cases. The paper does not describe the individual challenges in depth, but gives pointers to a few papers where they are defined.
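One simple way to keep an eye on labelling consistency is to re-label a sample (or have a second person label it) and measure the agreement. A minimal sketch with Cohen’s kappa – the labels below are made up:

```python
# Sketch: checking labelling consistency by comparing two labelling rounds
# of the same sample with Cohen's kappa (illustrative labels).
from sklearn.metrics import cohen_kappa_score

labels_round_1 = ["bug", "feature", "bug", "docs", "bug", "feature"]
labels_round_2 = ["bug", "feature", "docs", "docs", "bug", "bug"]

kappa = cohen_kappa_score(labels_round_1, labels_round_2)
print(f"Cohen's kappa: {kappa:.2f}")  # low values suggest the labelling guide needs work
```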

Siri, Write the Next Method… (article highlight)

Image by yangjiepsy01 from Pixabay

Wen2021a.pdf (usi.ch)

I came across this article by accident. I do not even remember what I was looking for, but that’s maybe not so important. Either way, I really want to try this tool.

This research study is about designing a tool for code completion, but not just the completion of a word, statement or variable – it recommends the next complete method (signature and body) to implement.

From the abstract: “Code completion is one of the killer features of Integrated Development Environments (IDEs), and researchers have proposed different methods to improve its accuracy. While these techniques are valuable to speed up code writing, they are limited to recommendations related to the next few tokens a developer is likely to type given the current context. In the best case, they can recommend a few APIs that a developer is likely to use next. We present FeaRS, a novel retrieval-based approach that, given the current code a developer is writing in the IDE, can recommend the next complete method (i.e., signature and method body) that the developer is likely to implement. To do this, FeaRS exploits “implementation patterns” (i.e., groups of methods usually implemented within the same task) learned by mining thousands of open source projects. We instantiated our approach to the specific context of Android apps. A large-scale empirical evaluation we performed across more than 20k apps shows encouraging preliminary results, but also highlights future challenges to overcome.”

As far as I understand, this is a plug-in for Android Studio, so I will probably need to see if I can use it outside of that context. However, it seems very interesting…