New programming tools?

Glinda: Supporting Data Science with Live Programming, GUIs and a Domain-specific Language (acm.org)

I’m not going to add a picture here, because the actual paper contains a great picture, which is copyrighted. But do we need another tool (I thought)? And if you think like that… well, think again.

Once I looked at the paper, I really liked the idea. This is a tool that combines the programming tasks of software engineers with tasks like data exploration, labelling, and cleaning. It’s a tool in the spirit of Jupyter Notebook, but it allows you to interact with the data in a deeper way.

I strongly recommend taking a look at the tool. I’ve done a quick check and it looks really nice.

Challenges when using ML for SE (article review)

Image by Pexels from Pixabay

104294.pdf (scitepress.org)

Machine learning has been used in software engineering for a while now. It used to be called advanced statistics, but with the popularization of artificial intelligence, we use the term machine learning more often. I’m one of those who like to use ML. It’s actually a mesmerizing experience when you train neural networks – change one parameter, wait a bit, see how the network performs, then do it again. Trust me, I’ve done it all too often.
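Just to illustrate that loop (a toy sketch of my own, not anything from the paper – the dataset and model are stand-ins, and scikit-learn is assumed), the “change one parameter, retrain, look” cycle is roughly:

```python
# A toy sketch of the "tweak one parameter and watch" loop (my own
# illustration, not from the paper). Assumes scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Change one parameter (hidden layer size), retrain, and see how it did.
for hidden in (16, 32, 64, 128):
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=500,
                        random_state=42)
    net.fit(X_train, y_train)
    print(f"hidden={hidden}: accuracy={net.score(X_test, y_test):.3f}")
```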

I like this paper because it focuses on the challenges of using ML. From the abstract:

In the past few years, software engineering has increasingly automating several tasks, and machine learning tools and techniques are among the main used strategies to assist in this process. However, there are still challenges to be overcome so that software engineering projects can increasingly benefit from machine learning. In this paper, we seek to understand the main challenges faced by people who use machine learning to assist in their software engineering tasks. To identify these challenges, we conducted a Systematic Review in eight online search engines to identify papers that present the challenges they faced when using machine learning techniques and tools to execute software engineering tasks. Therefore, this research focuses on the classification and discussion of eight groups of challenges: data labeling, data inconsistency, data costs, data complexity, lack of data, non-transferable results, parameterization of the models, and quality of the models. Our results can be used by people who intend to start using machine learning in their software engineering projects to be aware of the main issues they can face.

So, what are these challenges? Well, I’m not going to go into details about all of them, but I’d like to focus on the one that is closest to my heart – data labelling. The process of labelling, or tagging, data is usually very time-consuming and very error-prone. You need to remember how you labelled the previous data points (consistency), but also understand how to reason when encountering new cases. This paper does not list the challenges, but gives pointers to a few papers where they are defined.
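To make the consistency challenge concrete, here is a small sketch of my own (not from the paper) that checks agreement between two labelling passes with Cohen’s kappa; the labels are hypothetical and scikit-learn is assumed:

```python
# A small illustration (mine, not the paper's) of checking labelling
# consistency: compare two labelling passes over the same data points
# with Cohen's kappa. Assumes scikit-learn is installed.
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels: the same 10 commits labelled twice, e.g. by the
# same person a week apart, or by two different annotators.
first_pass  = ["bug", "bug", "feature", "bug", "refactor",
               "feature", "bug", "refactor", "feature", "bug"]
second_pass = ["bug", "feature", "feature", "bug", "refactor",
               "feature", "bug", "bug", "feature", "bug"]

kappa = cohen_kappa_score(first_pass, second_pass)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```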

How much does testing cost and how to estimate it…

Image by Armin Forster from Pixabay

CSUR5403-53 (acm.org)

Testing software systems is a task that costs a lot. As a former tester, I see it as a never-ending story – you’re done with your testing, new code is added, you’re not done anymore, you test more, you’re done, new code… and so on.

When I was a tester, there were no tools for automating the test process (we’re talking about the 1990s here). Well, ok, there was CppUnit, and it was a great help – I could create a suite and execute it. Then I needed to add new test cases, create functional tests, etc. It was fun, until it wasn’t anymore.

I would have given a lot for test orchestration tools back then. A lot has happened since. This paper presents a great overview of how testing cost is estimated – I know, it’s not orchestration, but hear me out. I like this paper because it shows which tools are used, how test cost is estimated (e.g., based on metrics like coverage and effort) and how the tests are evaluated.
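As a toy illustration of what metrics-based estimation can look like (my own sketch, not a method from the survey – the metrics and numbers are made up), one could regress historical test effort on coverage and suite size:

```python
# A toy sketch (mine, not the survey's) of metrics-based test cost
# estimation: fit a regression from historical projects and predict the
# effort for a new one. Assumes scikit-learn; the numbers are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: [coverage %, number of test cases]
X = np.array([[60, 120], [75, 300], [80, 450], [90, 700], [95, 900]])
effort_hours = np.array([80, 150, 210, 340, 420])  # measured test effort

model = LinearRegression().fit(X, effort_hours)
new_project = np.array([[85, 500]])
print(f"Estimated test effort: {model.predict(new_project)[0]:.0f} hours")
```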

I recommend this reading as an overview and a starting point for understanding today’s testing processes, and, eventually, for optimizing them based on the right premises (not HiPPO – the highest paid person’s opinion).

If you want to test your CI test prioritization

Image by Marc Pascual from Pixabay

Jin_Servant_ICSE21_AE.pdf (vt.edu)

Many companies talk about using AI in their software engineering processes. However, they have problems sharing their data with researchers and students. The legal processes around open-sourcing data were, and still are, scary, and setting up internal collaborations is time-consuming and therefore takes extra effort.

So, this is a great example of replicating some industrial set-ups in the open source community. I’ll use these data sets in my work and I’d love to see more initiatives like that.

Our team is working on one of those at the moment…
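For context, a very simple baseline of the kind such data sets let you evaluate could order tests by their failure history. This is my own sketch, not the paper’s technique:

```python
# A minimal baseline (my illustration, not the paper's technique) for CI
# test prioritization: order tests by historical failure rate, breaking
# ties by how recently they last failed.
from dataclasses import dataclass

@dataclass
class TestStats:
    name: str
    runs: int
    failures: int
    last_failed_build: int  # build number of the most recent failure

def prioritize(tests: list) -> list:
    # Higher failure rate first; more recent failures break ties.
    return sorted(tests,
                  key=lambda t: (t.failures / max(t.runs, 1),
                                 t.last_failed_build),
                  reverse=True)

history = [
    TestStats("test_login", runs=100, failures=12, last_failed_build=480),
    TestStats("test_export", runs=100, failures=2, last_failed_build=495),
    TestStats("test_search", runs=100, failures=12, last_failed_build=310),
]
for t in prioritize(history):
    print(t.name)  # test_login, test_search, test_export
```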

What makes a great code maintainer…

Image by Rudy and Peter Skitterians from Pixabay
ICSE2021_B.pdf (igor.pro.br)

For many of us, software engineering is the opportunity to create new projects, new products, and cool services. We do that often, but we equally often forget about maintenance. Well, maybe not forget, but we deliberately do not want to remember it. It’s natural, as maintaining old code is not really that interesting.

When reading this paper, I realized that my view of maintenance is a bit dated. In my time in industry, maintenance was mostly “bug-fixing”. Today, it is more about community work. As the abstract of this paper says: “Although Open Source Software (OSS) maintainers devote a significant proportion of their work to coding tasks, great maintainers must excel in many other activities beyond coding. Maintainers should care about fostering a community, helping new members to find their place, while also saying “no” to patches that although are well-coded and well-tested, do not contribute to the goal of the project.”

The authors of this paper conducted a series of interviews with software maintainers. In short, their results show that great software maintainers are:

  • Available (response time),
  • Disciplined (follows the process),
  • Has a global view of what to achieve with the review,
  • Communicative,
  • Empathetic,
  • Community-building,
  • Technically excellent,
  • Quality-aware,
  • Has domain experience,
  • Motivated,
  • Open-minded,
  • Patient,
  • Diligent, and
  • Responsible.

It’s a long list, and the priority of each of these characteristics differs from one maintainer to another. However, it’s important that we see the software maintainer as a social person who contributes to the community, rather than someone who sits in a dark office and reads code all day long. Maintainers are really the people who make software engineering groups work well.

After reading the paper, I’m more motivated to maintain the community of my students!

Siri, Write the Next Method… (article highlight)

Image by yangjiepsy01 from Pixabay

Wen2021a.pdf (usi.ch)

I came across this article by accident. I honestly do not even remember what I was looking for, but that’s maybe not so important. Either way, I really want to try this tool.

This research study is about designing a tool for code completion – not just completion of a word, statement, or variable, but a recommendation of the whole next method to implement (its signature and body).

From the abstract: “Code completion is one of the killer features of Integrated Development Environments (IDEs), and researchers have proposed different methods to improve its accuracy. While these techniques are valuable to speed up code writing, they are limited to recommendations related to the next few tokens a developer is likely to type given the current context. In the best case, they can recommend a few APIs that a developer is likely to use next. We present FeaRS, a novel retrieval-based approach that, given the current code a developer is writing in the IDE, can recommend the next complete method (i.e., signature and method body) that the developer is likely to implement. To do this, FeaRS exploits “implementation patterns” (i.e., groups of methods usually implemented within the same task) learned by mining thousands of open source projects. We instantiated our approach to the specific context of Android apps. A large-scale empirical evaluation we performed across more than 20k apps shows encouraging preliminary results, but also highlights future challenges to overcome.”

As far as I understand, this is a plug-in for Android Studio, so I will probably need to see whether I can use it outside of that context. However, it seems very interesting…
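As far as I can tell from the abstract, the core retrieval idea could be sketched roughly like this – a toy reconstruction of my own, not the actual FeaRS implementation; the patterns and signatures are made up:

```python
# A toy reconstruction (mine, not the actual FeaRS implementation) of the
# retrieval idea: from mined "implementation patterns" (methods that tend
# to be implemented together), recommend the next likely method.
from collections import Counter

# Hypothetical mined patterns: sets of method signatures co-implemented
# within the same task across many open source apps.
patterns = [
    {"onCreate(Bundle)", "onResume()", "onPause()"},
    {"onCreate(Bundle)", "onResume()", "onDestroy()"},
    {"onBindViewHolder(VH,int)", "getItemCount()",
     "onCreateViewHolder(ViewGroup,int)"},
]

def recommend_next(implemented):
    # Count candidate methods from every pattern that overlaps with what
    # the developer has already written; suggest the most frequent one.
    candidates = Counter()
    for pattern in patterns:
        if pattern & implemented:
            candidates.update(pattern - implemented)
    return candidates.most_common(1)[0][0] if candidates else None

print(recommend_next({"onCreate(Bundle)"}))  # -> "onResume()"
```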

Exploring code weaknesses in StackOverflow

https://doi.ieeecomputersociety.org/10.1109/TSE.2021.3058985

Whether we like it or not, software designers, programmers and architects use StackOverflow, mostly because they want to be part of a community – to help others and help themselves.

However, StackOverflow has become the de facto go-to place to find programming answers. Oftentimes, these answers include the usage of libraries or other ready-made solutions. These libraries solve the immediate problem, but they can also introduce vulnerabilities that the programmers are not aware of.

In this article, the authors review how C/C++ developers introduce and revise vulnerabilities in their code. From the introduction: “We scan 646,716 C/C++ code snippets from Stack Overflow answers. We observe that code weaknesses are detected in 2% of the C/C++ answers with code snippets; more specifically, there are 12,998 detected code weaknesses that fall into 36% (i.e., 32 out of 89) of all the existing C/C++ CWE types.”
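The authors’ exact pipeline aside, this kind of scan can be approximated with an off-the-shelf static analyzer. A minimal sketch of my own, assuming the cppcheck CLI is installed; the snippet is a made-up example:

```python
# A minimal sketch (mine, not the authors' pipeline) of scanning a code
# snippet with a static analyzer. Assumes the cppcheck CLI is installed.
import subprocess
import tempfile

snippet = r"""
#include <string.h>
void copy(char *src) {
    char buf[8];
    strcpy(buf, src);  /* classic CWE-120 style unbounded copy */
}
"""

with tempfile.NamedTemporaryFile(suffix=".c", mode="w", delete=False) as f:
    f.write(snippet)
    path = f.name

# cppcheck writes its findings to stderr.
result = subprocess.run(["cppcheck", "--enable=all", path],
                        capture_output=True, text=True)
print(result.stderr)
```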

I like that the paper presents a number of good examples, which can be used for training software engineers, both at the university level and later during their work. Some of them can even be used to create coding guidelines for companies – including good and bad examples.

The paper has a lot of great findings about the way in which weaknesses and vulnerabilities are introduced, for example: “92.6% (i.e., 10,884) of the 11,748 Codew has weaknesses introduced when their code snippets were initially created on Stack Overflow, and 69% (i.e., 8,103 out of 11,748) of the Codew has never been revised”.

I strongly recommend reading the paper and giving it to your software engineers to scan…

Crowdsmelling…

Image by Ajale from Pixabay

2012.12590.pdf (arxiv.org)

The concept of crowdsourcing is well known in our community. We are accustomed to reading others’ code and learning from it while improving it at the same time. Even CAPTCHAs are a good example of crowdsourcing.

However, crowdsmelling? Well, the idea is not as outrageous as one might think – it’s actually an interesting one. It is essentially a way of using collective knowledge about code smells to train machine learning models to recognize them. It’s the very idea we use, and support, in our Software Center project.

In this paper, the authors focus on a special kind of code smell – the ones linked to technical debt. The results are promising, and we should keep an eye on this work to see how it improves.

From the abstract: “Good performances were obtained for God Class detection (ROC=0.896 for Naive Bayes) and Long Method detection (ROC=0.870 for AdaBoostM1), but much lower for Feature Envy (ROC=0.570 for Random Forrest).”
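To give a flavour of the approach (my own sketch, not the authors’ setup – the metrics and labels are made up), training a Naive Bayes classifier on class-level metrics to flag God Classes could look like this:

```python
# A flavour-of-the-approach sketch (mine, not the authors' setup):
# a Naive Bayes classifier over class-level metrics to flag God Classes.
# Assumes scikit-learn; the metrics and labels here are made up.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Hypothetical features per class: [methods, lines of code, coupling]
X = np.array([[5, 120, 3], [42, 2100, 18], [8, 300, 5], [55, 3400, 25],
              [6, 150, 2], [38, 1800, 20], [9, 260, 4], [60, 4100, 30]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = labelled a God Class

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=42, stratify=y)

clf = GaussianNB().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```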

War and algorithm

Image by www_slon_pics from Pixabay

Amazon.com: War and Algorithm (9781786613646): Liljefors, Max, Noll, Gregor, Steuer, Daniel: Books

Understanding the legal aspects of modern autonomous systems requires both a philosophical and a practical discourse. On the one hand, we need to understand what legal responsibility means in the context of autonomous systems – who is responsible for the actions of the system, what those actions are, and whether the system reacted as designed or whether new behaviour emerged.

On the other hand, we also need to understand that the introduction of autonomous systems changes the legal systems themselves. Autonomous systems do not require operators and are therefore capable of interacting with each other. The notions of conflict, damage, and collateral damage take on completely new dimensions.

I picked up this book because I’ve had the opportunity to work with one of the authors and met another at a dinner a while back. They got me interested in the legal aspects of autonomous systems. In the book, the authors discuss various aspects of such systems. They start from the foundations of the legality of conflicts and then move on to modern warfare. They provide historical examples of how legal systems were (and are) shaped by so-called LAWS (Lethal Autonomous Weapons Systems).

I particularly like the aspects related to the design of these systems, and the fact that in Chapter 4 the authors discuss the process of learning from the machine, or the algorithm. They call it the process of debugging, which is a new way of looking at the concept of understanding algorithms.

What I miss in the book, however, is a discussion of the quality of AI systems. Although it is not explicit, it seems to me that the authors assume that an AI system is perfect and makes no mistakes due to design defects (bugs). If this assumption holds, the discussion about responsibility becomes a bit simpler, because we do not have to recognize the cases where an individual (a programmer) gives his/her best, but the testers or others on the team make mistakes. In such cases, the responsibility rests not on an individual (programmer, tester, architect), but on the entire company.

Either way, I’m happy that I had the possibility to listen to some of the authors and to work with them.

Law for Computer Scientists and other folks (review)

Law for Computer Scientists (pubpub.org)

Recently, a colleague of mine recommended this book to me. At first, I thought it would be a bit like “Law for Dummies”, but it turned out to be much better than I expected.

The book is about how we, as software engineers, should look at the legal systems. It poses more questions than it actually answers, but it provides a number of great examples.

I sincerely recommend this book. The following parts have captured my attention:

  1. The existence of different types of law and jurisdictions: national, international, and supranational. Data and computer programs are perfect examples of artifacts that cross jurisdictions, with different types of law applying to them.
  2. What constitutes data, metadata, and sensitive data. In Chapter 5, the author mentions that we cannot process sensitive data very easily, e.g., data about religion, gender, etc. Then, how can we make systems fair and unbiased if we cannot process this kind of data?
  3. Cybercrimes and how to deal with them. The author provides great examples of legislation that is supposed to help fight cybercrime.

However, the best is always saved for last, and this book is no exception. The author provides a great discussion on the future of our legal systems. She does that by discussing the concept of personhood for AI or any other complex system. Although it sounds like a distant future, it is closer than we think – the EU has already started working on this kind of legislation.

Finally, I love the fact that the author brings in Asimov’s three laws of robotics – a real connection to computer science and software engineering.