Testing machine learning systems…

Image by Comfreak from Pixabay

https://rdcu.be/caKuc

Today, everybody is talking about machine learning and AI. Some talk about deterministic models, some about statistical ones, some about Bayesian ones, and some talk about X-mas 🙂

My experience from working with machine learning is that we need to be very careful about what we actually do. If we do machine learning in the classical sense, e.g. neural network models or decision trees, then we need to make sure that we test the system alongside the data, never together with the data. We need to prepare a dataset that we use as a reference and that we know well.

Testing, in that scenario, becomes just like the testing we know. We can make the calculations manually, or step by step, and check whether the algorithm behaves as expected.
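To make this concrete, here is a minimal sketch (my own illustration, not from the article) of such a reference test: a scikit-learn decision tree is trained on a tiny, hand-crafted dataset, and we assert the predictions and the split point that we can work out by hand.

# Sketch of testing an ML component against a small, well-understood
# reference dataset. The data and thresholds are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def test_decision_tree_on_reference_data():
    # Reference dataset we know well: one feature, cleanly separable classes.
    X_ref = np.array([[1.0], [2.0], [3.0], [7.0], [8.0], [9.0]])
    y_ref = np.array([0, 0, 0, 1, 1, 1])

    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_ref, y_ref)

    # Step-by-step check: for this data we can work out the expected
    # predictions by hand and compare them against the model's output.
    assert list(model.predict(np.array([[2.5], [7.5]]))) == [0, 1]

    # The learned split point should lie between the two classes (3..7).
    assert 3.0 <= model.tree_.threshold[0] <= 7.0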

Testing the system is also not difficult if we follow principles of good engineering – separation of concerns, modularization, observability.

At runtime, we need to make sure that we add mechanisms for such aspects as out-of-distribution inputs and safety cages for ML algorithms.
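A safety cage can be as simple as a wrapper that refuses to answer when the input does not resemble the training data. The sketch below is my own illustration (not from the article), using a per-feature z-score check with an invented tolerance k:

# Sketch of a runtime "safety cage": the wrapped model is only queried
# when the input looks like the training distribution; otherwise a safe
# fallback is returned.
import numpy as np

class SafetyCage:
    def __init__(self, model, X_train, k=3.0):
        self.model = model
        # Per-feature statistics of the training data define the "cage".
        self.mean = X_train.mean(axis=0)
        self.std = X_train.std(axis=0) + 1e-9
        self.k = k  # number of standard deviations we tolerate

    def in_distribution(self, x):
        z = np.abs((x - self.mean) / self.std)
        return bool(np.all(z <= self.k))

    def predict(self, x, fallback=None):
        if self.in_distribution(x):
            return self.model.predict(x.reshape(1, -1))[0]
        return fallback  # out-of-distribution input: refuse to answer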

Either way, I recommend this article to all ML designers and product managers who want to know the state of the art in this field from the perspective of testing. A good overview, nice reading!

Who needs automated code reviews, and when…

https://rdcu.be/caKsW

Image by Arek Socha from Pixabay

Having worked with code reviews for a while, I strongly sympathize with the thesis put forward by the authors of this paper – code review tools are still far from truly supporting software developers.

Yes, they do automate and organize the process. Yes, they help in assuring that all code is reviewed, and yes, they do help to capture problems in the code and to spread knowledge.

However, what I expect from such a tool is to help me find problems in the code. I would like a tool that would help me, as a designer, get better: avoid mistakes, use cool programming constructs, make better designs. None of the tools I know help with that.

This paper shows that my understanding is similar to that of the developers studied in the paper. Documentation – automatic fixing and suggestions – was the top priority. Renaming suggestions, commenting, and explaining were some of the others.

Detection of duplicated code, architectural analysis, and similar things were also mentioned as expectations. I could not agree more! These things are priority 1 – I would also expect them to be there.

Now, some are more difficult than others – like analyzing the architecture. Not a trivial task at all, because what is the architecture? Where are the patterns? How do we find them in the code? How far can we rely on the tools that research provides? We’re not there yet.

Duplicate code, however, is something we should be able to fix. I’ve looked at a repository that had over 200 papers about code clones, duplicates and what have you. Are all these papers good? Probably not, but even if 10% of them are good, we have 20 tools we can try.
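Even a naive approach gets you surprisingly far. Here is a small sketch (my own toy example, not one of the published tools) that collects windows of normalized lines and reports the windows that appear in more than one place:

# Naive clone detection: group identical windows of stripped lines
# across files and keep only the windows that occur more than once.
from collections import defaultdict

def find_duplicates(files, window=5):
    """files: dict {path: source text}; returns windows seen more than once."""
    seen = defaultdict(list)
    for path, text in files.items():
        lines = [line.strip() for line in text.splitlines() if line.strip()]
        for i in range(len(lines) - window + 1):
            chunk = "\n".join(lines[i:i + window])
            seen[chunk].append((path, i + 1))
    return {chunk: locs for chunk, locs in seen.items() if len(locs) > 1}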

I agree, we do have SonarQube and similar tools, but they are not integrated with code review. I cannot just link to a report from SQ when writing a review comment. I cannot add a review comment to a piece of technical debt detected in SQ. So, no integration then?

Maybe it’s just a Friday afternoon thing, but I hope that we can get better at making the last mile with our tools. I hope that we can address the expectations that the developers have …

Data analytics in SE

https://www.sciencedirect.com/science/article/abs/pii/S0950584920301981

Image by Werner Weisser from Pixabay

A few years ago, data analytics and big data were super popular in software engineering. In fact, they were a bit too popular, as many authors invoked big data just because they had a diagram in the paper.

Fast forward to today and the situation is a bit different. We are more mature in using data in software development. We know that big data is about the 5 Vs and that we can reason about it. We also know that providing the diagrams is not the same as using them to direct software development.

I found this paper when looking for literature for our new work on communication in software metrics teams. My colleagues study that communication and have found that there can be several sources of confusion. Now, this paper is NOT about the confusion, but about the prevalence of data analytics in software engineering. The working definition of Big Data Analytics in the paper is as follows: “Big data analytics is the process of using analysis algorithms running on powerful supporting platforms to uncover potentials concealed in big data, such as hidden patterns or unknown correlations”.

The paper poses three main research questions about the studies conducted in Big Data Analytics, about the approaches used and when they are used. I’m mostly interested in the second – which approaches are used. There, the authors pose three sub-questions:

RQ2.1: What types of analytics have been used in the ASD domain?
RQ2.2: What sources of data have been used?
RQ2.3: What methods, models, or techniques have been utilized in the studies?

In particular, the second one is the most interesting – sources of data. There, the authors found that there are plenty. The entire table (Table 7 in the paper) is actually too large to quote, but let me just quote one of the categories, Source code and data model:

  • Source code
  • Ruby programs & Ruby on Rails
  • Java programs
  • Function calls
  • Code metrics
  • Development repository
  • Test case
  • Code quality
  • Application data schema

I recommend this as good reading on the current state of the art in data analytics in software engineering. I think we’ve matured a lot as a community in the last decade, and that brings a lot of benefit. Our software development gets better, and thus our software gets better.

From the abstract: In total, 88 primary studies were selected and analyzed. Our results show that BDA is employed throughout the whole ASD lifecycle. The results reveal that data-driven software development is focused on the following areas: code repository analytics, defects/bug fixing, testing, project management analytics, and application usage analytics.

Is confusion a factor when reviewing code?

https://www.win.tue.nl/~aserebre/EMSE2020Felipe.pdf

Image by Myriams-Fotos from Pixabay

Reviewing code is an art. After working with the topic for a few years, we’ve realized that it is like reading a chat – one person responds to a message sent by another person. The message is often the code, and the response is the review comment. What we’ve discovered is that the context of the review is important, as is the possibility to ask questions. We even discuss having a taxonomy of these review comments to ease understanding of “where” in the review process one is at the moment.

This article caught my attention because it is about understanding when a reviewer is actually confused while reading the code and making the comment. It’s a very nice piece of work, as it combines analysis of code review comments with surveys.

The results of the survey are interesting as they point out that the authors are confused much less than the reviewers – which is often caused by the fact that the comment is a response, while the code is the message. Quoting the paper: RQ1 Summary – Reasons for confusion: We found a total of 30 reasons for confusion. The most prevalent are missing rationale, discussion of the solution: non-functional, and lack of familiarity with existing code. We observe that tools (code review, issue tracker, and version control) and communication issues, such as disagreement or ambiguity in communicative intentions, may also cause confusion during code reviews.

Finally, I like the fact that the authors do a full systematic review on the topic and triangulate the results. This work will become a number-one reading for my students in the programming course, as it will teach them how important good code is!

From the abstract:

Results: From the first study, we build a framework with 30 reasons for confusion, 14 impacts, and 13 coping strategies. The results of the systematic mapping study shows 38 articles addressing the most frequent reasons for confusion. From those articles, we found 19 different solutions for confusion proposed in the literature, and nine impacts were established related to the most frequent reasons for confusion.