Summer is around the corner; in some places it has actually arrived. That usually means relaxation and time for reflection.
I would like to recommend a book by fellow professors. The book is about how thinking machines are viewed today and about the potential of general AI (GAI). It is written in a popular-science style, but the examples and the research behind them are solid. The authors discuss not only the technology but also society – the legislative aspects of AI and what it means to think in general and, in some specific cases, to be a tool.
Have a great summer and stay tuned for new posts after the summer.
The video above is about the news from Google about their TensorFlow library, which includes new ways of training models, compression, performance tuning, and more.
TensorFlow Lite and TensorFlow.js allow us to use the same models as on desktops, but on mobile devices and in the browser. Really impressive. I’ve caught myself wondering whether I’m more impressed by the hardware capabilities of small devices or by the capabilities of the software. Either way – super cool.
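To make the mobile part concrete: this is a minimal sketch of the standard TensorFlow Lite conversion flow. The tiny Keras model here is just a stand-in for whatever model you have trained on the desktop.

```python
import tensorflow as tf

# A stand-in model; any trained tf.keras model converts the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g. post-training quantization
tflite_model = converter.convert()

# The resulting flatbuffer is what ships to the mobile device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```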
Google is not the only company announcing things. NVIDIA is also showing a lot of cool features for enterprises. Cloud access for rapid prototyping, model testing, and deployment is at the center of that.
I like gaming, so this is impressive, but even more impressive is last year’s DLSS technology, which still cannot be beaten by the competition. Really nice.
Modern software development organizations work a lot with metrics – they use them to focus their development effort, to understand their products, and, recently, to guide their online experimentation.
We’ve studied software measurement programs before, mostly from the perspective of what’s in the program – what is measured, how it is measured, who is responsible for the measurement and when the measurement is conducted.
In this article, we focus on the “softer” aspects of the measurement program – how mature the metrics team is, and how mature the organization around it is. Studying four different companies, and thus four different teams, we observed that the maturity of the team goes hand in hand with the maturity of the organization. Mature organizations can take metrics and indicators into their workflows, which in turn increases the metrics team’s motivation to learn.
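To make “taking metrics and indicators into a workflow” concrete, here is a small sketch of my own (not from the paper): an indicator as a measured value plus an analysis model that a team can act on.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    value: float
    target: float

    def status(self) -> str:
        # A simple traffic-light analysis model; thresholds are illustrative.
        if self.value >= self.target:
            return "green"
        return "yellow" if self.value >= 0.9 * self.target else "red"

coverage = Indicator(name="branch coverage", value=0.78, target=0.80)
print(coverage.name, coverage.status())  # -> branch coverage yellow
```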
We’ve also found a few key factors for the success of a metrics team:
Formal, official mandate – this prevents the metrics team from being questioned; it is also important for making the team an actual team, not a group of individuals
Link to the organization – the team needs to be part of product or service development and needs to understand what is required of it.
Longevity – the team needs to be there for the long run; short-term improvements do not last very long (by definition), and therefore the motivation will not last either.
I hope that this article, which is open access, will help others to understand how to evolve their teams and measurement programs.
I’m not going to add a picture here, because the actual paper contains a great picture, which is copyrighted. But do we need yet another tool (I thought)? If you think like that… well, think again.
Once I looked at the paper, I really liked the idea. This is a tool that combines the programming tasks of software engineers with tasks like data exploration, labelling, or cleaning. It is similar to a Jupyter Notebook, but it lets you interact with the data in a deeper way.
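The tool itself is best seen in the paper, but to give a flavour of the kind of tasks it unifies, here is a plain pandas sketch of exploration, cleaning, and a crude first labelling pass (the data and column names are made up):

```python
import pandas as pd

# Made-up data standing in for whatever you would normally load from a file.
df = pd.DataFrame({
    "text": ["App crashes on start ", "Love the new UI", "Error when saving  "],
})

print(df.describe(include="all"))                # quick exploration

df["text"] = df["text"].str.strip().str.lower()  # cleaning
df = df.drop_duplicates()

# A crude first labelling pass that a human would then verify and correct.
df["label"] = df["text"].str.contains("crash|error|fail").map(
    {True: "bug", False: "other"}
)
print(df)
```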
I strongly recommend taking a look at the tool. I’ve done a quick check and it looks really nice.
Machine learning has been used in software engineering for a while now. It used to be called advanced statistics, but with the popularization of artificial intelligence, we use the term machine learning more often. I’m one of those who like to use ML. It’s actually a mesmerizing experience to train neural networks – change one parameter, wait a bit, see how the network performs, then do it again. Trust me, I’ve done it all too often.
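If you have never tried it, the loop looks roughly like this – a toy sketch on random data, varying only the learning rate and watching the result:

```python
import numpy as np
import tensorflow as tf

# Random data, purely to make the sketch runnable.
X = np.random.rand(200, 8).astype("float32")
y = (X.sum(axis=1) > 4).astype("float32")

for lr in [1e-1, 1e-2, 1e-3]:  # the one parameter we vary
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    history = model.fit(X, y, epochs=5, verbose=0)
    print(lr, history.history["accuracy"][-1])   # ...and see how it performed
```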
I like this paper because it focuses on the challenges of using ML. From the abstract:
“In the past few years, software engineering has increasingly automating several tasks, and machine learning tools and techniques are among the main used strategies to assist in this process. However, there are still challenges to be overcome so that software engineering projects can increasingly benefit from machine learning. In this paper, we seek to understand the main challenges faced by people who use machine learning to assist in their software engineering tasks. To identify these challenges, we conducted a Systematic Review in eight online search engines to identify papers that present the challenges they faced when using machine learning techniques and tools to execute software engineering tasks. Therefore, this research focuses on the classification and discussion of eight groups of challenges: data labeling, data inconsistency, data costs, data complexity, lack of data, non-transferable results, parameterization of the models, and quality of the models. Our results can be used by people who intend to start using machine learning in their software engineering projects to be aware of the main issues they can face.”
So, what are these challenges? Well, I’m not going to go into details about all of them, but I’d like to focus on the one that is closest to my heart – data labelling. The process of labelling, or tagging, data is usually very time-consuming and very error-prone. You need to remember how you labelled the previous data points (consistency), but also understand how to think when encountering new cases. This paper does not describe the challenges in depth, but gives pointers to a few papers where they are defined.
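One simple, standard guard for labelling consistency – my suggestion, not something the paper prescribes – is to label a sample twice (or have two labellers do it) and compute Cohen’s kappa on the result:

```python
from sklearn.metrics import cohen_kappa_score

# The same six data points, labelled in two separate rounds.
labels_round1 = ["bug", "bug", "other", "bug", "other", "other"]
labels_round2 = ["bug", "other", "other", "bug", "other", "bug"]

# 1.0 means perfect agreement; values near 0 mean agreement only by chance.
print(cohen_kappa_score(labels_round1, labels_round2))
```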
Testing software systems costs a lot. As a former tester, I see it as a never-ending story – you’re done with your testing, new code is added, you’re not done anymore, you test more, you’re done, new code… and so on.
When I was a tester, there were no tools for automating the test process (we’re talking the 1990s here). Well, OK, there was CppUnit, and it was a great help – I could create a suite and execute it. Then I needed to add new test cases, create functional tests, etc. It was fun, until it wasn’t anymore.
I would have given a lot to have tools for test orchestration back then. A lot has happened since then. This paper presents a great overview of how testing cost is estimated – I know, it’s not orchestration, but hear me out. I like this paper because it shows which tools are used, how test cost is estimated (e.g., based on metrics like coverage and effort), and how the tests are evaluated.
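To make metric-based estimation concrete, here is an entirely illustrative toy cost model of my own – not one from the paper – combining execution effort with the cost of closing a coverage gap:

```python
def estimated_test_cost(n_tests, avg_minutes_per_test,
                        current_coverage, target_coverage,
                        minutes_per_coverage_point=30):
    """Toy model: running the suite plus authoring tests to close the coverage gap.
    All parameters and weights are made up for illustration."""
    run_cost = n_tests * avg_minutes_per_test
    gap = max(0.0, target_coverage - current_coverage)
    authoring_cost = gap * 100 * minutes_per_coverage_point
    return run_cost + authoring_cost

print(estimated_test_cost(n_tests=400, avg_minutes_per_test=0.5,
                          current_coverage=0.72, target_coverage=0.85))
```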
I recommend this reading as an overview, a starting point for understanding today’s testing processes and, eventually, for optimizing the test processes based on the right premises (not the HiPPO – the highest paid person’s opinion).
Many companies talk about using AI in their software engineering processes. However, they have problems sharing their data with researchers and students. The legal processes around open-sourcing data were, and still are, scary. Setting up internal collaborations is time-consuming and therefore requires extra effort.
So, this is a great example of replicating some industrial set-ups in the open source community. I’ll use these data sets in my work and I’d love to see more initiatives like that.
Our team is working on one of those at the moment…
For many of us, software engineering means the possibility of creating new projects, new products, and cool services. We do that often, but we equally often forget about maintenance. Well, maybe not forget, but we deliberately do not want to remember it. That’s natural, as maintaining old code is not really that interesting.
When reading this paper, I realized that my view of maintenance is a bit dated. In my time in industry, maintenance was mostly “bug-fixing”. Today, it is more about community work. As the abstract of this paper says:
“Although Open Source Software (OSS) maintainers devote a significant proportion of their work to coding tasks, great maintainers must excel in many other activities beyond coding. Maintainers should care about fostering a community, helping new members to find their place, while also saying “no” to patches that although are well-coded and well-tested, do not contribute to the goal of the project.”
This paper reports on a series of interviews with software maintainers. In short, the results are that great software maintainers are:
Available (response time),
Disciplined (follows the process),
Has a global view of what to achieve with the review,
Communicative,
Empathetic,
Community-building,
Technically excellent,
Quality-aware,
Has domain experience,
Motivated,
Open-minded,
Patient,
Diligent, and
Responsible.
It’s a long list, and the priority of each of these characteristics differs from one reviewer to another. However, it’s important that we see the software maintainer as a social person who contributes to the community, rather than someone who just sits in a dark office and reads code all day long. Maintainers are really the people who make software engineering groups work well.
After reading the paper, I’m more motivated to maintain the community of my students!
I came across this article by accident. Honestly, I do not even remember what I was looking for, but that’s maybe not so important. Either way, I really want to try this tool.
This research study is about designing a tool for code completion – not just completion of a word, statement, or variable, but one that provides the signature of the next method to implement.
From the abstract: “Code completion is one of the killer features of Integrated Development Environments (IDEs), and researchers have proposed different methods to improve its accuracy. While these techniques are valuable to speed up code writing, they are limited to recommendations related to the next few tokens a developer is likely to type given the current context. In the best case, they can recommend a few APIs that a developer is likely to use next. We present FeaRS, a novel retrieval-based approach that, given the current code a developer is writing in the IDE, can recommend the next complete method (i.e., signature and method body) that the developer is likely to implement. To do this, FeaRS exploits “implementation patterns” (i.e., groups of methods usually implemented within the same task) learned by mining thousands of open source projects. We instantiated our approach to the specific context of Android apps. A large-scale empirical evaluation we performed across more than 20k apps shows encouraging preliminary results, but also highlights future challenges to overcome.”
As far as I understand, this is a plug-in for Android Studio, so I will probably need to see if I can use it outside of that context. However, it seems very interesting…
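Just to caricature the intuition – this is not FeaRS, merely a toy of the “implementation patterns” idea, with made-up Android lifecycle methods as the mined data:

```python
from collections import Counter, defaultdict

# Toy "mined" data: sets of methods implemented within the same task.
mined_tasks = [
    {"onCreate", "onPause", "onResume"},
    {"onCreate", "onResume"},
    {"onCreate", "onDestroy"},
]

# Count which methods co-occur with which.
companions = defaultdict(Counter)
for task in mined_tasks:
    for method in task:
        companions[method].update(task - {method})

def recommend_next(current_methods):
    """Recommend the most frequent companion of the methods already written."""
    votes = Counter()
    for method in current_methods:
        votes.update(companions[method])
    for method in current_methods:
        votes.pop(method, None)
    return votes.most_common(1)

print(recommend_next({"onCreate"}))  # -> [('onResume', 2)]
```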
Whether we like it or not, software designers, programmers and architects use StackOverflow. Mostly because they want to be part of a community – help others and help themselves.
However, StackOverflow has become the de facto go-to place for finding programming answers. Oftentimes, these answers involve using libraries or other solutions. These libraries solve the immediate problem, but they can also introduce vulnerabilities that the programmers are not aware of.
In this article, the authors review how C/C++ authors introduce and revise vulnerabilities in their code. From the introduction: “We scan 646,716 C/C++ code snippets from Stack Overflow answers. We observe that code weaknesses are detected in 2% of the C/C++ answers with code snippets; more specifically, there are 12,998 detected code weaknesses that fall into 36% (i.e., 32 out of 89) of all the existing C/C++ CWE types.”
I like that the paper presents a number of good examples, which can be used for training software engineers, both at the university level and later during their work. Some of them can even be used to create coding guidelines for companies – including good and bad examples.
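As a toy illustration of the “bad examples” idea – the paper itself relies on proper static analysis to detect CWEs – here is a crude pass that merely flags a few classically dangerous C calls in a snippet:

```python
import re

# A handful of classically dangerous C functions; far from a complete list.
DANGEROUS = re.compile(r"\b(gets|strcpy|strcat|sprintf)\s*\(")

snippet = """
char buf[16];
gets(buf);            /* CWE-242: inherently dangerous function */
strcpy(buf, input);   /* no bounds check */
"""

for line_no, line in enumerate(snippet.splitlines(), 1):
    if DANGEROUS.search(line):
        print(f"line {line_no}: possible weakness -> {line.strip()}")
```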
The paper has a lot of great findings about the way in which weaknesses and vulnerabilities are introduced, for example: “92.6% (i.e., 10,884) of the 11,748 Codew has weaknesses introduced when their code snippets were initially created on Stack Overflow, and 69% (i.e., 8,103 out of 11,748) of the Codew has never been revised.”
I strongly recommend reading the paper and giving it to your software engineers to scan…