Is Quantum the next big thing for the masses?

But what is quantum computing? (Grover’s Algorithm)

If you are a programmer looking into quantum computing, people quickly start “dumbing it down” for you by telling you about superpositions and multiple bits in one. Well, that is not entirely true – it is a misconception.

In this video, the author explains how quantum computing works, based on the mathematics behind it. Don’t worry, it’s really approachable, without dumbing it down or mansplaining.
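The video is about Grover’s algorithm, so here is a minimal sketch of my own (not taken from the video) that simulates Grover’s search with plain NumPy. The whole quantum state is just a vector of amplitudes, and the algorithm is two reflections repeated roughly √N times – no “many values in one bit” magic.

```python
# Minimal sketch (my own, not from the video): Grover's search simulated with NumPy.
# The state is a vector of 2^n amplitudes; "superposition" means many non-zero
# entries in that vector, not one register storing many values at once.
import numpy as np

n_qubits = 4
N = 2 ** n_qubits                 # search space of N = 16 items
marked = 11                       # index of the item we are looking for

state = np.full(N, 1 / np.sqrt(N))            # uniform superposition

iterations = int(np.pi / 4 * np.sqrt(N))      # ~ pi/4 * sqrt(N) Grover iterations
for _ in range(iterations):
    state[marked] *= -1                       # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state          # diffusion: inversion about the mean

print(f"P(measuring the marked item) = {state[marked] ** 2:.3f}")   # close to 1
```

Running it shows the probability of measuring the marked item climbing to roughly 0.96 after three iterations – the √N speed-up, derived with nothing more exotic than a vector and two reflections.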

Thanks to my colleague who sent me the video!

Do we need a large model to generate good code?

arxiv.org/pdf/2504.07343

Code generation in all its forms, from solving problems to test case creation, adversarial testing, and fixing security vulnerabilities, is super-popular in contemporary software engineering. It helps engineers be more efficient in their work, and it helps managers get more out of the resources at their disposal.

However, there is a bit of a darker side to it. We can all buy GitHub Copilot or another add-in. We can even get a company to set up a dedicated instance in their cloud for us. But it 1) costs money, 2) uses a lot of energy, and 3) is probably a security risk, as we need to send our code over an open network.

The alternative is to use publicly available models and build our own add-in around them. Fully on-site, no information even leaves the premises (https://bit.ly/4iMNgrU). But how good are these models?
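To make “fully on-site” concrete, here is a minimal sketch assuming the Hugging Face transformers library and the publicly available microsoft/phi-4 checkpoint; the paper’s own evaluation harness may look different. Once the weights are cached locally, nothing leaves the machine.

```python
# Minimal sketch of an on-site code-generation setup, assuming the Hugging Face
# "transformers" library and the public microsoft/phi-4 checkpoint; the authors'
# actual harness may differ. After the weights are downloaded, no code or
# prompts leave the premises.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/phi-4",   # ~14B parameters; fits a single 4090/5090 when quantized
    torch_dtype="auto",
    device_map="auto",         # place the model on the local GPU
)

prompt = "Write a Python function that checks whether a string is a palindrome."
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```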

In this paper, the authors studied models in the class that we can run on a modern desktop computer. Yes, we need a GPU for them, but a 4090 or 5090 would be enough. They tested Llama, Gemma 2, Gemma 3, DeepSeek-R1 and Phi-4. They found that the Phi-4 model was a bit worse than OpenAI’s o3-mini-high, but it was very close. The o3-mini-high got ca. 87% correctness with pass@3, and Phi-4 achieved 64%. Yes, not exactly the same, but still damn close.
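For reference, pass@3 simply asks whether at least one of three sampled solutions passes the tests. The standard unbiased estimator from the code-generation benchmark literature (assuming n samples per problem, of which c pass) looks like this:

```python
# Sketch of the standard unbiased pass@k estimator used in code-generation
# benchmarks: the probability that at least one of k solutions, drawn from n
# generated samples of which c passed the unit tests, is correct.
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """pass@k = 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer failing samples than k, so at least one success is guaranteed
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Hypothetical example: 10 samples per problem, 4 of them pass the tests.
print(f"pass@3 = {pass_at_k(n=10, c=4, k=3):.3f}")
```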

I suggest reading the paper to everyone looking into the possibility of using these models on their own.

Requirements and AI

The last few months took a lot of my energy, as I transitioned from administrative duties to more research-oriented ones. Although I like blogging a lot, there was simply no time left for it. Well, I did write, and there is a new book coming out soon, so here is a preview of what the book will be about.

Among other things, this presentation shows how we managed to develop a tool that helps one of the Software Center companies keep its market leadership in standardization and requirements.

Enjoy!

New kids on the block, or are they?

A bit of a different blog post today. I’ve just finished a course that I teach to second-year undergraduate students – embedded and real-time software systems. I love to see how my students grow from not knowing anything about C to programming embedded systems with interrupts, serial communication between two Arduinos and using the preprocessor to implement advanced variability.

In this blog post, however, I want to write a bit about the future of software engineering as I see it. Everyone talks about AI and how it will take our jobs and reduce the need for software engineers. It will, no doubt about that. What it will not do is take the jobs of the BEST programmers on the market. If you are a great designer and software engineer, you will become even better, and you will take jobs from everyone else.

This will happen only if we engage in competition. We cannot just rely on ChatGPT, DeepSeek or Manus to write our software and texts. We need to be the best programmers with these tools – faster than anyone else, more secure than anyone else and more innovative than anyone else. That means that we need to get closer to our customers. We need to understand them better than they understand themselves, and we need to do it in an ethical way – we cannot treat our customers as products; we need to treat them as people.

The same goes for our stakeholders. In my course, my stakeholders are my head of department, my dean, my boss and my students. The students are the most important ones. I am here to help them grow, and I am privileged when they come to my lectures, but I cannot force them. I need to make sure that I enrich their days and that they feel my lectures are worth their while. I hope that I deliver – I see that most of them come to the lectures, and most of them are happy.

We must engage in competition a bit more – the best ones must feel that they have deserved it. Otherwise, what’s the point of being the best if everyone else is also the best?

Let’s make 2025 an Action Research year!

Image by Haeruman from Pixabay

Guidelines for Conducting Action Research Studies in Software Engineering

Happy 2025! Let’s make it a great year full of fantastic research results and great products. How to achieve that goal? Well, let’s take a look at this paper about guidelines for conducting action research.

These guidelines are based on my experience of working as a software engineer. I started my career in industry, and even after moving to academia I stayed close to the action – where software gets done. Reflecting on the previous years, I looked at my GitHub profile and realized that only two repositories are used in industry. Both are used by my colleagues from Software Center, who claim that this software provided them with new, cool possibilities. I need to create more of this kind of impact in 2025.

Let’s make 2025 an Action research year!

AI should challenge…

https://dl.acm.org/doi/full/10.1145/3649404

We often talk about GenAI as if it were going to replace us. Well, maybe it will, but given what I have seen in programming, it will not happen tomorrow. GenAI is good at supporting and co-piloting human programmers and software engineers, but it does not solve complex problems such as architectural design or algorithm design.

In this article, the authors pose an alternative thesis: GenAI should challenge humans to be better and to unleash their creativity. They identify uses of AI that provoke us, for example suggesting better headlines for articles, pointing out untested code, or flagging dead code and other issues.
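To make one of these provocations concrete, here is a deliberately simple, non-AI baseline of my own (not from the article) that flags functions defined but never referenced in a Python module – a rough proxy for dead code. An AI assistant would go much further, but the challenge can start this simply.

```python
# Hypothetical baseline for the "flag dead code" provocation (my own sketch, not
# from the article): find functions that are defined but never called in a module.
import ast

SOURCE = """
def used():
    return 42

def unused():          # never called anywhere in this module
    return used() + 1

print(used())
"""

tree = ast.parse(SOURCE)
defined = {node.name for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)}
called = {
    node.func.id
    for node in ast.walk(tree)
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
}

for name in sorted(defined - called):
    print(f"Possibly dead code: function '{name}' is defined but never called.")
```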

They finish the article with the argument that we, the universities, need to be better at teaching critical thinking. So, let’s do that from the new year!

What developers want from AI…

https://dl.acm.org/doi/10.1145/3690928

In the days just before X-Mas, I sat down to read the latest issue of Communications of the ACM. There are a few very interesting articles there, starting with a piece by Moshe Vardi on the concept of theoretical computer science, through an interesting text on AI, to the article that I’m writing about now.

The starting point of this article is the fact that we, software engineers, are taught that we should talk to our customers, discover requirements together with them and validate our products together with them. At the same time, we design AI Engineering software without this in mind. A lot of start-ups (I will not mention any, but there are many) rush into providing tools that use LLMs to support software development tasks such as programming. However, we do not really know what the developers want.

In this article, they present a survey of almost 1,000 developers on what they want. Guess what – programming is NOT in the top three on this list. Testing, debugging, documentation and code analysis are the top requests. Developers enjoy creating code; what they do not enjoy is finding bugs or testing the software – it takes time and is not extremely productive. Yes, it feels great when you find your bug, and yes, it feels great when the tests finally pass, but it feels even greater when you work on a new feature or requirement.

We follow the same principle in Software Center. When creating new tools, we always ask the companies what they really need and how they need it. Right now, we are working on improving the process of debugging and defect analysis in CI/CD. We started with a survey. You can find it here. Please register if you want to see the results of the survey – and contribute!

With this, I would like to wish you all a Merry Christmas and a Happy New Year. Let’s make 2025 even better than 2024!

Nexus… book review

Nexus: A Brief History of Information Networks from the Stone Age to AI – Yuval Noah Harari (Amazon.se)

I’m a big fan of Yuval Noah Harari’s work. A professor who can write books like no one else, one of my role models. I’ve read Sapiens, Homo Deus and 21 Lessons… now it was time for Nexus.

The book is about information networks and AI. Well, mostly about information networks and storytelling. AI is there, but not as much as I wanted to see. Not to complain – Harari is a humanist and social scientist, not a software engineer or computer scientist.

The book discusses what information really is and how it evolves over time. It focuses on storytelling and providing meaning for the data and the information. It helps us to understand the power of stories and the power of information – one could say that the “pen is mightier than the sword”, and this book delivers on that.

I recommend it as a read over X-Mas, now that the holidays are coming.

Quantum software engineering

IEEE Xplore Full-Text PDF:

Quantum computing has been around for a while now. It has been primarily a playground for physicists and computer scientists close to mathematics. The major issue was that the error rates and instability of the quantum bits prevented us from using this paradigm on a larger scale (at least that is how I understand it).

Now, it seems that we are getting close to the commercialization of this approach. Several companies are developing quantum chips that will allow us to use more of this technology in more fields.

The paper that I want to bring up today discusses what kind of challenges we, software engineers, can solve in quantum computing – and it is not programming. We need to work more on requirements, architecture, software reuse and quality. So, basically, the typical software engineering aspects.

BTW: On the 12th of December, we have a workshop on quantum computing in Software Center – Reporting workshop: The end of Software Engineering – as we know it – Software Center

When it gets too much, or Revenge of the Tipping Point…

Image by Katja S. Verhoeven from Pixabay

https://www.bokus.com/bok/9780316575805/revenge-of-the-tipping-point-overstories-superspreaders-and-the-rise-of-social-engineering/

I’ve just finished reading this great book about the way in which the tipping point tips to the wrong side. It’s mostly about the law of “the large effect of the few”, as Malcolm Gladwell puts it. In short, this law means that in certain situations, it is a minority that is responsible for large effects. For example, the minority of old, badly maintained cars that contribute over 55% of the pollution in one of the US cities, or the single superspreader who ends up in the very specific conditions that allow them to spread the COVID virus at the beginning of the pandemic.

Now, we see this a lot in software engineering when we look at the tooling that we use. Let’s take the CI/CD tool Jenkins as an example. It was one of many different tools on the market at the time. It was not even the major one, but a sibling of a professional tool maintained by Oracle (if I recall correctly). Yet it became very popular and the other tools did not. Since they were siblings, they were not worse, not better either; maybe a little different. What made it tip was the adoption of the tool in the community. A few superspreaders started to use it and discovered how good it was for automating CI/CD tasks.

I see a similar pattern in AI today. What was it that tipped the use of AI? IMHO it was a few things:

  1. Google’s use of LSTMs in Search – since there was commercial value, it made sense to adopt it. Commercial adoption means business value, improvement and management focus (funding).
  2. Big data – after almost a decade of talking about big data, collecting it and indexing it, we were ready to provide the data-hungry models with the data they needed to do something useful.
  3. HuggingFace – our ability to share models and use them without needing costly GPUs and large (and good) datasets.
  4. Access to competence – since we have so many skilled computer scientists and software engineers, it was easy to get hold of the competence needed to turn ideas into products. Google’s DeepMind is a perfect example of this. The people behind it received the Nobel Prize.

Well, the rest is history, as they say… But what will the next invention on the verge of the tipping point be?