The last few months took a lot of my energy, as I transitioned from administrative duties to more research-oriented ones. Although I like blogging a lot, there was simply no time left for it. Well, I did write, and there will be a new book coming out soon, so here is a preview of what the book will be about.
On top of that, this presentation shows how well we managed to develop a tool that helps one of the Software Center companies keep its market leadership in standardization and requirements.
A bit of a different blog post today. I’ve just finished a course that I teach to 2nd-year undergraduate students – embedded and real-time software systems. I love to see how my students grow from not knowing anything about C to programming embedded systems with interrupts, serial communication between two Arduinos and using the preprocessor to implement advanced variability.
In this blog post, however, I want to write a bit about the future of software engineering as I see it. Everyone talks about AI and how it will take our jobs and reduce the need for software engineers. It will, no doubt about that. What it will not do is take the jobs of the BEST programmers on the market. If you are a great designer and software engineer, you will be even better, you will take jobs from everyone else.
This will happen only if we engage in competition. We cannot just rely on ChatGPT, DeepSeek or Manus to write our software and texts. We need to be the best programmers with these tools – faster than anyone else, more secure than anyone else and more innovative than anyone else. That means that we need to get closer to our customers. We need to understand them better than they understand themselves, and we need to do it in an ethical way – we cannot treat our customers as products; we need to treat them as people.
The same goes for our stakeholders. In my course, my stakeholders are my head of department, my dean, my boss and my students. The students are the most important ones. I am here to help them grow, and I am privileged when they come to my lectures, but I cannot force them. I need to make sure that I enrich their days, that they feel that my lectures are worth their while. I hope that I deliver – I see that most of them come to the lectures, and most of them are happy.
We must engage in competition a bit more – the best ones must feel that they have deserved it. Otherwise, what’s the point of being the best if everyone else is also the best?
In the work with generative AI, there is a constant temptation to let the AI take over and do most of the work. There are even ways to do that in software engineering, for example by linking code generation with testing.
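As a minimal sketch of what linking code generation with testing could look like: the model proposes an implementation, we run a test suite against it, and we feed any failure back into the next prompt. Here `ask_llm` is a stand-in stub that I made up for illustration – in practice it would call a real model API.

```python
def ask_llm(prompt: str) -> str:
    # Stub standing in for a real LLM call: pretend the model
    # returns a candidate implementation for the task.
    return "def add(a, b):\n    return a + b\n"

def run_tests(code: str):
    """Execute the candidate code and run a tiny test suite against it.
    Returns an error message on failure, or None if all tests pass."""
    namespace = {}
    try:
        exec(code, namespace)
        assert namespace["add"](2, 3) == 5
        assert namespace["add"](-1, 1) == 0
    except Exception as exc:
        return repr(exc)
    return None

def generate_until_green(task: str, max_attempts: int = 3) -> str:
    # The loop that couples generation with testing: retry with the
    # failure message as feedback until the tests pass.
    prompt = task
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        error = run_tests(code)
        if error is None:
            return code
        prompt = f"{task}\nThe previous attempt failed with: {error}"
    raise RuntimeError("no passing candidate found")

print(generate_until_green("Write a function add(a, b)."))
```

The point of the sketch is that the tests, not the human, close the feedback loop – which is exactly where the temptation to let the AI take over comes from.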
In this HuggingFace blog, the authors describe an autonomous agent framework that can automate a lot of tasks. They provide a very nice description of the levels at which these agents operate; here is the table, quoted directly from the blog:
| Agency Level | Description | How that’s called | Example Pattern |
|---|---|---|---|
| ☆☆☆ | LLM output has no impact on program flow | Simple processor | process_llm_output(llm_response) |
| ★☆☆ | LLM output determines basic control flow | Router | if llm_decision(): path_a() else: path_b() |
| ★★☆ | LLM output determines function execution | Tool call | run_function(llm_chosen_tool, llm_chosen_args) |
| ★★★ | LLM output controls iteration and program continuation | Multi-step Agent | while llm_should_continue(): execute_next_step() |
| ★★★ | One agentic workflow can start another agentic workflow | Multi-Agent | if llm_trigger(): execute_agent() |
Source: HuggingFace
I like the model, and I’ve definitely done levels one and two, and maybe parts of level three. With this framework, you can do level three very easily, so I recommend taking a look at it.
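To make the ★★★ row of the table concrete, here is a toy illustration of the multi-step agent pattern, where the model decides whether to continue and what to run next. The `llm_should_continue` and `execute_next_step` names come from the table; the decision logic here is scripted, not a real LLM.

```python
class ToyAgent:
    """Toy multi-step agent: the 'model' controls iteration."""

    def __init__(self, plan):
        self.plan = list(plan)  # steps the "model" wants to run
        self.log = []

    def llm_should_continue(self) -> bool:
        # In a real agent this would be an LLM call deciding whether
        # the task is done; here we continue while steps remain.
        return bool(self.plan)

    def execute_next_step(self) -> None:
        step = self.plan.pop(0)
        self.log.append(step())

    def run(self):
        # The exact pattern from the table:
        # while llm_should_continue(): execute_next_step()
        while self.llm_should_continue():
            self.execute_next_step()
        return self.log

agent = ToyAgent([lambda: "searched docs",
                  lambda: "wrote draft",
                  lambda: "final answer"])
print(agent.run())
```

What separates this level from a router or a tool call is that the loop condition itself is delegated to the model – the program no longer knows in advance how many steps it will take.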
Maybe this will be the topic of the next Hackathon we do at Software Center, who knows… there is one coming up on March 20th.
AI has transformed the way we develop software and create new products. It is here to stay, and it will only grow bigger. This year, one of the important events was CES, where Nvidia’s CEO showed the latest developments.
Well, no surprise that generative AI is the key. Generating frames, worlds, programs, dialogues, agents – basically anything. The newest GPUs generate 33 million pixels out of 2 million real ones. That is a tremendous improvement compared to the previous generation (a 4x improvement).
The coolest announcement is actually not the hardware but the software. The world models, instead of language models, are probably the coolest software part. Being able to tokenize any kind of modality and make the model generative leads to really innovative areas. Generating new driving scenarios and training robots to imitate the best cooks, drivers and artists are only a few of the examples.
And finally – robots, robots and robots. According to the keynote, this is the technology that is on the verge of becoming mainstream. Humanoid robots that allow for brownfield development are the key announcement here.
Now, the keynote is a bit long, but it’s definitely worth watching.
Happy 2025! Let’s make it a great year full of fantastic research results and great products. How to achieve that goal? Well, let’s take a look at this paper about guidelines for conducting action research.
These guidelines are based on my experience of working as a software engineer. I started my career in industry, and even after moving to academia I stayed close to the action – where software gets done. Reflecting on the previous years, I looked at my GitHub profile and realized that only two repositories are used in industry. Both are used by my colleagues from Software Center, who say that this software gave them new, cool possibilities. I need to create more of this kind of impact in 2025.
We often talk about GenAI as if it is going to replace us. Well, maybe it will, but given what I have seen in programming, it will not happen tomorrow. GenAI is good at supporting and co-piloting human programmers and software engineers, but it does not solve complex problems such as architectural design or algorithm design.
In this article, the authors pose an alternative thesis: GenAI should challenge humans to be better and unleash their creativity. In the piece, they identify uses of AI as a provocateur – suggesting better headlines for articles, pointing out untested code or dead code, and posing other kinds of challenges.
They finish the article with the thesis that we, the universities, need to get better at teaching critical thinking. So, let’s do that from the new year!
In this time just before X-Mas, I sat down to read the latest issue of the Communications of the ACM. There are a few very interesting articles there, starting with a piece by Moshe Vardi on the concept of theoretical computer science, through an interesting piece on artificial intelligence, to a very interesting article that I’m writing about now.
The starting point of this article is the fact that we, software engineers, are taught that we should talk to our customers, discover requirements together with them and validate our products with them. At the same time, we design AI engineering software without this in mind. A lot of start-ups (I will not mention any, but there are many) rush into providing tools that use LLMs to support software development tasks such as programming. However, we do not really know what developers want.
In this article, they present a survey of almost 1,000 developers on what they want. Guess what – programming is NOT in the top three on this list. Testing, debugging, documentation and code analysis are the top requests. Developers enjoy creating code; what they do not enjoy is finding bugs or testing the software – it takes time and is not very productive. Yes, it feels great when you find your bug, and yes, it feels great when the tests finally pass, but it feels even greater when you work on a new feature or requirement.
We follow the same principle in Software Center. When creating new tools, we always ask the companies what they really need and how they need it. Right now, we are working on improving the process of debugging and defect analysis in CI/CD. We started with a survey. You can find it here. Please register if you want to see the results of the survey – and contribute!
With this, I would like to wish you all a Merry Christmas and a Happy New Year. Let’s make 2025 even better than 2024!
I’m a big fan of Yuval Noah Harari’s work. A professor who can write books like no one else, one of my role models. I’ve read Sapiens, Homo Deus and 21 Lessons… now it was time for Nexus.
The book is about information networks and AI. Well, mostly about information networks and storytelling. AI is there, but not as much as I wanted to see. Not that I’m complaining – Harari is a humanist and a social scientist, not a software engineer or computer scientist.
The book discusses what information really is and how it evolves over time. It focuses on storytelling and providing meaning for the data and the information. It helps us to understand the power of stories and the power of information – one could say that the “pen is mightier than the sword”, and this book delivers on that.
I recommend it as reading over X-Mas, now that the holidays are coming.
Quantum computing has been around for a while now. It has been primarily a playground for physicists and for computer scientists close to mathematics. The major issue has been that the error rates and instability of quantum bits prevented us from using this paradigm on a larger scale (at least as I understand it).
Now, it seems that we are getting close to the commercialization of this approach. Several companies are developing quantum chips that will allow us to use this technology in more fields.
The paper that I want to bring up today discusses what kind of challenges we, software engineers, can solve in quantum computing – and it is not programming. We need to work more on requirements, architecture, software reuse and software quality. So, basically the typical software engineering aspects.