Machine learning in compilers???

BenchPress: A Deep Active Benchmark Generator (arxiv.org)

To be honest, I did not expect machine learning to be part of a compiler… I’ve been programming since I was 13, learned how compilers work during my second year at university, and even wrote one (well, without any ML, that is).

Why would a compiler need machine learning, I wondered. It’s a pretty simple program – it takes a grammar, parses the source code, and translates it to machine code (or some other low-level representation). It has to be deterministic, as the same program cannot compile to two different machine codes. That’s just the way it is…

It turns out that machine learning is used in modern compilers to guide optimizations. These optimizations exist to take advantage of modern processors, their registers, and their long instruction sets. They are meant to make the generated machine code more parallel, so that modern multi-core, multi-threaded processors can put every one of their cores to work.
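Just to make the idea concrete for myself: one common way ML enters a compiler is as a learned heuristic that replaces a hand-written rule, e.g., picking a loop-unroll factor from code features. The sketch below is only a toy illustration with made-up feature vectors and a generic scikit-learn model – it is not from the paper and not how any real compiler is implemented.

```python
# Minimal sketch (assumption: hand-made loop features and a toy cost model,
# not any real compiler's internals). A learned model predicts which
# optimization setting - here, a loop-unroll factor - is likely to be fastest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy training data: features of previously compiled loops
# (trip count, body size in instructions, number of memory accesses)
# paired with the unroll factor that measured fastest on real hardware.
X_train = np.array([
    [1000,   4, 1],
    [16,    32, 8],
    [100000, 2, 0],
    [64,     8, 2],
])
y_best_unroll = np.array([8, 1, 16, 4])

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X_train, y_best_unroll)

# At compile time, the "compiler" queries the model instead of a fixed rule.
new_loop_features = np.array([[512, 6, 1]])
predicted_unroll = int(round(model.predict(new_loop_features)[0]))
print(f"Suggested unroll factor: {predicted_unroll}")
```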

In this paper, the authors use language models like BERT to generate benchmarks that allow different optimization techniques to be compared. This means that the same compiler can test itself against these benchmarks in order to find the best possible solution. Clever!
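The core mechanism behind such generation is masked infilling: hide part of a program and let a BERT-style model propose plausible completions. The snippet below uses the Hugging Face fill-mask pipeline with a plain English BERT as a stand-in – it is not the authors’ BenchPress model, which is trained on source code, but it shows the infilling idea.

```python
# Toy illustration of masked infilling with a BERT-style model (assumption:
# the generic "bert-base-uncased" checkpoint, NOT the code-trained model
# from the BenchPress paper).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Hide one token of a program-like sentence and let the model fill it in;
# a code-trained model would do the same over real source code tokens.
for candidate in fill("for i in range(n): total [MASK] data[i]"):
    print(candidate["token_str"], candidate["score"])
```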

However, that is it from me. I’m not planning on writing a compiler, let alone an optimizer. I may use BERT models in the future to generate programs, but I will most probably stop there. But, in case you were wondering – there is ML in compilers 🙂

Testing deep neural networks (article highlight)

A Probabilistic Framework for Mutation Testing in Deep Neural Networks (arxiv.org)

Testing of neural networks is still an open problem. Due to the complexity of their connections and their probabilistic nature, it is difficult to find defects. Although there are plenty of approaches, e.g., using autoencoders or surprise adequacy measures, testing of neural networks is still something of a mystery to me.
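To give a feel for one of those approaches: distance-based surprise adequacy asks how “surprising” a new input is compared to the training data, based on layer activations. The sketch below follows the general idea (as in Kim et al.’s work) and assumes activations have already been extracted as numpy arrays – it is not tied to any particular implementation.

```python
# Minimal sketch of distance-based surprise adequacy (assumption: activations
# from some hidden layer are already available as numpy arrays).
import numpy as np

def distance_based_surprise(activation, train_acts, train_labels, predicted_label):
    """Higher values = the input is more 'surprising' w.r.t. the training data."""
    same = train_acts[train_labels == predicted_label]
    other = train_acts[train_labels != predicted_label]
    # Distance to the closest training activation of the same class...
    nearest_same = same[np.argmin(np.linalg.norm(same - activation, axis=1))]
    dist_same = np.linalg.norm(activation - nearest_same)
    # ...relative to the closest activation of any other class.
    dist_other = np.min(np.linalg.norm(other - nearest_same, axis=1))
    return dist_same / dist_other
```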

I could say that the topic was under my radar for a while. I actually thought that there was not much need for testing research in software engineering, even though I run two projects with testing components. For one, I thought that deep learning is basically a “rabbit hole” – the more you test it, the more interesting properties you discover. I’ve tried to use testing to understand what kind of things the models learn, but I’m not sure this is the right approach. I’m afraid it may never be – deep learning models learn something, we can evaluate it, but we can never really fully understand what the model has learned.

Now, this article uses mutation testing to find the best test suite for validating the models. Well, it does more than that. It offers a framework where we can use three different models to evaluate the mutants and choose the ones that are expected to provide the best results. It is built on top of frameworks/models like DeepCrime (link here) and provides a better selection approach. So far, the framework has been evaluated on the standard dataset – MNIST – but I hope that it will be extended to other datasets in the future.
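For readers new to the idea, the core of mutation testing for DNNs is: create slightly perturbed variants (mutants) of a trained model and check how many of them the test suite can “kill”, i.e., distinguish from the original. The sketch below shows only that core intuition with a hypothetical weight-perturbation operator and a generic predict interface – the paper’s framework works differently and adds a probabilistic decision procedure on top.

```python
# Minimal sketch of mutation testing for DNNs (assumptions: a hypothetical
# model object exposing `model.weights` as a list of numpy arrays and
# `model.predict(X) -> labels`; not the paper's framework).
import copy
import numpy as np

def mutate_weights(model, scale=0.05, rng=np.random.default_rng(0)):
    """Create a mutant by adding small Gaussian noise to every weight matrix."""
    mutant = copy.deepcopy(model)
    for layer in mutant.weights:        # hypothetical weight container
        layer += rng.normal(0.0, scale, size=layer.shape)
    return mutant

def mutation_score(model, mutants, X_test, y_test):
    """Fraction of mutants 'killed': their predictions differ from the original
    model on at least one test input the original classified correctly."""
    base_pred = model.predict(X_test)
    correct = base_pred == y_test
    killed = 0
    for m in mutants:
        if np.any(m.predict(X_test)[correct] != base_pred[correct]):
            killed += 1
    return killed / len(mutants)
```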

Testing deep learning systems in automotive software – article highlight


IEEE Xplore Full-Text PDF:

The summer is in full swing, and after a few weeks of leisure and relaxation, I’m back to work. In one of our research projects, we examine how to test deep learning systems for computer vision in autonomous driving. It’s been a challenge, as the field is rather scattered. There is a lot of work on testing DL systems, but without the specifics of safety or autonomous driving. At the same time, there are plenty of studies about testing autonomous systems – usually using simulations.

So, in this paper, the authors focus on using metamorphic testing to test DL networks. By manipulating the input images, they observe how the network reacts and what the predicted behavior is. This helps to establish some sort of boundaries for when the system is safe to operate and how it can behave in practice. It also gives an understanding of which neurons are essentially activated in the network (which is not the same as network coverage).
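The underlying metamorphic relation is simple: a small, label-preserving change to the input (say, a slight brightness shift) should not change the prediction. The sketch below is just that idea in a few lines, assuming a generic classifier with `model.predict` and images as numpy arrays in [0, 1] – it is not the tool from the paper.

```python
# Minimal sketch of a metamorphic test for an image classifier (assumptions:
# generic `model.predict(images) -> labels` and images scaled to [0, 1]).
import numpy as np

def brighten(images, delta=0.1):
    """Metamorphic relation: a small brightness change should not change labels."""
    return np.clip(images + delta, 0.0, 1.0)

def metamorphic_violations(model, images):
    original = model.predict(images)
    transformed = model.predict(brighten(images))
    # Inputs where the relation is violated hint at regions where the
    # system may not be safe to operate.
    return np.flatnonzero(original != transformed)
```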

The paper presents a tool for that purpose, which is something that I really need to try on our autoencoders from the DeVELOP project.