Reviewing source code is something I have talked about a few times already. It is an activity almost as old as software engineering itself. In the early days, it was done manually just before a release, as a complement to testing.
Then came Google with its software engineering practices, something that is known today as "shift left": software quality assurance activities should happen close to when the source code is actually written. Microsoft followed with "Modern Code Review", advocating that code be reviewed before it is committed to the main branch.
Today we are pretty good at reviewing source code; it is an activity that is done rather quickly. As the amount of code-review data grows, we are getting more eager to apply AI and ML to the task, and this article is a very good example of that. The authors leverage seq2seq code summarization techniques to match source code with review comments and then with code repair suggestions. The results are promising: the models provide a relevant suggestion in roughly one out of five cases, and in one out of three if we consider the top 10 suggestions.
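To give a feel for what such a seq2seq pipeline looks like in practice, here is a minimal sketch in Python using the Hugging Face transformers API. The checkpoint name, input hunk, and generation settings are my own assumptions for illustration, not the authors' setup; a model actually fine-tuned on code-review data would be needed to get meaningful suggestions.

```python
# Minimal sketch, NOT the paper's implementation: a generic seq2seq model
# generating reviewer-style suggestions for a code hunk.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint (assumption); swap in a model fine-tuned on code-review data.
checkpoint = "Salesforce/codet5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# A code hunk submitted for review; the model generates candidate comments/repairs.
code_hunk = """
def divide(a, b):
    return a / b
"""

inputs = tokenizer(code_hunk, return_tensors="pt", truncation=True)
outputs = model.generate(
    **inputs,
    max_length=64,
    num_beams=10,             # beam search so we can inspect several candidates
    num_return_sequences=10,  # mirrors the "top 10 suggestions" evaluation setting
)
for i, seq in enumerate(outputs, 1):
    print(f"{i}. {tokenizer.decode(seq, skip_special_tokens=True)}")
```

Ranking several beam-search candidates is what makes the top-1 versus top-10 comparison above possible: the reviewer is shown a list and the evaluation checks whether a relevant suggestion appears anywhere in it.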
I read this article with huge interest and will try this myself. All data and code are publicly available, so you can play around with this technology on your own system. Spoiler alert, though: training takes one week per pass on a TPU.