AI Ethics – a programmer’s perspective

I’ve been working with machine learning for a while and have followed the discussion about AI and ethics. From the philosophical perspective, the discussion is very much problem-oriented: it is about the “paper cuts” from using AI.

I’ve recently looked at an article from SDTimes (sdtimes.org) about AI ethics and its early days (https://sdtimes.com/ai/ai-ethics-early-but-formative-days/). I’ve also looked at the book behind this article, The Big Nine by Amy Webb (https://www.amazon.com/Big-Nine-Thinking-Machines-Humanity/dp/1541773756). It seems that the discussion there misses an important point: AI is based on machine learning algorithms, which are applied statistical methods.

This applied nature of AI means that algorithms use data to make decisions. For me, as a programmer, this poses an important challenge: how can I know what is ethical and what is not if it is not in the data? What does ethics mean in terms of programming, and how can I evaluate it?


I can break it down into a few programming challenges:

  • Requirements on ethics – how can requirements on ethics be expressed?
  • Measurements of ethics – how can we measure that something is ethical or not?
  • Implementation and traceability of ethics – where does ethics get implemented? Should I look for it in the code? Where?

For the first part, philosophers could help a great deal. They can point us in the direction of how to reason about ethics and what kind of data we should collect or use when training ML algorithms.

For the second part, we as software engineering researchers can help. Once we know what ethics can be, we can quantify it. We could even use statistics to approximate the ethics of a given algorithm. However, I have not seen any approach for that yet.
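
To make this concrete: once we have decided which property matters, it becomes measurable. Below is a minimal, hypothetical sketch (the decisions, the protected attribute and the acceptance threshold are all invented) that treats one ethical requirement as a statistical property of a model’s decisions, in this case a group-fairness gap.

```python
# A hypothetical sketch: quantifying one ethical property (group fairness)
# of a trained classifier's decisions. The data and threshold are made up.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return abs(rate_a - rate_b)

# Decisions produced by some ML model, and a (hypothetical) protected attribute.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(decisions, group)
print(f"Demographic parity gap: {gap:.2f}")  # e.g. treat > 0.2 as "not acceptable"
```

This is of course only one possible reduction of “ethics” to a number; deciding which reduction is appropriate is exactly the first challenge above.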

Finally, if we know how to measure ethics, we can try to link it to code and approximate some sort of traceability of ethics in the program code, at least to start with. Later we could even trace ethics requirements in the code, just as we profile functions for resource usage and trace safety requirements.
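
To illustrate what such traceability could look like in practice, here is a small, hypothetical sketch: functions are tagged with the ethics requirement they are supposed to implement, and the tags are collected into a traceability report, much as we already do for safety requirements. The requirement IDs and functions are invented for illustration.

```python
# A sketch of requirement traceability in code: functions are tagged with the
# (hypothetical) ethics requirement IDs they implement, and a report collects them.
from collections import defaultdict

ETHICS_TRACE = defaultdict(list)

def implements(requirement_id: str):
    """Decorator that records which function implements which ethics requirement."""
    def wrapper(func):
        ETHICS_TRACE[requirement_id].append(func.__qualname__)
        return func
    return wrapper

@implements("ETH-001: decisions must be explainable to the end user")
def explain_decision(features, weights):
    return sorted(zip(features, weights), key=lambda fw: abs(fw[1]), reverse=True)

@implements("ETH-002: protected attributes must not be used as model input")
def drop_protected_attributes(record: dict) -> dict:
    return {k: v for k, v in record.items() if k not in {"gender", "age"}}

for requirement, functions in ETHICS_TRACE.items():
    print(requirement, "->", functions)
```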

Well, these are just some of my thoughts on the topic. As I said in the title, they are from the perspective of a programmer and researcher applying ML.

For further reading, I recommend a great piece of work: AI judges and juries – Communications of the ACM, Vol. 61, Issue 12.

How do software engineers work with ML? — an interesting paper from Microsoft

Machine learning is one of the current hot areas. AI is believed to be the next big breakthrough, and machine learning is the technology behind AI that makes it all possible.

However, ML is also a technology: it is a software algorithm and a product that needs to be developed. It is true that developing ML systems has become much easier in recent years, since TensorFlow, PyTorch and other frameworks are available for free. So, is the problem of developing ML systems solved once we have these frameworks?

No, it’s actually far from that. We still need software engineers to design, implement, deploy and OPERATE these systems in a robust way.
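
To illustrate why the frameworks alone do not solve the problem: the modelling part really is a handful of lines nowadays. The sketch below (with made-up data) is roughly all the PyTorch code a tiny model needs, and none of it covers data pipelines, deployment, monitoring or retraining, which is where the engineering effort goes.

```python
# The "easy" part that frameworks give us: a tiny PyTorch model on made-up data.
# Everything around it (data pipelines, deployment, monitoring) is still engineering work.
import torch
from torch import nn

X = torch.randn(100, 4)                      # hypothetical feature matrix
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # hypothetical labels

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCELoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```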

In our research, we studied the adoption of ML in industry in the days before TensorFlow, when ML was still perceived as “advanced statistics” and deep learning was still called “neural networks” – look at the PDF, and another one here.

Now, as we observe the exponential adoption of ML in industry, we can also see the big companies coming up with mature processes for how to use ML. One example is the paper from Microsoft. The paper describes some of the challenges and, most importantly, the workflow of developing an ML system. This workflow is focused a lot on data – which is metrics 🙂
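
As I read it, such a workflow is essentially a data pipeline with modelling as just one stage, and every stage is a natural place to attach metrics. The sketch below uses my own simplified stage names and invented numbers, not the paper’s exact terminology.

```python
# A schematic sketch of a data-centric ML workflow: most stages are about data,
# and each stage can report its own metrics. Stage names and numbers are invented.
def collect_data(ctx):          ctx["rows_collected"] = 10_000; return ctx
def clean_data(ctx):            ctx["rows_dropped"] = 250; return ctx
def label_data(ctx):            ctx["label_agreement"] = 0.92; return ctx
def train_model(ctx):           ctx["val_accuracy"] = 0.88; return ctx
def monitor_in_production(ctx): ctx["drift_score"] = 0.05; return ctx

pipeline = [collect_data, clean_data, label_data, train_model, monitor_in_production]

context = {}
for stage in pipeline:
    context = stage(context)
    print(f"{stage.__name__}: {context}")
```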

What I would like to advocate in this post is that we need more statistics and data analysis methods in software engineering education. We should prepare our future software designers to work with data just as well as they work with programming!

Software analytics at large scale – an article review from IEEE Software in the light of our research on software development speed

In the latest issue of IEEE Software we can find an interesting article from our colleagues in Spain who work on software analytics (https://doi-org.ezproxy.ub.gu.se/10.1109/MS.2018.290101357).

Something that caught my attention is the focus of the platform and visualizations on the code review process. Review speed and the review process are important for software development companies (see our work on this topic: https://content.sciendo.com/abstract/journals/fcds/43/4/article-p281.xml). However, getting a good dashboard with these measures, one that communicates the goal in the correct way, is not as easy as it looks.

One of the problems is that the dashboard becomes too complex: too many measures related to speed can produce contradicting diagrams. For example, review speed can increase while integration speed decreases, so what happened to the overall speed?
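
To make the contradiction concrete, here is a small sketch with invented timestamps: review time and integration time can move in opposite directions, and an end-to-end lead time from submission to integration is one way to answer what happened to the overall speed.

```python
# Made-up timestamps for a few patches: submitted, review finished, integrated.
from datetime import datetime as dt
from statistics import mean

patches = [
    {"submitted": dt(2018, 11, 1), "reviewed": dt(2018, 11, 2), "integrated": dt(2018, 11, 8)},
    {"submitted": dt(2018, 11, 3), "reviewed": dt(2018, 11, 4), "integrated": dt(2018, 11, 12)},
    {"submitted": dt(2018, 11, 5), "reviewed": dt(2018, 11, 5), "integrated": dt(2018, 11, 15)},
]

review_days      = [(p["reviewed"] - p["submitted"]).days for p in patches]
integration_days = [(p["integrated"] - p["reviewed"]).days for p in patches]
end_to_end_days  = [(p["integrated"] - p["submitted"]).days for p in patches]

print("mean review time:      ", mean(review_days), "days")       # can go down...
print("mean integration time: ", mean(integration_days), "days")  # ...while this goes up
print("mean end-to-end time:  ", mean(end_to_end_days), "days")   # the number that matters
```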

Another problem is that we focus only on speed and never really discuss how this influences other aspects, e.g. code quality, product quality and maintainability.

In the best of worlds this would be easy, but the world we live in is not perfect. However, the article from IEEE Software shows that this can be achieved by providing more flexibility in the platform where the dashboard is created.

Software Analytics or Software Factfulness

I’ve recently read Hans Rosling’s book “Factfulness”, which is about the ability to recognize patterns and analyze data on a global scale. The book is about global trends like poverty, education and health; not much, if anything, about software engineering.

However, while reading it I constantly thought about its relation to software analytics. In software analytics we look at software products and activities with the goal of finding patterns and understanding what is going on with our products. We produce diagrams and knowledge that stems from these diagrams. Although we provide a lot of information about the software, I have not really seen much work on the interpretation of software analytics.

The examples of software analytics that I have stumbled upon are usually about designing diagrams and triggering decisions. They are rarely about putting things in context. How do we know that we need to take action to reduce complexity? Is our product exceptionally complex, or is it just that all software products are getting more complex?
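
One way to be more factful about such a question is to place our number in a distribution instead of judging it in isolation. A small sketch with invented complexity values:

```python
# Putting a complexity number in context: compare against a baseline distribution
# instead of judging it in isolation. All numbers below are invented.
from statistics import mean, quantiles

baseline_complexity = [12, 15, 18, 22, 9, 30, 25, 17, 21, 14]  # comparable products/modules
our_product = 24

q1, q2, q3 = quantiles(baseline_complexity, n=4)
print(f"baseline mean: {mean(baseline_complexity):.1f}, quartiles: {q1}, {q2}, {q3}")

if our_product > q3:
    print("our product is exceptionally complex relative to the baseline")
else:
    print("our product is within the usual range – maybe everything is getting complex")
```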

Maybe we should dedicate more time to discussing the consequences of software analytics, and become more factful!

Link to the book: https://www.gapminder.org/factfulness-book/

Measuring Agile Software Development

It’s been a while since I blogged last, but that does not mean our team has not been working :) Quite the contrary.

In the last few months we have been busy investigating how to measure agile software development and DevOps. We have looked at companies that are about to make the transformation from waterfall and V-model to agile, at companies that transformed recently, and at companies that made that kind of transformation a while back.

We found that the information needs evolve rapidly as companies evolve.

Companies that are willing to transform, or are in transformation, focus on measuring the improvement of their operations. They want to be faster, provide more features in shorter time frames and increase quality. They also want to measure how much they have transformed.

Companies that have just transformed focus on following agile practices, even though no single set of such practices exists. They seek measurements that are “agile” and often end up with measures of velocity, backlogs and customer responsiveness. They are happy to be agile and move on.

However, after a while they discover that these measures (i) do not have anything to do with their product and (ii) do not really address the long-term sustainability of their business, so they look to the mature agile companies.

Mature agile companies, however, focus on the products and the customers. They look at the stability of their products and at the development of their business models. They focus on architectural stability and automation rather than on velocity and story points.
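
A small sketch of the difference in focus: a velocity-style measure next to a crude, product-oriented one such as architectural stability, here approximated as the share of changes that do not touch architecturally significant modules. The sprint data and the notion of “architectural modules” are invented for illustration.

```python
# Hypothetical sprint data: completed story points (velocity-style measure) and
# changed files per sprint (used for a crude architectural-stability measure).
sprints = [
    {"points": 34, "changed_files": ["ui/login.py", "core/scheduler.py", "ui/menu.py"]},
    {"points": 38, "changed_files": ["core/scheduler.py", "core/bus.py", "ui/menu.py"]},
    {"points": 31, "changed_files": ["ui/theme.py", "ui/menu.py"]},
]
ARCHITECTURAL_MODULES = ("core/",)  # hypothetical architecturally significant parts

for i, sprint in enumerate(sprints, start=1):
    arch_changes = [f for f in sprint["changed_files"] if f.startswith(ARCHITECTURAL_MODULES)]
    stability = 1 - len(arch_changes) / len(sprint["changed_files"])
    print(f"sprint {i}: velocity={sprint['points']} points, "
          f"architectural stability={stability:.2f}")
```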

I hope that you will enjoy the presentation on this topic that we will soon give at VESC in Gothenburg.

How good is your measurement program?

A piece of our work – the MESRAM model for assessing the quality of measurement programs – has been used by our colleagues to evaluate measurement programs at two different companies: https://doi.org/10.1016/j.infsof.2018.06.006

The paper shows how easy the model is to use and that the results reflect the real quality of the programs very well.

If you are interested in these results from the Software Center metrics project, please also visit the original paper: https://doi.org/10.1016/j.jss.2015.10.051

There are also a few papers that can help you assess the quality of your KPIs and metrics: https://doi.org/10.1109/IWSM-Mensura.2016.033
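
To give a feeling for what assessing a measurement program can look like, here is a deliberately simplified, hypothetical checklist-style sketch. It is not the actual MESRAM instrument, only an illustration of how such an assessment can be scored.

```python
# A hypothetical, simplified checklist for a measurement program – not the actual
# MESRAM instrument, just an illustration of how such an assessment can be scored.
criteria = {
    "metrics are linked to stakeholder information needs": True,
    "metrics have defined owners":                          True,
    "data collection is automated":                         False,
    "metrics are regularly reviewed and retired":           False,
    "decisions are documented together with the metrics":   True,
}

score = sum(criteria.values()) / len(criteria)
print(f"measurement program score: {score:.0%}")
for criterion, fulfilled in criteria.items():
    print(("  [x] " if fulfilled else "  [ ] ") + criterion)
```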

Trailer about the metrics project

Disseminating research results in the age of YouTube is not very easy. I would say it is quite impossible. That is why I have tried to make it a bit more interesting and made this trailer using iMovie.

It’s my first edited video, so please be nice to it!

The link to the video at GU Play: https://play.gu.se/media/Metrics+theme+trailer/0_2f0dw0uz

Software Center Metrics Day – reflections…

This year, the Software Center Metrics Day took place at the end of October, just a few days before the autumn break. The program included a mix of talks from academia and industry, https://www.software-center.se/research-themes/technology-themes/development-metrics/metrics-day-2018-metrics-software-analytics-and-machine-learning/, and was focused on recent developments in the metrics area.

What I learned from the event is that it has become extremely easy to work with deep learning models. Our colleagues from Microsoft Gothenburg showed us how easy it is to use Azure to create image recognition models: something that has evolved from a research playground into really easy-to-use, powerful machine learning.

I also learned how performance measurement in the cloud works. Thanks to our colleague Philipp Leitner and his team, we could learn how to best optimize performance.

We also saw the latest and greatest from the Spotfire business analytics team, just across the water (literally!). And we saw how the new car platforms are designed and what kind of metrics are used to drive their design.

Finally, we saw how start-up companies reason about measurement and how their parent companies influence their way of measuring.

Stay tuned for the next metrics day in 2019!

Using Deep Learning to Understand Code

One of our Software Center activities is focused on reducing the effort that designers spend on code analysis and quality assurance. In this project we are looking at creating a model of high- and low-quality code in general.

Now I’ve come across this nice paper about using deep learning for finding whether code is more readable or not: https://doi.org/10.1016/j.infsof.2018.07.006

The paper is written by a research team from the City University of Hong Kong and the Beijing University of Technology. It presents a method that has been evaluated against human reviewers and is based on techniques that require no feature engineering. It shows that the method is better than previous approaches, yet requires less effort to set up.
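
Setting the paper’s exact architecture aside, the core idea of skipping feature engineering can be sketched as feeding the raw character sequence of a snippet directly into a small network that outputs a readability score. The toy model, snippets and labels below are invented and only illustrate the principle.

```python
# A minimal sketch of the "no feature engineering" idea: feed the raw character
# sequence of a code snippet into a small network that predicts readability.
# The architecture and labels are illustrative, not the ones from the paper.
import torch
from torch import nn

def encode(snippet: str, length: int = 200) -> torch.Tensor:
    """Map characters to integer ids, padded/truncated to a fixed length."""
    ids = [min(ord(c), 127) for c in snippet[:length]]
    ids += [0] * (length - len(ids))
    return torch.tensor(ids)

snippets = ["def add(a, b):\n    return a + b\n",
            "def f(x,y,z,q):\n  return (x if y else z)if q else(y if x else q)\n"]
labels = torch.tensor([[1.0], [0.0]])  # 1 = readable, 0 = not (invented labels)

X = torch.stack([encode(s) for s in snippets])

model = nn.Sequential(
    nn.Embedding(128, 16),   # character embedding
    nn.Flatten(),
    nn.Linear(200 * 16, 1),
    nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCELoss()

for _ in range(30):
    optimizer.zero_grad()
    loss = loss_fn(model(X), labels)
    loss.backward()
    optimizer.step()

print(f"readability scores: {model(X).detach().squeeze(1).tolist()}")
```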

The paper also makes it possible to reuse the code – great and very interesting reading.

In Software Center, we are creating a deep learning model that learns code quality from code review tools and can reduce the review effort by an order of magnitude. Please take a look at our presentation from the Software Center Metrics Day.

Stay tuned!

Software data fuels AI, ML and Software Analytics

I’ve talked about software analytics in the previous post, in particular about the latest issue of IEEE Software. In this post, let me introduce an interesting book for software engineers and software engineering scientists interested in software analytics: Bird, C., Menzies, T., & Zimmermann, T. (Eds.). (2015). The Art and Science of Analyzing Software Data. Elsevier.

After reading a few chapters, one conclusion emerged: modern software analytics is not about algorithms, it is about data and its collection. It is about measurement, quantification and metrics. Even the analysis of qualitative data is often done using measurements in order to speed it up.

Harvard Business Review claimed that “Big Data is Not the New Oil”, as there are fundamental differences between the scarce fossil fuel and the abundant data from software projects (https://hbr.org/2012/11/data-humans-and-the-new-oil). However, even though data is not scarce, I believe that it will fuel the software industry for at least one more decade.

Therefore, we still need to teach our students how to work with data, how to collect and analyse it, and how to assess its value. We also need to understand how to monetise the data.
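
As a small example of what “working with data” can mean in a software engineering course, students can mine the version history of their own project and summarise it with a few lines of pandas. The script below is a generic sketch that runs inside any git checkout.

```python
# A small classroom-style example: summarise commit activity from a git repository
# using pandas. Run inside any git checkout.
import subprocess
import pandas as pd

log = subprocess.run(
    ["git", "log", "--pretty=format:%an;%ad", "--date=short"],
    capture_output=True, text=True, check=True,
).stdout

commits = pd.DataFrame(
    [line.split(";", 1) for line in log.splitlines()],
    columns=["author", "date"],
)
commits["date"] = pd.to_datetime(commits["date"])

print(commits.groupby("author").size().sort_values(ascending=False).head())
print(commits.set_index("date").resample("M").size().tail())  # commits per month
```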