Transparency and explainability of AI…

Image by Sergey Gricanov from Pixabay

Transparency and explainability of AI systems: From ethical guidelines to requirements – ScienceDirect

In the era of ChatGPT and increasingly larger language models, it is important to understand how these models reason. Not only because we want to put them in safety-critical systems, but mostly because we need to know why they make things up.

In this paper, the authors draw conclusions regarding how to increase the transparency of AI models. In particular, they highlight that:

  • The AI ethical guidelines of 16 organizations emphasize explainability as the core of transparency.
  • When defining explainability requirements, it is important to use multi-disciplinary teams.

They define a four-quadrant model for the explainability of AI systems and their requirements. The model links four key questions to a number of aspects:

  1. What to explain (e.g., roles and capabilities of AI).
  2. In what kind of situation (e.g., when testing).
  3. Who explains (e.g., AI explains itself).
  4. To whom to explain (e.g., customers).

It’s an interesting read that brings explainability of AI systems to a more practical level and shows how explainability can be turned into software requirements, as sketched below.
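The paper itself does not prescribe any notation, but as a minimal sketch (with purely hypothetical names and field choices) of how the four quadrants could be captured and rendered as requirement statements, one could encode them as a small data structure:

```python
from dataclasses import dataclass


@dataclass
class ExplainabilityRequirement:
    """One explainability requirement expressed along the four quadrants."""
    what: str       # what to explain, e.g. roles and capabilities of the AI
    situation: str  # in what kind of situation, e.g. when testing
    who: str        # who explains, e.g. the AI explains itself
    to_whom: str    # to whom to explain, e.g. customers

    def as_requirement(self) -> str:
        # Render the four quadrants as a single requirement statement.
        return (f"During {self.situation}, {self.who} shall explain "
                f"{self.what} to {self.to_whom}.")


# Hypothetical example for a recommendation component.
req = ExplainabilityRequirement(
    what="the roles and capabilities of the AI component",
    situation="acceptance testing",
    who="the AI system itself",
    to_whom="the customers",
)
print(req.as_requirement())
```

Filling in the quadrants one by one in this way forces a requirements team to make explicit who explains what, to whom, and in which situation, which is the practical value the authors aim for.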

Author: Miroslaw Staron

I’m a professor of Software Engineering at the IT Faculty. I usually blog about articles I find interesting and my own reflections on the development of Software Engineering, AI, computer science and automotive software.