In the era of ChatGPT and ever-larger language models, it is important to understand how these models reason — not only because we want to deploy them in safety-critical systems, but above all because we need to know why they make things up.
In this paper, the authors draw conclusions regarding how to increase the transparency of AI models. In particular, they highlight that:
- The AI ethical guidelines of 16 organizations emphasize explainability as the core of transparency.
- When defining explainability requirements, it is important to use multi-disciplinary teams.
They define a four-quadrant model for explainability requirements of AI systems. The model links four key questions to a number of aspects:
- What to explain (e.g., roles and capabilities of AI).
- In what kind of situation (e.g., when testing).
- Who explains (e.g., AI explains itself).
- To whom to explain (e.g., customers).
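For illustration, the four questions could be captured as fields of a simple requirement record that renders as a requirement sentence. This is only a sketch of the idea; the class and field names are mine, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class ExplainabilityRequirement:
    """One explainability requirement along the four quadrant questions.

    Hypothetical structure for illustration; not the authors' notation.
    """
    what: str       # what to explain (e.g., roles and capabilities of AI)
    situation: str  # in what kind of situation (e.g., when testing)
    explainer: str  # who explains (e.g., the AI explains itself)
    audience: str   # to whom to explain (e.g., customers)

    def as_requirement(self) -> str:
        # Turn the four answers into a single requirement sentence.
        return (f"When {self.situation}, {self.explainer} shall explain "
                f"{self.what} to {self.audience}.")

req = ExplainabilityRequirement(
    what="its capabilities and limitations",
    situation="testing",
    explainer="the AI",
    audience="customers",
)
print(req.as_requirement())
# → When testing, the AI shall explain its capabilities and limitations to customers.
```

Answering all four questions forces each requirement to be concrete enough to verify, which is the practical point the paper makes.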
It’s an interesting read that brings AI systems to a more practical level and provides the means to turn explainability into software requirements.