{"id":826,"date":"2023-03-31T17:50:33","date_gmt":"2023-03-31T16:50:33","guid":{"rendered":"https:\/\/metrics.blogg.gu.se\/?p=826"},"modified":"2023-03-26T17:58:34","modified_gmt":"2023-03-26T16:58:34","slug":"transparency-and-explainability-of-ai","status":"publish","type":"post","link":"https:\/\/metrics.blogg.gu.se\/?p=826","title":{"rendered":"Transparency and explainability of AI&#8230;"},"content":{"rendered":"\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"640\" src=\"https:\/\/metrics.blogg.gu.se\/files\/2023\/03\/glass-3983411_1920-1024x640.jpg\" alt=\"\" class=\"wp-image-827\" srcset=\"https:\/\/metrics.blogg.gu.se\/files\/2023\/03\/glass-3983411_1920-1024x640.jpg 1024w, https:\/\/metrics.blogg.gu.se\/files\/2023\/03\/glass-3983411_1920-300x188.jpg 300w, https:\/\/metrics.blogg.gu.se\/files\/2023\/03\/glass-3983411_1920-768x480.jpg 768w, https:\/\/metrics.blogg.gu.se\/files\/2023\/03\/glass-3983411_1920-1200x750.jpg 1200w, https:\/\/metrics.blogg.gu.se\/files\/2023\/03\/glass-3983411_1920-1320x825.jpg 1320w, https:\/\/metrics.blogg.gu.se\/files\/2023\/03\/glass-3983411_1920.jpg 1920w\" sizes=\"(max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><figcaption>Image by <a href=\"https:\/\/pixabay.com\/users\/sw_reg_03-11532456\/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=3983411\">Sergey Gricanov<\/a> from <a href=\"https:\/\/pixabay.com\/\/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=3983411\">Pixabay<\/a><\/figcaption><\/figure>\n\n\n\n<p><a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0950584923000514?via%3Dihub\">Transparency and explainability of AI systems: From ethical guidelines to requirements &#8211; ScienceDirect<\/a><\/p>\n\n\n\n<p class=\"has-drop-cap\">In the area of ChatGPT and increasingly larger language models, it is important to understand how these models reason. Not only because we want to put them in safety-critical systems, but mostly because we need to know why they make things up. <\/p>\n\n\n\n<p>In this paper, the authors draw conclusions regarding how to increase the transparency of AI models. In particular, they highlight that:<\/p>\n\n\n\n<ul><li>The AI ethical guidelines of 16 organizations emphasize explainability as the core of transparency.<\/li><li>When defining explainability requirements, it is important to use multi-disciplinary teams.<\/li><\/ul>\n\n\n\n<p>The define a four-quandrant model for explainability of requirements and AI systems. The model links four key questions to a number of aspects:<\/p>\n\n\n\n<ol><li>What to explain (e.g., roles and capabilities of AI).<\/li><li>In what kind of situation (e.g., when testing).<\/li><li>Who explains (e.g., AI explains itself).<\/li><li>To whom to explain (e.g., customers). <\/li><\/ol>\n\n\n\n<p>It&#8217;s an interesting reading that takes AI systems to more practical levels and provide the ability to turn explainability into software requirements. <\/p>\n","protected":false},"excerpt":{"rendered":"<p>Transparency and explainability of AI systems: From ethical guidelines to requirements &#8211; ScienceDirect In the area of ChatGPT and increasingly larger language models, it is important to understand how these models reason. Not only because we want to put them in safety-critical systems, but mostly because we need to know why they make things up. 