Social Explainable AI

Communications of NII Shonan Meetings
Book | Hardcover
674 pages
2026
Springer Nature Switzerland AG (publisher)
978-981-96-5289-1 (ISBN)
CHF 74,85 incl. VAT

This open access book introduces social aspects relevant to research and development of explainable AI (XAI). The new surge of XAI responds to the societal challenge that many algorithmic approaches (such as machine learning or autonomous intelligent systems) are rapidly increasing in complexity, making it difficult for users to accept their recommendations. A large body of approaches now exists with many ideas of how algorithms should be explainable or even be able to explain their output. However, few of them consider the users' perspective, and even fewer address the social aspects of using XAI. To fill this gap, the book offers a conceptualization of explainability as a social practice, a framework for contextual factors, and an operationalization of users' involvement in creating relevant explanations.

For this, scholars across disciplines gathered at the Shonan meeting to account for how explanation generation can be tailored to diverse users and their heterogeneous goals when interacting with XAI. Social interaction is thus the key to involving the users. Accordingly, we define sXAI (social explainable AI) as systems that interact with the users in such a way that an incremental adaptation of explaining to the users is possible on the fly, along with the unfolding context of interaction, to yield a relevant explanation at the interface with both active partners, human and AI. To encourage novel interdisciplinary research, we propose to account for the following dimensions (a brief illustrative sketch follows the list):

  • Patterndness: XAI should account for different contexts that yield different social roles impacting the construction of explanations.
  • Incrementality: XAI should build on the contribution of the involved partners who adapt to each other.
  • Multimodality: XAI needs to use different communication modalities (e.g., visual, verbal, and auditory).
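
The following Python sketch is not taken from the book; it is a minimal, hypothetical illustration of how these three dimensions might surface in an interactive explanation loop. All names (Context, ExplanationState, explain, interact) are invented for the example: the social context fixes a role and a modality (patterndness), the explanation is rendered in a chosen channel (multimodality), and user feedback adapts detail level and modality turn by turn (incrementality).

# Hypothetical sketch (not from the book): a minimal dialogue loop that
# illustrates the three sXAI dimensions with invented names and types.
from dataclasses import dataclass, field
from enum import Enum


class Modality(Enum):          # Multimodality: explanations can use different channels
    VERBAL = "verbal"
    VISUAL = "visual"
    AUDITORY = "auditory"


@dataclass
class Context:                 # Patterndness: the social context shapes the explanation
    user_role: str             # e.g. "domain expert" or "lay user"
    preferred_modality: Modality = Modality.VERBAL


@dataclass
class ExplanationState:        # Incrementality: the explanation is refined turn by turn
    detail_level: int = 1
    history: list = field(default_factory=list)


def explain(prediction: str, ctx: Context, state: ExplanationState) -> str:
    """Produce one explanation turn tailored to the current context and detail level."""
    if ctx.user_role == "domain expert":
        body = f"{prediction}: feature attributions at detail level {state.detail_level}"
    else:
        body = f"{prediction}: plain-language summary at detail level {state.detail_level}"
    utterance = f"[{ctx.preferred_modality.value}] {body}"
    state.history.append(utterance)
    return utterance


def interact(prediction: str, ctx: Context, feedback: list[str]) -> list[str]:
    """Incrementally adapt the explanation to user feedback, one turn per signal."""
    state = ExplanationState()
    turns = [explain(prediction, ctx, state)]
    for signal in feedback:
        if signal == "more detail":
            state.detail_level += 1              # the user steers the depth of explaining
        elif signal == "switch modality":
            ctx.preferred_modality = Modality.VISUAL
        turns.append(explain(prediction, ctx, state))
    return turns


if __name__ == "__main__":
    ctx = Context(user_role="lay user")
    for turn in interact("loan denied", ctx, ["more detail", "switch modality"]):
        print(turn)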

This book also addresses how to evaluate social XAI systems and what ethical aspects must be considered when employing sXAI. Altogether, the book pushes forward the building of a community interested in sXAI. To increase readability across disciplines, each chapter offers rapid access to its content.


Chapter 1 TBD.- Chapter 2 TBD.- Chapter X TBD. 

Publication date
Additional info 40 illustrations, black and white
Place of publication Cham
Language English
Dimensions 155 x 235 mm
Subject area Computer Science › Software Development › User Interfaces (HCI)
Computer Science › Theory / Studies › Artificial Intelligence / Robotics
Keywords Explainable Artificial Intelligence • HCI • Human-AI Collaboration • Human Computer Interaction • Multimodal • open access • Shonan Meeting • XAI
ISBN-10 981-96-5289-8 / 9819652898
ISBN-13 978-981-96-5289-1 / 9789819652891
Condition New
Discover more from this area
Kindersachbuch über die Welt von Morgen

by Christoph Drösser

Book | Hardcover (2025)
Gabriel in der Thienemann-Esslinger Verlag GmbH
CHF 24,90
Wissensverarbeitung - Neuronale Netze

by Uwe Lämmel; Jürgen Cleve

Book | Hardcover (2023)
Carl Hanser (publisher)
CHF 48,95
was alle wissen sollten, die Websites und Apps entwickeln

by Jens Jacobsen; Lorena Meyer

Book | Hardcover (2024)
Rheinwerk (publisher)
CHF 55,85