AI-enabled decision-making is becoming mainstream in many social and organizational contexts. Despite multiple benefits, including efficiency, effectiveness, and objectivity, scholars from multiple academic disciplines have also identified various areas of concern related to AI-enabled decisions and mechanisms, including functional unreliability, epistemic opacity, privacy violations, a lack of accountability, and instances of algorithmic discrimination (Barocas and Selbst, 2016). A key instrument in achieving more trustworthy and ethical AI is to provide effective means for stakeholders to challenge the decisions and mechanisms of AI systems. As reflected in emerging AI regulation (e.g., the EU AI Act), opportunities for contestation are especially crucial in high-risk application areas (e.g., law enforcement, justice, medicine, recruitment, and autonomous driving), where problematic decisions of AI systems may have a direct negative impact on stakeholders. However, developing ways to effectively challenge aspects of AI can be equally important in cases where AI has potential indirect negative consequences for people, communities, and the environment (Crawford, 2021). While contestability of AI is implicitly addressed by various emerging norms (regulation and standards), technical mechanisms (explanations, user interface features, open-source software), and reporting procedures (data sheets, model cards, system cards), it has so far rarely been investigated in a holistic fashion incorporating interdisciplinary perspectives on the subject.
This Research Topic addresses facets of contestation in AI and treats it as a challenge to provide the necessary communicational means to enable contestation in concrete socio-technical settings. We understand contestability as the ability to contest decisions made by or with the aid of algorithmic systems, as well as the process leading to that ability. A specific realization of contestability accordingly defines what can be contested, who can contest, who is accountable, and what types of reviews are involved (Lyons, 2021). Contestation may emerge as an ex-post possibility and practice of challenging a decision in an AI system. It might also involve ex-ante strategies of predictive contestability designed into an AI system (Alfrink et al., 2022; Almada, 2019; Hirsch et al., 2017). Such contestability by design involves technical tools currently being developed in the areas of explainable AI and AI auditing, but also broader communicative strategies that support multiple actors in criticizing, reflecting on, and challenging AI mechanisms and decisions. Scholars informed by multiple theoretical approaches and scientific disciplines (Baumer et al., 2015; Hirsbrunner et al., 2022; Sengers, 2005) have conceptualized and probed approaches for contesting predictions, mechanisms, and knowledge in socio-technical algorithmic systems and processes. In the future, such approaches will have to be embedded into concrete technologies, systems, and cultures.
In this Research Topic, we bring together contributions addressing concepts, approaches, and techniques of AI contestability in the context of organizational and cross-organizational communication. This may involve interventions from research fields such as science and technology studies, organizational sociology, critical algorithm and data studies, applied ethics, legal studies, data science, software engineering, human-centered computing, and critical design.
We look forward to covering issues such as:
• communicative structures enabling/exhibiting contestation of AI decisions in concrete organizational or cross-organizational settings;
• organization and infrastructuring of contestability in design processes (e.g., strategies of participatory, critical, and reflective design);
• design of technical tools surfacing limitations of large language models (LLMs) regarding cultural and intersectional diversity, coverage, and representativeness;
• conceptual comparisons between contestability and related terms in the AI ethics debate, such as explainability, human-in-the-loop, redress, and accountability;
• governance structures enabling AI contestation in and across organizations;
• values, norms, and rules governing contestation;
• interventions using generative media as a means for social and political contestation.
Image credit: Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Hidden Labour of Internet Browsing / CC-BY 4.0
Manuscript Summaries
Authors planning to submit a manuscript to the Research Topic are strongly encouraged to submit a manuscript summary by September 30, 2024. Your summary should be a brief overview of the manuscript you plan to submit. The Research Topic editors will review your summary and provide feedback for consideration when writing your full article. Manuscript summaries will not be published, and there is no associated fee.
For this Research Topic, the editors recommend that manuscript summaries be between 300 and 2,000 words in length.
Keywords:
artificial intelligence, data science, ethics, methods, critique, human-computer interaction, practice, design research
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.