Dislocated Accountabilities in the AI Supply Chain: Modularity and Developers' Notions of Responsibility

Authors
David Gray Widder, Dawn Nafus

Responsible AI guidelines often ask engineers to consider how their systems might cause harm. However, contemporary AI systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible AI practice? In interviews with 27 AI engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible AI guidelines as within their agency, capability, or responsibility to address. We use Lucy Suchman's notion of located accountability to show how responsible AI labor is currently organized, and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible AI actions do take place, and which are relegated to low-status staff or believed to be the work of the next or previous person in the chain. We argue that current responsible AI interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could improve by taking a located accountability approach, in which relations and obligations intertwine and incrementally add value in the process. This would constitute a shift from "supply chain" thinking to "value chain" thinking.