Late News

How an AI tool for fighting hospital deaths actually worked in the real world

At first glance, this is an example of a major technical victory. Through careful development and testing, an AI model successfully augmented doctors’ ability to diagnose disease. But a new report from the Data & Society research institute says this is only half the story. The other half is the amount of skilled social labor that the clinicians leading the project had to perform to integrate the tool into their daily workflows. This included not only designing new communication protocols and creating new training materials but also navigating workplace politics and power dynamics.

The case study is an honest reflection of what it really takes for AI tools to succeed in the real world. “It was really complex,” says coauthor Madeleine Clare Elish, a cultural anthropologist who examines the impact of AI.

Repairing innovation

Innovation is supposed to be disruptive. It shakes up old ways of doing things to achieve better outcomes. But conversations about technological disruption rarely acknowledge that disruption is also a form of “breakage.” Existing protocols become obsolete; social hierarchies get scrambled. Making innovations work within existing systems requires what Elish and her coauthor Elizabeth Anne Watkins call “repair work.”

During the researchers’ two-year study of Sepsis Watch at Duke Health, they documented numerous examples of this disruption and repair. One major issue was the way the tool challenged the medical world’s deeply ingrained power dynamics between doctors and nurses.

In the early stages of tool design, it became clear that rapid response team (RRT) nurses would need to be the primary users. Though attending physicians are typically in charge of evaluating patients and making sepsis diagnoses, they don’t have time to continuously monitor another app on top of their existing duties in the emergency department. In contrast, the main responsibility of RRT nurses is to continuously monitor patient well-being, which made the tool a natural fit for their existing workflow.