The Danger of Intimate Algorithms

We must reimagine our algorithmic systems as responsible innovations that serve to support liberatory and just societies.

The following is a lightly edited transcript of remarks delivered at the Co-Opting AI: Body event, held on October 28, 2019, and organized by Mona Sloane.

After a sleepless night—during which I was kept awake by the constant alerts from my new automatic insulin pump and sensor system—I updated my Facebook status to read: “Idea for a new theory of media/technology: ‘Abusive Technology.’ No matter how badly it behaves one day, we wake up the following day thinking it will be better, only to have our hopes/dreams crushed by disappointment.” I was frustrated by the interactions that took place between, essentially, my body and an algorithm. But perhaps what took place could best be explained through a joke:

What did the algorithm say to the body at 4:24 a.m.?

“Calibrate now.”

What did the algorithm say to the body at 5:34 a.m.?

“Calibrate now.”

What did the algorithm say to the body at 6:39 a.m.?

“Calibrate now.”

And what did the body say to the algorithm?

“I’m tired of this shit. Go back to sleep unless I’m having a medical emergency.”

Although framed humorously, this scenario is a realistic depiction of the life of a person with type 1 diabetes, using one of the newest insulin pumps and continuous glucose monitor (CGM) systems. The system, Medtronic’s MiniMed 670G, is marketed as “the world’s first hybrid closed loop system,”1 meaning it is able to automatically and dynamically adjust insulin delivery based on real-time sensor data about blood sugar. It features three modes of use: (1) manual mode (preset insulin delivery); (2) hybrid mode with a feature called “suspend on low” (preset insulin delivery, but the system shuts off delivery if sensor data indicates that blood sugar is too low or going down too quickly); and (3) auto mode (dynamically adjusted insulin delivery based on sensor data).

In this context, the auto mode is another way of saying the “algorithmic mode”: the machine, using an algorithm, would automatically add insulin if blood sugar is too high and suspend the delivery of insulin if blood sugar is too low. And this could be done, the advertising promised, in one’s sleep, or while one is in meetings or is otherwise too consumed in human activity to monitor a device.2 Thanks to this new machine, apparently, the algorithm would work with my body. What could go wrong?
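The control logic described above can be sketched in simplified form. This is a hypothetical illustration only, not Medtronic’s proprietary algorithm: the thresholds, units, and proportional correction rule are invented for clarity.

```python
def auto_mode_step(sensor_glucose_mg_dl: float,
                   target_mg_dl: float = 120.0,
                   low_threshold_mg_dl: float = 70.0,
                   max_microbolus_units: float = 0.2) -> dict:
    """One step of a hybrid closed-loop controller (illustrative sketch).

    All numbers here are assumptions for the example, not the
    device's actual parameters.
    """
    if sensor_glucose_mg_dl <= low_threshold_mg_dl:
        # Blood sugar too low: suspend insulin delivery entirely.
        return {"action": "suspend", "units": 0.0}
    if sensor_glucose_mg_dl > target_mg_dl:
        # Blood sugar too high: deliver a small correction dose
        # proportional to the excess, capped per step.
        excess = sensor_glucose_mg_dl - target_mg_dl
        units = min(max_microbolus_units, excess * 0.001)
        return {"action": "deliver", "units": round(units, 3)}
    # In range: continue preset basal delivery unchanged.
    return {"action": "basal", "units": 0.0}
```

The crucial design point, which the essay goes on to complicate, is that any such loop is only as good as its sensor input: when the algorithm distrusts the sensor, it demands a fingerstick calibration from the human, at any hour.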

Unlike drug makers, companies that make medical devices are not required to conduct clinical trials in order to evaluate the side effects of these devices prior to marketing and selling them. While the US Food and Drug Administration usually assesses the benefit-risk profile of medical devices before they are approved, often risks become known only after the devices are in use (the same way bugs are identified after an iPhone’s release and fixed in subsequent software upgrades). The FDA refers to this information as medical device “emerging signals” and offers guidance as to when a company is required to notify the public.

As such, patients are, in effect, exploited as experimental subjects,3 who live with devices that are permanently in beta. And unlike those who own the latest iPhone, a person who is dependent on a medical device—due to four-year product warranties, near monopolies in the health care and medical device industry, and health insurance guidelines—cannot easily downgrade, change devices, or switch to another provider when problems do occur.

It’s easy to critique technological systems. But it’s much harder to live intimately with them. With automated systems—and, in particular, with networked medical devices—the technical, medical, and legal entanglements get in the way of more generous relations between humans and things.

I began using the new Medtronic system in January 2018—upgrading, so to speak—when the plastic casing on my previous pump began breaking down. Over the past two years, I have observed, documented, and sought to understand many aspects of the system through autoethnographic field notes.

With my previous system—consisting of proprietary devices from different companies that did not communicate with each other, but that made for good companions in my care—I could adjust the calibration times according to my daily rhythms and still have accurate information about my blood sugar patterns. The devices had their own needs—batteries, data, networks, software, and hardware—but I could ignore the alerts and alarms when they were intrusive. They were much less frequent, and some could be dismissed via an iPhone application. With the new system, however, I must enter blood sugar information immediately after an alert in order to keep the system in auto mode and to continue to access the data (otherwise, the data disappears completely from the device’s screen). With automated, closed-loop systems—like the new Medtronic system—more of the agency and control is deliberately entrusted to the device, and the human is cast aside.

In short, automation takes work. Specifically, the system requires human labor in order to function properly (and this can happen at any time of the day or night). Many of the pump’s alerts and alarms signal that “I need you to do something for me,” without regard for the context. When the pump needs to calibrate, it requires that I prick my finger and test my blood glucose with a meter in order to input more accurate data. It is necessary to do this about three or four times per day to make sure that the sensor data is accurate and the system is functioning correctly. People with disabilities such as type 1 diabetes are already burdened with additional work in order to go about their day-to-day lives—for example, tracking blood sugar, monitoring diet, keeping snacks handy, ordering supplies, going to the doctor. A system that unnecessarily adds to that burden while also diminishing one’s quality of life due to sleep deprivation is poorly designed, as well as unjust and, ultimately, dehumanizing.

While there are myriad problems with the Medtronic MiniMed 670G, here I will discuss only a few of the most significant and interrelated ones: a technical issue called the “loop of death,” “alert fatigue,” and continual sleep deprivation.

Prior to beginning auto mode, I met with a representative from Medtronic for a mandatory training session. Hearing this mode described, I wondered whether I might sleep through the night a bit better. (Normally, with my previous system, I was awakened when I had low blood sugar, which needed to be treated immediately. While this happened fairly frequently, it was not a nightly occurrence. My hope was that the algorithm could both manage high blood sugar and prevent low blood sugar such that I would not be awakened at night.) The company rep replied, “I met with a patient recently who described the new system as ‘the best sleep of my life.’”

That Sunday night, the day before an important day-long meeting, I woke at 2:30 a.m. to a buzzing from the 670G device, alerting me to calibrate the sensor. I paused the alert, and yet I was awakened one hour later, and again an hour after that. That night, I was awakened no less than five times with alerts of all kinds. What was the algorithm thinking?

I was groggy and delirious the following day. I made it through the day, but had to excuse myself from the meeting several times to calibrate the pump in order to stay in auto mode. By the end of the day, I was completely exhausted and frustrated. Three of the fingers on my left hand were purple and sore, each displaying a constellation of punctures, after I pricked them nearly 30 times to calibrate the pump (as opposed to the usual two or three times a day).

The next day was when I posted about “abusive technologies.” This post prompted an exchange about theorist Lauren Berlant’s “cruel optimism,” described as a relation or attachment in which “something you desire is actually an obstacle to your flourishing.” One week later, I abandoned the device’s auto mode completely.

The extremely high frequency of calibrations that day—what a Reddit diabetes forum calls the “loop of death”—is now known to have been partly the fault of the first-generation transmitter. When I started auto mode for a second time, 18 months later, in July 2019, my diabetes educator told me that the first thing that I needed to do was to request the next-generation transmitter, which would eliminate some of these problems.

Indeed, 18 months later, my experience with auto mode was much improved (and I have had many fewer episodes of severe low blood sugar). But the frequency of 3 a.m. calibrations (as well as an assortment of other alerts and alarms signaling everything from low batteries and low reservoirs to the exiting of auto mode) continues to be a serious problem. There are many possible explanations for the frequent calibrations, but even the company does not have a clear understanding of why I am experiencing them. With algorithmic systems more broadly, it has been widely demonstrated that even the engineers of these systems do not understand exactly how they make decisions. One possible explanation is that my blood sugar data may not fit the patterns in the algorithm’s training data. In other words, I am an outlier. Another possibility is that I need to calibrate more frequently each day before being prompted. Or, perhaps, the adhesive tape that attaches the sensor to the body becomes slightly looser over the week (though I do not believe this to be true). Or the place on my body where the sensor is attached is not ideal. Or maybe the sensor becomes less reliable over the course of the week for some other reason. But none of these explains why the design of the system should require me to be woken repeatedly in the middle of the night.

On a “good” day, I experience only one or two alerts; but at other times, I might get over 25 (that is, more than one every hour). These alerts become more frequent as the week goes on—for example, when the glucose sensor is in its fourth through seventh day of use (it is replaced every seven days).

In the medical field, the term “alert fatigue” is used to describe how “busy workers (in the case of health care, clinicians) become desensitized to safety alerts, and consequently ignore—or fail to respond appropriately—to such warnings,” according to the US Department of Health and Human Services’ Agency for Healthcare Research and Quality. As a result, patient safety is compromised.

And doctors and nurses are not the only professionals to be constantly bombarded and overwhelmed with alerts; as part of our so-called “digital transformation,” nearly every industry will be dominated by such systems in the not-so-distant future. The most oppressed, contingent, and vulnerable workers are likely to have even less agency in resisting these systems, which will be used to monitor, manage, and control everything from their schedules to their rates of compensation. Increasingly, alerts and alarms are the lingua franca of human-machine communication.

For patients as well, “alert fatigue” is a serious issue that causes sleep interruption and, in addition, leads us to ignore or dismiss alarms, even those that could signal something life-threatening. In my case, I’ve observed everything from daily exhaustion and irritability to depression and even thoughts of suicide.

Sensors and humans make strange bedfellows indeed. I’ve learned to dismiss the alerts while I’m sleeping (without paying attention to whether they indicate a life-threatening scenario, such as extreme low blood sugar). I’ve also started to turn off the sensors before going to bed (around day four of use) or in the middle of the night (as soon as I realize that the device is misbehaving). While I’ve tried several adjustments that have been suggested by my diabetes educator—such as moving the sensor to my side and calibrating more frequently during the day—these have resulted in only very minor improvements.

Ultimately, I’ve come to believe that I am even “sleeping like a sensor” (that is, in shorter stretches that seem to mimic the device’s calibration patterns). Thanks to this new device, and its new algorithm, I have begun to feel a genuine fear of sleeping.

To be sure, I am not the only one who can complain of sleep deprivation, which also afflicts those with newborn babies, overly excited pets, sleep apnea, and late-night digital addictions. It is known to be extremely bad for your health. There is a reason that the CIA has used it to torture people. The need for sleep is a human universal (and is common in almost all other creatures). But to be continually awakened and to have my body deprived of sleep by a little machine nestled under the blankets that professes to care about me—that does feel uniquely cruel.

As companies design the next generation of “smart” medical devices, the government must require them to more seriously consider the social, cultural, and psychological impacts of their inventions as potential risks. In this case, there is no point in fixing the body at the expense of degrading the mind.

And, metaphorically, if you cannot sleep, you cannot dream. If we are to reimagine our algorithmic systems as responsible innovations that serve to support liberatory and just societies, we must have the capacity to dream.

In recent years, many scholars and technologists have turned their attention to arguing for transparency, fairness, accountability, and equity in the design of algorithmic systems. Artists and designers have engaged with these technologies in creative ways in order to reimagine their role in society (and to make us more aware of the dangers). Activists, including tech workers, have mobilized campaigns in order to resist them. All these efforts are important if we are to understand the ways in which algorithmic systems are becoming embedded in all aspects of society.

Based on my intimate experiences living with algorithms, I have a few thoughts of my own. First, if we are to adopt such systems at all, let’s make it abundantly clear how much labor is involved in making them work and who is going to perform that labor, as well as when and where that labor is necessary.

Second, no one company should control all aspects of a system. Interoperability between different proprietary and open-source technologies is essential to allow people to have greater agency in what systems to use.

Third, as new algorithmic features are added—shifting the boundaries between human and machine agency—let’s be careful to understand the ways in which experiences with these systems are defined by gender, race, class, sexuality, and age.

But for now, I’d settle for an algorithmic system that sleeps through the night.

My sincere thanks to Sabir Khan, Yanni Loukissas, and Nassim Parvin at the Georgia Institute of Technology; Zach McDowell at the University of Illinois at Chicago; and Martin Tironi, Pablo Hermansen, Matias Valderrama, and Renato Bernasconi at the Pontificia Universidad Católica de Chile. Each of these individuals played an important role in supporting this work over the past two years through the organization of symposia, public lectures, workshops, and panels where I gained valuable feedback. I would also like to acknowledge the directors, faculty, fellows, and staff at the Institute of Advanced Study at Durham University (UK), where, as a visiting fellow from January to March 2020, I finally sat down to draft this essay. Though the context for this work has shifted a great deal since, the relevance of documenting injustice and dehumanization at the intersection of artificial intelligence, design, disability, and health care is ever more urgent if we are to craft more equitable futures. Lastly, my great appreciation goes to Mona Sloane for organizing and moderating the incredibly generative Co-Opting AI: Body event, in October 2019.

  1. Such systems were pioneered in open-source hacker communities prior to their commercial introduction.
  2. In fact, the advertising touts, “Sleeping soundly. Waking up energized. The MiniMed 670G system lets you wake up every morning well-rested and ready to take on the day.”
  3. Thanks and acknowledgements go to the medical anthropologist Danya Glabau, who made me aware of this term in her own presentation at the October 2019 Co-Opting AI: Body event.
Featured image: White bed comforter (2016). Photograph by Jaymantri / Pexels