The Dark Matter of Digital Health

Digital health is solidifying the divide between those whose health is valued and those whose health is ignored.

The following is a lightly edited transcript of remarks delivered at the Co-Opting AI: Body event, held on October 28, 2019, and organized by Mona Sloane.


A little over a year ago, I began to feel the pull of digital-health trackers. I was beginning to monitor my fertility and overall health more closely, in hopes of becoming pregnant. I bought a scale, white and sleek enough to fit in the corner of my bedroom—the first such purchase in my lifetime—and downloaded the Fitbit app on my Android phone. Could self-tracking help me and my potential children become stronger, prettier, and healthier? Could this paradigmatic digital-health tool help me overcome the unseemly limits of my earthly flesh, tempted as it is by video games, homemade pastries, and twice-cooked pork?

What I was seeking from these devices was transcendence, or what I’ve described in previous writing as “the possibility to overcome weaknesses of body and willpower that seem to be part of the ordinary human experience.” The desire for transcendence shapes how we regard not only our individual bodies but also our collective histories. And it is this desire that drives the development of a variety of AI-based health technologies, including digital-health trackers like the Fitbit app. Each new technology is, implicitly, a promise: purchase this, and you will leave your bodily concerns and social histories behind.

But this desire is dangerous. Even if such transcendence were possible—and it isn’t—any individual instances of it do nothing to address historical medical abuse and exclusions from the benefits of health care, which have allowed some people to live longer and healthier lives than others. Digital-health tools today promise quick technological fixes for deep social problems but ultimately leave those problems untouched.

For example, proponents frequently state that the goal of digital health is to overcome racial and gender inequalities in health outcomes, as well as to provide broader access to care. These technologies—from computer programs that make treatment recommendations to self-trackers worn on the wrist to smartphone apps that log menstrual cycles—promise that we can move beyond unjust and unequal pasts and better distribute health-care resources in the future. Industry boosters would have us believe that we can accomplish this to a meaningful degree just by firing up an algorithm.

In fact, the opposite is happening: digital health is solidifying the divide between those whose health is valued and those whose health is ignored.

Such technologies reveal the ongoing importance of what sociologist Simone Browne might call the modern health economy’s “dark matter.” Browne uses this term, which in astrophysics refers to matter that cannot be seen but that exerts gravitational influence, as a metaphor for how Black people have been instrumental in the development of technologies even as they have remained invisible at those technologies’ margins. She recounts the ways that Black people’s bodies have served as subjects for the development of modern surveillance systems in the United States, from the use of city streetlights to find fugitive slaves to modern biometric surveillance, such as digital fingerprint scans, used in immigration and criminal justice enforcement.

The arguments Browne presents can easily be applied to the digital-health realm, where the technology promises transcendence for some, while treating others as merely a resource to be extracted. Black people and women have been central to the development of modern medical knowledge—as users of health care and as unwilling experimental subjects—and yet they still do not share equally in the rewards. That is because digital-health technologies privilege certain groups, like users of high-end smartphones and white patients at well-resourced teaching hospitals. In extending services and care only to a select few, these technologies do not right past wrongs on their own.

Given the increasing prevalence of AI-driven systems in health care, we must scrutinize promises of transcendence and ask whether these technologies actually deliver on them.


Consider the fertility-tracking company Clue, which offers a free app that tracks menstrual cycles. By logging information about their own cycles in the app, users receive helpful graphics showing when they are most fertile, when they are not, and when their next cycle may start, which can aid family planning and provide peace of mind.
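To make concrete the kind of calculation at work here, the following is a minimal sketch of how such an app might turn logged period dates into predictions. It is hypothetical: the function name and the invented dates are mine, not Clue’s, and the company’s actual models are more sophisticated. The sketch assumes only the textbook rule of thumb that ovulation falls roughly fourteen days before the next period.

```python
from datetime import date, timedelta

# Minimal, hypothetical sketch of a cycle-tracking calculation (not Clue's actual method):
# estimate the next period start from the average of logged cycle lengths, then place the
# fertile window around an ovulation date assumed to fall ~14 days before that start.

def predict_cycle(period_starts: list[date]) -> dict:
    # Lengths of each completed cycle, from one logged start date to the next.
    lengths = [(b - a).days for a, b in zip(period_starts, period_starts[1:])]
    avg_length = round(sum(lengths) / len(lengths))            # e.g., 28 days
    next_start = period_starts[-1] + timedelta(days=avg_length)
    ovulation = next_start - timedelta(days=14)                 # textbook luteal-phase assumption
    return {
        "next_period_start": next_start,
        "ovulation_estimate": ovulation,
        # Fertile window: roughly the five days before ovulation through ovulation day.
        "fertile_window": (ovulation - timedelta(days=5), ovulation),
    }

if __name__ == "__main__":
    logged = [date(2020, 1, 3), date(2020, 1, 31), date(2020, 2, 29)]  # invented example data
    print(predict_cycle(logged))
```

The user-facing output is just this kind of arithmetic rendered as friendly graphics; what matters for the argument that follows is what happens to the logged dates afterward.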

But what does the company get out of all this information shared by its users? The answer isn’t yet clear, but according to some recent reporting about these apps, it might very well be profit and control, at the expense of the users themselves.

Clue claims that it aims to fix gender inequalities in medicine. In an article in Elle from 2018, “The Race to Hack Your Period Is On,” company representatives and devoted users discussed the data ambitions of the company with reporter Marina Khidekel. As Khidekel writes:

After [a] 2016 study ranked the app number one, researchers from academia came calling, excited by its treasure trove of information—the largest data set about menstruation in existence. Clue has now set up partnerships with institutions like Columbia, Oxford, and Stanford Universities and the Kinsey Institute, all of which hope to shed new light on a blatantly understudied segment of the population: women. (At some point, [Clue cofounder Ida] Tin and every researcher I spoke to unleashed a tirade about women’s health being an academic afterthought.)

While the app promises convenience to end users, behind the scenes the company offers up user data to academic researchers around the world. The company justifies the use of its data for research as a way to fill in the gaps in knowledge about women’s health.

This should raise eyebrows right away, because researchers have demonstrated that deidentified digital-health information can be reidentified. While privacy isn’t the only concern raised by this business model, it is especially important to consider with reproductive health because such data can be used to perpetuate workplace discrimination. US employers have been caught monitoring their employees’ fertility data, a brazen and paternalistic move that another fertility-tracking company, Ovia, claims is a way for employers to responsibly control health-care spending. Collecting and sharing data about fertility would seem to warrant concern in the US, especially among workers hoping to conceive and give birth.

At the same time, Clue representatives also take pains to position the company as sensitive to the role that wealth plays in shaping access to health care. Further down in the article, readers learn from Ida Tin about how the company is seeking to balance the pursuit of profits with broad access to fertility tracking:

Then there’s the issue of making money, which Clue does not do yet. While other apps sell data to third parties or spam users with ads, Tin isn’t interested, and hopes instead to create something so valuable that customers will want to put down money for it. “I hope people will pay to understand what’s going on inside their bodies, and see that data as a kind of life insurance,” she says. Yet she doesn’t want Clue to be exclusionary to low-income users, either. “So I don’t really know yet. We’re doing experiments.”

These comments by and about Clue evidence a politics of transcendence, this time from a company perspective rather than from a user one. The company’s stated aim is to redress gender imbalances concerning the production of scientific knowledge about human health. Furthermore, it claims to strive to keep its technology accessible regardless of users’ financial means, in apparent contrast to the rapidly rising cost of medical care in the US.

Yet these promises, altruistic on the surface, buttress a technology that at its core is about harvesting data from users and passing it along to third parties. One must wonder if Clue will give in to the siren song of lucrative institutional partnerships with large employers, pharmaceutical giants, and insurance companies the way that some companies in the digital-health economy, such as the DNA-testing service 23andMe, have done.


Clue’s strategy exemplifies the ways that digital-health entrepreneurs think about the business of health. They claim to design better solutions that will naturally and inexorably push society toward equitable access and health outcomes. Entrepreneurs, they argue, can do well (financially) while doing good (socially).

This is the implicit promise in the business models of many digital-health companies: present data collection is justified because it will offer some potential future benefit to users’ health, as well as to the company’s bottom line. Data is collected from users without clearly defined research goals or rationale, in the hope that some future use will emerge that is both ethical and appealing to consumers. As Tin reveals in the Clue interview, the hope is often to drive research or produce insights of such great value to potential users that they will be willing to pay for the service, or for future premium offerings. In the case of fertility trackers, a second promise is to make health care more equitable for a group many such companies see as overlooked: women with fertility concerns.

This approach forces the company and its users to act as though a specified future—one featuring beneficial insights, productive research, increasing company profits, and greater health equity—is sure to arrive no matter what. Again, this is not unique to Clue. It is a general feature of the broader machine-learning, data-driven economy, and it is the core value proposition of almost every digital-health tool.


The promise is that AI-driven, data-crunching health-technology systems will autonomously eliminate bias. But if we pay even a moment’s attention to how they actually work, we are more likely to realize, as sociologist Ruha Benjamin has forcefully argued in her recent book, Race after Technology, that they will reproduce the inequities of the past.

For example, a paper published in Science in October 2019 described how one health-care resource-allocation algorithm, which incorporates historical US health-care usage data, recommended complex care less often for Black patients because of historical racial disparities in health-care spending. Another 2019 study, involving some of the same researchers, found that scheduling algorithms tended to place Black patients in overbooked slots because Black patients historically had more cancellations and made fewer appointments than white patients. These algorithms treated those disparities as givens and replicated the pattern when scheduling Black patients for future care.

However, a glance at the history of health-care delivery and access in the United States suggests a more complex picture: Black people have historically had less access to high-quality health care, as well as to wealth and disposable household income to cover copays, deductibles, and optional visits and procedures. Black people have also been regularly subjected to systematic undertreatment and medical racism in direct encounters with medical staff. Black people might delay, cancel, or opt out of care due to cost or a wish to avoid overtly racist interactions. But there is nothing inevitable about any observed lack of follow-up by Black patients; their seemingly ambivalent and inconsistent approach to medical care is a logical and strategic response to the historical failings of the system, and it is fixable.

Ultimately, as a Wired write-up of the study suggested, algorithms that improve efficiency can miss the reasons for differences in health-care usage by white and Black patients in the United States. As the reporter, Tom Simonite, writes of the resource-allocation algorithm discussed above, “The algorithm studied did not take account of race when estimating a person’s risk of health problems. Its skewed performance shows how even putatively race-neutral formulas can still have discriminatory effects when they lean on data that reflects inequalities in society.”
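Simonite’s point can be made concrete with a toy simulation. The sketch below is entirely hypothetical: the numbers, the access gap, and the simple linear model are invented for illustration and are not the proprietary system the researchers audited. It shows how a score trained to predict spending, with race nowhere among its inputs, can still refer equally sick Black patients less often.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical illustration of a "race-neutral" risk score trained on spending data.
# All quantities are invented; the point is the mechanism, not the magnitudes.
rng = np.random.default_rng(0)
n = 10_000

need = rng.poisson(3.0, n)                  # true health need (what the score is meant to capture)
is_black = rng.random(n) < 0.3
access = np.where(is_black, 0.7, 1.0)       # unequal access suppresses spending, not need

# Historical and future spending both track need, but are lower for Black patients
# at the same level of need, reflecting the disparities described above.
past_cost = 1_000 * need * access + rng.normal(0, 300, n)
future_cost = 1_000 * need * access + rng.normal(0, 300, n)

# The risk model never sees race: it predicts future cost from past cost alone.
model = LinearRegression().fit(past_cost.reshape(-1, 1), future_cost)
risk_score = model.predict(past_cost.reshape(-1, 1))

# Patients in the top 20% of risk scores are referred to a complex-care program.
referred = risk_score >= np.quantile(risk_score, 0.8)

# Among patients with the same high level of true need, Black patients are referred less often.
for label, mask in [("Black", is_black), ("white", ~is_black)]:
    high_need = mask & (need >= 5)
    print(f"{label} referral rate among patients with need >= 5: {referred[high_need].mean():.2f}")
```

Because the score ranks patients by predicted spending rather than by need, the group whose care has historically been underfunded ends up below the referral threshold even when its members are just as sick.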

At a structural level, cases like this reveal the “dark matter,” to return to Browne’s term, of the digital-health economy. AI systems overlook groups that have historically been excluded and medically abused—namely menstruators and Black people, and especially Black women.


In basing decisions on past data, the creators of these algorithms are permitting past inequities to creep into the frame. In setting up systems that seek to rapidly “fix” inequalities while improving efficiency, they overlook the exclusionary histories that led to health disparities in the first place. All too often, it is the already marginalized who will continue to be excluded because of past exclusions.

The promise of AI-driven digital-health technologies to fix social problems is a red herring. The public rhetoric around such technologies directs our attention to the maverick capacities of entrepreneurs to find new uses for data. Technologists claim that more—and more intensive use of—data will improve our lives, reduce racism and sexism, and improve “access” to health-care products and services. But these claims distract from what they are already doing: further entrenching health inequalities that are rooted in racist, sexist, and classist patterns of health-care access and use in the United States.

Featured image: Photograph by Pixabay for Pexels