“Democracy Depends on It”: Carissa Véliz on Privacy and Ending Data Surveillance

“There is nothing shocking or radical about ending an economic practice that has too many negative externalities.”

Carissa Véliz has written a book that everyone should read. If you are uneasy about the creeping digitalization of human life and our ever-eroding expectations of privacy—or would like to be—then you will enjoy and benefit from her book Privacy Is Power: Why and How You Should Take Back Control of Your Data, just released in an expanded paperback edition by Melville House. Those already worried—or who have just finished the book’s first chapter detailing the dizzying quantity of surveillance under which ordinary life now takes place—will be energized and challenged by Véliz’s bold solutions. Most striking is her argument for shutting down the data economy entirely, or very nearly so, just as we put an end to other forms of activity that are socially hazardous and based in the violation of fundamental rights. It is a shame that this position has not been more widely discussed; after reading or speaking with Professor Véliz, one begins to think that the burden of the argument ought to rest with those who wish to maintain the data economy rather than those who would abolish it.

Véliz is a philosopher and professor at the Institute for Ethics in AI at the University of Oxford who publishes extensively in academic journals on a range of ethical issues having to do with the reach of technology in our lives. Nevertheless, the problems to which she turns her attention are never the pejoratively “academic” debates that can make one wonder about the usefulness of analytic philosophy for living a good or moral life, but pressing questions of current import in need of clear analysis and workable solutions. So it comes as little surprise that she would write a book for a general readership and that it would be lucid, generous in insight, and exacting in its commitment to human well-being.

In all of Véliz’s work, one is struck by a sense of moral urgency that combines a commitment to the seriousness of contemporary issues with a faith in the effectiveness of philosophy’s tools for understanding and navigating them. This moral urgency is something like the unstated first premise of her work. It is there in her conversation too, although, as in the prose, it sometimes seems to vanish behind the clarity of her thought, the generosity of her attention, and her ready wit and irony. We spoke in June.

Lowry Pressly (LP): One of the things that I liked most about your book was the idea that privacy today resembles contemporary ecological disasters in that it is essentially a collective problem. That privacy is a common, rather than individual, good. This is a view of privacy distinct from and somewhat at odds with the one you tend to hear from lawmakers, judges, philosophers, and ordinary citizens. It’s also a little counterintuitive. Why should we think of privacy as collective?


Carissa Véliz (CV): It is in the interest of big tech to convince us that privacy is very individual, a personal choice, up to you. They have gotten a lot of power and wealth from this formulation of privacy. That should make us suspicious, or at least on guard and more critical than if our losses of privacy didn’t benefit them.

So why should we think about privacy as collective? Well, for one thing, just consider experience. Try to protect your privacy while your family doesn’t. It is just impossible. If your family shares their DNA with one of these DNA test-kit companies, there goes your DNA, no matter how careful you are. If your family takes pictures of you at family gatherings and uploads them to social media, it doesn’t matter how careful you are, there goes your privacy.

We also suffer the consequences of losses of privacy collectively. It is similar to ecological damage. It doesn’t matter whether you are perfect at recycling; if other people don’t recycle, you are going to suffer the consequences of climate change as much as anyone else. Privacy is the same. The most obvious example is Cambridge Analytica: Only 270,000 people gave their data to the political firm, and, with that data, the firm got access to the data of 87 million people who never consented to it. And with that data, they created a tool that could profile voters around the world, and thereby erode democracy, not only in those countries but also globally, as with Brexit. So that is a very clear case of how when we lose privacy, we all pay.


LP: So the idea of personal privacy is in a sense like the carbon footprint, which was promoted by the oil industry as a way of individualizing the moral question of climate change.


CV: Yes, exactly. I didn’t know that, but it makes sense if you think about it. Just another iteration of the strategy of divide and conquer.

We are pushed in the direction of thinking about privacy as something individual partly because it is very closely associated with personal data, and the term “personal” sounds like something individualistic. I don’t know what else to call it but a language mishap. And then there are other things, like companies having “privacy policies.” A company having a privacy policy just means that they have a policy about privacy. That doesn’t mean that they have a good policy about privacy. Studies are very clear that whenever people see “privacy policy,” they immediately assume that their privacy is being protected, and that of course is often not the case. So I agree with you that we don’t necessarily have the language to convey the collective nature of privacy.



LP: It’s curious—at the same time we are confined by the terms we inherited to think about privacy, we also have a hard time thinking clearly about it. Or at least I do, and I don’t think I’m the only one. My experience is that if you ask a random person on the street whether they care about privacy, they will say, “Yes, of course, it is really important to me.” But if you then ask, “All right, what is privacy for? Why should we care about it?” most of us have trouble coming up with an adequate answer—adequate to ourselves, I mean, to the force of our values. I don’t know if this is a fact about privacy or about the difficulty of elaborating the values that guide our lives, but either way it seems a striking fact about privacy that most of us want it without having a very clear idea of what it is that we want. I wonder if this is your experience and if you have given any thought to the implications of the fact that privacy is difficult to elaborate, compared to other fundamental values?


CV: I suspect that our grandparents would have had an easier time explaining why privacy was important. And it has to do with at least two elements. One is culture; privacy was something that your parents taught you, that your friends taught you. It was easy to understand that if your boss knew whom you voted for, they might not like you as much, or they might not have hired you in the first place.

Going back further, I’m currently writing an academic book about privacy, and one of the chapters is about how privacy has a long evolutionary history, and if you can imagine yourself being in the savannah thousands of years ago, what does it mean that somebody is watching you? It means that you are vulnerable to being their prey. There is a phenomenological feeling of having your privacy violated that makes you feel on alert, in danger. Even if this feeling is very evolutionarily old, it still happens to us, for example, when a stranger looks at us too intently in a bad neighborhood.

But there are no physical signs or sensations when your data gets collected. You don’t even notice. You don’t see it, you often don’t understand it. You don’t know what data it is, what inferences are going to be made with it, who is going to look at that data, where it is going to end up. You have no idea. You are in the dark about it.



LP: That reminds me of one of your book’s more powerful claims, which is your characterization of personal data as the asbestos of contemporary economic and political life. It is such a good analogy that it’s hard for me now not to see data as a toxic substance. Large data breaches seem more like chemical spills, with bad, unpredictable consequences for individuals and societies, which in turn calls to mind the dangers of improper storage and invites questions about why we are producing and storing this stuff in the first place if it’s so dangerous. Yet the comparison also recalls the “toxic assets” behind the 2008 economic crisis.


CV: Yes, I like the analogy because it works on so many levels. Asbestos is a mineral that is cheap, easy to mine, and incredibly useful because it doesn’t catch fire easily. It is very durable, so we put it everywhere. We put it in tiles, in our cars, plumbing, in our roofs, and it turns out that it is incredibly toxic. Every year hundreds of thousands of people die from cancer due to exposure to it. There is no safe threshold of exposure. In the same way, personal data is incredibly cheap, it is easy to mine, it is useful in many ways, but it also exposes us to these unnecessary harms.

Another reason I like the analogy is because it shows how we are in this together. It is not only about individuals. It is also about institutions. Every data point is a potential leak, a potential lawsuit. Every company and every institution that collects more personal data than it needs is generating its own risk. They are also generating risk for society more generally—just imagine an adversary hacking the data and using it for intelligence purposes. We all hope that we never have to live through a war, but if we did, you can imagine just how dangerous that could be. Or what if somebody comes into power who is not very democratic and gets hold of this architecture of surveillance that we have built? You could just imagine how difficult it would be to fight against a regime like that. Just imagine a Nazi regime today with the data that we collect. It could be a much more powerful regime than anything we have seen before.

Going back to the point about the financial markets, if I wrote the book again, I would include a note about that as well, because it turns out that the people who designed the financial market of subprime mortgages were the same people who designed the real-time bidding market for personalized ads. Both markets suffer from many of the same mistakes and problematic elements. Among other problems, they are incredibly opaque markets. When people bought subprime mortgages, they didn’t realize what they were buying. They thought they were buying something else. In the same way, when people buy personalized ads, they think they are buying one thing, but actually the metrics are inflated at best. And at worst there is a high amount of click fraud because there are intermediaries who sell fake ads.

Say you are an advertiser and an intermediary tells you, “We are going to show your ad in the New York Times, and 20-to-22-year-olds are going to see it.” In the past, if you bought an ad in an important newspaper, you could just buy the newspaper yourself and verify whether the ad was there. You would know if the deal had been honored. But today, because ads are personalized, you, the advertiser, are not a 22-year-old living in New York. So you are not going to get to see the ad, and many times that ad doesn’t even exist.

So there are many concerns about this market causing a financial bubble, and eventually a financial crisis when advertisers realize that they have been paying incredibly high prices for ads that are just not worth it.



LP: It is not a good sign that the architects of one are the architects of the other, although it is not particularly surprising either.

One of the challenges that I face with students is getting them to take seriously the potential downsides to the data economy in the process of developing their own judgments, even if they ultimately land in favor of the status quo. Viewed through your lens of data’s toxicity, it strikes me a bit like how people used to blithely brush their teeth with radium or give their children toys painted with lead. Let me give an example. Toward the end of last semester, we held a screening of the Florian Henckel von Donnersmarck film The Lives of Others.


CV: I love that film.


LP: Me too. I have some students from former Soviet republics, and a lot of it hit home for them. They’re only 18, 19, but a sense of the deep and pervasive constraints of the surveillance state is a part of their collective memory. By contrast, my US students have a hard time imagining that it could happen to them—which is strange, since it’s not like the government here hasn’t been spying on its citizens, too. It often doesn’t register that they have much to lose, especially when potential harms are weighed against how deeply mobile technology is woven into the fabric of their lives. But then we heard about the leaked Supreme Court decision overturning the federal right to abortion in this country, and we know from your book how easy it is to track anybody with information available for purchase. Certain states in this country are likely to make it a crime to go across state lines to procure an abortion, at which point it becomes very easy even for private citizens to buy this data from legitimate, so to speak, data brokers and enforce the law either themselves or via vigilante statutes.

I guess what I’m trying to say is that it seems like the danger becomes real and believable only all too late, and this might have something to do with our inarticulacy about privacy.


CV: Yes, that is the curse of human psychology. This is why in philosophy we have this image of the owl of Minerva always arriving late. Ecology is the same way. Anything abstract, long-term, or so negative that it is hard to cope with tends to get ignored. Even death! Of course, if you ask anyone, “Are you going to die?” they will say “Yes.”


LP: Not me.

CV: Exactly, most of us don’t act as if we are going to die. It is so unbelievable that one day we won’t be here that it is hard to wrap our heads around it. In the same way, when you have lived in a democracy all your life, it is easy to take it for granted. And this chronic neglect has such a negative effect, not only on our care for what is precious, but also on our ambition about the society that we want to build. So many people talk to me about big tech as if these companies and institutions were unavoidable and permanent, when 30 years ago they weren’t even around. Imagination is such an important thing. It has to do with being able to think counterfactually. And maybe that is something that philosophy can contribute to a bit. It is not a panacea, but the practice of thinking about how things could be otherwise is a good habit of mind.


LP: Technology companies, like the post-9/11 surveillance state before them, have had tremendous success in making their increasingly invasive presence in our lives seem inevitable. They have more or less succeeded in setting the terms of the debate—both in inculcating in us new habits of mind, as you say, but also sort of preemptively framing questions about what’s possible and what’s indispensable with regard to the data economy. Maybe this has something to do with how my students—who, unlike us, were born into a world perpetually connected by the internet—can be so incredibly insightful in identifying problems but then feel a bit impotent when it comes to changing things.


CV: Many of the things that we take for granted today once seemed impossible, from eight-hour shifts to weekends, universal suffrage, the right to education, the right to safe water—all kinds of things that today are part of our everyday humdrum one day seemed impossible and idealistic. And companies want to make us feel disempowered, as if there is nothing we can do about the world around us. But one sign that we do have power is how much money these companies put into lobbying. They put so many millions into pushing for what they want because they know themselves to be fragile. A company like Facebook depends on personal data, and regulation or resistance can completely change (and potentially obliterate) their company. When Apple gave people a little bit more control over their data, an easier way to say “no” to data collection, Facebook’s stock dropped about 25 percent in one day—the largest one-day loss of market value in the history of US companies. That shows you just how vulnerable they are.

Another thing to keep in mind is that people tend to think that this is a lost battle: your data is already out there, and it can seem useless to resist. But actually the most valuable personal data is the personal data that you create today. Personal data has a short expiration date. You change your taste, you move houses, you switch jobs, and suddenly personal data created a year ago doesn’t apply anymore. And that means that if you start being careful with your privacy today, you can make a huge difference. Much bigger than people imagine.



LP: This brings us to what might be the most significant argument of your book: ending the data economy, full stop. How does ending the data economy fix the toxicity of data? How does it keep us safe?


CV: When I started researching privacy, it surprised me that no one was calling for the end of the data economy. Of course I found many people who were alarmed by the state of our privacy. But I didn’t find anyone who ventured to suggest that maybe we shouldn’t be buying and selling personal data at all. You read scores of concerning anecdotes, even books about it, like Shoshana Zuboff’s famous The Age of Surveillance Capitalism. But the only practical suggestion in that book was to fight capitalism, which in my mind is misguided, because you can imagine a completely noncapitalist society in which there is terrible surveillance. That has already happened!

The data economy has become such a given that when I started advocating for its end, people were shocked, asking me if we can actually do that. But there is nothing shocking or radical about ending an economic practice that has too many negative externalities. We have banned certain kinds of economic activity in the past because they were too toxic for society. Theft is a very profitable business model too, but that is not enough of a reason to allow it. And in the same way, if the data economy is very profitable (for a very select few) but it turns out that it is wrecking democracy and exposing society to incredible harm, then, as profitable as it may be, it might not be worth it. Even more so if it turns out that it is only profitable because it is actually creating a financial bubble that is going to burst.

And even though it sounds radical to end the data economy, it wouldn’t be as hard as it seems. People imagine that they would have to give up searching online and social media. But nobody said that. Technology companies want us to think that they need the data economy to give us those things that we love, but actually they mostly collect personal data as a way to earn money, and there are other business models available. In 2013, Google was an incredibly profitable company. A journalist calculated that Google earned $10 per user per year. If they were to charge us that, they would still be a hugely profitable company. Some people ask me whether that kind of business model is realistic. I was recently asked this question in a room full of people. In response, I asked the audience whether anyone there was paying for a Netflix account. Pretty much every hand in the room went up, and Netflix is much more expensive than $10 per year.

There are all kinds of business models that we could design and implement—business models that are transparent, fair, and don’t damage democracy. What is unacceptable and extreme is to have a business model that depends on the systematic and mass violation of rights.

Featured image: Carissa Véliz. Photograph courtesy of Carissa Véliz