Public Thinker: Siva Vaidhyanathan on Facebook and Other “Antisocial” Media

Thinking in public demands knowledge, eloquence, and courage. In this interview series, we hear from public scholars about how they found their path and how they communicate to a wide audience.

Siva Vaidhyanathan has built a career as a media studies and communications scholar. His work draws from the recent cultural past of digital technologies to speak to the dominant presence of digital life today. Throughout, his guiding questions have been about the ways we live with and through digital systems and what those relationships mean for us as individuals, as political actors, as community members.

Vaidhyanathan’s approach to his subject matter is laudably reasonable and realistic. He brings political critique to bear on various digital technologies, while grounding that critique in a practical understanding of the technologies’ benefits and drawbacks. His are not tirades or spiked jeremiads. Beyond the commonplace that all technologies include public goods and bads, though, he uncovers the assumptions and cultural commitments that give them shape. To develop his message, he has long aimed his findings at audiences both within and beyond the academy, with a publication record that spans peer-reviewed journals to the New York Times, the New Yorker, and the Washington Post, to Slate, NPR, the BBC, CNN, MSNBC, and more.

His new book, Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, helps explain why the problem with Facebook is, basically, Facebook, in its design and structure. I recently spoke with Vaidhyanathan, the Robertson Professor of Media Studies at the University of Virginia, about the new book, his work to demystify technologies, and his dream of finding Antisocial Media outdated in due time because we have developed policies to redress the challenges social media brings to democracy.


B. R. Cohen (BRC): Facebook. This book is ostensibly about Facebook. Which is about digital social media, technological politics, forms of would-be connection. I’m only rehearsing the standards here. You’re at a useful intersection for our age, a scholar who’s invested a career in studying social media and digital technology, and a writer whose work isn’t trapped in academic hallways. Even so, the topic seems daunting. How did you figure out how to study the site and its technological politics?

 

Siva Vaidhyanathan (SV): Well, first I had to figure out how big the problem of Facebook is, how daunting and unsolvable. I knew I wanted to tell its global story. But when I started pondering the big number—2.2 billion people, the number of regular Facebook users around the world—it occurred to me that I hadn’t encountered that number in any other media scholarship or media account. Imagine the BBC ever reaching 2.2 billion people. So once I started to grasp the enormity of Facebook’s influence in the world, I felt like Kierkegaard, collapsing with fear and trembling before the power of the Almighty. I started to think there might be no solution to the problems that Facebook has brought us. There might not be. That’s where I was near the beginning.

 

BRC: Where did you land in the book?

 

SV: At this point in time, I don’t see a solution. I would like to see Facebook break off Instagram and WhatsApp and get a little bit less powerful and rich. I would like to see more data protection regulation occur in the world. But I started the book thinking that’s where I was heading. What I’ve recognized is that those things are insufficient. They are necessary, but insufficient, to limit the pernicious power of Facebook. What I came to at the end—and this kind of surprised me—was a much more extreme and radical conclusion: that Facebook was a very bad idea. But it’s a bad idea that we cannot fix. Pandora’s box can’t be unopened.

 

BRC: You write that the problem with Facebook is Facebook. It’s a big thing, requiring big responses.

 

SV: There are little things Facebook could do, too. They’re doing things to curb some of the pernicious effects in the world, but they refuse to confront the core problem, which is, yes, Facebook itself. And that’s not surprising. They would run through the halls of the Facebook offices screaming if they actually confronted the fact that the problem with Facebook is Facebook, that to truly reform or limit the damage, they’d have to undo its essence.

 

BRC: In your analysis, this is also an issue of scale. I mean, over a quarter of the human population …

 

SV: Yes, these problems are all problems of scale and amplification. The fact that Facebook now hosts profiles of 2.2 billion people and growing, the fact that it has a set of algorithms that amplify extreme content that generates strong emotions, the fact that Facebook operates in more than 120 languages, and the fact that it’s tethered to an advertising system that precisely and constantly targets people with propaganda means that the only way to fix any of these problems is to undo one or more of those aspects. And if you did that, you would not have Facebook. Facebook has made us victims of its own success.


BRC: Let me back up a minute. We’re very quickly talking about a behemoth of a communication technology. Radio, television, film, the internet—they’re all systems of communication technology. I don’t say that to reduce their identity but to ask about how you got started on this path, how you came to study digital social media.

 

SV: I went to graduate school in the 1990s, and I can tell you this: nobody went to grad school in the 1990s to study the internet. But I was working on my first book, a cultural history of American copyright, and by the end of the 1990s it was clear I’d have to take on digital technology. Digital technology was destined to mess with copyright, so I found myself scrambling to learn how the internet worked, how digital compression worked, how digital recording worked, about this emerging thing called internet culture, because all of these things were going to affect the practices of copyright protection and the practices of cultural dissemination in the immediate future. And I was right about that one, right?

 

BRC: Fair point, that’s true. It strikes me that your career’s timeline as a teacher, writer, and public speaker maps onto the timeline of the development of the very systems of digital media you study. As in, you started studying internet culture and politics around the time internet culture and politics were growing into the publicly consumable form of communication we know today.

 

SV: I think that’s right. That first book on copyright came out in 2001. It went from the origins of copyright in the British Isles, in 1710, right up through the dot-com boom and then the rise of Napster, which is how I concluded the book in 2000, 2001. At that very moment I was on the job market, and universities were looking for people who could write and teach about the internet. There were only a handful of us, many of whom had taken more traditional fields of study and bent them toward the digital at some professional risk. And we all were very fortunate that there were departments of communication and media studies and library and information studies that were willing to take a chance on a completely new field. That means that many of us who were trained in other fields—my PhD was in American Studies—found ourselves in very different departments.

 

BRC: I suppose that could be a straightforward point about careers and job markets, but in your case it also speaks to the perception—or lack thereof—that this newly scaled form of digital communication is a thing one would study. Like today, if someone got an academic job as a specialist in, I don’t know, Bitmoji Studies, because 20 years from now a quarter of all human publications were bitmoji-based.

 

SV: I think so. I taught in a communication department at NYU, moved to a library school at the University of Wisconsin, ended up back at NYU to continue in my old department, and then came to UVA in 2008, to their Department of Media Studies. I immersed myself in the field. It was exciting in part because there were no older scholars to tell me I was doing it wrong. So I was probably doing it wrong, but I managed to get away with it long enough that I became one of the older scholars.


BRC: Part of the field of relevance here is media studies, communication technologies. You’ve called it “Critical Information Studies.” But the broader context is engineering and technology, about how engineers think and work and what values drive technologies. You said earlier that, since you were at the leading edge of this area of study, you scrambled to learn how the internet worked, how compression worked, and so on. These are technical aspects, though ultimately you’re talking about social media as more than its mere technical features. Did you find it odd in the beginning of your career that engineers weren’t the ones asking the questions you were, that your study was not coming from the actual network engineers?

 

SV: You know, what I found early in my career was that people in computer science and other areas of engineering were fascinated by law, policy, ethics, and all of the other cultural, political, and social ramifications of technology. So I quickly made friends with technology scholars who were very generous with their time, to make sure that I didn’t make too many mistakes when I described digital technologies. It was gratifying that engineers especially were urging us on in the early years to create this field that would in many ways criticize their work, but, certainly from their point of view, make their work better and more humane.

 

BRC: Why is it usually that direction? Did you find engineers asking social scientists and humanists to make sure they weren’t making too many mistakes?

 

SV: I think that there is some of that, there is. My long-term relationships with engineering professors have been very much along those lines. They invite me to speak to their students with some frequency. I admit I may be an exception, in the sense that I’ve developed longstanding relationships. I’ve been heavily involved in the development of our data science degree programs at my school, for example, and law, policy, and ethics have been central to that process from the beginning. Honestly, what’s even more interesting to me is that the students are much more cynical and resistant to my point of view than the faculty. That’s another reason engineering faculty invite me in to speak to their students. I wish it would happen more, but to be fair, I think it’s more about being too busy and immersed in our own work than any sort of ideological resistance.

 

BRC: On that point about student resistance, I’m often struck by how easily students—and I teach engineers—adopt an identity as an engineer before they even go through their course of study. They’re attached to some sense of self-identity based on the cultural projection of what an engineer is, often answering questions with, “As an engineer, I think this …” even when they’ve basically taken two classes. I wonder if this is even more pronounced with digital technologies, given that our students are born and bred in our digital world, as digital millennials, I suppose.

 

SV: I tend to think that that’s a much older phenomenon. I resist the definition of millennial, I don’t think there is such an animal. Older people have imposed the imperative that young people are supposed to think about becoming engineers until they find that they can’t do the work, and then they switch to something else. That’s not a new phenomenon, but I think it’s more powerful now. I think it comes from a fundamental error on the part of many people in power that being an engineer is a very rewarding way of life and that the world needs more engineers. That may be partly true, but the fact is, not everybody has the toolset or preparation to become an engineer. It’s a highly specialized way of life. Many engineering schools around the country right now, including ours at UVA, try to ensure that engineers are broad-minded people. They—we—ask them to engage with big ideas and histories and so forth. That’s an aspiration. I’m sure we don’t always live up to it. But it is an ongoing and widespread conversation to help generate citizen-engineers rather than merely tracking people into careers.

BRC: Underlying all of this is the thought, We don’t want—or it wouldn’t be good—to have the Zuckerberg mentality, one where a massive technological system was designed and structured without attention to law, policy, ethics from the start, obliging us to patch it on after the fact, as a forced afterthought. I’ll get back to Facebook specifically below, but generally, we started this conversation with the comment that talking about digital social media is talking about technological politics. Yet it takes a lot of work to make that a common way of thinking about these systems. Most people think about those as two separate conversations, but they are always the same one. Your work in the classroom, and what I get from your writing, seeks to demystify digital technologies.

 

SV: I think so, yes. I try to do that technology-busting rhetorical move all the time. It’s important that we recognize that digital technologies have their own special features (affordances, as we’d say in Science and Technology Studies). But even then, within digital technology broadly, there are specific technologies that do different things and have different effects in the world. So AI and VR aren’t the same thing, and they’re going to have different effects on the world as they operate in different contexts. I’m not a fan of Marshall McLuhan, but I always teach one of his core insights that I think is crucial to understanding our role as human beings, not just to understanding technology, which is that technologies are extensions of ourselves—the car is an extension of my legs. It amplifies what I can do in terms of getting from point A to point B. It’s more than that, of course. It turns out it’s also an extension of my stereo system. It’s an extension of my phone—

 

BRC: It’s my coffee holder—

 

SV: Yeah, it’s an extension of my kitchen in the sense of my breakfast nook. It performs multiple functions. But it displaces and supplements and extends different activities that I was already going to do. And that’s really helpful. Think of even the ones that don’t seem quite so obvious, that a lightbulb is an extension of our eyes, because it extends the power of our eyes. And the other thing that we need to do—McLuhan was good at reminding us of this—is that we should denaturalize technology. Don’t assume the chalkboard in my classroom is a given and the computer in my classroom is technology—that one is natural and one is unnatural. The chalkboard and the computer that displays PowerPoint slides do similar work in different ways. As we look at the overall technological makeup of the classroom, we have to keep all of that in mind and decide which technologies to use in which situations.

 

BRC: And with the car example, if this is a policy discussion and you want to talk about, say, texting and driving, then you can’t just approach that issue as if the technologies at hand—phones, cars—were discrete and static objects. They are, as you say, new extensions and social capacities. You have to approach them with that social complexity, not just technical capability, no?

 

SV: Sure, and then there are whole other levels, even in just this case. The phone fundamentally changes the car, and the car fundamentally changes the phone. The act of driving is very different now than it was 10 years ago. The act of communicating through an electronic device is different now than it was 10 years ago. Even though we had mobile phones 10 years ago, the fact that I have instant Bluetooth connection in my phone makes my car a phone. I have voice command in there! It’s bizarre.

 

BRC: It’s that line I’ve heard, maybe it was from a comedian: I can’t believe I can make calls on my camera.

 

SV: Exactly. Or play Candy Crush Saga on my camera.


BRC: On this point, about how to talk about technology in the broader public sphere—and thus about technology policy—it just seems so much more difficult with a system that became so ubiquitous so quickly, I mean, 2.2 billion users on that platform alone in a decade. With cars, it built up somewhat more slowly, so, sure, it was problematic, it still is problematic, but we’ve been working on it for a long time. Here, though, is a new form of connection and daily infrastructure for so many of us. And not only that, but its massive presence makes people feel like they already understand it, because they use and maneuver through it every day. The question is, what do you do, as a scholar, to talk about Facebook in ways that shed light for people who probably feel like they already get it?

 

SV: The thing is, in the past 10 years, digital technology has become harder to understand, not easier. The easier it is to use, the harder it is to understand. I hate to play the when-I-was-young card, but just this once … When I was young, you could open up a personal computer and pull out the soundboard and pull out the video board, if you even had one, and swap in new RAM rather easily. You often didn’t even need a screwdriver. You could look and see, Oh, here’s the motherboard, here’s where the RAM slots are, here’s where the different peripherals go, here’s where the printer plugs in, and you could take it apart and you could put it back together with a tiny screwdriver and—and learn how it works. And you could customize it, like you can customize a 1965 Mustang. That ability to hack it meant that there was a greater potential that you could understand its powers and its limitations. But it also meant that you got to see the markings of human interaction.

 

BRC: It’s a nice image. The ways we handle technologies, as in, actually use our hands and interact.

 

SV: Right, so that the box in which my first few personal computers sat was big enough for human hands to operate inside. It was designed so that you could get your hands in there, and you could clearly see that people had used their hands to install this stuff. Now the computer sitting in front of me is not even an inch thick and is remarkably powerful and, for all I know, is run by a troop of tiny elves, because there’s no way for me to know what’s inside. It’s a sealed box. Everything is so easy to use that we don’t even ask, I wonder how that works? We’re in this bizarre situation where the only thing young people are told is to sign up for some sort of code academy and learn how to code. They are prevented from getting under the hood and changing spark plugs. We all are. And that’s intentional. The car companies wanted to create black boxes that are heavily computerized to limit competition in customization and repairs. That’s increasingly the technological world in which we live, one in which we are supposed to assume everything is magic and that it will just work. Going back to your earlier question, part of my job is to demystify these technologies, to get people to recognize that they are the products of human hands and human ingenuity. So you can see that human decisions and biases affected how they were designed and how they work in the world.

 

BRC: I’ll point to a line in your book about media systems being shaped by human relations and prejudices, ideologies, political power. Part of what you’re saying is that the relationships we have with these systems that a near-quarter of the human population uses—us included, I mean, you and I are talking by Skype through our laptops right now—these relationships encourage us to think we are individuals acting through social media, that we make these choices without recognizing how the design encourages certain forms of interaction over others. We’re so inside it we can’t even tell we’re acting on decisions of others’ making. I feel like I’m about to quote Marx here, so I’ll back off.

 

SV: Again, part of my job—in class, in writing—is to demystify technologies, so we recognize that they are not magical and mystical, but also to get us to recognize how weird it all is and how new it all is. I’m constantly trying to get the fish to realize that they’re swimming in the water, and that’s not always easy. But it’s important, you have to disorient yourself to get a sense of the whole and a sense of the process.

BRC: To the point at hand, then, what do you say about Facebook and democracy? Specifically, throughout your book you thread references to deliberative democracy, to what it means to deliberate as a citizen. How is the site designed to foster or undermine that?

 

SV: Facebook is designed to allow or encourage immediate and shallow interactions with a post. It could be a photograph, a block of text, an article from some publication, but in every case, you’re not invited to annotate the text or engage in a deliberative back-and-forth conversation. You’re invited to deal with a string of nested comments, encouraging, by design, speed and shallowness. If Facebook were devoted to deliberation, to encouraging users to participate and engage in an effective, cognitive manner, it wouldn’t allow people to comment unless they had clicked through and scanned every word of the story in question. Wouldn’t that be interesting? You’re not allowed to comment unless you do that? And no, of course Facebook has no incentive to do that. It’s designed for rapid, often emotional, response. It is very poorly designed for deep deliberation.

Now, people all around the world are trying to create platforms that can enhance the quality of deliberation. Some will be successful in that effort, but they are unlikely to be commercially successful. There is no money in deliberation. So as long as we insist on conducting our politics through a commercial service, we are going to destroy our politics.

 

BRC: You have, then, a concrete example of capitalist metrics in tension with those of democratic deliberation. That doesn’t have to be some esoteric think piece point. It’s right there in the clicks.

 

SV: And look, there’s a part of this that could make the conversation confusing, because when I say that Facebook undermines democracy, people who share my politics often take that word, democracy, and assume I mean all the things I support, basic human rights and civil rights and equal opportunity and all those liberal principles, but that’s not what I mean in this instance. I mean here a public’s ability to govern itself. And when you have a diverse public, you need to have norms and practices that respect that diversity and allow for multiple voices to argue over a shared set of facts toward an end, and that end might be a compromise or some agreement. The end can come in many forms, but we’re not even close to having that right now. Facebook’s not the only reason, I hope I don’t even have to add that, but Facebook is so powerful that it’s hurting rather than helping.

 

BRC: Scale and amplification.

 

SV: Right, that’s right.


BRC: You refer to a quote from Neil Postman that says, and I’m paraphrasing, that television is an instrument that directs not only our knowledge of the world, but our knowledge of ways of knowing the world. This seems apt for a discussion of social media, that our uses of digital social media not only influence what we know of the world, the facts that we gather, but through its forms of interaction it influences the very ways of understanding what we have available to us.

 

SV: So look, one of my great hopes is that when the next big wave comes, way beyond Facebook, people who’ve read my work come to it with a set of questions and approaches that can help make sense of it up front. If more of us had thought deeply about what Neil Postman offered us, we might not have rushed into the Facebook world with such enthusiasm. That’s all you can hope for in our business, that some people might think differently next time they face a problem.

 

BRC: Would you rather people find Antisocial Media in 20 years and think, “That is so dated, I can’t believe people were worried about that,” or that the future reader thinks, “Well, he really nailed it, we haven’t heeded what he offered,” just as Zuckerberg wasn’t the broad-minded engineer that engineering schools were trying to educate?

 

SV: My dream is to have people pick it up five years from now and say, “Wow, he was totally wrong about everything.” It would be wonderful if within a short time my book looks like a piece of alarmist trash, because that would mean that we actually confronted and solved a lot of these problems. That would be nice. It’s comforting to be of the age and level of security that I am, where I would rather be completely wrong than completely right. But I also hope my pessimism about solutions is quickly proven wrong. I would be thrilled if some smart group of people read Antisocial Media and came to the end and got really angry and dissatisfied that I don’t see a way out of the Facebook problem, and that inspired them to prove me wrong. Please, go create laws or policies or technologies that address this problem, or at least minimize it. I’d love it if I could inspire smarter people than myself to take on these challenges.

Featured image: Siva Vaidhyanathan. Photograph by Dan Addison / UVA Communications