Buying something off of Everlane or Amazon doesn’t make you a member of a community, no matter how much the marketing department tries to convince you otherwise. Yet in the past year, as our social life has withered under the pressures of quarantine, we’ve been forced to move our real-life communities online—and often to seek out new ones as well, on social media or even through mutual-aid spreadsheets advertised on websites and lampposts across the country.
Who gets to decide what an online community is and how it functions? The owners of online platforms always portray their proprietary databases and algorithms as fundamental to the user experience. But what if, instead, we focused on ourselves as users, with all our contradictory impulses, and looked for ways to liberate ourselves?
Users are customers and products, but also, sometimes, workers and coconspirators. A series of recent books has taken this realization to heart by probing online communities as spaces of struggle and resistance. Joanne McNeil’s Lurking reads the history of the internet through the process of “becoming a user,” focusing on the ways that this identity can be simultaneously a source of power and one of vulnerability. Jessa Lingel’s An Internet for the People examines the contradictions of Craigslist, a platform that claims to center and uplift ordinary users rather than corporate power. Sarah T. Roberts’s Behind the Screen looks at the labor that makes online communities possible: the traumatizing work of moderation, often offshored and made invisible by platform design.
These books go beyond traditional binary oppositions between techno-utopianism and techno-pessimism, which rely on the idea that the internet is a unitary thing: corrupting or liberating, responsible for our worst ills or promising a way out of them. Instead, they consider online collectivity as it actually exists, a complex product of clashing user desires and corporate aspirations.
Struggles between classes and material interests play out here as they do in other workplaces and social spaces in our capitalist society, but ideologies and values are important as well. After all, the founders of many of the platforms we rely on constantly talk in terms of their visions for the future, and some invest billions in what seem to be chimerical future-oriented projects. Yet they remain a relatively homogeneous group, even as user bases grow more numerous, engaged, and diverse.
This tension—between the liberatory force of online community and the power of white male tech elites (as Fred Turner showed in his 2006 book From Counterculture to Cyberculture)—goes back to the very origins of the internet. For example, Stewart Brand, an influential pioneer in home computing and networking, got his start in the New Communalist movement of the 1960s, in which tens of thousands of people formed intentional communities across the United States. Brand helped to link these communes and their antisystem frontier ideology with the techno-fetishism of the Cold War defense research network. In the process, he created a language that allowed libertarian billionaires to pose as rebels defending authentic communal life against ossified political and corporate bureaucracies.
Endlessly recycled stories about whiz kids fighting the system by building startups in their Silicon Valley garages obscure a much more complex tangle of experiences. The real story, as McNeil argues in Lurking, is about how people take the tools that online platforms give them and build communities and relationships of their own, often set against the owners of the platforms themselves.
For instance, ad tech makes it essential to reduce the fluid play of invented and assumed identities to the rigid, marketable framework of a profile irrevocably anchored to a “real name.” Trans people, sex workers, and others for whom the “reality” of a name is in doubt have often fought back against this reduction.
While Facebook is a more familiar antagonist, McNeil also shows how Friendster, one of the earliest web-based social networks, fought to delete profiles for the likes of Elvis and peanut-butter cups. In the process, it misunderstood the nature of the network it had created: “Visibility isn’t the same as being verified and vetted. … Tethering an identity to the internet meant that a user could only travel so far.” Friendster’s ability to enforce this tethering was still limited, but, with the omnipresence of smartphones, the task has become easier. As our real lives become more and more entangled with privately controlled platforms, more and more details of those lives—from the food we eat to the places we visit—become fodder for monetization. “When it comes to the tech giants that control centralized platforms,” McNeil writes, “sharing is taking.”
Alongside the corporate farming of user identities, there exists the ghost of a different kind of collective, which began to form earlier than most internet users assume. McNeil traces the history of communities like Echo, a long-lasting New York City–based BBS service founded by Stacy Horn in 1990. (A BBS, or bulletin board system, allowed a computer to dial in to a central system to access a set of message boards, like a combination of a web forum and an ISP, or internet service provider; users could take advantage of them for anything from classifieds to dating and free-flowing chat about current events.) Unusually, Echo’s gender distribution was nearly equal, and it housed a literary and artistic coterie more like the regulars of an artsy 1990s coffee shop than the stereotyped basement dwellers that came to typify subsequent communities. Other ’90s platforms, like LatinoLink, CyberPowWow, and Cafe los Negroes, were founded by and for people of color. (Charlton McIlwain’s Black Software explores other online spaces created by and for Black users.) Like Echo, these spaces were later forgotten as the early internet was rewritten as a world of straight white males.
Yet these moments of autonomy—as McNeil is careful to point out—are not relics of a past golden age, evidence of a good and healthy version of the internet before monetization. Even then they were the exception, not the rule, and they had problems of their own. Spaces where users organize themselves, establish norms with an eye to safety and inclusion rather than marketability, and are free to experiment with identity rather than being bound to a demographic profile have never been the standard. Still, they have always been possible.
One attempt at such a space is Craigslist. As it has grown to replace newspaper classifieds across the world, Craig Newmark’s website has retained its spare design and its commitment to some version of the anticapitalist ethos of 1980s Silicon Valley communalism. Craigslist has resisted the temptation to monetize user data or generate funds in any but the most transparent ways; it prioritizes user autonomy over content moderation; and it presents a model of the market that relies on productive reuse, localism, and small-scale person-to-person transactions, rather than (for instance) buying new, foreign-made goods at a big-box store.
These values—nonmonetization of data, freedom for users, local economics—should be taken seriously, as Jessa Lingel argues in An Internet for the People: they’re not cynical marketing slogans but reflect genuine commitments on Craigslist’s part. Yet no matter how sincerely the platform takes them, all these values are contested and allow for multiple interpretations.
For instance, when Craigslist interprets user freedom as a lack of content moderation, it risks compromising the freedom of people who don’t want to put up with sexist or racist abuse. Yet implementing content moderation not only raises a new set of questions about who is doing the moderating but also risks turning the moderation system itself into a site of abuse.
Anonymity can be a refuge—for people who aren’t out as LGBT, for instance—but it can also be a shield for bad actors, especially in the fraught context of a personal ad. (Is an anonymous profile a sign of someone who doesn’t want their nonnormative sexuality to become public knowledge or of someone hiding their abusive past?)
Community members often don’t share values; people who buy or sell objects on the basis of their putative intrinsic or emotional value resent entrepreneurs for whom the objects are simply stuff. Such disagreements get at fundamental questions about whether users should share Craigslist’s anticommercial ethos, or accept it as simply a governing principle for the platform itself—in other words, accept that the website functions like a commune or a group house.
Yet sometimes threats can also come from outside. In perhaps the most prominent recent controversy involving Craigslist, groups seeking to criminalize sex work secured legislation that forced the site to police sex-work ads, which for many sex workers had been a lifeline and a source of independence.
When we’re faced with such outside interventions, we risk developing a vision of online community life that falls into what Gavin Mueller has called (in the context of the copyleft and free software movement) “digital Proudhonism”: the ideal of a community of independent producers that needs only to overcome big-tech monopolies and legal barriers to flourish. Digital Proudhonism is a tempting and popular viewpoint, but it ignores the fact that the internet has never been run by small-scale producers, and that the vast majority of people who create it (as users, writers, or developers) would benefit more from organizing and collective action than from entrepreneurial freedom. Moreover, digital Proudhonism obscures or sidelines the class relationships that have made the internet what it is. For instance, Lingel sees class as primarily an aesthetic or cultural language that codes certain modes of interaction as poor or wealthy, rather than as a set of power relationships that determine who benefits from the way Craigslist operates.
This vision of the internet—as a battle between large companies and small entrepreneurs—is powerfully rebutted by the work of Sarah T. Roberts. Behind the Screen focuses on the labor of the thousands of content moderators around the world whose job it is to sort through millions of abusive, graphic, and disturbing posts that are shared to social media every day. Platforms like Facebook need to present themselves as at least relatively safe spaces for their users; encountering cartel beheading clips or child pornography is clearly incompatible with that goal.
Moderating this kind of content is a harrowing experience for the workers: “Horror movies are ineffective at this point. I’ve seen all that stuff in real life … it’s permanently damaging,” says one of Roberts’s interviewees. Yet these workers—in part because of the supposedly menial nature of their jobs—are employed on a temporary contract basis and excluded from the comparatively generous perks, including health care coverage, that other employees on tech campuses receive.
Most users will never see or hear of the workplaces where their content is processed; tech companies are careful to present our feeds as generated by our friends and processed by an algorithm. When we buy a table or an iPhone, we are increasingly accustomed to reflecting on the working conditions of the people who make it; when we browse a website, we still think of ourselves as just using it, a form of selective vision that enables a variety of forms of exploitation.
The anonymity and concealed nature of moderation labor also allow corporations to move these jobs offshore to countries like the Philippines, where low wages and an English-speaking population make for a profitable combination. Roberts interviews a series of Filipinx workers who are keenly aware of the escalating productivity demands that accompany offshoring. The growth of offshoring is visible even in Manila’s urban fabric, where moderation and call-center work take place in a series of newly built free-trade enclaves: the apparently immaterial labor of online content is bound up with the physical infrastructure of cities across the world as they become more dependent on the provision of remote services.
Yet the tech overlords aren’t always calling the shots. Roberts shows how moderation workers are beginning to overcome the isolation and atomization that is such a perennial feature of their jobs. In both the United States and the Philippines, they have begun to organize and fight collectively for better working conditions. After the publication of Behind the Screen, Facebook’s content moderators received a $52 million settlement in a class-action lawsuit against the company, getting potentially thousands of dollars in payouts as well as mental-health counseling. Such sums cannot fully compensate for the psychic harm incurred through watching thousands of gruesome videos, but they show that companies are beginning to reckon with the collective power of their contractors—something their business model had originally been designed to foreclose.
As the internet becomes more and more consolidated under the umbrella of a handful of corporate behemoths who are themselves increasingly imbricated with the national-security state, it becomes imperative for us as ordinary users to develop a new sense of solidarity: the recognition both of common interests and of shared ethical obligations toward one another. We aren’t all tech workers, of course, so labor unions aren’t a universally applicable solution, but we can come to understand that the business model of the tech giants relies on the complicity and psychological manipulation of their user base.
McNeil is right to point out that “decentralizing the internet, alone, is not enough to rid the internet of its worst users,” but it can reduce the profit motive that incentivizes the formation of certain kinds of communities and amplifies the toxic dynamics that drive them. Platforms like Mastodon—a sort of decentralized, open-source version of Twitter that anyone can host on a networked machine—can be part of the solution, even if they perennially struggle to attract users whose friends are already elsewhere (and can be prone to developing toxic dynamics of their own).
That is not to embrace technological solutionism: new apps won’t save the world. Yet the way we live now is so interwoven with online communities that it is hard to imagine anything changing without those spaces becoming liberated as well.
At the height of the summer 2020 uprisings, hundreds of thousands of people became reliant on Facebook events to identify protests in their area. It quickly became apparent that unvetted, opportunist actors were taking advantage of this dynamic to promote their social-media brands or advance compromised actions that involved collaboration with the state.
It seems unlikely that Mark Zuckerberg was personally responsible for this phenomenon, but the opacity and virality mechanics of his platforms contributed to undermining trust among the participants—and of course Facebook and Instagram have collaborated extensively with law enforcement. As protestors faced higher risks from the state, they began to shift to closed Signal chats or even off networked platforms entirely.
What would have to change for this to not be necessary? Can we remain users—more or less autonomous accounts on an open, large-scale platform—and still keep one another safe?
This article was commissioned by Mona Sloane.