Social media companies are struggling to govern their platforms, and users are feeling the effects. Harassment, fake news, debates over nudity and terrorist content: cyberspace today is overwhelming. Platforms are scrambling to shore up their products, yet solving these problems is a moral and political project, not a technical one. And it starts with us, the users, taking a more active role in the governance of these spaces.
It’s difficult to know what it means to be a citizen of the internet when the internet itself is constantly in flux. In 1997, Michael Hauben described “netizens” as internet users who are more than regular users: they are selfless, dedicated to collectively constructing and maintaining cyberspace—but they are not explicitly political actors. In fact, Hauben points out that he coined the word netizen because the term “net citizen” was too political. To be a netizen was primarily to engage in building a new space on the open web. The political process of rule-making in these spaces was not explicit, but rather embedded in these projects.
Solving tech’s problems is a moral and political project, not a technical one. Today, our online lives are dictated by a handful of centrally managed walled gardens. The power wielded by Facebook and Google is evident in their dominance of advertising and their role as the main sources of traffic for external publishers. Once inside their walls, they attempt to close the door behind us. That’s why Google uses its search results to keep you in its realm of services, and why Facebook’s servers directly host content from outside publishers. As more online activity that was once dispersed across the web coalesces within these walled gardens, the titans of the industry gain far greater control over how we behave online. They are able to collect unprecedented quantities of sensitive data about each of us—data that can be used to sell us shoes as well as predict our personalities, political beliefs, and mental health. This internet is vastly different from the open web where Hauben’s netizen flourished.
Platform administrators have historically made the rules governing these walled gardens, but governments, threatened by their global power, increasingly want to control how their citizens communicate online. Cybercrime laws that restrict expression online are now common. Increasingly, platforms are being asked to enforce laws under threat of massive fines and shutdowns—policies that will push platforms to err on the side of removing content. Agreeing to these demands, as companies including Facebook and Google are already doing in some places, is creating alternative online realities for users.
If this trend continues, the internet will further splinter, pushing people apart. Platforms will be expected to enforce countless standards for crimes like defamation and incitement to discrimination around the globe. In most cases we won’t know the locations of the parties we interact with online, making it difficult to know which laws apply to an exchange at any given time. Publications wishing to reach a global audience will be removed from Facebook news feeds and Google search results in countries with restrictive laws. As the enforcement of such laws becomes normalized, other countries may follow suit. The startling implication is that the internet becomes a very different place from country to country. A bordered internet is not a global one.
This reality requires an updated version of online citizenship—one that seeks to preserve the internet’s global reach without sacrificing the collaborative energy that makes an open, generative web possible. To accomplish this, we need platforms to incorporate users in the democratic governance of cyberspace. It’s an idea that American cyberlaw scholars Lawrence Lessig and David Post have both advanced, for good reason. Platforms lack the legitimacy to make rules in the eyes of governments, but governments aren’t able to adequately control the internet, because of its scale. A return to a version of the internet lacking these walled gardens seems unlikely. However, introducing democratic values to the process would grant platforms greater legitimacy, and likely lead to better outcomes.
To do this, we need to radically reimagine the platform-user relationship, with two fundamental requirements: the creation of formal structures through which users can deliberate on and influence platform governance, and a grounding of that engagement in a sense of obligation to a community within the platform.
In theory, users can already influence platform policy through market forces, but in practice this avenue is inadequate. In his 1970 book Exit, Voice, and Loyalty, the German-born economist Albert Hirschman proposed two possible responses for dissatisfied members of an organization: they can exit by leaving for new organizations, or they can exercise voice by remaining in the group and speaking up. The political philosopher William Galston observes that exit is market-like behavior while voice is political behavior. He argues that because the barriers to leaving online groups are low, exit will be the dominant choice.
While Galston may be correct that it is technically easy to leave online platforms, he underestimates how difficult it is to leave in other respects.
Consider what legal scholar James Grimmelmann calls the “Cheers problem.” As the theme song to the TV show goes, “you want to go where everybody knows your name.” Grimmelmann argues that the Cheers problem allows platforms to exercise technical power over users without fear of exit. You may dislike a technical change to Facebook, or Reddit, or Instagram, but you’re unlikely to switch to a new platform unless your friends leave too. Grimmelmann’s point illustrates that platform rule-making via exit fails because online platforms are only valuable to the extent that they are networked with the right people.
Grimmelmann proposes that denying platforms social power (for example, by having many small, competing platforms) would force them to take the voice of “We the Users” seriously. But this method has a weakness. It reaches voice via the threat of exit. Its source of legitimacy is not a respect for citizens but a fear of competition. It would be difficult to consistently find the right balance in which the barriers are low enough to make exit a serious threat but high enough to compel users to commit to voice instead. Assuming that balance is struck, Grimmelmann’s “We the Users” are not citizens. They are just disgruntled consumers. As soon as the market reorients to reduce the threat of exit, they are powerless.
I believe that a fundamental difference between a citizen and a consumer online is membership in a community. As political theorist Darin Barney observes, a member of a community feels a sense of obligation to a group that is strong enough to defy one’s personal interests—this is what separates communities from looser, unencumbered forms of association. He goes on to argue that volunteerism is the dominant form of association online because we tend to join online groups only out of interest, not a sense of obligation.
Barney is correct. We are not born into internet communities; members must first join out of interest. This makes the natural development of obligation online difficult. But the reason members join and the reason they choose to stay are not necessarily the same. Something like obligation must be developed and cultivated over time. The question we must ask is whether members of an online community can join out of interest but stay because they have developed a sense of obligation.
Investment in collaboratively building a space online like Wikipedia (or netizens’ work on the open web) seems to accomplish this, but such collaborative efforts in today’s walled gardens are rare. Users often collaborate on the creation of content, but not on the creation of the space itself. Yet users can be formally and actively incorporated into the process of rulemaking. The platforms are building cyberspace, but its governance can serve as the project that weds users’ self-interest to an obligation to the community itself.
This solution creates a circular requirement for citizenship: developing obligation in an online community requires the activity of citizenship just as much as citizenship requires community. Interest would still be the reason new users join internet communities, but obligation—and by extension, community—may be developed over time through active engagement with rulemaking duties. An example might be volunteer moderators on Reddit, who actively participate in platform governance (although it’s typically a non-democratic form of governance).
Another case may be found in the online multiplayer video game League of Legends. From 2011 to 2014, the game included a jury-like feature called “the Tribunal”—a system that allowed players to review evidence and then vote on the guilt of other players who had been accused of toxic behavior. The system was effective, according to the researcher behind it, but it was also slow and costly, so it was replaced with a machine-learning algorithm.
I acknowledge that this is asking a lot of platforms and their users. Perhaps the idea of a global democratic community is too ambitious. Or maybe the idea that private companies can be home to a truly public sphere is fatally flawed. But the future of the internet will be what we make of it. Those who strive for an open and collaborative future must accept this burden.