Social Networking and Ethics

First published Fri Aug 3, 2012; substantive revision Mon Aug 30, 2021

In the 21st century, new media technologies for social networking such as Facebook, Twitter, WhatsApp and YouTube began to transform the social, political and informational practices of individuals and institutions across the globe, inviting philosophical responses from the community of applied ethicists and philosophers of technology. While scholarly responses to social media continue to be challenged by the rapidly evolving nature of these technologies, the urgent need for attention to the social networking phenomenon is underscored by the fact that it has profoundly reshaped how many human beings initiate and/or maintain virtually every type of ethically significant social bond or role: friend-to-friend, parent-to-child, co-worker-to-co-worker, employer-to-employee, teacher-to-student, neighbor-to-neighbor, seller-to-buyer, doctor-to-patient, and voter-to-voter, to offer just a partial list. Nor are the ethical implications of these technologies strictly interpersonal, as it has become evident that social networking services (hereafter referred to as SNS) and other new digital media have profound implications for democracy, public institutions and the rule of law. The complex web of interactions between SNS developers and users, and their online and offline communities, corporations and governments—along with the diverse and sometimes conflicting motives and interests of these various stakeholders—will continue to require rigorous ethical analysis for decades to come.

Section 1 of the entry outlines the history and working definition of social networking services. Section 2 identifies the early philosophical foundations of reflection on the ethics of online social networks, leading up to the emergence of Web 2.0 standards (supporting user interactions) and full-fledged SNS. Section 3 reviews the primary ethical topic areas around which philosophical reflections on SNS have, to date, converged: privacy; identity and community; friendship, virtue and the good life; democracy, free speech, misinformation/disinformation and the public sphere; and cybercrime. Finally, Section 4 reviews some of the metaethical issues potentially impacted by the emergence of SNS.

1. History and Definitions of Social Networking Services

‘Social networking’ is an inherently ambiguous term requiring some clarification. Human beings have been socially ‘networked’ in one manner or another for as long as we have been on the planet, and we have historically availed ourselves of many successive techniques and instruments for facilitating and maintaining such networks. These include structured social affiliations and institutions such as private and public clubs, lodges and churches as well as communications technologies such as postal and courier systems, telegraphs and telephones. When philosophers speak today, however, of ‘Social Networking and Ethics’, they usually refer more narrowly to the ethical impact of an evolving and loosely defined group of information technologies, most based on or inspired by the ‘Web 2.0’ software standards that emerged in the first decade of the 21st century. While the most widely used social networking services are free, they operate on large platforms that offer a range of related products and services that underpin their business models, from targeted advertising and data licensing to cloud storage and enterprise software. Ethical impacts of social networking services are loosely clustered into three categories – direct impacts of social networking activity itself, indirect impacts associated with the underlying business models that are enabled by such activity, and structural implications of SNS as novel sociopolitical and cultural forces.

1.1 Online Social Networks and the Emergence of ‘Web 2.0’

Prior to the emergence of Web 2.0 standards, the computer had already served for decades as a medium for various forms of social networking, beginning in the 1970s with social uses of the U.S. military’s ARPANET and evolving to facilitate thousands of Internet newsgroups and electronic mailing lists, BBS (bulletin board systems), MUDs (multi-user dungeons) and chat rooms dedicated to an eclectic range of topics and social identities (Barnes 2001; Turkle 1995). These early computer social networks were systems that grew up organically, typically as ways of exploiting commercial, academic or other institutional software for more broadly social purposes. In contrast, Web 2.0 technologies evolved specifically to facilitate user-generated, collaborative and shared Internet content, and while the initial aims of Web 2.0 software developers were still largely commercial and institutional, the new standards were designed explicitly to harness the already-evident potential of the Internet for social networking. Most notably, Web 2.0 social interfaces redefined the social topography of the Internet by enabling users to build increasingly seamless connections between their online social presence and their existing social networks offline—a trend that shifted the Internet away from its earlier function as a haven for largely anonymous or pseudonymous identities forming sui generis social networks (Ess 2011).

Starting in the first decade of the 21st century, among the first websites to employ the new standards explicitly for general social networking purposes were Orkut, MySpace, LinkedIn, Friendster, Bebo, Habbo and Facebook. Subsequent trends in online social networking include the rise of sites dedicated to media and news sharing (YouTube, Reddit, Flickr, Instagram, Vine, Snapchat, TikTok), microblogging (Tumblr, Twitter, Weibo), location-based networking (Foursquare, Loopt, Yelp, YikYak), messaging and VoIP (WhatsApp, Messenger, WeChat), social gaming (Steam, Twitch) and interest-sharing (Pinterest).

1.2 Early Scholarly Engagement with Social Networking Services

Study of the ethical implications of SNS was initially seen as a subpart of Computer and Information Ethics (Bynum 2018). While Computer and Information Ethics certainly accommodates an interdisciplinary approach, its direction and problems were initially largely defined by philosophically-trained scholars such as James Moor (1985) and Deborah G. Johnson (1985). Yet this has not been the early pattern for the ethics of social networking. Partly due to the coincidence of the social networking phenomenon with the emerging interdisciplinary social science field of ‘Internet Studies’ (Consalvo and Ess, 2011), the ethical implications of social networking technologies were initially targeted for inquiry by a loose coalition of sociologists, social psychologists, anthropologists, ethnographers, law and media scholars and political scientists (see, for example, Giles 2006; Boyd 2007; Ellison et al. 2007; Ito 2009). Consequently, philosophers who have turned their attention to social networking and ethics have had to decide whether to pursue their inquiries independently, drawing primarily from traditional philosophical resources in applied computer ethics and the philosophy of technology, or to develop their views in consultation with the growing body of empirical data and conclusions already being generated by other disciplines. While this entry will primarily confine itself to reviewing existing philosophical research on social networking ethics, links between that research and studies in other disciplinary contexts remain vital.

Indeed, recent academic and popular debates about the harms and benefits of large social media platforms have been driven far more visibly by scholars in sociology (Benjamin 2019), information studies (Roberts 2019), psychology (Zuboff 2019) and other social sciences than by philosophers, who remain comparatively disengaged. In turn, rather than engage with philosophical ethics, social science researchers in this field typically anchor normative dimensions of their analyses in broader political frameworks of justice and human rights, or psychological accounts of wellbeing. This has led to a growing debate about whether philosophical ‘ethics’ remains the right lens through which to subject social networking services or other emerging technologies to normative critique (Green 2021, Other Internet Resources). This debate is driven by several concerns. First is the growing professionalization of applied ethics (Stark and Hoffmann 2019) and its perceived detachment from social critique. A second concern is the trend of insincere corporate appropriation of the language of ethics for marketing, crisis management and public relations purposes, known as ‘ethicswashing’ (Bietti 2020). Finally, there is the question of whether philosophical theories of ethics, which have traditionally focused on individual actions, are sufficiently responsive to the structural conditions of social injustice that drive many SNS-associated harms.

2. Early Philosophical Concerns about Online Social Networks

Among the first philosophers to take an interest in the ethical significance of social uses of the Internet were phenomenological philosophers of technology Albert Borgmann and Hubert Dreyfus. These thinkers were heavily influenced by Heidegger’s (1954 [1977]) view of technology as a monolithic force with a distinctive vector of influence, one that tends to constrain or impoverish the human experience of reality in specific ways. While Borgmann and Dreyfus were primarily responding to the immediate precursors of Web 2.0 social networks (e.g., chat rooms, newsgroups, online gaming and email), their conclusions, which aim at online sociality broadly construed, are directly relevant to SNS.

2.1 Borgmann’s Critique of Social Hyperreality

Borgmann’s early critique (1984) of modern technology addressed what he called the device paradigm, a technologically-driven tendency to conform our interactions with the world to a model of easy consumption. By 1992’s Crossing the Postmodern Divide, however, Borgmann had become more narrowly focused on the ethical and social impact of information technologies, employing the concept of hyperreality to critique (among other aspects of information technology) the way in which online social networks may subvert or displace organic social realities by allowing people to “offer one another stylized versions of themselves for amorous or convivial entertainment” (1992, 92) rather than allowing the fullness and complexity of their real identities to be engaged. While Borgmann admits that in itself a social hyperreality seems “morally inert” (1992, 94), he insists that the ethical danger of hyperrealities lies in their tendency to leave us “resentful and defeated” when we are forced to return from their “insubstantial and disconnected glamour” to the organic reality which “with all its poverty inescapably asserts its claims on us” by providing “the tasks and blessings that call forth patience and vigor in people.” (1992, 96)

There might be an inherent ambiguity in Borgmann’s analysis, however. On the one hand he tells us that it is the competition with our organic and embodied social presence that makes online social environments designed for convenience, pleasure and ease ethically problematic, since such environments will inevitably be judged more satisfying than the ‘real’ social environment. But he goes on to claim that online social environments are themselves ethically deficient:

Those who become present via a communication link have a diminished presence, since we can always make them vanish if their presence becomes burdensome. Moreover, we can protect ourselves from unwelcome persons altogether by using screening devices….The extended network of hyperintelligence also disconnects us from the people we would meet incidentally at concerts, plays and political gatherings. As it is, we are always and already linked to the music and entertainment we desire and to sources of political information. This immobile attachment to the web of communication works a twofold deprivation in our lives. It cuts us off from the pleasure of seeing people in the round and from the instruction of being seen and judged by them. It robs us of the social resonance that invigorates our concentration and acumen when we listen to music or watch a play.…Again it seems that by having our hyperintelligent eyes and ears everywhere, we can attain world citizenship of unequaled scope and subtlety. But the world that is hyperintelligently spread out before us has lost its force and resistance. (1992, 105–6)

Critics of Borgmann saw him as adopting Heidegger’s (1954 [1977]) substantivist, monolithic model of technology as a singular, deterministic force in human affairs (Feenberg 1999; Verbeek 2005). This model, known as technological determinism, represents technology as an independent driver of social and cultural change, shaping human institutions, practices and values in a manner largely beyond our control. Whether or not this is ultimately Borgmann’s view (or Heidegger’s), his critics saw it in remarks of the following sort: “[Social hyperreality] has already begun to transform the social fabric…At length it will lead to a disconnected, disembodied, and disoriented sort of life…It is obviously growing and thickening, suffocating reality and rendering humanity less mindful and intelligent.” (Borgmann 1992, 108–9)

Critics asserted that Borgmann’s analysis suffered from his lack of attention to the substantive differences between particular social networking technologies and their varied contexts of use, as well as the different motivations and patterns of activity displayed by individual users in those contexts. For example, Borgmann neglected the fact that physical reality does not always enable or facilitate connection, nor does it do so equally for all persons: those who live in remote rural areas, neurodivergent persons, disabled persons and members of socially marginalized groups are often not well served by the affordances of physical social spaces. As a consequence, Andrew Feenberg (1999) claims that Borgmann overlooked how online social networks can supply sites of democratic resistance for those who are physically or politically disempowered by many ‘real-world’ networks.

2.2 Hubert Dreyfus on Internet Sociality: Anonymity versus Commitment

Philosopher Hubert Dreyfus (2001) shared Borgmann’s early critical suspicion of the ethical possibilities of the Internet; like Borgmann’s, Dreyfus’s reflections on the ethical dimension of online sociality conveyed a view of such networks as an impoverished substitute for the real thing. Dreyfus’s suspicion was likewise informed by his phenomenological roots, which led him to focus his critical attention on the Internet’s suspension of fully embodied presence. Yet rather than draw upon Heidegger’s metaphysical framework, Dreyfus (2004) reached back to Kierkegaard in forming his criticisms of life online. Dreyfus asserts that what online engagements intrinsically lack is exposure to risk, and without risk, Dreyfus tells us, there can be no true meaning or commitment found in the electronic domain. Instead, we are drawn to online social environments precisely because they allow us to play with notions of identity, commitment and meaning, without risking the irrevocable consequences that ground real identities and relationships. As Dreyfus put it:

…the Net frees people to develop new and exciting selves. The person living in the aesthetic sphere of existence would surely agree, but according to Kierkegaard, “As a result of knowing and being everything possible, one is in contradiction with oneself” (Present Age, 68). When he is speaking from the point of view of the next higher sphere of existence, Kierkegaard tells us that the self requires not “variableness and brilliancy,” but “firmness, balance, and steadiness” (Dreyfus 2004, 75)

While Dreyfus acknowledges that unconditional commitment and acceptance of risk are not excluded in principle by online sociality, he insists that “anyone using the Net who was led to risk his or her real identity in the real world would have to act against the grain of what attracted him or her to the Net in the first place” (2004, 78).

2.3 Contemporary Reassessment of Early Phenomenological Critiques of SNS

While Borgmann and Dreyfus’s views continue to inform the philosophical conversation about social networking and ethics, both of these early philosophical engagements with the phenomenon manifest certain predictive failures (as is perhaps unavoidable when reflecting on new and rapidly evolving technological systems). Dreyfus did not foresee the way in which popular SNS such as Facebook, LinkedIn and Twitter would shift away from the earlier online norms of anonymity and identity play, instead giving real-world identities an online presence which in some ways is less ephemeral than bodily presence (as those who have struggled to erase online traces of past tweets or to delete Facebook profiles of deceased loved ones can attest).

Likewise, Borgmann’s critiques of “immobile attachment” to the online datastream did not anticipate the rise of mobile social networking applications which not only encourage us to physically seek out and join our friends at those same concerts, plays and political events that he envisioned us passively digesting from an electronic feed, but also enable spontaneous physical gatherings in ways never before possible. That said, such short-term predictive failures may not, in the long view, turn out to be fatal to their legacies. After all, some of the most enthusiastic champions of the Internet’s liberating social possibilities to be challenged by Dreyfus (2004, 75), such as Sherry Turkle, have since articulated far more pessimistic views of the trajectory of new social technologies. Turkle’s concerns about social media in particular (2011, 2015), namely that they foster a peculiar alienation in connectedness that leaves us feeling “alone together,” resonate well with Borgmann’s earlier warnings about electronic networks.

2.3.1 Borgmann, Dreyfus and the ‘Cancel Culture’ Debates

The SNS phenomenon continues to be ambiguous with respect to confirming Borgmann and Dreyfus’ early predictions. One of their most unfounded worries was that online social media would lead to a culture in which personal beliefs and actions are stripped of enduring consequence, cut adrift from real-world identities as persons accountable to one another. Today, no regular user of Twitter or Reddit is cut off from “the instruction of being seen and judged” (Borgmann 1992). And contra Dreyfus, it is primarily through the power of social media that people’s identities in the real world are now exposed to greater risk than before – from doxing to loss of employment to being physically endangered by ‘swatting.’

If anything, contemporary debates about social media’s alleged propagation of a stifling ‘cancel culture,’ which bend back upon the philosophical community itself (Weinberg 2020, Other Internet Resources), reflect growing anxieties among many that social networking environments primarily lack affordances for forgiveness and mercy, not judgment and personal accountability. Yet others see the emergent phenomenon of online collective judgment as performing a vital function of moral and political levelling, one in which social media enable the natural ethical consequences of an agent’s speech and acts to at last be imposed upon the powerful, not merely the vulnerable and marginalized.

2.3.2 The Civic Harms of Social Hyperreality

One aspect of Borgmann’s (1992) account has recently rebounded in plausibility; namely, his prediction of a dire decline in civic virtues among those fully submerged in the distorted political reality created by the disembodied and disorienting ‘hyperintelligence’ of online social media. In the wake of the 2016 UK and US voter manipulation by foreign armies of social media bots, sock puppets, and astroturf accounts, the world has seen a rapid global expansion and acceleration of political disinformation and conspiracy theories through online social networks like Facebook, Twitter, YouTube and WhatsApp.

The profound harms of the ‘weaponization’ of social media disinformation go well beyond voter manipulation. In 2020, disinformation about the COVID-19 pandemic greatly impeded public health authorities by clouding the public’s perception of the severity and transmissibility of the virus as well as the utility of prophylactics such as mask-wearing. Meanwhile, the increasing global influence of ever-mutating conspiracy theories borne on social media platforms by the anonymous group QAnon suggests that Borgmann’s warning of the dangers of our rising culture of ‘hyperreality,’ long derided as technophobic ‘moral panic,’ was dismissed far too hastily. While the notorious ‘Pizzagate’ episode of 2016 (Miller 2021) was the first visible link between QAnon conspiracies and real-world violence, the alarming uptake in 2020 of QAnon conspiracies by violent right-wing militias in the United States led Facebook and Twitter to abandon their prior tolerance of the movement and ban or limit access to hundreds of thousands of QAnon-associated accounts.

Such moves came too late to stabilize the epistemic and political rift in a shared reality. By late 2020, QAnon had boosted a widely successful effort by supporters of outgoing President Donald Trump to create a (manifestly false) counter-narrative around the 2020 election purporting that he had actually won, leading to a failed insurrection at the U.S. Capitol on January 6, 2021. Borgmann’s warnings on ‘hyperreality’ seem less like moral panic and more like prescience when one considers the existence of a wide swath of American voters who remain convinced that Donald Trump is still legitimately in office, directing actions against his enemies. Such counter-narratives are not merely ‘underground’ belief systems; they compete directly with reality itself. On June 17, 2021, the mainstream national newspaper USA Today found it necessary to publish a piece titled “Fact Check: Hillary Clinton was not hanged at Guantanamo Bay” (Wagner 2021) in response to a video being widely shared on the social media platforms TikTok and Instagram, which describes in fine detail the (very much alive) Clinton’s last meal.

Borgmann’s long-neglected work on social hyperreality thus merits reevaluation in light of the growing fractures and incoherencies that now splinter and twist our digitally mediated experience of what remains, underneath it all, a common world. The COVID-19 pandemic and increasingly catastrophic impacts of climate change testify to humanity’s vital need to remain anchored in and intelligently responsive to a shared physical reality.

Yet both the spread of social media-driven disinformation and the rise of online moral policing reveal an unresolved philosophical tension that Borgmann’s own work did not explicitly confront. This is the concept of toleration and its paradoxes, which continue to bedevil modern political thought. Social networking services have transformed this festering concern of political philosophy into something verging on an existential crisis. When malice and madness can be amplified on a global scale at lightspeed, in a manner affordable and accessible to anyone with a smartphone or wifi connection, what is too injurious and too irremediable to be said, or shared (Marin 2021)?

Social media continue to drive a range of new philosophical investigations in the domains of social epistemology and ethics, including ‘vice epistemology’ (Kidd, Battaly, Cassam 2020). Such investigations raise urgent questions about the relationship between online disinformation/misinformation, individual moral and epistemic responsibility, and the responsibility of social media platforms themselves. On this point, Regina Rini (2017) has argued that the problem of online disinformation/misinformation is not properly conceived in terms of individual epistemic vice, but rather must be seen as a “tragedy of the epistemic commons” that will require institutional and structural solutions.

3. Contemporary Ethical Concerns about Social Networking Services

While early SNS scholarship in the social and natural sciences tended to focus on SNS impact on users’ psychosocial markers of happiness, well-being, psychosocial adjustment, social capital, or feelings of life satisfaction, philosophical concerns about social networking and ethics have generally centered on topics less amenable to empirical measurement (e.g., privacy, identity, friendship, the good life and democratic freedom). More so than ‘social capital’ or feelings of ‘life satisfaction,’ these topics are closely tied to traditional concerns of ethical theory (e.g., virtues, rights, duties, motivations and consequences). These topics are also tightly linked to the novel features and distinctive functionalities of SNS, more so than some other issues of interest in computer and information ethics that relate to more general Internet functionalities (for example, issues of copyright and intellectual property).

Despite the methodological challenges of applying philosophical theory to rapidly shifting empirical patterns of SNS influence, philosophical explorations of the ethics of SNS have continued in recent years to move away from Borgmann and Dreyfus’ transcendental-existential concerns about the Internet, to the empirically-driven space of applied technology ethics. Research in this space explores three interlinked and loosely overlapping kinds of ethical phenomena:

  • direct ethical impacts of social networking activity itself (just or unjust, harmful or beneficial) on participants as well as third parties and institutions;
  • indirect ethical impacts on society of social networking activity, caused by the aggregate behavior of users, platform providers and/or their agents in complex interactions between these and other social actors and forces;
  • structural impacts of SNS on the ethical shape of society, especially those driven by the dominant surveillant and extractivist value orientations that sustain social networking platforms and culture.

Most research in the field, however, remains topic- and domain-driven—exploring a given potential harm or domain-specific ethical dilemma that arises from direct, indirect, or structural effects of SNS, or more often, in combination. Sections 3.1–3.5 outline the most widely discussed of contemporary SNS’ ethical challenges.

3.1 Social Networking Services and Privacy

Fundamental practices of concern for direct ethical impacts on privacy include: the transfer of users’ data to third parties for intrusive purposes, especially marketing, data mining, and surveillance; the use of SNS data to train facial-recognition systems or other algorithmic tools that identify, track and profile people without their free consent; the ability of third-party applications to collect and publish user data without their permission or awareness; the dominant reliance by SNS on opaque or inadequate privacy settings; the use of ‘cookies’ to track online user activities after they have left a SNS; the abuse of social networking tools or data for stalking or harassment; widespread scraping of social media data by academic researchers for a variety of unconsented purposes; undisclosed sharing of user information or patterns of activity with government entities; and, last but not least, the tendency of SNS to foster imprudent, ill-informed or unethical information sharing practices by users, either with respect to their own personal data or data related to other persons and entities. Facebook has been a particular lightning-rod for criticism of its privacy practices (Spinello 2011, Vaidhyanathan 2018), but it is just the most visible member of a far broader and more complex network of SNS actors with access to unprecedented quantities of sensitive personal data.

Indirectly, the incentives of social media environments create particular problems with respect to privacy norms. For example, since it is the ability to access information freely shared by others that makes SNS uniquely attractive and useful, and since platforms are generally designed to reward disclosure, it turns out that contrary to traditional views of information privacy, giving users greater control over their information-sharing practices can actually lead to decreased privacy for themselves and others in their network. Indeed, advertisers, insurance companies and employers are increasingly less interested in knowing the private facts of individual users’ lives, and more interested in using their data to train algorithms that can predict the behavior of people very much like that user. Thus the real privacy risk of our social media practices is often not to ourselves but to other people; if a person is comfortable with the personal risk of their data sharing habits, it does not follow that these habits are ethically benign. Moreover, users are still caught in the tension between their personal motivations for using SNS and the profit-driven motivations of the corporations that possess their data (Baym 2011, Vaidhyanathan 2018). Jaron Lanier frames the point cynically when he states: “The only hope for social networking sites from a business point of view is for a magic formula to appear in which some method of violating privacy and dignity becomes acceptable” (Lanier 2010).

Scholars also note the way in which SNS architectures are often structurally insensitive to the granularity of human sociality (Hull, Lipford & Latulipe 2011). That is, such architectures tend to treat human relations as if they are all of a kind, ignoring the profound differences among types of social relation (familial, professional, collegial, commercial, civic, etc.). As a consequence, the privacy controls of such architectures often flatten the variability of privacy norms within different but overlapping social spheres. Among philosophical accounts of privacy, Nissenbaum’s (2010) view of contextual integrity has seemed to many to be particularly well suited to explaining the diversity and complexity of privacy expectations generated by new social media (see for example Grodzinsky and Tavani 2010; Capurro 2011). Contextual integrity demands that our information practices respect context-sensitive privacy norms, where ‘context’ refers not to the overly coarse distinction between ‘private’ and ‘public,’ but to a far richer array of social settings characterized by distinctive roles, norms and values. For example, the same piece of information made ‘public’ in the context of a status update to family and friends on Facebook may nevertheless be considered by the same discloser to be ‘private’ in other contexts; that is, she may not expect that same information to be provided to strangers Googling her name, or to bank employees examining her credit history.
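
To make the structure of contextual integrity more concrete, the following minimal sketch (in Python, using entirely hypothetical contexts, roles and norms rather than any real SNS policy) illustrates how the permissibility of one and the same piece of information can vary with the social context and the recipient’s role, rather than with a binary public/private label:

```python
# Illustrative sketch only: a toy model of Nissenbaum-style contextual integrity.
# The contexts, roles and norms below are hypothetical, not drawn from any real SNS.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    subject: str      # whom the information is about
    sender: str       # who transmits it
    recipient: str    # the role of the party receiving it
    info_type: str    # e.g., "vacation photo", "credit history"
    context: str      # the social setting of the flow, e.g., "family", "banking"

# Context-relative norms: which information types may flow to which recipient roles.
NORMS = {
    "family":  {"vacation photo": {"friend", "family member"}},
    "banking": {"credit history": {"loan officer"}},
}

def respects_contextual_integrity(flow):
    """A flow is permissible only if the norms of its context allow this
    information type to reach this kind of recipient."""
    allowed_recipients = NORMS.get(flow.context, {}).get(flow.info_type, set())
    return flow.recipient in allowed_recipients

# The same vacation photo is appropriately shared with a friend in the family context...
print(respects_contextual_integrity(
    Flow("Alice", "Alice", "friend", "vacation photo", "family")))               # True
# ...but violates contextual integrity when routed to a loan officer judging her credit.
print(respects_contextual_integrity(
    Flow("Alice", "data broker", "loan officer", "vacation photo", "banking")))  # False
```

On this toy model, the photo that is ‘publicly’ shared within one context still generates a privacy violation when it crosses into another context whose roles and norms differ, which is precisely the richness that a simple private/public toggle cannot capture.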

On the design side, such complexity means that attempts to produce more ‘user-friendly’ privacy controls face an uphill challenge—they must balance the need for simplicity and ease of use with the need to better represent the rich and complex structures of our social universes. A key design question, then, is how SNS privacy interfaces can be made more accessible and more socially intuitive for users.

Hull et al. (2011) also take note of the apparent plasticity of user attitudes about privacy in SNS contexts, as evidenced by the pattern of widespread outrage over changed or newly disclosed privacy practices of SNS providers being followed by a period of accommodation to and acceptance of the new practices (Boyd and Hargittai 2010). In their 2018 book Re-Engineering Humanity, Brett Frischmann and Evan Selinger argue that SNS contribute to a slippery slope of “techno-social engineering creep” that produces a gradual normalization of increasingly pervasive and intrusive digital surveillance. A related concern is the “privacy paradox,” in which users’ voluntary sharing of data online belies their own stated values concerning privacy. However, recent data following Apple’s introduction of opt-in ad tracking in iOS 14.5, which the vast majority of iOS users have declined to allow, suggests that most people continue to value and act to protect their privacy when given a straightforward choice that does not inhibit their access to services (Axon 2021). Working from the late writings of Foucault, Hull (2015) has explored the way in which the ‘self-management’ model of online privacy protection embodied in standard ‘notice and consent’ practices only reinforces a narrow neoliberal conception of privacy, and of ourselves, as commodities for sale and exchange. The debate continues about whether privacy violations can be usefully addressed by users making wiser privacy-preserving choices (Véliz 2021), or whether the responsibilization of individuals only obscures the urgent need for radical structural reforms of SNS business models (Vaidhyanathan 2018).

In an early study of online communities, Bakardjieva and Feenberg (2000) suggested that the rise of communities predicated on the open exchange of information may in fact require us to relocate our focus in information ethics from privacy concerns to concerns about alienation; that is, the exploitation of information for purposes not intended by the relevant community. Such considerations give rise to the possibility of users deploying “guerrilla tactics” of misinformation, for example, by providing SNS hosts with false names, addresses, birthdates, hometowns or employment information. Such tactics would aim to subvert the emergence of a new “digital totalitarianism” that uses the power of information rather than physical force as an instrument of political control (Capurro 2011).

Finally, privacy issues with SNS highlight a broader philosophical and structural problem involving the intercultural dimensions of information ethics and the challenges for ethical pluralism in global digital spaces (Ess 2021). Pak Hang Wong (2013) has argued for the need for privacy norms to be contextualized in ways that do not impose a culturally hegemonic Western understanding of why privacy matters; for example, in the Confucian context, it is familial privacy rather than individual privacy that is of greatest moral concern. Rafael Capurro (2005) has also noted the way in which narrowly Western conceptions of privacy occlude other legitimate ethical concerns regarding new media practices. For example, he notes that in addition to Western worries about protecting the private domain from public exposure, we must also take care to protect the public sphere from the excessive intrusion of the private. Though he illustrates the point with a comment about intrusive uses of cell phones in public spaces (2005, 47), the rise of mobile social networking has greatly amplified this concern. When one must compete with Facebook or Twitter for the attention of not only one’s dinner companions and family members, but also one’s fellow drivers, pedestrians, students, moviegoers, patients and audience members, the integrity of the public sphere comes to look as fragile as that of the private.

3.2 The Ethics of Identity and Community on Social Networking Services

Social networking technologies open up a new type of ethical space in which personal identities and communities, both ‘real’ and virtual, are constructed, presented, negotiated, managed and performed. Accordingly, philosophers have analyzed SNS both in terms of their uses as Foucaultian “technologies of the self” (Bakardjieva and Gaden 2012) that facilitate the construction and performance of personal identity, and in terms of the distinctive kinds of communal norms and moral practices generated by SNS (Parsell 2008).

The ethical and metaphysical issues generated by the formation of virtual identities and communities have attracted much philosophical interest (see Introna 2011 and Rodogno 2012). Yet as noted by Patrick Stokes (2012), unlike earlier forms of online community in which anonymity and the construction of alter-egos were typical, SNS such as Facebook increasingly anchor member identities and connections to real, embodied selves and offline ‘real-world’ networks. Yet SNS still enable users to directly manage their self-presentation and their social networks in ways that offline social spaces at home, school or work often do not permit. The result, then, is an identity grounded in the person’s material reality and embodiment but more explicitly “reflective and aspirational” (Stokes 2012, 365) in its presentation, a phenomenon encapsulated in social media platforms such as Instagram. This raises a number of ethical questions: first, from what source of normative guidance or value does the aspirational content of an SNS user’s identity primarily derive? Do identity performances on SNS generally represent the same aspirations and reflect the same value profiles as users’ offline identity performances? Do they display any notable differences from the aspirational identities of non-SNS users? Are the values and aspirations made explicit in SNS contexts more or less heteronomous in origin than those expressed in non-SNS contexts? Do the more explicitly aspirational identity performances on SNS encourage users to take steps to actually embody those aspirations offline, or do they tend to weaken the motivation to do so?

A further SNS phenomenon of relevance here is the persistence and communal memorialization of Facebook profiles after the user’s death; not only does this reinvigorate a number of classical ethical questions about our ethical duties to honor and remember the dead, it also renews questions about whether our moral identities can persist after our embodied identities expire, and whether the dead have ongoing interests in their social presence or reputation (Stokes 2012).

Mitch Parsell (2008) raised early concerns about the unique temptations of ‘narrowcast’ social networking communities that are “composed of those just like yourself, whatever your opinion, personality or prejudices.” (41) Such worries about ‘echo chambers’ and ‘filter bubbles’ have only become more acute as political polarization continues to dominate online culture. Among the structural affordances of SNS is a tendency to constrict our identities to a closed set of communal norms that perpetuate increased polarization, prejudice and insularity. Parsell admitted that in theory the many-to-many or one-to-many relations enabled by SNS allow for exposure to a greater variety of opinions and attitudes, but in practice they often have the opposite effect. Building from de Laat (2006), who suggests that members of virtual communities embrace a distinctly hyperactive style of communication to compensate for diminished informational cues, Parsell claimed that in the absence of the full range of personal identifiers evident through face-to-face contact, SNS may also indirectly promote the deindividuation of personal identity by exaggerating and reinforcing the significance of singular shared traits (liberal, conservative, gay, Catholic, etc.) that lead us to see ourselves and our SNS contacts more as representatives of a group than as unique persons (2008, 46).

Parsell also noted the existence of inherently pernicious identities and communities that may be enabled or enhanced by SNS tools—he cites the example of apotemnophiliacs, or would-be amputees, who use such resources to create mutually supportive networks in which their self-destructive desires receive validation (2008, 48). Related concerns have been raised about “Pro-ANA” sites that provide mutually supportive networks for anorexics seeking information and tools to allow them to perpetuate disordered and self-harming identities (Giles 2006; Manders-Huits 2010).

Restraint of such affordances necessarily comes at some cost to user autonomy—a value that in other circumstances is critical to respecting the ethical demands of identity, as noted by Noemi Manders-Huits (2010). Manders-Huits explores the tension between the way in which SNS treat users as profiled and forensically reidentifiable “objects of (algorithmic) computation” (2010, 52) while at the same time offering those users an attractive space for ongoing identity construction. She argues that SNS developers have a duty to protect and promote the interests of their users in autonomously constructing and managing their own moral and practical identities. This autonomy exists in some tension with widespread but still crude practices of automated SNS content moderation, which seek to preserve a ‘safe’ space for expression yet may disproportionately suppress marginalized identities (Gillespie 2020).

The ethical concern about SNS constraints on user autonomy is also voiced by Bakardjieva and Gaden (2012) who note that whether they wish their identities to be formed and used in this manner or not, the online selves of SNS users are constituted by the categories established by SNS developers, and ranked and evaluated according to the currency which primarily drives the narrow “moral economy” of SNS communities: popularity (2012, 410). They note, however, that users are not rendered wholly powerless by this schema; users retain, and many exercise, “the liberty to make informed choices and negotiate the terms of their self-constitution and interaction with others,” (2012, 411) whether by employing means to resist the “commercial imperatives” of SNS sites (ibid.) or by deliberately restricting the scope and extent of their personal SNS practices.

SNS can also enable authenticity in important ways. While a ‘Timeline’ feature that displays my entire online personal history for all my friends to see can prompt me to ‘edit’ my past, it can also prompt me to face up to and assimilate into my self-conception thoughts and actions that might otherwise be conveniently forgotten. The messy collision of my family, friends and coworkers on Facebook can be managed with various tools offered by the site, allowing me to direct posts only to specific sub-networks that I define. But the far simpler and less time-consuming strategy is to come to terms with the collision—allowing each network member to get a glimpse of who I am to others, while at the same time asking myself whether these expanded presentations project a person that is more multidimensional and interesting, or one that is manifestly insincere. As Tamara Wandel and Anthony Beavers put it:

I am thus no longer radically free to engage in creating a completely fictive self, I must become someone real, not who I really am pregiven from the start, but who I am allowed to be and what I am able to negotiate in the careful dynamic between who I want to be and who my friends from these multiple constituencies perceive me, allow me, and need me to be. (2011, 93)

Even so, Dean Cocking (2008) has argued that many online social environments, by amplifying active aspects of self-presentation under our direct control, compromise the important function of passive modes of embodied self-presentation beyond our conscious control, such as body language, facial expression, and spontaneous displays of emotion (130). He regards these as important indicators of character that play a critical role in how others see us, and by extension, how we come to understand ourselves through others’ perceptions and reactions. If Cocking’s view is correct, then SNS that privilege text-based and asynchronous communications may hamper our ability to cultivate and express authentic identities. The subsequent rise in popularity of video and livestream SNS services such as YouTube, TikTok, Steam and Twitch might therefore be seen as enabling greater authenticity in self-presentation. Yet in reality, the algorithmic and profit incentives of these platforms have been seen to reward distorted patterns of expression: compulsive, ‘always performing’ norms that are reported to contribute to burnout and breakdown by content creators (Parkin 2018).

Ethical preoccupations with the impact of SNS on our authentic self-constitution and representation may be assuming a false dichotomy between online and offline identities; the informational theory of personal identity offered by Luciano Floridi (2011) problematizes this distinction. Soraj Hongladarom (2011) employs such an informational metaphysic to deny that any clear boundary can be drawn between our offline selves and our selves as cultivated through SNS. Instead, our personal identities online and off are taken as externally constituted by our informational relations to other selves, events and objects.

Likewise, Charles Ess makes a link between relational models of the self found in Aristotle, Confucius and many contemporary feminist thinkers and emerging notions of the networked individual as a “smeared-out self” (2010, 111) constituted by a shifting web of embodied and informational relations. Ess points out that by undermining the atomic and dualistic model of the self upon which Western liberal democracies are founded, this new conception of the self forces us to reassess traditional philosophical approaches to ethical concerns about privacy and autonomy—and may even promote the emergence of a much-needed “global information ethics” (2010, 112). Yet he worries that our ‘smeared-out selves’ may lose coherence as the relations that constitute us are increasingly multiplied and scattered among a vast and expanding web of networked channels. Can such selves retain the capacities of critical rationality required for the exercise of liberal democracy, or will our networked selves increasingly be characterized by political and intellectual passivity, hampered in self-governance by “shorter attention spans and less capacity to engage with critical argument” (2010, 114)? Ess suggests that we hope for, and work to enable the emergence of, ‘hybrid selves’ that cultivate the individual moral and practical virtues needed to flourish within our networked and embodied relations (2010, 116).

3.3 Friendship, Virtue and the Good Life on Social Networking Services

SNS can facilitate many types of relational connections: LinkedIn encourages social relations organized around our professional lives, Twitter is useful for creating lines of communication between ordinary individuals and figures of public interest, MySpace was for a time a popular way for musicians to promote themselves and communicate with their fans, and Facebook, which began as a way to link university cohorts and now connects people across the globe, also hosts business profiles aimed at establishing links to existing and future customers. Yet the overarching relational concept in the SNS universe has been, and continues to be, the ‘friend,’ as underscored by the now-common use of this term as a verb to refer to acts of instigating or confirming relationships on SNS.

This appropriation and expansion of the concept ‘friend’ by SNS has provoked a great deal of scholarly interest from philosophers and social scientists, more so than any other ethical concern except perhaps privacy. Early concerns about SNS friendship centered on the expectation that such sites would be used primarily to build ‘virtual’ friendships between physically separated individuals lacking a ‘real-world’ or ‘face-to-face’ connection. This perception was an understandable extrapolation from earlier patterns of Internet sociality, patterns that had prompted philosophical worries about whether online friendships could ever be ‘as good as the real thing’ or were doomed to be pale substitutes for embodied ‘face to face’ connections (Cocking and Matthews 2000). This view was robustly opposed by Adam Briggle (2008), who claimed that online friendships might enjoy certain unique advantages. For example, Briggle asserted that friendships formed online might be more candid than offline ones, thanks to the sense of security provided by physical distance (2008, 75). He also noted the way in which asynchronous written communications can promote more deliberate and thoughtful exchanges (2008, 77).

These sorts of questions about how online friendships measure up to offline ones, along with questions about whether or to what extent online friendships encroach upon users’ commitments to embodied, ‘real-world’ relations with friends, family members and communities, defined the ethical problem-space of online friendship as SNS began to emerge. But it did not take long for empirical studies of actual SNS usage trends to force a profound rethinking of this problem-space. Within five years of Facebook’s launch, it was evident that a significant majority of SNS users were relying on these sites primarily to maintain and enhance relationships with those with whom they also had a strong offline connection—including close family members, high-school and college friends and co-workers (Ellison, Steinfield and Lampe 2007; Ito et al. 2009; Smith 2011). Nor are SNS used to facilitate purely online exchanges—many SNS users today rely on the sites’ functionalities to organize everything from cocktail parties to movie nights, outings to athletic or cultural events, family reunions and community meetings. Mobile SNS applications amplify this type of functionality further, by enabling friends to locate one another in their community in real time, making possible spontaneous meetings at restaurants, bars and shops that would otherwise happen only by coincidence.

Yet lingering ethical concerns remain about the way in which SNS can distract users from the needs of those in their immediate physical surroundings (consider the widely lamented trend of users obsessively checking their social media feeds during family dinners, business meetings, romantic dates and symphony performances). Such phenomena, which scholars like Sherry Turkle (2011, 2015) continue to worry are indicative of a growing cultural tolerance for being ‘alone together,’ bring a new complexity to earlier philosophical concerns about the emergence of a zero-sum game between offline relationships and their virtual SNS competitors. They have also prompted a shift of ethical focus away from the question of whether online relationships are “real” friendships (Cocking and Matthews 2000), to how well the real friendships we bring to SNS are being served there (Vallor 2012). The debate over the value and quality of online friendships continues (Sharp 2012; Froding and Peterson 2012; Elder 2014; Turp 2020; Kristjánsson 2021); in large part because the typical pattern of those friendships, like most social networking phenomena, continues to evolve.

Such concerns intersect with broader philosophical questions about whether and how the classical ethical ideal of ‘the good life’ can be engaged in the 21st century. Pak-Hang Wong claims that this question requires us to broaden the standard approach to information ethics from a narrow focus on the “right/the just” (2010, 29) that defines ethical action negatively (e.g., in terms of violations of privacy, copyright, etc.) to a framework that conceives of a positive ethical trajectory for our technological choices; for example, the ethical opportunity to foster compassionate and caring communities, or to create an environmentally sustainable economic order. Edward Spence (2011) further suggests that to adequately address the significance of SNS and related information and communication technologies for the good life, we must also expand the scope of philosophical inquiry beyond its present concern with narrowly interpersonal ethics to the more universal ethical question of prudential wisdom. Do SNS and related technologies help us to cultivate the broader intellectual virtue of knowing what it is to live well, and how to best pursue it? Or do they tend to impede its development?

This concern about prudential wisdom and the good life is part of a growing philosophical interest in using the resources of classical and contemporary virtue ethics to evaluate the impact of SNS and related technologies (Vallor 2016, 2010; Wong 2012; Ess 2008). This program of research promotes inquiry into the impact of SNS not merely on the cultivation of prudential virtue, but on the development of a host of other moral and communicative virtues, such as honesty, patience, justice, loyalty, benevolence and empathy.

3.4 Democracy, Freedom and Social Networking Services in the Public Sphere

As is the case with privacy, identity, community and friendship on SNS, ethical debates about the impact of SNS on civil discourse, freedom and democracy in the public sphere must be seen as extensions of a broader discussion about the political implications of the Internet, one that predates Web 2.0 standards. Much of the literature on this subject focuses on the question of whether the Internet encourages or hampers the free exercise of deliberative public reason, in a manner informed by Jürgen Habermas’s (1992/1998) account of discourse ethics and deliberative democracy in the public sphere (Ess 1996 and 2005b; Dahlberg 2001; Bohman 2008). A related topic of concern is SNS fragmentation of the public sphere by encouraging the formation of ‘echo chambers’ and ‘filter bubbles’: informational silos for like-minded individuals who deliberately shield themselves from exposure to alternative views. Early worries that such insularity would promote extremism and the reinforcement of ill-founded opinions, while also preventing citizens of a democracy from recognizing their shared interests and experiences (Sunstein 2008), have unfortunately proven to be well-founded (as noted in section 2.3.2). Early optimism that SNS would facilitate popular revolutions resulting in the overthrow of authoritarian regimes (Marturano 2011; Frick and Oberprantacher 2011) has likewise given way to the darker reality that SNS are perhaps even more easily used as tools to popularize authoritarian and totalitarian movements, or foster genocidal impulses, as in the use of Facebook to drive violence against the Rohingya minority in Myanmar (BBC 2018).

When SNS in particular are considered in light of these questions, some distinctive considerations arise. First, sites like Facebook and Twitter (as opposed to narrower SNS utilities such as LinkedIn) facilitate the sharing of, and exposure to, an extremely diverse range of types of discourse. On any given day on Facebook a user may encounter in her NewsFeed a link to an article in a respected political magazine followed by a video of a cat in a silly costume, followed by a link to a new scientific study, followed by a lengthy status update someone has posted about their lunch, followed by a photo of a popular political figure overlaid with a clever and subversive caption. Vacation photos are mixed in with political rants, invitations to cultural events, birthday reminders and data-driven graphs created to undermine common political, moral or economic beliefs. Thus while a user has a tremendous amount of liberty to choose which forms of discourse to pay closer attention to, and tools with which to hide or prioritize the posts of certain members of her network, the sheer diversity of the private and public concerns of her fellows would seem to offer at least some measure of protection against the extreme insularity and fragmentation of discourse that is incompatible with the public sphere.

Yet in practice, the function of hidden platform algorithms can defeat this diversity. Trained on user behavior to optimize for engagement and other metrics that advertisers and platform companies associate with their profit, these algorithms can ensure that I experience only a pale shadow of the true diversity of my social network, seeing at the top of my feed only those posts that I am most likely to find subjectively rewarding to engage with. If, for example, I support the Black Lives Matter movement, and tend to close the app in frustration and disappointment whenever I see BLM denigrated by someone I consider a friend, the platform algorithm can easily learn this association and optimize my experience for one that is more conducive to retaining my presence. It is important to note, however, that in this case the effect is an interaction between the algorithm and my own behavior. How much responsibility for echo chambers and resulting polarization or insularity falls upon users, and how much on the designers of algorithms that track and amplify our expressed preferences?
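
As a rough illustration of the feedback loop described above, the minimal sketch below (hypothetical Python, not any platform’s actual ranking code) shows how a ranker that merely learns from a user’s past engagement signals will come to demote content the user reacts to negatively, narrowing what the user sees over time:

```python
# Illustrative sketch only: a naive engagement-optimizing feed ranker.
# The topic labels, scoring rule and behavior are hypothetical; real platform
# ranking systems are far more complex, but the feedback loop is structurally similar.
from collections import defaultdict

class NaiveEngagementRanker:
    def __init__(self):
        # Running engagement score per topic, learned from observed user behavior.
        self.topic_score = defaultdict(float)

    def record_interaction(self, topic, engaged):
        """Reward topics the user lingers on or interacts with; penalize topics
        that immediately precede the user closing the app in frustration."""
        self.topic_score[topic] += 1.0 if engaged else -1.0

    def rank_feed(self, posts):
        """Sort candidate (topic, text) posts by learned engagement score,
        so topics the user finds discomforting sink out of view."""
        return sorted(posts, key=lambda post: self.topic_score[post[0]], reverse=True)

ranker = NaiveEngagementRanker()
# The user repeatedly disengages after seeing posts hostile to a cause they support,
# and engages with posts supportive of it.
for _ in range(3):
    ranker.record_interaction("hostile-to-cause", engaged=False)
    ranker.record_interaction("supportive-of-cause", engaged=True)

candidate_posts = [("hostile-to-cause", "dismissive post"),
                   ("supportive-of-cause", "solidarity post")]
# The ranker now surfaces first whatever the user finds most rewarding to engage with.
print(ranker.rank_feed(candidate_posts))
```

Even in this toy version, the narrowing of the feed emerges from the interaction between the user’s own behavior and the optimization objective chosen by the platform, which is precisely the crux of the question about how responsibility for echo chambers should be apportioned.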

Philosophers of technology often speak of the affordances or gradients of particular technologies in given contexts (Vallor 2010) insofar as they make certain patterns of use more attractive or convenient for users (while not rendering alternative patterns impossible). Thus while I can certainly seek out posts that will cause me discomfort or anxiety, the platform gradient will not be designed to facilitate such experiences. Yet it is not obvious if or when it should be designed to do so. As Alexis Elder notes (2020), civic discourse on social media can be furthered rather than inhibited by prudent use of tools enabling disconnection. Additionally, a platform affordance that makes a violent white supremacist feel accepted, valued, safe and respected in their social milieu (precisely for their expressed attitudes and beliefs in white supremacist violence) facilitates harm to others, in a way that a platform affordance that makes an autistic person or a transgender woman feel accepted, valued, safe and respected for who they are, does not. Fairness and equity in SNS platform design do not entail neutrality. Ethics explicitly demands non-neutrality between harm and nonharm, between justice and injustice. But ethics also requires epistemic anchoring in reality. Thus even if my own attitudes and beliefs harm no one, I may still have a normative epistemic duty to avoid the comfort of a filter bubble. Do SNS platforms have a duty to keep their algorithms from helping me into one? In truth, those whose identities are historically marginalized will rarely have the luxury of the filter bubble option; online and offline worlds consistently offer stark reminders of their marginalization. So how do SNS designers, users, and regulators mitigate the deleterious political and epistemic effects of filter bubble phenomena without making platforms more inhospitable to vulnerable groups than they already are?

One must also ask whether SNS can skirt the dangers of a plebiscite model of democratic discourse, in which minority voices are dispersed and drowned out by the many. Certainly, compared to the ‘one-to-many’ channels of communication favored by traditional media, SNS facilitate a ‘many-to-many’ model of communication that appears to lower the barriers to participation in civic discourse for everyone, including the marginalized. However, SNS lack the institutional structures necessary to ensure that minoritized voices enjoy not only free, but substantively equal access to the deliberative function of the public sphere.

We must also consider the quality of informational exchanges on SNS and the extent to which they promote a genuinely dialogical and deliberative public sphere marked by the exercise of critical rationality. SNS norms tend to privilege brevity and immediate impact over substance and depth in communication; Vallor (2012) suggests that this bodes poorly for the cultivation of those communicative virtues essential to a flourishing public sphere. This worry is only reinforced by empirical data suggesting that SNS perpetuate the ‘Spiral of Silence’ phenomenon that results in the passive suppression of divergent views on matters of important political or civic concern (Hampton et al. 2014). In a related critique, Frick and Oberprantacher (2011) claim that the ability of SNS to facilitate public ‘sharing’ can obscure the deep ambiguity between sharing as “a promising, active participatory process” and “interpassive, disjointed acts of having trivia shared” (2011, 22).

There remains a notable gap online between the prevalence of democratic discourse and debate—which require only the open voicing of opinions and reasons, respectively—and the relative absence of democratic deliberation, which requires the joint exercise of collective intentions, cooperation and compromise as well as a shared sense of reality on which to act. The greatest moral challenges of our time—responding to the climate change crisis, developing sustainable patterns of economic and social life, managing global threats to public health—aren’t going to be solved by ideological warfare but by the deliberative, coordinated exercise of public wisdom. Today’s social media platforms excel at cultivating the former; they do far less to support the latter.

Another vital issue for online democracy concerns the contentious debate about the extent to which controversial or unpopular speech on social media platforms ought to be tolerated or punished by private actors, especially when the consequences manifest in traditional offline contexts and spaces such as the university. For example, the norms of academic freedom in the U.S. were greatly destabilized by the ‘Salaita Affair’ (in which a tenured job offer by the University of Illinois at Urbana-Champaign to Steven Salaita was withdrawn on the basis of his tweets criticizing Israel) and several other cases in which academics were censured or otherwise punished by their institutions as a result of their controversial social media posts (Protevi 2018). Yet how should we treat a post by a professor that expresses a desire to sleep with their students, or doubts about the intelligence of women, or about the integrity of students of a particular nationality? It remains to be seen what equilibrium can be found between moral accountability and free expression in communities increasingly mediated by SNS communications. A related debate concerns the ethical and social value of the kind of social media acts of moral policing frequently derided as insincere or performative ‘virtue signaling.’ To what extent are social media platforms a viable stage for moral performances, and are such performances merely performative? Are they inherently ‘grandstanding’ abuses of moral discourse (Tosi and Warmke 2020), or can they in fact be positive forces for social progress and reform (Levy 2020; Westra 2021)?

It also remains to be seen to what extent civic discourse and activism on SNS will continue to be manipulated or compromised by the commercial interests that currently own and manage the technical infrastructure. This concern is driven by the growing economic and political influence of companies in the technology sector, what Luciano Floridi (2015b) calls ‘grey power,’ and the potentially disenfranchising and disempowering effects of an economic model in which most users play a passive role (Floridi 2015a). Indeed, the relationship between social media users and service providers has become increasingly contentious, as users struggle to demand more privacy, better data security and more effective protections from online harassment in an economic context where they have little or no direct bargaining power (Zuboff 2019).

This imbalance was powerfully illustrated by the revelation in 2014 that Facebook researchers had quietly conducted psychological experiments on users without their knowledge, manipulating their moods by altering the balance of positive or negative items in their News Feeds (Goel 2014). The study added yet another dimension to existing concerns about the ethics and validity of social science research that relies on SNS-generated data (Buchanan and Zimmer 2012), concerns that drive an increasingly vital and contested area of research ethics (Woodfield 2018, franzke et al. 2020).

Ironically, in the power struggle between users and SNS providers, social networking platforms themselves have become the primary battlefield, where users vent their collective outrage in an attempt to force service providers into responding to their demands. The results are sometimes positive, as when Twitter users, after years of complaining, finally shamed the company in 2015 into providing better reporting tools for online harassment. Yet by its nature the process is chaotic and often controversial, as when later that year, Reddit users successfully demanded the ouster of CEO Ellen Pao, under whose leadership Reddit had banned some of its more repugnant ‘subreddit’ forums (such as “Fat People Hate”).

The only clear consensus emerging from the considerations outlined here is that if SNS are going to facilitate any enhancement of a Habermasian public sphere, or the civic virtues and praxes of reasoned discourse that any functioning public sphere must presuppose, then users will have to actively mobilize themselves to exploit such an opportunity (Frick and Oberprantacher 2011). Such mobilization may depend upon resisting the “false sense of activity and accomplishment” (Bar-Tura, 2010, 239) that may come from merely clicking ‘Like’ in response to acts of meaningful political speech, forwarding calls to sign petitions, or simply ‘following’ an outspoken social critic on Twitter whose ‘tweeted’ calls to action are drowned in a tide of corporate announcements, celebrity product endorsements and personal commentaries. Some argue that it will also require the cultivation of new norms and virtues of online civic-mindedness, without which online ‘democracies’ will continue to be subject to the self-destructive and irrational tyrannies of mob behavior (Ess 2010).

3.5 Social Networking Services and Cybercrime

SNS are hosts for a broad spectrum of ‘cybercrimes’ and related direct harms, including but not limited to: cyberbullying/cyberharassment, cyberstalking, child exploitation, cyberextortion, cyberfraud, illegal surveillance, identity theft, intellectual property/copyright violations, cyberespionage, cybersabotage and cyberterrorism. Each of these forms of criminal or antisocial behavior has a history that well pre-dates Web 2.0 standards, and philosophers have tended to leave the specific correlations between cybercrime and SNS as an empirical matter for social scientists, law enforcement and Internet security firms to investigate. Nevertheless, cybercrime is an enduring topic of philosophical interest for the broader field of computer ethics, and the migration to and evolution of such crime on SNS platforms raises new and distinctive ethical issues.

Among those of great ethical importance is the question of how SNS providers ought to respond to government demands for user data for investigative or counterterrorism purposes. SNS providers are caught between the public interest in crime prevention and their need to preserve the trust and loyalty of their users, many of whom view governments as overreaching in their attempts to secure records of online activity. Many companies have opted to favor user security by employing end-to-end encryption of SNS exchanges, much to the chagrin of government agencies who insist upon ‘backdoor’ access to user data in the interests of public safety and national security.
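
The technical crux here is that with end-to-end encryption only the communicating endpoints hold the decryption keys, so the provider that relays and stores the messages has nothing intelligible to surrender. The following minimal sketch (assuming the third-party Python cryptography package and a shared key already exchanged between the two users; real messaging protocols such as Signal's are far more elaborate) illustrates why a data request served on the provider yields only ciphertext:

    # Minimal illustration of the end-to-end idea: the relay never sees plaintext.
    # Assumes the 'cryptography' package is installed; key exchange is out of scope.
    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()   # held only by the two endpoints
    alice, bob = Fernet(shared_key), Fernet(shared_key)

    server_storage = []                  # everything the provider ever holds

    ciphertext = alice.encrypt(b"meet at the usual place")
    server_storage.append(ciphertext)    # the provider stores/relays opaque bytes

    print(server_storage[0][:24])          # what a subpoena to the provider yields
    print(bob.decrypt(server_storage[0]))  # only an endpoint with the key can read it

A ‘backdoor’ in this picture would amount to giving the provider (and hence the state) a copy of shared_key, which is precisely what end-to-end designs are meant to preclude.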

A related feature of SNS abuse and cybercrime is the rapidly growing need for content moderation at scale on these platforms. Because automated tools for content moderation remain crude and easily gamed, social media platforms rely on large workforces of low-paid human moderators, who must manually screen countless images of horrific violence and abuse, often suffering grave and lasting psychological harm as a result (Roberts 2019). It is unclear how such harms to the content moderating workforce can be morally justified, even if they help to prevent the spread of such harm to others. The arrangement has uncomfortable echoes of Ursula K. Le Guin’s The Ones Who Walk Away From Omelas; so should platform users be the ones walking away? Or do platforms have an ethical duty to find a morally permissible solution, even if it endangers their business model?
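
To see why purely automated screening remains inadequate, consider a deliberately naive sketch of keyword-based filtering (a hypothetical blocklist, not any platform's actual moderation pipeline): trivial character substitutions defeat it, which is part of why the residual screening burden falls on human moderators.

    # A naive keyword blocklist, easily gamed by simple obfuscation.
    BLOCKLIST = {"attack", "kill"}

    def naive_filter(post: str) -> bool:
        """Return True if the post should be flagged for review."""
        return any(word in BLOCKLIST for word in post.lower().split())

    print(naive_filter("we will attack at dawn"))   # True: exact match is caught
    print(naive_filter("we will att4ck at dawn"))   # False: trivially evaded

Production systems use far more sophisticated classifiers, but adversarial evasion of the same general kind persists, as do the hard contextual judgment calls that machines handle poorly (see Gillespie 2020 on the question of scale).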

Another emerging ethical concern is the increasingly political character of cyberharassment and cyberstalking. In the U.S., women who spoke out about the lack of diversity in the tech and videogame industries were early targets of online harassment campaigns such as 2014’s ‘Gamergate’ (Salter 2017), in which some victims were forced to cancel speaking appearances or leave their homes due to physical threats after their addresses and other personal information were posted on social media (a practice known as ‘doxing’ or ‘doxxing’). More recently, journalists have been doxed and subjected to violent threats, sometimes following accusations that their reporting itself constituted doxing (Wilson 2018).

Doxing presents complex ethical challenges (Douglas 2016). For victims of doxing and associated cyberthreats, traditional law enforcement bodies offer scant protection, as these agencies are often ill-equipped to police the blurry boundary between online and physical harms. Moreover, it’s not always clear what distinguishes immoral doxing from justified social opprobrium. If someone records a woman spitting racial epithets in a passerby’s face, or a man denying a disabled person service in a restaurant, and the victim or an observer posts the video online in a manner that allows the perpetrator to be identified by others in their social network, is that unethical shaming or just deserts? What’s the difference between posting someone’s home address, allowing them and their family to be terrorized by a mob, and posting someone’s workplace so that their employer can consider their conduct? Cases such as these are adjudicated by ad hoc social media juries weekly. Sometimes legal consequences do follow, as in the case of the notorious Amy Cooper, who in 2020 was charged with filing a false police report after being filmed by a Black man whom she had falsely accused of threatening her in Central Park. Are doxing and other modes of social media shaming legitimate tools of justice? Or are they indications of the dangers of unregulated moral policing? And if the answer is ‘both,’ or ‘it depends,’ then what are the key moral distinctions that allow us to respond appropriately to this new practice?

4. Social Networking Services and Metaethical Issues

A host of metaethical questions are raised by the rapid emergence of SNS. For example, SNS lend new data to an earlier philosophical debate (Tavani 2005; Moor 2008) about whether classical ethical traditions such as utilitarianism, Kantian ethics or virtue ethics possess sufficient resources for illuminating the implications of emerging information technology for moral values, or whether we require a new ethical framework to handle such phenomena. Charles Ess (2006, 2021) has suggested that a new, pluralistic “global information ethics” may be the appropriate context from which to view novel information technologies. Other scholars have suggested that technologies such as SNS invite renewed attention to existing ethical approaches such as pragmatism (van den Eede 2010), virtue ethics (Vallor 2016), and feminist or care ethics (Hamington 2010; Puotinen 2011) that have often been neglected by applied ethicists in favor of conventional utilitarian and deontological resources.

A related metaethical project relevant to SNS is the development of an explicitly intercultural information ethics (Ess 2005a; Capurro 2008; Hongladarom and Britz 2010). SNS and other emerging information technologies do not reliably confine themselves to national or cultural boundaries, and this creates a particular challenge for applied ethicists. For example, SNS practices in different countries must be analyzed against a conceptual background that recognizes and accommodates complex differences in moral norms and practices (Capurro 2005; Hongladarom 2007; Wong 2013). SNS phenomena that one might expect to benefit from intercultural analysis include: varied cultural patterns of, and tolerances for, affective display, argument and debate, personal exposure, expressions of political, interfamilial or cultural criticism, religious expression, and sharing of intellectual property. Alternatively, the very possibility of a coherent information ethics may come under challenge, for example, from a constructivist view that emerging socio-technological practices like SNS continually redefine ethical norms—such that our analyses of SNS and related technologies are not only doomed to operate from shifting ground, but from ground that is being shifted by the intended object of our ethical analysis.

Finally, there are pressing practical concerns about whether and how philosophers can actually have an impact on the ethical profile of emerging technologies such as SNS. If philosophers direct their ethical analyses only to other philosophers, then such analyses may function simply as ethical postmortems of human-technology relations, with no opportunity to actually pre-empt, reform or redirect unethical technological practices. But to whom else can, or should, these ethical concerns be directed: SNS users? Regulatory bodies and political institutions? SNS software developers? How can the theoretical content and practical import of these analyses be made accessible to these varied audiences? What motivating force are they likely to have?

These questions have become particularly acute of late with the controversy over alleged corporate capture by technology companies of the language of ethics, and associated charges of ‘ethics-washing’ (Green 2021 [Other Internet Resources], Bietti 2020). Some argue that ethics is the wrong tool to fight the harms of emerging technologies and large technology platforms (Hao 2021); yet alternative proposals to focus on justice, rights, harms, equity or the legitimate use of power unwittingly fall right back within the normative scope of ethics. Unless we resort to a cynical frame of ‘might makes right,’ there is no escaping the need to use ethics to distinguish the relationships with sociotechnical phenomena and powers that we regard as permissible, good, or right, from those that should be resisted and dismantled.

The profound urgency of this task becomes apparent once we recognize that unlike those ‘life or death’ ethical dilemmas with which many applied ethicists are understandably often preoccupied (e.g., abortion, euthanasia and capital punishment), emerging information technologies such as SNS have in a very short time worked themselves into the daily moral fabric of virtually all of our lives, transforming the social landscape and the moral habits and practices with which we navigate it. The ethical concerns illuminated here are, in a very real sense, anything but ‘academic,’ and neither philosophers nor the broader human community can afford the luxury of treating them as such.

Bibliography

  • Axon, S., 2021, “96% of US Users Opt-Out of App Tracking in iOS 14.5, Analytics Find,” Ars Technica, May 7, 2021. [Axon 2021 available online]
  • Bakardjieva, M. and A. Feenberg, 2000, “Involving the Virtual Subject,” Ethics and Information Technology, 2(4): 233–240.
  • Bakardjieva, M. and G. Gaden, 2012, “Web 2.0 Technologies of the Self,” Philosophy and Technology, 25(3): 399–413.
  • Bar-Tura, A., 2010, “Wall-to-Wall or Face-to-Face,” in Facebook and Philosophy, D.E. Wittkower (ed.), Chicago: Open Court, pp. 231–239.
  • Barnes, S.B., 2001, Online Connections: Internet Interpersonal Relationships, Cresskill, NJ: Hampton Press.
  • Baym, N.K., 2011, “Social Networks 2.0,” in The Handbook of Internet Studies, M. Consalvo and C. Ess (eds.), Oxford: Wiley-Blackwell, pp. 384-405.
  • Benjamin, R., 2019, Race After Technology: Abolitionist Tools for the New Jim Code, New York: Polity.
  • BBC, 2018, “Facebook Admits it was Used to ‘Incite Offline Violence’ in Myanmar,” November 6, 2018. [available online]
  • Bietti, E., 2020, “From Ethics Washing to Ethics Bashing: A View on Tech Ethics From Within Moral Philosophy,” Proceedings of the 2020 Conference on Fairness, Accountability and Transparency, New York: Association for Computing Machinery, pp. 210–219.
  • Bohman, J., 2008, “The Transformation of the Public Sphere: Political Authority, Communicative Freedom and Internet Publics,” in Information Technology and Moral Philosophy, J. van den Hoven and J. Weckert (eds.), Cambridge UK: Cambridge University Press, pp. 66–92.
  • Borgmann, A., 1984, Technology and the Character of Contemporary Life, Chicago: University of Chicago Press.
  • –––, 1992, Crossing the Postmodern Divide, Chicago: University of Chicago Press.
  • Boyd, D., 2007, “Why Youth (Heart) Social Networking Sites: The Role of Networked Publics in Teenage Social Life,” in Youth, Identity and Social Media, D. Buckingham (Ed.), Cambridge MA: MIT Press, pp. 119–142.
  • Boyd, D. and E. Hargittai, 2010, “Facebook Privacy Settings: Who Cares?” First Monday, 15(8): 13–20.
  • Briggle, A., 2008, “Real Friends: How the Internet can Foster Friendship,” Ethics and Information Technology, 10(1): 71–79.
  • Buchanan, E.A. and M. Zimmer, 2012, “Internet Research Ethics,” The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.), URL=<https://plato.stanford.edu/archives/spr2015/entries/ethics-internet-research/>
  • Bynum, T., 2011, “Computer and Information Ethics,” The Stanford Encyclopedia of Philosophy (Spring 2011 Edition), Edward N. Zalta (ed.), URL=<https://plato.stanford.edu/archives/spr2011/entries/ethics-computer/>
  • Capurro, R., 2005, “Privacy. An Intercultural Perspective,” Ethics and Information Technology, 7(1): 37–47.
  • –––, 2008, “Intercultural Information Ethics,” in Handbook of Information and Computer Ethics, K.E. Himma and H.T. Tavani (eds.), Hoboken, NJ: Wiley and Sons, pp. 639–665.
  • –––, 2011, “Never Enter Your Real Data,” International Review of Information Ethics, 16: 74–78.
  • Cocking, D., 2008, “Plural Selves and Relational Identity,” in Information Technology and Moral Philosophy, J. van den Hoven and J. Weckert (eds.), Cambridge UK: Cambridge University Press, pp. 123–141.
  • Cocking, D. and S. Matthews, 2000, “Unreal Friends,” Ethics and Information Technology, 2(4): 223–231.
  • Consalvo, M. and C. Ess, 2011, The Handbook of Internet Studies, Oxford: Wiley-Blackwell.
  • Dahlberg, L., 2001, “The Internet and Democratic Discourse: Exploring the Prospects of Online Deliberative Forums Extending the Public Sphere,” Information, Communication and Society, 4(4): 615–633.
  • de Laat, P., 2006, “Trusting Virtual Trust,” Ethics and Information Technology, 7(3): 167–180.
  • Dreyfus, H., 2001, On the Internet, New York: Routledge.
  • –––, 2004, “Nihilism on the Information Highway: Anonymity versus Commitment in the Present Age,” in Community in the Digital Age: Philosophy and Practice, A. Feenberg and D. Barney (eds.), Lanham, MD: Rowman & Littlefield, pp. 69–81.
  • Douglas, D.M., 2016, “Doxing: A Conceptual Analysis.” Ethics and Information Technology, 18: 199–210.
  • Elder, A., 2014, “Excellent Online Friendships: An Aristotelian Defense of Social Media,” Ethics and Information Technology, 16(4): 287–297.
  • –––, 2020, “The Interpersonal is Political: Unfriending to Promote Civic Discourse on Social Media,” Ethics and Information Technology, 22: 15–24.
  • Elgesem, D., 1996, “Privacy, Respect for Persons, and Risk,” in Philosophical Perspectives on Computer-Mediated Communication, C. Ess (ed.), Albany, NY: SUNY Press, pp. 45–66.
  • Ellison, N.B., C. Steinfeld, and C. Lampe, 2007, “The Benefits of Facebook ‘Friends’: Social Capital and College Students’ Use of Online Social Network Sites,” Journal of Computer-Mediated Communication, 12(4): article 1.
  • Ess, C., 1996, “The Political Computer: Democracy, CMC and Habermas,” in Philosophical Perspectives on Computer-Mediated Communication, (C. Ess, ed.), Albany, NY: SUNY Press, pp. 197–230.
  • –––, 2005a, “Lost in Translation? Intercultural Dialogues on Privacy and Information Ethics,” Ethics and Information Technology, 7(1): 1–6.
  • –––, 2005b, “Moral Imperatives for Life in an Intercultural Global Village,” in The Impact of the Internet on our Moral Lives, R.J. Cavalier (ed.), Albany NY: SUNY Press, pp. 161–193.
  • –––, 2006, “Ethical Pluralism and Global Information Ethics,” Ethics and Information Technology, 8(4): 215–226.
  • –––, 2010, “The Embodied Self in a Digital Age: Possibilities, Risks and Prospects for a Pluralistic (democratic/liberal) Future?” Nordicom Information, 32(2): 105–118.
  • –––, 2011, “Self, Community and Ethics in Digital Mediatized Worlds,” in Trust and Virtual Worlds: Contemporary Perspectives, C. Ess and M. Thorseth (eds.), Oxford: Peter Lang, pp. vii–xxix.
  • –––, 2021, “Interpretative Pros Hen Pluralism: from computer-mediated colonization to a pluralistic intercultural digital ethics,” Philosophy and Technology, 33(4): 551–569.
  • Feenberg, A., 1999, Questioning Technology, New York: Routledge.
  • Floridi, L., 2011, “The Informational Nature of Personal Identity,” Minds and Machines, 21(4): 549–566.
  • –––, 2015a, “Free Online Services: Enabling, Disenfranchising, Disempowering,” Philosophy and Technology, 28: 163–166.
  • –––, 2015b, “The New Grey Power,” Philosophy and Technology, 28: 329–332.
  • franzke, a. s., A. Bechmann, M. Zimmer, C. Ess, and the Association of Internet Researchers, 2020, Internet Research: Ethical Guidelines 3.0, Association of Internet Researchers. [franzke, et al. 2020 available online]
  • Frick, M. and A. Oberprantacher, 2011, “Shared is Not Yet Sharing, Or: What Makes Social Networking Services Public?” International Review of Information Ethics, 15: 18–23.
  • Frischmann, B. and E. Selinger, 2018, Re-Engineering Humanity, Cambridge: Cambridge University Press.
  • Froding, B. and M. Peterson, 2012, “Why Virtual Friendship is No Genuine Friendship,” Ethics and Information Technology, 14(3): 201–207.
  • Giles, D., 2006, “Constructing Identities in Cyberspace: The Case of Eating Disorders,” British Journal of Social Psychology, 45: 463–477.
  • Gillespie, T., 2020, “Content Moderation, AI, and the Question of Scale,” Big Data and Society, 7(2). [Gillespie 2020 available online]
  • Goel, V., 2014, “Facebook Tinkers with Users’ Emotions in News Feed Experiment, Stirring Outcry,” The New York Times (Technology section), June 29, 2014. [Goel 2014 available online]
  • Grodzinsky, F.S. and H.T. Tavani, 2010, “Applying the ‘Contextual Integrity’ Model of Privacy to Personal Blogs in the Blogosphere,” International Journal of Internet Research Ethics, 3(1): 38–47.
  • Habermas, J., 1992/1998, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, Cambridge, MA: MIT Press.
  • Hamington, M., 2010, “Care Ethics, Friendship and Facebook,” in Facebook and Philosophy, D.E. Wittkower (ed.), Chicago: Open Court, pp. 135–145.
  • Hampton, K., L. Rainie, W. Lu, M. Dwyer, I. Shin, and K. Purcell, 2014, “Social Media and the ‘Spiral of Silence’,” Pew Research Center, Published August 26, 2014, available online.
  • Hao, K., 2021, “Stop Talking About AI Ethics. It’s Time to Talk About Power,” MIT Technology Review, April 23, 2021. [Hao 2021 available online]
  • Heidegger, M., 1954 [1977], The Question Concerning Technology and Other Essays, New York: Harper and Row.
  • Hongladarom, S., 2007, “Analysis and Justification of Privacy from a Buddhist Perspective,” in S. Hongladarom and C. Ess (eds.), Information Technology Ethics: Cultural Perspectives, Hershey, PA: Idea Group, pp. 108–122.
  • –––, 2011, “Personal Identity and the Self in the Online and Offline World,” Minds and Machines, 21(4): 533–548.
  • Hongladarom, S. and J. Britz, 2010, “Intercultural Information Ethics,” International Review of Information Ethics, 13: 2–5.
  • Hull, G., 2015, “Successful Failure: What Foucault Can Teach Us about Privacy Self-Management in a World of Facebook and Big Data,” Ethics and Information Technology, online. doi:10.1007/s10676-015-9363-z
  • Hull, G., H.R. Lipford, and C. Latulipe, 2011, “Contextual Gaps: Privacy Issues on Facebook,” Ethics and Information Technology, 13(4): 289–302.
  • Introna, L., 2011, “Phenomenological Approaches to Ethics and Information Technology,” The Stanford Encyclopedia of Philosophy (Summer 2011 Edition), Edward N. Zalta (ed.), URL=<https://plato.stanford.edu/archives/sum2011/entries/ethics-it-phenomenology/>
  • Ito, M., et al., 2009, Hanging Out, Messing Around, Geeking Out: Living and Learning with New Media, Cambridge, MA: MIT Press.
  • Johnson, Deborah G., 1985, Computer Ethics, Englewood Cliffs, NJ: Prentice Hall.
  • Kidd, I.J., H. Battaly, and Q. Cassam, 2020, Vice Epistemology, New York: Routledge.
  • Kristjánsson, K., 2021, “Online Aristotelian Character Friendship as an Augmented Form of Penpalship,” Philosophy and Technology, 34: 289–307.
  • Lanier, J. 2010, You Are Not a Gadget: A Manifesto, New York: Knopf.
  • Levy, N., 2020, “Virtue Signaling Is Virtuous,” Synthese, published online 16 April 2020. doi:10.1007/s11229-020-02653-9
  • Manders-Huits, N., 2010, “Practical versus Moral Identities in Identity Management,” Ethics and Information Technology, 12(1): 43–55.
  • Marin, L., 2021, “Sharing (Mis)information on Social Networking Sites: An Exploration of the Norms for Distributing Content Authored by Others,” Ethics and Information Technology, published online 02 February 2021. doi:10.1007/s10676-021-09578-y
  • Marturano, A., 2011, “The Ethics of Online Social Networks—An Introduction,” International Review of Information Ethics, 16: 3–5.
  • Miller, M.E., 2021, “Pizzagate’s Violent Legacy,” The Washington Post, February 16, 2021. [Miller 2021 available online]
  • Moor, J., 1985, “What is Computer Ethics?” Metaphilosophy, 16(4): 266–275.
  • –––, 2008, “Why We Need Better Ethics for Emerging Technologies,” in Information Technology and Moral Philosophy, J. van den Hoven and J. Weckert (eds.), Cambridge: UK: Cambridge University Press, pp. 26–39.
  • Nissenbaum, H., 2004, “Privacy as Contextual Integrity,” Washington Law Review, 79(1): 119–157.
  • –––, 2010, Privacy in Context: Technology, Policy, and the Integrity of Social Life, Palo Alto, CA: Stanford University Press.
  • Parkin, S., 2018, “The YouTube Stars Heading for Burnout: ‘The Most Fun Job Imaginable Became Deeply Bleak’”, The Guardian, September 8, 2018. [Parkin 2018 available online]
  • Parsell, M., 2008, “Pernicious Virtual Communities: Identity, Polarisation and the Web 2.0,” Ethics and Information Technology, 10(1): 41–56.
  • Protevi, J., 2018. “Realpolitik and Academic Freedom,” in Academic Freedom, J. Lackey (ed.), Oxford: Oxford University Press, pp. 85–101.
  • Puotinen, S., 2011, “Twitter Cares? Using Twitter to Care About, Care for and Care With Women Who Have Had Abortions,” International Review of Information Ethics, 16: 79–84.
  • Rini, R., 2017, “Fake News and Partisan Epistemology,” Kennedy Institute of Ethics Journal. [Rini 2017 available online]
  • Roberts, S.T., 2019, Behind the Screen: Content Moderation in the Shadows of Social Media, New Haven: Yale University Press.
  • Rodogno, R., 2012, “Personal Identity Online,” Philosophy and Technology, 25(3): 309–328.
  • Salter, M., 2017, Crime, Justice and Social Media, New York: Routledge.
  • Sharp, R., 2012, “The Obstacles Against Reaching the Highest Level of Aristotelian Friendship Online,” Ethics and Information Technology, 14(3): 231–239.
  • Smith, A., 2011, “Why Americans Use Social Media,” Pew Research Center, 15 November 2011, [Smith 2011 available online].
  • Spinello, R.A., 2011, “Privacy and Social Networking Technology,” International Review of Information Ethics, 16: 41–46.
  • Stark, L. and A.L. Hoffmann, 2019, “Data is the New What? Popular Metaphors and Professional Ethics in Emerging Data Cultures,” Journal of Cultural Analytics, 1(1), May 2, 2019. [Stark and Hoffmann 2019 available online]
  • Stokes, P., 2012, “Ghosts in the Machine: Do the Dead Live on in Facebook?,” Philosophy and Technology, 25(3): 363–379.
  • Sunstein, C., 2008, “Democracy and the Internet,” in Information Technology and Moral Philosophy, J. van den Hoven and J. Weckert (eds.), Cambridge UK: Cambridge University Press, pp. 93–110.
  • Tavani, H.T., 2005, “The Impact of the Internet on our Moral Condition: Do we Need a New Framework of Ethics?” in The Impact of the Internet on our Moral Lives, R.J. Cavalier (ed.), Albany, NY: SUNY Press, pp. 215–237.
  • –––, 2007, “Philosophical Theories of Privacy: Implications for an Adequate Online Privacy Policy,” Metaphilosophy, 38(1): 1–22.
  • Tosi, J. and B. Warmke, 2020, Grandstanding: The Use and Abuse of Moral Talk, New York: Oxford University Press.
  • Turkle, S., 1995, Life on the Screen: Identity in the Age of the Internet, New York: Simon and Schuster.
  • –––, 2011, Alone Together: Why we Expect More from Technology and Less from Each Other, New York: Basic Books.
  • –––, 2015, Reclaiming Conversation: The Power of Talk in a Digital Age, New York: Penguin Press.
  • Turp, M.-J., 2020, “Social Media, Interpersonal Relations and the Objective Attitude,” Ethics and Information Technology, 22: 269–279.
  • Vaidhyanathan, S., 2018, Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, New York: Oxford University Press.
  • Vallor, S., 2010, “Social Networking Technology and the Virtues,” Ethics and Information Technology, 12 (2): 157–170.
  • –––, 2012, “Flourishing on Facebook: Virtue Friendship and New Social Media,” Ethics and Information Technology, 14(3): 185–199.
  • –––, 2016, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, New York: Oxford University Press.
  • van den Eede, Y., 2010, “‘Conversation of Mankind’ or ‘Idle Talk’?: A Pragmatist Approach to Social Networking Sites,” Ethics and Information Technology, 12(2): 195–206.
  • Véliz, C., 2021, Privacy is Power, New York: Penguin Press.
  • Verbeek, P., 2005, What Things Do: Philosophical Reflections on Technology, Agency and Design, University Park, PA: Pennsylvania State University Press.
  • Wagner, B., 2021, “Fact Check: Hillary Clinton was not Hanged at Guantanamo Bay,” USA Today, June 17, 2021. [Wagner 2021 available online]
  • Wandel, T. and A. Beavers, 2011, “Playing Around with Identity,” in Facebook and Philosophy, D.E. Wittkower (ed.), Chicago: Open Court, pp. 89–96.
  • Westra, E., 2021, “Virtue Signaling and Moral Progress,” Philosophy and Public Affairs, 49(2). [Westra 2021 available online]
  • Wilson, J., 2018, “Doxxing, assault, death threats: the new dangers facing US journalists covering extremism,” The Guardian, June 14, 2018. [Wilson 2018 available online]
  • Wong, P.H., 2010, “The Good Life in Intercultural Information Ethics: A New Agenda,” International Review of Information Ethics, 13: 26–32.
  • –––, 2012, “Dao, Harmony and Personhood: Towards a Confucian Ethics of Technology,” Philosophy and Technology, 25(1): 67–86.
  • –––, 2013, “Confucian Social Media: An Oxymoron?” Dao, 12: 283–296.
  • Woodfield, K. (ed.), 2018, The Ethics of Online Research, Bingley, UK: Emerald Publishing.
  • Zuboff, S., 2019, The Age of Surveillance Capitalism: The Fight for A Human Future at the New Frontier of Power, New York: Public Affairs.

Copyright © 2021 by
Shannon Vallor <svallor@ed.ac.uk>
