ORIGINAL RESEARCH ARTICLE

License to Heal: Understanding a Healthcare Platform Organization as a Multi-Level Surveillant Assemblage

Handan Vicdan1, Mar Pérezts2* and Asım Fuat Fırat3

1Emlyon Business School, Department of Marketing, Lifestyle Research Center, 23 Avenue Guy de Collongue, Ecully, France

2Emlyon Business School, Department of Social Sciences and Humanities, OCE Research Center, 23 Avenue Guy de Collongue, Ecully, France

3Department of Marketing, Vackar College of Business & Entrepreneurship, The University of Texas – Rio Grande Valley, Edinburg, TX, USA

 

Citation: M@n@gement 2021: 24(4): 18–35 - http://dx.doi.org/10.37725/mgmt.v24.4586

Responsible Editor: Thomas Roulet

Copyright: © 2021 Vicdan et al. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Published by AIMS, with the support of the Institute for Humanities and Social Sciences (INSHS).

Received: 12 November 2019; Accepted: 17 February 2021; Published: 15 December 2021

Competing interests and funding: The authors have not received any funding or benefits from the industry or elsewhere to conduct this study.

*Corresponding author: Mar Perezts, Email: perezts@em-lyon.com

 

Abstract

Platform organizations bring renewed attention to power disparities and risks in the rise of surveillance capitalism. However, such critical accounts provide a partial understanding of the complexity of surveillance phenomena in such shifting socio-technical and digital environments. The findings from a netnographic investigation of a healthcare platform organization, PatientsLikeMe, unravel how platforms become the locus where multi-level flows of surveillance converge, thereby constituting what we identify as a surveillant assemblage. We develop a comprehensive approach for understanding how platforms constitute a dynamic crossroads of micro-, meso- and macro-surveillance phenomena within and beyond the online communities they create. This study highlights this surveillant assemblage’s emerging practices and potentially empowering outcomes that enable multi-stakeholder involvement in big data and knowledge generation in healthcare. Broader implications of multi-level surveillance in and through platforms are discussed.

Keywords: Multi-level surveillance; Surveillant assemblage; Platform organization; Netnography; Healthcare

 

Mass datafication in the rise of platform organizations has renewed legitimate concerns regarding surveillance phenomena owing to the platform’s overwhelming algorithmic data-extraction power. Beyond anything in Orwell’s (1949) dystopian imagination or Foucault’s (1979) descriptions of disciplinary institutions, surveillance in platforms seems ubiquitous (Andrejevic, 2012) and ever more complex today (Farinosi, 2011). As spatially delimited digital infrastructures bringing together a network of stakeholders, platforms are reconfiguring socio-economic ecosystems (Orlikowski & Scott, 2014), combining self-quantification with community dynamics and market relations (Chanal & Caron-Fasan, 2010) to produce new or challenge existing forms of organization, provision of services and wealth creation (Srnicek, 2016). Through their ‘data mining capabilities of discovering meaningful patterns and distilling them into predictive analytics’ (Cohen, 2013, p. 1920), they are ‘shaping and computing the everyday’ (Alaimo & Kallinikos, 2017). Among other things, on platforms, people are massively, continuously, knowingly (or not), and willingly (or reluctantly) being surveilled for a profit (Fuchs, 2011; Zuboff, 2019). This brings renewed questions concerning how surveillance functions and what outcomes it yields in and through platforms where diverse human or nonhuman stakeholders participate in the production, distribution, monitoring, and usage of information at an unprecedented scale.

While scholarly attention has focused heavily on critical accounts, the way surveillance actually functions in such shifting socio-technical environments as platforms remains an understudied and complex issue. One such platform captured our attention: PatientsLikeMe, a for-profit, ad-free healthcare platform harboring several disease-specific online communities. Our extensive netnographic study of PatientsLikeMe suggests that it does more than extract and sell the health data obtained from surveilling its online community. Both in and through the platform, surveillance mechanisms were at play at various levels, yielding outcomes other than coercive control and exploitation. These included a series of relational and learning mechanisms through which patients care for each other while increasing their own disease literacy and empowerment (micro), changing patient–healthcare provider relationships via personalized medicine and normalizing data sharing in and beyond the platform (meso), and shifting power hierarchies in healthcare and medical knowledge generation by connecting multiple stakeholders, tools, and practices (macro).

Such findings problematize the theoretical and ideological assumptions (Alvesson & Sandberg, 2011) that dominate the study of surveillance on platforms, which is seen predominantly as a macro-level phenomenon aimed at control and profit. Indeed, some scholars have noted how surveillance can be as much about controlling as about caring (e.g., Lyon, 2006; Iedema & Rhodes, 2010; Sewell & Barker, 2006), highlighting a tension at the crossroads of multiple levels and outcomes of surveillance that warrants further attention. This leads us to argue that, in overly focusing on the – important, yet not sole – issues of power disparities, control, and profit-making, other dynamics and outcomes of surveillance in platforms have been neglected or easily dismissed as either naïve or the product of the ideological apparatus sustained by platform giants. What has been particularly overlooked is the multi-level complexity of surveillance practices in and through platforms and their outcomes; addressing this in greater depth is the objective of this study. Our findings unpack the dynamics of surveillance flows constituting multi-level sociomaterial interactions involving and affecting diverse human and nonhuman stakeholders, including the PatientsLikeMe community, healthcare providers, pharmaceutical companies, governmental bodies, research institutions, tracking tools and algorithms, and data-tracking and aggregation processes. In studying how such multi-level surveillance flows were intricately woven together on PatientsLikeMe, we bring forward two main contributions.

First, we draw attention to how surveillance functions in and through platforms, which constitute a dynamic crossroads of micro-, meso-, and macro-surveillance phenomena both within their online communities and beyond. This led us to understand platforms as digitally mediated spaces for surveillance, that is, as loci where multi-level surveillance phenomena converge, thereby constituting a surveillant assemblage, a term coined by Haggerty and Ericson (2000) that has revitalized studies of contemporary surveillance (Galič et al., 2017). Drawing selectively on Deleuzian analyses (Deleuze, 1990; Deleuze & Guattari, 1980) of surveillance transformations in the digital age, they apply the notion of assemblage to surveillance studies to theorize the multiple objects in flow (information, bodies, desires, institutions, and processes) that make several systems of surveillance converge into a functional entity, one that transforms the purposes and hierarchies of surveillance.

Second, we answer calls to explore surveillance’s outcomes other than discipline, control, and capitalistic exploitation, seeking to explore its potentially productive (Lyon, 2007), participatory (Albrechtslund, 2008), collaborative (Pridmore, 2013), or even existential (Hafermalz, 2020) outcomes. In our case, in becoming a new pole where flows of information, actors, and surveillance practices converge, that is, in constituting a surveillant assemblage, PatientsLikeMe fosters an empowering potential of surveillance in healthcare, drawing attention to enabling and constraining power dynamics on healthcare platforms.

We begin by reviewing the literature on platforms and surveillance to point out some limitations and neglected aspects. We then present our case and netnographic method, followed by our findings that allow us to understand surveillance in platforms as a multi-level surveillant assemblage. Finally, we discuss implications and contributions, and outline some potentials and pitfalls for future research as concluding remarks.

Platforms and surveillance

Surveillance capitalism and its risks

Platforms, as today's dominant organizational forms, are ‘data-based organizations that extract value and make profit from the social everyday they themselves engineer’ (Alaimo & Kallinikos, 2017, p. 176). As put by Fuchs (2011, p. 289), ‘the discussion of surveillance in web 2.0 is important because such platforms collect huge amounts of personal data in order to work’, that is, by commodifying every digital trace of those surveilled (Lyon, 2001, 2002; West, 2019). As surveillance is ‘a technology-dependent concept’ (Galič et al., 2017, p. 26), how surveillance unfolds in such specific socio-technical and organizational environments is inescapably linked to the fact that they have become behemoths of wealth creation (Srnicek, 2016). Although the links between surveillance and capitalism go much further back than today’s Internet giants, such as Google and Facebook (Foster & McChesney, 2014; Fuchs, 2011), the commodification capacities of platforms’ infrastructure for extracting data asymmetrically, without dialogue or consent (Galič et al., 2017), have brought renewed concerns about ‘surveillance capitalism’, which relies on surveillance as its key modus operandi and ‘tunes’ and ‘herds’ (Zuboff, 2019, pp. 294–295) our online behaviors into commoditized assets.

Such production of information becomes the source of a data-driven and algorithmically commercialized instrumentarian power that Zuboff (2015, p. 75) christens ‘‘Big Other’ (…) that effectively exile(s) persons from their own behavior while producing new markets of behavioral prediction and modification’. Furthermore, such transformations in information accumulation rely on compatible legal structures that paradoxically enable them by considering the data extracted to be ‘raw’ data, that is, data in the ‘public domain’, available for the taking (Cohen, 2018). This has fostered numerous concerns about how surveillance capitalism secures government collusion, and about the strength of its ideological and narrative justifications, including the free and open network, instant and customized personalization, and new forms of consumer power. However, the ‘sharing’ economy has been shown to not always deliver on its promise of social connection and instead to harbor labor exploitation, discrimination, and inequality, as well as power asymmetries, algorithmic biases and discrimination, tailored advertising, and invasions of privacy (e.g., Gillespie, 2017; Schor & Attwood-Charles, 2017; Visser et al., 2019; West, 2019). Conversely, the concept of privacy – defined as freedom from and resistance to surveillance (Farinosi, 2011; Lyon, 2001; Solove, 2002) – is also sustained by strong and equally contestable ideologies, and often obscures power dynamics that play out in eminently complex ways in platforms.

The complexity and ambivalence of surveillance

The economic dominance of platforms today leads considerations of surveillance in this context to almost exclusively ‘emphasize how they cumulatively pose a threat to civil liberties’ (Haggerty & Ericson, 2000, p. 610), so that numerous relevant issues concerning surveillance in platforms are neglected, easily dismissed, or deprioritized. These concerns nevertheless reflect only a partial picture, given the complexity of how surveillance works in and through platforms. As big data are both ‘a powerful tool to address various societal ills, offering the potential of new insights into areas as diverse as cancer research, terrorism, and climate change [… and] a troubling manifestation of Big Brother, enabling invasions of privacy, decreased civil freedoms, and increased state and corporate control’ (Boyd & Crawford, 2012, p. 664), some studies encourage us not to completely downplay the productive potentialities of surveillance in favor of its risks.

Owing to this inherent ambivalence, surveillance studies tend to neglect the potential positive developments of surveillance practices (e.g., in science and medicine, relevant to our empirical case) (Galič et al., 2017). In unpacking this ambivalence, we remark that most of the dark claims operate at a macro-economic level, and that more in-depth studies of how specific organizational forms, such as platforms, participate in surveillance (capitalistic or otherwise) remain insufficiently developed. This is surprising, particularly considering that surveillance studies have always been sensitive to the organizational, material, and technological dimensions that make surveillance possible, at least since Bentham’s seminal work on the Panopticon and the architecture-based conceptions of surveillance it inspired (Galič et al., 2017) – a metaphor that subsequent studies have over-stretched (Brivot & Gendron, 2011; De la Robertie & Lebrument, 2019; Hafermalz, 2020; Haggerty, 2006). However, in ‘going beyond’ the Panopticon, attention to the organizational architecture, the infrastructure, and the technological materiality of surveillance need not be forgotten.

Without denying the risks, other outcomes besides control and exploitation are worth noticing, with leading scholars in surveillance studies continuously stressing the ambivalence of surveillance as being simultaneously about controlling and caring (Haggerty & Ericson, 2000; Lyon, 2001, 2006, 2014). This yields a bilateral tension between surveillance’s coercive (supervision and subordination) and caring (protection from deviance) roles (Sewell & Barker, 2006). For instance, studies on self-surveillance and organizational surveillance (Iedema & Rhodes, 2010; Vaz & Bruno, 2003) have shown how power and care are actually very difficult to separate. This might explain why researchers have overlooked caring surveillance in favor of more obvious issues of domination and often disguised control (Weiss, 2005). Binary conceptualizations of surveillance – for example, coercive versus caring and passive versus resistant (Iedema & Rhodes, 2010; Sewell & Barker, 2006) – tend to provide a polarized and therefore limited picture, leaving much uncharted territory in emerging surveillance phenomena (Albrechtslund, 2008; Pridmore, 2013), particularly in digital environments.

Scholars developed the concepts of reverse surveillance (Brin, 1998) and sousveillance (Mann et al., 2003) to draw attention to the disciplinary effect exerted by the bottom-up movement to counter the top-down scrutiny by the corporate elite. Being watched (e.g., followed on social media) is also an asset (Lyon, 2007) fulfilling exhibitionist and narcissistic desires (Koskela, 2004), while making surveillance playful and entertaining (Albrechtslund & Dubbeld, 2005). Voluntary visibilization practices through surveillance technologies can even be a matter of ‘existential recognition’ of belonging (Hafermalz, 2020). Consequently, considering surveillance as a potentially empowering (Albrechtslund, 2008; Haggerty & Ericson, 2000) and a productive practice (Lyon, 2007) warrants further attention, as it is a controversial, counter-intuitive issue.

Accordingly, we aim to uncover how surveillance plays out at the various levels that platforms operate within, shifting the relations and directions of visibility mediated by various technologies. Platforms become a privileged site for studying the transformations of the surveillant gaze, as they increasingly blur the differences between agents and targets of surveillance and mitigate power relations (Galič et al., 2017). In platforms, surveillance technologies are more than an ‘electronic panopticon’ (Lyon, 1994) that imposes a centralized gaze (usually by the State or the corporation) on passive others (citizens, employees, or consumers). They simultaneously foster lateral surveillance (Brivot & Gendron, 2011) and self-surveillance (Albrechtslund, 2008; Vaz & Bruno, 2003), among other variants in which the increased visibility between surveillor and surveilled makes both more active in this process (Farinosi, 2011). Following Hafermalz (2020, p. 7, quoting Han 2015), it is worth asking what happens when ‘the inhabitants of the digital panopticon […] are engaged in ‘lively communication’ and actively produce and share personal information and ‘bare’ themselves to one another ‘of their own free will?’’ Reviewing the literature on the notion of surveillant assemblages, the next section adds the missing theoretical piece for uncovering this complex and dynamic context, inclusive of some of the neglected aspects and ambivalent tensions detailed above.

Surveillant assemblages: Studying surveillance in the digital context

In a recent review, Galič and colleagues (2017) provide a tripartite chronological–thematic overview of surveillance studies and locate Zuboff’s development of surveillance capitalism within the second phase of post-panoptic or ‘infrastructural theories’. However, we are interested in the other two approaches included in this phase: Deleuze’s (1990) analysis of control societies (Deleuze & Guattari, 1980) and its reinterpretation by Haggerty and Ericson (2000) to conceptualize the notion of surveillant assemblage. What makes this phase particularly interesting is that it signals the move from physical materiality (e.g., the prison with its central tower) to digital materiality (e.g., algorithms and connected devices), on the one hand, and from a (relatively) single direction of explicit gazing to networked, decentralized, and often invisible gazing, on the other. Furthermore, the goal of surveillance is no longer discipline but control, as it is no longer institutions but corporations that orchestrate surveillance. This shifts the temporality of surveillance from a potentially omnipresent gaze on individuals to an effectively omnipresent recording of what Deleuze calls dividuals, that is, fragmented, divided individuals whose various facets can be recorded and stored in many different data banks, with the rise of digital technologies, for consumerist purposes.

Building selectively on this Deleuzian heritage that we have very succinctly summarized, Haggerty and Ericson (2000, p. 606) noted that we are ‘witnessing a convergence of what were once discrete surveillance systems to the point that we can now speak of an emerging ‘surveillant assemblage.’’ They argue that this can better account for technological transformations, the move from a human gaze to machine-based tracking in a rhizomatic and networked expansion, and a certain leveling of hierarchies (later developed and nuanced by Haggerty, 2006). An assemblage is a ‘functional entity’ grouping heterogeneous objects in flow (including people, institutions, processes, algorithms, information…) that ‘work’ together, serving a surveillance purpose of generating data and knowledge through monitoring the ongoing interactions between these objects (DeLanda, 2016). Assemblages are a ‘potentiality’: their apparent stability in fact conceals a multiplicity of flows in temporarily more or less fixed arrangements, each, in turn, composed of multiple discrete assemblages. A range of desires – understood as immanent, active, and positive forces, including desires for control, governance, security, profit, and entertainment – holds the assemblage together (Haggerty & Ericson, 2000, p. 609). Surveillant assemblages ‘can hence be seen as ‘recording machines,’ as their task is to capture flows and convert them into reproducible events’ (Galič et al., 2017, p. 21), particularly now with computerized datafication, as ‘centres of appropriation where these flows can be captured’ (Haggerty & Ericson, 2000, p. 608).

Humans here work alongside technology in a ‘multi-agency’ constitution of assemblages, and human bodies are understood as hybrid compositions, flesh-technology-information amalgams, broken down into dividuals or ‘data doubles’ (e.g., consumer profiles), which are then re-assembled (e.g., into consumption patterns). Before they can be controlled, molded, or punished, bodies must be known by ‘centres of calculation’, such as statistical institutions, the police, or laboratories, which open or close access to information for the people behind such ‘data doubles’. At the interface of de-corporealized measurable bodies and technologies, surveillance reduces flesh (but also non-human bodies) to information, turning them into a renewed form of ‘surplus value’. Previously scattered records are digitally combined to serve new purposes, such as marketing, policing, and entertainment (Haggerty, 2006). Combinations are triggered and intensified ‘when there is some perceived ex post facto or prospective need’ (Haggerty & Ericson, 2000, p. 618), and are therefore potentially infinite.

In sum, the surveillant assemblage ‘can be seen as the first recognition that surveillance needs to be analyzed in context’ (Galič et al., 2017, p. 34), as we shall do through our empirical case discussed next. Seeking to understand the complexity of how surveillance functions in and through platforms in these terms, we ask: how do multi-level flows, directions, agents, and targets of surveillance interact on platforms, and with which outcomes?

Studying surveillance in a healthcare platform organization

Relevance of the healthcare sector

Control over knowledge about the body but also the conditions of discourse on this knowledge (e.g., the ‘truth’ about disease) has traditionally been the object of one-way scrutiny by ‘sages’ (e.g., the medieval clergy, the rational ‘medical gaze’), as shown by Foucault’s (1975) genealogy of the clinic. As such, each system of truth prevents collaboration and contestation of the conditions, as well as of the content of the discourse by actors other than its main gatekeepers (clergy, then doctors), and first and foremost by patients themselves. To date, patients usually do not have an inherent right to their data that healthcare providers hold, as medical data flow predominantly to experts.

This power asymmetry rooted in the control of expert information operates at various levels. At the macro-level of policies (cf. Fotaki, 2009), governments and healthcare providers set boundaries on the distribution of patient health records to third parties under the pretext of protecting patient privacy, and large pharmaceutical companies promote health ideologies structured around their products (such as vaccinology) (Picard et al., 2017) – two sensitive issues that the COVID-19 pandemic has recently revived. At the micro-level, consumerist approaches to healthcare reinforce the myth of patient choice and promote ‘prosumption’ as a means to exploit patients (by making them produce the service they need and making them responsible for managing their diseases), thereby increasing patients’ vulnerability and inequality in terms of access to care, particularly when it depends on and is mediated by supposedly ‘neutral’ technologies that gloss over patients’ differences (Visser et al., 2019). However, albeit intertwined, access to healthcare and access to health information are two different things. While the consumerist approach focuses on the former, our study focuses on the latter, and more specifically on its two dimensions and their effects. Access to health information concerns both the literacy and knowledge that patients can access, share, and contribute to producing, and the sociomaterial and technological conditions for this discourse to emerge among previously disconnected actors.

With the emergence of participatory medicine and patient-generated data (Topol, 2015), Web 2.0 technologies are transforming the ways patients seek information and manage their health by tracking and generating scientific health information in real time (Eysenbach, 2008; Hesse et al., 2010). Hierarchies of knowledge production and dissemination have started to become more leveled with increased access to medical knowledge and services (Rier & Indyk, 2006), to ‘anyone who is curious, regardless of their training’ (Boyd & Crawford, 2012, p. 664). As argued by a parallel and complementary study on PatientsLikeMe, ‘unconventional, Internet-based organizational forms address traditional expert problems (medical research) through the systematic involvement of non-professionals (patients)’ in ‘stark contrast to the complex, expert-dominated, prestige-laden, and costly institutional arrangements characteristic of medical research’ (Kallinikos & Tempini, 2014, p. 817; Tempini, 2015). The way this challenges the power hierarchies in healthcare is changing what ‘research’ and medical knowledge generation mean.

Research context: PatientsLikeMe.com

PatientsLikeMe is a Medicine 2.0 platform, described as a web-based service using social media technologies for social networking, participation, and collaboration among healthcare actors (Eysenbach, 2008), connecting over 750,000 patients suffering from severe and/or chronic illnesses and collecting self-reported health data on over 2,800 conditions.1 These data are compiled by PatientsLikeMe and used for scientific and commercial research.2 PatientsLikeMe was founded as a privately funded for-profit organization in 2004 by the Heywood brothers, after one of them was diagnosed with a life-changing neurodegenerative disease, amyotrophic lateral sclerosis (ALS). It crowdsources patient care (Topol, 2014) and is established as an opt-in service, not a healthcare provider, to circumvent the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA) rules that control the flow of patient data in the USA.

At no monetary cost, PatientsLikeMe enables patients first to manage their care by tracking their disease’s evolution through their profiles and a variety of self-reporting and datafication tools, something that patients could otherwise hardly do or comprehensively visualize by themselves3 (see Kallinikos & Tempini, 2014; Tempini, 2015). PatientsLikeMe creates disease-specific communities and enables patients with the same condition to share information and experiences with other community members, as well as with external actors, such as healthcare providers, pharmaceutical companies, and the government. PatientsLikeMe aggregates these patient data (plus patient-reported measures on quality of life, forum discussions, and patient surveys) into a massive goldmine on drug side effects and on patient health and lifestyles. These patient-reported data are continuously and systematically recorded, pooled, and shared with partners for medical research in anonymized, aggregated form (see the illustrative sketch below), and these partners also recruit PatientsLikeMe patients for randomized clinical trials. PatientsLikeMe partners with non-profit organizations and academic research institutions for free, and sells aggregate patient data to pharmaceutical companies – its only source of revenue. It forgoes advertising to make patients comfortable sharing their data, hence relying only on word-of-mouth to sustain its growth.
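To give a concrete, purely illustrative sense of the kind of pooling and anonymization described above, the following minimal Python sketch aggregates hypothetical patient-level self-reports into condition- and drug-level summaries with identifiers dropped. The field names, threshold, and function are our own assumptions and do not represent PatientsLikeMe’s actual data model or pipeline.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical patient-level self-reports; field names are illustrative only,
# not PatientsLikeMe's actual data model.
reports = [
    {"patient_id": "p01", "condition": "MS", "drug": "drugA", "side_effect_score": 2, "quality_of_life": 6},
    {"patient_id": "p02", "condition": "MS", "drug": "drugA", "side_effect_score": 4, "quality_of_life": 5},
    {"patient_id": "p03", "condition": "Mood", "drug": "drugB", "side_effect_score": 1, "quality_of_life": 7},
]

def aggregate_anonymized(reports, min_group_size=2):
    """Pool individual reports into (condition, drug) aggregates,
    dropping identifiers and suppressing very small groups."""
    groups = defaultdict(list)
    for r in reports:
        groups[(r["condition"], r["drug"])].append(r)
    summary = []
    for (condition, drug), rows in groups.items():
        if len(rows) < min_group_size:  # crude small-cell suppression
            continue
        summary.append({
            "condition": condition,
            "drug": drug,
            "n_patients": len(rows),
            "mean_side_effect_score": mean(r["side_effect_score"] for r in rows),
            "mean_quality_of_life": mean(r["quality_of_life"] for r in rows),
        })
    return summary

print(aggregate_anonymized(reports))
```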

Methodology

We conducted an extended observational netnography of PatientsLikeMe (Caliandro & Gandini, 2017; Kozinets, 2010), a methodology and design that are particularly rich for studying online communities (Park et al., 2019). As healthcare can be considered a sensitive research topic, this allowed us to follow members’ naturally occurring dynamics precisely (their archived and live dialogues and interactions, captured through field notes) in an unobtrusive way, thus avoiding undesirable outsider effects (Langer & Beckman, 2005).

We ensured trustworthiness in netnographic research (Kozinets, 2010) through extended immersion, ongoing observation, and triangulation over a 2-year period. We complied with ethical research standards since, first, ‘the behavior being observed is commonly performed in public […], and in a setting where the anonymity of the person being observed is assured’ (Zikmund & Babin, 2006, p. 242); second, we collected and analyzed existing, publicly available archival and observational data; and third, the researcher’s presence was fully disclosed while not interacting with members (Kozinets, 2010). Anonymity is maintained to protect both participants’ real and pseudonymous identities (Park et al., 2019).

Our netnographic data collection was conducted in four iterative steps. First, the decision to research PatientsLikeMe was initially inspired by a cover story in Business 2.0 magazine (Schonfeld & Morrison, 2007) that identified 10 exemplar organizations – including PatientsLikeMe – that had the potential to rewrite the rules of existing industries and create new markets. We began by observing and learning about PatientsLikeMe and its members, and why and how they interact with the tracking technologies and other actors on the platform.

Second, we sought to develop a deep understanding of what makes people share their sensitive private health and lifestyle data (our exploratory question) and contribute to medical knowledge production and distribution, and how this process is organized on PatientsLikeMe. To do this, we registered with a guest account, which enables various researchers to engage on this platform. We began our observations in the public community blog, where there were ongoing high-traffic discussions around privacy, data sharing and tracking, patient empowerment, and big data use. We observed that the majority of the posts on these issues were from patients in multiple sclerosis (MS) and Mood sub-communities, which then directed us to continue our investigation in the forums of these sub-communities.

Third, we triangulated data from a variety of sources in multiple rounds, involving (1) archival data from news and articles about PatientsLikeMe in the business press (e.g., New York Times, Business 2.0, Forbes, Economist, Wall Street Journal, and BusinessWeek), the public community blog, YouTube videos, and research published by PatientsLikeMe and industry partners in scientific journals; (2) field notes; and (3) textual data from numerous member posts and their interactions in different sub-communities and the community blog (see Table 1 for detailed information on data sources and their use). Attention was given to all exchanges in the MS and Mood sub-communities (patient-to-patient and patient-to-administrator exchanges, as well as interactions with researchers and founders).

Table 1. Data sources

Data type: PLM archival data
  • Raw data from discussion forums of the MS subcommunity (2,670 Word pages)
  • Raw data from discussion forums of the Mood subcommunity (1,117 Word pages)
  • Raw data from the PLM community blog open to the public (351 Word pages)
  • Patient profile data (40 Word pages)
  • https://www.facebook.com/PatientsLikeMe
  • Total: around 7,000 postings
Use in data analysis:
  • Focus on doctor/researcher-patient, patient-patient, patient-PLM community support team/founder, and doctor/researcher-PLM founder/community support team conversations and posts
  • Understand what motivates or deters patients to join and share their data on this platform, how they benefit from the data they provide, and how they use these tools both inside and outside the platform
  • Understand patient privacy concerns
  • Understand the power dynamics in the community for sharing, gathering, and organizing data
  • Understand the history of PLM and how it functions
  • Understand how patient data are used in research
  • Understand how disease tracking tools are developed

Data type: Secondary data – (1) news data and (2) video data on PLM
Use in data analysis:
  • Learn about the history and origins of the PLM platform and its mission
  • Learn about the changing power dynamics in the healthcare industry and the scientific medical research process with participatory medicine
  • Understand the function of data intermediaries (such as PLM) in medical research
  • Learn how patient data are used in research conducted in the community by PLM research scientists
  • Understand how patients inform medical practices through post-market drug data
  • Understand the platform’s stance on privacy and openness and how the platform de/reconstructs these concepts
  • Understand how user-generated medical knowledge challenges scientific drug development
  • Learn about the different stakeholders involved in research on the platform
  • Learn how PLM promotes data sharing and openness in healthcare
  • Learn about the partnerships PLM develops in healthcare in order to measure, prevent, predict, and treat diseases
  • Understand the value of tracking patient data for the patient, the government, and the pharmaceutical industry
  • Understand why patients want healthcare industry engagement in tracking patient data
  • Understand how PLM involves industry partners in tracking patient data, listening to patients, and developing new disease measurement tools with patients
  • Understand the future of personalized medicine and drug discovery via health platforms
  • Find out what kind of data protection mechanisms PLM develops to prevent data scraping and privacy issues
  • Question the value and limits of patient-generated data compared to evidence-based scientific data
  • Question the value and limits of the scientific medical research process
  • Understand how measurement (tracking) tools are developed

Data type: Field notes from non-participatory observations
  • Notes taken during patient-patient, patient-doctor, patient-researcher, patient-PLM founder, and patient-PLM community support team interactions in the MS and Mood subcommunities
Use in data analysis:
  • Understand the power dynamics among stakeholders for sharing and producing health data and patient-led medical research
  • Understand the opportunities and challenges associated with tracking patient data

Fourth, our analysis (see the next section) used the keyword method (keywords: disclosure, privacy, sharing, tracking tools, openness, transparency, community, secrecy, monitoring, tracking, big data, crowdsourcing, empowerment, Web 2.0, pharmaceutical, government) to go through the immense number of posts (cf. Table 1; an illustrative sketch of this filtering step follows below). This was informed by our exploratory question on private health data tracking and sharing (Kozinets, 2010). We then realized that, in the discussions and in the practices of sharing, tracking, and monitoring each other’s health, PatientsLikeMe was actually fostering what appeared to be a complex surveillance network among multiple stakeholders. In our iterative analysis, we were increasingly directed toward the surveillance literature, which led us progressively and inductively to study PatientsLikeMe through this lens and ultimately as a surveillant assemblage, as detailed in our analysis.
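As a rough, purely illustrative sketch of such a keyword-based first pass, the keywords listed above could be applied to forum posts along the following lines; the post texts are invented placeholders, and this sketch does not reproduce the actual tooling used in the study.

```python
import re

# Keywords listed in the methodology above
KEYWORDS = [
    "disclosure", "privacy", "sharing", "tracking tools", "openness",
    "transparency", "community", "secrecy", "monitoring", "tracking",
    "big data", "crowdsourcing", "empowerment", "web 2.0",
    "pharmaceutical", "government",
]
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

def select_posts(posts):
    """Return only the posts mentioning at least one keyword,
    together with the keywords matched (for later manual coding)."""
    selected = []
    for post in posts:
        hits = sorted({m.group(0).lower() for m in PATTERN.finditer(post["text"])})
        if hits:
            selected.append({**post, "keywords": hits})
    return selected

# Invented placeholder posts, for illustration only
posts = [
    {"id": 1, "text": "I worry about privacy but sharing my data helped my doctor."},
    {"id": 2, "text": "Anyone tried the new seizure meter?"},
]
print(select_posts(posts))
```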

Data analysis

Observations and downloading of textual data were carried out over 2 years until saturation was reached (Glaser & Strauss, 1967), with ongoing initial reflections by the first author. Subsequently, we proceeded together, iteratively, to the initial data coding to discover the main themes, an approach particularly relevant for understanding the dynamic complexity of online interaction phenomena (Caliandro & Gandini, 2017). Following an inductive coding schema (Glaser & Strauss, 1967), we compared first-order codes (informant-centric terms defining digitally mediated practices and interactions on PatientsLikeMe) to clarify themes through an inductive and recursive process and to create second-order, theory-laden constructs (Gioia et al., 2013). This process was performed individually, and then together, to establish the credibility of the codes discovered. In the open coding phase, following the constant comparative method, we identified two broad categories: how actors engaged on PatientsLikeMe (through 15 digitally mediated practices) and why (16 desired or resulting outcomes of these practices), looking for emerging patterns while staying alert to irregularities. Once the data were coded, we realized that these broad categories were operating at micro-, meso-, or macro-levels, which led us to group the initial findings into these levels for greater clarity and analytical purposes. Our final data structure is summarized in Fig. 1.

Fig 1
Figure 1. Data Structure

While data privacy was one of the key issues discussed by PatientsLikeMe users – and what initially drew our attention to this platform – our findings yielded intriguing positions from community members concerning how tracking tools were being used for surveillance purposes at various interrelated levels. Health data were being massively recorded and shared, arguments for such sharing outnumbered and outweighed privacy concerns, and a variety of stakeholders were ‘converging’ on the platform.

This is when we realized that health-tracking tools were used for surveillance purposes (e.g., patients surveilled not only their own health but also others’), and that patients were not the sole targets of surveillance, which was also being directed at experts and pharmaceutical companies (taking surveillance beyond the platform). This intriguing observation made us think of surveillance as a way to make sense of our data (with five second-order codes to categorize the surveillance practices across micro-, meso-, and macro-levels) and tempted us to engage in a critical reading. However, the more we immersed ourselves in the data, and despite iterative loops with the critical literature on the risks of pervasive surveillance in the digital age, we realized that this would impose a reading that did not quite fit. We agreed that the themes that emerged were telling a different story, pointing toward outcomes other than capitalistic exploitation. We identified two second-order codes to categorize such outcomes: patient empowerment and multi-stakeholder engagement in medical knowledge generation, indeed challenging traditional power hierarchies in healthcare.

At that point, we moved to axial or conceptual coding and returned abductively to the literature calling for more attention to be paid to other, more ‘positive’ outcomes of surveillance, and particularly to Haggerty and Ericson’s (2000) notion of surveillant assemblage, which could account for the converging flows of surveillance in and through PatientsLikeMe. We finally made relational connections within the data structure more salient (Gioia et al., 2013), with second-order constructs pertaining to two derived theoretical dimensions: (1) multi-level flows of surveillance and (2) desires that ‘hold together’ the surveillant assemblage, each spread out across micro-, meso-, and macro-levels (summarized in Fig. 2).

Fig 2
Figure 2. Multi-level Surveillant Assemblage

In the next section, we unpack our findings using selected quotes from the various data sources that triggered and illustrate each theme.

Findings: PatientsLikeMe’s surveillant assemblage

Through this study, we identified that each actor had particular expectations or desired outcomes, which motivated the kinds of surveillance practices they engaged in. Ultimately, these practices unfolded at interrelated micro-, meso-, and macro-levels in and through the platform, making PatientsLikeMe the locus where such multi-level flows of surveillance converged and ‘worked together’, thereby constituting a surveillant assemblage, summarized in the form of a funnel in Fig. 2 and unpacked next.

Micro-level

Self-surveillance

Actors – in particular, patients – reorganize their roles and identities as dividuals through interactions with tracking tools on the platform. In their profiles, patients objectify themselves by listing, via disease-specific and general tracking technologies, their symptoms, the treatments they have received, and the lifestyles they lead. For example, epilepsy patients use a weekly tool called the ‘seizure meter’ to track the type, frequency, and severity of their seizures and to code their symptom data in a structured format. Such tools are developed and continuously updated via patient input, thus enabling patients to closely monitor their own condition and even discover new conditions, the outcome of which is greater self-awareness and disease literacy:

I’ve gained a lot of insight from mood tracking tools, seeing patterns in my mood cycles and how that relates to factors like medication changes and weight gain. (Patient59, Mood)

Over the years I’ve had bouts of depression which I’ve added to my Mood Map…Then – whoa – in looking over the chart I realized that the bigger issue in my life has been compulsion…Having identified this I can now wrestle with how to live with it. (Pat62, Mood)

In conventional surveillance, individuals lacked self-understanding, while information retention by the medical elite was somehow justified as being ‘for their own good’ (Foucault, 1975). Such self-surveillance practices are what PatientsLikeMe was primarily built for: in seeking to increase awareness of their embodied selves and their disease literacy, patients engage in self-datafication. This allows them to visibilize their disease and themselves:

MS is not a dirty secret for me. It’s a tiny part of who I am…I will not keep it a secret, and giving a face to MS makes it more personal and human to others. (Patient66, MS)

The more I talk about living with HIV, the easier it becomes to actually live with it. It also gives me a chance to put a face to the disease, humanizing it and hopefully dispelling some ignorance and fear. (Patient67, HIV)

Data sharing increases both individual and public literacy about diseases, which humanizes and makes them more ‘livable’.

Intra-surveillance

As its name suggests, PatientsLikeMe patients also desire to find others suffering from the same condition in disease-specific communities, and they constitute support groups to generate and share data, once again aided by visualization tools:

We’re just aware of each other’s moods more on here through mood charts and can empathize with each other more. (Patient61, Mood)

Our analysis of video data also reveals that search tools and disease symptom charts enable patients to analyze disease data via matching algorithms, compare their own data with other patients’ profiles, and personalize their treatments accordingly (https://www.youtube.com/watch?v=wch3GdWJiLc). The platform also licenses patient-reported outcome measures under Creative Commons, open for all to share and improve, toward a personalized treatment protocol including drug and supplement use, lifestyle and dietary changes, and the communication of disease-specific data (https://www.youtube.com/watch?v=XQZ5M9oLkXw). However, gazing at others’ profiles not only enhances disease literacy and better knowledge of the self through exchanging with peers but also triggers forms of empathic oversight:

We had a patient whose weight was dropping precipitously, which can accelerate a patient’s deterioration in ALS. Since patients track and share their important outcome measures, another patient could remind him of the importance of keeping his weight up. (Founder1, Community blog)

Everyone snarling back at her is NOT going to help. Anybody looked at her mood map? Depression almost off the chart, same with compulsion. She is having a BAD TIME, people… (…) Doesn’t that suggest that we should be rallying around her to support her, instead of further tearing her down??? (Patient 63, Mood)

This empathy is further sustained by self-modulation and responsibilization to enable a smooth flow of data sharing. Dialogues between patients and staff reveal this responsibilization: patients watch over each other, and community members are endowed with a license to surveil and police in order to protect:

Our community members are a very switched-on group. If anybody posts something suspicious or overtly commercial we normally hear about it in a matter of minutes and can respond appropriately. (SupportStaff3, Mood)

Our observations of forum discussions also suggest that light-touch moderation discourse facilitates surveillance by pushing patients to share and update as much data as possible on their profiles in an effort to increase the authenticity of their profiles and data harmonization:

Patients who (…) sort of have an aggressive voice on the forum, who don’t fill out their profile tend to get pushed by others in the community to do that, because it becomes a validator of who they are, which is an incredibly powerful part of our model. (Founder1, Mood)

Sometimes, such responsibilization backfires, for example, when dealing with trolls or those who create chaos in the community and obstruct data generation. In such cases, PatientsLikeMe staff intervene and control the situation to ensure continuous and smooth data generation, and have created a forum code of conduct. Overall, this also helps newcomers to the platform feel confident enough to interact, being welcomed by older community members.

To sum up, by engaging in self- and intra-surveillance, knowledge and caring function in an intricate way with potentially vital consequences:

This is where I come instead of self-harming or attempting suicide. The highlight of every third day is that I get to do my mood map, which stops me from putting myself in hospital. (Patient94, Mood)

Self- and intra-surveillance on PatientsLikeMe overcome the prior disconnection between patients in the traditional health system and yield improved social relations, with patients aiming for self-datafication and personalization while engaging in caring surveillance relationships.

Meso-level

Inter-surveillance

Surveillance practices also extend beyond the platform through its network of stakeholders, thereby challenging existing meso-level interactions. PatientsLikeMe engages physicians through clinical research and by offering the platform as a way of tracking patients between visits. Patients also bring to their doctors’ appointments ‘doctor visit sheets’ designed for tracking their condition, which destabilizes traditional relational power dynamics:

Drugs I’m currently on were never offered to me during my six years with MS, due to my inability to adequately describe my symptoms, and my doctor’s inability to think outside the box from what he defaulted to using. Through doctor visit sheets, I learned to describe my symptoms better, and shared with my doctor other treatments. With a little time, he became more open, and I communicated better. (Patient23, MS)

In the past, we physicians had information power and (since it dictated our livelihood) we guarded it jealously. Patients didn’t have the ability (or desire) to read through complex medical texts to understand their diagnoses. Now, they routinely come to me armed with a printout from WebMd or PatientsLikeMe and more often than not they are spot on. As a physician, I am no longer one who hoards information but a consultant who provides experience, context, meaning and perspective to what the patient is experiencing. (Physician1, Community blog)

Some physicians on PatientsLikeMe now praise self-tracking tools and the resulting patient-generated data, which they perceive as a sign of increased patient literacy and an opportunity to collaborate with patients in clinical settings. This enhances inter-surveillance dynamics by carrying the flows of surveillance beyond the patient community and the platform, motivated by the desire for more personalized medicine and fewer medical errors. Patients can compare their physician’s decisions against their medical history recorded via tracking tools on PatientsLikeMe, which enables a smoother flow of patient data and improved connection among healthcare providers:

Almost every time I’ve been hospitalized, I’ve been thrown in with an unfamiliar psychiatrist who gives me a new diagnosis and a completely different set of medications. …I now keep a centralized record of my mood and treatment history via mood charts and doctor visit sheets on PatientsLikeMe, which I use as a reference and share with treatment providers. (Patient37, Mood)

While concerns regarding personalization through surveillance practices remain, self-reflexive accounts signal both the awareness of these issues and the mitigating role of the platform:

I think some patients may be their own worst enemy. For instance, doctor visit sheet and mood charts. I try very hard to be completely honest as I answer the survey questions, but I can easily see how I can manipulate the results to convince my doctor to prescribe me certain meds or diagnose me with something I may feel more comfortable with. Sometimes knowing you are being scrutinized, and specifically what about you is being scrutinized can affect your behavior. (Patient97, Mood)

In destabilizing meso-level interactions in healthcare, PatientsLikeMe challenges the current constitution of privacy by making patient engagement in surveillance and the sharing of private health data acceptable, framing data sharing as a ‘human right’ and mobilizing normalizing discourses, such as transparency and de-identification. Furthermore, in carrying the flows of surveillance to clinical settings, PatientsLikeMe promotes a transformation from proprietorship to partnership in the disclosure and distribution of medical data, which challenges the very definition of privacy, that is, control over the externalization of one’s personal information, which belongs to the person (Goodwin, 1991). For example, PatientsLikeMe co-authored the Declaration of Health Data Rights and launched HealthDataRights.org in June 2009. The founders of PatientsLikeMe also testified before the National Committee on Vital and Health Statistics and at the Gov 2.0 Summit to challenge macro institutions’ reliance on secrecy and their desire to control private information:

A modern focus on privacy as a goal has moved the line to the point that medicine is slowed, treatments are delayed, and patients die for failure to have what they need when they need it. We have substituted real harm for mostly theoretical harm. We believe that openness is much more powerful concept than privacy in medicine, and one that gives patients the power to take control of their health. (Founder1, Community blog)

The founders of PatientsLikeMe challenge the dynamics of the current healthcare market, which functions on privacy as an absolute goal and a fundamental human right – a right to ‘not share’. Openness and transparency, promoted as core organizational values, work through people’s desires to improve and manage their care, while anonymizing data provides feelings of safety that normalize surveillance at the meso-level.

Macro-level

Corporate and institutional surveillance

Other healthcare stakeholders, such as pharmaceutical companies and research institutions, are also involved in surveillance relationships on PatientsLikeMe. Data collected using tracking technologies are shared with them to bridge the gap between anecdote and evidence-based medicine, and to inform and guide future clinical practice. This translates into pharmaceutical companies tracking PatientsLikeMe patients for research (e.g., Genentech was granted access to PatientsLikeMe data to study drug effectiveness for cancer treatment); recruiting patients for randomized clinical trials (e.g., Novartis has recruited PatientsLikeMe MS patients); and analyzing existing PatientsLikeMe patient-led clinical research to validate their own research, test or generate hypotheses, and inform their clinical trial protocol design. This aims to provide alternative solutions to existing ailments and to improve existing solutions in a timely manner, as reflected in our analysis of news items and published medical research using PatientsLikeMe data. For example, PatientsLikeMe research scientists won the 2009 Journal of Medical Internet Research Medicine 2.0 Award for research on secondary uses of drugs, or drugs that are off-patent and unlikely to be studied systematically. This gives pharmaceutical companies easy and inexpensive access to patient-generated data, potentially lower research costs, and faster clinical research processes that provide real-time feedback from patients (cf. Arnst, 2008 in Table 1).

Despite some data-scraping incidents on the platform (Angwin & Stecklow, 2010) – our video data also suggest that PatientsLikeMe establishes contractual relations with industry partners prohibiting them from re-identifying or scraping data, on pain of legal consequences (https://www.youtube.com/watch?v=pTuemJ7ISaM) – patients actually endorse the use of PatientsLikeMe data for improved patient care:

We give Pharma real life experiences with medications they provide for us, and make them better with less side effects. (Patient 103, Mood)

Data collection is what helps us get drug firms pay attention to what works or does not work for MS. (Patient 104, MS)

Including corporate and institutional research partners in the surveillant assemblage has several outcomes. Traditionally, clinical trials have been dominated by medical authorities and academics, which resulted in information sequestration privileging the medical elite (Epstein, 2007). Although expert-collected clinical data and patient-generated data may sometimes contradict each other owing to differences in methods and sampling (Kitchin, 2015), such discrepancies provide useful information as well, and on the whole, this collaboration is destabilizing the disciplining rules and criteria of top-down medicine. One example is PatientsLikeMe’s MS rating scale, designed to exhibit symptom data in the patient’s own language and developed in collaboration with pharmaceutical companies and specialized healthcare providers, who test the validity of such tools (see Kallinikos & Tempini, 2014):

If there are inaccuracies, we can modify the tool. We did this in ALS community where one member indicated that ALS rating scale didn’t pick up her changes having progressed to a very advanced stage of disability. An extension was designed to pick up on changes. (SupportStaff2, MS)

This tracking tool, developed with patient input, was validated by comparing its results with findings generated using standard clinical assessment methods employed in MS (Bove et al., 2013). PatientsLikeMe also launched the Open Research Exchange hub to establish a collaborative ecosystem of industry partners and patients, thereby changing the way medical knowledge is generated.

Another important outcome is that access – an important challenge for big data, as these are privately produced – is overcome to a greater extent (Kitchin, 2015). The PatientsLikeMeListen service enables industry partners to receive, via tracking tools, patient-reported outcome data on drug effectiveness and side effects. PatientsLikeMe also performs sentiment analyses of patient discussions of drug effectiveness in community forums, and gives pharmaceutical companies access to the aggregated sentiment data. Along with scientific expertise, patient experiential expertise thus has the potential to become a productive force in medical research.
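For illustration only, a minimal lexicon-based sketch of how forum mentions of a drug might be scored and aggregated is given below; the lexicon, placeholder posts, and scoring rule are our own assumptions and do not reproduce PatientsLikeMe’s actual sentiment-analysis methods.

```python
# Tiny illustrative sentiment lexicon; real systems use far richer resources.
POSITIVE = {"better", "improved", "works", "relief"}
NEGATIVE = {"worse", "nausea", "tired", "pain"}

def score(text):
    """Naive sentiment score: positive minus negative word counts."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def aggregate_sentiment(posts_by_drug):
    """Average naive sentiment of forum posts, grouped by drug."""
    return {
        drug: round(sum(score(p) for p in posts) / len(posts), 2)
        for drug, posts in posts_by_drug.items() if posts
    }

# Invented placeholder posts, for illustration only
posts_by_drug = {
    "drugA": ["Feeling much better since switching", "Bad nausea in week one"],
    "drugB": ["No relief at all and very tired"],
}
print(aggregate_sentiment(posts_by_drug))
```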

State surveillance

By including State agencies in its web of interrelated actors, PatientsLikeMe contributes to changing our traditional understanding of State surveillance and regulation in two ways: first, by inverting the surveillant gaze. Instead of monitoring patients with the aim of controlling them, as in a Foucauldian scenario, PatientsLikeMe provides a way of enhancing State oversight of pharmaceutical companies by taking post-market surveillance (aka pharmacovigilance), which monitors drug safety after market release,4 a step further. Since 2008, PatientsLikeMe has developed a partnership with the Food and Drug Administration (FDA) and its MedWatch pharmacovigilance system5 to self-generate, record, and track individual reports, and to aggregate data from the patient community to share with the FDA:

Patients desperately need a way to collect reports of adverse effects from medications, a responsibility shirked by the FDA and subverted by the drug companies. (Patient16, MS)

We launched a pilot program in our MS community, which helps patients submit treatment related adverse events directly to FDA through PatientsLikeMe. Understanding when these events occur helps FDA better regulate pharmaceutical and medical industries to protect consumer safety and bring safer, more effective products. (Founder2, MS)

State collaboration with social media platforms and the use of patient-generated data on these platforms for pharmacovigilance are fairly recent phenomena, as the State’s focus on patients’ voices to complement existing pharmacovigilance has been very limited (https://www.centerwatch.com/articles/16413). By connecting the patient directly to the FDA to report drug side effects, PatientsLikeMe increases the responsibility of the State to further regulate the pharmaceutical industry (e.g., issue product recalls, warnings, and safety messages) and to compare patient-reported adverse effects with those reported by clinicians in randomized clinical trials so as to assess their risks and benefits. Such macro-level responsibilization reduces the intrusion of the pharmaceutical industry and influences how people evaluate and communicate the risks of existing medications (Cox et al., 2010).

Second, PatientsLikeMe provides information to patients regarding state health-related initiatives. For instance, PatientsLikeMe imports data from the federal site clinicaltrials.gov (where all US clinical trials are required to register) to inform patients about ongoing randomized clinical trials for which they may be eligible. Other collaborations, for example, between the US Department of Veterans Affairs' Epilepsy Centers of Excellence, PatientsLikeMe, and UCB Pharmaceuticals for research into the factors that influence health outcomes for veterans with epilepsy, bring State bodies into the surveillant assemblage.
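As a rough illustration of such a data import, the sketch below filters hypothetical trial records by condition and recruitment status; the field names and matching rule are assumptions for illustration and do not reflect clinicaltrials.gov's or PatientsLikeMe's actual data model:

```python
# Illustrative sketch only: matching a patient's condition against hypothetical
# registered-trial records to surface trials the patient might be eligible for.
trials = [
    {"nct_id": "NCT00000001", "condition": "multiple sclerosis", "status": "recruiting"},
    {"nct_id": "NCT00000002", "condition": "epilepsy", "status": "completed"},
    {"nct_id": "NCT00000003", "condition": "multiple sclerosis", "status": "recruiting"},
]

def eligible_trials(patient_condition, records):
    """Return IDs of recruiting trials whose condition matches the patient's."""
    condition = patient_condition.lower()
    return [
        t["nct_id"]
        for t in records
        if t["status"] == "recruiting" and t["condition"] == condition
    ]

print(eligible_trials("Multiple Sclerosis", trials))  # ['NCT00000001', 'NCT00000003']
```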

Overall, multiple stakeholder engagement in organizing big data generation via surveillance practices in and through PatientsLikeMe indicates that surveillance in platforms is not a unilateral or top-down process but a multi-level flow of interrelated agencies and practices that 'work together' and are 'held together' by the outcomes desired by each party. All the stakeholders involved knowingly become both the agents and the targets of surveillance converging in a co-constructed and surveilled digital environment, and it is these practices that constitute PatientsLikeMe as a multi-level surveillant assemblage.

Discussion

This study shifted analytical attention to aspects of surveillance that are often understudied: what kind of surveillance practices emerge in and through platforms, which stakeholders are involved, and what outcomes are produced? By exploring how 'individuals actively participate (wittingly or not) in their own visibility, thereby creating new potentialities of surveillance by others' (Brivot & Gendron, 2011, p. 140), we raise a scarcely studied question: what happens when those who were once under surveillance, and thereby disciplined and controlled, now enact surveillance themselves? Concretely, we were able to highlight how surveillance operates at multiple inter-related levels that are 'knotted together', that is, converging on the platform. This provides two main contributions and several potential directions for future research.

Platforms as loci where multi-level surveillance phenomena converge

First, we show how platforms serve as digitally mediated spaces for surveillance, constituting a dynamic crossroads of micro-, meso- and macro-surveillance phenomena both within their online communities and beyond. Unpacking the multi-level complexity of surveillance in and through platforms reveals how such levels, practices, and objects are held together in a digital environment, converging as a surveillant assemblage (Haggerty & Ericson, 2000). Figure 2 provides a visual representation of this convergence in the form of a funnel, illustrating multiple objects in flow (information, bodies, desires, institutions, and processes) and showing how several systems, directions, outcomes, and levels of surveillance converge into a functional entity that transforms the purposes and hierarchies of surveillance. Our case reveals platforms' potential for fostering and shaping a complex web of such surveillance flows, and shows how and why all these converge, 'work' together, and constitute a surveillant assemblage.

This conceptualization can revitalize the study of surveillance in and through platforms by allowing a more fine-grained analysis of its complex, rhizomatic, and multi-level dimensions. It could also complement sociomaterial accounts of platforms (Orlikowski & Scott, 2014), with which surveillance studies share the notion of 'assemblage'. However, Deleuze and Guattari, whose work Haggerty and Ericson relied on to theorize surveillant assemblages, use the French word agencement, which translates into English as 'assemblage'. This translation nevertheless neglects a key dimension of Deleuze and Guattari's thinking, rooted in the term's Latin origin, agens: its agentic, dynamic, and processual dimension, somewhat present in Haggerty and Ericson's notion of the 'desires' that make the systems converge, but not as salient as in Deleuze and Guattari. This is of prime importance in sociomaterial studies of information technology (Cecez-Kecmanovic et al., 2014) and in studies of the role of human and non-human actors and interactions (Taupin, 2019) in yielding social and institutional change and alternative forms of organizing (Ouahab & Maclouf, 2019). This essential point should lead us to further refine our understanding of the surveillant assemblage not only as a sociomaterial assemblage of surveillance flows but also as an agencement process that sets such flows in motion and makes them converge and produce or perform particular outcomes – a key question for future studies to explore.

Potentialities of the platforms’ surveillant assemblage

Second, our study answers calls to go beyond the risks of platform-related surveillance (e.g., Albrechtslund, 2008; Hafermalz, 2020; Lyon, 1994, 2007; Pridmore, 2013). Within the surveillant assemblage of PatientsLikeMe, surveillance becomes more relational and entangled than ever and seems to empower patients' license to heal by shifting traditional power dynamics (Haggerty, 2006) in healthcare. Data sharing and aggregation enable new forms of patient empowerment and knowledge generation, ultimately influencing patient well-being and transforming roles, hierarchies of visibility and power, and relations among stakeholders in and beyond the platform. Our netnographic design is part of the novelty of our study, making it inclusive of elements relevant to the experience of surveillance on the receiving end (Hafermalz, 2020). Our findings hint at the potentiality of surveillance no longer being just an omniscient exploitative practice exerting control over populations (Zuboff, 2015, 2019). Such risks do not disappear, but are subsumed into a more complex process in which it is not easy to either fully accuse or fully praise platforms' surveillance effects.

This highlights not only constraining but also enabling power dynamics on healthcare platforms. The question of access to health, when mediated by technology, has been criticized for accentuating inequalities (Fotaki, 2009; Visser et al., 2019). However, PatientsLikeMe, while fundamentally a platform of patients, does not treat individual patients as consumers of healthcare (it does not offer healthcare services) but operates as a networking platform of multiple stakeholders. Gaining literacy and managing one's disease fit a prosumer narrative but are sustained by a different ideology: 'sharing as a fundamental human right'. There is, indeed, no such thing as a 'universal patient' (Visser et al., 2019), and PatientsLikeMe precisely seeks to harbor a myriad of different patient experiences, whose multiplicity should be made visible by the platform. This brings attention to 'why individuals take on the responsibility of 'visibilizing' themselves, (…) due to an existential need to be seen as a legitimate member (…) to remain 'on the inside'' (Hafermalz, 2020, p. 7) and not be cast out of society due to their disease.

Through the surveillant assemblage, all involved actors knowingly become both the agents and the targets of surveillance, which is not fixed or unilaterally given by the technological environment but is enacted in practice. Patients engage in self-datafication that enables increased care for and awareness of the self and others. The decisions people make concerning their care become more reflexive and confident, which then allows them to make more direct and deliberate demands of other entities on and beyond the platform. We find that state and non-state actors, private actors, and communities all gain relative power to perform surveillance activities, and greater control over being surveilled voluntarily. For patients, such voluntary visibilization of oneself is often framed as a reflexive, relational, and existential necessity (Hafermalz, 2020) to connect with others (Iedema & Rhodes, 2010). For instance, these practices were shifting previous conceptualizations of privacy and surveillance and challenging their oppositional constitution. As already foreseen by Haggerty and Ericson (2000, p. 616), 'privacy is now less a line in the sand beyond which transgression is not permitted, than a shifting space of negotiation where privacy is traded for products, better services or special deals'. Concerns about privacy were omnipresent, and understandably so; however, discourses about privacy on PatientsLikeMe were contrary to what we expected: such practices and their outcomes seemed to challenge the 'in-house assumptions' (Alvesson & Sandberg, 2011) dominating the literature on surveillance in platforms, which did not easily recognize the potential for new developments.

In less than two decades, platforms have fulfilled the prediction with which Haggerty and Ericson (2000) concluded their study: the 'disappearance of disappearance' in surveillance capitalism's total reach, 'making the Panopticon seem prosaic' (Galič et al., 2017, p. 25). However, even though the platforms' rhetoric may constitute an order through which patient engagement in surveillance practices might be tailored to corporations' self-interests (Rose, 1999), patients could not simply be equated with naïve dopes subjected to the exploitative gaze of surveillance capitalism (Zuboff, 2019), and they did not seem to engage with PatientsLikeMe in a consumerist fashion, contrary to what other recent works on healthcare technologies have denounced (see Visser et al., 2019). PatientsLikeMe patients, who perceive benefits to self and others, and who feel 'in charge' of their data, tend to choose data sharing over privacy and opt for personalization via collaborative modes of surveillance facilitated by new technologies (Topol, 2014). Hence, a communitarian perception of privacy prevails: a shared liability based on crafting a balance between the secrecy of individual information and the responsibility of aiding the common good (Etzioni, 1999).

Overall, the effects of and responses to surveillance invite us to think beyond binary and reactive responses to surveillance, and answer calls for a more encompassing view of surveillance as it unfolds in platform organizations (Majchrzak et al., 2013). One notable effect in this view of surveillance is responsibilization. To generate medical knowledge and effectively contribute to levelling the power dynamics in healthcare, surveillance in and through platforms warrants a coordinated approach and responsibilization (Shamir, 2008) at all ends of the spectrum, not only the neoliberal responsibilization of patients: at the macro-level, for data security and the management of surveillance, and at the micro-level of patients, as the sustainability of the system relies on patients' literacy and expertise in providing their data and updating them regularly. Data (il)literacy remains an important struggle, as it may obstruct decisions concerning patient care and scientific knowledge production, as well as power dynamics, since disaccord may occur between patients and physicians over the standards used in surveillance tools to track patient health (Bove et al., 2013). Another effect is personalization, which poses ambiguities in terms of data manipulation and discrimination that potentially put at risk patient care and transparency in patient–physician collaboration. Personalized medicine via platform surveillance is still in its early stages. For it to work efficiently, the vast amount of de-identified data will require the necessary infrastructure, expertise, and systematic validation from the involved parties and tools (Topol, 2014) in order to be analyzed and made sense of. It will also require giving patients control over access to the specific health data that could expose them to stigma and discrimination in the management of their care, in 'a double movement of autonomization and responsibilization' (Rose, 1999, p. 174; Shamir, 2008).

Overall, our findings point to the empowerment of patients and the inclusion of multiple stakeholders in medical knowledge generation through convergent surveillance practices in and beyond platforms. This moves the traditional loci of patient and disease tracking (hospitals, laboratories, and doctors' offices) toward a personal and collective setting, which challenges the existing scientific research infrastructure and the epistemological, expert-based tradition of medical research (Kallinikos & Tempini, 2014). Although we find that PatientsLikeMe offers great potential for patients to become literate in understanding and analyzing their illnesses, whether and to what extent this potential will be realized, and what conditions are required for its realization, are significant questions for future research on PatientsLikeMe and other platforms. For instance, designing platforms in ways that promote literacy (Firat & Vicdan, 2008) and increase the inclusion of patients in the design of surveillance tools, the constitution of surveillance practices, and oversight of the disclosure and sharing of data will be essential. Although platforms harbor the potential for increasing the scope and directionality of surveillance, avoiding some of the discriminatory effects of surveillance requires constant vigilance by all stakeholders, including governments, advocacy groups, companies, and citizens, and granting data subjects control over the flow of information among data partners (Lyon, 2002).

Concluding remarks

As we witness the global reach of the COVID-19 pandemic, we realize the astounding contemporary relevance of how 'surveillance is driven by the desire to bring systems together, to combine practices and technologies and integrate them into a larger whole (…) providing for exponential increases in the degree of surveillance capacity' (Haggerty & Ericson, 2000, p. 210). In early April 2020, a dedicated forum on PatientsLikeMe included over 70,000 members and a very active Twitter account, where patients and health providers interacted to clarify questions about the outbreak, filling in and refining the COVID symptom tracker, and reporting and comparing treatments received. Such initiatives based on citizen-generated data (e.g., eu-citizen.science) are burgeoning worldwide as a possible response to fast-evolving global sanitary crises, for early detection (cf. Joshi et al., 2020) and for monitoring the spread of the virus (Knight, 2020). This provides a contemporary macro-justification of the openness philosophy of sharing micro-health data defended by PatientsLikeMe and others, and is something that each individual can contribute to while still complying with lockdown measures. However, this only seems possible through particular surveillant assemblages like the one we have described in this analysis, in which flows of information from a variety of surveillance systems converge, geared at producing and distributing knowledge, while actors remain alert to potential manipulations and take on new responsibilizations, as future research might continue to explore.

Acknowledgments

The authors thank Editor Thomas Roulet and the anonymous reviewers for their support and help in revising this work, as well as the members of OCE Research Centre and the MOTI seminar at GEM for their precious feedback on earlier versions.

References

Alaimo, C. & Kallinikos J. (2017). Computing the everyday: Social media as data platforms. The Information Society, 33(4), 175−191. doi: 10.1080/01972243.2017.1318327

Albrechtslund, A. (2008) Online social networking as participatory surveillance. First Monday, 13(3). Retrieved from http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2142/1949

Albrechtslund, A. & Dubbeld, L. (2005). The plays and arts of surveillance: Studying surveillance as entertainment. Surveillance and Society, 3(2/3), 216−221. doi: 10.24908/ss.v3i2/3.3502

Alvesson, M. & Sandberg, J. (2011). Generating research questions through problematization. Academy of Management Review, 36(2), 247−271. doi: 10.5465/amr.2009.0188

Andrejevic, M. (2012). Ubiquitous surveillance. In K. Ball, K. Haggerty & D. Lyon (Eds.), Routledge handbook of surveillance studies (pp. 91−98). Taylor & Francis.

Angwin, J. & Stecklow, S. (2010). Scrapers dig deep for data on web. The Wall Street Journal. Retrieved from http://online.wsj.com/articles/SB10001424052748703358504575544381288117888

Arnst, C. (2008, December 4). Health 2.0: Patients as partners. BusinessWeek. https://www.bloomberg.com/news/articles/2008-12-03/health-2-dot-0-patients-as-partners

Bove, R., Secor, E., Healy, B. C., Musallam, A. et al. (2013). Evaluation of an online platform for multiple sclerosis research: Patient description, validation of severity scale, and exploration of BMI effects on disease course. PLoS One, 8(3), e59707. doi: 10.1371/journal.pone.0059707

Boyd, D. M. & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679. doi: 10.1080/1369118X.2012.678878

Brin, D. (1998). The transparent society: Will technology force us to choose between privacy and freedom? Addison-Wesley.

Brivot, M. & Gendron, Y. (2011). Beyond panopticism: On the ramifications of surveillance in a contemporary professional setting. Accounting, Organizations and Society, 36, 135–155. doi: 10.1016/j.aos.2011.03.003

Caliandro, A. & Gandini, A. (2017). Qualitative research in digital environments: A research toolkit. Routledge.

Cecez-Kecmanovic, D., Galliers, R. D., Henfridsson, O., Newell, S. et al. (2014). The sociomateriality of information systems: Current status, future directions. MIS Quarterly, 38(3), 809–830. doi: 10.25300/MISQ/2014/38:3.3

Chanal, V. & Caron-Fasan, M. (2010). The difficulties involved in developing business models open to innovation communities: The case of a crowdsourcing platform. M@n@gement, 13(4), 318−341. doi: 10.3917/mana.134.0318

Cohen, E. (2008). Patients find support, help via online networking. http://edition.cnn.com/2008/HEALTH/10/09/ep.health.web.sites/index.html

Cohen, J. E. (2013). What privacy is for. Harvard Law Review, 126(7), 1904−1933. Retrieved from https://ssrn.com/abstract=2175406

Cohen, J. E. (2018). The biopolitical public domain: The legal construction of the surveillance economy. Philosophy & Technology, 31(2), 213−233. doi: 10.1007/s13347-017-0258-2

Cox, A. D., Cox, D. & Mantel, S. P. (2010). Consumer response to drug risk information: The role of positive affect. Journal of Marketing, 74(July), 31−44. doi: 10.1509/jmkg.74.4.031

De La Robertie, C. & Lebrument, N. (2019). Unplugged – Thinking the organisational and managerial challenges of intelligent towns and cities: A critical approach to the Smart Cities phenomenon. M@n@gement, 22(2), 357–372. doi: 10.3917/mana.222.0357

Delanda, M. (2016). Assemblage theory. Edinburgh University Press.

Deleuze, G. (1990). Post-scriptum sur les sociétés de contrôle. L’Autre Journal 1, 1–8.

Deleuze, G. & Guattari, F. (1980). Capitalisme et schizophrénie, tome 2 : Mille plateaux. Éditions de Minuit.

Epstein, C. (2007). Guilty bodies, productive bodies, destructive bodies: Crossing the biometric borders. International Political Sociology, 1(2), 149–164. doi: 10.1111/j.1749-5687.2007.00010.x

Etzioni, A. (1999). Limits of privacy. Basic Books.

Eysenbach, G. (2008). Medicine 2.0: Social networking, collaboration, participation, apomediation, and openness. Journal of Medical Internet Research, 10(3), e22. doi: 10.2196/jmir.1030

Farinosi, M. (2011). Deconstructing Bentham’s panopticon: The new metaphors of surveillance in the web 2.0 environment. TripleC, 9(1), 62−76. doi: 10.31269/triplec.v9i1.249

Firat, F. & Vicdan, H. (2008). A new world of literacy, information technologies, and the incorporeal selves: Implications for macromarketing thought. Journal of Macromarketing, 28(4), 381–396. doi: 10.1177/0276146708325385

Foster, J. B. & McChesney, R. W. (2014). Surveillance capitalism: Monopoly-finance capital, the military-industrial complex, and the digital age. Monthly Review, 66(3), 1. Retrieved from https://monthlyreview.org/2014/07/01/surveillance-capitalism/

Fotaki, M. (2009). Are all consumers the same? Choice in health, social care and education in England and elsewhere. Public Money & Management, 29(2), 87–94. doi: 10.1080/09540960902767956

Foucault, M. (1975). The birth of the clinic: An archaeology of medical perception. Vintage Books.

Foucault, M. (1979). Discipline and punish: The birth of the prison (A. Sheridan, Trans.). Vintage Books.

Frost, J. H. & Massagli, M. P. (2008). Social uses of personal health information within PatientsLikeMe, an online patient community: What can happen when patients have access to one another’s data. Journal of Medical Internet Research, 10(3), e15. doi: 10.2196/jmir.1053

Fuchs, C. (2011). Web 2.0, prosumption and surveillance. Surveillance and Society, 8(3), 288–309. doi: 10.24908/ss.v8i3.4165

Galič, M., Timan, T. & Koops, B. (2017). Bentham, Deleuze and beyond: An overview of surveillance theories from the panopticon to participation. Philosophy & Technology, 30, 9–37. doi: 10.1007/s13347-016-0219-1

Gill, M. (2019, September 2). Patient data for sale. British Medical Journal Blog. https://blogs.bmj.com/bmj/2019/09/02/michael-gill-patient-data-for-sale/

Gillespie, T. (2017). Algorithmically recognizable: Santorum’s Google problem, and Google’s santorum problem. Information, Communication & Society, 20(1), 63−80. doi: 10.1080/1369118X.2016.1199721

Gioia, D. A., Corley, K. G. & Hamilton, A. L. (2013). Seeking qualitative rigor in inductive research: Notes on the Gioia methodology. Organizational Research Methods, 16, 15–31. doi: 10.1177/1094428112452151

Glaser, B. G. & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine Publishing Co.

Goetz, T. (2008, March 23). Practicing patients. The New York Times. https://www.nytimes.com/2008/03/23/magazine/23patients-t.html

Goodwin, C. (1991). Privacy: Recognition of a consumer right. Journal of Public Policy and Marketing, 10(1), 149−166. doi: 10.1177/074391569101000111

Hafermalz, E. (2020). Out of the panopticon and into exile: Visibility and control in distributed new culture organizations. Organization Studies, 42(5), 697–717. doi: 10.1177/0170840620909962

Haggerty, K. D. & Ericson, R. V. (2000). The surveillant assemblage. British Journal of Sociology, 51(4), 605−622. doi: 10.1080/00071310020015280

Haggerty, K. (2006). Tear down the walls: On demolishing the panopticon. In D. Lyon (Ed.), Theorising surveillance: The panopticon and beyond (pp. 23–45). Willan Publishing.

Han, B.-C. (2015). The transparency society. Stanford University Press.

Hesse, B. W., Hansen, D., Finholt, T., Munson, S. et al. (2010). Social participation in health 2.0. IEEE Computer, 43(11), 45−52. doi: 10.1109/mc.2010.326

Heywood, J. (2009, May 20). Testimony of James Heywood co-founder and chairman PatientsLikeMe Inc. before the national committee on vital and health statistics subcommittee on privacy, confidentiality and security. https://s3.amazonaws.com/patientslikeme_research/PLM_NCVHS_Testimony.pdf

Iedema, R. & Rhodes, C. (2010). The undecided space of ethics in organizational surveillance. Organization Studies, 31(2), 199−217. doi: 10.1177/0170840609347128

Inaugural JMIR Medicine 2.0® award goes to PatientsLikeMe researchers (2009, August 25). Journal of Medical Internet Research. https://www.jmir.org/announcements/27

Johnson, C. Y. (2008, November 6). Through website, patients creating their own drug studies. Boston Globe. https://archive.boston.com/news/health/articles/2008/11/16/through_website_patients_creating_own_drug_studies/

Joshi, A., Sparks, R., McHugh, J., Karimi, S. et al. (2020). Harnessing tweets for the early detection of an acute disease event. Epidemiology, 31(1), 90−97. doi: 10.1097/EDE.0000000000001133

Kallinikos, J. & Tempini, N. (2014). Patient data as medical facts. Information Systems Research, 25(4), 817–833. doi: 10.1287/isre.2014.0544

Kitchin, R. (2015). The opportunities, challenges and risks of big data for official statistics. Statistical Journal of the IAOS, 31, 471–481. doi: 10.3233/SJI-150906

Knight, W. (2020). How AI is tracking the coronavirus outbreak. Wired, 08 February 2020, Retrieved from https://www.wired.com/story/how-ai-tracking-coronavirus-outbreak/

Koskela, H. (2004). Webcams, TV shows and mobile phones: Empowering exhibitionism. Surveillance and Society, 2, 199–215. doi: 10.24908/ss.v2i2/3.3374

Kozinets, R. V. (2010). Netnography: Doing ethnographic research in the age of the Internet. Sage.

Langer, R. & Beckman, S. C. (2005). Sensitive research topics: Netnography revisited. Qualitative Market Research: An International Journal, 8(2), 189–203. Retrieved from http://www.emeraldinsight.com/1352-2752.htm

Lyon, D. (1994). The electronic eye. The rise of surveillance society. University of Minnesota Press.

Lyon, D. (2001). Surveillance society: Monitoring everyday life. Open University Press.

Lyon, D. (2002). Everyday surveillance: Personal data and social classifications. Information, Communication & Society, 5(2), 242−257. doi: 10.1080/13691180210130806

Lyon, D. (2006). The search for surveillance theories. In D. Lyon (Ed.), Theorising surveillance: The panopticon and beyond (pp. 3–20). Willan Publishing.

Lyon, D. (2007). Surveillance, power, and everyday life. In R. Mansell, C. A. Avgerou, D. Quah & R. Silverstone (Eds.), The Oxford handbook of information and communication technologies (pp. 449−472). Oxford University Press.

Lyon, D. (2014). Surveillance, Snowden, and big data: Capacities, consequences, critique. Big Data and Society, 1−13. doi: 10.1177/2053951714541861

Majchrzak, A., Faraj, S., Kane, G. C. & Azad, B. (2013). The contradictory influence of social media affordances on online communal knowledge sharing. Journal of Computer Mediated Communication, 19(1), 38−55. doi: 10.1111/jcc4.12030

Mann, S., Nolan, J. & Wellman, B. (2003). Sousveillance: Inventing and using wearable computing devices for data collection in surveillance environments. Surveillance and Society, 1(3), 331−355. doi: 10.24908/ss.v1i3.3344

Orlikowski, W. J. & Scott, S. V. (2014). What happens when evaluation goes online? Exploring apparatuses of valuation in the travel sector. Organization Science, 25(3), 868–891. doi: 10.1287/orsc.2013.0877

Ouahab, A. & Maclouf, E. (2019). Diversity and struggles in critical performativity: The case of French community-supported agriculture. M@n@gement, 22(4), 537−558. Retrieved from https://management-aims.com/index.php/mgmt/article/view/4254

Orwell, G. (1949). Nineteen eighty-four. Penguin.

Park, E., Im, G., Storey, V. C. & Baskerville, R. L. (2019). Never, never together again: How post-purchase affect drives consumer outcomes within the context of online consumer support communities. Journal of the Association for Information Systems, 20(1), 58–104. doi: 10.17705/1jais.00529

Picard, S., Steyer, V., Philippe, X. & Pérezts, M. (2017). Exploring corporations' activism: Predatory modus operandi and its effects on institutional field dynamics. In C. Garsten & A. Sorböm (Eds.), Power, policy and profit: Corporate engagement in politics and governance (pp. 152–169). Edward Elgar.

Pridmore, J. (2013). Collaborative surveillance: Configuring contemporary marketing practice. In K. Ball & L. Sneider (Eds.), The surveillance-industrial complex: A political economy of surveillance (pp. 107−121). Routledge.

Rier, D. A. & Indyk, D. (2006). The rationale of interorganizational linkages to connect multiple sites of expertise, knowledge production, and knowledge transfer. Social Work in Health Care, 42(3−4), 8−27.

Rose, N. (1999). Powers of freedom. Cambridge University Press.

Schonfeld, E. & Morrison, C. (2007). The next disruptors: The 10 game changing startups most likely to upend existing industries and spawn new entrepreneurial opportunities. Business 2.0, 8(8), 56–64.

Schor, J. B. & Attwood-Charles, W. (2017). The ‘sharing’ economy: Labor, inequality, and social connection on for-profit platforms. Sociology Compass, 11(8), e12493. doi: 10.1111/soc4.12493

Sewell, G. & Barker, J. R. (2006). Coercion versus care: Using irony to make sense of organizational surveillance. Academy of Management Review, 31(4), 934−961. doi: 10.5465/amr.2006.22527466

Shamir, R. (2008). The age of responsibilization: On market-embedded morality. Economy and Society, 37(1), 1−19. doi: 10.1080/03085140701760833

Singer, N. (2010, May 29). When patients meet online: Are there side effects? The New York Times. https://www.nytimes.com/2010/05/30/business/30stream.html

Solove, D. J. (2002). Conceptualizing privacy. California Law Review, 90(4), 1087−1155. doi: 10.2307/3481326

Srnicek, N. (2016). Platform capitalism. Polity.

Taupin, B. (2019). The role of nonhuman entities in institutional work: The case of the ocean in a surfing-centered local economy. M@n@gement, 22(4), 584−618. doi: 10.3917/mana.224.0584

Tempini, N. (2015). Governing PatientsLikeMe: Information production and research through an open, distributed, and data-based social media network. The Information Society, 31(2), 193−211. doi: 10.1080/01972243.2015.998108

Topol, E. (2014). Individualized medicine from prewomb to tomb. Cell, 157(1), 241−253. doi: 10.1016/j.cell.2014.02.012

Topol, E. (2015). The patient will see you now: The future of medicine is in your hands. Basic Books.

Vaz, P. & Bruno, F. (2003). Types of self-surveillance: From abnormality to individuals ‘at risk’. Surveillance & Society, 1(3), 272−291. doi: 10.24908/ss.v1i3.3341

Visser, L. M., Benschop, Y. W. M., Bleijenbergh, I. L. & van Riel, A. C. R. (2019). Unequal consumers: Consumerist healthcare technologies and their creation of new inequalities. Organization Studies, 40(7), 1025–1044. doi: 10.1177/0170840618772599

Weiss, R. M. (2005). Overcoming resistance to surveillance: A genealogy of the EAP discourse. Organization Studies, 26(7), 973−997. doi: 10.1177/0170840605054600

West, S. M. (2019). Data capitalism: Redefining the logics of surveillance and privacy. Business & Society, 58(1), 20–41. doi: 10.1177/0007650317718185

Zikmund, W. G. & Babin, B. J. (2006). Exploring marketing research (9th ed.). South-Western.

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89. doi: 10.1057/jit.2015.5

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Public Affairs.

Footnotes

1. Retrieved from http://www.patientslikeme.com/about

2. According to the latest available list (dated January 2019, Retrieved from https://patientslikeme-bibliography.s3.amazonaws.com/PLM%20Research%20Manuscripts%20Bibliography.pdf), PatientsLikeMe has participated in the production of 111 scientific publications.

3. See, for instance, the profile and charts of Heywood (deceased in 2006), made openly available at https://www.patientslikeme.com/members/40

4. In the USA, this is led by the FDA, usually performed via clinical trials with small numbers of patients in controlled circumstances.

5. MedWatch is the FDA's voluntary adverse drug event reporting system, largely used by healthcare providers and inaccessible to patients until this partnership.