Laplace once argued that if one could "comprehend all the forces by which nature is animated," it would be possible to predict the future and explain the past. The advent of analysis of large-scale data sets has been accompanied by newfound concerns about "Laplace’s Demon" as it relates to certain fields of science as well as management, evaluation, and audit. I begin by asking how statistical data are constructed, illustrating the hermeneutic acts necessary to create a variable. These include attributing a certain characteristic to a particular phenomenon, isolating the characteristic of interest, and assigning a value to it. In addition, a population must be identified and a sample must be "taken" from that population. Next, I examine how statistical analyses are conducted, examining the interpretive acts there as well. In each case, I show how big data add new challenges. I then show how statistics are incorporated into audits and evaluations, emphasizing how alternative interpretations are concealed in the audit process. I conclude by noting that these issues cannot be "resolved" as Laplace suggested. His Demon, already banished from physics, needs to be banished from other fields of science, management, audits, and evaluations as well.
In the 1950s and 1960s, prominent institutional economists in the United States offered what became the orthodox theory on the obstacles to commercializing scientific knowledge. According to this theory, scientific knowledge has inherent qualities that make it a public good. Since the 1970s, however, neoliberalism has emphasized the need to convert public goods to private goods to enhance economic growth, and this theory has had global impacts on policies governing the generation and diffusion of scientific research and innovation. We critique the foundational conceptualizations of scientific knowledge as either public or private by examining Germany’s treatment of scientific outputs as club goods. We then compare the relative impacts on social welfare of distinct United States and German approaches to food and agricultural research and innovation. We conclude with reflections on how these findings might contribute to a democratic debate on how best to manage scientific knowledge to enhance social welfare.
While the Quantified Self has often been described as a contemporary iteration of Taylorism, this article argues that a more accurate comparison is to be made with what Anson Rabinbach has termed the "European Science of Work." The European Science of Work sought to modify Taylor’s rigid and schematic understanding of the laboring body through the incorporation of insights drawn from the rich European tradition of physiological studies. This "softening" of Taylorist methods had the effect of producing a greater "isorhythmia" or synchronicity between the bodily rhythms of workers and those of the mode of production itself and was embraced by employers as a way to dampen worker militancy. Through a discursive analysis of the promotion of sensor analytics by management consultants VoloMetrix and Humanyze, I argue that the contemporary quantification of the workplace represents a similar project of "soft domination," as the intimate, bottom-up mode of surveillance it fosters seeks to more closely mold workers’ physiological and social rhythms to the structure of the workplace and the working day.
This paper explores how a particular form of regulation—prior ethical review of research—developed over time in a specific context, testing the claims of standard explanations for such change (which center on the role of exogenous shocks in the form of research scandals) against more recent theoretical approaches to institutional change, which emphasize the role of gradual change. To make its case, this paper draws on archival and interview material focusing on the research ethics review system in the UK National Health Service. Key insights center on the minimal role scandals play in shaping changes in this regulatory setting and how these depend upon the absence of a single coherent profession (and accompanying social contract) associated with biomedical research.
In recent years, US policy makers have faced persistent calls for the price of flood and hurricane insurance cover to reflect the true or real risk. The appeal to a true or real measure of risk is rooted in two assumptions. First, scientific research can provide an accurate measure of risk. Second, this information can and should dictate decision-making about the cost of insurance. As a result, contemporary disputes over the cost of catastrophe insurance coverage, hurricane risk being a prime example, become technical battles over estimating risk. Using examples from the Florida hurricane rate-making decision context, we provide a quantitative investigation of the integrity of these two assumptions. We argue that catastrophe models are politically stylized views of the intractable scientific problem of precise characterization of hurricane risk. Faced with many conflicting scientific theories, model theorists use choice and preference for outcomes to develop a model. Models therefore come to include political positions on relevant knowledge and the risk that society ought to manage. Earnest consideration of model capabilities and inherent uncertainties may help evolve public debate from one focused on "true" or "real" measures of risk, of which there are many, toward one of improved understanding and management of insurance regimes.
This paper contributes to two topics that have received insufficient attention in science and technology studies: the social dimensions of causal reasoning and how the knowledge-making site of expert testimony affects the production and reception of social scientific knowledge. It deals with how social scientists make causal claims when testifying as expert witnesses in trials where causal claims are relevant, using as a case study the so-called L’Aquila trial, in which experts were summoned by the parties to testify on the causes of risk-related behavior by the inhabitants of an Italian city in early 2009. In particular, I analyze the process of selection of causal loci, the attribution and removal of agency, the increase and decrease of causal factors in the explanation of an event, and the delimitation of the explanandum. As a general insight derived from the case, it is argued that the position of experts in a trial––which side summons them––may be a more important factor than their branch of expertise to account for certain types of these practices.
Current debates in science and technology studies emphasize that the bio-economy—or, the articulation of capitalism and biotechnology—is built on notions of commodity production, commodification, and materiality, emphasizing that it is possible to derive value from body parts, molecular and cellular tissues, biological processes, and so on. What is missing from these perspectives, however, is consideration of the political-economic actors, knowledges, and practices involved in the creation and management of value. As part of a rethinking of value in the bio-economy, this article analyzes three key political-economic processes: financialization, capitalization, and assetization. In doing so, it argues that value is managed as part of a series of valuation practices; it is not inherent in biological materialities.
In recent years, cross-national collaboration in medical research has gained increased policy attention. Policies are developed to enhance data sharing, ensure open access, and harmonize international standards and ethics rules in order to promote access to existing resources and increase scientific output. In tandem with this promotion of data sharing, numerous ethics policies are developed to control data flows and protect privacy and confidentiality. Both sets of policy making, however, pay limited attention to the moral decisions and social ties enacted in the everyday routines of scientific work. This paper takes its point of departure in the practices of a Danish laboratory with extensive experience in international collaboration on genetic research. We focus on a simple query: what makes genetic material and health data flow, and which hopes and concerns travel along with them? We explore what we call the flows, the nonflows, and the overflows of material and information, and we document the work producing the flows of health data and biomaterial. We call this work "ethics work" and argue that it is crucial for data sharing though it is rarely articulated in ethics policies, remains inadequately funded, and lacks acknowledgment in policies promoting international data sharing.
How might science and technology studies and science, technology and society studies (STS) learn from its studies of other knowledge traditions? This article explores this question by looking at Chinese medicine (CM). The latter has been under pressure from modernization and "scientization" for a century, and the dynamics of these pressures have been explored "symmetrically" within STS and related disciplines. But in this work, CM has been "the case" and STS theory has held stable. This article uses a CM term, reasoning-as-propensity (shi, 勢), to look at contemporary practices of cancer care in a hospital in Taiwan. It describes how shi (勢) informed the design of a new decoction, Kuan Sin Yin, while also relating to the production of scientific knowledge, biomedical interventions, Buddhist practices, and the patients living with cancer themselves. Does CM’s use of shi (勢) simply confirm the essential and incompatible otherness of CM? Looked at from outside the answer seems to be yes. However, this article explores how STS might change itself—and the theory–practice division in STS—by thinking through shi (勢) in dialogue with its othered object. This opens the possibility of an STS for CM.
Open Science policies encourage researchers to disclose a wide range of outputs from their work, thus codifying openness as a specific set of research practices and guidelines that can be interpreted and applied consistently across disciplines and geographical settings. In this paper, we argue that this "one-size-fits-all" view of openness sidesteps key questions about the forms, implications, and goals of openness for research practice. We propose instead to interpret openness as a dynamic and highly situated mode of valuing the research process and its outputs, which encompasses economic as well as scientific, cultural, political, ethical, and social considerations. This interpretation creates a critical space for moving beyond the economic definitions of value embedded in the contemporary biosciences landscape and Open Science policies, and examining the diversity of interests and commitments that affect research practices in the life sciences. To illustrate these claims, we use three case studies that highlight the challenges surrounding decisions about how––and how best––to make things open. These cases, drawn from ethnographic engagement with Open Science debates and semistructured interviews carried out with UK-based biologists and bioinformaticians between 2013 and 2014, show how the enactment of openness reveals judgments about what constitutes a legitimate intellectual contribution, for whom, and with what implications.
Archaeological data are shadowy in a number of senses. They are notoriously incomplete and fragmentary, and the sedimented layers of interpretive scaffolding on which archaeologists rely to constitute these data as evidence carry the risk that they will recognize only those data that conform to expectation. These epistemic anxieties further suggest that, once recovered, there is little prospect for putting "legacy" data to work in new ways. And yet the "data imprints" of past lives are a rich evidential resource; archaeologists successfully mine old data sets for new insights that redirect inquiry, often calling into question assumptions embedded in the scaffolding that made their recovery possible in the first place. I characterize three strategies by which archaeologists address the challenges posed by legacy data: secondary retrieval, recontextualization of primary data, and the use of old data in experimental simulations of the cultural past under study. By these means, archaeologists establish evidential claims of varying degrees of credibility, not by securing empirical bedrock but through a process of continuously building and rebuilding provisional empirical foundations.
Defunct satellites and other technological waste are increasingly occupying Earth’s orbital space, a region designated as one of the global commons. These dilapidated technologies that were commissioned to sustain the production and exchange of data, information, and images are an extraterrestrial equivalent of the media devices which are discarded on Earth. While indicating the extension of technological momentum in the shared commons of space, orbital debris conveys the dark side of media materialities beyond the globe. Its presence and movements interfere with a gamut of governmental, commercial, and scientific operations, contesting the strategies of its management and control and introducing orbital uncertainty and disorder in the global affairs of law, politics, economics, and techno-science. I suggest that this debris formation itself functions as a media apparatus: it not only embodies but also exerts its own effects upon the material and social relations that structure our ways of life, perplexing dichotomies between the common and owned, governed and ungovernable, wealth and waste. I explore these effects of debris, framing its situation in the orbital commons as a vital matter of concern for studies of the human relationship with media technologies and their waste.
Old age is not normally associated with innovativeness and technical prowess. To the contrary, when treating age as a distinct category, policy makers, innovation scholars, and companies typically regard younger people as drivers of innovation and early adopters of new technology. In this paper, we critically investigate this link between age, ineptness, and technology adoption using a case study of the diffusion of electric bikes in the Netherlands. We demonstrate how, during the first wave of e-bike acceptance, old age was constructed as an arena in which important learning processes took place and where older persons became early adopters of e-bikes. Theoretically, this paper speaks critically to the prolific literature on innovation diffusion and its treatment of adopter categories as generic concepts. Using age as a central dimension, our research highlights the situated and constructed nature of adopter categories, and thus challenges age-based assumptions about innovation and technology use by younger and older persons. These insights about what we term the rejuvenation of e-bikes help us rectify existing biases that cast older persons as an inherently problematic group of technology users.
Inclusivity is widely considered a requirement of defensible environmental risk consultations and is often either mandated or recommended to help ensure attention to stakeholders’ diverse views. Experience suggests the opposite: the emphasis on an inclusive consultation process often makes it impossible for decision makers to listen carefully to stakeholders and for citizens’ views to influence the design and choice of proposed actions. This paper briefly reviews the promise of environmental risk consultations before outlining several of the more serious problems associated with an emphasis on inclusivity: long lists of undifferentiated concerns, facts tainted by stakeholders’ perspectives and worldviews, little access to clarifying dialogue or tests of expertise, few opportunities to scrutinize knowledge quality, avoidance of controversial issues, and an overwhelming abundance of information. As a result, the promotion of inclusivity often serves as a convenient excuse for decision makers to silence citizens by substituting quantity for quality, breadth for depth, and an adversarial approach for dialogue and informed understanding.
This study examines the impact of alumni connections between evaluators and evaluatees on peer review ratings and selection success rates for Korean national R&D projects. Specifically, this study analyzed the evaluation results of 8,402 research proposal entries submitted between 2007 and 2011 for the "general researchers support project," all in the Natural Science and Engineering areas and sponsored by the National Research Foundation of Korea. Each proposal entry was evaluated by three evaluators, and approximately 39 percent of the proposals had at least one evaluator from the same university. The results of this study showed that evaluators tend to give relatively high scores to research proposals submitted by alumni of their own alma mater. Moreover, when the evaluator group included an evaluator from the same university as the evaluatee, the selection success rate was higher than when it did not. Such results show that, in the process of peer review–based research proposal evaluations for national R&D projects in South Korea, alumni connections have a significant influence on evaluation results.
This paper discusses the introduction of fraudulent "molecular detector" (non)technology into Mexico. The case is used to argue that contemporary science and technology studies’ approaches to scientific policy-making make basic assumptions about the societies they operate in that are inconsistent with the Mexican context. This paper also argues that, contrary to what happens in the so-called Global North, the relative power of Mexican science in government and policy circles is limited as much by its relatively weak position as by self-censorship and unrealized impact in the country’s fragile democracy. The case is also used to highlight the necessity for more politically involved scientific institutions in Mexico, as these become critical safeguards against incoming destabilizing technologies from more powerful nations into the local "peripheral" context.
A distinctive form of anticolonial analysis has been emerging from Latin America (LA) in recent decades. This decolonial theory argues that important new insights about modernity, its politics, and epistemology become visible if one starts off thinking about them from the experiences of those colonized by the Spanish and Portuguese in the Americas. For the decolonial theorists, European colonialism in the Americas, on the one hand, and modernity and capitalism (and their sciences) in Europe, on the other hand, coproduced and coconstituted each other. The effects of that history persist today. Starting thought from these LA histories and current realities enables envisioning new resources for social transformations. These decolonial insights seem to receive only a passing recognition in the Latin American social studies of science and technology projects that have begun cosponsoring events and publications with northern equivalents. My focus will be primarily on the decolonial theory and on just two of its themes. One is the critical resources it offers for creating more accurate and progressive northern philosophies and histories of science as well as social studies of science. The second is insights from Latin American feminists that carry different impacts in the context of the decolonial accounts.
This article argues that Gilbert Simondon’s philosophy of technology is useful for both science and technology studies (STS) and critical theory. The synthesis has political implications. It offers an argument for the rationality of democratic interventions by citizens into decisions concerning technology. The new framework opens a perspective on the radical transformation of technology required by ecological modernization and sustainability. In so doing, it suggests new applications of STS methods to politics as well as a reconstruction of the Frankfurt School’s "rational critique of reason."
Research often characterized as "new materialist" has staged a return/turn to nature in social and critical theory by bringing "matter" into the purview of our research. While this growing impetus to take nature seriously fosters new types of interdisciplinarity and thus new resources for knowing our nature-cultural worlds, its capacity to deal with power’s imbrication in how we understand "nature" is curtailed by its failures to engage substantively with the epistemological interventions of postcolonial feminist science studies. The citational practices of many new materialist thinkers eschew the existence of what Sandra Harding has called "a world of sciences." I argue that the "science" privileged and often conflated with matter in new materialist storytelling is the same science destabilized by postcolonial feminist science studies. This does not mean that new materialist feminisms and postcolonial feminist science studies are necessarily at odds, as new materialist storytelling and prevailing conceptualizations of the postcolonial seem to suggest. On the contrary, I suggest that thinking creatively, capaciously, pluralistically, and thus irreverently with respect to the rules of science––about the boundaries and meanings of matter, "life," and "humanness"––could be understood as a central project for a postcolonial feminist science studies.
This article examines a genetic ancestry testing program called the Living History Project (LHP) that was jointly organized by a nonprofit educational institute and a for-profit genealogy company in South Africa. It charts the precise mechanisms by which the LHP sought to shape a postapartheid genome through antiracist commitments aimed at contesting histories of colonial and apartheid rule in varied ways. In particular, it focuses on several tensions that emerged within three modes of material-discursive practice within the production of the LHP: subject recruitment, informed consent, and participant reflections. In the end, it argues that several contradictory tensions were central to the making of the LHP’s postapartheid genome and that it should be understood as nonracial rather than antiracist.
In 1984, eight-year-old Paula Logares was called into a judge’s chambers and was told the man and woman she lived with were not her parents. Her parents had been disappeared during the Dirty War, and now, through her blood, scientists would be able to return her to her birth family. Paula, thus, became the first "stolen" child in Argentina to be identified via the incipient technology of DNA identification. Since this forensic first, DNA identification has emerged as a central tool of good governance around the world. From routine crime fighting to international criminal tribunals, DNA plays a crucial role in attempts to reckon with crimes of the body. As an alternative origin for forensic DNA, Argentina offers an early example of science emerging from social movements in the Global South. Drawing on twenty-seven months of fieldwork with family members, activists, and scientists, this article documents the ways in which DNA has emerged as a core site of subject formation for individuals and families affected by the terror of the dictatorship and for the Argentine nation-state, as it reckons with the legacies of repression. Through a feminist, postcolonial frame, I offer the concept of re(con)stitution as a way of attending to the forms of biocitizenship that emerge during times of humanitarian crisis and transitional justice. As a tool of reproductive governance, forensic DNA acts not only as a powerful disciplinary site of biocitizenship but also as a potential space to reimagine the social contract between the body, the public, and the state.
This special issue explores intersections of feminism, postcolonialism, and technoscience. The papers emerged out of a 2014 research seminar on Feminist Postcolonial Science and Technology Studies (STS) at the Institute for Research on Women and Gender, University of Michigan. Through innovative engagement with rich empirical cases and theoretical trends in postcolonial theory, feminist theory, and STS, the papers trace local and global circulations of technoscience. They illuminate ways in which science and technology are imbricated in circuits of state power and global inequality and in social movements resisting the state and neocolonial orders. The collection foregrounds the importance of feminist postcolonial STS to our understandings of technoscience, especially how power matters for epistemology and justice.
We examine the criticisms and subsequent changes that arise in the course of peer review. Fifty-two scholars who had recently published in Administrative Science Quarterly were surveyed regarding their peer review experience and how their article changed from initial journal submission to eventual publication. Papers that challenged theoretical perspectives faced distinctively high levels of criticism and change, particularly with attention to methodology, while those that offered a new perspective or that extended or combined established perspectives were less criticized and changed. The number of challenge-oriented publications was small as well, suggesting that either few such submissions survive the review process or few are submitted in the first place. Overall, peer review appears open to expansion of the variety of theoretical argument but does little to aid in the winnowing out of established perspectives.
"Informed consent" implicitly links the transmission of information to the granting of permission on the part of patients, tissue donors, and research subjects. But what of the corollary, informed refusal? Drawing together insights from three moments of refusal, this article explores the rights and obligations of biological citizenship from the vantage point of biodefectors—those who attempt to resist technoscientific conscription. Taken together, the cases expose the limits of individual autonomy as one of the bedrocks of bioethics and suggest the need for a justice-oriented approach to science, medicine, and technology that reclaims the epistemological and political value of refusal.
This study draws on interviews with forty-nine members of a biomedical research community in the UK that is involved in negotiating data sharing and access. During an interview, an interviewee used the words "ethical moment" to describe a confrontation between collaborators in relation to data sharing. In this article, I use this as a lens for thinking about relations between "the conceptual and the empirical" in a way that allows both analyst and actor to challenge the status quo and consider other ethical possibilities. Drawing on actor network theory (ANT), I approach "the empirical" using the concepts of controversy and ontological uncertainty as methodological tools to tackle the problem of ethics. I suggest that these concepts also provide a bridge for understanding the ontological structure of the virtual and the actual, as described in Deleuze’s Difference and Repetition. While other science and technology studies scholars have sought to draw on Deleuze, this article addresses the integration of ethics and empirical research. It arises as a critical reaction to existing treatments of this problem as found in empirical ethics, especially in the sociology of bioethics, and indirectly in ANT texts.
Capitalist dynamics in knowledge production are not limited to situations in which economic interests influence researchers’ practices. Building on laboratory studies and the French "pragmatic" tradition in sociology, this article proposes an approach to tackle more pervasive capitalist logics at work in contemporary research and their consequences. It uses the term epistemic capitalism to denote the accumulation of capital, as worth made durable, through the act of doing research, in and beyond academia. In doing so, it conceptualizes capitalism primarily not as a system of circulation and accumulation of monetary value but rather as a cultural way of producing, attributing, and accumulating specific forms of worth, which need not be monetary. Empirically, the article studies variants of epistemic capitalism by addressing the differing role of the accumulation of different forms of capital, and the regimes connected to it, in two institutional settings in Austria: academic life science laboratories and biotechnology start-up companies. In conclusion, it argues that analytically dissociating the concept of capitalism from its link to economic value allows a finer-grained cultural analysis of the importance and effects of processes of accumulation in contemporary research. It ends by discussing the normative implications of these findings for debates about the commercialization of academia.
This paper links two domains of recent interest in science and technology studies, complexity and ignorance, in the context of knowledge practices observed among synthetic biologists. Synthetic biologists are recruiting concepts and methods from computer science and electrical engineering in order to design and construct novel organisms in the lab. Their field has taken shape amidst revised assessments of life’s complexity in the aftermath of the Human Genome Project. While this complexity is commonly taken to be an immanent property of biological systems, this article presents an epistemological view of complexity according to which complexity relates to a specific scientific theory or model and refers to that which exceeds the theory or model’s explanatory power. This epistemological view allows us to narrate a particular story about the changing relationship between biology and synthetic biology in the last decade and accounts for early knowledge practices in synthetic biology that "ignored" biology. This article further argues that while the failure of ignorance to produce clear-cut results for synthetic biologists has led practitioners back to biology, the entanglements between different pragmatic orientations and ways of knowing trouble the implications of this return for assessments of the complexity of biological systems.
Future Earth is an evolving international research program and platform for engagement aiming to support transitions toward sustainability. This article discusses processes that led to Future Earth, highlighting its intellectual emergence. I describe how Future Earth has increased space for contributions from the social sciences and humanities despite powerful, long-standing preferences for bio-geophysical research in global environmental research communities. I argue that such preferences nevertheless are deeply embedded in scientific institutions that continue to shape environmental science agendas and, as such, constitute a formidable obstacle that needs to be recognized and countered to bolster efforts at effective societal transformation in the face of sustainability challenges. The analysis draws on two decades of observant participation in environmental research communities in the United States, Europe, Brazil, and elsewhere, including participation in the visioning process that led to Future Earth.
On November 28, 2009, as part of events marking the twenty-fifth anniversary of the disaster at the Union Carbide plant in Bhopal, gas survivors protested the contents of a report prepared by government scientists that mocked their complaints about contamination. The survivors shifted from the scientific document to a mediated lunch invitation performance, purporting to serve as food the same chemicals that the report had categorized as having no toxic effects. I argue that the lunch spread, consisting of soil and water from the pesticide plant, explicitly front-staged and highlighted the survivors’ forced intimate relationship with such chemicals, in order to reshape public perception of risks from toxins. Chemical matter like Sevin tar and naphthol tar bound politicians, scientists, corporations, affected communities, and activists together, as these stakeholders debated the potential effects of toxic substances. This gave rise to an issue-based "chemical public." Borrowing from such theoretical concepts as "ontologically heterogeneous publics" and "agential realism," I track the existing and emerging publics related to the disaster and the campaigns led by the International Campaign for Justice in Bhopal advocacy group.
The governments of China, India, and the United Kingdom are unanimous in their belief that bioinformatics should supply the link between basic life sciences research and its translation into health benefits for the population and the economy. Yet at the same time, as ambitious states vying for position in the future global bioeconomy, they differ considerably in the strategies adopted in pursuit of this goal. At the heart of these differences lies the interaction between epistemic change within the scientific community itself and the apparatus of the state. Drawing on desk-based research and thirty-two interviews with scientists and policy makers in the three countries, this article analyzes the politics that shape this interaction. From this analysis emerges an understanding of the variable capacities of different kinds of states and political systems to work with science in harnessing the potential of new epistemic territories in global life sciences innovation.
This article investigates "urks," that is, disconnected parts of urban infrastructure that remain in their subsurface location. We engage with this topic out of resource scarcity concerns, as urks contain large amounts of copper and aluminum that could be "mined" for the benefit of the environment. Our starting point is that there is a certain nonstagnant capacity of waste-like entities such as urks and that their resistance to categorization is crucial to encapsulate their political potential (cf. Hawkins 2006; Moore 2012; Hird 2013). We investigate how this indeterminate capacity has implications in terms of where future trajectories for urk recovery are conceivable. The study is based on interviews with respondents from the infrastructure and waste sectors in Sweden. By stressing the relationship between urks and their geosocial subsurface surroundings, we use the respondents’ exploratory interpretations of urks to outline a spectrum of issues that should be further discussed for urks to become a matter of concern. The negotiation of these issues, we suggest, can be conceived of as a form of navigation along the perceived fault lines between actors and priorities, and these issues must be resolved for increased urk recovery to occur.
This article contributes to Science and Technology Studies on vulnerability by putting cyborgs at center stage. What vulnerabilities emerge when technologies move under the skin? I argue that cyborgs face new forms of vulnerability because they have to live with a continuous, inextricable intertwinement of technologies and their bodies. Inspired by recent feminist studies on the lived intimate relationships between bodies and technologies, I suggest that sensory experiences, material practices, and cartographies of power are important heuristic tools to understand the vulnerabilities of hybrid bodies. Based on an analysis of how patients in the Netherlands and the United States cope with appropriate and inappropriate implantable cardioverter defibrillator shocks, I describe how defibrillators introduce two new kinds of vulnerabilities: vulnerability as an internal rather than an external threat, and as harm one may try to anticipate but can never escape. Despite these vulnerabilities, some heart patients do not position themselves as passive victims of faulty machines. They actively engage in material practices of resilience by using magnets to stop inappropriate shocks. I conclude that anticipating and taming the improper working of technologies inside bodies constitutes a new form of invisible labor that is crucial to diminishing the existential uncertainties of cyborgs.
Science and emotions are typically juxtaposed: science is considered rational and unattached to outcomes, whereas emotions are considered irrational and harmful to science. Ethnographic studies of the daily lives of scientists have problematized this opposition, focusing on the emotional experiences of scientists as they go about their work, but they reveal little about disciplinary differences. We build on these studies by analyzing Citation Classics: accounts about the making of influential science. We document how highly cited scientists retrospectively describe emotional aspects of their research and assess variation in these narratives across six diverse disciplines: Chemistry; Clinical Medicine; Neurobiology; Physics; Plant and Animal Science; and Psychology and Psychiatry. Using correspondence analysis, we develop a multidimensional model to explain disciplinary variation in scientists’ accounts of emotions and link this variation to internal, external, and material aspects of the disciplines. We find differences in norms of appropriate emotional expression, or "feeling rules," between the "hard" and "soft" sciences, the basic and applied sciences, and the sciences that study living organisms versus those that study organs, cells, or atoms. By comparing accounts across disciplines and elaborating the structuring principles underlying these patterns, we synthesize knowledge from varied case studies into an integrative and multifaceted model.
In 1942, Katherine Frost Bruner published an article titled "Of psychological writing: Being some valedictory remarks on style." It appeared in the Journal of Abnormal and Social Psychology, the journal for which she served as editorial assistant between 1937 and 1941. Her collection of advice to writing scholars has been widely quoted, including by several editions of The Publication Manual of the American Psychological Association. The most frequently quoted message in Bruner’s article deals with the importance of making sure that references in academic texts are complete and accurate. Exploring the citation history of this particular message reveals an ironic point: the great majority of those who have quoted Bruner’s words on reference accuracy have not done so accurately. The case may serve as a reminder of the importance of the basic academic principle of striving to use primary sources. The most startling finding in this study is how frequently this principle is violated, even by authors who advise and educate academic writers.
Standards that codify sustainability, such as Ethical Trade, Fairtrade, Organic, and Rainforest Alliance, have become a common means for value chain actors in the Global North to make statements about the values of their products and the practices of producers in the Global South. This case study of Tanzanian tea value chains takes a closer look at how sustainability, in the form of SustainabiliTea, is done by actors who did not participate in defining and standardizing the form of sustainability with which they are meant to comply. Based on data collected during a multisited ethnography, I explore the performative nature of sustainability standards. The analysis reveals sustainable projects, sustainable markets, sustainable farm management, and sustainable qualities. These multiple SustainabiliTeas work together to construct a single vision of SustainabiliTea, which is a means to sustain the enterprise. I argue that the use of standards to guide performances makes some technical and political stakes visible while rendering others invisible. By paying attention to residual categories and to the tensions between knowledge and materiality, and by listening to those voices at the margins, we see what is at stake in the maintenance of SustainabiliTea: survival in the tea market.
This article examines the foundation myths of Brazil in the last two centuries, paying particular attention to the relationship between these myths and governmental attitudes toward the hybridity of Northern and Southern ethnic and technoscientific entities. Based upon this examination, the article argues that it is important to consider both the wider temporal frames and the shifts and sedimentations that have formed current foundation myths and shaped their relation to science and technology. Postcolonial science and technology studies theories illuminate aspects of this trajectory, but our analysis suggests a more complex scenario that involves internal political dynamics and the work of local intellectuals. We argue that the example of Brazilian social scientists should encourage scholars to go beyond the current focus on breaking the myths of technoscience and undertake mythmaking initiatives with wider societal resonances.
This article examines the political controversy in the United States surrounding a new process for the disposition of human remains, alkaline hydrolysis (AH). AH technologies use a heated (sometimes pressurized) solution of water and strong alkali to dissolve tissues, yielding an effluent that can be disposed of through municipal sewer systems, and brittle bone matter that can be dried, crushed, and returned to the decedent’s family. Though AH is legal in eight US states, opposition to the technology remains strong. Opponents express concerns about public health and safety and about the dignity of our mortal remains. Proponents focus on AH’s environmental benefits over cremation and earth burial, aligning the technology with the "green burial" movement. Drawing from historical sources, Science, Technology, and Society literature, interviews with funeral professionals, industry literature, and various media sources, this article examines four prominent conceptions of the dead human body as they are deployed (and inflected) by various funeral stakeholders seeking to exercise authority over the dead human body, to influence the trajectory of AH technology in the United States, and to chart a course for US deathcare culture in the twenty-first century.
This article presents a case study of a recent controversy over the use of computed tomography (CT) as a diagnostic technology in South Korean hospitals. The controversy occurred in the wake of a series of conflicts in the late twentieth century over the legitimate placement of healing practices, medicinal substances, and medical technologies within Korea’s separate "Western Medicine" (WM) and "Korean Medicine" (KM) systems of health care and pharmaceutical distribution. The controversy concerned an attempt to use high-tech imaging technology—the epitome of modern medicine—in a clinic that maintains a strong ideological attachment to Korean healing traditions. A close study of this dispute, based on interviews, participant observation, and documentary analysis, showed that discursive positions taken about the translatability of medical technologies changed with the context of dispute, and did not reflect a stable epistemic boundary between rival medical paradigms.
In the late nineteenth and early twentieth century, the Peruvian Andes ranked as a key international destination for those afflicted with one of the world’s deadliest diseases, tuberculosis. Physicians, scientists, policy makers, and patients believed that high-elevation mountain climates worldwide would help cure the disease. The historical processes driving the creation of Andean health resorts, which are understudied in the historiography, tell an important story in the history of tuberculosis, and also reveal how global health initiatives and disease treatment played out within the global South, where national forces and local environmental conditions influenced the trajectory of science and medicine. Jauja, Peru, became an internationally recognized health resort for tuberculosis treatment not only through science and medicine but also through national political integration campaigns, transportation initiatives, economic development agendas, social (race and class) relations, cultural perspectives of the Andean landscape, and the impact of the physical environment. This historical case about the evolution of Jauja reveals how science and medicine are shaped by distinct spatial forces that illuminate a geography of science in the postcolonial setting, as well as the ways in which climate is culturally constructed in specific sites, by different peoples, and at distinct points in time.
The relevance of scientific knowledge for science and technology policy and regulation has led to a growing debate about the role of values. This article contributes to the clarification of what specific functions cognitive and noncognitive values adopt in knowledge generation and decisions, and what consequences the operation of values has for policy making and regulation. For our analysis, we differentiate between three different types of decision approaches, each of which shows a particular constellation of cognitive and noncognitive values. Our objectives are to present a structured analysis of the varying functions that different kinds of values can adopt, as well as the value-related tensions and trade-offs they give rise to. We argue that the operation of noncognitive values in scientific knowledge generation, policy, and regulatory decision making can be understood as an enabling factor, rather than a limiting one.
This article reports on ethnographic research into the practical and ethical consequences of the implementation and use of telecare devices for older people living at home in Spain and the United Kingdom. Telecare services are said to allow the maintenance of their users’ autonomy through connectedness, relieving the isolation from which many older people suffer amid rising demands for care. However, engaging with Science and Technology Studies (STS) literature on "user configuration" and implementation processes, we argue here that neither services nor users preexist the installation of the service: they are better described as produced along with it. Moving beyond design and appropriation practices, our contribution stresses the importance of installations as specific moments where such emplacements take place. Using Etienne Souriau’s concept of instauration, we describe the ways in which, through installation work, telecare services "bring into existence" their very infrastructure of usership. Hence, both services and telecare users are effects of fulfilling the "felicity conditions" (technical, relational, and contractual) of an achieved installation.
Since the turn of the millennium, the major development agencies have been promoting "knowledge for development," "ICT for development," or the "knowledge economy" as new paradigms to prompt development in less-developed countries. These paradigms display an unconditional trust in the power of Western technology and scientific knowledge to trigger development—they smack of epistemic and technological determinism. This article probes, by means of a genealogy, how and when development cooperation began adhering to epistemic and technological determinism, and which forms this adhesion has taken over time. The genealogy shows, first, that knowledge and technology have always been an integral part of the very "development" idea since this idea was shaped during the Enlightenment. Second, while the genealogy reveals that epistemic and technological determinism were embedded in the development idea from the very beginning, it also illustrates that this determinism has always been challenged by critical voices.
Researchers are increasingly expected to deliver "socially robust knowledge" that is not only scientifically reliable but also takes into account demands from societal actors. This article focuses on an empirical example where these additional criteria are explicitly organized into research settings, investigating how multiple "accountabilities" are managed in such "responsive research settings." We provide an empirical account of one such organizational format: the Dutch Academic Collaborative Centres for Public Health. We present a cross-case analysis of four collaborative research projects conducted within this context, building on (and extending) Miller’s notion of "hybrid management." The article shows that the extended concept of hybrid management is useful for studying the different accountabilities encountered in such settings. We analyze how the collaboration developed and which conflicts or dilemmas arose, and then focus on the different hybrid management strategies used in the collaboration. The empirical material shows how the different aspects of hybrid management feature in various configurations in the four projects. We highlight that hybrid management strategies may be used by different groups or at different moments, may reinforce or contradict each other, and may be more or less effective at different points in time.
The New York Times (NYT) receives more citations from academic journals than the American Sociological Review, Research Policy, or the Harvard Law Review. This article explores the reasons why scholars cite the NYT so much. Reasons include studying the newspaper itself or New York City, establishing public interest in a topic by referencing press coverage, introducing specificity, and treating the NYT very much like an academic journal. The phenomenon seems to reflect a Mode 2 type of scholarship: produced in the context of application, organizationally diverse, socially accountable, and aiming to be socially useful as well as high quality as assessed by peers.
In the early 1970s, the idea of precaution—of heeding rather than ignoring scientific evidence of harm when there is uncertainty, and taking action that errs on the side of safety—was so appealing that the US Congress used it as the basis of the toxics provisions of the Clean Water Act of 1972, the federal Environmental Protection Agency (EPA) based its proposals for implementing those provisions on it, and the courts frequently tended toward it when resolving conflicts over the implementation of pollution control law. In other words, precaution was written into toxic water pollutant control law and was beginning to be written into policy and regulations. By 1976, the tables were completely turned. The EPA abandoned the safety-providing approach in the implementation of the law, even though the law required it, and adopted a risk-taking approach in the creation of standards for the vast majority of toxic water pollutants. The article examines how this change was brought about. It builds on recent work on undone science as an obstacle to regulation and contributes to the development of an account of the creation of the regulatory system, with both its achievements and its limitations.
The authors assess the collaboration between the University of California, Berkeley’s Community Assessment of Renewable Energy and Sustainability program and the Pinoleville Pomo Nation, a small Native American tribal nation in northern California. The collaboration focused on creating culturally inspired, environmentally sustainable housing for tribal citizens using a codesign methodology developed at the university. The housing design process is evaluated in terms of both its contribution to Native American "cultural sovereignty," as elaborated by Coffey and Tsosie, and as a potential example of the democratization of scientific practice.
Sweden’s road safety policy, Vision Zero, seeks to eliminate deaths and serious injuries from traffic crashes, and it recognizes that the key obstacle to improving road safety is displacing mobility as the main priority of the road transportation system. This analysis considers the theory and practice of Vision Zero, first interpreting its proposed changes to responsibility for road safety, and then examining how it has been implemented. The research methods include document analyses, field observations, and interviews with Swedish safety practitioners. This study found that Vision Zero’s main innovation is its explicit call for experts to have causal responsibility for injuries. Moreover, Vision Zero expands the responsibility attributed to road users, who are called on to voice demand for safety improvements to civil servants and elected officials. However, Vision Zero also needed to create institutions through which experts could be accountable for their new causal responsibility, and it needed to support popular organizing around traffic injury prevention. I suggest that a major limitation to increasing the status of road safety as a public problem is that it is generally understood as a private problem, and that changing this perception through policy requires a more deeply engaged public process.
Through the case of the Helix_T wind turbine project, this article sets out to argue two points: first, on a theoretical level, that Commons-based peer production, in conjunction with the emerging technological capabilities of three-dimensional printing, can also produce promising hardware, globally designed and locally produced. Second, the Commons-oriented wind turbine examined here is also meant to practically contribute to the quest for novel solutions to the timely problem of the need for (autonomous) renewable sources of energy, more in the sense of a development process than as a ready-to-apply solution. We demonstrate that it is possible for someone with partial initial knowledge to initiate a similar, complex project based on an interesting idea, and to succeed in implementing it through collaboration with Commons-oriented communities, while using peer-produced products and tools. Given the trends and trajectories both of the current information-based paradigm and the problems of the predominant industrial modes of production with all the collateral damage they entail, this may be considered a positive message indeed.
This article examines the way in which public controversies affect regulatory science. It describes the controversy that unfolded in Europe around the use of ninety-day rat-feeding tests for the risk assessment of genetically modified (GM) plants. This type of test had been criticized for almost two decades by toxicologists, nongovernmental organizations, and industry alike for its inability to capture the specific health effects of GM plants. But GM risk assessment experts showed great reluctance to move toward a more systematic use of other tests, such as chronic two-year studies or toxicogenomic techniques, and made sure that official guidance continued to recommend the use of the ninety-day rat-feeding study. The article shows that these tactics of standardization are a defining feature of regulatory science, and a resource for toxicity experts to defend their authority and credibility against competing expertises that arise during controversies.