
Chapter 29: Social Epistemology of Science

How does the social organization of science affect the reliability of scientific knowledge?

Science is a social enterprise. Research is conducted by teams, evaluated by peer reviewers, funded by institutions, and communicated through journals and conferences. The traditional philosophy of science largely ignored this social dimension, treating scientific rationality as a property of individual minds confronting evidence. But since the 1970s, a rich literature has developed on how the social organization of science affects — for better or worse — the epistemic quality of its products.

This chapter examines the spectrum of positions on the social dimensions of science, from the radical constructivism of the Strong Programme to the more moderate social epistemology of Kitcher and Goldman. We then turn to concrete social mechanisms — peer review, consensus formation, division of cognitive labor — and ask whether and when they reliably produce knowledge.

The stakes are high. If the social organization of science can be shown to be truth-conducive, then scientific consensus deserves substantial deference. If, on the other hand, social factors systematically distort scientific inquiry, then the authority of science is undermined. Understanding the social epistemology of science is thus essential for navigating the contemporary landscape of trust and distrust in scientific institutions.

The Sociology of Scientific Knowledge: Bloor’s Strong Programme

The sociology of scientific knowledge (SSK), developed in the 1970s at the University of Edinburgh by David Bloor, Barry Barnes, and others, represents the most radical challenge to the traditional view of science as a purely rational enterprise. Bloor’s Knowledge and Social Imagery (1976) laid out the four tenets of the “Strong Programme”:

  1. Causality: The sociology of knowledge should be concerned with the conditions (including social conditions) that bring about beliefs or states of knowledge.
  2. Impartiality: It should be impartial with respect to truth and falsity, rationality and irrationality, success and failure.
  3. Symmetry: The same types of cause should explain true and false beliefs alike.
  4. Reflexivity: Its patterns of explanation should be applicable to sociology itself.

The most controversial of these is the symmetry principle. Traditional philosophy of science assumed an asymmetry: true beliefs are explained by evidence and reason (they succeed because they are true), while false beliefs are explained by social factors (prejudice, ideology, error). Bloor argued that this asymmetry is untenable. If social factors can distort belief formation, they can also contribute to it. A complete explanation of any belief — true or false — must include both evidential and social factors.

“The sociologist seeks theories which explain the beliefs which are in fact found, regardless of how the investigator evaluates them... All beliefs are on a par with one another with respect to the causes of their credibility.” — David Bloor, Knowledge and Social Imagery (1976)

Critics have argued that the Strong Programme conflates explaining why a belief is held with explaining why a belief is true. The fact that social factors contributed to the acceptance of Newtonian mechanics does not mean that social factors are the reason it is true. Larry Laudan defended the traditional asymmetry with his “arationality assumption”: sociological explanation is called for only where a belief cannot be explained by its rational merits. On this view, the symmetry principle gets things backwards: the best explanation for why scientists accept well-confirmed theories may simply be that the evidence supports them.

Latour and Woolgar: Laboratory Life

Bruno Latour and Steve Woolgar’s Laboratory Life: The Construction of Scientific Facts (1979/1986) pioneered the ethnographic study of science. Latour, a sociologist, spent two years observing the daily work of scientists at the Salk Institute in San Diego. Their account of how scientific “facts” are produced through the day-to-day practices of the laboratory — inscription devices, negotiations among scientists, the mobilization of allies — challenged the view that facts are simply “discovered” by passive observation.

“A scientific fact is not something that is simply ‘out there,’ waiting to be discovered. It is the end product of a long and complex process of literary inscription, persuasion, and negotiation.” — Bruno Latour & Steve Woolgar, Laboratory Life (1979)

Latour introduced the concept of “inscription devices” — instruments that transform material substances into written traces (graphs, tables, spectra). On Latour’s analysis, the laboratory is a system for producing inscriptions, and scientific knowledge consists of these inscriptions and the networks that sustain them. A “fact” is a statement that has been sufficiently stabilized through repeated inscription and widespread acceptance that its origins in the laboratory are forgotten.

Latour later developed actor-network theory (ANT), which extends the network of science to include non-human actors — instruments, microbes, texts. On this view, scientific knowledge is the product of heterogeneous networks in which human and non-human actors are symmetrically intertwined.

Latour’s work has been both enormously influential and deeply controversial. Critics accuse him of conflating the process of establishing facts (which is indeed social) with the content of facts (which, on the realist view, is determined by nature, not by negotiation). The philosopher Ian Hacking called this the “construction-discovery” confusion: the fact that we construct our representations of the world does not mean that the world itself is constructed.

The Science Wars and the Sokal Affair

The tensions between defenders of scientific objectivity and social constructivists erupted into open conflict in the 1990s in what became known as the “Science Wars.” On one side were scientists and scientific realists who accused the sociology of science of undermining the authority of science. On the other side were sociologists, historians, and cultural theorists who argued that science was no different in kind from other social activities.

The most dramatic episode was the Sokal affair. In 1996, the physicist Alan Sokal submitted a deliberately nonsensical article, “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” to the cultural studies journal Social Text. The article was accepted and published, at which point Sokal revealed the hoax, arguing that it demonstrated the intellectual bankruptcy of postmodern approaches to science.

The Sokal affair generated enormous publicity but arguably more heat than light. Defenders of SSK argued that a single journal’s editorial failures proved nothing about the intellectual merits of social studies of science. Critics of SSK argued that the affair revealed a systematic lack of intellectual rigor in the field.

In retrospect, the Science Wars were largely a product of mutual misunderstanding. Most serious sociologists of science do not deny that the natural world constrains scientific beliefs; they argue that social factors also play a role, and that the relative importance of natural and social factors is an empirical question. And most serious philosophers of science acknowledge that science is a social activity; they argue that its social character does not undermine but rather contributes to its epistemic authority.

Kitcher’s Social Epistemology: Division of Cognitive Labor

Philip Kitcher’s The Advancement of Science (1993) and Science, Truth, and Democracy (2001) develop a “moderate social epistemology” that takes the social character of science seriously without abandoning the commitment to truth. Kitcher’s central insight is that the social organization of science can be truth-conducive: certain social structures make it more likely that the community as a whole will arrive at true beliefs, even if individual scientists are motivated by non-epistemic factors.

The concept of division of cognitive labor is Kitcher’s most influential contribution. He argues that it is epistemically beneficial for a scientific community to have some scientists working on the most promising research program and others working on less promising alternatives. If everyone pursues the currently favored approach, the community risks missing important discoveries that could be made by exploring alternative paths.

“The community is best served not when every individual scientist acts on purely epistemic motivations, but when a diversity of motivations — including the desire for credit, curiosity about unpopular ideas, and stubbornness in the face of criticism — ensures that a range of approaches is explored.” — Philip Kitcher, The Advancement of Science (1993)

Kitcher shows, using game-theoretic models, that the optimal distribution of researchers across competing approaches depends on the payoffs and probabilities involved. A community of perfectly rational, purely truth-seeking individuals might actually perform worse than a community in which some scientists are motivated by credit-seeking, contrarianism, or personal commitment to a minority view. This is because the “invisible hand” of diverse motivations produces a better distribution of cognitive effort than centralized rational planning.
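
The flavor of these models can be conveyed with a toy computation. The sketch below is not Kitcher's own model but a minimal stand-in under stated assumptions: two rival programs pursue the same discovery, each program's chance of success rises with its workforce but with diminishing returns, and the community succeeds if either program does. The saturating success curve and all numerical parameters are illustrative choices, not anything from Kitcher's text.

```python
# Toy model in the spirit of Kitcher's division-of-cognitive-labor analysis.
# Assumption: a program's success probability saturates toward its intrinsic
# promise as more scientists join it (diminishing marginal returns).

def success_prob(workers: int, promise: float, k: float = 0.3) -> float:
    """Chance a program succeeds, given its promise and workforce."""
    return promise * (1 - (1 - k) ** workers)

def community_payoff(n1: int, total: int, p1: float, p2: float) -> float:
    """Chance that at least one of the two rival programs succeeds."""
    q1 = success_prob(n1, p1)
    q2 = success_prob(total - n1, p2)
    return 1 - (1 - q1) * (1 - q2)

TOTAL = 20          # scientists to allocate
P1, P2 = 0.9, 0.5   # program 1 looks far more promising than program 2

best = max(range(TOTAL + 1), key=lambda n1: community_payoff(n1, TOTAL, P1, P2))
print(f"optimal split: {best} on the favored program, {TOTAL - best} on the rival")
print(f"everyone on the favorite: {community_payoff(TOTAL, TOTAL, P1, P2):.3f}")
print(f"optimal split:            {community_payoff(best, TOTAL, P1, P2):.3f}")
```

With these parameters the optimum sends a minority of researchers to the less promising program, and the community's overall chance of success is higher than under unanimity: hedging the community's bets beats having every scientist make the individually rational choice.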

In Science, Truth, and Democracy, Kitcher extends his framework to address the question of “well-ordered science” — how scientific research should be organized to serve democratic values. He argues that research priorities should be set through democratic deliberation, not left to scientists alone or to market forces.

Goldman’s Veritistic Social Epistemology

Alvin Goldman’s Knowledge in a Social World (1999) develops a veritistic (truth-oriented) social epistemology that evaluates social practices and institutions by their tendency to promote true beliefs and diminish false ones. Goldman applies this framework to several domains relevant to science: argumentation practices, jury deliberation, media communication, and — most importantly — scientific inquiry.
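
Goldman's core measure can be put schematically (the notation below is a simplified gloss, not Goldman's own formalism, which is more fine-grained). Let $S$ be a set of agent-proposition pairs $(a, p)$ affected by a practice $\pi$, where each $p$ is true, and let $\mathrm{DB}_a(p)$ be agent $a$'s degree of belief in $p$. The veritistic value of $\pi$ can then be scored as the average movement of credence toward the truth:

$$
V(\pi) \;=\; \frac{1}{|S|} \sum_{(a,p) \in S} \left[ \mathrm{DB}_a^{\text{after}}(p) - \mathrm{DB}_a^{\text{before}}(p) \right].
$$

A practice is veritistically good when $V(\pi) > 0$: on average it raises credence in truths and, equivalently, lowers credence in falsehoods.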

Goldman asks: which social practices and institutions are truth-conducive, and which are truth-degrading? For example, is peer review truth-conducive? Goldman analyzes peer review as a form of expert testimony and argues that it is generally (but not infallibly) truth-conducive, provided that reviewers are competent, honest, and independent. When these conditions fail — when reviewers are biased, incompetent, or engaged in gatekeeping — peer review can become truth-degrading.

Goldman’s framework is useful for evaluating proposals to reform scientific institutions. Should peer review be double-blind? Should pre-registration of studies be required? Should negative results be published? Each of these proposals can be assessed by asking whether it would tend to increase or decrease the veritistic value of the scientific process.

Peer Review and the Epistemology of Testimony

Peer review is the primary mechanism by which the scientific community evaluates new knowledge claims. Yet its epistemic credentials are surprisingly poorly understood. Empirical studies of peer review have revealed several troubling features:

  • Low inter-reviewer agreement: Different reviewers of the same paper frequently disagree about its quality and publishability. Studies find that chance-corrected agreement (Cohen’s kappa, sketched after this list) is often only slightly better than chance.
  • Bias: Peer review is susceptible to biases based on the author’s prestige, institutional affiliation, nationality, and gender. Single-blind review (where reviewers know the author’s identity) is particularly vulnerable.
  • Conservatism: Peer review tends to favor conventional results and methods, making it harder for genuinely novel or paradigm-challenging work to be published.
  • Failure to detect fraud: Peer review is not designed to detect fabrication or falsification, and it frequently fails to do so.

These findings raise difficult epistemological questions. Most of what we believe about science comes not from personal observation but from testimony — reports by other scientists, mediated by peer review and published in journals. The epistemology of testimony asks: when is it rational to believe something on the basis of another person’s say-so?

The traditional answer, going back to David Hume, is that testimony is trustworthy only insofar as we have independent evidence of the testifier’s reliability. But in science, we often lack such independent evidence: we trust peer review because it is the established practice, not because we have personally verified its reliability. John Hardwig (1991) argued that modern science inevitably involves “epistemic dependence” — reliance on others’ expertise that cannot be independently verified — and that this dependence is rational given the division of cognitive labor in science.

Scientific Consensus: Formation and Authority

Scientific consensus — widespread agreement among scientists on a particular claim — plays a crucial role in both science and public policy. The consensus that human activities are causing climate change, for instance, is often cited as a reason for policy action. But what gives scientific consensus its authority? And when, if ever, should we be skeptical of consensus?

The philosopher of science Miriam Solomon (2001) has developed a “social empiricism” that evaluates consensus by examining the distribution of decision vectors — the factors (both epistemic and non-epistemic) that influence scientists’ beliefs. A consensus is rational if it is driven primarily by epistemic vectors (evidence, successful prediction) rather than non-epistemic ones (prestige, funding incentives, conformity).

K. Brad Wray (2011) has argued that scientific consensus is generally reliable because of the social mechanisms that sustain it. Scientists who challenge well-established consensus bear a heavy burden of proof, and the reward structure of science (credit for novel discoveries) ensures that anomalies and disconfirmations are actively sought. A consensus that survives sustained critical scrutiny is more likely to be correct than a consensus that has never been challenged.

However, the history of science also provides examples of consensus that turned out to be wrong — continental drift was rejected by the geological consensus for decades before plate tectonics was accepted. The lesson is not that consensus is unreliable, but that it should be understood probabilistically: consensus provides strong (but not infallible) evidence for a claim, and the strength of the evidence depends on the quality of the social processes that produced the consensus.
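
The probabilistic reading can be made explicit with a schematic Bayesian update (the numbers are purely illustrative). Let $H$ be the hypothesis and $C$ the observation that a sustained expert consensus has formed around it. If consensus is far more likely to form when $H$ is true than when it is false, say $P(C \mid H) = 0.9$ and $P(C \mid \neg H) = 0.1$, then even from an agnostic prior $P(H) = 0.5$:

$$
P(H \mid C) \;=\; \frac{P(C \mid H)\,P(H)}{P(C \mid H)\,P(H) + P(C \mid \neg H)\,P(\neg H)} \;=\; \frac{0.9 \times 0.5}{0.9 \times 0.5 + 0.1 \times 0.5} \;=\; 0.9.
$$

The quality of the social process enters through the likelihoods: a consensus sustained by conformity or gatekeeping pushes $P(C \mid \neg H)$ up toward $P(C \mid H)$, and the evidential force of the consensus correspondingly weakens.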

The Epistemology of Disagreement

What should you do when you discover that an epistemic peer — someone you regard as equally competent and equally well-informed — disagrees with you? The epistemology of disagreement has emerged as a major topic in recent philosophy, with direct implications for scientific practice.

Conciliationists (Christensen, Feldman) argue that disagreement with a peer should lead you to move your credence toward the peer’s position. If you and your peer are equally reliable, then the fact of disagreement is itself evidence that one of you is wrong — and you have no more reason to think it is the peer than yourself. The rational response is to “split the difference.”
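
In its simplest textbook form, the conciliationist prescription is credence averaging. If your credence in a claim is 0.8 and your peer's is 0.4, the equal-weight view recommends

$$
c_{\text{new}} \;=\; \tfrac{1}{2}\left(c_{\text{you}} + c_{\text{peer}}\right) \;=\; \tfrac{1}{2}(0.8 + 0.4) \;=\; 0.6.
$$

(Straight averaging is an idealization; conciliationists disagree among themselves about whether literal averaging is required or only substantial movement toward the peer's position.)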

Steadfasters (Kelly, Lackey) argue that you can sometimes rationally maintain your position in the face of peer disagreement, especially if you have good evidence that your own reasoning is sound. The fact that a peer disagrees does not automatically cancel the evidential force of the considerations that led you to your belief.

In science, the epistemology of disagreement bears on questions about how much weight to give to minority positions. If 97% of climate scientists agree that anthropogenic climate change is real, should a non-expert give equal weight to the dissenting 3%? The conciliationist would say no: the overwhelming consensus should strongly shift one’s credence toward the majority view. The epistemology of disagreement thus provides philosophical grounding for the practice of deferring to scientific consensus.

Essential Readings

  • Bloor, D. (1976). Knowledge and Social Imagery, Chapters 1–2.
  • Latour, B. & Woolgar, S. (1979/1986). Laboratory Life, Chapters 2–4.
  • Kitcher, P. (1993). The Advancement of Science, Chapters 8–9.
  • Goldman, A. (1999). Knowledge in a Social World, Chapters 8–9.
  • Sokal, A. (1996). “Transgressing the Boundaries,” Social Text 46/47, and “A Physicist Experiments with Cultural Studies,” Lingua Franca.
  • Solomon, M. (2001). Social Empiricism, Chapters 1–3.
  • Hardwig, J. (1991). “The Role of Trust in Knowledge,” Journal of Philosophy 88(12).
  • Christensen, D. (2007). “Epistemology of Disagreement: The Good News,” Philosophical Review 116(2).