Chapter 28: Values & Objectivity
Can science be value-free? The contested relationship between epistemic and non-epistemic values in scientific reasoning
The value-free ideal of science holds that scientific reasoning should be guided solely by evidence and logic, free from moral, political, or social values. On this view, the scientist qua scientist should be impartial, disinterested, and objective. Values may legitimately influence the choice of research topics (it is permissible to study cancer because we value health), but they should play no role in the evaluation of hypotheses, the interpretation of data, or the acceptance of theories.
This ideal has come under sustained philosophical attack since the mid-twentieth century. Critics argue that non-epistemic values inevitably enter scientific reasoning at multiple points, and that the pretense of value-freedom is itself ideologically loaded. But if values are inescapable in science, what becomes of objectivity? Is science just politics by other means?
This chapter examines the value-free ideal, the most powerful arguments against it, and the various proposals for reconceiving objectivity in a way that acknowledges the role of values without collapsing into relativism.
The Value-Free Ideal
The value-free ideal has deep roots in the history of philosophy of science. The logical positivists drew a sharp distinction between facts and values, arguing that scientific statements are factual (empirically verifiable) while value statements are merely expressions of attitude or preference. Max Weber’s influential doctrine of Wertfreiheit (value-freedom) held that social scientists should strive for objectivity by keeping their personal values out of their research.
The value-free ideal can be formulated at different levels of strength:
- Strong version: Non-epistemic values should play no role whatsoever in scientific inquiry.
- Moderate version: Non-epistemic values may legitimately influence the context of discovery (choosing research topics, setting priorities) but not the context of justification (evaluating evidence, accepting hypotheses).
- Weak version: Non-epistemic values should not be the sole basis for accepting or rejecting hypotheses; evidence must play a constraining role.
Most contemporary critics target the moderate version, arguing that even in the context of justification, non-epistemic values play an ineliminable role. The discovery/justification distinction, they argue, is untenable: the values that influence what questions we ask also influence how we interpret the answers.
The Argument from Inductive Risk
The most powerful argument against the value-free ideal is the argument from inductive risk, first articulated by Richard Rudner in his 1953 paper “The Scientist Qua Scientist Makes Value Judgments.”
Rudner’s argument is disarmingly simple. Scientists must decide whether to accept or reject hypotheses. But no amount of evidence can make a hypothesis certain; there is always a residual risk of error. In accepting a hypothesis, the scientist risks a false positive (accepting a false hypothesis); in rejecting it, the scientist risks a false negative (rejecting a true hypothesis). The appropriate balance between these risks depends on the consequences of error — and assessing consequences requires value judgments.
“How sure we need to be before we accept a hypothesis will depend on how serious a mistake would be... Since no scientific hypothesis is ever completely verified, in accepting a hypothesis the scientist must make the decision that the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis. Obviously our decision regarding the evidence and how strong is ‘strong enough’ is going to be a function of the importance, in the typically ethical sense, of making a mistake in accepting or rejecting the hypothesis.”
— Richard Rudner, “The Scientist Qua Scientist Makes Value Judgments” (1953)
Consider a concrete example. A pharmaceutical company is testing a new drug. Should they accept the hypothesis that the drug is safe? The evidence is ambiguous — some studies show minor side effects, others do not. If they accept the hypothesis and the drug is actually dangerous, people will be harmed. If they reject the hypothesis and the drug is actually safe, patients will be denied a beneficial treatment. The appropriate standard of evidence depends on the relative seriousness of these two errors — which is a value judgment.
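The logic of the drug example can be made explicit with a standard decision-theoretic calculation. The sketch below is an illustration of the inductive-risk structure, not anything found in Rudner's or Douglas's texts: if we assign (inevitably value-laden) costs to the two kinds of error, the probability threshold at which acceptance becomes the better bet falls out of a comparison of expected losses. The specific cost numbers are hypothetical.

```python
# Illustrative decision-theoretic sketch of inductive risk (an assumption
# of this example, not a formula from the chapter's sources).
#
# Let p = P(hypothesis is true | evidence). Accepting the hypothesis risks
# a false positive, with expected loss (1 - p) * cost_fp; rejecting it
# risks a false negative, with expected loss p * cost_fn. Accepting
# minimizes expected loss exactly when p > cost_fp / (cost_fp + cost_fn).

def acceptance_threshold(cost_false_positive: float,
                         cost_false_negative: float) -> float:
    """Probability above which accepting the hypothesis minimizes expected loss."""
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# Hypothetical value judgment: harming patients with an unsafe drug (false
# positive) is nine times worse than withholding a beneficial one (false
# negative). Then safety must be supported with probability above 0.9.
print(acceptance_threshold(9.0, 1.0))   # prints 0.9

# Symmetric costs give the familiar "more likely than not" standard.
print(acceptance_threshold(1.0, 1.0))   # prints 0.5
```

The point of the sketch is Rudner's: the evidence fixes p, but the threshold that p must clear is fixed by the cost ratio, and the cost ratio is an ethical judgment, not an empirical finding.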
Heather Douglas revived and extended the argument from inductive risk in her influential book Science, Policy, and the Value-Free Ideal (2009). Douglas argued that values enter not only at the point of hypothesis acceptance but at every stage of scientific reasoning: in characterizing data, in choosing statistical methods, in deciding which results to report, and in framing conclusions.
Douglas introduced an important distinction between direct and indirect roles for values. Values play a direct role when they serve as reasons for accepting a hypothesis (“I accept this hypothesis because it would be good if it were true”). Values play an indirect role when they influence the threshold for acceptance (“Given the serious consequences of a false positive, I require stronger evidence before accepting this hypothesis”). Douglas argues that direct roles for values are illegitimate (wishful thinking), but indirect roles are not only legitimate but unavoidable.
Epistemic vs Non-Epistemic Values
A crucial question in this debate is whether epistemic values (values related to the pursuit of truth) can be cleanly separated from non-epistemic values (moral, political, social values). Defenders of the value-free ideal typically concede that epistemic values — accuracy, consistency, explanatory power, simplicity — are legitimate in science, and argue that only non-epistemic values need to be excluded.
Thomas Kuhn identified five epistemic values that scientists use to evaluate theories:
- Accuracy: The theory should agree with known observations and experiments.
- Consistency: The theory should be internally consistent and consistent with other accepted theories.
- Scope: The theory should extend beyond the phenomena it was originally designed to explain.
- Simplicity: The theory should bring order to phenomena that would otherwise be unrelated.
- Fruitfulness: The theory should disclose new phenomena or new relationships among known phenomena.
Kuhn argued that these values are shared by all scientists but applied differently by different individuals. One scientist may weight simplicity more heavily; another may prioritize scope. This is why theory choice in science is not an algorithm but a matter of judgment — and why reasonable scientists can disagree.
But critics argue that the distinction between epistemic and non-epistemic values is less clear than it appears. Is simplicity really an epistemic value, or is it an aesthetic preference? Is fruitfulness a purely epistemic desideratum, or does it reflect social judgments about which phenomena are worth investigating? Helen Longino has argued that background assumptions — which are shaped by social and cultural values — inevitably influence how evidence is interpreted, blurring the boundary between epistemic and non-epistemic values.
Longino’s Contextual Empiricism
Helen Longino’s Science as Social Knowledge (1990) offers one of the most sophisticated philosophical frameworks for understanding the role of values in science. Longino argues that the relationship between evidence and hypothesis is always mediated by background assumptions — auxiliary hypotheses that connect observations to theoretical claims. These background assumptions are not themselves determined by the evidence; they are shaped by the social, cultural, and political context of inquiry.
“Objectivity is not a property of individual scientists or individual theories, but of scientific communities. It is achieved through the critical interaction of diverse perspectives.”
— Helen Longino, Science as Social Knowledge (1990)
Because background assumptions can be value-laden, Longino argues, there is no way to guarantee the value-freedom of individual scientific reasoning. But this does not mean that science is merely subjective. Longino proposes an alternative conception of objectivity: interactive objectivity. Science is objective not because individual scientists are free from bias, but because the social processes of science — peer review, public criticism, replication — allow biases to be identified and corrected.
Longino identifies four criteria for “transformative criticism” — the social processes that produce objective knowledge:
- Recognized avenues for criticism: There must be public forums (journals, conferences) where research can be critiqued.
- Uptake of criticism: The scientific community must actually respond to criticism, not merely tolerate it.
- Public standards: There must be shared standards of evaluation (empirical adequacy, logical consistency) that provide a common basis for criticism.
- Tempered equality: All qualified members of the community must have an equal opportunity to participate in criticism. No perspective should be excluded on the basis of social identity.
Longino’s fourth criterion is the most controversial: it implies that the demographic composition of scientific communities matters for the objectivity of science. If certain perspectives are systematically excluded, then certain background assumptions may go unchallenged, and the resulting science will be less objective. This connects Longino’s work to feminist epistemology of science.
Feminist Epistemology of Science
Feminist epistemology of science, developed by Sandra Harding, Donna Haraway, and others, argues that the social identity of knowers — their gender, race, class, and other social positions — affects what they can know and how they come to know it. This is not a relativist claim that truth is socially constructed; it is the claim that social position can be epistemically relevant, providing access to experiences and perspectives that enable certain kinds of knowledge.
Standpoint theory, developed by Sandra Harding, draws on Marxist epistemology (the claim that the proletariat has epistemic advantages over the bourgeoisie) and extends it to other social positions. Harding argues that marginalized groups can have epistemic advantages because their social position forces them to understand both their own perspective and the perspective of the dominant group.
“Starting off research from women’s lives will generate less partial and distorted accounts not only of women’s lives but also of men’s lives and of the whole social order.”
— Sandra Harding, Whose Science? Whose Knowledge? (1991)
Donna Haraway’s concept of “situated knowledge” offers a related but distinct perspective. Haraway rejects both the “God trick” of claiming to see from nowhere (the traditional ideal of objectivity) and the relativist claim that all perspectives are equally valid. Instead, she argues for “partial perspectives” — knowledge is always situated in a particular social and material location, and objectivity consists in acknowledging and accounting for this situatedness.
Feminist epistemology has generated both productive research programs and heated controversy. Its defenders point to concrete examples of how male-dominated science has produced biased results: the exclusion of female subjects from clinical trials, the assumption of male-as-default in biological research, and the long neglect of women’s health issues. Its critics worry that it threatens the universality of scientific knowledge and opens the door to identity-based epistemology.
Anderson’s Democratic Ideal
Elizabeth Anderson has developed an influential account of the relationship between democratic values and scientific inquiry. In her 2004 paper “Uses of Value Judgments in Science,” Anderson argues that value judgments play legitimate and important roles at every stage of scientific inquiry, from the framing of research questions to the interpretation of results.
Anderson’s key insight is that the question is not whether values should enter science, but which values and how. She proposes three criteria for the legitimate role of values in science:
- Values should not override evidence: they should play an indirect role (in Douglas’s sense), influencing the standard of proof but not substituting for evidence.
- The values that enter science should be publicly avowed and open to criticism, not hidden assumptions.
- Scientific communities should be democratically organized to ensure that diverse perspectives are represented.
Anderson’s work bridges the gap between abstract epistemology and practical science policy. Her framework has been applied to debates about climate science, pharmaceutical regulation, and the standards of evidence in social science.
Values in Risk Assessment and Policy-Relevant Science
The role of values is particularly acute in policy-relevant science, where scientific findings directly inform public policy. Consider the assessment of environmental risks: how much evidence is required before we conclude that a chemical is carcinogenic? What level of risk is “acceptable”? These questions cannot be answered by science alone; they require value judgments about the relative importance of economic costs and public health.
Carl Cranor (1993) has argued that in regulatory science, the asymmetry between false positives and false negatives is stark. A false negative (declaring a carcinogen safe) can lead to widespread illness and death. A false positive (declaring a safe substance carcinogenic) leads to unnecessary regulation and economic cost. Given this asymmetry, Cranor argues, regulatory science should adopt a precautionary standard that gives more weight to avoiding false negatives.
This has implications for the debate over climate science. Climate skeptics often demand a very high standard of proof before accepting the hypothesis of anthropogenic climate change. But given the potentially catastrophic consequences of a false negative (failing to act on climate change when it is real), the argument from inductive risk suggests that a lower standard of proof is appropriate.
The challenge for philosophers is to develop a framework that allows values to play a legitimate role in policy-relevant science without undermining the credibility of science itself. If the public perceives science as value-laden, will they lose trust in scientific expertise? Or can a more honest account of the role of values actually increase trust, by making science more transparent and accountable?
Essential Readings
- Rudner, R. (1953). “The Scientist Qua Scientist Makes Value Judgments,” Philosophy of Science 20(1).
- Douglas, H. (2009). Science, Policy, and the Value-Free Ideal, Chapters 1–5.
- Longino, H. (1990). Science as Social Knowledge, Chapters 3–5.
- Kuhn, T. (1977). “Objectivity, Value Judgment, and Theory Choice,” in The Essential Tension.
- Harding, S. (1991). Whose Science? Whose Knowledge?, Chapters 5–6.
- Anderson, E. (2004). “Uses of Value Judgments in Science,” Hypatia 19(1).
- Haraway, D. (1988). “Situated Knowledges,” Feminist Studies 14(3).