Chapter 30: Ethics of Scientific Research
Moral responsibilities, research integrity, and the governance of science in an age of powerful technologies
Science is not merely a cognitive enterprise; it is also a moral one. Scientific research can cause harm — to research subjects, to the environment, and to society at large. It can be conducted honestly or dishonestly, responsibly or recklessly. The products of science — nuclear weapons, biological agents, surveillance technologies — raise profound ethical questions about the responsibilities of scientists and the governance of research.
The ethics of scientific research has evolved dramatically since the mid-twentieth century, driven by a series of scandals and crises: the Nazi medical experiments revealed at the Nuremberg Trials, the Tuskegee syphilis study, the thalidomide disaster, and more recently, the He Jiankui CRISPR affair and concerns about AI alignment. Each crisis has prompted new ethical frameworks, regulations, and institutional safeguards.
This chapter surveys the major ethical frameworks governing scientific research, examines the norms and structures that promote scientific integrity, and confronts the new ethical challenges posed by emerging technologies.
Research Ethics: Informed Consent and Human Subjects
The modern framework for research ethics was born from atrocity. During World War II, Nazi physicians conducted horrific experiments on concentration camp prisoners: immersing them in freezing water, infecting them with diseases, performing surgeries without anesthesia. The Nuremberg “Doctors’ Trial” of 1946–47 produced the first international code of research ethics: the Nuremberg Code.
Key principles of the Nuremberg Code (1947):
- The voluntary consent of the human subject is absolutely essential.
- The experiment should be designed to yield fruitful results for the good of society.
- The experiment should be based on prior animal experimentation and knowledge of the disease.
- The experiment should avoid all unnecessary physical and mental suffering and injury.
- No experiment should be conducted where there is reason to believe it will cause death or disabling injury.
The Declaration of Helsinki (1964, revised multiple times), adopted by the World Medical Association, extended and refined these principles. It introduced the concept of independent ethics review — the requirement that research protocols be reviewed by an independent committee (an Institutional Review Board or Ethics Committee) before research begins. It also strengthened protections for vulnerable populations and established the principle that the welfare of research subjects takes precedence over the interests of science and society.
The Belmont Report (1979), produced by the U.S. National Commission for the Protection of Human Subjects, articulated three foundational ethical principles:
- Respect for persons: Individuals should be treated as autonomous agents, and persons with diminished autonomy are entitled to protection. This grounds the requirement for informed consent.
- Beneficence: Researchers have an obligation to maximize benefits and minimize harms to research subjects. This requires a careful assessment of risks and benefits.
- Justice: The benefits and burdens of research should be distributed fairly. It is unjust to target vulnerable populations for risky research while directing the benefits to privileged groups.
The Belmont Report’s principle of justice was prompted in part by the Tuskegee syphilis study (1932–1972), in which the U.S. Public Health Service deliberately withheld treatment from African American men with syphilis in order to study the natural progression of the disease. The study continued for 40 years, even after penicillin became the standard treatment for syphilis in the 1940s. The Tuskegee study is a paradigm case of research that violates all three Belmont principles: the subjects were not adequately informed; the risks clearly outweighed the benefits; and the burden fell disproportionately on a marginalized community.
Responsible Conduct of Research: FFP
Research misconduct is conventionally defined in terms of three categories, collectively known as FFP:
Fabrication
Making up data or results and recording or reporting them. This is the most extreme form of scientific fraud. Notable cases include Jan Hendrik Schön, a Bell Labs physicist who fabricated data in dozens of published papers on superconductivity and molecular electronics, and Diederik Stapel, a Dutch social psychologist who fabricated data in over 50 publications.
Falsification
Manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented. Falsification includes selectively reporting data (suppressing unfavorable results), manipulating images, and cherry-picking statistical analyses.
Plagiarism
Appropriating another person’s ideas, processes, results, or words without giving appropriate credit. This includes both verbatim copying and paraphrasing without attribution.
Beyond FFP, there is a broader category of questionable research practices (QRPs) that, while not constituting outright fraud, can distort the scientific record. These include:
- P-hacking: Running multiple statistical analyses until a significant result is found, without correcting for multiple comparisons (see the simulation after this list).
- HARKing: Hypothesizing After Results are Known — presenting post-hoc hypotheses as if they were formulated before data collection.
- Publication bias: The tendency of journals to publish positive results and reject negative or null results, creating a distorted picture of the evidence.
- Salami slicing: Dividing a single study into multiple publications to inflate the number of publications.
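The arithmetic behind p-hacking is simple: if each test has a 5% false positive rate, then running ten independent tests on pure noise yields at least one “significant” result about 40% of the time (1 - 0.95^10 ≈ 0.40). The following simulation is a minimal sketch of this effect; the group sizes, number of outcomes, and study count are arbitrary illustrative assumptions.

```python
# Simulate p-hacking: every group is drawn from the SAME distribution,
# so any "significant" difference is a false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies = 2_000   # hypothetical null studies
n_outcomes = 10     # unrelated outcome measures per study
alpha = 0.05

hacked_hits = 0
for _ in range(n_studies):
    # Compare two null groups (n = 30 each) on every outcome.
    p_values = [
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(n_outcomes)
    ]
    # The p-hacker reports only the best-looking outcome, uncorrected.
    if min(p_values) < alpha:
        hacked_hits += 1

print(f"'Significant' null studies: {hacked_hits / n_studies:.1%}")  # ~40%
print(f"Bonferroni-corrected threshold: {alpha / n_outcomes}")       # 0.005
```

Applying the Bonferroni correction (dividing the significance threshold by the number of tests) restores the intended 5% family-wise error rate; it is precisely this correction that the p-hacker omits.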
The replication crisis that has affected psychology, medicine, and other fields since the 2010s has been partly attributed to the prevalence of QRPs. Pre-registration of studies, registered reports, and open data practices have been proposed as remedies.
Dual-Use Research and the Dual-Use Dilemma
Dual-use research is research that is conducted for legitimate purposes but could also be misused to cause harm. The classic example is nuclear physics: the same knowledge that enables nuclear power also enables nuclear weapons. But the dual-use problem extends far beyond nuclear technology. Research in synthetic biology, virology, cybersecurity, and artificial intelligence all raise dual-use concerns.
The most dramatic recent case involved the creation of an airborne-transmissible form of the H5N1 avian influenza virus by Ron Fouchier’s laboratory at Erasmus Medical Center in 2011. The research was conducted to understand the mutations that could make H5N1 transmissible between mammals, but critics argued that publishing the results could provide a blueprint for bioterrorism. The resulting controversy led to a temporary voluntary moratorium on gain-of-function influenza research and new oversight mechanisms.
The dual-use dilemma arises because the same research can produce both benefits and risks, and because the benefits often depend on the open communication of results (which also enables misuse). Miller and Selgelid (2007) have argued that the dual-use dilemma is a genuine ethical dilemma: there may be no solution that fully satisfies all legitimate interests. The challenge is to develop governance frameworks that minimize the risks of misuse while preserving the benefits of open scientific inquiry.
Proposed governance mechanisms include: pre-publication review by biosecurity experts; restricted access to sensitive methods and materials; education of scientists in dual-use awareness; and international treaties (such as the Biological Weapons Convention). Each of these has limitations, and the optimal balance between openness and security remains contested.
The Ethics of Animal Experimentation
The use of animals in scientific research raises fundamental ethical questions about the moral status of non-human animals and the justification of harm for the sake of knowledge. An estimated 100 million animals are used in research worldwide each year, including mice, rats, rabbits, primates, and fish.
The ethical debate is structured by competing moral frameworks:
- Utilitarian view (Singer): Peter Singer’s Animal Liberation (1975) argues that the capacity to suffer is the morally relevant criterion. Since animals can suffer, their suffering must be given equal consideration to human suffering. Animal experimentation is justified only when the expected benefits clearly outweigh the harms.
- Rights view (Regan): Tom Regan’s The Case for Animal Rights (1983) argues that animals have inherent value and moral rights that cannot be overridden by utilitarian calculations. On this view, using animals as mere means to human ends is fundamentally wrong, regardless of the benefits.
- The 3Rs framework: William Russell and Rex Burch’s The Principles of Humane Experimental Technique (1959) proposed three principles that have become the standard framework for animal research ethics: Replacement (use alternatives where possible), Reduction (minimize the number of animals used), and Refinement (minimize suffering).
The 3Rs framework, while widely adopted, has been criticized by animal rights advocates as insufficient — it regulates animal experimentation rather than challenging it. The development of alternatives to animal testing (organ-on-a-chip, computer modeling, human tissue cultures) has opened new possibilities, but animal models remain indispensable for many areas of biomedical research, creating an enduring ethical tension.
Scientific Responsibility and Climate Change
Climate change raises novel questions about the responsibilities of scientists. Climate scientists possess specialized knowledge about a problem of existential significance. Do they have a special obligation to communicate their findings to the public? To advocate for policy action? Or does advocacy compromise their objectivity and credibility?
The historian of science Naomi Oreskes has argued in Merchants of Doubt (2010, with Erik Conway) that the manufactured controversy over climate science is itself an ethical issue. When fossil fuel interests fund climate denial campaigns, they are not merely expressing a different opinion; they are deliberately undermining the epistemic authority of science for commercial gain. Oreskes argues that scientists have a responsibility to expose such manufactured doubt and to communicate clearly that the scientific evidence for anthropogenic climate change is overwhelming.
Others worry that scientific advocacy risks politicizing science and undermining public trust. Roger Pielke Jr. has argued for a distinction between the “honest broker” (who presents the range of policy options consistent with the evidence) and the “issue advocate” (who champions a particular policy). On Pielke’s view, scientists serve the public best when they function as honest brokers, leaving value-laden policy choices to democratic processes.
Gene Editing Ethics: CRISPR and the Germline
The development of CRISPR-Cas9 gene editing technology since 2012 has created unprecedented capabilities for modifying the genomes of living organisms, including humans. CRISPR has enormous therapeutic potential: it could be used to cure genetic diseases like sickle cell anemia, cystic fibrosis, and Huntington’s disease. But it also raises profound ethical concerns, especially when applied to the human germline — modifications that would be inherited by future generations.
In November 2018, the Chinese biophysicist He Jiankui announced that he had created the world’s first gene-edited babies — twin girls whose genomes had been modified using CRISPR to confer resistance to HIV. The announcement was met with near-universal condemnation from the scientific community. He had bypassed ethical review, had not obtained adequate informed consent, had used an unproven technology on healthy embryos, and had kept his work secret until after the babies were born. He was subsequently sentenced to three years in prison.
The He Jiankui affair crystallized several distinct ethical concerns:
- Safety: CRISPR technology is not yet precise enough for clinical use in human embryos. Off-target mutations could have unpredictable consequences.
- Consent: Future generations cannot consent to modifications that will affect them. Germline editing crosses a moral boundary that somatic cell editing does not.
- Enhancement vs. therapy: Where is the line between curing disease and enhancing human capabilities? CRISPR could potentially be used not only to eliminate genetic diseases but to select for traits like intelligence or athletic ability.
- Justice: If gene editing becomes available only to the wealthy, it could exacerbate existing social inequalities, creating a genetic underclass.
Most scientific bodies have called for a moratorium on germline editing for reproductive purposes until safety and ethical issues are resolved. But the technology is advancing rapidly, and the governance frameworks are struggling to keep pace.
AI and Algorithmic Bias: New Frontiers
Artificial intelligence and machine learning represent a new frontier in the ethics of science. AI systems are increasingly used in high-stakes decisions: criminal sentencing, hiring, medical diagnosis, loan approval. When these systems encode biases — reflecting the biases present in their training data or the assumptions of their designers — they can perpetuate and amplify social injustice.
Several landmark cases have highlighted the problem of algorithmic bias. The COMPAS recidivism prediction tool, used in U.S. criminal courts, was shown by ProPublica to be significantly more likely to falsely flag Black defendants as future criminals than white defendants. Amazon’s automated hiring tool was found to discriminate against women because it was trained on historical hiring data that reflected existing gender biases.
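ProPublica’s finding was a claim about error rates conditioned on group membership: among defendants who did not go on to reoffend, Black defendants were roughly twice as likely as white defendants to have been flagged high risk. The sketch below shows how such a false positive rate comparison is computed; the records are synthetic and the group labels and numbers are invented for illustration, not drawn from the COMPAS data.

```python
# Compare false positive rates across groups: P(flagged | did not reoffend).
# All records below are synthetic and purely illustrative.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True),   ("A", False, False),
    ("A", True, False), ("A", False, True),  ("A", True, True),
    ("B", False, False), ("B", True, True),  ("B", False, False),
    ("B", False, True),  ("B", True, False), ("B", False, False),
]

flagged_neg = defaultdict(int)  # flagged high risk but did not reoffend
total_neg = defaultdict(int)    # everyone who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        total_neg[group] += 1
        if flagged:
            flagged_neg[group] += 1

for group in sorted(total_neg):
    fpr = flagged_neg[group] / total_neg[group]
    print(f"group {group}: false positive rate = {fpr:.0%}")
```

A further complication: a tool can be well calibrated (a given risk score means the same thing in both groups) and still show unequal false positive rates whenever the groups’ base rates differ. This impossibility result is one reason the COMPAS debate admits no purely technical resolution.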
The ethical challenges of AI raise distinctive philosophical questions:
- Transparency: Many AI systems are “black boxes” whose decision-making processes are opaque even to their designers. The right to an explanation — the idea that individuals affected by automated decisions should be able to understand the reasoning behind them — is increasingly recognized as an ethical requirement (one minimal notion of explanation is sketched after this list).
- Accountability: When an AI system causes harm, who is responsible? The designer? The deployer? The provider of the training data? The diffusion of responsibility across complex sociotechnical systems creates an “accountability gap.”
- Value alignment: The challenge of ensuring that AI systems act in accordance with human values is both a technical and a philosophical problem. Whose values should AI systems reflect?
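What counts as an “explanation” is model-relative. For a linear scoring model, one minimal notion is the decomposition of a score into per-feature contributions, as in the sketch below (the feature names, weights, and decision threshold are all hypothetical). Deep neural networks admit no such exact decomposition, which is why post-hoc approximation methods, and the right to explanation itself, remain contested.

```python
# One minimal notion of "explanation": decompose a linear model's score
# into per-feature contributions. All names and weights are hypothetical.
weights = {"prior_convictions": 0.8, "age": -0.05, "employment_years": -0.3}
bias = 1.0
applicant = {"prior_convictions": 2, "age": 30, "employment_years": 4}

contributions = {f: w * applicant[f] for f, w in weights.items()}
score = bias + sum(contributions.values())

print(f"score = {score:.2f} -> {'flag' if score > 0 else 'no flag'}")
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {value:+.2f}")
```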
The ethics of AI is a rapidly evolving field that draws on computer science, philosophy, law, and social science. It represents one of the most important areas where the philosophy of science intersects with urgent practical concerns.
Merton’s Norms: CUDOS
The sociologist Robert K. Merton, in his 1942 essay “Science and Technology in a Democratic Order” (later reprinted as “The Normative Structure of Science”), identified four norms that he argued constitute the ethos of science — the moral framework that governs the scientific community. These norms are often remembered by the acronym CUDOS:
Communalism (or Communism)
Scientific knowledge is common property. The results of research should be shared freely with the scientific community. Secrecy is antithetical to science. Scientists contribute to a common pool of knowledge and receive credit (through citations and recognition) rather than financial compensation.
Universalism
Scientific claims should be evaluated on impersonal criteria (evidence, logical consistency), not on the basis of the scientist’s nationality, race, gender, religion, or institutional affiliation. The truth of a claim does not depend on who makes it.
Disinterestedness
Scientists should pursue truth rather than personal gain. The reward structure of science (peer review, replication, public scrutiny) creates institutional checks against self-interested behavior. This does not mean scientists lack personal motivations, but that the institution of science constrains the influence of those motivations.
Organized Skepticism
Scientific claims should be subjected to rigorous critical scrutiny before being accepted. No claim is sacred; all claims are open to challenge and revision. This norm is embodied in the practice of peer review and the emphasis on replicability.
Merton’s norms have been enormously influential, but they have also been criticized as an idealized picture of science that bears little resemblance to actual scientific practice. The sociologist Ian Mitroff proposed a set of “counter-norms” that better describe how science actually operates: solitariness (secrecy and competition), particularism (judging work by who produced it), interestedness (pursuing personal and institutional goals), and organized dogmatism (defending one’s own theories against criticism).
The question is whether Merton’s norms are descriptive (this is how science actually works) or prescriptive (this is how science should work). Most contemporary sociologists of science treat them as prescriptive ideals that are imperfectly realized in practice — aspirational norms that serve as standards for evaluating the health of scientific institutions.
Whistleblowing and Scientific Integrity
Scientific integrity depends not only on the good behavior of individual scientists but on institutional mechanisms for detecting and correcting misconduct. Whistleblowers — individuals who report misconduct within their institutions — play a crucial role in this system. Yet whistleblowers in science, as in other domains, often face severe retaliation: loss of employment, professional ostracism, and damage to their careers.
The case of Peter Wilmshurst, a British cardiologist who reported fraud in clinical trials, is illustrative. Wilmshurst faced a protracted libel suit from the device manufacturer he had exposed, and the scientific community was slow to support him. His case highlights the vulnerability of whistleblowers and the need for stronger institutional protections.
More broadly, the structures of scientific integrity include:
- Research integrity offices: Government agencies, such as the U.S. Office of Research Integrity (ORI), charged with investigating allegations of research misconduct.
- Retraction Watch: An independent organization that tracks retractions of published papers, providing transparency about the correction of the scientific record.
- Open science practices: Pre-registration, open data, open code, and open access publishing increase transparency and make fraud and error easier to detect.
- Ethics training: Mandatory training in responsible conduct of research for graduate students and postdoctoral researchers, though the effectiveness of such training is debated.
The philosopher of science Janet Kourany has argued that scientific integrity is not merely a matter of avoiding misconduct but of actively promoting good epistemic practices. In Philosophy of Science After Feminism (2010), Kourany proposes an “ideal of socially responsible science” that goes beyond the narrow focus on FFP to encompass broader questions about whose interests science serves and how its products are used.
Essential Readings
- The Nuremberg Code (1947) and the Declaration of Helsinki (1964/2013).
- The Belmont Report (1979). National Commission for the Protection of Human Subjects.
- Merton, R.K. (1942). “The Normative Structure of Science,” reprinted in The Sociology of Science (1973).
- Singer, P. (1975). Animal Liberation, Chapters 1–2.
- Oreskes, N. & Conway, E. (2010). Merchants of Doubt, Chapters 1–3.
- Doudna, J. & Sternberg, S. (2017). A Crack in Creation, Chapters 7–9.
- Miller, S. & Selgelid, M.J. (2007). “Ethical and Philosophical Consideration of the Dual-Use Dilemma in the Biological Sciences,” Science and Engineering Ethics 13.
- O’Neil, C. (2016). Weapons of Math Destruction, Chapters 1–4.