The Principles of Prevention, Precaution, Prudent Vigilance, Polluter Pays, Gambler’s, and Proaction
One of the most difficult times to make ethical decisions is when there is great uncertainty about what the best decision is, or about how to achieve that best end. Here I will present six contemporary principles, or risk standards: approaches for dealing with uncertainty and risk (for discussion of some historical approaches, see Probabilism). I will explain each principle and give examples, then discuss some common themes.
A key point of connection between risk standards and ethics is that in riskier situations it often makes sense to use more stringent risk standards, and in the riskiest situations, the most stringent risk standards are more likely to be ethically justifiable. These risk standards might be helpfully connected to the Markkula Center’s Framework for Ethical Decision Making when making ethical decisions in uncertain situations.
It is also worth noting that risk tolerance can vary significantly between individuals and between cultures, so disagreements will often appear when discussing the ethics of risks. That does not make ethical decision making impossible; it just means that it may be more difficult, and that communication is very important so that all involved groups know and understand what is going on, how, and why.
1) The Prevention Principle takes a highly cautious approach towards ethical decision making because it specifically relates to situations with certainty of negative outcomes. It follows the general rule that “prevention is better than cure,” and therefore harms ought to be anticipated and pre-empted, rather than experienced and solved later (as in the “Polluter Pays Principle”).
This principle is generally uncontroversial in cases where cause and effect are clear and certain; it is when it moves towards uncertainty that more controversy appears, and the Precautionary Principle tends to be invoked instead. [1]
Examples: the Prevention Principle would promote placing safety requirements on automobiles (such as seat belts and airbags), since accidents across a population are a statistical certainty, and it is better to prevent or reduce injuries than to cope with them afterwards. Similarly, polluting industries might face regulations requiring them to reduce or prevent certain types of pollution, as in using flue-gas desulfurization (sulfur dioxide scrubbers) on coal-fired power plants to prevent acid rain.
2) The Precautionary Principle is an approach to risk management and ethical decision making which seeks to prevent possible harms in cases where there is not yet scientific consensus on connections between cause and effect. The approach merely requires that there be a plausible scientific connection, not that it be certain. This approach makes it more likely that damages are avoided, since waiting for the damage to occur (and thus establish the connection) would be too late.
This is a more stringent risk standard than the Prevention Principle because it acts even when causation is uncertain. Over time, as causation becomes clearer (thereby decreasing uncertainty), this approach can shift towards prevention (if the connection is established), be dropped (if the connection is not established), or give way to another approach (if the situation remains complicated). [2]
Examples: the Precautionary Principle is standard for the pharmaceutical approval process in most nations, where new medicines are approved slowly, under careful conditions, so as to avoid widespread social harms. Another example includes the responses of some nations towards genetically modified organisms (GMOs), where safety suspicions delayed deployment until more certainty was established.
3) Prudent Vigilance is an approach to risk which seeks to proceed with the potentially risky behavior while remaining vigilant of risks that might be developing or becoming more certain as one proceeds. It seeks to establish processes for assessing likely benefits and risks before, during, and after an undertaking, and continues “to evaluate safety and security as technologies develop and diffuse into public and private sectors.” [3] Prudent vigilance allows for risk-taking behavior, but with the understanding that ongoing evaluation is necessary. [3, 4]
Examples: Prudent Vigilance was a cornerstone for the United States’ Obama-era Presidential Commission for the Study of Bioethical Issues, in their 2010 report on the ethics of synthetic biology and other emerging technologies. It has remained a principle for discussion and consideration in this field, and has expanded to a few others, including environmental protection and international relations. [5, 6]
4) The Polluter Pays Principle is a risk standard which permits risk-taking behavior and then, if something goes wrong, assigns clean-up for the harms to those who created the harms. [1] This risk standard is responsive rather than anticipatory, and assumes that risk takers will either self-police (and not make errors), or, if self-policing fails, will be capable of making up for the harms they have produced. Ethically, Polluter Pays values freedom and responsibility, and assumes that, for the most part, people lack the power to significantly affect the future, and that those who can affect the future are meticulously careful, honorable, and benevolent.
Because of growing technological power, this principle is now obsolete in many cases, as damages can sometimes be planetary in scale, long-term, and irreversible. In cases where it is difficult to hold entities responsible for their actions, or where the damage is too great for them to redress, a more anticipatory strategy makes more sense. Additionally, the complexity of society can make it more likely that unscrupulous entities will not be held accountable.
Examples: the Polluter Pays Principle is at work in any situation where it is assumed that harms can be tolerated, and the agents of that harm held accountable for their actions, typically through legal or legislative recourse. Environmental dumping, even on a small scale, such as littering, sometimes shows this principle in action, as the polluter is typically fined for their misdeed.
5) The Gambler’s Principle counsels risk takers to avoid risking damages which, if they occurred, would be ethically unacceptable, ranging up to the largest technological disasters, including global catastrophic and existential risks. Philosophers of technology Hans Jonas and Michael Davis have each advocated this approach, Jonas describing it as forbidding “any ‘va banque’ [“go for broke” or “all in”] game in the affairs of humanity,” [7] and Davis as “don’t bet more than you can afford to lose.” [8]
Davis describes this principle in more detail: “If we (society at its rational best) would reject any plausible benefit in exchange for suffering that harm, we (that part of society making the decision) should, all else equal, rule out any design that risks that harm (however small the probability — so long as it is finite).” [8] Put another way, if a risk can be voluntarily assumed or declined, then for any unacceptable harm, if the probability is non-zero, then the risk is too high, and is therefore unethical and should not be taken. [9, 10]
This risk standard is focused only on the very largest and worst harms, while ignoring more mundane harms. It is anticipatory in nature towards these larger harms, and responsive in nature towards smaller harms. In this way, it can be viewed as more like the Prevention or Precautionary Principles with respect to larger harms and the Polluter Pays Principle with respect to smaller harms.
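To make the decision rule concrete, here is a minimal sketch of the Gambler’s Principle in Python. The scenarios, probabilities, and two-way outcome below are hypothetical illustrations of the rule as Davis states it, not anything specified in the sources:

```python
# A minimal sketch of the Gambler's Principle as a decision rule.
# The scenarios and probabilities below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    probability: float  # estimated probability of the harm occurring
    unacceptable: bool  # would society reject any benefit in exchange?

def gamblers_principle(h: Hazard) -> str:
    """Forbid any act with a non-zero chance of an unacceptable harm;
    defer ordinary harms to other standards (e.g., Polluter Pays)."""
    if h.unacceptable and h.probability > 0:
        return "forbidden (never bet more than you can afford to lose)"
    return "permitted (handle under another risk standard)"

for hazard in [
    Hazard("catastrophic meltdown deemed unacceptable", 1e-6, True),
    Hazard("ordinary traffic accident", 1e-2, False),
]:
    print(f"{hazard.name}: {gamblers_principle(hazard)}")
```

Note how the sketch mirrors the principle’s two-tier character: the unacceptable harm is forbidden at any non-zero probability, while the mundane harm is simply passed along to some other risk standard.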
Examples: the Gambler’s Principle would counsel rejecting the construction of a nuclear power plant, if a meltdown and subsequent radioactive pollution were deemed an unacceptable risk. Another example might be the development of self-replicating nanotechnology, which could bring great benefits but risks consuming the world if weaponized or if it escapes control. In other cases, such as car accidents or more “average” harms, this principle permits the risky behavior and a reactive response if necessary, or it defers to another risk standard.
6) The Proactionary Principle is an approach to risk-taking behavior which argues that innovation and technological progress should be pursued with speed. [11] It characterizes current risk conditions as unacceptably bad (i.e., unethical), and therefore argues that other risks ought to be taken in order to escape the current risky state. It emphasizes action now, even in the face of possible negative effects, because if action is not taken now, the current unacceptable state will continue, and the future itself may be at stake.
It is optimistic in assuming that the future will be better, despite the risks taken to get there (and any possible ongoing harms from those risks), and is pessimistic about the current state of the world. The Proactionary Principle places faith in the benefits of technological progress. It does not cope well with the most disastrous and irreversible risks of technology, such as existential risks.
Examples: the Proactionary Principle is visible anytime a risk is deemed to be worth the reward, e.g., when taking a new job, buying a house, or starting a business. With respect to technological development, it could be used to promote technologies such as radical life extension, space settlement, peace-building technologies, and environmental sustainability technologies, arguing that they ought to be developed as quickly as possible because our current situation is quite dire. Historically, the Manhattan Project followed the Proactionary Principle due to fear that Nazi Germany would obtain the atomic bomb first, and the effort was pushed forward even as some of its scientists worried that the bomb risked igniting the Earth’s atmosphere and destroying all life. [12, 13]
Discussion
There are several ethical dimensions at play in these principles. A first is whether they are anticipatory of harms or reactive/responsive to them. In the past, permitting harms and then reacting to them was considered acceptable in many cases, since the harms involved were often smaller in scale.
As a second, related dimension, there is the question of whether entities can be trusted to make amends for their damages after the fact, or whether they are likely to shirk their responsibilities and go unpunished, thus contributing to social degradation and a breakdown of trust. The more likely it is for damages to go unpunished and/or unredressed, the more important it is to prevent them. Given the complex interactions of entities across the globe and over time, and the growing uncertainty of causal connections, lack of accountability has increased and is likely to continue to do so.
Relatedly, a third dimension is the magnitude of the harms at stake. As technology has expanded the human capacity for disaster, a greater need for anticipation and pre-emption has emerged. Irreversible harms such as species extinctions, and harms of massive scale both spatially and temporally, such as climate change, have necessitated new ways of looking at the ethics of risk.
A fourth dimension is the probability or uncertainty of the risk. As technology has expanded human power, it has also increased our scope of action in unpredictable ways, and therefore uncertainty about the effects of our choices has increased. Every new technology deployed is something like a socio-environmental experiment, exploring the world for effects, both anticipated and unanticipated. In this environment of enhanced uncertainty, risk is much harder to calculate, uncertainty much higher, and therefore risk ought to be avoided more carefully.
Combining some of these dimensions is possible through the “Risk Equation,” often written as Risk = Probability × Harm, or R = p × L, where “R” is risk, “p” is probability, and “L” is loss or harm. The Risk Equation informs several of the above principles and offers a useful interpretive framework for seeing how some aspects of these principles relate to each other.
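As a rough illustration of the equation and of its limits, here is a minimal sketch in Python; all probabilities and loss figures are invented for the example:

```python
# A minimal sketch of the Risk Equation: R = p * L (expected loss).
# All numbers below are invented for illustration.

def risk(probability: float, loss: float) -> float:
    """Return expected loss, R = p * L."""
    return probability * loss

everyday = risk(probability=0.01, loss=10_000)   # frequent, modest harm
rare_huge = risk(probability=1e-7, loss=1e12)    # rare, enormous harm

print(f"everyday:  R = {everyday:,.0f}")     # R = 100
print(f"rare/huge: R = {rare_huge:,.0f}")    # R = 100,000

# The equation strains where the Gambler's Principle applies: if L is
# effectively unbounded (an irreversible, existential harm), then R remains
# unacceptably large for any non-zero p, however small.
```

The comparison also suggests why the principles above diverge: they weight p and L differently, especially when one of the two is extreme or highly uncertain.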
Lastly, these principles are not presented with the intent of advocating any particular one. Each has its uses, depending on the circumstances. However, it is worth noting that as human impact on the world has increased in recent decades (with both potential technological harms and overall uncertainty growing), societal risk tolerances have understandably reacted, and it may seem that there has been an overall shift towards more risk-averse approaches.
Perceived another way, however, it is merely that the world has changed while societal risk tolerances have remained steady, and these steady preferences have gradually registered the shift in power within the techno-social environment. In other words, it is risk that has increased, not risk aversion. In a world where there are more dangerous choices, there is more to say “no” to, [14] and a greater role for ethics as well.
References
[1] World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), “The Precautionary Principle” (Paris: United Nations Educational, Scientific and Cultural Organization (UNESCO), 2005) 7-8. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000139578
[2] “Precautionary Principle,” Glossary of Summaries, EUR-Lex: Access to European Union Law, website, accessed July 6, 2016. Available at: http://eur-lex.europa.eu/summary/glossary/precautionary_principle.html
[3] Presidential Commission for the Study of Bioethical Issues, “New Directions: The Ethics of Synthetic Biology and Emerging Technologies,” Washington, D.C., December 2010, 27, 123. Available at: https://bioethicsarchive.georgetown.edu/pcsbi/synthetic-biology-report.html
[4] Amy Gutmann, “The Ethics of Synthetic Biology: Guiding Principles for Emerging Technologies,” Hastings Center Report (July-August 2011): 17-22. Available at: https://onlinelibrary.wiley.com/doi/pdf/10.1002/j.1552-146X.2011.tb00118.x
[5] Alison McLennan, “Chapter 5: Environmental risk: uncertainty, precaution, prudent vigilance and adaptation,” in Regulation of Synthetic Biology: BioBricks, Biopunks and Bioentrepreneurs, Elgar Studies in Law and Regulation (Cheltenham, UK: Edward Elgar Publishing, 2018). Précis available at Elgar Online: https://www.elgaronline.com/abstract/9781785369438/14_chapter5.xhtml?
[6] Keir Giles, “Russia Hit Multiple Targets with Zapad-2017,” U.S.-Russia Insight, Carnegie Endowment for International Peace, January 2018. Available at: https://carnegieendowment.org/files/Giles_Zapad_web.pdf
[7] Hans Jonas, The Imperative of Responsibility (Chicago: University of Chicago Press, 1984), 38.
[8] Michael Davis, “Three nuclear disasters and a hurricane,” Journal of Applied Ethics and Philosophy 4 (August 2012): 8. Available at: https://eprints.lib.hokudai.ac.jp/dspace/bitstream/2115/50468/1/jaep4-1_micael%20davis.pdf
[9] Brian Patrick Green, “Transhumanism and Roman Catholicism: Imagined and Real Tensions,” Theology and Science 13:2 (2015): 196.
[10] Brian Patrick Green, “Little Prevention, Less Cure: Synthetic Biology, Existential Risk, and Ethics,” Workshop on Research Agendas in the Societal Aspects of Synthetic Biology, Tempe, Arizona, November 4-6, 2014. Available at: https://cns.asu.edu/sites/default/files/greenp_synbiopaper_2014.pdf
[11] Max More, “The Proactionary Principle, Version 1.0,” Extropy.org, 2004. Available at: http://www.extropy.org/proactionaryprinciple.htm
[12] Emil Konopinski, Cloyd Marvin, and Edward Teller, “Ignition of the Atmosphere with Nuclear Bombs,” Classified US Government Report (declassified 1979), August 14, 1946. Available at: https://fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf
[13] Daniel Ellsberg, “Risking Doomsday I: Atmospheric Ignition,” in The Doomsday Machine: Confessions of a Nuclear War Planner (New York: Bloomsbury, 2017), 274-85.
[14] Brian Patrick Green, “The Catholic Church and Technological Progress: Past, Present, and Future.” Religions, special issue guest edited by Noreen Herzfeld, 1 June 2017, 8(106): 12. Available at: https://doi.org/10.3390/rel8060106