Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood.
In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction via the maxipok rule can serve as a strongly action-guiding principle for utilitarian concerns.
I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.

The maxipok rule

1. Existential risk and uncertainty

An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom). Although it is often difficult to assess the probability of existential risks, there are many reasons to suppose that the total such risk confronting humanity over the next few centuries is significant.
But perhaps the strongest reason for judging the total existential risk within the next few centuries to be significant is the extreme magnitude of the values at stake. Even a small probability of existential catastrophe could be highly practically significant (Bostrom; Matheny; Posner; Weitzman).

Humanity has survived what we might call natural existential risks for hundreds of thousands of years; thus it is prima facie unlikely that any of them will do us in within the next hundred.
Empirical impact distributions and scientific models suggest that the likelihood of extinction from these kinds of risk is extremely small on a time scale of a century or so. In contrast, our species is introducing entirely new kinds of existential risk, threats we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism.
Consideration of specific existential-risk scenarios bears out the suspicion that the great bulk of existential risk in the foreseeable future consists of anthropogenic existential risks — that is, those arising from human activity.
In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology.
As our powers expand, so will the scale of their potential consequences — intended and unintended, positive and negative.
For example, there appear to be significant existential risks in some of the advanced forms of biotechnology, molecular nanotechnology, and machine intelligence that might be developed in the decades ahead. The bulk of existential risk over the next century may thus reside in rather speculative scenarios to which we cannot assign precise probabilities through any rigorous statistical or scientific method.
But the fact that the probability of some risk is difficult to quantify does not imply that the risk is negligible. Probability can be understood in different senses. Most relevant here is the epistemic sense, in which probability is the credence that a reasonable observer should assign given the available evidence. An empty cave is unsafe in just this sense if you cannot tell whether or not it is home to a hungry lion.
It would be rational for you to avoid the cave if you reasonably judge that the expected harm of entry outweighs the expected benefit. The uncertainty and error-proneness of our first-order assessments of risk is itself something we must factor into our all-things-considered probability assignments.
This factor often dominates in low-probability, high-consequence risks — especially those involving poorly understood natural phenomena, complex social dynamics, or new technology, or that are difficult to assess for other reasons.
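The cave decision above is a simple expected-value comparison. A minimal sketch, using made-up numbers that are not from the paper, shows how even a modest probability of a large harm can dominate the calculation:

```python
# Illustrative sketch (numbers are hypothetical): deciding whether to enter
# a cave when you cannot tell if it is home to a hungry lion.

def expected_value(p_lion: float, harm_if_lion: float, benefit: float) -> float:
    """Expected value of entering: the benefit minus the probability-weighted harm."""
    return benefit - p_lion * harm_if_lion

# Suppose shelter is worth 10 units, being mauled costs 1000 units,
# and your credence that a lion is inside is 5%.
ev_enter = expected_value(p_lion=0.05, harm_if_lion=1000.0, benefit=10.0)
ev_stay = 0.0  # staying outside is the baseline

decision = "avoid the cave" if ev_enter < ev_stay else "enter"
print(ev_enter, decision)  # -40.0 avoid the cave
```

Even with only a 5% credence in the lion, the expected harm (50 units) outweighs the expected benefit (10 units), so avoidance is rational.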
Suppose that some scientific analysis A indicates that some catastrophe X has an extremely small probability P(X) of occurring. Then the probability that A has some hidden crucial flaw may easily be much greater than P(X). We may then find that most of the risk of X resides in the uncertainty of our scientific assessment that P(X) was small (figure 1) (Ord, Hillerbrand and Sandberg). Factoring in the fallibility of our first-order risk assessments can amplify the probability of risks assessed to be extremely small.
Figure 1: An initial analysis (left side) gives a small probability of a disaster (black stripe). But the analysis could be wrong; this is represented by the gray area (right side). Most of the all-things-considered risk may lie in the gray area rather than in the black stripe.
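This meta-level point can be made concrete with the law of total probability. The sketch below uses invented numbers, not estimates from the paper: even if an analysis assigns a catastrophe a vanishingly small probability, a modest credence that the analysis itself is flawed can dominate the all-things-considered risk.

```python
# Illustrative sketch with hypothetical numbers: combining an analysis's own
# verdict with the chance that the analysis has a hidden crucial flaw.

def all_things_considered(p_flaw: float, p_x_given_flaw: float,
                          p_x_per_analysis: float) -> float:
    """Total probability of X, marginalizing over whether analysis A is sound."""
    return p_flaw * p_x_given_flaw + (1 - p_flaw) * p_x_per_analysis

p = all_things_considered(
    p_flaw=0.01,            # credence that A has a hidden crucial flaw
    p_x_given_flaw=0.05,    # fallback estimate of X if A is wrong
    p_x_per_analysis=1e-9,  # A's own verdict: X is vanishingly unlikely
)
print(p)  # roughly 5e-4: almost all the risk resides in the "gray area"
```

Here the flaw term (0.01 × 0.05 = 5 × 10⁻⁴) exceeds the analysis's own verdict by more than five orders of magnitude, mirroring the gray area in figure 1.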
Qualitative risk categories

Since a risk is a prospect that is negatively evaluated, the seriousness of a risk — indeed, what is to be regarded as risky at all — depends on an evaluation. Before we can determine the seriousness of a risk, we must specify a standard of evaluation by which the negative value of a particular possible loss scenario is measured.
There are several types of such evaluation standard. But here we will consider a normative evaluation, an ethically warranted assignment of value to various possible outcomes. There are conflicting theories in moral philosophy about which normative evaluations are correct.
I will not here attempt to adjudicate any foundational axiological disagreement. Instead, let us consider a simplified version of one important class of normative theories.
Let us suppose that the lives of persons usually have some significant positive value and that this value is aggregative in the sense that the value of two similar lives is twice that of one life.
Let us also assume that, holding the quality and duration of a life constant, its value does not depend on when it occurs or on whether it already exists or is yet to be brought into existence as a result of future events and choices.
These assumptions could be relaxed and complications could be introduced, but we will confine our discussion to the simplest case. We can distinguish different classes of risk by means of three variables: the scope of the population affected, the severity of the harm, and the probability of its occurrence. Using the first two of these variables, we can construct a qualitative diagram of different types of risk (figure 2).
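The aggregative assumption can be stated as a simple linearity condition, and it is what gives small reductions in extinction probability their enormous expected value. The sketch below is purely illustrative; the population and probability figures are hypothetical placeholders, not estimates from the paper.

```python
# Illustrative sketch of the aggregative assumption: the value of similar
# lives adds linearly, and timing (existing vs. future) makes no difference.
# All numbers are hypothetical.

def total_value(num_lives: int, value_per_life: float) -> float:
    """Under aggregation, two similar lives are worth twice one life."""
    return num_lives * value_per_life

one = total_value(1, 1.0)
two = total_value(2, 1.0)
assert two == 2 * one  # the aggregative condition

# On this view, even a tiny reduction in extinction probability scales with
# the number of potential future lives at stake.
future_lives = 10**16   # hypothetical potential future population
risk_reduction = 1e-8   # a very small cut in extinction probability
print(total_value(future_lives, 1.0) * risk_reduction)  # on the order of 1e8
```

With these placeholder figures, a one-in-a-hundred-million reduction in extinction risk has an expected value on the order of a hundred million life-units, which is the arithmetic behind the claim that even small reductions in net existential risk have enormous expected value.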
The probability dimension could be displayed along the z-axis.