Full essay available here.
I’m pretty excited to announce that I won the 2016 Sentience Politics Essay Prize for my essay ‘On terraforming, wild-animal suffering and the far future’. In this essay I explore some concepts that many would consider quite ‘weird’. However, they are becoming increasingly key in discussions about ethics and effective altruism. I’ve copied my conclusions below, but I encourage you to read the full article.
“This essay sought to provide an overview of the literature relevant to wild-animal suffering, terraforming and the far future. The suffering of wild animals, including invertebrates, is likely a large source of pain, and spreading wild animals to other planets is expected to be astronomically bad. Even the risk of this dictates extreme caution. Some of the ethical considerations important for discussing wild-animal suffering were also covered, and some new insights were offered. In particular, some recommended actions and a research agenda were proposed. Some key conclusions of the essay are outlined below.”
- “Discussion of the best underlying philosophy is critical, as several different ethical codes (including negative and classical hedonistic utilitarianism) each arrive at different answers to the question of what to do about the far future.
- Without AGI, terraforming of Mars and the spreading of wildlife to other planets may be possible in 150 years. It is highly likely, but not a foregone conclusion, that AGI will reach an intelligence explosion by that point.
- If Mars is terraformed, it is plausible that it can eventually become home to almost as much wild-animal suffering as there currently exists on Earth’s land.
- “Values spreading is one of the highest-impact ways to positively shape the far future, although we first need to be confident that we are spreading the best values.
- There is a limited amount of time to solve the value-spreading problem as it applies to wild-animal suffering: encouraging concern for wild animals, promoting utilitarianism (or otherwise identifying the true or best moral theory under normative uncertainty), and spreading concern for spreading wellbeing. These problems are also critical for determining what values to load into an AGI.
- I have proposed some reasons why person-affecting views and negative utilitarianism may be flawed, and argue in favour of classical hedonistic utilitarianism, though I am not 100% certain of this (nor will I ever be, due to normative uncertainty); this is meant to create dialogue as well as to criticise.
- We will never be 100% certain that we have identified the best values, and therefore we should consider how certain we want to be before we switch to primarily focusing on spreading values. Once the majority of society holds the values that we believe, with some degree of certainty, are best, we can then focus further on ensuring that the values we have chosen are the best ones. A thorough investigation of this is well beyond the scope of this essay, but is strongly called for.”
“Some of the conclusions of this essay are tentative, and would benefit from significantly more consideration and research. This essay was meant to suggest some solutions and insights to important questions and encourage discussion.”
“I argue for caution towards terraforming Mars or otherwise colonising space due to the risk of spreading wild-animal suffering (or suffering in general in the long term), and instead I recommend undertaking high-impact research to determine the expected value of the future. I also strongly urge discussion to determine the best ethical theory, then to determine the best values to spread under that theory, followed by researching the best ways to spread them, and finally acting to spread them.”
“Surely (assuming I am right about the normative issues), the best outcome is spreading the maximum possible wellbeing throughout the universe, and the worst outcome is spreading the maximum possible suffering. These are the realisation of Sam Harris’ best and worst possible worlds. We are now in a position to set up the future such that the best possible world becomes a reality, and it is imperative that we do so. Nothing else, save perhaps ensuring that there is a future for sentience, is more important.”
What does AGI stand for?
Artificial general intelligence.