A lot of effective altruists think that working on disasters threatening to seriously curtail humanity's progress into the far future is orders of magnitude more important than working on problems that merely improve the present. The reasoning is that global catastrophic risks (GCRs) not only threaten to wipe out everyone on the planet, but also to eliminate the countless generations that would have existed had we not gotten ourselves killed. I think GCRs are a good thing to have people working on, but I'm skeptical that they surpass more common-sense causes like deworming, vaccinating, and distributing insecticide-treated bed nets.
I think we need to distinguish between two questions. The first is: Where do all the utilons live? The second, and the one we should actually be asking, is: What can I do to maximize the world's goodness?
The first question is about identifying areas with high potential for impact. The second question is what effective altruism is actually about. Knowing where the utilons live doesn't answer the fundamental EA research question: you can locate a mountain of utilons yet have no way to access them. If that's the case, then it's better to work on the things you can actually do something about.
The total amount of suffering on Earth is dominated by the pains of insects, other invertebrates, and fish. This is where tons of utilons live; in other words, wild animal suffering reduction is an area with high potential for positive impact. If there were an action we could take that reduced a huge portion of insect suffering, for instance, it would dwarf nearly any other cause. We could call this an area that is home to a lot of utilons. But how do we access them? For insect suffering to rival other causes, we need to be talking about massive numbers of insects. There's no obvious thing we could do to reduce the suffering of that many insects, though. (Or is there?) And if there were, it likely wouldn't rival interventions we could make in less utilon-populated areas. If that's the case, then the reasonable approach toward insect suffering is to keep it on the back burner while we prioritize other issues.
I think the far future, as a cause, is a lot like insect suffering. Humanity's continued survival might be the most important variable to preserve if we want to maximize, and keep maximizing, the world's goodness. That's where all the utilons live. But what can we do about it? No individual far-future-related cause stands out to me as especially worthwhile; none of them appear to rival the best present-related causes we know of. Most future-related causes endorsed by effective altruists are highly speculative and conjunctive. With this post, I'll make many weak arguments for why I think taking steps to reduce GCRs is not an optimal cause for most people to work on.
First, not only do these causes need to be based on arguments that actually work (e.g. AGI will come & that is dangerous), but they also require that specific important events occur within a narrow timeframe. For them to be our top priorities, they need to be imminent enough that we can justify ignoring other affairs in their favor. For example, if an intelligence explosion isn't going to happen for another 400 years, then MIRI's work is far less important than it would be if the intelligence explosion happens in 20 years; at that point the work crosses the boundary from "effective altruism" into "ordinary science." From an effective altruist perspective, the timeframe is highly relevant to claims about a cause's relative importance.
Further, in order to prioritize between different GCRs, we need to accurately predict the order in which events will occur. So if "Nanotechnology will come & that is dangerous" is true, but an intelligence explosion happens first, then nanotechnology will turn out not to have mattered nearly as much. Or if nuclear war happens, we may pass into an era in which life extension is neither desirable nor possible to research. Just as competing interventions dilute a cause's priority, competing ways for us to die dilute the threat posed by each individual risk, since we're uncertain about the order in which events will happen.
Given that the main reason for prioritizing GCRs is that they threaten to wipe out billions of potential future generations, we can and should also apply the above reasoning to events that would have happened had we survived a specific GCR. Maybe AGI kills us all while nanotechnology is on pace to wipe us out 5 years after the AGI apocalypse but never gets the chance. If we expect there to be multiple global catastrophes lined up for us in a row, then (1) our efforts shouldn't be completely centered on the first one and (2) we can't speak as if each individual disaster wipes away billions of generations. There's no reason to expect billions of generations if you foresee several serious existential risks. (The same argument applies to reducing infant mortality in really poor countries: a child saved from one disease can easily go on to die from something else well before a "normal dying age," so the number of life-years saved is smaller than it first sounds.)
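To make this concrete, here is a minimal sketch with purely hypothetical numbers (not a claim about actual risk levels): if humanity must survive an independent existential gamble each generation, the expected number of future generations is nowhere near billions unless the per-generation survival probability is extraordinarily close to 1.

```python
# Toy model with hypothetical numbers: humanity survives each generation
# independently with probability s. The expected number of future generations
# is then s + s^2 + s^3 + ... = s / (1 - s).
def expected_generations(s: float) -> float:
    return s / (1.0 - s)

for s in (0.90, 0.99, 0.999):
    print(f"per-generation survival {s:.3f} -> "
          f"~{expected_generations(s):,.0f} expected future generations")
```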
These theories of the far future also usually leave out the details of the societies these technological advancements spring from. There is often no mention of political struggles, cultural values, economic factors, laws and regulations, etc. I find it unlikely that any GCR scenario is largely unaffected by these things. As these major events draw closer to their arrival dates, public discussion will likely heat up, politicians will get elected based on how they view them, debates will be had, laws will be passed, and so on. Many far future theorists leave these details out and write from the perspective of technological determinism, as if inventors give birth to new creations like Black Swan events. I think sociopolitical pressures should be seen as positive things, much more likely to prevent disasters from happening than to prevent humanity from dealing with them. When disasters become imminent enough to scare us, they do scare us, and people start handling them.
Another aspect of the future that often gets left out of these discussions is the possibility that the next billion generations will include astronomical amounts of suffering, possibly enough to outweigh future flourishing. The utility in the world right now is likely net negative, and the thriving of humanity might just amplify this effect - for example, by spreading animal populations to other planets. Even if we do not expect suffering to outweigh flourishing, there will very likely exist huge amounts of both good and bad experiences, and we should consider what we roughly expect the ratio to be. We cannot naively talk about the immense worth of the far future without making any mention of the terrible things included in that future. Negative utilitarians should be especially interested in this point.
Here's an argument that I feel there's something to, but that I'm still figuring out. I think believers in the far future's immense net value may be making a philosophical mistake when they say the elimination of countless future generations is many orders of magnitude more terrible than the elimination of Earth's current 7 billion people. It's true that our 7 billion people could yield countless future generations, but the same is true of a single person. When a single person is killed, why don't we multiply the negative utility of that death by all the potential future humans it also takes away? That one individual could have had 2 kids, who each could have had 2 kids, and those kids would have had their own kids, and a billion generations later we would have a monstrous family tree on our hands. If one death isn't a billion deaths, then why are 7 billion deaths worth 7 quintillion?
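As a rough sketch of that family-tree arithmetic (assuming, unrealistically, exactly two children per person and no overlapping family lines):

```python
# Hypothetical doubling family tree: one person has 2 kids, each of whom has
# 2 kids, and so on. Generation n then contains 2**n descendants.
for n in (10, 30, 60):
    print(f"generation {n}: {2**n:,} descendants")
# Thirty generations of doubling already exceeds a billion descendants,
# long before the "billion generations" mentioned above.
```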
If one answers that one death is a billion deaths, then it seems to me as if she is amplifying the value of every individual human life way beyond what reason allows. For instance, this would make abortion a truly terrible crime. Another counter-argument could be that, in wiping out all humans, as opposed to only some, there's some kind of bonus emergent negative utility because there's no longer any possibility of future generations. The idea that groups of people should be morally valued at more than the sum of the morally relevant individuals that comprise them has some problematic implications, however: we probably wouldn't want to say that it is better to save a family of five than five individuals who don't know each other. One could also argue that there is a relevant upper limit on the number of human lives that could exist in the far future, such that the Earth's current population does not significantly affect the world's future population because we will hit that upper limit anyway. That is not at all clear to me. And if the response is that keeping alive a tiny probability of a massively positive future is worth more than a confirmed so-so outcome, then I think that's a case of Pascal's Mugging.
Lastly, as Holden Karnofsky pointed out in his recent conversation with MIRI, just "doing good things" has a really great track record, while the strategy of trying to steer humanity as a whole toward an optimal outcome has a comparatively weak one. The track record is so poor that ethical injunctions might even militate against such grand schemes. Probably because people are prone to overlooking the sociopolitical details, they are very bad at predicting how major cultural events will affect the future. Apocalyptic predictions in particular are known for striking out, though that comparison might be unfair. I see the flow-through effects favouring the "safe" side as well. Just doing good things like being nice to people, donating to great charities, not eating meat, and spreading good ideas is likely to be contagious. People like people who do obviously good things, whereas people are suspicious of those following some master plan that is supposed to pay off in a few decades or centuries, especially when those people are merely average at ordinary niceness. Valuing "weird" causes makes you less sympathetic, gets you taken less seriously, costs you funding and other opportunities, and leaves you generally more marginalized.
Despite these weaknesses, it might still be a good idea for you to work mainly on GCR reduction, since (1) it may be closest to your background, (2) the area is underfunded and underexplored, and (3) having people out there on GCR patrol increases the probability of us receiving GCR updates regularly and well in advance of any disasters. The fact that something isn't the optimal cause you could possibly be working on doesn't mean that it isn't a good cause.
Effective altruism is about what you can actually do that would be most likely to maximize the world's goodness. "The Far Future" isn't a thing you can do - it's just where all the utilons live. Prioritizing specific GCRs seems to suffer from several problems when one takes an outside view. I see education and openness to compromise as the real best bets for global catastrophic risk reduction. Fortunately, they're easy things to promote on the side, while trying to make today's world healthier and less painful.
(My comment was too long, so I broke it into two.)
Thanks for the post. I agree with a few points, while disagreeing with many others. Also, I'm not a conventional GCR advocate but come at the issue from an oblique angle, though I can pretend to be a conventional GCR person in some of my replies.
I think we can affect masses of insects now. A hectare of crop land contains millions of insects, which could be killed less painfully by more humane insecticides. Even if you weight insects linearly by number of neurons (and I think moral weight should be less than linear), that's still something like tens of thousands of neurons per insect * millions of insects = roughly 100 billion neurons, which is about the number of neurons in a human brain. And this is one hectare of crop land being sprayed once, out of potentially several sprayings in a single year. (As I try to point out, eliminating insecticide use altogether might actually be worse than using insecticides, so I'd encourage more humane ways of killing the same number of bugs.)
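A rough sketch of that back-of-envelope arithmetic (the per-insect and per-hectare figures are just the round numbers assumed above, not measurements):

```python
# Back-of-envelope version of the neuron comparison above; all figures are
# rough assumptions.
neurons_per_insect = 1e5       # on the order of tens to hundreds of thousands
insects_per_hectare = 1e6      # "millions" of insects per hectare of crop land
human_brain_neurons = 1e11     # roughly 100 billion neurons

insect_neurons = neurons_per_insect * insects_per_hectare
print(f"insect neurons per hectare: {insect_neurons:.0e}")                    # 1e+11
print(f"human-brain equivalents:    {insect_neurons / human_brain_neurons:.1f}")
```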
I think MIRI would still be very important if AGI were 400 years away. A lot of MIRI's work is philosophy and model generation, which is intrinsically important regardless of timelines, and which is potentially most useful if AGI is farther away, because the work takes so long to mature. I think MIRI's work would qualify as "doing good things" in Holden's sense. (Note: I disagree with some of MIRI's projects and values, but I tentatively think its work overall is important enough to be very valuable even by my lights.)
I think most GCR people agree that having many possible risks reduces the severity of any given one. However, keep in mind that most people think (setting aside anthropic / Great Filter considerations) that any given risk is rather unlikely to cause extinction (with a possible exception of AI, depending on your definition of extinction), so the overlap is relatively small. Say a 1% chance of extinction-level nuclear winter and a 3% chance of nanotech apocalypse gives just a 0.03% chance of both in the same future history, assuming unrealistically that they'd be independent.
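The overlap calculation spelled out (the 1% and 3% figures are the illustrative assumptions stated above):

```python
# Chance that two individually unlikely risks both occur in the same future
# history, assuming (unrealistically) that they are independent.
p_nuclear_winter = 0.01   # 1% chance of extinction-level nuclear winter
p_nanotech = 0.03         # 3% chance of a nanotech apocalypse

p_both = p_nuclear_winter * p_nanotech
print(f"chance of both in the same future: {p_both:.2%}")   # 0.03%
```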
In my view, most of the importance of GCRs has to do with their destabilizing effects rather than their extinction potential, which many people seem to think isn't huge. In that sense, GCRs are just another branch of ordinary political, social, and technological changes that society should handle in a smooth way.
I agree that many GCR people focus too much on technology alone. In a recent Facebook post, I said: "It seems as though people sometimes view catastrophic risks as primarily engineering problems. This is certainly the case for asteroids and somewhat for AI. But many catastrophic risks are more inherently political. Most catastrophic uses of nanotech are intentional harms, not accidental ones. Likewise for many bio risks. Improving political dynamics (e.g., the long-term goal being solid global governance) seems a main way to address many catastrophic risks. I think LessWrong avoids political issues because 'Politics is the Mind-Killer,' but in the process they miss out on important, nonpartisan political discussions where rationality and altruism have a lot to say."
I very much agree that we should worry about massive suffering in the future, and I encourage far-future thinkers to factor this into their assessments. However, it's plausible to me that reducing GCRs reduces expected future suffering more than it increases it, because most GCRs would cause social dislocation but not complete extinction. That said, I expect that directly promoting better outcomes, such as via cooperation research and advocacy, may have higher returns relative to my values.
One death now isn't worth a billion deaths because, as you suggest later, what matters (to conventional GCR people, not to me intrinsically though perhaps somewhat instrumentally) is whether future people exist at all. Whether the human population now is 3 billion or 10 billion, we should expect basically the same number of people to exist in the long run. In contrast, if you reduce the current population to 0, the number of people in the long run is 0. There's a discontinuity in the function that maps (# of people now) to (# of people eventually).
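A minimal sketch of that claimed discontinuity (the long-run figure is an arbitrary placeholder, not a prediction):

```python
# The eventual population is assumed to be roughly the same for any nonzero
# current population, but exactly zero if the current population is zero.
LONG_RUN_POPULATION = 1e16   # arbitrary placeholder for the long-run total

def eventual_population(current_population: int) -> float:
    return 0.0 if current_population == 0 else LONG_RUN_POPULATION

for now in (0, 3_000_000_000, 10_000_000_000):
    print(f"{now:>14,} people now -> {eventual_population(now):.0e} eventually")
```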
I'm skeptical of the suggestion that doing good stuff in general beat future thinking in history in terms of per person average impact. I don't have copious examples off the top of my head, but for example, people who thought about Cold War nuclear policy (e.g., Tom Schelling) or averted accidental missile launches (e.g., Stanislav Petrov) had *immense* impact. Maybe Norman Borlaug could be argued to have had comparable impact, but counterfactually probably not, because someone else would have done the Green Revolution, whereas a different nuclear policy might have changed the whole future course of history. I don't see why this trend should reverse substantially when talking about future technologies rather than current ones. DARPA has had immense return on investment by focusing on highly speculative, sci-fi-seeming technologies. (Note: I think faster technology is more likely net bad than good in general, though this is a weak overall judgment that's reversed in many specific cases.)
I agree there's something fishy about the far future because of anthropic considerations, and I think this is actually one of the strongest arguments. This piece suggests an example calculation in which the far future can still be pretty important even using a 10^-10 anthropic discount. 10^-10 is rather arbitrary, though, and when anthropic penalties become proportional to the size of the future, the calculation becomes less clear.
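To illustrate the shape of that calculation (the numbers here are placeholders chosen for illustration, not taken from the linked piece): a flat 10^-10 discount can still leave a sufficiently large future dominant, whereas a penalty that grows in proportion to the future's size cancels the size out entirely.

```python
# Hypothetical illustration with placeholder numbers.
near_term_value = 1e10     # assumed value of present-focused work, arbitrary units
far_future_size = 1e30     # assumed number of future beings at stake
flat_discount = 1e-10      # flat anthropic discount

discounted_future = far_future_size * flat_discount   # 1e20
print(discounted_future > near_term_value)            # True: far future still dominates

size_proportional_penalty = 1 / far_future_size
print(far_future_size * size_proportional_penalty)    # 1.0: the size cancels out
```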
"I think MIRI would still be very important if AGI were 400 years away."
Important, sure, but if we're speaking from the perspective of effective altruism, the switch from 20 years to 400 years is more than enough to fall out of contention for #1 donation spot in the world.
"In my view, most of the importance of GCRs has to do with its destabilizing effect rather than its extinction potential, which many people seem to think isn't huge. In that sense, GCRs are just another branch of ordinary political, social, and technological changes that society should handle in a smooth way."
I think this is a better and safer way to think about GCRs.
"One death now isn't worth a billion deaths because, as you suggest later, what matters (to conventional GCR people, not to me intrinsically though perhaps somewhat instrumentally) is whether future people exist at all."
It's pretty obvious to me that whether human people exist at all doesn't have intrinsic value. If we froze our world right now, it would likely have less utility than a world in which no living beings existed.
"Whether the human population now is 3 billion or 10 billion, we should expect basically the same number of people to exist in the long run."
This isn't at all clear to me unless we're talking about life on Earth. I think when people talk about humanity in a billion years, they are assuming humanity to be spread across other planets.
"I'm skeptical of the suggestion that doing good stuff in general beat future thinking in history in terms of per person average impact."
I wonder if any of the people you listed saw themselves as working on the world's #1 most important cause. Even if it is feasible to architect long-term outcomes, I'm not sure this is something we want the average person to be attempting. I think only a minority of attempts at this will work.
"the switch from 20 years to 400 years is more than enough to fall out of contention for #1 donation spot in the world."
Not sure about that. As I mentioned, some of MIRI's work becomes more important the more time we have. In general, I think society underinvests in long-term research, so at the margin, it's plausible MIRI would still be among the top few charities in the world if AI were 400 years off (which it's probably not).
"It's pretty obvious to me that whether human people exist at all doesn't have intrinsic value."
I probably misunderstand you. Surely future people matter a lot in either a positive or negative way, depending on their levels of wellbeing. (Or if you lean negative, they matter either more or less negatively depending on the extent of suffering in their civilization.)
"I think when people talk about humanity in a billion years, they are assuming humanity to be spread across other planets."
Yes, but that too is relatively insensitive to the current population (except slightly insofar as the current population shapes long-run trajectories, but in that case, it's plausible that a smaller population today implies a bigger population in the galaxy due to reduced resource competition and conflict during a crucial phase of Earth's development).
You don't think that, over the next 400 years, the most important part of MIRI's work would eventually get done by other people? I don't think it's plausible that MIRI is 400 years ahead of its time.
Delete"I probably misunderstand you. Surely future people matter a lot in either a positive or negative way, depending on their levels of wellbeing. (Or if you lean negative, they matter either more or less negatively depending on the extent of suffering in their civilization.)"
I mean that once you subtract the 7 billion lives, there's no additional bonus penalty for achieving state No More Humans. (There are worlds where the end of humanity is a way to maximize utility.)
"Yes, but that too is relatively insensitive to the current population (except slightly insofar as the current population shapes long-run trajectories, but in that case, it's plausible that a smaller population today implies a bigger population in the galaxy due to reduced resource competition and conflict during a crucial phase of Earth's development)."
Okay, fine. I'll cautiously concede that once you reach very low population levels, individuals start counting for more because they have a higher probability of playing a crucial role in keeping humanity going and multiplying.
The mistake is in assuming humanity will survive for so many generations that our survival is worth billions of generations of utility. From an outside view, the probability of humanity surviving for that long has to be microscopic.
One of the best ways to make sure something happens eventually is to start it now. Hopefully MIRI's work would happen eventually anyway, but it might take a really long time, and people might not think about it in the right ways. If Newton and Leibniz hadn't invented calculus, someone else would have, but it sure was useful for them to work on calculus back then rather than saying, "Meh, someone else will invent this later." Likewise with MIRI's work: they can start things now, and the field can continue to grow and blossom centuries into the future. MIRI's work is not easier than mathematics, and mathematicians have not run out of things to work on centuries after Newton.
I agree there needs to be focus on short-term issues as well, but on the margin, I think the long-term areas are relatively neglected.
Anyway, I agree that if strong AI were 400 years off, it would be less clear whether to focus on far-future issues related to it relative to short-term issues. But in practice, AI is not 400 years off. (Probably not more than 100 years off.)
I should also clarify that I don't know if the past centuries of mathematics were net good for humanity; I'm just using them as an example of a very long-term project that's helpful to start on early.
"There are worlds where the end of humanity is a way to maximize utility."
Do you mean because humans are replaced by happier robots? Or are you referring to a person-affecting ethical view?
"From an outside view, the probability of humanity surviving for that long has to be microscopic."
I agree it's fishy, which is what I meant with the anthropics comment. But we should have a lot of model uncertainty over anthropic arguments like this.
"Do you mean because humans are replaced by happier robots? Or are you referring to a person-affecting ethical view?"
Robots, animals, whatever. I hold the No-Difference View.