The fundamental research question of effective altruism is, “How
could I act so as to maximize the world’s goodness?” Another way to formulate
the same idea: “How could I act so as to maximize the amount of good I can do
over the course of my lifetime?” This question could also be translated as, “What
is the most important cause I could possibly be working on?”
I think something important gets lost in translation. The
third question is confused because it assumes that, if we could only identify
the most important cause in the world, then everyone could maximize utility by
simply working on that most important thing. But the optimal world requires an
equilibrium among many different efforts. We don’t want maximum resources
invested in the World’s Best Cause. We want maximum resources spent on
realizing the Best World Possible. As a mass strategy, it will be best for
everyone to stick to what he or she already does best, adjusting a little toward
effective altruist values. If every effective altruist with a business, arts,
or social work background jumped ship to do artificial general intelligence
research, there would be nobody sane left to run the other industries. If most
people stick to what they do best, the mass allocation of resources will be
more cost-effective.
The second question confuses “the world” with “my lifetime,”
an easy mistake to make. But effective altruism isn’t about getting into the
Guinness Book of World Records for Most Lives Saved Ever; it’s about maximizing
the total amount of good that gets done. In some cases, personally doing less
good will allow more good to get done overall.
Even the first question could be vulnerable to criticism, depending on your views.
First, negative utilitarians might object to the phrase “maximizing the world’s goodness,” reading it as putting too much weight on maximizing wellbeing relative to minimizing suffering. I think this is a disagreement about how we should define “the world’s goodness” more than a disagreement with the idea that maximizing the world’s goodness, correctly defined, should be our target.
Second, as I argued in Anchoring to Other Perspectives, aiming for the altruistic stars might, on average, lead to worse consequences than more humble altruistic efforts would. Again, this is a disagreement over the strategies we should use for world optimization rather than with the idea of making the world as good as we possibly can.
I agree that some EAs are too quick to encourage everyone to pursue a career path that typically looks good (e.g., earning to give). Personal preferences and avoiding burnout are strong reasons to make a more personalized choice; optimizing around your particular background is also relevant, although we’re all young and could change backgrounds without too much sunk cost. Lastly, I think EAs may slightly overstate the value differentials between possible paths. That said, there is value in challenging the idea that the best thing you can do is what you were going to do anyway, even if keeping ourselves happy makes it hard to deviate too far from that.