

Mr. Longtermist, Welcome to China
Accelerationist state capitalism is literally the perfect system for longtermism; or, ridiculing and objecting to longtermism.
- You live in a cave.
- You are a normal person with an actual social life.

If neither of these is true, you probably have heard of Longtermism.
It all begins on a rainy day. You are hiking Mt. Everest with a banana in hand. Midway up the mountain, you accidentally drop your banana peel. Now you are presented with a grave choice. You can bend down and pick up the banana peel. Or you can leave it there, but then the next person who comes hiking up Mt. Everest might slip on it and lose an arm or two, which is not great. After much philosophical deliberation, you decide that... yes, you have a moral responsibility to pick up the banana peel.
So, what is the deep philosophical punchline? Well, there is none, unless your name is William MacAskill. MacAskill asks an interesting question: suppose you know that, if you don't pick up the banana peel, a child will climb Mt. Everest 8 years from now and slip on it. That's quite bad, morally speaking. Now, MacAskill asks you: does it matter that the child is only 7 years old? No, of course it shouldn't matter; why would it? Why should the age of the child who slips on the banana peel change the moral situation? Why should it exonerate you of the guilt of not picking up the banana peel? It clearly shouldn't. If you think that way, you have walked right into the trap MacAskill prepared for you. MacAskill would say: it matters. It matters enormously, because the child wasn't yet born when you made that damned decision not to pick up the banana peel. So, by feeling guilty that your banana peel will send a not-yet-born child sliding down Mt. Everest, you have essentially acknowledged a moral obligation towards humans who are not yet born!
MacAskill takes the argument further and claims that we have as much moral obligation towards people who don't exist yet as towards people alive today. At first glance, this idea seems morally intuitive, but it actually leads to a broad range of morally controversial conclusions.
If people in the future matter morally as much as people alive today, then as charitable members of society, we must make decisions not only based on what benefits people alive today, but also on what benefits the people who will live in the future. To be as charitable as possible, we must act in the way that maximizes the benefit of our actions to all humans who exist or will ever come into existence. This is a very radical claim, although it may not seem like one at first glance. The reason it is so radical is that the number of people who will come into existence in the future is, by estimation, astronomical. There are 8 billion people alive today. That might seem like a lot, but, by some estimates, there are about 80 trillion people yet to be born! Future people outnumber us 10,000 to 1.
What MacAskill concludes from this is a radical idea: the well-being of people alive today hardly matters, because unborn people so vastly outnumber us; we should really be focusing on their well-being instead.
So, if you are planning to donate to UNICEF to save poor kids from starving to death, MacAskill would stop you and ask you to think: is this really the most effective way to contribute to the overall well-being of all humans who exist or will ever exist? He might urge you instead to donate to one of those shady X-risk prevention programs, which are basically initiatives to lower the risk of humanity going extinct. From a longtermist perspective, shaving a sliver off the already negligible chance of an asteroid wiping out humanity is almost certainly more valuable than, say, saving a couple of starving children.
If that doesn't sound crazy enough, let's cast aside existential risks for a moment and consider what an ideal society would look like to a longtermist. A longtermist tries to maximize the total pleasure of all humans that exist or will exist, which can be achieved either by increasing the average pleasure each person experiences, or by increasing the number of people who will come into existence. Which one would a longtermist choose? Let's consider which method is more effective; the answer is actually quite clear. Pretend we live on a planet with a fixed amount of resources. Should we devote those resources to creating more people and sustaining them, or should we use them to make the lives of people who already exist more pleasurable?
It turns out that using the resources to increase existing people's pleasure is much less efficient. The reason is marginal value, which I think of this way: giving a $10 bill to a poor person just above the survival line does far more good than giving a $10 bill to a millionaire. The millionaire already has plenty of resources and leads a pleasurable life, so an extra $10 would do little to improve it. The poor person, on the other hand, desperately needs that $10; giving it to her can significantly improve the pleasurableness of her life.
So, to maximize the total amount of human pleasure in the present and the future, a longtermist swiftly opts to sustain everyone in a state just above the survival line, and to devote all the remaining resources to creating new people. If a longtermist ruled the planet, we would be living in a world that is massively overpopulated, where everyone is impoverished, living just above the survival line: a condition in which the moral value of our existence is only slightly above zero. Different people draw this line in different places; some philosophers, for example, take it to be the suicide decision boundary, the point below which our conditions become so miserable that we would choose to end our own lives. However we draw the survival line, we can be certain that, in a longtermist's utopia, everyone will live a rather miserable life.
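To make the arithmetic concrete, here is a minimal sketch of the trade-off just described. It is my own toy construction, not anything MacAskill actually computes: a fixed resource budget, a logarithmic utility function to stand in for diminishing marginal value, and a "survival line" normalized to one unit of consumption.

```python
import numpy as np

# Toy model of the total-utilitarian arithmetic described above.
# Assumptions (mine, purely illustrative): a fixed resource budget R split
# equally among N people, and log utility normalized so that consuming exactly
# 1 "subsistence unit" is worth zero (anything below it is negative).

R = 1_000_000.0  # total resources, in arbitrary subsistence units

def total_utility(n_people: int) -> float:
    per_capita = R / n_people
    return n_people * np.log(per_capita)  # sum of everyone's log-utility

for n in [1_000, 10_000, 100_000, int(R / np.e), 500_000]:
    print(f"{n:>8,d} people, {R/n:10.2f} units each -> total utility {total_utility(n):12.1f}")

# Output (approximately):
#    1,000 people,    1000.00 units each -> total utility       6907.8
#   10,000 people,     100.00 units each -> total utility      46051.7
#  100,000 people,      10.00 units each -> total utility     230258.5
#  367,879 people,       2.72 units each -> total utility     367879.4  <- maximum
#  500,000 people,       2.00 units each -> total utility     346573.6
#
# Total utility peaks when everyone gets about 2.7 subsistence units: a huge
# population living just above the survival line beats a small, rich one.
```

The exact optimum depends entirely on the assumed utility function, but any curve with diminishing marginal returns pushes in the same direction: more people, each with less.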
There is one economic system that might as well have been designed for the longtermist: state capitalism. Yes, state capitalism. With its great capacity for growth and its strict social control, it can avert almost any existential crisis by leveraging the force of the market.
Worried about Artificial General Intelligence taking over the world? Worry not, state capitalism can step in, and swiftly end all AI research.
Worried about climate change? Worry not, state capitalism can force industries to switch to green energy faster than any other system.
Worried about an asteroid taking out humanity? Worry not, state capitalism can direct the innovative forces of the market, and come up with a solution to evade the asteroid strike.
Worried about a high birthrate creating unsustainable population growth? Worry not, state capitalism can sweep in and impose limits on how many children a household can have. What about a low birthrate? A mandatory family planning policy can step in to stave off population collapse.
Even better than its potential to avert existential crises, a state capitalist society keeps everyone just above the survival line! Everyone is equally exploited, and the gains of that exploitation fuel economic growth, growth that will provide resources for the next generation of exploited citizens. In this way, each generation will be more numerous than the last. State capitalism might as well have been designed to accelerate population growth: perfect for a longtermist.
Basically, my point is, welcome to China, Mr. Longtermist.
You may wonder whether I have anything more to offer beyond a satirical mockery of longtermism that falls flat on its face. Well, there are legitimate objections to the philosophy of longtermism. For one thing, longtermism insists on moral absolutism (and I'm sure that, if you look into academia, you will find many more objections).
The longtermist argument begins by asking you to agree with a concrete decision: you should pick up the banana peel so that some child, who may not even be born yet, won't slip on it and lose an elbow. In this scenario, you sacrifice the time and energy of bending down to pick up a banana peel for the well-being of a child who hasn't been born yet.
The longtermist then compares this scenario with one in which you have to sacrifice your current, decent living conditions in order to benefit trillions of imaginary people who may come into existence in the future. The longtermist tells you that the two situations are, in essence, equivalent, so you should accept his argument and donate your house away for the benefit of imaginary humans.
This is a common trick philosophers like to employ. The philosopher first makes you agree to a very commonsensical, concrete moral decision, such as: you should not leave a banana peel on the ground, because it could doom somebody in the future. You agree simply because your moral intuition tells you to. The philosopher then cooks up an abstract moral doctrine about how to act morally. He says: since you agree to pick up a banana peel for the benefit of future people, wouldn't you agree that you have a moral responsibility to value future people as much as the people alive today? Such a doctrine sounds pretty good within the context of the banana peel story. But the doctrine has huge implications! The philosopher takes the limit as the moral doctrine goes to infinity, and it generates wild results when applied to other situations, such as the one that requires us to choose between saving starving people and reducing the almost negligible risk of an asteroid impact. At a certain point, your moral intuition no longer agrees with the judgment of the moral doctrine. So the philosopher tells you: your intuition must be wrong, so follow the doctrine instead of your intuition. The philosopher is being quite deceptive here. You should remember that you agreed to his doctrine in the first place because it aligned with your moral intuitions. So, trust your moral intuitions? Or trust a moral doctrine founded upon moral intuitions and extrapolated to infinity? Your pick.
Moral intuitions are just a system of beliefs we evolved that seems to do a pretty good job of keeping us alive. These intuitions give us ways to make decisions, and the decision boundary they trace is not a smooth, analytically simple function; it is a ragged surface. What moral philosophy so often tries to do is pick a specific case, a certain point on that decision boundary, and make a linear approximation at that point. This linear approximation is a moral doctrine, and it is obviously not representative of the ragged surface of the decision boundary formed by your moral intuitions. When the linear approximation diverges from your intuitive decision boundary, the philosopher asks you: why aren't you following the moral doctrine you agreed to? Perhaps you ought to answer: I agreed with your moral doctrine in one situation, but that doesn't mean I should agree with it in a completely different one.
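If the geometric talk sounds abstract, here is a toy numerical version of it. Again, this is my own construction, purely illustrative: treat intuition as a bounded, wiggly function, treat the doctrine as the tangent line taken at the banana-peel point, and watch the two diverge as you extrapolate away from that point.

```python
import numpy as np

# "Moral intuition" as a deliberately ragged but bounded function, and the
# "moral doctrine" as its linear approximation at one convenient point x0.

def intuition(x):
    # A smooth trend plus wiggles; bounded for all x.
    return np.tanh(x) + 0.3 * np.sin(5 * x)

x0 = 0.1   # the banana-peel case: the one point where intuition was consulted
h = 1e-5
slope = (intuition(x0 + h) - intuition(x0 - h)) / (2 * h)  # numerical derivative

def doctrine(x):
    # The "moral doctrine": the tangent line to intuition at x0.
    return intuition(x0) + slope * (x - x0)

for x in [0.1, 0.5, 2.0, 10.0]:
    print(f"x = {x:5.1f}   intuition = {intuition(x):7.3f}   doctrine = {doctrine(x):7.3f}")

# At x0 the two agree exactly. By x = 2 the doctrine is already several times
# larger than the intuition, and by x = 10 ("extrapolated to infinity") the
# tangent line is near 23 while the intuition stays bounded around 1.
```

The specific functions are arbitrary; the point is only that agreement at one point says nothing about agreement far away from it.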