

Descriptive Ethics
Where do our ethics and morality come from?
Even infants have a sense of morality. Yale psychologist Paul Bloom showed a group of babies animated geometric shapes with cartoon faces: a red ball would try to climb a ramp. In some trials, a yellow square got behind the red ball and nudged it up the ramp; in others, a blue triangle stood in front of the red ball and pushed it back down. Afterward, the babies were presented with the yellow square and the blue triangle. In over 80% of cases, they chose the helpful character over the malicious one.
Babies know the blue triangle is bad just as surely as they know breast milk is edible; these are intuitions hard-wired into our brains through millennia of evolution. Evolution gave us a sense of what is right to help us survive: it gave us the instinct to empathize with other humans so we would stick with our pack, and it taught us honesty so that our ancestral tribes would not devolve into the death trap of the prisoner’s dilemma.
These biological moral instincts helped our ancestors survive and propagate their offspring. In this sense, our aspiration toward righteousness is no nobler than our sex drive.
Evolutionary misalignment
However, morality ≠ survival & procreation. The ascetic saints of Jerusalem could tell you that. So could the Chinese eunuchs who castrated themselves to serve the common good of their nation. Quite often, our biological moral intuitions tell us to do things that only harm our chances of surviving and procreating.
This happens because evolution is slow and stupid. It doesn’t act on our conscious thoughts directly; it cannot implant in our brains a desire to “maximize our genetic presence in the human gene pool,” because doing so would require an understanding of abstract concepts like “genes,” which our hunter-gatherer ancestors had no notion of. It can only set up simple proxy incentives for our brains: sex drive, hunger, shame, and the desire to be righteous. In the ancestral hunter-gatherer society where these proxies evolved, they were well aligned with the objectives of survival and procreation.
But as human society advanced and agricultural and industrial societies sprang from the ground, our environment changed drastically. It all happened in such a short time frame that the evolution of our proxy incentives couldn’t keep up with the pace of change (if our proxies were up to date, men would feel a bizarre desire to donate to sperm banks; none of us do, except perhaps this guy).
Our proxy incentives became misaligned with the evolutionary goals of survival and procreation. Going back to the eunuch example: perhaps in ancient times a strong sense of tribal allegiance was necessary for survival. But when the societal expectation of pro-tribe behavior changed (you now had to castrate yourself to serve your tribe as a eunuch), blindly applying pro-tribe instincts would, well, reduce your fertility.
Don’t let culture and society delude you, and don’t delude yourself
The advances of our civilization also created a whole host of moral problems that we never evolved instincts for during our tribal childhood. These are exotic issues that were simply not present in our training environment: the morality of property rights, land ownership, cannibalism, freedom of linguistic expression, incest, and so on.
To assess the morality of these issues, the layperson relies on cultural and communal norms. The average American, for example, would rush to the conclusion that property rights, land ownership, and freedom of linguistic expression are valid, while cannibalism and incest are immoral.
Cultural and communal norms are powerful. They are also legitimate to a certain extent, because they facilitate communal solidarity. They are so powerful, in fact, that it is easy to confuse our culturally derived instincts with our biological ones: we feel a strong tribal attachment to our cultural instincts, and confirmation bias easily deludes us into affirming their validity.
But in reality, tribal instincts are fallible. Almost every moral issue humans have gotten “wrong” before (slavery, male supremacy, racism, the various -phobias) falls into this category of moral issues. We don’t have strong biological instincts about them, and we too often rush to judgment based on our cultural instincts.
The work of philosopher-activists consists largely of arguing for or against one of these exotic issues by logically reducing it to more familiar questions about which we do have primal moral instincts. Example: slavery is bad because slavery causes human suffering, and we all instinctively agree that human suffering is bad, right?
As more and more people agree that slavery is bad, our cultural and communal instincts shift from “slaves are just property” to “that’s crazy, how can humans be property!”, until it becomes inconceivable to us that anyone could have thought slavery morally permissible.
Our current culture is not infallible either. Many positions we presume to be morally right based on our cultural instincts will look egregiously wrong to future generations. And, unfortunately but rightly, they will throw us under the bus as immoral idiots.
What are philosophers up to?
The Western philosophical tradition was born in a time when people thought God was real, so it was natural for them to assume that universal moral truths existed. Moral philosophy thus developed around the glorious quest for moral truths.
The framework philosophers use to search for moral truths is axiomatization. It works like this: a philosopher cooks up a set of moral axioms that are fairly intuitive and acceptable. From these axioms, she deduces a collection of moral laws using logic (a moral law is basically a judgment on a moral issue; for example, “slavery is bad” is a moral law that holds under most sets of moral axioms).
The only binding constraint these moral axioms must satisfy is logical consistency. If axiom 1 leads to the conclusion that animals deserve rights while axiom 2 implies that they don’t, then your moral theory is messed up.
If our philosopher’s axioms produce moral laws that contradict our biological or cultural intuitions, two things can happen. If she takes her axioms seriously, she might argue that our instincts are wrong and should be revised. If she doesn’t, she will keep juggling her axioms until their predictions match our intuitions.
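To make the framework concrete, here is a minimal sketch in Python (entirely my own illustration; the axioms, predicates, and case encoding are all hypothetical): an axiom maps a moral case to a verdict, and consistency just means that no case receives contradictory verdicts.

```python
# Toy model of moral axiomatization. Purely illustrative; every name here
# is made up. An "axiom" maps a moral case to "permissible",
# "impermissible", or None when it is silent on that case.

def axiom_no_suffering(case):
    """Causing human suffering is wrong."""
    return "impermissible" if case.get("causes_suffering") else None

def axiom_property(case):
    """People may do as they wish with their property."""
    return "permissible" if case.get("uses_own_property") else None

def is_consistent(axioms, cases):
    """A theory passes iff no case gets two different verdicts."""
    for case in cases:
        verdicts = {a(case) for a in axioms} - {None}
        if len(verdicts) > 1:
            return False, case
    return True, None

# Slavery trips both axioms: it causes suffering, yet a slaveholder
# "uses his own property." The two axioms collide on this one case.
slavery = {"causes_suffering": True, "uses_own_property": True}
print(is_consistent([axiom_no_suffering, axiom_property], [slavery]))
# -> (False, {'causes_suffering': True, 'uses_own_property': True})
```

The consistency check is exactly the binding constraint described above: the theory is “messed up” the moment any single case receives two different verdicts.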
Normative ethics
Philosophers’ obsession with axiomatization
The philosopher’s approach to morality is flawed in many ways. For one, there are no universal moral truths. We know that because we know precisely where morality comes from: genetic and cultural intuitions. Since morality is entirely encoded in cultures and genes, different people from different cultures and with different genes must have different moral views, however slight the differences. The best axiomatization can do is capture the moral intuitions of the average person.
Morality is also not logical. The chance that a bunch of randomly evolved instincts strictly obeys the structure of logic is zero. Consider this example:
Imagine a trolley rolling down a track. Five people are on the track, and they will die if you don’t do something to stop it. A big guy, Bob, is chilling next to you. You could push him onto the track to his certain death, but his body would stop the trolley from killing the other five. What do you do? (You can’t jump onto the track yourself; you are too small to stop the trolley.)
This is an exotic moral issue that our ancestors would rarely have encountered, so we have no immediate intuition about the right option. But we can look for moral intuitions that might extrapolate to this situation.
Generally, we are intuitively inclined to agree that every person should have equal moral standing. If so, we should logically value five lives above Bob’s one, and in a situation where we must choose between the two, we should save the five.
On the other hand, it is also fairly intuitive that we should respect each person’s dignity and should not instrumentalize people, that is, treat them as tools to achieve some end. Since pushing Bob onto the track treats him as a tool to stop the trolley, it dehumanizes him, and we shouldn’t do it.
Start from different moral intuitions and you arrive at different logical conclusions; the two intuitions are therefore logically inconsistent. If you axiomatize “treat everyone as equals in moral standing” and “choose the morally weightier option,” you can no longer include “don’t dehumanize people” in your set of axioms, because they contradict each other. In other words, to stay logically consistent, you can only axiomatize one side of the intuition.
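In the same toy vocabulary as the sketch above (again, every name is hypothetical), the footbridge case makes the collision explicit:

```python
# The footbridge case, encoded in the same illustrative style as before.

def axiom_equal_standing(case):
    """Everyone has equal moral standing, so save the greater number."""
    if case["lives_saved"] > case["lives_lost"]:
        return "permissible"
    return "impermissible"

def axiom_no_instrumentalizing(case):
    """Never treat a person merely as a tool."""
    return "impermissible" if case["uses_person_as_tool"] else "permissible"

push_bob = {"lives_saved": 5, "lives_lost": 1, "uses_person_as_tool": True}

print(axiom_equal_standing(push_bob))        # -> permissible
print(axiom_no_instrumentalizing(push_bob))  # -> impermissible
# Two individually intuitive axioms, opposite verdicts on one case:
# the pair is logically inconsistent, so only one can be kept.
```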
But I think it is quite apparent that humans possess both intuitions. And if our moral theory cannot accurately reflect that, I don’t think it is a good moral theory. This is, unfortunately, a fundamental problem with the philosopher’s traditional approach to morality.
A seasoned philosopher can pull plenty of tricks to fix this problem of two axioms that both seem reasonable but contradict each other logically. She could add a clause to her axioms saying “axiom 1 overrides axiom 2 when the two conflict.” She might say that “choose the morally weightier option” is the more important axiom and supersedes “don’t dehumanize people.”
But she will soon run into trouble: situations where her biological instinct tells her that axiom 2 > axiom 1. Change the trolley problem up a little: instead of pushing Bob to his death to save five people on a track, you now need to force Bob to gulp down 2 kg of slugs and then decapitate him to save 2 lives. Would you still do it? (Maybe see a psychiatrist if you answered yes.)
The philosopher could avoid this issue by appending additional clauses to her axioms: in situation type X, axiom 1 > axiom 2; in situation type Y, axiom 2 > axiom 1; in situation type Z, axiom 1 > axiom 2; in situation type W, …
Yes, you can keep adding clauses and structure to your axioms to better approximate our biological intuitions. Or you can accept that axiomatization is a bad framework. Moral intuitions live on a multi-dimensional spectrum with curved decision boundaries, yet axiomatization tries to draw a straight line across them. It is almost certain to get things wrong, let alone find the “moral truth.”
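The decision-boundary analogy can be made literal with a toy sketch (the two “moral features” and both boundaries below are invented for illustration): suppose intuition judges an act wrong exactly when harm and instrumentalization are jointly high, a curved boundary, while an axiom behaves like a straight-line threshold.

```python
# Toy illustration: a straight-line rule fitted to a curved intuition.
# The features and both boundaries are made up for the sake of the analogy.
import random

random.seed(0)
cases = [(random.random(), random.random()) for _ in range(100_000)]

def intuition(harm, tool_use):
    """Curved 'ground truth': wrong iff harm * tool_use > 0.25."""
    return harm * tool_use > 0.25

def axiom_rule(harm, tool_use):
    """A straight line that tries to mimic the curve: wrong iff harm > 0.5."""
    return harm > 0.5

mismatch = sum(intuition(h, t) != axiom_rule(h, t) for h, t in cases)
print(f"{mismatch / len(cases):.0%} of cases misjudged")  # roughly 25%
```

No matter where you draw it, a straight line leaves a wedge of cases on the wrong side of a curved boundary; the clause-stacking above is just a piecewise attempt to bend the line after the fact.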
A thornier problem
Moral axioms are not particularly suited to describing our moral intuitions, but they can still offer some guidance on moral issues. A thornier question is whether we should accept that guidance.
Consider an exotic moral issue that we have no biological intuitions about. A moral philosopher would cook up an axiomatic representation of our moral intuitions and then apply those axioms to settle the exotic issue.
But what gives her moral axioms legitimacy? History is rife with examples of philosophers coming up with witty moral axioms and then smugly applying them to every situation. It usually doesn’t end well. Kant came up with the axiom of universalization and logically concluded from it that we should never lie, even to save a life. William MacAskill decided that “all humans have moral value, whether they are born or not” is a moral axiom and arrived at a bunch of ridiculous conclusions (for instance, that we morally ought to maximize the number of babies born into the world).
I applaud philosophers for coming up with simple moral axioms that capture a lot of our biological intuitions. But I think they are too eager to apply those axioms without justifying their legitimacy.
Even if we could come up with moral axioms that reflect our intuitions fairly accurately, there is no reason we should apply them, because our moral intuitions lack legitimacy themselves: they are just a jumbo combo of cultural norms and biological proxy incentives that evolution implanted in our cute little brains to help us survive and procreate. The axioms we make up are a feeble attempt to map out those proxy incentives.
How should we approach morality?
There is no true answer to this question. There is no “right” way to approach morality, because our sense of what is “right” is itself derived from these flimsy cultural and biological intuitions. (Look up the definition of the word “right”: there is no external reality beyond human senses and intuitions to anchor it.)
So, if you are looking for facts and arguments, the essay ends here. The rest is my opinion.
An interlude about desires
Humans are pretty simple machines: we have a bunch of desires, and we spend all day maximizing the satisfaction of these desires.
This might seem unintuitive, and you might object: “Hold on, that’s not quite right. We don’t do everything to maximize the satisfaction of our desires. The Buddhist monk who refrains from eating meat and having sex clearly isn’t satisfying his desires.”
And you would be right: the Buddhist monk is suppressing his culinary and sexual desires. But he is doing so to satisfy his desire for tranquility, to be at peace in his mind, to reach nirvana, and so on.
You see, I am using the word “desire” in a very broad sense.
You might suppress the desire to eat a chocolate bar so you stay in good shape. But you are not suppressing your desire for chocolate for no reason; you are doing it to fulfill your desire to be happy with your body in the long term.
“What if I simply decide not to eat the chocolate bar? I could do that, and it would prove you wrong.” Yes, you could. But then you are just suppressing your desire for chocolate to fulfill your desire to prove me wrong. The natural state of a human is inaction; everything we do, from breathing to grabbing that chocolate bar, originates from some desire.
The relationship between desires and actions is not one-way (desire → action); it is dialectical. Desires lead to actions, but actions change the state we are in and alter the relative strengths of our desires.
An ascetic is someone whose long-term desire for happiness and tranquility outweighs her desire for immediate gratification. Driven by that desire, she avoids activities that give short-term gratification, and because she avoids them, her desire for them wears thin. A hedonist is someone stuck in the opposite feedback loop.
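A toy simulation makes the two loops visible (the update rule and all the constants are invented for illustration): indulging a desire reinforces it a little, while resisting it lets it decay.

```python
# Toy desire feedback loop. The dynamics and numbers are made up
# purely to illustrate the ascetic/hedonist asymmetry.

def simulate(strength, resolve, steps=50):
    """resolve: threshold above which the agent gives in to the desire."""
    for _ in range(steps):
        if strength > resolve:
            strength = min(1.0, strength * 1.1)  # indulge: desire grows
        else:
            strength *= 0.9                      # resist: desire wears thin
    return strength

print(f"ascetic:  {simulate(0.5, resolve=0.8):.3f}")  # decays toward 0
print(f"hedonist: {simulate(0.5, resolve=0.3):.3f}")  # saturates at 1
```

Starting from the same desire strength, the ascetic’s loop grinds it toward zero while the hedonist’s loop pins it at its ceiling; which loop you land in depends on the threshold, not on the initial desire.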
We are not forever chained to these desire feedback loops, because background desires tend to win out. Our desire for chocolate is only activated when the word is mentioned or a chocolate cake enters our field of vision, and many hedonistic desires (sexual desire, vanity, and so on) likewise have specific triggers that are absent from day-to-day life. But our background desires, to be happy, fulfilled, and not lonely, are there most of the time, so they tend to occupy our brain’s processing time. Our brains are clever and can find ways to help these background desires win out over the others: watching motivational speeches, reading self-help books, setting up a system for quitting an addiction, and so on.
There is probably way, way more going on here than I dare to speculate about. But I do think the thesis that we are desire maximizers is accurate.
Maximize satisfaction
If this thesis is right, then “being moral” and “being right” are just another strand of desire that humans try to maximize. So I think it is fairly reasonable to say that an ideal moral code satisfies this condition: in a society where everyone follows the code (universalization), average desire satisfaction is maximized.
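Stated as a formula (my notation, nothing standard): if $S_i(c)$ is person $i$’s total desire satisfaction when all $N$ members of a society follow moral code $c$, the ideal code is

$$c^* = \arg\max_{c} \; \frac{1}{N} \sum_{i=1}^{N} S_i(c).$$

The universalization is baked into the setup: each $S_i$ is evaluated in the world where everyone, not just person $i$, follows $c$.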
I don’t know what this moral code would look like, but I do know what it wouldn’t look like.
It wouldn’t make frequent moral demands that force us to override our biological intuitions (unlike many philosophers’ moral theories), because biological intuitions are desires, and overriding them goes against maximizing desire satisfaction.
It wouldn’t allow archaic cultural moral norms to stick around, because cultural norms are arbitrary, and when they don’t line up with our biological intuitions, changing them would improve the satisfaction of our desire to be moral.
That doesn’t mean biological intuitions would reign. If giving up a piece of intuition leads to a vastly more satisfying world, it would be moral to give it up. For example, mothers have strong biological instincts about raising their own children. But if it were found that adoptive parents and children are generally happier than families who keep their birth children, then it would be morally good to abolish the nuclear family and put every newborn up for adoption.
At the end of the day, what “being moral” should mean is a question without a definitive answer. The above is only what I think morality should entail, plus my justification for its legitimacy: we are desire-optimizing creatures, so we might as well do it to perfection.