Evolution and Culture
A Scientific Approach to Morals and Ethics
Posted on February 24, 2008 by Peter Turney
Ethical axioms are found and tested not very differently from the axioms of science. Truth is what stands the test of experience. — Albert Einstein
The traditional view is that science has nothing to say about ethics and morality. Science tells us what is and morality tells us what ought to be. You can’t get ought from is. Science can help us predict the consequences of our actions, but it cannot tell us which consequences we should seek and which we should avoid. How, then, do we decide what is ethical? We must look in our hearts and souls; we must turn to religion or spirituality. We must seek ethical axioms to which all can agree (except psychopaths, sociopaths, deviants, and people who are not like us) and make sure that our actions conform to those axioms. I disagree with this view.
Why do we have ethics and morals? What is their function? Let’s define an agent as a being who has beliefs and desires, and who chooses actions based on those beliefs and desires. Different agents often have incompatible desires, which leads to conflict. The function of ethics and morality is to resolve conflict among agents and to facilitate cooperation among them. A solitary agent (if there could be such a being: even hermits and feral children are never entirely solitary, since animals can be agents, and since actions can have consequences far beyond their local origins) would have no need or use for morality.
Human agents have many goals, and the weights and priorities they attach to those goals vary with time and circumstances. One way to resolve conflict among agents is to find a shared goal and persuade the agents to agree on an action consistent with that shared goal. Many moral systems take this to an extreme by attempting to formulate a moral axiom (or set of moral axioms) to which we can all agree. Some hope that science might help us to find such a moral axiom. I agree with the critics who argue that this is not the kind of thing that science can do.
Where do our ethics and morals come from? An ethical system is an algorithm that an agent uses for making decisions in the context of other agents, when there is the potential for conflict or cooperation with the other agents. Our ethical algorithms have biological and cultural components, which have evolved by biological and cultural evolution. Science can help us to understand the evolutionary origins of our ethics.
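To make this framing concrete, here is a minimal Python sketch of the vocabulary above. The class, field, and type names are my own illustrative choices, not an established framework; the point is just that an ethical system, on this view, is a pluggable decision procedure, which is what makes it possible to compare competing ones.

```python
# A minimal sketch of the vocabulary above: an agent with beliefs and
# desires, whose decisions among other agents are delegated to a swappable
# "ethical algorithm". All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

# An ethical system, in this framing, is a decision rule: given the agent
# and the other agents it faces, it returns a chosen action.
EthicalSystem = Callable[["Agent", List["Agent"]], str]

@dataclass
class Agent:
    beliefs: Dict[str, bool]   # what the agent takes to be true
    desires: Dict[str, float]  # goals, with weights that shift over time
    ethics: EthicalSystem      # the algorithm used when others are involved

    def choose_action(self, others: List["Agent"]) -> str:
        # Decisions are delegated to the ethical algorithm, which can be
        # swapped out and compared against competing algorithms.
        return self.ethics(self, others)
```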
This view comes close to the naturalistic fallacy, a type of false argument that attempts to base an ethical system on facts of nature. But I do not claim that science can tell us what is ethically right or wrong; instead, I want to make a kind of meta-ethical claim: science can help us to choose between competing ethical algorithms. The fact that a certain ethical system has survived countless years of biological and cultural evolution does not imply that it is a good ethical system, because evolution makes major mistakes. However, it does imply that the survivor is better than some of its competitors that became extinct (with careful scientific caveats about statistical sampling and the specific environment).
Ethical systems evolve in much the same way as scientific theories evolve. Scientists select theories that are fitter than competing theories, in terms of their scope, fertility, and explanatory and predictive power. Ethical agents select ethical systems that are fitter than competing systems, in terms of their ability to facilitate cooperation and reduce conflict with other agents. This is not because cooperation is inherently good and conflict inherently bad; it is simply that, whatever goals an agent has, some ethical systems will make those goals easier to achieve than others. Science can help us to find effective ethical systems: effective in terms of our own personal long-term goals in life.
The paradigm of the scientific approach to ethics is Axelrod’s Evolution of Cooperation. The lesson of Axelrod’s experiments is that tit for tat is an effective ethical algorithm for certain types of conflict among agents. This is not to say that tit for tat should be elevated to the status of an ethical axiom; it is only to say that tit for tat is better than many competing algorithms (for certain types of environments, with certain statistical sampling assumptions). This is a good example of what we might learn from a mature science of ethics.
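As a toy illustration of the kind of experiment Axelrod ran, the following sketch plays a round-robin iterated prisoner’s dilemma among a handful of strategies. The payoff values, the strategy set, and the round count are illustrative assumptions, not a reproduction of Axelrod’s actual tournament.

```python
# A toy round-robin iterated prisoner's dilemma, loosely in the spirit of
# Axelrod's tournaments. Payoffs, strategies, and round count are
# illustrative assumptions, not a reproduction of his actual setup.
import itertools

# Standard prisoner's dilemma payoffs: (my move, their move) -> my score.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def grudger(opponent_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in opponent_history else "C"

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Play one iterated match; return the two total scores."""
    hist_a, hist_b = [], []  # each strategy sees only the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"tit for tat": tit_for_tat, "grudger": grudger,
              "always cooperate": always_cooperate,
              "always defect": always_defect}
totals = {name: 0 for name in strategies}
pairs = itertools.combinations_with_replacement(strategies.items(), 2)
for (name_a, strat_a), (name_b, strat_b) in pairs:
    score_a, score_b = play(strat_a, strat_b)
    totals[name_a] += score_a
    if name_a != name_b:  # count self-play once, not twice
        totals[name_b] += score_b

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}")
```

In this particular field, the nice but retaliatory strategies (tit for tat and the grudger) share the top score, while unconditional defection finishes last; change the mix of competitors and the ranking changes, which is exactly the caveat about environments and sampling assumptions mentioned above.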
Arguments about whether a certain action is ethical often lead to a stalemate, because the agents involved cannot agree on a set of ethical axioms. Science cannot tell us whether a particular axiom is right or wrong, just as we never know whether a particular scientific theory is true or false, only whether it is fitter than competing theories. But we can take the ethical argument to a meta-level and ask which algorithms the participants in the debate should adopt in order to more effectively achieve their goals; this is a problem that science can address.
I believe that a science of ethics (which is currently in its infancy) will eventually justify some of our most cherished ethical beliefs, such as the Golden Rule and the importance of diversity, perhaps with various caveats, embellishments, and qualifications. I envision experiments in the style of Axelrod’s tit-for-tat tournaments that will show that these are highly competitive ethical algorithms (not that they are right or wrong).
Where do altruism and self-sacrifice fit in this scheme? It might be said that I am arguing for a kind of enlightened self-interest, which is incompatible with pure altruism. The way to deflate this criticism is to focus on the question: what is the self? I do not necessarily identify myself with my body or my genome; I can identify myself with a certain set of ideas, a certain set of values, or a certain group of agents. Pure altruism can arise from enlightened self-interest within this broader sense of self.
(This blog post is partially based on discussions with my son, Craig Turney, and with Peter Watts.)