Justice as Heuristic: A Cognitive Evolutionary Approach to Utilitarian Justice Models


There is a central question at the heart of both politics and psychology: What makes us do the things we do? Psychology is interested in this question in and of itself, as a disciplinary objective. Political Science, however, seeks to use the answer to this question in its larger aims of studying the possible structures of society, along with their potential costs and benefits. Now, to answer this question in a paper as short as this would be absurd. However, I intend to use the specific case study of Justice and Utilitarianism as a synecdoche, a lens through which to view the entire issue. Specifically, in this paper I will investigate the supposed problems which a sense of Justice presents to the overall philosophy of Utilitarianism. In response to these issues, I will construct an argument for interpreting Justice as a moral heuristic.

To begin, we must first define the terms of the discussion—Utilitarianism and Justice. Utilitarianism I will define as follows: Government-sponsored “Normative Egoistic Hedonism”. Utilitarianism is “Normative” because the philosophy makes claims about what actions people should do. It is “Egoistic” because (in its original formulation by Jeremy Bentham) a person is interested predominantly in his or her own condition. “Hedonism” is used because the ultimate aim of the philosophy is happiness, or “utility”, from which the term “Utilitarianism” comes. As Bentham put it, “By utility is meant that property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness (all this in the present case comes to the same thing)” (18). It is “government-sponsored” in the sense that while standard individual Utilitarianism takes only the individual into account, a government must take all individuals into consideration equally in its decision-making process.[1] So, to rephrase the theory: each person should do that which causes “the greatest good for the greatest number of people”, or follow “that principle which approves or disapproves of every action whatsoever, according to the tendency which it appears to have to augment or diminish the happiness of the party whose interest is in question” (18). In this way, Utilitarianism is a consequentialist moral theory, wherein each action is judged on the basis of the net change it effects in utility rather than on the action in and of itself.[2]

However, judging actions solely by their consequences seems to cause a number of problems. Specifically, in many situations the actions seemingly prescribed by Utilitarianism go against our moral intuitions. Something, we feel, is simply being left out of the “felicific calculus”[3] of adding up the net pleasures enacted upon all affected parties. One commonly cited hypothetical example is that of a homeless person who wanders into a hospital ward where a number of patients are waiting on organ donations, which are extremely difficult to come by and without which they will die. From a Utilitarian perspective, killing the homeless individual and donating his organs to the numerous people waiting would certainly increase net utility. A more contemporary (and thus more realistically nuanced) example is that of the “enhanced interrogation techniques” currently being debated throughout the media and political spheres. Through the information garnered via the torture of one individual, one could potentially avert catastrophic consequences (terrorism, acts of war, etc.).[4] These situations, however, all boil down to roughly the same question: Is it worth the suffering of one to save many?
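To make the arithmetic of the “felicific calculus” behind these dilemmas concrete, here is a minimal Python sketch of the organ-donation case. The utility numbers and the names (net_utility, do_nothing, harvest) are invented purely for illustration; Bentham’s actual calculus also weighs intensity, duration, certainty, and the like.

```python
# A minimal sketch of an act-utilitarian calculation for the organ-donation
# case. All numbers are invented for illustration only.

def net_utility(changes):
    """Sum the utility change experienced by every affected party."""
    return sum(changes.values())

# Option A: do nothing -- the five patients die, the bystander is unharmed.
do_nothing = {"bystander": 0, **{f"patient_{i}": -100 for i in range(1, 6)}}

# Option B: kill the bystander and transplant the organs.
harvest = {"bystander": -100, **{f"patient_{i}": +100 for i in range(1, 6)}}

for name, option in [("do nothing", do_nothing), ("harvest organs", harvest)]:
    print(f"{name}: net utility = {net_utility(option)}")

# The raw sums favour harvesting (+400 vs. -500), which is exactly the
# clash with our intuitive sense of Justice discussed above.
```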

To confront this dilemma, Mill revises Bentham’s original formulation of Act Utilitarianism into Rule Utilitarianism. That is, Bentham holds that each individual act should be judged on its net change in utility, while Mill holds that rules can be formulated and judged on the extent to which following their demands produces the greatest net utility. Rules which would resolve the apparent contradiction between our feeling of Justice and the surface Utilitarian decision in the cases cited earlier would be “do not kill innocents” and “do not torture”, respectively.

Of course, these rules are extremely broad. Should we make them more specific? Many moral philosophers have taken this as a significant flaw in Rule Utilitarianism: why can we not simply make each rule slightly more detailed by adding an unending list of situational exceptions, each covering a case in which following the exception would increase net utility? Or, to go even broader, why can the rule not simply be “act as an act utilitarian”? The answer to both questions lies in the field of cognitive science.

Humans, while being ostensibly the smartest animals, still have major deficits in cognitive ability. We make significant errors daily, in fields as varied as philosophy, mathematics, physics, music, and history. We simply don’t have the memory or computational power to get everything right all the time. Most importantly, however, we don’t need to. Surely, in the basic tasks of eating, sleeping, walking, and procreating, we’re all pretty much competent. These traits are necessary to our survival and continuance as a species, and any persons who could not perform them quickly disappeared from the gene pool. As mankind developed technologies, however, a new set of needs arose—the abstract manipulation of concepts. This domain was entirely alien to us as a species, and to this day we have not sufficiently evolved to cope with its requirements.

What we have developed, however, are little rules or problem-solving strategies (commonly known as heuristics). Heuristics are cognitive “short-cuts” by which our brains simplify the external world to more easily manipulable problems. They allow for quick, rough processing of information—the sort of work which the fast-paced, dog-eat-dog world of survival of the fittest requires. In doing so, they leave out much of the subtlety of the actual problem, fill in missing data, and extrapolate from past experience to attempt to solve the problem sufficiently and quickly.

One can assume that human beings developed heuristics because they were mechanisms which maximized reproductive success (a whole other sense of “utility”). Those human beings who could not solve problems well could not interact with the world in complex ways and would form less productive and secure societies, which would drive down the entire society’s reproductive success. As for why heuristics would develop rather than a complete-information-processing approach, one must recognize that the more one can reduce the cognitive load an actor must bear to come to a decision, the more readily such an actor can actually act upon that decision. We simply don’t have the time or (individually or evolutionarily) the ability to gather every piece of information which relates to the potential utility of each decision we make.[5]
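As a toy illustration of why a short-cut can beat complete information processing—in the spirit of the fast-and-frugal heuristics literature cited in note 5, though not taken from it—consider the following sketch. The cues, weights, and options are all invented; the point is only the difference in cognitive cost.

```python
# A toy contrast between a complete-information decision procedure and a
# one-cue heuristic. Cues, weights, and options are invented for illustration.

import random

random.seed(0)
CUES = [f"cue_{i}" for i in range(50)]                 # all the facts one *could* gather
WEIGHTS = {cue: random.uniform(0.0, 1.0) for cue in CUES}

def exhaustive_choice(options):
    """Gather every cue for every option, then maximize the weighted sum."""
    return max(options, key=lambda o: sum(WEIGHTS[c] * o[c] for c in CUES))

def one_cue_heuristic(options, cue="cue_0"):
    """Consult a single salient cue and ignore the rest: fast and frugal."""
    return max(options, key=lambda o: o[cue])

options = [{c: random.random() for c in CUES} for _ in range(3)]
full = exhaustive_choice(options)      # 150 cue lookups across the three options
quick = one_cue_heuristic(options)     # 3 cue lookups
print(full is quick)                   # the two sometimes agree, at a fraction of the cost
```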

Applying this line of thought to the sociopolitical sphere, one can also assume that humans would develop moral heuristics to make quick decisions in social interactions. In the same way that problem-solving heuristics evolved because they maximized our ability to handle the complex and abstract dilemmas brought about by new technologies, moral heuristics became increasingly important as humans developed into social animals involved in larger and larger societal networks. These new environments put new evolutionary pressures on humans, and those who could effectively make decisions which benefited the group as a whole (i.e., increased net utility) increased the group’s health, wealth, and security against other, competing societies.[6] This set of moral heuristics, selected for over the course of human history because it increased utility, is what now constitutes our feeling of Justice.

Reconciling Utilitarianism with Justice, then, is relatively simple. Full-blown Act Utilitarianism requires a complete calculation of how each decision we make would impact every other person’s happiness. So, as with other problems, we must account for the unknown present and future via educated guesswork, which is to say, via heuristics. While Utilitarianism calls on us to use our cognitive, near-mathematical heuristics to make a decision, our brains already provide us with a quick moral heuristic solution: take whatever action does not violate our conception of Justice.
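The reconciliation can be sketched in code as well. In the toy decision procedure below, the Justice heuristic acts as a cheap pre-filter, and the full utility estimate is consulted only among the options that pass it; the rule set and the decide() interface are, again, my own invention for illustration.

```python
# A sketch of the reconciliation argued above: a Justice heuristic screens
# actions cheaply, so the full utilitarian calculation is rarely needed.

JUSTICE_RULES = [
    lambda action: not action.get("kills_innocent", False),   # "do not kill innocents"
    lambda action: not action.get("tortures", False),         # "do not torture"
]

def violates_justice(action):
    """Fast heuristic check: does the action trip any ingrained rule?"""
    return not all(rule(action) for rule in JUSTICE_RULES)

def decide(actions, estimate_utility):
    """Prefer the permissible action with the highest (rough) estimated utility."""
    permissible = [a for a in actions if not violates_justice(a)]
    candidates = permissible or actions   # fall back if every option violates a rule
    return max(candidates, key=estimate_utility)

# Usage: the organ-harvest option is screened out before any utility is summed.
choice = decide(
    [{"name": "harvest", "kills_innocent": True, "utility": 400},
     {"name": "wait", "utility": -500}],
    estimate_utility=lambda a: a["utility"],
)
print(choice["name"])   # -> "wait"
```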

Revisiting those two cases from earlier, we can see how our moral heuristics interact with the Utilitarian moral philosophy. While I would not begin to suppose that the heuristics would be formulated precisely as I have in this paper, those Utilitarian Rules postulated earlier (“do not kill innocents” and “do not torture”) would almost certainly have analogues in our subconscious decision-making system, manifesting themselves consciously as a sick feeling in our gut that something is simply wrong with the decision. This feeling of Justice, we must remember, is not actually at odds with Utilitarianism as a moral philosophy—it is simply a short-cut, a way to rely on instincts which have built up in our genome rather than do full Utilitarian calculations every time. The very feelings themselves are based evolutionarily on the utility measure of increased reproductive success.

In this way, we can see the ways in which evolution, both social and psychological, can have huge impacts on political philosophy and practice. It does so mainly by its standard biological maxims: regardless of whatever may be “right” or “wrong” (if those terms can even be said to have any meaning), organisms end up doing that which maximizes their reproductive output. On a sociopolitical level, this most commonly ends up being that which allows one societal structure to overtake, defeat, and eliminate another, often by out-producing goods and maximizing market utility until it begins to negatively affect other key factors within that society (health, happiness, and reproductive success). This is nature’s Utilitarian Philosophy, and the final test of all moral and political philosophies we can ever come up with.

–David Kettler


[1] This final clause makes this formulation more accurately described as ‘universal utilitarianism’, the position held by the vast majority of utilitarian political philosophers. Technically speaking, Bentham first builds up his principle of utility as a decision to be made in a more “selfish” manner, judging actions by their effects on “the person in question”. He then considers government policy, stating, “A measure of government […] may be said to be conformable to or dictated by the principle of utility, when in like manner the tendency which it has to augment the happiness of the community is greater than any which it has to diminish it” (19).

[2] Those moral theories which judge the actions in and of themselves, regardless of the consequences, are called deontological theories.

[3] This term is now used to describe Bentham’s process of calculating utility, detailed on pp. 42–43.

[4] The validity of information garnered via torture is, of course, extremely suspect for a number of reasons, but we will ignore this debate for the sake of argument.

[5] For a more complete discussion of how heuristics would evolve, see “Visions of Rationality” by Valerie M. Chase, Ralph Hertwig, and Gerd Gigerenzer.

[6] For a more technical discussion of the intersection of evolutionary biology, psychology, game theory, and social custom, see The Moral Animal by Robert Wright.
