What do we talk about when we talk about expected utility?

Here’s an interesting question: do we actually calculate expected utility the way we, as rational agents, know we should?

For example, consider the probability distribution over all states of the world’s political-economic system. Such a distribution is almost entirely unknown to us. Most states are fairly manageable—wars, financial crises, and even depressions are not unknown to us, and the human race moves past them fairly easily—but a much smaller set of states could be truly nightmarish. Think of a post-apocalyptic barter economy, or a totalitarian world government (e.g. North Korea on a massive scale).

Now, the set of states of the world political-economic system that fall under these events (call them, for the sake of brevity, \(A_1\) and \(A_2\)) definitely has non-zero measure, so these events occur with probability greater than zero. (This is the observation bias that Bostrom and others discuss when speaking of existential risk: the human race has never gone extinct before, so we are ill-equipped to think about the possibility. Here we’re not talking about human extinction per se, but the point remains.) The events \(A_1\) and \(A_2\) imply very, very large amounts of negative utility for almost all actors involved—certainly they are far worse than the aggregate utility levels we, as a society, experienced during, say, World War II. Maybe the Black Plague came close; it’s really not possible to say. At any rate, the issue here is that these events, while taking up nonzero space in the probability distribution of all states, take up an extraordinarily small amount of space—so small that the expected utility over all states of the system during any reasonable time period comes nowhere near the depths that could exist.
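To see how the arithmetic plays out, here is a stylized decomposition; the numbers below are purely illustrative, not estimates. Write \(\varepsilon = P(A_1 \cup A_2)\) and condition on whether we land in one of the nightmarish states:

\[
\mathbb{E}[U] \;=\; (1-\varepsilon)\,\mathbb{E}[U \mid \text{ordinary states}] \;+\; \varepsilon\,\mathbb{E}[U \mid A_1 \cup A_2].
\]

If we normalize \(\mathbb{E}[U \mid \text{ordinary states}]\) to \(0\), take \(\mathbb{E}[U \mid A_1 \cup A_2] = -10^{6}\), and set \(\varepsilon = 10^{-5}\), the expectation works out to a mere \(-10\): the average sits nowhere near the depths of the tail, exactly as claimed.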

If we actually calculate expected utility (leaving aside the question of whether the expectation even exists, etc.) and don’t decisively act to prevent these possible futures, then what we are implicitly saying is that we just aren’t willing to plan for these events: even though they’re so bad, the probability of their coming to pass is so minute that the opportunity cost of concerning ourselves with them now is greater than the probability- and time-weighted cost of the possible nightmarish events (a rough version of this comparison is sketched below). Somehow—and this is just opinion—I doubt that we (and by “we” I mean regimes in power, just to add a little principal-agent problem into the mix) are actually performing anything close to this calculation. I think that we have either

  1. greatly overestimated the value of our time today, or
  2. greatly underestimated the probability of the nightmarish events, the cost they would impose, or both.

Of these two possibilities I believe the second is more likely, and that we both greatly underestimate the probability of these events and the negative effect they would have on economic activity writ large.
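For concreteness, here is one way to write down the comparison implicit above; the notation is my own shorthand, not anything canonical. Let \(c_0\) be the opportunity cost of acting today, \(\delta \in (0,1)\) a discount factor, \(p_t\) the probability that a nightmarish event arrives in period \(t\), and \(|u_t|\) the utility cost if it does. Inaction is the “rational” choice only when

\[
c_0 \;>\; \sum_{t=1}^{\infty} \delta^{t}\, p_t\, |u_t|,
\]

and possibility 2 says the right-hand side is being computed with \(p_t\), \(|u_t|\), or both set far too low, so the inequality appears to hold when in fact it does not.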

Written on July 9, 2016