Because it's a load of epicycles and Jesuit logic.

Tl;dr version: The human brain is not wired to find first-best solutions. The human brain is wired to find good-enough solutions.

At the risk of dabbling in evolutionary psychology, the selection pressure on human behavior is not "make the smartest decision you can possibly make." It is "make a smarter decision than the other guy, fast enough for it to matter." You do not, after all, need to outrun the tiger. You only need to outrun your friends.

Long version:
What you actually observe is behavior, not utility. Given a big enough set of observed behavior, you can assemble some behavioral heuristics that will predict agent behavior, at least in terms of some statistical aggregate.

You can, of course, always construct some sort of utility function that generates those observed behavioral heuristics when you optimize against it. But such a utility function has no predictive power over and above the set of heuristics you feed into it. This is because
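A toy sketch of that point (my own construction, not anything from the comment; the 70% figure and the logit form are arbitrary placeholder assumptions): if all you observe is a choice frequency, the "utility" a random-utility model recovers from it is just a re-encoding of that frequency, and plugging it back into the model returns exactly the number you started with.

```python
# Hypothetical sketch: "revealed" utility as a re-encoding of a choice frequency.
import math

# Observed heuristic: "pick A over B about 70% of the time." (made-up number)
p_A_over_B = 0.70

# A logit (random-utility) model says P(A over B) = 1 / (1 + exp(-(u_A - u_B))).
# Inverting it "recovers" a utility difference...
u_diff = math.log(p_A_over_B / (1 - p_A_over_B))

# ...but plugging that utility difference back in just reproduces the frequency
# we fed in. The utility function adds no predictive content of its own.
p_recovered = 1 / (1 + math.exp(-u_diff))
print(u_diff)       # ~0.847
print(p_recovered)  # 0.70, by construction
```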

(a) Humans are not consistent in their decisionmaking, so the inferred utility function will either fail to even describe (never mind predict) observed behavior, or it will contain all sorts of ad hoc modifications reducing both parsimony and predictive power (see the sketch after this list).
(b) Even if humans were consistent in their decisionmaking, no practical amount of data will allow you to specify the utility function with sufficient accuracy and precision (you need both) to yield experimentally useful predictions.
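A minimal illustration of point (a), using made-up cyclic choice data: if an agent is observed picking A over B, B over C, and C over A, no single ranking of the three options can reproduce all three choices, so any utility function fitted to the data has to be patched with ad hoc exceptions.

```python
# Hypothetical sketch of point (a): intransitive ("cyclic") observed choices.
from itertools import permutations

# Each pair is (chosen, rejected). The data are invented for illustration.
observed = [("A", "B"), ("B", "C"), ("C", "A")]

# A single utility function would have to rank the options so that every chosen
# option outranks the rejected one. Try every possible ranking of three options:
consistent_rankings = []
for ranking in permutations(["A", "B", "C"]):
    utility = {opt: len(ranking) - i for i, opt in enumerate(ranking)}
    if all(utility[chosen] > utility[rejected] for chosen, rejected in observed):
        consistent_rankings.append(ranking)

print(consistent_rankings)  # [] -- no ranking reproduces the cycle, so the
                            # "utility" needs ad hoc patches instead.
```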

Furthermore, we know that the inconsistency in human decisionmaking is not due to random noise, computational errors or bad input. The inconsistency is fundamental, not an error term you can graft onto your model at the end.

Human decisionmaking is inconsistent as a result of fundamental uncertainty, constraints on computational tractability, and the fact that predictive output improves much more slowly than linearly with input accuracy, while the cost of input accuracy usually scales much faster than linearly.
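A hedged numerical sketch of that scaling argument, with placeholder curves of my own choosing (a square root for the predictive benefit of extra input accuracy, a square law for its cost; the real curves are unknown): the payoff-maximizing level of effort comes out well short of "maximum accuracy."

```python
# Illustrative numbers only: assume predictive benefit grows like sqrt(effort)
# (slower than linear) and the cost of gathering/processing input grows like
# effort**2 (faster than linear). Neither curve comes from the comment.
import math

def net_payoff(effort, benefit_scale=10.0, cost_scale=1.0):
    benefit = benefit_scale * math.sqrt(effort)   # sublinear returns
    cost = cost_scale * effort ** 2               # superlinear cost
    return benefit - cost

efforts = [e / 10 for e in range(1, 51)]          # effort from 0.1 to 5.0
best = max(efforts, key=net_payoff)
print(best)  # ~1.8: the payoff-maximizing effort is modest, not maximal
```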

Model-consistent behavior is not rational, because obtaining the input required to compute the model-consistent strategy is more expensive than just winging it based on learned and instinctive heuristics. And even with perfect and complete input information, computing the first-best solution would still be a waste of wetware cycles compared to winging it based on experience and instinct.
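A rough illustration of that last point, with arbitrary numbers of my own (1000 options, payoffs between 0 and 1, and a fixed per-option evaluation cost): exhaustively computing the first-best option costs more than the option is worth, while a crude "good enough" threshold heuristic captures most of the value at a small fraction of the search cost.

```python
# Hypothetical sketch of "winging it" vs. computing the first-best answer.
# Every option has a payoff, but evaluating an option costs a fixed amount.
# All names and numbers here are mine, not the comment's.
import random

random.seed(0)
options = [random.random() for _ in range(1000)]
EVAL_COST = 0.002  # cost charged per option examined

# First-best: evaluate everything, take the maximum.
optimal_net = max(options) - EVAL_COST * len(options)

# Satisficing heuristic: take the first option that clears a "good enough" bar.
def satisfice(opts, threshold=0.9):
    for i, value in enumerate(opts, start=1):
        if value >= threshold:
            return value - EVAL_COST * i
    return opts[-1] - EVAL_COST * len(opts)

print(round(optimal_net, 3))         # ~ -1.0: the search cost eats the gain
print(round(satisfice(options), 3))  # close to the best payoff, cheaply found
```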

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Sun Jul 7th, 2013 at 04:11:49 PM EST