Tl;dr version: The human brain is not wired to find first-best solutions. The human brain is wired to find good-enough solutions.
At the risk of dabbling in evolutionary psychology, the selection pressure on human behavior is not "make the smartest decision you can possibly make." It is "make a smarter decision than the other guy, fast enough for it to matter." You do not, after all, need to outrun the tiger. You only need to outrun your friends.
Long version: What you actually observe is behavior, not utility. Given a big enough set of observed behavior, you can assemble some behavioral heuristics that will predict agent behavior, at least in terms of some statistical aggregate.
You can, of course, always construct some sort of utility function that generates those observed behavioral heuristics when you optimize against it. But such a utility function has no predictive power over and above the set of heuristics you feed into it. This is because:

(a) Humans are not consistent in their decisionmaking, so the inferred utility function will either fail to even describe (never mind predict) observed behavior, or it will contain all sorts of ad hoc modifications that reduce both parsimony and predictive power.

(b) Even if humans were consistent in their decisionmaking, no practical amount of data will allow you to specify that utility function with sufficient accuracy and precision (you need both) to yield experimentally useful predictions.
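To make the "no extra predictive power" point concrete, here is a minimal sketch (my own toy illustration; the contexts, options, and function names are hypothetical, not from the post). Any finite set of observed choices can be "rationalized" by a utility function that simply ranks whatever was chosen above whatever was rejected, but that function is just a lookup table over the observations and says nothing about cases it was never fed:

```python
# Toy illustration: a utility function inferred purely from observed choices.
# It reproduces the data perfectly and predicts nothing beyond it.

observed_choices = {
    # context -> (chosen option, rejected alternatives)  -- hypothetical data
    "lunch":   ("sandwich", ["salad", "soup"]),
    "commute": ("bike",     ["bus", "car"]),
    "evening": ("book",     ["tv"]),
}

def rationalizing_utility(context, option):
    """'Inferred' utility: 1 for whatever was observed chosen, 0 otherwise."""
    chosen, _ = observed_choices[context]
    return 1.0 if option == chosen else 0.0

# The 'optimizer' reproduces every observed choice...
for context, (chosen, rejected) in observed_choices.items():
    best = max([chosen] + rejected,
               key=lambda option: rationalizing_utility(context, option))
    assert best == chosen

# ...but for a context it has never seen, it has nothing to offer:
try:
    rationalizing_utility("weekend", "hike")
except KeyError:
    print("No prediction beyond the heuristics that were fed in.")
```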
Furthermore, we know that the inconsistency in human decisionmaking is not due to random noise, computational errors or bad input. The inconsistency is fundamental, not an error term you can graft onto your model at the end.
Human decisionmaking is inconsistent as a result of fundamental uncertainty, constraints on computational tractability, and the fact that predictive output improves much more slowly than linearly with input accuracy, while the cost of obtaining that input accuracy usually scales much faster than linearly.
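A rough numerical sketch of that scaling claim (illustrative assumptions of mine, not figures from the post): suppose prediction error shrinks like 1/sqrt(n) with the amount n of input gathered, which is the usual sampling-error rate, while the cost of gathering it grows at least linearly in n. The marginal payoff of extra accuracy then collapses long before the marginal cost does:

```python
# Illustrative scaling: error ~ 1/sqrt(n) vs. cost ~ n (often worse in practice).
import math

for n in (10, 100, 1_000, 10_000):
    error = 1.0 / math.sqrt(n)   # predictive error: improves slower than linearly
    cost = n                     # input cost: grows at least linearly
    print(f"n={n:>6}  error~={error:.3f}  cost={cost}")

# A 10x increase in cost buys only a ~3.2x reduction in error,
# so past some point it is cheaper to accept the error and act.
```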
Model-consistent behavior is not rational, because obtaining the input required to compute the model-consistent strategy is more expensive than just winging it based on learned and instinctive heuristics. And even with perfect and complete input information, computing the first-best solution would still be a waste of wetware cycles compared to winging it based on experience and instinct.
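For a minimal sketch of "good enough beats first-best" (toy numbers and an arbitrary aspiration level of my own choosing, not anything from the post): exhaustively evaluating every option finds the true optimum, but a satisficer that stops at the first option above an aspiration level evaluates far fewer options and gives up very little payoff.

```python
# Toy comparison: exhaustive optimization vs. satisficing ("good enough").
import random

random.seed(0)
options = [random.random() for _ in range(10_000)]  # payoff of each option
aspiration = 0.95                                    # "good enough" threshold

# First-best: evaluate everything.
best = max(options)
evaluations_best = len(options)

# Satisficing: stop at the first acceptable option.
evaluations_satisfice = 0
satisficed = None
for payoff in options:
    evaluations_satisfice += 1
    if payoff >= aspiration:
        satisficed = payoff
        break
if satisficed is None:          # nothing cleared the bar; fall back to the best seen
    satisficed = best

print(f"first-best: payoff={best:.3f} after {evaluations_best} evaluations")
print(f"satisficed: payoff={satisficed:.3f} after {evaluations_satisfice} evaluations")
```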
- Jake

Friends come and go. Enemies accumulate.