I assume that by "Walrasian" you mean general equilibrium with sticky prices/wages bolted on?

Be nice to America. Or we'll bring democracy to your country.
by Drew J Jones (pedobear@pennstatefootball.com) on Sat Jul 6th, 2013 at 02:00:38 PM EST
[ Parent ]
I mean utility-optimization loanable-funds models. The Walras/Marshall discussion (general vs. partial equilibrium) is a sideshow in that respect.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Sun Jul 7th, 2013 at 02:37:49 AM EST
[ Parent ]
With you all the way on loanable funds, but why utility-optimization?

Be nice to America. Or we'll bring democracy to your country.
by Drew J Jones (pedobear@pennstatefootball.com) on Sun Jul 7th, 2013 at 08:50:54 AM EST
[ Parent ]
Because it's a load of epicycles and Jesuit logic.

Tl;dr version: The human brain is not wired to find first-best solutions. The human brain is wired to find good-enough solutions.

At the risk of dabbling in evolutionary psychology, the selection pressure on human behavior is not "make the smartest decision you can possibly make." It is "make a smarter decision than the other guy, fast enough for it to matter." You do not, after all, need to outrun the tiger. You only need to outrun your friends.

Long version:
What you actually observe is behavior, not utility. Given a big enough set of observed behavior, you can assemble some behavioral heuristics that will predict agent behavior, at least in terms of some statistical aggregate.

You can, of course, always construct some sort of utility function that generates those observed behavioral heuristics when you optimize against it. But such a utility function has no predictive power over and above the set of heuristics you feed into it (the sketch after this list makes that concrete). This is because

(a) Humans are not consistent in their decisionmaking, so the inferred utility function will either fail even to describe (never mind predict) observed behavior, or it will contain all sorts of ad hoc modifications that reduce both parsimony and predictive power.
(b) Even if humans were consistent in their decisionmaking, no practical amount of data would allow you to specify the utility function with sufficient accuracy and precision (you need both) to yield experimentally useful predictions.
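
A purely hypothetical toy sketch of that point (the satisficing rule, the 0.6 threshold, and every name here are invented for illustration, not taken from any actual model): choices are generated by a simple heuristic, then "rationalized" by a utility function built to reproduce them. The fitted "maximizer" agrees with the heuristic on every held-out problem, because it is the heuristic in different clothes:

    import random

    random.seed(0)

    def satisficer(options, threshold=0.6):
        # Heuristic: take the first option whose quality clears the bar;
        # if nothing does, settle for the last one you looked at.
        for i, q in enumerate(options):
            if q >= threshold:
                return i
        return len(options) - 1

    def fitted_utility_choice(options):
        # A "utility function" reverse-engineered from the satisficer:
        # above-threshold options score high (earlier ones slightly higher),
        # below-threshold options score low (later ones slightly higher).
        # It rationalizes the heuristic perfectly and adds nothing to it.
        def u(i, q):
            return (1.0 - 0.001 * i) if q >= 0.6 else 0.001 * i
        return max(range(len(options)), key=lambda i: u(i, options[i]))

    trials = [[random.random() for _ in range(5)] for _ in range(10_000)]
    agree = sum(satisficer(t) == fitted_utility_choice(t) for t in trials)
    print(f"agreement on held-out problems: {agree / len(trials):.3f}")  # 1.000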

Furthermore, we know that the inconsistency in human decisionmaking is not due to random noise, computational errors or bad input. The inconsistency is fundamental, not an error term you can graft onto your model at the end.

Human decisionmaking is inconsistent as a result of fundamental uncertainty, constraints on computational tractability, and the fact that predictive output improves much more slowly than linearly in input accuracy, while the cost of that accuracy usually scales much faster than linearly.
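
A toy illustration of that squeeze, with functional forms and numbers made up purely for the example: suppose prediction error falls like 1/sqrt(n) in the number of observations n (ordinary sampling error), while the cost of gathering and cleaning those observations grows like n^1.5. The net value of extra input accuracy peaks early and then goes sharply negative:

    def value_of_accuracy(n, payoff=100.0):
        # Payoff improves sublinearly: error shrinks like 1/sqrt(n).
        return payoff * (1.0 - n ** -0.5)

    def cost_of_accuracy(n, unit_cost=0.01):
        # Cost grows superlinearly (an assumed exponent of 1.5).
        return unit_cost * n ** 1.5

    for n in (10, 100, 1_000, 10_000):
        net = value_of_accuracy(n) - cost_of_accuracy(n)
        print(f"n={n:>6}: value={value_of_accuracy(n):6.2f} "
              f"cost={cost_of_accuracy(n):9.2f} net={net:9.2f}")

Past a fairly small n, each extra unit of input accuracy costs more than the prediction improvement it buys.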

Model-consistent behavior is not rational, because obtaining the input required to compute the model-consistent strategy is more expensive than just winging it based on learned and instinctive heuristics. And even with perfect and complete input information, computing the first-best solution would still be a waste of wetware cycles compared to winging it based on experience and instinct.
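
A toy version of that tradeoff (a hypothetical route-finding problem, nothing from this thread): the "first-best" agent exhaustively checks all 8! = 40,320 routes through nine random points, while the "winging it" agent uses a nearest-neighbour rule needing only a few dozen distance comparisons. The heuristic lands close to the optimum at a vanishing fraction of the computational cost:

    import itertools, math, random

    random.seed(1)
    points = [(random.random(), random.random()) for _ in range(9)]

    def length(order):
        # Total length of the path visiting the points in this order.
        return sum(math.dist(points[a], points[b])
                   for a, b in zip(order, order[1:]))

    # First-best: exhaustive optimization over every route starting at point 0.
    best = min(length((0,) + p) for p in itertools.permutations(range(1, 9)))

    # Good enough: greedy nearest-neighbour, ~n^2 checks instead of n!.
    route, rest = [0], set(range(1, 9))
    while rest:
        nxt = min(rest, key=lambda j: math.dist(points[route[-1]], points[j]))
        route.append(nxt)
        rest.remove(nxt)

    print(f"exhaustive: {best:.3f}  greedy: {length(route):.3f}")

Scale the problem up and the exhaustive agent's n! blows up long before the greedy agent's n^2 does, which is the whole case for instinct over model-consistency.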

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Sun Jul 7th, 2013 at 04:11:49 PM EST
[ Parent ]
I once read a paper on Sherlock Holmes that noted that Holmes's time was the last in which a man could know effectively everything. We are now over a century past Edwardian England, yet we continue to pretend, in both economics and politics, that every actor is Holmes.
by rifek on Mon Jul 8th, 2013 at 12:14:37 AM EST
[ Parent ]
Knowing "everything" would have been a tall order even by the middle 19th C - even Holmes was careful not to attempt that, see the discussion of his knowledge of fields that he considered irrelevant to his interests in A Study in Scarlet.
by Colman (colman at eurotrib.com) on Mon Jul 8th, 2013 at 05:38:39 AM EST
[ Parent ]
At the risk of dabbling in evolutionary psychology, the selection pressure on human behavior is not "make the smartest decision you can possibly make." It is "make a smarter decision than the other guy, fast enough for it to matter." You do not, after all, need to outrun the tiger. You only need to outrun your friends.

That's precisely what I tell climatology deniers who tell me climate models are 'wrong': the models don't have to be 'right' as long as they perform better than the competition. And that's a really low bar, given the competing 'models' these deniers proffer.

by mustakissa on Tue Jul 9th, 2013 at 10:43:55 AM EST
[ Parent ]
