Welcome to European Tribune. It's gone a bit quiet around here these days, but it's still going.
...adding:

I also don't think the gulf between the heterodox Post Keynesians/MMTers and the orthodox New Keynesians is as wide as the spats -- Krugman and Keen being assholes to each other -- would lead one to believe.

(Confidential to both: No, Paul, DSGE doesn't apply only to NK models -- in fact it didn't even originate with NK.  And no, Steve, you're the one who apparently doesn't understand IS-LM, not Krugman.)

It can often appear to be the size of the Grand Canyon when I think it's more like a drainage ditch (okay, maybe a canal).  Oftentimes it seems to just devolve into word games.

Now the gulf between PKs/MMTers and New Classicals/RBCers is obviously enormous, but it seems that even longtime RBCers are throwing in the towel.

They all arrive at pretty much the same conclusions for dealing with our current problems, and there's a reason for that.  The differences are mostly about how best to model -- not that modeling choices aren't important, but they don't strike me as an insurmountable hurdle.

Be nice to America. Or we'll bring democracy to your country.

by Drew J Jones (pedobear@pennstatefootball.com) on Thu Jul 4th, 2013 at 09:44:55 AM EST
[ Parent ]
The problem with Krugman and the rest of the Saltwater guys is that they are taking primitive Walrasian garbage into a sound-proofed basement and beating it with a rubber hose until it starts producing reasonable results. Pseudoscience, in other words.

Which leaves them in the fundamentally untenable position of trying to argue that their models are science, but that taking the very same models and not torturing them until they confess is pseudoscience. Or the equally untenable position of acknowledging that the pure-strain Walrasian models are science, and that their tortured wrecks are as well.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Thu Jul 4th, 2013 at 03:45:31 PM EST
[ Parent ]
I assume (by Walrasian) you mean general equilibrium with sticky prices/wages bolted on?

Be nice to America. Or we'll bring democracy to your country.
by Drew J Jones (pedobear@pennstatefootball.com) on Sat Jul 6th, 2013 at 02:00:38 PM EST
[ Parent ]
I mean utility-optimization loanable-funds models. The Walras/Marshall discussion (general vs. partial equilibrium) is a sideshow in that respect.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Sun Jul 7th, 2013 at 02:37:49 AM EST
[ Parent ]
With you all the way on loanable funds, but why utility-optimization?

Be nice to America. Or we'll bring democracy to your country.
by Drew J Jones (pedobear@pennstatefootball.com) on Sun Jul 7th, 2013 at 08:50:54 AM EST
[ Parent ]
Because it's a load of epicycles and Jesuit logic.

Tl;dr version: The human brain is not wired to find first-best solutions. The human brain is wired to find good-enough solutions.

At the risk of dabbling in evolutionary psychology, the selection pressure on human behavior is not "make the smartest decision you can possibly make." It is "make a smarter decision than the other guy, fast enough for it to matter." You do not, after all, need to outrun the tiger. You only need to outrun your friends.

Long version:
What you actually observe is behavior, not utility. Given a big enough set of observed behavior, you can assemble some behavioral heuristics that will predict agent behavior, at least in terms of some statistical aggregate.

You can, of course, always construct some sort of utility function that generates those observed behavioral heuristics when you optimize against it. But such a utility function has no predictive power over and above the set of heuristics you feed into it. This is because

(a) Humans are not consistent in their decisionmaking, so the inferred utility function will either fail to even describe (never mind predict) observed behavior, or it will contain all sorts of ad hoc modifications that reduce both parsimony and predictive power.
(b) Even if humans were consistent in their decisionmaking, no practical amount of data will allow you to specify it with sufficient accuracy and precision (you need both) to yield experimentally useful predictions.

Furthermore, we know that the inconsistency in human decisionmaking is not due to random noise, computational errors or bad input. The inconsistency is fundamental, not an error term you can graft onto your model at the end.

Human decisionmaking is inconsistent as a result of fundamental uncertainty, constraints on computational tractability, and the fact that predictive output improves much more slowly than linearly in input accuracy, while the cost of that accuracy usually scales much faster than linearly.

Model-consistent behavior is not rational, because obtaining the input required to compute the model-consistent strategy is more expensive than just winging it based on learned and instinctive heuristics. And even with perfect and complete input information, computing the first-best solution would still be a waste of wetware cycles compared to winging it based on experience and instinct.
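The satisficing point can be put in toy code (a sketch only -- the options, threshold, and payoffs are all invented, and real search costs are not this simple): an agent that grabs the first "good enough" option lands within a few percent of the first-best payoff at a tiny fraction of the search cost.

```python
import random

random.seed(42)  # make the toy example reproducible

def optimize(options):
    """First-best: examine every option, keep the maximum."""
    return max(options), len(options)  # (payoff, options examined)

def satisfice(options, good_enough):
    """Good-enough: take the first option clearing the threshold."""
    for examined, value in enumerate(options, start=1):
        if value >= good_enough:
            return value, examined
    # nothing cleared the bar, so we paid full price anyway
    return max(options), len(options)

# 10,000 options with payoffs drawn uniformly from [0, 1)
options = [random.random() for _ in range(10_000)]

best, cost_opt = optimize(options)
ok, cost_sat = satisfice(options, good_enough=0.95)

print(f"optimizer:  payoff {best:.3f} after {cost_opt} evaluations")
print(f"satisficer: payoff {ok:.3f} after {cost_sat} evaluations")
```

The satisficer forfeits a sliver of payoff but stops searching orders of magnitude sooner -- which is the whole point when evaluations (wetware cycles, time before the tiger arrives) are the scarce resource.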

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Sun Jul 7th, 2013 at 04:11:49 PM EST
[ Parent ]
I once read a paper on Sherlock Holmes that noted that Holmes's time was the last in which a man could know effectively everything.  We are now over a century past Edwardian England, yet we continue to pretend in both economics and politics that every actor is Holmes.
by rifek on Mon Jul 8th, 2013 at 12:14:37 AM EST
[ Parent ]
Knowing "everything" would have been a tall order even by the mid-19th century - even Holmes was careful not to attempt it; see the discussion in A Study in Scarlet of the fields he considered irrelevant to his interests.
by Colman (colman at eurotrib.com) on Mon Jul 8th, 2013 at 05:38:39 AM EST
[ Parent ]
At the risk of dabbling in evolutionary psychology, the selection pressure on human behavior is not "make the smartest decision you can possibly make." It is "make a smarter decision than the other guy, fast enough for it to matter." You do not, after all, need to outrun the tiger. You only need to outrun your friends.

That's precisely what I tell climate deniers who tell me climate models are 'wrong': the models don't have to be 'right', they just have to perform better than the competition. And that's a really low bar against the competing 'models' these deniers proffer.

by mustakissa on Tue Jul 9th, 2013 at 10:43:55 AM EST
[ Parent ]
