Recently at a large party I found myself sitting next to a very likable young middle-aged academic tenured at an elite British university, whom I will henceforth refer to as Doctor X and whose field is closely associated with this blog. Doctor X was unfamiliar with both the Real-World Economics Review and the World Economics Association. But when I described the purposes of the latter, in particular the fostering of a professional ethos that prioritized the advancement of knowledge rather than the preservation of orthodoxies and the promotion of vested interests, there was instantaneous recognition of its central relevance to his/her intellectual and career situation.

"Every year I publish papers in the top journals and they're pure shit." Doctor X, who by now had had a glass or two, felt bad about this, not least because "students these days are so idealistic and eager to learn; they're really wonderful." Furthermore, Doctor X could and would like "to write serious papers but what would be the point?"

I then listened to an explanation of Doctor X's predicament that went roughly like this.

One naturally feels loyalty to one's immediate colleagues. The amount of funding Doctor X's department receives depends not on how many papers its members publish, or on their quality, but on which journals they are published in. The journals in Doctor X's field in which publication results in substantial funding will not publish "serious papers" but instead only "pure shit" papers, meaning ones that merely elaborate old theories that nearly everyone knows are false. Moreover, even to publish a "serious paper" in addition to the "pure shit" ones could taint the department's reputation, resulting in a reduction of its funding. In any case, no one at a top university would read a "serious paper" because they only read "top journals."

Memory of this little encounter came rushing back to me a few minutes ago when reading in today's The Observer an opinion piece by Paul Nurse, the president of the Royal Society. In a few words he nicely spells out how real science operates and how Doctor X, perhaps no less than me, wishes economics would operate.
Good science is a reliable way of generating knowledge because of the way it is done. It is based on reproducible observation and experiment, taking account of all evidence and not cherry-picking data. Scientific issues are settled by the overall strength of that evidence combined with rational, consistent and objective argument. Central to science is the ability to prove that something is not true, an attribute which distinguishes science from beliefs based on religions and ideologies, which place more emphasis on faith, tradition and opinion.
A good scientist is inherently sceptical - the Royal Society's motto, in Latin of course, roughly translates as "take nobody's word for it".
So there is probably some kind of scale from orthodox to reformed, with the theological parts of theology departments at one end and the very social-science parts of religious studies departments at the other.

Sweden's finest (and perhaps only) collaborative, leftist e-newspaper Synapze.se
This ties into something I have been thinking about regarding academic disciplines that are dominant in political elite discourse and thus required reading to get into the political elite. I think they tend to be tailored to the needs of the ruling elite, in terms of a collective narrative to motivate the existence of said elite, a common language, and tools of power. These do not always match: claiming, for example, that military victory is granted by God serves the narrative well regarding the victories that brought the elite into power, but serves poorly for understanding the military as a tool of power.
So do empires change their dominant academic discipline, and if so, how does it happen? Which leads to the question of what the dominant academic discipline of past empires was. Was history mandatory for the British elite, and if so, does the fall of the British empire explain why the grand narratives died in history?

Sweden's finest (and perhaps only) collaborative, leftist e-newspaper Synapze.se
does the fall of the British empire explain why the grand narratives died in history?
I suspect, however, that the grand narrative could have been sustained for the USA at least into the 80s. And while the post-modernist critiques have seriously undercut such views, I don't think we can say that the US triumphalists have capitulated. But these are not the sorts of efforts that are widely supported in US academia. History is still widely presented devoid of any meaningful social-theory framework. This was the stronghold of Marxist historians and thus remains suspect in the USA.

"It is not necessary to have hope in order to persevere."
That's not so big a problem at (say) U of Missouri-Kansas City, which is a pretty minor school within the state system. (The flagship school is U of Missouri-Columbia.) The administration isn't expecting them to be Harvard.
But it's a big problem at schools like Notre Dame (which fancies itself mentioned in the same breath as the Ivies), which basically had its econ department gutted over the lack of Top Fives.
I do think it's going to get better. Certainly the blogosphere has helped open up the conversation. And clowns like Reinhart and Rogoff have allowed departments like UMass-Amherst to gain exposure. Getting a huge name like Jamie Galbraith to back you up helps as well (and certainly nobody can accuse the University of Texas-Austin of being an academic lightweight).

Be nice to America. Or we'll bring democracy to your country.
I also don't think the gulf between the heterodox Post Keynesians/MMTers and the orthodox New Keynesians is as wide as the spats, with Krugman and Keen being assholes to each other, would lead one to believe.
(Confidential to both: No, Paul, DSGE doesn't apply only to NK models -- in fact it didn't even originate with NK. And no, Steve, you're the one who apparently doesn't understand ISLM, not Krugman.)
It can often appear to be the size of the Grand Canyon when I think it's more like a drainage ditch (okay, maybe a canal). Oftentimes it seems to just devolve into word games.
Now the gulf between PKs/MMTers and New Classicals/RBCers is obviously enormous, but it seems that even longtime RBCers are throwing in the towel.
They all arrive at pretty much the same conclusions for dealing with our current problems, and there's a reason for that. The differences are mostly in regard to how best to model -- and it's not that that isn't important, but it doesn't strike me as an insurmountable hurdle.

Be nice to America. Or we'll bring democracy to your country.
Which leaves them in the fundamentally untenable position of trying to argue that their models are science, but that taking the very same models and not torturing them until they confess is pseudoscience. Or the equally untenable position of acknowledging that the pure-strain Walrasian models are science, but their tortured wrecks are as well.
- Jake

Friends come and go. Enemies accumulate.
Tl;dr version: The human brain is not wired to find first-best solutions. The human brain is wired to find good-enough solutions.
At the risk of dabbling in evolutionary psychology, the selection pressure on human behavior is not "make the smartest decision you can possibly make." It is "make a smarter decision than the other guy, fast enough for it to matter." You do not, after all, need to outrun the tiger. You only need to outrun your friends.
Long version: What you actually observe is behavior, not utility. Given a big enough set of observed behavior, you can assemble some behavioral heuristics that will predict agent behavior, at least in terms of some statistical aggregate.
You can, of course, always construct some sort of utility function that generates those observed behavioral heuristics when you optimize against it. But such a utility function has no predictive power over and above the set of heuristics you feed into it. This is because
(a) Humans are not consistent in their decisionmaking, so the inferred utility function will either fail to even describe (never mind predict) observed behavior, or it will contain all sorts of ad hoc modifications, reducing both parsimony and predictive power (a toy illustration follows below).

(b) Even if humans were consistent in their decisionmaking, no practical amount of data will allow you to specify it with sufficient accuracy and precision (you need both) to yield experimentally useful predictions.
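To make point (a) concrete, here is a toy sketch of my own (the "observed choices" are invented, not anything from the comment): once revealed pairwise preferences contain a cycle, no assignment of utilities can reproduce them, so any fitted utility function must either misdescribe the data or carry ad hoc patches.

```python
# Toy example: revealed preferences with a cycle cannot be rationalized by any
# utility function. The observed choices below are invented for illustration.

# Each pair (x, y) means "the agent was offered x and y and chose x".
observed_choices = [
    ("coffee", "tea"),
    ("tea", "water"),
    ("water", "coffee"),   # inconsistency: this completes a cycle
    ("coffee", "juice"),
]

def has_preference_cycle(choices):
    """Depth-first search for a cycle in the revealed-preference graph.

    If a cycle exists, there is no utility function u with
    u(chosen) > u(rejected) for every observed choice.
    """
    graph = {}
    for winner, loser in choices:
        graph.setdefault(winner, set()).add(loser)
        graph.setdefault(loser, set())

    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}

    def visit(node):
        colour[node] = GREY
        for nxt in graph[node]:
            if colour[nxt] == GREY:                      # back edge: cycle
                return True
            if colour[nxt] == WHITE and visit(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in graph)

if has_preference_cycle(observed_choices):
    print("No consistent utility function can describe these choices.")
else:
    print("Choices are acyclic; a utility ordering exists (any topological order).")
```

The acyclicity being tested here is just the transitivity requirement behind revealed-preference arguments; real choice data routinely violates it, which is what point (a) is getting at.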
Furthermore, we know that the inconsistency in human decisionmaking is not due to random noise, computational errors or bad input. The inconsistency is fundamental, not an error term you can graft onto your model at the end.
Human decisionmaking is inconsistent as a result of fundamental uncertainty, constraints on computational tractability, and the fact that predictive output improves much more slowly than linearly with input accuracy, while the cost of input accuracy usually scales much faster than linearly.

Model-consistent behavior is not rational, because obtaining the input required to compute the model-consistent strategy is more expensive than just winging it based on learned and instinctive heuristics. And even with perfect and complete input information, computing the first-best solution would still be a waste of wetware cycles compared to winging it based on experience and instinct.
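As a rough illustration of that trade-off (the choice problem and all the numbers are made up for the sketch, not anything from the comment): under noisy evaluations, a satisficer that stops at the first "good enough" option typically gives up only a modest slice of payoff while doing a tiny fraction of the evaluations that the exhaustive "first-best" agent pays for.

```python
# Toy comparison (invented numbers): exhaustive "first-best" optimization vs. a
# cheap satisficing heuristic, when every evaluation is noisy and costs effort.
import random

random.seed(42)

N_OPTIONS = 1000      # size of the choice set
NOISE = 0.3           # fundamental uncertainty: noise on every evaluation
ASPIRATION = 0.9      # the satisficer's "good enough" threshold
TRIALS = 2000

def perceived(true_value):
    """What the agent actually observes: the true value plus noise."""
    return true_value + random.gauss(0.0, NOISE)

def first_best(values):
    """Evaluate every option and take the best-looking one (expensive)."""
    best = max(values, key=perceived)
    return best, len(values)              # payoff, number of evaluations

def satisfice(values):
    """Scan in random order and stop at the first option that clears the bar."""
    best_seen, best_perceived = None, float("-inf")
    order = random.sample(range(len(values)), len(values))
    for n, i in enumerate(order, start=1):
        p = perceived(values[i])
        if p >= ASPIRATION:
            return values[i], n
        if p > best_perceived:
            best_seen, best_perceived = values[i], p
    return best_seen, len(values)         # nothing cleared the bar

def run(strategy):
    payoff = evals = 0.0
    for _ in range(TRIALS):
        values = [random.random() for _ in range(N_OPTIONS)]
        v, n = strategy(values)
        payoff += v
        evals += n
    return payoff / TRIALS, evals / TRIALS

for name, strategy in [("first-best", first_best), ("satisficer", satisfice)]:
    payoff, evals = run(strategy)
    print(f"{name:10s}: average payoff {payoff:.3f}, "
          f"average evaluations {evals:.1f}")
```

Exact outputs depend on the made-up noise level and aspiration threshold, but the qualitative pattern is the point: per unit of evaluation effort, the heuristic wins by a mile, which is the "good-enough beats first-best" argument above.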
That's precisely what I tell climatology deniers who tell me climate models are 'wrong': they don't have to be 'right' as long as they perform better than the competition. And that's a really low bar against the competing 'models' these deniers proffer.