But if the paper is a direct calculation in the framework of a standard model, and the calculation is correct, then the identity of whoever wrote it should not change the view that it is, indeed, correct.
"Proper" peer review would require replicating such calculations. Nobody does that, especially if the paper involves a DSGE model. And in the case of a paper presenting the results of a statistical analysis, the raw data are often not provided. Remember the case of Reinhart and Rogoff's "evidence" supporting austerity?

A society committed to the notion that government is always bad will have bad government. And it doesn't have to be that way. — Paul Krugman
by Carrie (migeru at eurotrib dot com) on Thu Jan 1st, 2015 at 06:40:54 AM EST
Then peer review is broken - whether it is Mankiw or anybody else writing the paper.

As for R&R (which was not a peer-reviewed paper), withholding the data and calculations should be grounds for dismissing any such paper until they are made available. They were hardly trade secrets (which should not be a valid excuse anyway): they were national statistics...
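To make the point concrete: the disputed R&R calculation was essentially an average of growth rates within debt/GDP buckets, and part of the controversy was over how observations were weighted. Here is a minimal sketch of the kind of replication check being argued for, using entirely synthetic numbers (these are not R&R's data or results, just an illustration that the weighting choice alone can change the answer):

```python
from statistics import mean

# (country, debt/GDP %, real GDP growth %) -- synthetic, illustrative only
OBS = [
    ("A", 35.0, 3.1), ("A", 95.0, 1.2),
    ("B", 110.0, 2.0), ("B", 120.0, 0.4),
    ("C", 25.0, 4.0), ("C", 92.0, 1.8),
]

def pooled_mean_growth(obs, lo, hi):
    """Average growth over all country-years with lo <= debt < hi."""
    growth = [g for _, d, g in obs if lo <= d < hi]
    return mean(growth) if growth else None

def country_mean_growth(obs, lo, hi):
    """Average within each country first, then across countries --
    an equal-country weighting of the kind at issue in the R&R debate."""
    per_country = {}
    for c, d, g in obs:
        if lo <= d < hi:
            per_country.setdefault(c, []).append(g)
    if not per_country:
        return None
    return mean(mean(v) for v in per_country.values())

# The two weighting choices give different answers for the same bucket:
print(pooled_mean_growth(OBS, 90, float("inf")))   # 1.35
print(country_mean_growth(OBS, 90, float("inf")))  # 1.4
```

With access to the underlying national statistics, a reviewer or replicator could run exactly this kind of recomputation and see which weighting (and which rows) produced the published figure.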

Earth provides enough to satisfy every man's need, but not every man's greed. Gandhi

by Cyrille (cyrillev domain yahoo.fr) on Thu Jan 1st, 2015 at 07:24:10 AM EST
"Proper" peer review would require replicating such calculations.

I disagree.

Peer review should verify that the methodology used is not insane, that the paper properly references its data, that the author has performed adequate robustness and specification tests, and that the data is available to other investigators who wish to replicate the analysis.

It is possible to imagine cases where the analysis is based on data that cannot be made available to the general public for ethical reasons, or because doing so would be an unreasonable commercial loss for the source of said data. However, in those cases I would argue that journals should demand full independent replication rather than the much more cursory process of peer review.

The above is already a higher standard than current academic peer review observes, and I don't think going beyond this is realistic - or necessarily a desirable use of the reviewers' time.

Now, there's a whole issue of replication not receiving the recognition it ought to. But that is a slightly different matter, and one I think can be solved with standard governance methods, like formalized KPIs for researchers requiring them to publish two replications for each original result.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Thu Jan 1st, 2015 at 08:49:08 AM EST