Tuesday, May 31, 2005

Retraction of scientific papers

One of the difficulties in retracting erroneous papers is the policy of some journals to require the agreement of all authors. That was the case with Science in the Schön affair. See Robert F. Service, Science, Vol. 298, pp. 13-14, Oct. 4, 2002. Around Nov. 2, 2002, the AP reported that Science had retracted 8 papers "at the request of his co-authors" [which would seem to violate Science's stated requirement of agreement by ALL authors]. Nature retracted papers on March 6, 2003, approximately six months after the fraud was documented.

In the Oct. 4 article, it was also suggested that Schön's work would no longer be cited. Three years later, we see that suggestion was incorrect.

The Schön matter also raised issues with peer review. Because peer review has come up in the context of Rebecca Eisenberg's suggestions to refine the obviousness inquiry in patent law, contemplate the following discussion, related to the Schön matter, from "Fraud Shows Peer-Review Flaws" by Eric Lerner in The Industrial Physicist:

The peer-review system is supposed to guarantee that published research is carried out in accordance with established scientific standards. Yet recently, an internal report from Lucent Technologies’ Bell Laboratories concluded that data in 16 published papers authored by researcher Hendrik Schön were fraudulent. The papers were reviewed and accepted by several prestigious scientific journals, including Nature, Science, Physical Review, and Applied Physics Letters. Yet, in many of the papers, the fraud was obvious, even to an untrained eye, with data repeated point-for-point and impossibly smooth or noise-free. All the papers passed through internal review at Bell Labs, one of the world’s foremost industrial research institutions, and the journal peer review system without raising alarms. The fraud was discovered only after journal readers started pointing it out.
(...)
Once the papers were submitted for publication, how did they get past so many sets of reviewers? Clearly, it was not the fault of one or two reviewers because of the many articles involved. Nor did editors ignore warnings from the reviewers. “After the story broke, we looked back over the reviewer reports,” says Monica Bradford, managing editor of Science, “but we did not find any clues that something was wrong.” Although it is common for journal reviewers to critically comment on a paper’s data and raise questions about noise levels and statistics, not one reviewer at any journal caught the fact that the data was impossibly good or copied from chart to chart.

Some in the scientific community think that the reviewers should not be blamed for missing the flaws in Schön’s papers. “Referees cannot determine if data is falsified, nor are they expected to,” argues Marc H. Brodsky, executive director of the American Institute of Physics, which publishes Applied Physics Letters. “That job belongs to the author’s institution, and the readers if they deem the results are important enough. A referee’s job is to see if the work is described well enough to be understood, that enough data is presented to document the authors’ points, that the results are physically plausible, and that enough information is given to try to reproduce the results if there is interest.”

But editors at leading journals take a broader view, and they admit that the reviewers were among those at fault. “Clearly, reviewers were less critical of the papers than they should have been, in part because the papers came from Batlogg, who had an excellent track record, and from Bell Labs, which has always done good work,” admits Karl Ziemelis, physical sciences editor at Nature. “In addition, although the results were spectacular, they were in keeping with the expectations of the community. If they had not been, or had they come from a completely unknown research group, they might have gotten closer scrutiny.” Thus, reviewers and editors as a group had a bias toward expected results from established researchers that blinded them to the problems in the data. [LBE note: one notes there is no discussion of the letter to the editor of Nature by Paul Solomon of IBM.]

The Schön case points to problems in the peer-review system on which considerable discussion has focused recently, and which affect aspects of science far more significant than the infrequent case of fraud. "There is absolutely no doubt that papers and grant proposals from established groups and high-prestige institutions get less severe review than they should," comments Howard K. Birnbaum, former director of the Frederick Seitz Materials Research Laboratory of the University of Illinois at Urbana-Champaign. He recently criticized peer-review practices in grant awards in an article in Physics Today. "It is not just a problem of fraud," he says. "I and colleagues have seen sheer nonsense published in journals such as Physical Review Letters, papers with gaping methodological flaws from prestige institutions."

Because journals have a limited number of pages and government agencies have limited funds for research, too lenient reviews of the established and the orthodox can mean too severe reviews of relatively unknown scientists or novel ideas. The unorthodox can be frozen out, not only from the most visible publications but also from research funding. Not only does less-than-sound work get circulated, but also important, if maverick, work does not get done at all. The peer-review system's biases, highlighted in the Schön case, tend to enforce a herd instinct among scientists and impede the self-correcting nature of science. This is scarcely a new problem. As Samuel Pierpont Langley, president of the American Association for the Advancement of Science, wrote in 1889, the scientific community sometimes acts as "a pack of hounds...where the louder-voiced bring many to follow them nearly as often in a wrong path as in a right one, where the entire pack even has been known to move off bodily on a false scent." [LBE note: one must recall the role of Langley, and of Langley's experiments, in the saga of the Wright Brothers.]

(...)
One way to encourage real collaborations rather than passive co-authoring is to have the responsibility of co-authors listed in the published paper -- for example, device fabrication by John Doe, experimental procedure by Jane Smith, data analysis by Tom Harold. Senior researchers would then have to take co-responsibility for specific aspects of an experiment, or remove their names from papers to which they contributed little.

None of these changes, however, directly addresses the bias of reviewers toward prestigious groups and accepted ideas. More drastic reforms aim at fundamental changes in the system of anonymous review. Blind review, for example, involves removing the authors' names from articles sent to reviewers, while open review requires reviewers to sign their names to reviews seen by authors.



See also M.J.G. Farthing, "Publish, and be damned...": the road to research misconduct.
