Vox points me today to an interesting fact: the number of retractions in academic journals has skyrocketed over the past two decades. However, since the total number of journal articles has also skyrocketed,¹ I adjusted the raw numbers to show the percentage of articles retracted:
Even accounting for the growth in journal articles, the number of retractions has nearly tripled since 2017. Is this because researchers are doing sloppier work? Or because more people are on the lookout for sloppy work? Your guess is as good as mine.
¹The number of retractions comes from the Retraction Watch database here. The number of journal articles comes from Fire and Guestrin, here. Their numbers go through 2014, so I extrapolated to 2018 and then used the growth numbers here up to 2022. This is admittedly a little dodgy, but probably in the right ballpark.
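For anyone who wants to reproduce the adjustment, here is a minimal sketch of the arithmetic described in the footnote. The retraction counts, article counts, and growth rates below are placeholder values, not the actual figures from Retraction Watch or from Fire and Guestrin; the point is just the mechanics: extrapolate the total article count past 2014, then divide.

```python
# Sketch of the adjustment described in the footnote.
# All numbers below are illustrative placeholders, NOT the real data;
# the actual counts come from the Retraction Watch database and from
# Fire and Guestrin's article-count estimates.

# Raw retraction counts by year (placeholders)
retractions = {2014: 1500, 2018: 2500, 2022: 4500}

# Total articles published, known through 2014 (placeholder)
articles_2014 = 2_400_000

# Step 1: extrapolate total articles from 2014 to 2018 with an
# assumed annual growth rate (placeholder)
growth_2014_2018 = 0.04
articles_2018 = articles_2014 * (1 + growth_2014_2018) ** 4

# Step 2: carry the estimate forward to 2022 using a separately
# sourced growth rate (also a placeholder)
growth_2018_2022 = 0.05
articles_2022 = articles_2018 * (1 + growth_2018_2022) ** 4

articles = {2014: articles_2014, 2018: articles_2018, 2022: articles_2022}

# Step 3: the adjusted series is simply retractions as a percentage
# of all articles published that year
for year in sorted(retractions):
    rate = 100 * retractions[year] / articles[year]
    print(f"{year}: {rate:.3f}% of articles retracted")
```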
This is why I've had Retraction Watch bookmarked for years.
But it can be VERY depressing.
I think it's because more sloppy work is getting published in the newer, less rigorous journals.
The new journals have a strong economic incentive to take papers, since they're "open access", getting all their revenue from page charges paid by the authors, none from subscribers. If they don't accept the papers, they don't get paid. (Some non-open access journals have page charges too, but since they have enough submissions to fill up the space they'll get that money no matter what.)
"Less rigorous" is a somewhat generous designation. I've encountered journals that just copied and pasted the journal info from another journal's website, without even changing the name. Some offer a turnaround of only 10 days between submission and publication...with a higher fee, of course.
To be fair, those journals aren't causing the spike in retractions. I'm sure they'd never bother retracting anything, unless the check bounced.
Because AI tools draw from the open internet, the worst of the predatory journals tend to show up when you ask AI to find "scholarly articles" on a topic.
0.06% hardly qualifies as "lots."
True, but doubling in four years is pretty extraordinary.
You should demand a retraction.
It's not necessarily because more people are looking for sloppy papers, but because more people are looking at papers in general.
During the early 2000s there were still plenty of journals available only in physical copies at the university library, with a smaller selection available online, and depending on your university, some of those might not have been accessible at all. That has steadily improved over the past 15 years or so; now every journal is published online, and a much wider selection is not just available at the library but publicly available for free to anyone with an internet connection.
All that extra exposure is bound to increase the number of retractions as more eyes go over papers that might not have received as much attention in the past.
A colleague of mine had two articles retracted for image manipulation. I'm pretty familiar with the case, which was first posted on PubPeer. None of the manipulations affected the conclusions of either paper, but the journal decided to retract them both. The PI (my colleague) was ultimately responsible, but the co-authors were burned by this, too.
In these two cases, the journal was a very high impact journal, and the image manipulation was done to tidy up the images, since otherwise reviewers would probably have insisted on the experiments being repeated and the PI didn't have the money. Should have gone to a lower tier journal rather than cut corners, yes, but in the highly competitive NIH funding universe, getting a couple of high impact pubs was too tempting. I'm sure other cases never get caught.
Combination of poorer quality journals + academic pressure to publish more. Retractions should be numbering in the tens of thousands per year. Psychology research has been found to be particularly bad: the majority of psychology studies fail to replicate when researchers try to recreate their results. And various forms of data manipulation that fall short of outright fraud are extremely common. I'd estimate somewhere between 50% and 60% of all social science papers are guilty of some level of data manipulation (especially p-hacking).
It's been known for years. A group of psychologists in 2011 published a paper using standard, accepted psychology research methods to prove that listening to a Beatles song de-aged a person by 1.5 years.
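For a sense of how p-hacking produces results like that, here is a toy simulation (illustrative only, not the actual procedure from the 2011 paper): two groups drawn from identical distributions, but with ten outcome measures to pick from and only the significant one reported, the false-positive rate jumps from the nominal 5% to roughly 40%.

```python
# Toy simulation of p-hacking: the two groups have NO real difference,
# but we peek at several unrelated outcome measures and claim a "finding"
# if ANY of them comes out significant at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000   # simulated studies
n_per_group = 20       # participants per group
n_outcomes = 10        # outcome measures examined per study

false_positives_honest = 0   # test only the one pre-registered outcome
false_positives_hacked = 0   # report whichever outcome is significant

for _ in range(n_experiments):
    pvals = []
    for _ in range(n_outcomes):
        control = rng.normal(size=n_per_group)
        treated = rng.normal(size=n_per_group)   # same distribution: null is true
        pvals.append(stats.ttest_ind(control, treated).pvalue)
    if pvals[0] < 0.05:
        false_positives_honest += 1
    if min(pvals) < 0.05:
        false_positives_hacked += 1

print(f"Honest (one outcome):  {false_positives_honest / n_experiments:.1%}")
print(f"Hacked (best of {n_outcomes}): {false_positives_hacked / n_experiments:.1%}")
# Typically prints roughly 5% vs. ~40%: the same null data, but a very
# different apparent "discovery" rate once you get to choose the test.
```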
As for whether people are doing sloppier work ... it's very hard to measure. (What is sloppy? Who decides? Should the standard vary by field, because methods do?)
What is absolutely true is that more people are looking harder at publications for what might be worthy of a retraction.
But a third factor I'd suggest is that as retractions have increased, so has the feeling that retraction is an option for a flawed or sloppy (as opposed to fraudulent) paper. That wasn't always the case. Thirty years ago, a flawed paper would simply be dismissed or taken with a grain of salt by the people to whom it was relevant.
(To be honest, this is all, I think, partly a consequence of making scientific output more accessible outside of specialist academic circles. I am 200% in favor of making as much scientific literature available to the public as possible, especially if it was paid for with public money, and yet even I don't see an unalloyed good in it. A paper may be deeply flawed or mistaken (and many, many are) and yet still prove useful to the field. But a layperson is simply not qualified to judge this nuance. This is not helped by the seemingly widespread expectation, clearly evident on this blog as well, that a scientific paper contains or aims at "truth." It never has, and never will, and reading a paper as if it contained truth is a recipe for credulity, sloppy thinking, and going on to do very poor science. It is one of the first misapprehensions a scientist in training needs to have corrected, and it is also the cardinal and seemingly unavoidable sin of what passes for science journalism these days. And I think an environment in which a paper's conclusions might be mistaken for "truth" is one which may lead authors, editors, and observers, scientific or lay, to err much more in favor of retraction.)
I guess that’s ok. You can always retract it later…
From Kevin:
“Even accounting for the growth in journal articles, the number of retractions has nearly tripled since 2017. Is this because researchers are doing sloppier work?”
Well, sloppy research is very likely one side of the equation. But why would that sloppy work slip past peer reviewers and editors?
This question brings us to an obvious but politically incorrect possibility. As detailed in a 2018 Atlantic article, "What an Audacious Hoax Reveals About Academia," by none other than Yascha Mounk:
“Over the past 12 months, three scholars—James Lindsay, Helen Pluckrose, and Peter Boghossian—wrote 20 fake papers using fashionable jargon to argue for ridiculous conclusions, and tried to get them placed in high-profile journals in fields including gender studies, queer studies, and fat studies. Their success rate was remarkable: By the time they took their experiment public late on Tuesday, seven of their articles had been accepted for publication by ostensibly serious peer-reviewed journals …
“The lesson is neither that all fields of academia should be mistrusted nor that the study of race, gender, or sexuality is unimportant …
“But if we are to be serious about remedying discrimination, racism, and sexism, we can’t ignore the uncomfortable truth these hoaxers have revealed: Some academic emperors—the ones who supposedly have the most to say about these crucial topics—have no clothes.”
I don’t offer this as a full explanation for the increasing number of journal retractions, but I do think it’s a potentially important aspect of the situation that should not be ignored: as an increasing number of ostensibly academic and/or objective fields of inquiry emphasize the concerns of social justice instead of the rigors of dispassionate analysis, a whole lot of nonsense is going to get published.
It isn't the first time that sort of thing has been done, but I think it is a form of academic misconduct. If you deliberately choose to publish garbage, you're part of the problem regardless of what your motives are. You're the James O'Keefe of academia.
I don't know who these people are, but in the past this sort of hoax has been people attacking fields that are not their own - physicists writing sociology papers, that sort of thing. And that is doubly dishonest.
My guess? Iffier data is getting published because the number of publications is the only performance metric colleges care about anymore.
I think the more interesting question is: what is the appropriate rate of retraction? Reviewers are an incredibly small pool, so it should be expected that some flaws will not be found by them.