We've all heard about the replication crisis, right? It turns out that lots of well-known results in the social sciences are the result of crappy, small experiments that fail when researchers try to reproduce them.
But that's social science. An article in Nature a few months ago says we have the same problem in clinical medicine, even in supposedly gold-standard randomized controlled trials. But the problem isn't sloppiness. It's outright fraud:
“If you search for all randomized trials on a topic, about a third of the trials will be fabricated,” asserts Ian Roberts, an epidemiologist at the London School of Hygiene & Tropical Medicine.
....Ben Mol, who specializes in obstetrics and gynaecology at Monash University in Melbourne, Australia, argues that as many as 20–30% of the RCTs included in systematic reviews in women’s health are suspect.
....A 2022 update of a Cochrane review argued that more than 40% of [ivermectin] RCTs were untrustworthy.
The good news, if there is any, is that much of the problem comes from "paper mill" studies in underdeveloped countries. These typically have small sample sizes and are manipulated to support earlier findings. That attracts less attention, you see.
In practice, then, the real problem isn't quite as dramatic as "a third are fabricated." But it's still pretty bad. Just generally speaking, it sure seems like science needs to get its house in order.
"Just generally speaking, it sure seems like science needs to get its house in order."
There is no transnational "big science" that controls all scientific research and publishing on the planet. There will always be folks in third world countries who manufacture data and there will always be fee-for-publication journals in third world countries willing to publish those data. There is no mechanism to prevent this from happening. The best we can hope for is to limit subscriptions to, and aggregation of, journals that publish fraud.
"There will always be folks in third world countries who manufacture data and there will always be fee-for-publication journals in third world countries willing to publish those data. "
The same is true in rich developed countries. Just remember that the modern day antivax craziness that started with the false claims of vaccines causing autism was published in one of the most prestigious journals in the world, and the fraudulent research took place in the UK. The same is true of pay-to-publish journals: universities in all "first world" countries use them when they can't get their studies into more rigorous journals.
"Just remember that the modern day antivax craziness that started with the false claims of vaccines causing autism was published in one of the most prestigious journals in the world, and the fraudulent research took place in the UK."
One paper, which was withdrawn; the senior author was struck off the medical register. The plural of anecdote isn't data.
What's interesting isn't that occasional fraud occurs in western refereed journals. Of course it does. But it is relatively rare. Most fraudulent scientific papers are published from 3rd world countries in pay-to-play journals.
"Relatively rare" contains multitudes. Here's another anecdote. My brother was an academic chemist. One time he did a literature search for an oddball project he was working on, and found a relevant paper in an obscure journal. Not being a fool, he tried to replicate the results, without success. This by itself is not that unusual, as it is difficult to fully describe the conditions of an experiment in limited space. So the next step was to contact the author of the paper. The initial contact was cordial and collegial, but once it became clear what my brother was doing, the guy ghosted him. My brother concluded that the results were faked.
This was not, in my brother's experience, really all that uncommon. The trick is to publish results that are interesting enough to be accepted by at least a low-tier journal, while not so interesting that anyone is likely to follow up on them. The key is that the sciences are so specialized that a hiring or tenure committee likely has no one on it who knows which are the good journals and which publish irrelevant crap. Play your cards right and you have a superficially impressive yet insubstantial publications list.
The institutional attempt to respond is to have automated systems to assess papers' significance by counting how many times they are cited by other researchers. Do crap researchers respond by trading citations? Of course they do!
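To make the gaming concrete, here's a toy sketch (purely illustrative, not any real bibliometric system): a citation count is just the in-degree of a paper in the citation graph, so a small ring of authors who agree to cite one another inflates every member's count without any outside interest in the work.

```python
# Toy illustration: citation counts are in-degrees in a citation graph,
# so a reciprocal citation ring inflates every member's count.

def citation_counts(citations):
    """citations: dict mapping each paper to the list of papers it cites.
    Returns a dict of how many times each paper is cited."""
    counts = {paper: 0 for paper in citations}
    for cited_list in citations.values():
        for cited in cited_list:
            counts[cited] = counts.get(cited, 0) + 1
    return counts

# Honest literature: papers cite earlier, relevant work.
honest = {"A": [], "B": ["A"], "C": ["A", "B"]}

# A three-paper citation ring: each member cites the other two,
# so all three look equally "significant" to a naive counter.
ring = {"X": ["Y", "Z"], "Y": ["X", "Z"], "Z": ["X", "Y"]}

honest_counts = citation_counts(honest)
ring_counts = citation_counts(ring)
```

In the honest graph the counts reflect which work others actually built on; in the ring, every paper gets the same inflated score regardless of merit, which is exactly why raw citation counting is gameable.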
Edit to add: The good news is that the crap science published this way is irrelevant. My brother's experience is a good illustration of this. He wasted some time and resources, but he didn't build a house of cards atop the crap paper. The problem arises in fields that do build that house of cards. This isn't to say that anyone should try to replicate every published paper, but if a result is important enough to build on, it is important enough to check.
FYI, hiring committees will likely know which journals are reputable and which publish crap, since they tend to have people in the same field as the applicant. Tenure committees will definitely know which is which.
Joel says it well. We have no control over foreign journals, and we now have journals where you pay to have your stuff published with no peer review. I think preprint servers have utility when used so that people can comment on papers and authors can improve them, but they are all too often cited as finished products. For those of us who work in medicine, and I suspect for others reading in their own fields, it's not so much of an issue, as we learn what to ignore or suspect as false. For the lay public, however, these kinds of papers get weaponized, as we saw during covid. Sucks, but the US can't really shut down some journal published in another country.
Steve
Doesn't this generally invalidate most meta-analysis papers?
No.
Good meta-analyses use explicit criteria to judge how reliable each included study is, and down-weight or exclude the suspect ones. Of course that does not eliminate all fraud, but it means a careful meta-analysis can be more reliable than its worst inputs, depending on who does it.
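As a concrete sketch of what pooling with a quality screen looks like (toy numbers of my own, not from any actual review): fixed-effect inverse-variance weighting, where each study's weight is 1/SE², and studies below a quality threshold are simply excluded before pooling.

```python
import math

# A minimal sketch of fixed-effect inverse-variance pooling with a
# crude quality screen. The studies and quality scores are invented
# for illustration; real reviews use structured risk-of-bias tools.

def pooled_effect(studies, min_quality=0.5):
    """studies: list of (effect, standard_error, quality_score) tuples.
    Returns (pooled estimate, pooled standard error)."""
    kept = [(y, se) for y, se, q in studies if q >= min_quality]
    weights = [1.0 / se**2 for _, se in kept]   # inverse-variance weights
    total_w = sum(weights)
    estimate = sum(w * y for w, (y, _) in zip(weights, kept)) / total_w
    return estimate, math.sqrt(1.0 / total_w)   # pooled SE

studies = [
    (0.20, 0.10, 0.9),   # large, well-conducted trial
    (0.15, 0.20, 0.8),   # smaller but credible trial
    (0.90, 0.15, 0.2),   # implausible outlier from a suspect source
]

with_all, _ = pooled_effect(studies, min_quality=0.0)  # suspect trial included
screened, _ = pooled_effect(studies, min_quality=0.5)  # suspect trial excluded
```

The point of the toy numbers: one fabricated trial with a dramatic effect can nearly double the pooled estimate if nobody screens it out, which is why "about a third of trials may be fabricated" is so corrosive for systematic reviews.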
Retraction watch dot com.
In my field, no one pays attention to the predatory fee-for-publication journals. (Well, actually, these days all journals are fee-for-publication, but some are clearly for profit, while the mainstream ones are ostensibly just to keep the editorial/production staffs paid.)
And almost all of our data are public anywhere from 2 weeks to at most 1 year after they're obtained, so there's definitely a market for researchers in less developed countries to publish quick, shallow papers. One recent observatory has its data go public in 2 weeks, and for each observation there are very quickly three papers: one from the instrument team, and one each from two different Indian groups, and all the papers are pretty shallow.
I kind of understand with India. They are new to the game in our field, it's a country of 1 billion-plus people, it's hard to stand out, so there is this huge pressure to publish as much as possible.
I've had data "scooped" by researchers in underdeveloped countries, and so far mostly what they get from me is a very, very brief reference that they've published the data, and I go on with my own analysis at what I hope is a much deeper, thorough, level.
Again, public data are public data, so I can't stop anyone from looking at it. I just hopefully can do a better job of looking at it carefully.
Came here to say something similar. I think all fields are now riddled with scam journals that have cropped up to feed the enormous demand from desperate grad students and postdocs, typically outside of the U.S. and Europe. You pay your money, get some nominal peer review, and presto, you're published in an open access journal that no one reads and carries zero weight with institutions and funding agencies. I didn't know they were scraping other people's data to "scoop" them; fortunately my data isn't public until published. I don't know what fraction of published articles they comprise, but however big it is, it's a fraction that is almost completely ignored.
Government funded research is moving toward all associated data being made public. NASA has been there since ~1990, with any spacecraft data being in a public archive. Practice has been a one-year proprietary period. That window has been shrinking of late.
I'm OK with it being shrunk to ~6 months, and in some cases being shrunk further to ~3 months (e.g., special opportunities like every NASA observatory chasing after the counterparts to the merging neutron stars that were observed in gravitational waves ~5 years ago; it's useful to have that data out there more quickly). But the two week thing is silly, and just encourages fast, sloppy work.
This is in part being driven by the reproducibility issue. Funding agencies are also pushing to have people put their analysis code on public repositories. I'm also OK with that. For any paper I write, I usually have a script that I can hand out that will create all the statistics, tables, and figures from the paper. (Maybe not the soup to nuts analysis, but a fair chunk of the end results.) And I generally think that's a good thing to make this available.
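As a sketch of what such a regeneration script can look like (the data and table layout here are hypothetical, not the commenter's actual pipeline): all derived statistics and tables are rebuilt from the raw data in one run, so a reader can reproduce the paper's numbers end to end.

```python
import csv
import io
import statistics

# A minimal sketch of a "one script regenerates the paper's numbers"
# workflow: summary statistics and Table 1 are recomputed from the raw
# data every time, never hand-edited. Data values are invented.

RAW = [12.1, 11.8, 12.4, 13.0, 11.9]   # stand-in for the archived raw data

def build_table(raw):
    """Recompute summary statistics and render them as a CSV table."""
    stats = {
        "n": len(raw),
        "mean": round(statistics.mean(raw), 3),
        "sd": round(statistics.stdev(raw), 3),
    }
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["statistic", "value"])
    for name, value in stats.items():
        writer.writerow([name, value])
    return stats, buf.getvalue()

stats, table1 = build_table(RAW)
```

The design point is that nothing in the output is typed in by hand: change the raw data file and rerun, and every number in the table changes with it, which is exactly what makes the results checkable by a reviewer.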
I'm NIH-funded and they are not requiring that yet. Journals I publish in are requiring that you post your data in an online repository upon publication, probably at the NIH's behest. I think I'd be ok with posting data after 6 months, but even so it was nearly 3 years from the time data collection was complete to final publication of my most recent paper (COVID and an obstreperous postdoc adviser played a role in that), so I'm grateful I wasn't required to post it as I collected it.
Same here (ocean circulation). All the journals I’d publish in require a demonstrated public archive before acceptance. Reviewers are asked to test it for usability.
This will eventually become a burden when superseded data (say when more careful quality control is done after initial publication, or long-obsolete model runs) need perpetually-maintained accessible storage. The journals I know have not yet faced up to that problem.
There is a reason that the FDA doesn't pay much attention to "clinical trials" that are performed outside the US (or at least the developed world). This has been an issue for many decades. I think that clinical research in the US is >80% on the up-and-up. Phase 4 studies excluded.
You know, what fascinates me is how similar this is to "news" websites. The whole "fake news" thing was first about websites set up in places like Croatia or something with purported news headlines and photos that were completely made up.
I also wonder about the prevalence of drug companies running multiple studies but only publishing the ones with successful results.
Since 2007, drug studies related to approval are registered at initiation to prevent selective reporting. See: https://research.oregonstate.edu/irb/clinicaltrialsgov-registration-reporting-requirements
Thanks; I didn't know that, so I feel somewhat better about that part of the process.
This is what terrifies me so much about AI tool use for scholarly literature reviews. When I have tested them, I've gotten a lot of predatory journal content. This is to be expected - predatory journals are really easy to find online, so they get sucked up into the data used in AI tools.
As AI does more and more of the "reading" and data-gathering for lit reviews, we'll see an ever growing proportion of cited literature being total junk.
The problem really has been exacerbated by the push for open access. The intention is good: taxes pay for much research, so taxpayers should be able to read articles about research they already paid for without additional fees or having to schlep to an academic library. As noted above, government agencies are now requiring open-access publishing in many cases for this reason.

However, the unintended consequence has been the rise of predatory journals, which rip off scientists who are young, naive, work in underprivileged countries, and desperately need things to put on their resumes. This also makes it easy for fraudsters: just pay a fee and put your crappy article online for the world to see! No need to go through all that tedious peer review stuff; we'll make sure your article is published, wink, wink.

And it's not just journals; it's also predatory conferences and fellowships, which are happy to take money from just about anyone, no questions asked. I'm flooded with e-mails from these "organizations" begging me to speak at some conference on a topic I have no expertise in, or congratulating me on being nominated as a fellow of some "society" I've never heard of (just pay a low fee of $X)! It's really a scourge.
"However, the unintended consequence has been the rise of predatory journals,"
That is because the author-pay model is the wrong one. What the funding agencies need to do is to fund the publishing process itself, similar in principle to the way they fund the science.
Science won't "get its house in order" until colleges and universities get their houses in order with respect to tenure and promotion policies and procedures.