An Example of Poor Peer-Review

DynamicDidactic

Saw this on Retraction Watch

RW's entries are usually hard to make sense of, though, so I found this:

Here is the original article

I still have no idea what RW is talking about when it comes to the inconsistencies stemming from a change in study design. But overall, how could the reviewers miss all these glaring issues? Just glancing over the article, I see a lot of other problems related to treatment-construct validity. Basically, it is hard to tell whether these therapists were actually doing CBT (it seems like just CT), and they may have been delivering both treatments. Also, they are all "well-trained psychologists" despite some being MA-level and others doctoral-level, with no evidence offered to support their training.
The therapist encourages the patients to imagine, using the skills learned in therapy to cope with the events, feelings, and thoughts, and to anticipate when and how they can apply the skills learned in therapy for future situations.
If this was the extent of the therapy, no wonder there is no difference between groups.

 
Peer review is an inefficient system of external research quality management.

Totally agree. Through seeing reviews of my own published work, and other reviewers' statements when I review, I've seen vast differences in the quality of and time spent on these reviews.

Considering how much these journals charge, they should have some identified individuals paid on a part-time basis to do this work.
 
I see so many papers with flaws that should have been caught by peer review... and I don't necessarily mean flaws in the study itself, because those are inevitable (no study is perfect), but in the way it's analyzed or described, which could be fixed or clarified in the presentation. On the flip side, I also have had decent papers (in my opinion) get rejected over and over (mostly in social psych/personality journals, which are VERY stringent right now) for things that can't be fixed. Peer review is about helping people clarify their message, but there's just a lot of "you should run 3 more studies" or "I don't think this study adds anything," neither of which is always feasible or helpful. I wouldn't DO a study if I didn't think it added something... why is the reviewer's sense of importance more valuable than mine? So it becomes a "one said" versus "another said" kind of thing, and it just feels gross. Can't we just help each other instead of trying to tear each other down?
 
When I was a research assistant, an academic psychiatrist who was the clinic director as well as a professor emeritus at the university's medical school would pass the job of reviewing articles received to the research assistants, some of whom only had bachelor's degrees. The psychiatrist is very well known in her field and has a few hundred peer-reviewed publications.

The psychiatrist would look over what the research assistants wrote but typically did not change anything. We were essentially the “ghost writers.” We did not have the knowledge or skill to undertake peer reviewing, because we were not reviewing “peers”; we were reviewing articles written by physicians and psychologists.
 
Sad. That doesn't advance the science at all. Also: journals get what journals pay for.
 
I agree; having peer review be essentially 100% voluntary is great in theory, but difficult in practice, especially in this world of increasing clinical time demands and decreasing proportions of the field in academia.
Not just that, but those of us in academia have so much work to do and get so little tenure and promotion credit for peer review.

Speaking of questionable stuff, I just reviewed a manuscript that talked about its "all-female sample" for a purportedly female-exclusive measure they had developed--only for the demographic table to reveal that almost 20% of their sample did not, in fact, identify as female.
 
Since graduate school I have been saying that peer review is <at best> a review of the presentation of the science rather than the actual science. Even there it is going to be imperfect, but I think we often forget that even a rigorous peer review is not a great way to actually confirm the quality of the work. There are simply too many details that a peer reviewer would never get into or catch (a la my earlier post about people running analyses in R who don't realize that R's anova function computes sequential Type I sums of squares - a fundamentally different analysis from the Type III tests most people expect it to run). I am about as confident as I can be that there are a huge number of errors in the literature from that one alone. Peer review would have no conceivable way of catching them. We have a lot of blind trust in the process. Are we confident faculty are having all data double-entered and cross-checked? How confident are we in their training processes for research assistants? Like it or not, these things are about as likely to impact findings as many intended manipulations... especially in established fields where folks typically advance to teasing out more subtle effects.
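To make that concrete, here's a quick simulated sketch of the anova() gotcha (made-up data; the car package's Anova() is one common way to get the order-invariant tests people usually intend):

## Simulated example: with correlated predictors, base R's anova()
## reports sequential (Type I) sums of squares, so simply reordering
## the terms in the formula changes the tests.
set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- 0.6 * x1 + rnorm(n)   # predictors are correlated
y  <- 0.5 * x1 + 0.5 * x2 + rnorm(n)

m1 <- lm(y ~ x1 + x2)
m2 <- lm(y ~ x2 + x1)       # identical model, different term order

anova(m1)   # x1 tested first, x2 adjusted for x1
anova(m2)   # same terms, noticeably different SS and p-values

## Order-invariant tests (each term adjusted for the others):
library(car)                # car::Anova() does Type II/III tests
Anova(m1, type = 2)         # same result whether you fit m1 or m2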

The problem is time and money. A proper peer review would involve a full on-site lab audit. No one wants to pay for that, and no peer reviewer has the time for it. Mine are frequently done late at night with a beer in hand, and they are still usually more detailed than most.

Long-term, I think systemic changes are needed. I think there is something to be said for moving towards a model where we are funding/reviewing <labs> in lieu of individual studies. Or perhaps in addition to individual studies. AMCs are doing this quite often now. I mean, it's usually poorly set up and they send out a nurse who has never done anything but a pharma clinical trial and is weirdly focused on "crossing things out the correct way" but has no idea how statistics work, so... not that. But in theory, if we wanted to do it well and dedicated resources to it, I think we could.

We need to think big. A lot of the current efforts seem more like the proverbial band-aid on a broken limb than actual solutions to what is plaguing the field.
 
I'm going to say that journal page limits are an issue with full disclosure as well. When you have a strict 25-page limit for everything (abstract, tables, references, title page, etc.), seemingly small but potentially important details about methodology are probably going to end up on the chopping block, sadly. I also see well-conducted systematic reviews and meta-analyses as key to noting and correcting some of these issues.
 
At least most of the top journals allow/encourage online supplements these days, and that seems to be becoming more widespread. In about half of the reviews I do, I end up telling authors to include additional material in a supplement if necessary, and I am not shy about making folks cut intro/discussion for more methods. For my current project, the supplement and the manuscript itself are competing to determine which will be longer, and this is not atypical in some fields. It's not a perfect solution, since many people won't check them, but it is at least a start. Honestly, the whole notion of "page limits" seems stupid to me at this point - it's a relic from the "print" days. Serious question - has anyone on this board actually read a physical copy of a journal (not counting an article you printed yourself) in the last oh... 3 years or so? I think the only time I did was when I got a free copy of something at a conference and flipped through it during a talk I wasn't interested in. I guess copyeditors charge by length too, but it's a fraction of the cost of printing and is mostly automated these days anyway.

I will semi-agree with you on systematic reviews, but disagree on meta-analysis. Well... that's unfair. Partially disagree. They are a tool and do have their place. There is a lot to be said for collating the literature. The overwhelming majority of systematic reviews I read are content-focused and rarely dig into the methodology enough to uncover nuance there, but that could be readily overcome. Meta-analysis I am quite a bit more suspicious about, since it makes it even easier to avoid discussing the underlying studies. It works well for relatively simple questions or fields with extremely homogeneous methods (e.g., pharm trials). These are largely what it was designed for. It's now become a way for authors and reviewers to "make numbers" and feel good about themselves for how scientific they are being. They can then conclude "d = .563... that's good, right?!?" and call it a day. It is invariably a train wreck, and that recent thread about the latest dodo bird article is not even close to the most egregious example I have seen. Sadly, I'd actually place it in the low-average category. I worry a lot about meta-analyses actually clouding the picture by obscuring methodological details and just giving us one nice friendly effect size so we can ignore all the complicated methodology gobbledygook. I dug into an fMRI meta a few weeks back for a paper I'm working on and discovered that the region the meta highlighted was literally not there in any of the underlying studies. Admittedly, ALE is a different technique than standard meta-analysis and there could even be legitimate reasons why this might occur... but it's a good illustration of the point, because the authors certainly didn't discuss that issue and I wouldn't expect reviewers to catch it.

And sadly...that one I would place in at least the upper third of meta-analyses I have read. Yes, even with that problem.
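For what it's worth, here is a toy sketch of that "one nice friendly effect size" problem (numbers entirely made up): plain inverse-variance pooling will cheerfully average three conflicting hypothetical studies into a single tidy d that describes none of them.

## Three hypothetical studies with conflicting effects (made-up numbers):
d <- c(0.90, 0.10, -0.40)   # study effect sizes (Cohen's d)
v <- c(0.04, 0.05, 0.06)    # sampling variances
w <- 1 / v                  # fixed-effect inverse-variance weights
sum(w * d) / sum(w)         # pooled d of about 0.29 - one friendly
                            # number that matches none of the studies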
 
Serious question - has anyone on this board actually read a physical copy of a journal (not counting an article you printed yourself) in the last oh... 3 years or so?

I do skim through the teaching journal that I still get in print on the walk from my mailbox to my office, but that's about it. And I totally agree that page limits are stupid--cutting down the rigor of the methods to fit into the page limits is just ridiculous.

And I also agree it's valuable to be skeptical about meta-analyses... and I especially appreciate that statement coming from YOU, since you're more stats-savvy than I am. :)
 
I guess I have unrealistically high standards for systematic reviews and metas, because I would harshly criticize any that didn't include a thorough assessment of study quality as a major component, and any meta that didn't involve an actual systematic review process. That said, I think a lot of people grossly underestimate how incredibly time-consuming it is to do a truly good, thorough systematic review right.
 