The IEEE sent me their annual report for 2022. I was wondering how they were responding to the poor quality of many of their published papers, including fabricated papers and the resulting retractions. Let’s take a quick look.
This post won’t help you directly with your CSP work or your signal-processing education, but I do hope it might help you indirectly. It might help by illustrating that peer review is broken, which means published technical papers should be viewed with extreme suspicion (including mine, of course), and that the gold-rush mentality that infects so many ML and AI researchers feeds a growing boldness on the part of grifters and fraudsters.
You can download either version of the IEEE 2022 Final Report from the CSP Blog here or here. I couldn’t find any mention of the status of peer review in the reports. They don’t contain the word ‘peer’, and the instances of ‘review’ all relate to the financial-condition sections of the report.
But peer review is a big part of the IEEE, or at least I assume it is. According to Google (I’m not going to ask ChatGPT), about 25,000 documents are added to IEEE Xplore each month. I presume the bulk of those are conference and journal papers; a few are standards documents and the like.
And there are known problems with the IEEE peer-review process, problems well beyond those I document here at the CSP Blog. An interesting website for scientifically minded or scholarly people is Retraction Watch. They catalog retracted papers across all kinds of scholarly disciplines, explain the reasons for the retractions, and sometimes describe the methods by which journals are pressured into issuing them. They say the IEEE is a major offender.
For example, here is a relevant Retraction Watch item about IEEE papers in 2022 (the year covered by the aforementioned IEEE Annual Report):
So I wonder why the IEEE does not see fit to report on the sorry state of peer review in its Annual Report. I also wonder why they didn’t at least include some high-level statistics on peer review as yet another way to promote themselves: total number of papers reviewed, number of reviewers, number of countries of origin of the reviewers, acceptance rates, etc. Instead I see nothing; just look the other way and whistle past the graveyard. Everything’s fine.
There are people who think peer review is irredeemable and say good riddance to it. One is Adam Mastroianni. He makes some interesting arguments about the failures of peer review and thinks science is better off without it. His main point, I think, is that science has not benefitted from peer review. We see a modern consensus that science has stagnated (although people disagree about why), and plenty of opinions that at least as much good science was done before peer review as after. Peer review is expensive in time spent, and it is difficult labor when done honestly and conscientiously. So why even use it?
My problem with just letting peer review die is students. Won’t someone please think of the children! Heh. I just can’t countenance letting up-and-coming graduate students fend for themselves, trying to find the good stuff hidden in a much larger set of crap. Adam’s argument is something like “the truth will win out in the end” and “They do triumph eventually.” Eventually. Just tough luck for the people in the here and now, very far from eventually, whenever that is, who have to wade through the current muck, and all the muck we’ve allowed to accumulate over the years as peer review degraded.
And I’m not convinced by the pre- versus post-peer-review argument. We couldn’t do a controlled experiment: we did some science pre, and we did different science, with different people, in a different world, post. Does that mean peer review is a failure? Or might it mean that, without peer review, the post-peer-review era would have been worse?