
There's a group of authors who are incredibly productive: they have been submitting papers to a journal I'm handling at a rate of a little more than one paper a week for over four months.

This rate seems impossibly high. Should I be concerned, and if so, what should I do? The papers look average - not groundbreaking, but not in the desk reject range either. There's also no indication that the papers are fraudulent; they pass plagiarism checks and the reviewer comments (from a diverse group of reviewers) look normal.

  • How big is that group?
  • And what field? Physics?
  • Do they have a track record of publishing at this pace (across all journals)?
  • I suppose you're satisfied that the papers are not AI-generated?
  • Is the work described in the papers something that could plausibly be done in a week by a group of this size? One could also imagine that they have a body of research conducted over a long period and are now writing it up all at once, but with the sustained pace of submissions that seems less likely.

5 Answers


If the group is fairly large and meets frequently, then this might be normal. That would especially be the case if they are working in a fairly new area and laying the groundwork. It is also possible that they've been working quietly for a while and only recently reached the point of getting their findings into print.

Think about the size of the research groups at a place like CERN.

I suggest you treat these as you normally would other papers if you have no evidence of misconduct. If you have better papers to publish then advance those.

You don't report any red flags, especially from reviewers, but you haven't mentioned AI. Use of AI would make a difference if you could establish it.

Another issue you don't raise is "salami slicing", which is generally frowned upon. Would the output be "better" if collected into fewer papers?


Some types of papers can be generated very quickly with AI tools. Data analyses (PCA, MDS, FA, and calculation of various indices and metrics for example) and simple modelling exercises (GLM, GAM, RF, XGB, ridge regression, Bayesian hierarchical modelling, basic DL and NNs, etc.) that used to take weeks can be done and written up in a day, with reproducible figures and code repositories to back them up. AI tools can also greatly speed up literature reviews that used to take months. And they can speed up structuring of manuscripts and discussions, even if they aren't abused to actually write the papers.
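As a concrete illustration of how little code one of these "quick win" analyses takes, here is a minimal PCA sketch in plain NumPy. The dataset and all numbers are purely illustrative (synthetic toy data), not from any real study:

```python
import numpy as np

# Toy dataset: 100 samples, 5 features, with one deliberately correlated pair
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 1] += 2 * X[:, 0]  # introduce correlation so one component dominates

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Fraction of variance explained by each principal component
explained = S**2 / np.sum(S**2)
print(explained.round(3))
```

From here it is a short step to publication-style figures and a pushed code repository, which is precisely why this class of paper can now be turned around so quickly.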

This has been developing rapidly over the past few years -- the methods have been accessible to expert users for a few years now, but even in just the last month, new tools have been released that make them accessible to everyone able to pay a small subscription fee.

Personally, I usually only write one first-authored paper a year, but honestly if I were targeting the low-hanging data analysis and modelling fruit with the tools available now, and working hard at it, I'd be able to pump out one mid-tier journal paper a week now, too.

If it's not this type of paper, some other fields have always had ways for those with resources to pump out low impact, quick win papers: a paper each for running the same test on several different substances in materials science, a paper each for running the same quick lab study on several low-cost species in biology, a quick paper for each small n social psychology experiment.

  • AI is only telling us what we already knew, which is that "average" papers are worthless and only "breakthrough" papers are meaningful. Now if only methods for evaluating researchers would catch up... (Well, R1 hiring in theoretical mathematics in the US caught onto this a long time ago: researchers are mostly evaluated on their best one or two papers, not on quantity.)
  • @AlexanderWoo In the UK it is mostly bean-counting outputs that are "three or four stars" (implying REF potential). However, most panels do not have sufficiently broad and deep expertise to evaluate quality, and the decision collapses to simple bean-counting, sometimes using journal IF as a proxy.

There was a group I knew in a particular field of computational mathematics that followed the principle of submitting absolutely no papers for a year and only working on problems, then spending the next year working on no new problems and only writing up and submitting. Although it is an extreme case, I wouldn't say such a pace is impossible.


If they are from reputable institutions, contact the institutions' ethics/compliance offices; it is up to them to investigate. Of course, this only works if the institutions care about their academic reputations and about the damage done to them by having their people participate in paper mills.

Note that a high rate of output is not automatically fraudulent. I recently heard of a case at my institution where an academic was publishing at a seemingly ridiculous rate. They were investigated, and it turned out to be one of those rare cases where the output is legitimate: there was ample documentation demonstrating that they and their team really were that productive, pumping out huge volumes.


A main factor for paper count is simply group size. Assume a group of 50 PhD students who take 3 years on average to graduate. If each writes 3 papers on average, which I would consider realistic, the group produces about 50 papers a year, i.e. roughly one paper per week. More senior group members will quite often be authors on many or most of those papers.
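The back-of-envelope arithmetic above can be checked in a few lines; the input values are the answer's own assumptions, not data about the group in question:

```python
# Assumptions taken from the answer itself
students = 50     # group size
papers_each = 3   # papers per student over their PhD
years = 3         # average time to graduate

# Each student contributes papers_each / years papers per year
papers_per_year = students * papers_each / years
papers_per_week = papers_per_year / 52

print(papers_per_year, round(papers_per_week, 2))  # → 50.0 0.96
```

So even with modest per-student output, a large group lands at almost exactly the one-paper-a-week rate the question describes.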

I would consider it unusual for a group to submit such a large fraction of its manuscripts to a single journal, but people make apparently strange decisions all the time, sometimes for good reasons.
