OPENCAFE-L Archives

Subject:
From: Jean-Claude Guédon <[log in to unmask]>
Reply To: Jean-Claude Guédon <[log in to unmask]>
Date: Sun, 25 Feb 2024 13:45:18 -0500
Thank you, Collin, for this. The following document, although a little 
old (2013) and more of a blog post than a refereed article, contains 
interesting ideas, and its author, Jeroen Bosman, is one of the authors 
of the diamond journal survey mentioned by Glenn Hampson on this list: 
https://im2punt0.wordpress.com/2013/11/03/nine-reasons-why-impact-factors-fail-and-using-them-may-harm-science/ 
. Further arguments can also be found here: 
file:///home/jc/Downloads/f1000research-247421.pdf . Bjoern Brembs has 
also published on the relationship between JIF and the reliability 
of results (doi: 10.3389/fnhum.2018.00037 and 
doi: 10.1371/journal.pbio.3000117).

Anecdotally, I can recount a form of behaviour that truly astonished 
(and deeply amused) me: a granting agency that will remain unnamed 
briefed its evaluation juries by specifically underscoring that the 
objective was to evaluate quality, not to rank the submissions. One 
member of the jury stood up and, rather excitedly, exclaimed: "How can 
I evaluate quality if I am not allowed to rank?"

This mindset is very deeply rooted in our communities. Its origins are 
old, of course, and rest on competition. Competition between 
scientists has long been with us, but it largely turned on issues of 
precedence (Darwin vs. Wallace, Crick and Watson vs. Pauling) or of 
substance (Einstein). Resolution would emerge out of the ensuing 
verification, debate and criticism. In effect, this process is a form 
of post-publication, open peer review, and it is central to the 
production of knowledge. It still goes on, as it should: science is an 
ever-evolving attempt to interpret reality that will never reach total 
certainty. That is what makes science so exciting.

Competition, this said, was enormously intensified when commercial 
publishers, after WWII, found a way to align the commercial competition 
between journals with scientific and scholarly competition proper. 
Where you publish could become more important than what you publish. 
The JIF, of course, became a keystone of this new structure. In fact, 
until recently, the JIF was published with three meaningless decimals, 
presumably, as Garfield lamely argued, to avoid having two journals 
with an identical JIF!
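
(For anyone who has not looked at the arithmetic recently, a sketch 
with hypothetical numbers: a journal's JIF for year Y is the count of 
year-Y citations to its items from years Y-1 and Y-2, divided by the 
number of citable items it published in those two years, e.g. 3220 
citations / 986 items = 3.266. The third decimal reflects nothing but 
the accidental size of the denominator, not any difference in quality.)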

The result of all this is exactly what the amusing jury member I 
referred to above expressed: a complete confusion between the ranking 
and the quality of research. The effect on everyone is enormous, but it 
also helps construct the journal market and the struggle for market 
share. It reaches from individuals to whole countries.

What all that does to editors, peer review, etc., I leave to my 
readers' imagination: the zone where commercial publishing interests 
and scientific intellectual interests intersect remains very opaque. 
Once in a while, we hear of an editorial board resigning because, on 
top of everything else, the publisher has made sure to own the 
journal's title. We also hear about increasingly inappropriate 
scientific behaviour - the recent case of the president of Stanford is 
still very present in our minds. Retraction Watch feeds us news that 
keeps our blood pressure up. In short, signals potentially pointing to 
systemic dysfunctions in the research ecosystem are intensifying, and 
this is worrisome, very worrisome. At least, it worries me.

Jean-Claude

On 2024-02-24 10:41, Collin Alexander Drummond wrote:
> On Sat, 24 Feb 2024 06:27:18 -0500, Jean-Claude Guédon <[log in to unmask]> wrote:
>> The financing issue is real. However, as has already been pointed out by
>> several people on this forum, if funding agencies and libraries (where
>> the money largely resides) looked at the situation lucidly, they would
>> finance diamond journals rather than pay APCs. If researchers complain
>> because they want to publish in high IF journals (prestige and
>> visibility seeking), they should be told that funding agencies and
>> libraries are interested in quality knowledge, not prestige or even
>> visibility.
> I really like this idea, but it has been a challenge for me in practice. A lot of our faculty (at least in our Cancer Center, which I'm most familiar with) are convinced that the NIH will only care about articles published in high-IF (>10) journals, so from their perspective, publishing in a low-IF or no-IF journal would lower their chances on the next grant application. I know the NIH considers a lot more than just IF, but I haven't seen any research about the impact of IF on grant applications. And the number of high-IF, diamond, cancer journals is pretty small.
>
> Does anyone have good evidence that IF *isn't* a major factor in grant funding? If so, I would love to hear about it so I can make a stronger case to our faculty for publishing in diamond journals.
>
> Collin

########################################################################

Access the OPENCAFE-L Home Page and Archives:
https://listserv.byu.edu/cgi-bin/wa?A0=OPENCAFE-L

To unsubscribe from OPENCAFE-L send an email to:
[log in to unmask]

########################################################################
