OPENCAFE-L Archives

OpenCafe-l

OPENCAFE-L@LISTSERV.BYU.EDU

Subject: Re: [OPENCAFE-L] European Policy Shifts
From: Glenn Hampson <[log in to unmask]>
Reply-To: Glenn Hampson <[log in to unmask]>
Date: Sun, 25 Feb 2024 21:48:01 +0000

Piling on Collin...

You ask a great question ("Does anyone have good evidence that IF *isn't* a major factor in grant funding?"). Unfortunately (at least as far as I’m aware---and maybe someone with more knowledge will be able to jump in here), I don’t think there’s a great answer. 

For one, there is HUGE diversity in this space (as I'm sure you already know), shaped by factors like field of study, study type, award size, granting agency, institution, publishing norms, and the career stage of the researchers, so it's hard to draw blanket conclusions about anything.
For example, surveys have consistently shown that publishing in high impact factor journals is very important to most early career researchers. Lutz Bornmann and Richard Williams coauthored a nice 2017 study (https://arxiv.org/ftp/arxiv/papers/1706/1706.06515.pdf) looking at one aspect of this relationship (and advising granting agencies to “not rely solely on early JIFs…when rewarding work and allocating resources”). At the same time, some institutions, like Harvard and MIT, and some countries, like the UK (through the REF), make a more conscious effort to downplay reliance on JIFs (and other journal-level metrics like CiteScore, SNIP, SJR, etc.) in research evaluations. So, while you can probably prove your hypothesis for certain specific audiences (e.g., that impact factors aren't a major factor in RPT at MIT if you're a senior physics researcher), the data may be too diverse to make the same case for all researchers everywhere.

IMHO, you might be better off finding out where researchers in your field currently publish and how those publishing stats have been received by the funding agency you're approaching (maybe work backward from the CVs of successful grant applicants in your field?). Maybe someone in your field has already done all this heavy lifting? Or maybe there are tools out there that help researchers with this kind of work? What do others think here?
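
If it helps, here is a very rough sketch of the kind of tally I have in mind, using the OpenAlex API from Python (treat the endpoint, filter names, and the author ID below as assumptions to double-check against the OpenAlex documentation; the ID is just a placeholder, not a real researcher):

    import requests  # pip install requests

    # Rough sketch (untested): tally the venues where one author has published
    # since 2019, using OpenAlex's works endpoint grouped by venue.
    AUTHOR_ID = "A0000000000"  # placeholder; look up real IDs at https://api.openalex.org/authors

    resp = requests.get(
        "https://api.openalex.org/works",
        params={
            "filter": f"authorships.author.id:{AUTHOR_ID},from_publication_date:2019-01-01",
            "group_by": "primary_location.source.id",
        },
        timeout=30,
    )
    resp.raise_for_status()

    # Each group is one venue: key_display_name is the journal name, count is
    # how many of the author's works appeared there.
    for group in resp.json().get("group_by", []):
        print(f'{group["count"]:4d}  {group["key_display_name"]}')

Run something like this for a handful of recently funded PIs in your field and you get a quick picture of where the "successful" work is actually landing, which you could then compare against the journals your faculty feel pressured to target.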

The bigger picture about impact is muddier (see https://bit.ly/3IdCuv1 and https://bit.ly/42PwjH5, for example). There's been a lot of chatter for years about moving away from JIFs, of course, but not a lot of actual moving away yet, because the whole idea of measuring impact is so ingrained in modern science. There are many initiatives (at the national, funder, and institutional levels) to take a more holistic look at impact (including more inputs, rewarding qualities like reusability, and downplaying traditional markers like publication in high-impact journals), but big, systemic change, if it happens at all (and we all hope it will), will probably only happen slowly and in one small space at a time.

Good luck with this---hopefully someone else can give you more useful advice 😊

Best,

Glenn

-----Original Message-----
From: OpenCafe-l <[log in to unmask]> On Behalf Of Jean-Claude Guédon
Sent: Sunday, February 25, 2024 10:45 AM
To: [log in to unmask]
Subject: Re: [OPENCAFE-L] European Policy Shifts

Thank you, Collin, for this. The following document, although a little old (2013) and more of a blog than a refereed article, does contain interesting ideas and its author, Jeroen Bosman, is one of the authors of the diamond journal survey mentioned by Glenn Hampson on this list: 
https://im2punt0.wordpress.com/2013/11/03/nine-reasons-why-impact-factors-fail-and-using-them-may-harm-science/
. Further arguments can also be found here: 
file:///home/jc/Downloads/f1000research-247421.pdf . Bjoern Brembs has also published on the relationship between JIF and the reliability of results (doi:10.3389/fnhum.2018.00037 and doi:10.1371/journal.pbio.3000117).

Anecdotally, I can recount a form of behaviour that truly astonished (and deeply amused) me: a granting agency that will remain unnamed briefed its evaluation juries by specifically underscoring that the objective was to evaluate quality, not to rank the submissions. One member of the jury stood up and, rather excitedly, exclaimed: "How can I evaluate quality if I am not allowed to rank?"

This mindset is very deeply rooted in our communities. Its origins are old, of course, and are based on competition: competition between scientists has long been with us, but it largely rested on issues of precedence (Darwin vs. Wallace, Crick and Watson vs. Pauling) or on substance (Einstein). Resolution would emerge out of the ensuing verification, debates, and criticisms. In effect, this process is a form of post-publication and open peer review, and it is central to the process of producing knowledge. It still goes on, as it should: science is an ever-evolving attempt to interpret reality that will never reach the level of total certainty. That is what makes science so exciting.

Competition, that said, was enormously intensified when commercial publishers, after WWII, found a way to align the commercial competition of journals with scientific and scholarly competition proper. Where you publish could become more important than what you publish. The JIF, of course, became a keystone of this new structure. In fact, until recently, the JIF was published with three meaningless decimals, presumably, as Garfield lamely argued, to avoid having two journals with an identical JIF!

The result of all this is exactly what the funny jury member I referred to above stated: a complete confusion between rankings and the quality of research. The effects on all of us are enormous, and the confusion also helps construct the market for journals and the struggle for market share. It reaches from individuals to whole countries.

What all that does to editors, peer review, etc., I leave to my readers' imagination: the zone where commercial publishing interests and scientific intellectual interests intersect remains very opaque. Once in a while, we hear of an editorial board resigning because, on top of the rest, the publishers have made sure to own the journal's title. We also hear about increasingly inappropriate scientific behaviour - the recent case of the president of Stanford is still very present in our minds. "Retraction Watch" feeds us news that keeps our blood pressure up. In short, signals potentially pointing to systemic dysfunctions in the research ecosystem are intensifying, and this is worrisome, very worrisome. At least, it worries me.

Jean-Claude

On 2024-02-24 10:41, Collin Alexander Drummond wrote:
> On Sat, 24 Feb 2024 06:27:18 -0500, Jean-Claude Guédon <[log in to unmask]> wrote:
>> The financing issue is real. However, as has already been pointed out 
>> by several people on this forum, if funding agencies and libraries 
>> (where the money largely resides) looked at the situation lucidly, 
>> they would finance diamond journals rather than pay APCs. If 
>> researchers complain because they want to publish in high IF journals 
>> (prestige and visibility seeking), they should be told that funding 
>> agencies and libraries are interested in quality knowledge, not 
>> prestige or even visibility.
> I really like this idea, but it has been a challenge for me in practice. A lot of our faculty (at least in our Cancer Center, which I'm most familiar with) are convinced that the NIH will only care about articles published in high-IF (>10) journals, so from their perspective, publishing in a low-IF or no-IF journal would lower their chances on the next grant application. I know the NIH considers a lot more than just IF, but I haven't seen any research about the impact of IF on grant applications. And the number of high-IF, diamond, cancer journals is pretty small.
>
> Does anyone have good evidence that IF *isn't* a major factor in grant funding? If so, I would love to hear about it so I can make a stronger case to our faculty for publishing in diamond journals.
>
> Collin
>

########################################################################

Access the OPENCAFE-L Home Page and Archives:
https://listserv.byu.edu/cgi-bin/wa?A0=OPENCAFE-L

To unsubscribe from OPENCAFE-L send an email to:
[log in to unmask]

########################################################################
