OPENCAFE-L Archives
OPENCAFE-L@LISTSERV.BYU.EDU
Re: On behalf of Christine Borgman (FW: How to assess science scientifically? was: 'one shot'...)
"Walz, Anita" <[log in to unmask]>
Mon, 11 Mar 2024 13:19:40 +0000
Hi Mario,

There is an entire field of research impact studies. See Virginia Tech's guides on this topic, and the services VT offers, here: https://guides.lib.vt.edu/research_impact_intelligence/impact

Anita


Anita R. Walz
Associate Professor
Assistant Director for Open Education and Scholarly Communication Librarian
Library Liaison to Economics and Legal Studies
Virginia Tech | University Libraries (0434)
560 Drillfield Drive
Blacksburg, VA 24061
[log in to unmask] | T: 540-231-2204 | Twitter: @arwalz
http://www.lib.vt.edu
Open Educational Resources Guide http://guides.lib.vt.edu/oer

Strengths<https://experience.vt.edu/strengths.html>: Learner | Connectedness | Individualization | Achiever | Responsibility

________________________________
From: OpenCafe-l <[log in to unmask]> on behalf of Biagioli, Mario <[log in to unmask]>
Sent: Wednesday, March 6, 2024 12:47 AM
To: [log in to unmask] <[log in to unmask]>
Subject: Re: [OPENCAFE-L] On behalf of Christine Borgman (FW: How to assess science scientifically? was: 'one shot'...)


Hello there,

Does anybody know of some good literature on ‘impact’? It seems to me that impact was introduced to bypass qualitative and arguably biased judgment, but in the end it does not seem any more transparent than ‘quality’ or ‘excellence’, or whatever.



I’m toying with the idea of writing an article or even a super-short book on impact today (and perhaps in the near future), and would rather not reinvent the wheel, which is not much of a contribution.

MB





From: OpenCafe-l <[log in to unmask]> on behalf of Rick Anderson <[log in to unmask]>
Date: Thursday, February 8, 2024 at 10:06 AM
To: [log in to unmask] <[log in to unmask]>
Subject: [OPENCAFE-L] On behalf of Christine Borgman (FW: How to assess science scientifically? was: 'one shot'...)


Listers –



Due to the vagaries of her local email system, Christine Borgman’s message below was rejected by the list platform. I’m forwarding it at her request.



---

Rick Anderson

University Librarian

Brigham Young University

(801) 422-4301

[log in to unmask]





From: CHRISTINE L BORGMAN <[log in to unmask]>
Date: Thursday, February 8, 2024 at 9:57 AM
To: "[log in to unmask]" <[log in to unmask]>
Cc: Rick Anderson <[log in to unmask]>
Subject: How to assess science scientifically? was: 'one shot'...



Thanks to all for a most interesting discussion and history of peer review!



I’m attempting to start a new thread to build upon the ‘one shot’ scholarly communication thread:



Let's take a few steps earlier in the cycle of scholarly inquiry and ask how to assess the quality of research proposals and projects. Put simply, how do we evaluate scientific missions scientifically?



Only the projects that are successful lead to the science that leads to the journal articles that are subject to peer review, hence our interest in tracing further back in the process.



With colleagues in astronomy, we are studying the processes by which major observatory proposals (on the scale of Keck, Hubble, JWST, etc.) are evaluated at the initial stage of competition, at interim reviews, and at continuing reviews for further funding or cancellation. We are finding little written about these kinds of peer reviews, which too often devolve into simple citation metrics and opaque expert judgments. Some of the evaluation reports are proprietary due to concerns for intellectual property, intelligence issues, and so on. The process is far from transparent.



The literature review by Mayernik et al. (below) is among the few to address these questions.



All thoughts (and references) on how to apply peer review mechanisms to ‘big science’ appreciated.



Mayernik, M. S., Hart, D. L., Maull, K. E., & Weber, N. M. (2017). Assessing and tracing the outcomes and impact of research infrastructures. Journal of the Association for Information Science and Technology, 68(6), 1341–1359. https://doi.org/10.1002/asi.23721

Christine



On Feb 8, 2024, at 09:29, Rick Anderson <[log in to unmask]> wrote:



That is helpful, Glenn, thanks.



For me, the issue isn’t so much whether we should use the term “gold standard” to characterize peer review – I don’t care much how we characterize it. I do care whether we understand what it does and whether it’s effective for its intended purpose.



I’ll stop posting on this thread now so as to leave more room for other voices.



Rick



---

Rick Anderson

University Librarian

Brigham Young University

(801) 422-4301

[log in to unmask]





From: Glenn Hampson <[log in to unmask]>
Date: Thursday, February 8, 2024 at 9:25 AM
To: Rick Anderson <[log in to unmask]>, "[log in to unmask]" <[log in to unmask]>
Cc: "[log in to unmask]" <[log in to unmask]>
Subject: RE: [OPENCAFE-L] The 'one shot' scholarly communication talk



I’m out of my depth here, Rick, and will defer to others here (or not here, yet) who are peer review experts---the esteemed Mark Ware comes to mind.



But to take a stab at the answer anyway: I think peer review might best be described as part of a process that weeds out papers (for various reasons, good and bad), at which point they go back into the submission pipeline, with or without correction, and most eventually get published elsewhere. At the same time, this process often provides constructive feedback that can improve papers, though it cannot guarantee they are factually correct or otherwise free from substantive defect.



The surveys cited by Ware in his 2008 paper (Ware, M. 2008. “Peer Review: Benefits, Perceptions and Alternatives.” PRC Summary Papers, 4:4-20. Google Scholar<https://scholar.google.com/scholar_lookup?journal=PRC+Summary+Papers&title=Peer+Review:+Benefits,+Perceptions+and+Alternatives.&author=M.+Ware&volume=4&publication_year=2008&pages=4-20&>) show an average rejection rate of about 50 percent---20% desk rejections and 30% rejections through peer review. Of the 50% accepted, most (around 80 percent of this total) are accepted on the condition that they be revised. Again citing Ware (and similar stats show up elsewhere), most academics say they are satisfied with this system and believe it helps improve their work.
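
To make those percentages concrete, here is a quick back-of-the-envelope sketch in Python (the 1,000-paper cohort is hypothetical; the splits are just the rough figures above):

# Back-of-the-envelope sketch of the submission funnel implied by
# Ware's 2008 figures; the 1,000-paper cohort is hypothetical.
cohort = 1000
desk_rejected = round(cohort * 0.20)        # rejected before review
review_rejected = round(cohort * 0.30)      # rejected through peer review
accepted = cohort - desk_rejected - review_rejected   # the remaining ~50%
accept_with_revisions = round(accepted * 0.80)        # the "80-ish percent"
accept_as_is = accepted - accept_with_revisions

print(desk_rejected, review_rejected, accept_with_revisions, accept_as_is)
# -> 200 300 400 100: only ~10% of submissions clear review unchanged.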



So---I think the questions we’re asking are: what is the true function of peer review, and what are the limits of this process? We imbue it with so much authority and ability, but it may deserve neither. It is, more accurately, an editorial review system---not really a “gold standard” of anything. If we can grapple with this reality, we can then work on designing the other review processes we need to address the wide variety of issues peer review cannot effectively address---everything from plagiarism to fake data to bad statistics.



Does this answer your question? (And others with more expertise in this area, please do jump in.)



Best regards,



Glenn



Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)








From: Rick Anderson <[log in to unmask]>
Sent: Thursday, February 8, 2024 8:39 AM
To: Glenn Hampson <[log in to unmask]>; [log in to unmask]
Cc: [log in to unmask]
Subject: Re: [OPENCAFE-L] The 'one shot' scholarly communication talk



Thanks, Glenn. To your knowledge, did any of these studies find a way to evaluate articles that are rejected through peer review?



In other words, one of the key functions of a journal is to reject. The effectiveness of rejection is hugely important, which means that studying the record of published articles necessarily means ignoring a core function of peer review. With apologies for not having the time to read all of these (but sincere appreciation to you for sharing them), can you tell us whether, and if so how, any of these studies might have accounted for that?



Rick



---

Rick Anderson

University Librarian

Brigham Young University

(801) 422-4301

[log in to unmask]





From: Glenn Hampson <[log in to unmask]>
Date: Thursday, February 8, 2024 at 7:52 AM
To: Rick Anderson <[log in to unmask]>, "[log in to unmask]" <[log in to unmask]>
Cc: "[log in to unmask]" <[log in to unmask]>
Subject: RE: [OPENCAFE-L] The 'one shot' scholarly communication talk



Gladly, Rick. Here are the citations from my 2020 BRISPE presentation (some studies, some articles). There are many others before and since---this is just a sample:



• Kelly, J., T. Sadeghieh, and K. Adeli. 2014. Peer Review in Scientific Publications: Benefits, Critiques, & A Survival Guide<https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4975196/>. EJIFCC 25(3):227-43. PMID: 27683470; PMCID: PMC4975196.

• Willis, Michael. 2020. Peer review quality in the era of COVID-19. https://www.wiley.com/en-us/network/publishing/research-publishing/trending-stories/peer-review-quality-in-the-era-of-covid-19

• Smith, Richard. 2006. Peer review: a flawed process at the heart of science and journals<https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/>. Journal of the Royal Society of Medicine 99(4):178-82. doi:10.1258/jrsm.99.4.178

• Horbach, S.P.J.M., and W. Halffman. 2019. The ability of different peer review procedures to flag problematic publications. Scientometrics 118:339-373.

• Tennant, J.P., and T. Ross-Hellauer. 2020. The limitations to our understanding of peer review<https://researchintegrityjournal.biomedcentral.com/articles/10.1186/s41073-020-00092-1>. Res Integr Peer Rev 5, 6. doi:10.1186/s41073-020-00092-1

• Open Scholarship Initiative. 2016. Report from the OSI2016 Peer Review Workgroup<https://journals.gmu.edu/index.php/osi/article/view/1385/1173>. doi:10.13021/G8K88P

  “Peer review is the worst form of evaluation except all those other forms that have been tried from time to time” (with apologies to Winston Churchill).



From: Rick Anderson <[log in to unmask]>
Sent: Thursday, February 8, 2024 7:29 AM
To: Glenn Hampson <[log in to unmask]>; [log in to unmask]
Cc: [log in to unmask]
Subject: Re: [OPENCAFE-L] The 'one shot' scholarly communication talk



> but many studies have complained over the years that the evidence is unclear whether peer

> review actually improves research (beyond making articles more readable).



Glenn, could you share links to a few of these?



---

Rick Anderson

University Librarian

Brigham Young University

(801) 422-4301

[log in to unmask]





From: <[log in to unmask]> on behalf of Glenn Hampson <[log in to unmask]>
Date: Thursday, February 8, 2024 at 7:27 AM
To: "[log in to unmask]" <[log in to unmask]>
Cc: "[log in to unmask]" <[log in to unmask]>
Subject: RE: [OPENCAFE-L] The 'one shot' scholarly communication talk



Wow. Living on the West Coast of the US can be rough. By the time your day gets started, listserv conversations can be almost over! If I may, there are a couple of issues here that I see differently from my esteemed colleagues.



First, to this whole notion introduced by my friends Rick and Lisa that peer review is highly effective at weeding out garbage and allowing good scholarship to get published: this is certainly true of the editorial process in general (like the desk rejection process), but it isn’t true of peer review. The peer review process is highly regarded by researchers, seen as a signal of quality (see https://bit.ly/3otwKRs), and highly valued by funders and institutions, but many studies have noted over the years that it is unclear whether peer review actually improves research (beyond making articles more readable).



This process also varies by journal (see note below) and is highly subject to bias, as Daniel mentions---by idea, gender, nationality, etc.



Here’s a link to a presentation I gave a few years ago on this topic. There’s too much detail to bore you with in a listserv email, but the presentation includes references if you want to dig deeper: BRISPE-presentation-final-Hampson.pdf (osiglobal.org)<https://osiglobal.org/wp-content/uploads/2021/04/BRISPE-presentation-final-Hampson.pdf>. In particular, I suggest you read Melinda Baldwin’s great paper on the history of peer review (Baldwin, Melinda. 2018. Scientific Autonomy, Public Accountability, and the Rise of “Peer Review” in the Cold War United States. Isis 109(3)). The peer review system we use today is essentially a byproduct of US Congressional oversight in the mid-1970s; it took decades thereafter for this process to become widely used throughout the world.



So what do you tell your students in your one-shot, Melissa? I don’t know. Maybe that peer review is a quality-control process we invented to help “monitor” science, and that it has since become an institution unto itself, with a mythology larger than its actual value to science?



Regarding Pooja’s story about gatekeeping, I know this might not make your colleague feel better, but most papers are rejected at least once, for any number of reasons (as Jean-Claude explained, like a bad fit with the journal’s focus). Across all kinds of journals, the average rejection rate is a whopping 60-65% (https://doi.org/10.3145/epi.2019.jul.07). Individual rates vary widely by journal, ranging from 0% to 90% or higher. About 20% of papers get rejected before peer review for being out of scope, among other reasons (see https://bit.ly/2YnYoVv). All this said, most papers eventually get published somewhere: two-thirds of preprints posted before 2017 were published in peer-reviewed journals within 12-18 months (see https://doi.org/10.7554/eLife.45133). Also, if your colleague is submitting to 65 different journals, they might be casting a net that is too wide and too unfocused, which is probably not the best approach.
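
As a rough illustration of why most papers do land somewhere eventually: if you treat each submission (unrealistically) as an independent draw at the average rates above, the arithmetic in Python looks like this:

# Rough sketch only: assumes every submission is an independent draw
# with the same acceptance probability, which real submissions are not.
for reject_rate in (0.60, 0.65):            # average rates cited above
    p_accept = 1 - reject_rate
    mean_tries = 1 / p_accept               # mean of a geometric distribution
    p_by_third = 1 - reject_rate ** 3       # accepted somewhere within 3 tries
    print(f"reject {reject_rate:.0%}: ~{mean_tries:.1f} submissions on average; "
          f"{p_by_third:.0%} placed within three tries")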



And finally, to Toby and Danny’s big-picture thinking, here’s an infographic OSI created a few years ago to show how review and publishing fit into (and feed into) the full idea lifecycle: OSI-Infographic-1.0: The Idea Lifecycle (osiglobal.org)<https://osiglobal.org/wp-content/uploads/2021/02/OSI-Infographic-1.0-1.pdf>. There’s a lot more to research than just publishing, obviously, but to Jean-Claude’s point, publishing still plays a critical role (and always has throughout the history of science). What form it takes in the future is where so much of the attention and effort in the OA reform space has been directed.



Best regards,



Glenn





Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)






Note: Generally speaking, specialty and prestige journals provide high-quality peer review; even some preprint servers are experimenting with new forms of peer review. Regional journals don’t always provide the kind of peer review required by specialty journals; peer review quality there varies widely.







From: OpenCafe-l <[log in to unmask]> On Behalf Of Daniel Kulp
Sent: Thursday, February 8, 2024 6:32 AM
To: [log in to unmask]
Subject: Re: [OPENCAFE-L] The 'one shot' scholarly communication talk



At the end of the day, peer review is run by people (editors, reviewers, etc.), and people are susceptible to bias. Is peer review perfect? No, it’s not. But it is likely the best we have at the moment. I certainly support experiments in the publishing industry, but I have yet to see a process that is consistently better and able to be applied at scale. That is how I would frame peer review to students.



Daniel Kulp, PhD

Founder, PIE Consulting
publicationintegrity.com







On Feb 8, 2024, at 5:21 AM, Jean-Claude Guédon <[log in to unmask]> wrote:



My own take on peer review is that it is part of a larger process, which I have sometimes described as the "Great Conversation", that stands behind the production of knowledge. Voltaire says somewhere - and I am paraphrasing - that it is difficult to live without certainty, but that believing certainty exists is ridiculous. Knowledge production works exactly at that level, and that is where it differs from belief or conviction. Peer review is part of the process one can use to allow the best forms of human thinking to percolate to the surface and become reference points for the further evolution of knowledge. Knowledge can only claim reliability, not certainty.

Rick is right when he says that, when executed competently and honestly, peer review is highly effective. The main problem is that parts of the process can remain quite opaque. For example, that "desk rejection" Rick mentions generally involves one person only. That person - the editor - may have two divergent objectives in his/her mind: on the one hand, his/her notion of quality, and, on the other, the effect of the article on the general standing of the journal, especially in a tightly controlled competition system such as the impact-factor driven mechanism.

Imagine yourself in the following situation: you have room (i.e. resources) to publish one article. You have two submissions. One article is on a hot topic but its quality is ho-hum. The other one appears stellar but on a topic that is more marginal in the present development of knowledge (perhaps it is not yet well understood, or whatever). Which principle will be used at the desk rejection level? The first article is bound to improve your impact factor; the second article may lower your impact factor. This is because the relationship of the impact factor to quality is both tenuous and ambiguous.

More generally, how is peer review affected by the fact that scientific articles and journals are two different kinds of objects, yet have been entangled with each other since the advent of print? And this leads to another question: in a digital world, do journals still matter, and, if so, how? How do journals relate to platforms? To communities? Etc.

Jean-Claude Guédon

On 2024-02-07 23:23, Rick Anderson wrote:

Here’s how I explain peer review to students:



When an author submits her paper to a peer-reviewed journal, the journal’s editor gives it a first look. If it doesn’t appear to be up to scratch (which can mean any number of things: obviously poor methodology, illegibility, irrelevance, etc.) then the editor rejects it -- we call this “desk rejection.” If it looks like it has promise, the editor sends it out to one or more (usually at least two) reviewers. They’re called “peer” reviewers because they work in the same field as the author, or a closely adjacent one, so they’re in a good position to evaluate the scholarship. The reviewers are asked to read the paper more closely and evaluate it on its scholarly merits: is its methodology sound; do the conclusions proceed from the data; is it well organized and cogently written; do the cited works actually support the arguments in support of which they’re cited; etc. The reviewers submit reviews with recommendations as to whether the article should be rejected, or returned for revision, or published as is. This process may involve two or three rounds before the paper is finally published or rejected.
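
For readers who think in code, the decision flow just described can be caricatured in a few lines of Python. This is purely illustrative: the majority-vote rule and the names are invented, and real editorial judgment is not mechanical.

# Purely illustrative caricature of the flow described above; the
# majority-vote rule and names are invented, not any journal's system.
def triage(passes_first_look: bool, recommendations: list[str]) -> str:
    """Each recommendation is one of 'reject', 'revise', or 'accept'."""
    if not passes_first_look:
        return "desk rejection"             # editor's first look fails
    if recommendations.count("reject") > len(recommendations) / 2:
        return "rejected after review"
    if "revise" in recommendations:
        return "returned for revision"      # may loop for two or three rounds
    return "accepted"

print(triage(True, ["accept", "revise"]))   # -> returned for revision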



It's by no means a fail-safe system, but when executed competently and honestly, it’s highly effective at weeding out garbage and allowing good scholarship to get published. Unfortunately, the competence and honesty of journals is highly variable. Sometimes they get into bed with corporations that want to see certain things published; sometimes editors of different journals collude to require authors to cite each other’s publications; and in recent years there’s been a growing industry of journals that dishonestly claim to carry out peer review when in fact they will publish anything submitted to them as long as the author pays a publication fee. So before submitting to a journal, it’s really important to do your due diligence.



---

Rick Anderson

University Librarian

Brigham Young University

(801) 422-4301

[log in to unmask]





From: OpenCafe-l <[log in to unmask]> on behalf of Danny Kingsley <[log in to unmask]>
Reply-To: Danny Kingsley <[log in to unmask]>
Date: Wednesday, February 7, 2024 at 8:03 PM
To: "[log in to unmask]" <[log in to unmask]>
Subject: [OPENCAFE-L] The 'one shot' scholarly communication talk



Hi everyone,



I’m picking up in a new thread something Melissa noted:



As a librarian, I need to be able to stand in front of a class of freshmen, as I am about to do tonight, to explain what peer review is and why it's the gold standard for what they cite in their papers, and to be able to say it with a straight face without feeling like a liar. For those of you who know what a "one-shot" is, you know we do NOT have time to explain the intricacies of the scholarly publishing industry, its good and bad financial incentives, etc., even if we understand them fully ourselves. We don't even have time to explain all that to graduate students.



This is a really good point for discussion.



How do people approach this type of explanation? I am thinking there is a parallel with the difference between what is written in textbooks and what is happening in the scholarly literature. Textbooks tend to present information as ‘decided’; information published in the literature is the ongoing debate. Textbooks change perspectives and ideas slowly; a paper can get shot down in weeks or months.



So, do we provide the ‘textbook' version to students: “This is how science works, a research team find something out, write it up, send it to a journal, it gets sent to experts in the field, they comment, amendments are made and then it is published”.



Or do we bring in some of the broader picture: “Researchers don’t get paid to publish. Publication is the way researchers gain ‘prestige’ - the better their paper and (more commonly) the place they publish it in, the more it ‘counts’ towards their academic standing. There are systems that count how many papers people have published, where they have published them, and how many other people have subsequently cited their work. These numbers feed into most decision-making in research - whether someone gets a promotion, whether they get a grant, how an institution fares in national ‘research excellence’ exercises, and how universities get ranked.”
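
One concrete example of such a counting system is the h-index (a researcher has index h if h of their papers have each been cited at least h times). A minimal sketch in Python, with invented citation counts:

# Minimal h-index sketch: h papers with at least h citations each.
# The citation counts below are invented, purely for illustration.
def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3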



Or do we lay it down: “The very narrow focus on what constitutes ‘success’ in research has unfortunately resulted in some very poor behaviour…”





I am conscious that when this is new to people it can seem overwhelming. A comment at last year’s AIMOS conference (which consisted of multiple presentations about research on research, uncovering a swathe of issues) was that it was all very depressing and made it hard to believe anything that was published. To be honest, when you read articles like this one, https://www.theguardian.com/science/2024/feb/03/the-situation-has-become-appalling-fake-scientific-papers-push-research-credibility-to-crisis-point (which refers to activity all over the world), you can get depressed.



My response is that it is good we are lifting the lid on this - these are the steps we take towards fixing the problems.



But we want our community to ‘be alert not alarmed’.



How do people approach this discussion in their own institutions?



Danny







Dr Danny Kingsley

Scholarly Communication Consultant
Visiting Fellow, Australian National Centre for the Public Awareness of Science<https://cpas.anu.edu.au/people/dr-danny-kingsley>, ANU

Adjunct Senior Lecturer, Charles Sturt University
Member, Board of Directors, FORCE11<https://force11.org/info/people-at-force11/>
Member, Australian Academy of Science National Committee for Data in Science<https://www.science.org.au/supporting-science/national-committees-science/national-committee-for-data-in-science>
---------------------------------------
e: [log in to unmask]
m: +61 (0)480 115 937
t: @dannykay68

b: @dannykay68.bsky.social
o: 0000-0002-3636-5939















Christine L. Borgman<http://christineborgman.info/>, Distinguished Research Professor, Information Studies

Director, UCLA Center for Knowledge Infrastructures<https://knowledgeinfrastructures.gseis.ucla.edu/>












########################################################################

Access the OPENCAFE-L Home Page and Archives:
https://listserv.byu.edu/cgi-bin/wa?A0=OPENCAFE-L

To unsubscribe from OPENCAFE-L send an email to:
[log in to unmask]

########################################################################
