Name:
Current Trends in Peer Review
Description:
Current Trends in Peer Review
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/9d963a3f-448b-4082-aa63-a5dee10cefe0/thumbnails/9d963a3f-448b-4082-aa63-a5dee10cefe0.png
Duration:
T01H04M25S
Embed URL:
https://stream.cadmore.media/player/9d963a3f-448b-4082-aa63-a5dee10cefe0
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/9d963a3f-448b-4082-aa63-a5dee10cefe0/GMT20220316-150052_Recording_3840x2160.mp4?sv=2019-02-02&sr=c&sig=D4MUvNW3WWtxhvkudE%2Bq2ApjvKB55iYzokHGbJHXZjA%3D&st=2024-11-19T19%3A18%3A57Z&se=2024-11-19T21%3A23%3A57Z&sp=r
Upload Date:
2024-02-23T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
JASON POINTE: Thank you and welcome to today's event, Current Trends in Peer Review, the second in SSP's 2022 webinar series. My name is Jason Pointe, and I lead the SSP Education Committee's webinars working group. Before we get started, I want to thank our 2022 education program sponsors: Arpha, J&J Editorial, OpenAthens, and Silverchair. We're grateful for their support. I also have just a few housekeeping items to review.
JASON POINTE: Your phones have been muted automatically, but please use the Q&A feature in Zoom to enter questions for the moderator and panelists. There's also a chat feature that you can use to communicate with speakers and other participants. This one-hour session will be recorded and available to registrants following today's broadcast. At the conclusion of today's discussion, you will receive a post-event evaluation via email.
JASON POINTE: We encourage you to provide feedback to help shape future SSP programming. Our moderator for today's discussion is Dr. Clark Holdsworth, senior manager of partnerships and communications at Accdon. Clark's perspective in this webinar is that of authors and the manner in which the trends discussed today affect their experience. It's now my pleasure to introduce Clark.
DR. CLARK HOLDSWORTH: Hey. Thanks for the introduction, Jason. I appreciate you being the one to take care of all the housekeeping for me. I just want to reiterate one of Jason's points: please help me out as a moderator by dropping your questions in the Q&A early. It looks like we're at about 75 attendees now, so I expect a lot of good questions, and putting them in early, so we're not waiting at the end, will greatly improve how we run the session.
DR. CLARK HOLDSWORTH: I'm quite delighted to have the opportunity to moderate this session, and exceptionally grateful, of course, to our panelists who agreed to participate today. I've played virtual host a lot over these pandemic years, and no matter how smoothly we run things, the insights of the speakers are what determine whether you walk away feeling it was worthwhile. So I think we got very, very lucky putting this group together.
DR. CLARK HOLDSWORTH: So I'm going to briefly introduce everybody, in the order of the presentations they'll give. We have Dana Compton, managing director and publisher at the American Society of Civil Engineers. We have Paige Wooden, senior program manager for publication statistics at the American Geophysical Union. And we have Tim Vines, founder and director of DataSeer. Topically, I'm going to let these presentations speak for themselves, as we only have the better part of an hour to get through them.
DR. CLARK HOLDSWORTH: But the plan for how we're going to proceed today is to go through these three roughly 10-to-15-minute presentations without too much delay between them. I may not be able to resist pinging a presenter with one pressing question in between, or the panelists may want to jump in on one pertinent item, but otherwise we're going to wait until those last 15 minutes for your questions.
DR. CLARK HOLDSWORTH: I'll collect them all and start delivering them to the panelists, most likely based on the upvote feature, for them to cover. Apologies in advance if we don't get to every question; we do have a large group here today. Well, with that, I'm going to be silent now and let Dana Compton lead us off. So, Dana.
DANA COMPTON: Great. Thanks so much, Clark, and I just want to start off by saying thank you to SSP and to you for inviting me to participate. I'm really excited to be on this panel. Just want to check-- can you see my slide show OK? We're good? Great. Thank you so much. All right.
DANA COMPTON: We're here to talk about current trends, what's happening in peer review, so I'm going to touch a bit on artificial intelligence tools incorporated into the peer review process and what our experience has been at ASCE over the past couple of years. Just to give a little bit of a picture of what our journals program looks like: we publish 35 journal titles at ASCE.
DANA COMPTON: These bring in about 15,000 submissions in any given year, and we publish just over 4,500 papers as a result. And we manage a very large volunteer workforce to process all that content. In addition to our chief editors, the associate editors that make up our editorial boards number more than 1,300. And from there, you can imagine how many thousands of peer reviewers that means we are working with on any given day.
DANA COMPTON: So a huge number of volunteers help us in producing our content. The question we've been facing is: can we use the artificial intelligence tools out there to help us overcome some of the challenges in our peer review process? So what challenges, in particular, are we seeing? I think this story is going to be really familiar to most of you.
DANA COMPTON: We're facing a huge editor and reviewer burden-- more and more content each year, and I'll talk a little bit more about that-- and a lot of time spent up front in the peer review process dealing with papers that are very likely not going to be accepted, or even go through a full review. So we've been wondering, early on in the process, is there a role for some non-human intervention?
DANA COMPTON: Scope mismatch-- I'll talk about the numbers here, but we see a lot of papers coming into our journals. With 35 journals, sometimes the aims and scope of individual journals aren't all that far apart. So are authors choosing just the right one, or directing their manuscripts in the best possible way? Not always. Does it require a chief editor to devote their time to making those determinations?
DANA COMPTON: Our hypothesis was that maybe there are some tools that can help with that. Bias, of course, is a broad question, and I present it with a question mark because we have a single-anonymous peer review process. We realize that there is likely an element, potentially a significant element, of unconscious bias that is impacting the decisions of editors and the perceptions of reviewers.
DANA COMPTON: What we are looking to ascertain is: can an automated solution help minimize some of that bias, or is it more likely that the human training of that system will only amplify it? Our question is, can a mix of automated and human review help to eliminate some of this?
DANA COMPTON: So in terms of editor and reviewer burden, just thinking about submission growth, our journals have experienced more than 100% growth-- 135% over the past decade. We had the typical spike that I think many of us saw in 2020, during the height of the pandemic, when our submissions rose significantly. That was around the time we were beginning to explore some AI solutions.
DANA COMPTON: They've come down a bit since then, but still outpace where we were pre-pandemic. And so the trend line is pretty apparent here, one many of us have been seeing with our journals: submissions are growing and growing. I had given a talk for Peer Review Week a few years ago about human solutions for growth in manuscript volume-- things like how to use data and structure editorial boards appropriately to ensure sufficient resourcing levels.
DANA COMPTON: But we're getting to a critical point where simply adding more volunteers to the process isn't sufficient to keep up with the growth year on year. And so we've been looking at how we can augment with some automated solutions. In terms of scope mismatch, we looked pretty critically at what's happening early on in the process in terms of quick rejects.
DANA COMPTON: About 42% of the initial submissions that come into our journals are rejected, or at least given an initial reject without review. And I would suspect there's actually room for that to grow; I think the mid-40% to 50% range would definitely be appropriate for the content we're receiving. The majority of those quick rejects are being made by the editors because the papers are out of the scope of their journal.
DANA COMPTON: That's almost 3,000 papers that editors are spending time on to say, this doesn't fit with the scope of my journal. And we have two decision types that fall into that category: out of scope-- just out of scope, doesn't fit my journal-- versus transfer, meaning this would be more appropriate for a different ASCE journal. What's interesting is that almost 2,600 of those decisions are simply out of scope.
DANA COMPTON: We only get about 300 where editors are saying transfer to another ASCE journal. Does that mean our editors don't want good content to stay with ASCE journals? I don't think so. Rather, I think it indicates that our editors are really busy people. They know the parameters of their own journal's aims and scope better than they know those of other ASCE journals.
DANA COMPTON: And they just have to process the papers that are coming in. So if a paper doesn't fit their journal, they are just going to push it along and say, no, this is not for me. Does that mean we are losing those papers to competitor journals? Yes, absolutely. Authors aren't always taking the time to figure out which ASCE journal would be more appropriate, and not receiving a transfer suggestion with their decision, I think, exacerbates that problem for them.
DANA COMPTON: They just assume that ASCE is not the publishing outlet for them. So we really wanted to look at whether we can help the editors-- not by putting it on their plate to figure out where an out-of-scope paper needs to go, but by providing some clues through an AI solution. Language quality is the other big one, and actually, when we started talking with our editors about possible AI in our peer review process,
DANA COMPTON: they ranked this as highly important for them. This is about 800 papers that they're dealing with, where they're reviewing and saying, this is not even at the level it would need to be for a full review, or to be able to assess it, and sending it back for revision before review. So our assessment, looking at these numbers, was that we could cut the number of initial submissions that really require close chief editor scrutiny by half.
DANA COMPTON: Of course, every one of these would still need a human eye, would need validation of an automated suggestion for rejection, but what really requires significant time on the editor's part could be minimized. The other thing we see in our journals is a real-- well, not a shift. We've always seen strong international representation among our content, but it is increasing in recent years.
DANA COMPTON: Our submissions are predominantly coming from China, India, and Iran. Specifically, the submissions from China and India have grown substantially in recent years. In 2020, accepted papers from China outpaced papers from the US for the first time. And the gap between English-language papers and papers coming from non-English-speaking countries is widening.
DANA COMPTON: I'll talk a little bit more about that in a bit. So as we explored solutions, what we chose to do was run a pilot with the Unsilo product that Cactus Communications offers. Of our 35 journals, we ran this pilot program with four of them, chosen because they have a decent number of submissions coming in each year, so we could get some good data out, and because their editors were a little more engaged and really interested in learning what a product like this could do.
DANA COMPTON: We ran the pilot for three months. We had a beta integration with our submission system, Editorial Manager; at the time we ran this last year, Unsilo was just getting integrated with EM. I think it's important to say that we ran the Unsilo check on initial submissions, but staff had actually already done a technical check on these papers.
DANA COMPTON: So during the pilot period we weren't putting all our eggs in one basket and saying we're going to be completely hands-off about this. I know there are a lot of technical things the tool could look for that could also alleviate staff time and staff burden, but we felt that until we had some ground under us and some trust in the system, we still wanted that review happening the way it always had.
DANA COMPTON: We had 200 training manuscripts, 50 from each journal, that were fed to the Unsilo product prior to the pilot. Because this is an NLP-- natural language processing-- system, it is going to learn over time. So we essentially did some training of the tool to understand what kind of content we were looking for in terms of quality, and then during the three-month pilot period we tested the program on about 1,000 manuscripts.
DANA COMPTON: Now, I want to talk a little bit about the kinds of things that were being checked, because you'll recall that scope is our biggest issue when it comes to quick rejects. Unfortunately, scope wasn't something we were able to work into this pilot. Unsilo does have a journal match product, so it's not out of the realm of possibility; unfortunately for us, that is run on biomedical content from PubMed.
DANA COMPTON: So our bad luck, being in civil engineering, was that we would probably have had to train some sort of scope match solution over time using our own corpus of content. It isn't something we opted to do during the pilot, but I do want to point out that was a bit of a limitation for us. Instead, this pilot focused on things like manuscript length, language quality, the existence of citations for all figures, tables, and references, reference formatting, and some other technical elements of the papers, as well as self-citation rates.
DANA COMPTON: The feedback that we got from our editors is kind of summed up here; these are just some of the comments we received from the editors who were engaged in this pilot. Where we landed at the end of this was that, in the absence of a scope check, this wasn't as helpful for us as it potentially could be.
DANA COMPTON: Our editors, although they were really enthusiastic at the outset, were very reluctant to give up that skim of the entire manuscript. And I can understand that during the pilot period, right-- you have to gain some trust-- but it wasn't as comfortable for them as I think they thought it would be. And part of that was because they were receiving very similar feedback on almost all of their papers.
DANA COMPTON: We had one editor who commented that every paper he received said the writing quality was poor. Well, that's not very helpful then, right? Are we going to reject every single paper he's receiving because they're all rated as poor? No. I suspect that some of this is the prevalence of non-native English.
DANA COMPTON: Is that reason to rank the writing as poor? Probably not. It's probably what we had actually been seeing all along, and the editors were making a finer-tuned decision when looking at things directly. We also found that sections of papers might be overlooked-- because a references section wasn't labeled properly or formatted correctly, we would get something like "there were no references."
DANA COMPTON: So are these all things that could be trained into the system over time? Of course, they probably are. In our assessment, was this quite ready for our peer review process? Unfortunately not. With that said, while we did not continue after the three months of the pilot, do we think there are opportunities in the future?
DANA COMPTON: Yes, we're continuing to explore opportunities with AI. That was not quite the right solution for us, but our experience showed us that there are other opportunities. What we did choose to do was launch a pre-submission language editing service. Again, this is back to a human intervention-- these are PhD-level editors, and it's an optional author service.
DANA COMPTON: But down the road, depending on how this service goes, I could see an opportunity to put service levels in. This is a paid author option, and we're doing our best to make it as equitable as possible, with discounting for lower-income economies and so forth. But could there be a service level that is an automated language edit versus a human language edit?
DANA COMPTON: I would imagine that could be useful for authors. For our editors, I think there are other tools that might be more useful in language editing. We are continuing to explore opportunities for a scope check that can give us the kind of lead we need for civil engineering content, as opposed to the biomedical space. We're also exploring tools related to reviewer assignment.
DANA COMPTON: We faced similar challenges there, with many tools being biomedically focused. And we also have a large conference proceedings program. So we've been thinking: are journals the right space for AI in peer review, or is there an opportunity in conference proceedings peer review? Those papers are still reviewed, but the presentations have already been accepted, so a scope check is not as much of a concern there.
DANA COMPTON: By the time we're getting to proceedings papers, the review process is relatively simpler. We also realized from this experience that there is much more staff prep that needs to be done in terms of training a tool: doing some benchmarking-- specifically, when we start to think about it as addressing bias, is this changing the way decisions might go in a human-run environment-- and setting some thresholds to guide editors' decisions, and so forth, before really unleashing it to editors.
DANA COMPTON: So these are some things that we're thinking about for the future. So I guess our answer to the question of can AI cure our peer review challenges would be-- we hope so, but not yet. And that's it for me. I'll hand it off. Thanks so much.
DR. CLARK HOLDSWORTH: Thanks very much, Dana. I appreciate it. We're pretty close to time, so I'm going to transition us through to the next presentation. I don't think we have anything pertinent that can't be addressed in the Q&A session. So next up we have Paige Wooden. You can take it away, Paige.
PAIGE WOODEN: Great. Can you see my screen?
DR. CLARK HOLDSWORTH: Absolutely.
PAIGE WOODEN: Great. So my name is Paige, and I've given a data talk spiel before, so some of you may have heard it, but this is a different message than my usual one. I'm at the American Geophysical Union. We're a society of Earth and space scientists and enthusiasts. The trend in peer review that I want to talk about today is finding your own trends in peer review.
PAIGE WOODEN: And I also want to give you a little bit of a confidence boost if you feel that you're scared of analyzing your data, or helping your society partners, or your publishing partners analyze their data. Don't be. There's a lot of resources out there. And please feel free to email me with any questions or any support that you need for analyzing data.
PAIGE WOODEN: So with the interest in and increased commitment to equity, we're relying more on our data to develop baseline views of who we're engaging in the peer review and publishing process and how they're interacting with each other. Then we want to create goals around who we want increased engagement from, and then track our success and share it with our teams, external and internal.
PAIGE WOODEN: AGU is actually tiny compared to ASCE; we only have 23 journals. We have a similar number of submissions, though, and saw similar trends during COVID and after. We average about 17,000 to 18,000 submissions a year, and we have about 50,000 to 60,000 members at any one time. That's to say we have a lot of robust data sets. At AGU I pretty much look at the annual rates of everything. But here we see the gender of invited reviewers.
PAIGE WOODEN: This is just a sample of reviewer demographics, by percent of total invitations per year, broken out by gender. The rates of women invited to review have been steadily increasing. We did see a decrease in 2021, but we also saw a similar decrease for men. That's because we have an increase in unknown gender, and those are primarily invitations to people in China, for whom we have less gender data.
PAIGE WOODEN: So in a way, you sometimes have to decide: if you want to increase your reviewer pool in certain countries that currently have lower representation of women, you may see lower representation of women in other areas. So we see here that there has been an increase in the percent of unknowns. We can dig a little bit deeper by looking at the increase in women editors.
PAIGE WOODEN: And in our data, not shown here, they do invite more female reviewers. So one thing you can do to increase invitations to women is to bring on more women editors. And then digging deeper-- this is kind of a next-level thing that I looked at-- our editors say, it feels like I have a hard time finding women reviewers.
PAIGE WOODEN: Our agree rates for women are lower than for men and lower than for unknowns. A lot of the unknowns are coming from China, and they have a higher agree rate. And we didn't see a decrease in agree rates in 2020 when COVID hit; we saw a decrease from everybody in 2021. And then I look at invites per person. The blue line here just shows that the lowest rates were for women in 2020.
PAIGE WOODEN: Editors are inviting the same women less often than the same men-- they're going back to the same men more often than to the same women. That may be because the agree rates are lower for women, but it may just be a fear that women are busy: I see a woman, I don't want to overburden her. So one thing to consider is whether you are actually inviting the same women more often. In our case, the answer is no.
PAIGE WOODEN: We're basically not overburdening our women with review requests, because we're inviting men-- the same men-- more often. And then we can look at invitations to people from China. That's been steadily increasing, but it's still only 8%, while in 2021 papers from China were 21% of our accepted papers.
PAIGE WOODEN: So it's still not on par. This is another comparison we look at: reviewer invitation rates versus the potential reviewer pool, and one way to think of that pool is your accepted authors. And then digging deeper, let's look at which regions our associate editors and editors come from.
PAIGE WOODEN: We see an increase from China, and increases from Europe and India. We've had such little representation from India, and we're starting to get a lot of submissions from there, so we definitely want to increase our editorial board representation. And these are not percentages, just numbers. Then this graph gives an idea of an analysis I did on who editors from each region are inviting to review.
PAIGE WOODEN: It compares 2018 invitations to 2021 invitations to China-based reviewers, based on the region of the inviting editor along the bottom. China-based editors invited more China-based reviewers in both 2018 and 2021: 17% of China-based editors' review invitations went to China-based reviewers in 2018, and in 2021, 20% did.
PAIGE WOODEN: And you can see that in most regions, editors are increasing their invitations to China-based reviewers. So it's not just about adding Chinese editors to your board; it's also finding ways to encourage your current editors and new editors to expand their reviewer pool. And now there are a few analysis projects I've done with peer reviewer demographics and peer review data that I wanted to share with you.
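For anyone who wants to reproduce this kind of baseline on their own journal data, a minimal sketch of the annual-rate calculations Paige describes is below, assuming a hypothetical export of reviewer invitations; the file name and columns (year, reviewer_id, reviewer_gender, agreed) are illustrative, not AGU's actual schema.

```python
import pandas as pd

# One row per review invitation; file and column names are illustrative.
inv = pd.read_csv("reviewer_invitations.csv")  # year, reviewer_id, reviewer_gender, agreed

# Share of total invitations per year, broken out by gender.
share = (inv.groupby(["year", "reviewer_gender"]).size()
            .groupby(level="year")
            .transform(lambda s: s / s.sum())
            .rename("pct_of_invitations"))

# Agree rate per year and gender (agreed is True/False).
agree_rate = inv.groupby(["year", "reviewer_gender"])["agreed"].mean().rename("agree_rate")

# Invitations per unique person: are the same people being asked repeatedly?
per_person = inv.groupby(["year", "reviewer_gender"]).agg(
    invites=("reviewer_id", "size"),
    people=("reviewer_id", "nunique"))
per_person["invites_per_person"] = per_person["invites"] / per_person["people"]

print(pd.concat([share, agree_rate], axis=1))
print(per_person)
```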
PAIGE WOODEN: So when considering double-anonymous review, we asked if there are differences in reviewer agree rates based on author demographics. Does the potential reviewer look at the author, and where they're from, in deciding whether to agree or decline the review? What I did is I looked at all the invitations we sent out to reviewers in 2019, and I ranked the corresponding author's institution-- we have single-anonymous peer review right now.
PAIGE WOODEN: I ranked the corresponding author's institution using QS world rankings and Scimago scores, and I looked at all of the papers that went out to review. I divided the author institution rankings into tiers, so the top 300 institutions are in tier one, then tier two-- whatever that number was, it may have been 500-- and then no ranking. And on the left-hand side is the reviewer agree rate.
PAIGE WOODEN: So are reviewers more likely to agree to review a paper from an author's institution that has a high ranking? No, the answer is no, not for AGU. And mind you, this is after the editors do their initial desk reject, and we do see a correlation there between tier and rejection rate by the editor.
PAIGE WOODEN: But once a paper has passed through the editor and goes out to review, these are the reviewer agree rates, and there's no statistically significant difference. Do look at tier three, though. I thought it was kind of interesting that some of the most agreed-to papers came from tier three institutions. That's probably because the research group behind the paper is doing interesting work in geology or space science, but the rankings put the institution lower for one reason or another.
PAIGE WOODEN: Rankings don't look at the best geology programs, right; rankings look at a lot of other factors. We also looked at reviewer agree rate based on author country, and there we see that papers from China do have a lower reviewer agree rate-- a statistically significant difference from papers from the US and Europe.
PAIGE WOODEN: The error bars are not overlapping there, so people are more likely to agree to review papers from the US and Europe. The larger error bars for Africa are because we have very few papers from Africa, so there's potentially more margin of error. For the next one, I looked into a bias expressed by some of our editors.
PAIGE WOODEN: They were thinking, we don't want to invite China-based reviewers because the quality of the reviews is not as good as from the US and Europe. So there is a bias there, a prejudice, and we wanted to dig into it. We couldn't really test quality very easily with just data analysis, but I did look into the ratings that reviewers gave papers.
PAIGE WOODEN: So is there a connection between reviewer criticalness-- that's what I called it-- and author demographics? The reviewer rating equivalents I used were: accept is 4, minor revisions is 3, major revisions is 2, and reject is 1.
PAIGE WOODEN: OK. So I showed the average review score given to papers by corresponding author region. This shows the average review score that authors from China get: the blue is the average of all review scores given, and the green segments out the reviewers who are also from China.
PAIGE WOODEN: It's not statistically significant-- I don't have the error bars here-- but we can see that all reviewers rate papers from China a little bit higher than a 2, and reviewers from China give a slightly higher score overall. But we see that in most regions, and Europe, the United States, and Asia are very similar.
PAIGE WOODEN: Africa-based reviewers, on the other hand, give ratings to Africa-based authors that are lower than the average for all papers reviewed from Africa. So I don't see any bias here; that was the conclusion. But we did look at the scores given by author-suggested reviewers versus non-author-suggested reviewers. On the bottom here, this is now reviewer region-- how do different reviewers score papers?
PAIGE WOODEN: The blue bar shows the average review score from author-suggested reviewers, and the magenta bar shows the average score when the reviewer was not suggested by the author. It's very obvious that reviewers suggested by the author, no matter what country they're coming from-- you can see it in the total, the third set of bars-- rate papers more highly.
PAIGE WOODEN: But do note that there's an exaggerated scale here; we start at 2.0 so we can see the differences a little better. This could be because suggested reviewers are more qualified to review-- authors may have a better idea of who would be an appropriate expert-- but other things could be going on.
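A minimal sketch of the criticalness analysis Paige describes, assuming a hypothetical table of completed reviews; the 1-to-4 score mapping follows the talk, while the file and column names are illustrative.

```python
import pandas as pd

# One row per completed review; columns and file name are illustrative.
reviews = pd.read_csv("completed_reviews.csv")
# columns: recommendation, author_region, reviewer_region, author_suggested (True/False)

# Numeric equivalents used in the talk: accept=4, minor=3, major=2, reject=1.
score_map = {"accept": 4, "minor revisions": 3, "major revisions": 2, "reject": 1}
reviews["score"] = reviews["recommendation"].str.lower().map(score_map)

# Average score received by papers from each corresponding-author region...
by_author_region = reviews.groupby("author_region")["score"].mean()

# ...and the same average restricted to reviewers from that same region.
same_region = reviews[reviews["reviewer_region"] == reviews["author_region"]]
by_same_region = same_region.groupby("author_region")["score"].mean()

# Average score by reviewer region, split by author-suggested versus not.
by_suggested = (reviews.groupby(["reviewer_region", "author_suggested"])["score"]
                       .mean()
                       .unstack("author_suggested"))

print(by_author_region, by_same_region, by_suggested, sep="\n\n")
```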
PAIGE WOODEN: OK. So in conclusion, what can you do? I know that a lot of the data analysis here may be something you wouldn't be able to do on your own. That is totally fine. But I think the first thing, no matter what, is to create a data plan.
PAIGE WOODEN: You don't actually have to do all of it yourself, and it should be in line with your diversity and equity goals and the other goals you and your team are trying to achieve. You're not alone-- that's another thing. You have a large team to help you. Your team could be interns, internal Excel gurus, someone interested in learning Excel.
PAIGE WOODEN: Or someone, like an editor or a member, who's good at statistics-- there are a lot of potential partnerships there between editors and society partners or publishing partners to share data and bring in some experts. Your submission system vendor is also on your data team. Sometimes just going to them and saying what you want to do-- they are able to help you, and there are Excel experts and data experts on their team as well.
PAIGE WOODEN: Just ask a question, dig into the data, and then figure out who you need and what skills you need to brush up on in order to find that information. It's always really good to disaggregate by various demographics-- maybe it's younger women you're not inviting to review, and you're overburdening mid-career and late-career women, for example.
PAIGE WOODEN: And if you need help, I'm always here. So that's it. And I think Tim is taking over.
DR. CLARK HOLDSWORTH: Thanks, Paige. Appreciate it. Tim, you actually had a question for Paige, so feel free to field it to her now if you want, because it's quite interesting-- keeping in mind it will dig into your time. We're at a good pace here.
TIM VINES: Well, I was just looking at that, and you talked about the reasons why author-suggested reviewers might be more favorable. I wondered whether, if you separate author-suggested reviewers by same region versus different region-- are they suggesting their local buddies? I wonder if those reviewers are way more agreeable, or way more positive about the author's paper, than out-of-region suggested reviewers.
TIM VINES: That is, thinking, oh, this person in this other country is actually very good at this, rather than suggesting the person down the corridor who they then go and buy a beer for.
PAIGE WOODEN: Yeah. That's a good idea, and I didn't do that. I just did author-suggested, and people from the same region, separately-- not author-suggested reviewers from the same region. It's exactly what I told you to do on the last slide: disaggregate and then re-aggregate to see some trends. I'm not sure if I have enough data for that specific year to make it statistically significant or robust, but that would be a good thing to look at.
PAIGE WOODEN: I could do that pretty easily across multiple years of invited and agreed reviewers, and author-suggested ones.
TIM VINES: It sounds like an amazing data set, so I'm sure you can get some good-- OK. Let me start.
JASON POINTE: It's all you, Tim.
TIM VINES: --so we can be on time. OK. Is that there for everyone?
DR. CLARK HOLDSWORTH: Yes.
TIM VINES: OK. Cool. So I'm going to talk about rewarding reviewers. Without further ado: why should reviewers get a reward? We need to stop and ask that big question. Because they spend hours and hours of their time-- five to 20 hours per manuscript-- reading through it and giving a highly technical evaluation based on their own expertise.
TIM VINES: And they take this time out of their lives to do reviews. It's this gift of knowledge that incrementally benefits humanity, because it affects the scientific record, and that, in turn, affects our shared knowledge as a society. Peer review is this integral part of how we decide what's true and what's not true as a society. And we really, really want academics to keep doing this, because if they stop, there's a huge hole in our ability to evaluate the research that society is creating.
TIM VINES: But here's the problem. This is from Elsevier-- the link for this is in the chat, to panelists and everyone. It's a paper we published a few years ago about reviewer agreement rates for a significant chunk of the ecology and evolution field. The first point here is for 2003, the sort of golden age of 60% agreement rates from peer reviewers.
TIM VINES: By 2017, 2018, this had dropped all the way down to about 30% across these four different journals. So this is a pretty serious problem, and what are we going to do about it? I'm going to go in reverse order, I suppose, in terms of how current an idea is. One of the much-discussed ways of rewarding reviewers for putting all this time and effort into reviewing is to do what you do when anyone puts a bunch of time and effort into doing something:
TIM VINES: and that's pay them money. And it makes a lot of sense to do this, because journals get money from the articles-- the reviewed articles, that is, that have been through the process of peer review. They get subscriptions, they get APCs. And so it makes a lot of sense that reviewers should get a cut of this for helping with the evaluation and curation process for the article.
TIM VINES: But there's a ton of problems with paying reviewers. The first one is that you can't really pay reviewers enough. If you want a benchmark from outside of academia: an expert witness in a trial can probably expect to be paid about $300 an hour, which adds up really fast. Or consider a very specialized lawyer with a client that has a very specialized problem.
TIM VINES: If you're an expert in the interface between Venezuelan law and Canadian contract law, and a company comes to you with a problem, you can charge them what you like, because there is just nowhere else they can get that advice. And that's very similar for academics. Academics are typically one of 10, maybe 15 or 20, people who can evaluate an article to the fullest extent.
TIM VINES: And so if we move to a system where academics can negotiate a fee for reviewing, they can say, I'm not doing this for less than 10 grand, because I know my expertise is almost unique in the world and you can't really get this review anywhere else. So I'm going to hold out until you give me 10 grand. And that's fair enough, because they're experts, and that's what expertise is worth in the world.
TIM VINES: And so once you start to pay reviewers, there's no real upper limit to what you would have to pay them. Initially you could start with some small fee, but it would quickly go up. And on top of that, even if you pay reviewers a fairly small amount of money-- taking the $450 that gets talked about on Twitter a lot by James Heathers-- you would add almost $4,000 onto the APC.
TIM VINES: And that's because it's not just one review for one manuscript. Every manuscript gets 2.2 reviews, which takes you up to $990 on average. And then, because the APC is charged to the authors of the accepted article, they've got to pay for the peer review of the rejected articles too. So if you've got a 25% acceptance rate, you end up paying for the peer review of four manuscripts in total, which is about $4,000.
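To make that arithmetic concrete, here is a back-of-the-envelope version of the calculation Tim walks through, using the figures from the talk; it's a sketch, not a pricing model.

```python
# Back-of-the-envelope reviewer-payment arithmetic using the figures from the talk.
fee_per_review = 450          # the per-review fee often discussed on Twitter
reviews_per_manuscript = 2.2  # average reviews per submitted manuscript
acceptance_rate = 0.25        # one accepted article carries ~4 reviewed submissions

cost_per_manuscript = fee_per_review * reviews_per_manuscript  # ~$990
added_to_apc = cost_per_manuscript / acceptance_rate           # ~$3,960

print(f"Reviewer cost per reviewed manuscript: ${cost_per_manuscript:,.0f}")
print(f"Added to the APC of each accepted article: ${added_to_apc:,.0f}")
```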
TIM VINES: And my feeling is that if you increased APCs by $4,000 to pay all of the reviewers $450, there would be a bit of an outcry. So even at a fairly low fee, the math just doesn't work; the numbers get enormous very quickly. And if you mention paying every reviewer $10,000, it's insane. On top of that, beyond the money itself, the infrastructure needed to give reviewers money would be another huge investment.
TIM VINES: The journals don't have anything like this, so they'd have to build it from scratch. And this involves sending payments to people all over the world who may or may not have accessible bank accounts or credit cards-- it would just be an absolute nightmare, particularly when the amounts of money are small, because then the amount you're spending to send the money is about the same as the amount of money you're sending.
TIM VINES: And so this whole infrastructure would cost about the same amount of money again to create. And then, where does this $4,000 come from? For an APC of, let's say, $2,000, we'd need to find double that again to pay the reviewers on behalf of the authors of that article and still publish at a profit. So people talk a lot about, oh yeah, it can just come out of publisher profits.
TIM VINES: Publisher profits would have to be north of 300%, maybe 500%, for them to have this huge pile of cash sitting there to start this process off-- and that's not what they're making. APCs and subscription fees already go toward things; there's a ton of other expenses already. So if we were going to move to paying reviewers this notional $4,000 per article, it would have to be added to the APC, and the authors would have to pay it.
TIM VINES: And this brings a whole host of problems of its own, because the reviewers are authors themselves. It's not like the expert witness or the lawyer whose job it is to provide this service-- lawyers provide the service, and that's what they get paid to do all day. Reviewers and academics wear multiple hats: one day they're a reviewer, and the next day they're an author.
TIM VINES: And so whatever you earn as a reviewer, reviewing articles, you probably have to pay back something similar every year to have your own articles reviewed. So there's this tremendous amount of money going backwards and forwards between people, just to arrive at about net zero. OK, some reviewers would do pretty well out of it, and other people would end up in the red.
TIM VINES: But given the amount of money changing hands, what's the point? What are we gaining from this? The complexity of setting up a system to achieve this sort of net-zero effect is not very compelling. And then on top of that, there are the moral and ethical problems. Putting money into a system that currently runs on goodwill and honor is deeply corrupting.
TIM VINES: There is a very well-established overjustification effect: once you start paying people to do things, getting paid becomes the goal, rather than taking pride in achieving the thing itself. And with something like peer review, the difference between doing a really great job of evaluating an article and phoning in your review is very hard to spot, because reviewers can just write a bunch of stuff that looks like a good review when they haven't really evaluated the paper.
TIM VINES: But if that's all they need to do to get the money, then that's what they'll do. And as you all will be aware, the rise of open access has led to this huge swath of predatory journals, which are becoming an epidemic in publishing at the moment. Just imagine what throwing tons more money into the system to pay for peer review would do. Do we really imagine that tons of other scams and conflicts of interest aren't going to happen?
TIM VINES: Of course they are. There are going to be peer review mills, there are going to be all sorts of really ugly things happening. And as I pointed out on the previous slides, what are we actually getting in exchange for this huge new thing we've added in? So, reward options-- paying reviewers, number one. I think we should just come out with it: I think it's a bad idea.
TIM VINES: In fact, I'm going to go a bit further and completely scribble this out: it's a terrible idea. It will be a death knell for peer review as we know it if we start paying peer reviewers. So, part two: pseudo-currencies. This is where you pay reviewers not in cash, but in some sort of token that they can then exchange-- particularly at open access journals-- to reduce or eliminate APCs, the article processing charges.
TIM VINES: These are much easier to administer because they don't have all the trappings of money around them, and the journal has an infinite supply of tokens, because it can just hand them out; it doesn't have to obtain them from anywhere, which is what you have to do with money. So these have a bunch of good features, and there are already schemes like this out there. JMIR, the Journal of Medical Internet Research, has karma credits.
TIM VINES: Another journal has just launched contributor award tokens, which you get for reviewing and so on. These can be stacked up and used to pay article processing charges at those journals. There are still some problems here, because they're currency-like. If these become very, very successful-- because for each article you're giving out two or three sets of reviewer tokens, and people are reviewing three or four times a year for the journal-- then it's possible that nobody ends up paying APCs.
TIM VINES: And that means that paying reviewers with these tokens still indirectly sucks resources out of the system, because you need the APC money to pay for other things too, like the editorial office, hosting software, typesetting, and all that sort of stuff. But these token systems are really very new, and we're not likely to get to that situation soon. It's something to bear in mind, though, because when we talk about systems for rewarding reviewers, we need to talk about systems that are going to take over.
TIM VINES: If this is going to be the dominant way of doing things, we need to think about all the problems that will bring when it's dominant-- not just when it's a small pilot working with a bunch of amenable people. We need to think about what's going to happen if this becomes the norm. And overjustification is still a big problem here: people can review to get tokens, but then they're not giving back to the broader community.
TIM VINES: And so the corrosive effect of overjustification can remain an issue here too. So, pseudo-currencies-- maybe these are helpful. But I also want to get on to the intangible currencies we have, the other currencies of academia: prestige, promotion, respect, time, and joy. Anonymous reviewers are still getting prestige. Twitter can't get its head around this, but the journal knows exactly who the reviewers are, and journals can reward the best reviewers.
TIM VINES: They can become editors, and maybe one day chief editor. Good reviewers also get asked to write commentaries. So this is an inherent reward for being a great reviewer; there's no system we need to put in place for it. It just happens, because the editors are like, wow, this person was really great-- they put a ton of effort in, they review a lot for us, they're quick, they make a lot of sense.
TIM VINES: Let's make them an editor. And that itself is a prestigious position. Promotion-- promotion is hard. It would be wonderful if really great reviewers were more likely to get tenure or promotion, but good luck with that. It's a system that's completely outside of publishers' control; it's the tenure and promotion committees within institutions.
TIM VINES: We can't influence them from here. We could try quantifying reviewing with something like Publons, where people can say, hey, look at all this reviewing I did. But tenure and promotion committees are very opaque, and you can tell them to do all sorts of things-- whether or not they do them, who knows. Maybe they do, maybe they don't.
TIM VINES: The factors underlying particular decisions are very hard to work out after the fact. However, we can be pretty sure that prestige is important to a tenure or promotion committee. So if you're the chief editor of a journal, or an editor on a journal, that is something that counts in your favor-- and it's a position you get to by being a good reviewer. Then there's time, respect, and joy.
TIM VINES: These are really intangible intangibles, but I think this is perhaps the most important point I want to make here. If you're sending emails like this to your reviewers and still wondering why you can't get people to review, you need to look in the mirror. You spelt my name wrong, it's poorly formatted, it's got spelling mistakes.
TIM VINES: No one's signed their name to it. It's just rude, and you've taken up my time. So I want to put this in: don't be jerks to your reviewers. If you want to reward reviewers, maybe start by not actively annoying them. And this requires journals to take a hard look at themselves.
TIM VINES: Are you wasting your reviewers' time? Are you sending them poorly formatted or stupid emails? Are you dismissing or ignoring their efforts? If you are doing these things and having a hard time getting reviews, the good news is you can fix this. You just need to do an audit and think, how am I supporting my reviewer community? So here are some suggestions: centralize review requests to make sure people don't get too many.
TIM VINES: Don't cut reviewers off before their deadline. Insist that editors include a decision statement justifying their decision. And it's these, I think, that are the lowest-hanging fruit for journals to tackle; this is where we should go. So I just want to finish on this thought: peer review is about a personal relationship between the journal and the community.
TIM VINES: So we need to nourish these relationships at every opportunity. So I'll stop there. Thanks.
DR. CLARK HOLDSWORTH: Thanks very much, Tim. Appreciate it. So unfortunately, we are getting close on time, but my excellent panelists have been answering some of the questions in the Q&A already. So I'm just going to go through and field a couple of these real quick, and I may ask you all to give a one-liner for some you may have already addressed. Dana, you had this one where you referenced the STM report.
DR. CLARK HOLDSWORTH: Can you just verbalize that conclusion for us-- really the idea of why submissions are growing worldwide?
DANA COMPTON: Sure. The question was about any research that's been done into why submissions are growing across academia. I just referenced the 2018 STM report, which talks a little bit about scholarly market size and correlates research journal output-- article output-- with global R&D spending. So growth in global R&D spending correlates with growth in research articles, because there are more researchers, essentially, publishing their work.
DR. CLARK HOLDSWORTH: Thank you. Appreciate it. Paige, this is one that you had answered too, a question about suggested reviewers: do the reviewers know that the author has suggested them? You indicated that, no, you don't tell them in the review request at AGU. I was just wondering, as a follow-up, is that a policy, or is that just the way it's historically been done?
DR. CLARK HOLDSWORTH: Is there a rationale behind telling them or not?
PAIGE WOODEN: Yeah. I've never gotten that question, actually, so I've never discussed this, and the editors have never asked whether we can do it. The only thing that would happen is the AE or the editor, whoever is inviting the reviewer, could manually note in the invitation letter that you were an author-suggested reviewer. I don't know-- maybe I'll have that conversation with our team.
DR. CLARK HOLDSWORTH: Yeah. That was my perspective. I've had response letters for my own papers where it only comes through when the associate editor decided to mention it as part of their debriefing, to let me know what was going on, rather than as a formalized aspect. So it seems like it's at their discretion at several different journals.
DR. CLARK HOLDSWORTH: Let me take another look.
DANA COMPTON: If you don't mind my jumping in, I know there was one about training the Unsilo tool from Chris Azure-- thanks for that question. I was trying to type a response and it just got a little unwieldy, so I thought I'd comment. The way we did that up front was by providing papers that already had a disposition in our peer review system to Cactus for the Unsilo tool, and they ran those papers through their system without having the decision.
DANA COMPTON: And then we delivered back to them what the ultimate disposition had been, to match up how close their output was to the editor's result from the peer review process for those papers. I think we did 50 papers per journal. The message we had gotten loud and clear from Cactus was: the more, the better.
DANA COMPTON: So potentially this could have been more finely tuned if we had had the resources to provide more training data. That's what I was getting at in my talk-- there would be a lot more investment needed in terms of staff resources to really make the AI as accurate as we wanted it to be. And then, on an ongoing basis, feeding back the ultimate decision on papers that went through their tool-- did the editors really agree with the Unsilo assessment-- would help with fine-tuning and learning over time.
DANA COMPTON: So I hope that kind of sums it up for you. I just couldn't quite get it into a blurb.
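As a rough illustration of the kind of benchmarking Dana describes, a sketch like the following could compare an automated screening suggestion against editors' actual dispositions; the files, columns, and label values are hypothetical, and Unsilo's real output format may differ.

```python
import pandas as pd

# Hypothetical exports: the tool's suggestion per manuscript and the editor's decision.
tool = pd.read_csv("tool_output.csv")          # manuscript_id, tool_suggestion ("flag" or "pass")
editors = pd.read_csv("editor_decisions.csv")  # manuscript_id, decision

merged = tool.merge(editors, on="manuscript_id", how="inner")
merged["editor_rejected"] = merged["decision"].eq("reject without review")
merged["tool_flagged"] = merged["tool_suggestion"].eq("flag")

# Simple agreement rate, plus a confusion table that is usually more informative.
agreement = (merged["editor_rejected"] == merged["tool_flagged"]).mean()
print(f"Tool/editor agreement on {len(merged)} manuscripts: {agreement:.1%}")
print(pd.crosstab(merged["tool_flagged"], merged["editor_rejected"]))
```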
DR. CLARK HOLDSWORTH: Yes. Perfect. Thank you very much for that one. A little follow-up to it, for one that just came in: are you considering any other tools or solutions for replacing the technical check, since the Unsilo one wasn't something you wanted to adopt?
DANA COMPTON: We are, and honestly, I wish I had the names. Our team is looking at it, and I don't have them, so if anybody wants to send me a quick follow-up, I can absolutely check on that. Feel free to email me or connect with me on LinkedIn or somewhere. I don't have the specifics at my disposal, but we are exploring.
DR. CLARK HOLDSWORTH: Yeah. My organization provides editorial services as a component of our activities, and we were looking at it as a complement-- what sort of tools can we adopt to aid the human-based aspect of it. We also looked at Writefull, so maybe that's one people want to check out. It wasn't exactly working for us, but that may be specific to field or journal size, or something.
DR. CLARK HOLDSWORTH: You may find it useful to complement your human technical checks. And then I had another one for Tim. I was wondering about your opinion on this, Tim, because it came through the chat. This is from Hannah. They're a small, predominantly US medical education journal incentivizing reviewers with an annual top 10 reviewer award, a master reviewer certificate, and CME credits, which I found interesting.
DR. CLARK HOLDSWORTH: I'm wondering if you can speak to how that might dovetail with what you presented on those last couple of slides.
TIM VINES: Yeah, those are great ideas. Top 10 reviewer awards feed into the prestige-- it may really just be a line on a resume, but it still helps, it doesn't cost the journal anything to do, and it shows appreciation. And things like reviewer certificates and CME credits are all helpful. Whether they intrinsically motivate people to do the work, or just recognize people who are already motivated, is a harder question to answer.
TIM VINES: But yeah. I mean, they don't hurt, for sure.
DR. CLARK HOLDSWORTH: And then I just had another one for Paige. You answered this question in the Q&A: where does the demographic data on reviewers come from? Could you just highlight, or verbalize, your conclusion there, because I think those tools would be relevant to the audience.
PAIGE WOODEN: So the first pass is we match email addresses to our member database, where we ask people to fill in-- it's not required-- their gender and their age. And recently we were able to add something to our member database website where an author can go in, even if they're not a member, and enter their gender. Many of our authors and a lot of our reviewers are already members, already in that AGU network.
PAIGE WOODEN: But a lot of our authors, especially those from China, are not AGU members. So we've gotten more demographic data over the last-- I think we just started this year, but it was a big update to our netFORUM member database-- to have a screen where people who aren't members, who are just authors, can add their gender. So that's the first pass.
PAIGE WOODEN: Because that's self-selected, the second pass is a gender API-- a name-based gender algorithm. It's not always accurate; it's more accurate for strongly male or female names. China is usually underrepresented there, because the algorithm is not as good at identifying gender from the names of Chinese authors and reviewers.
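A minimal sketch of the two-pass matching Paige describes, assuming simple CSV exports; infer_gender_from_name is a placeholder for whatever external name-based service is used, and all table and column names are illustrative.

```python
import pandas as pd

reviewers = pd.read_csv("reviewers.csv")       # email, first_name, ...
members = pd.read_csv("member_database.csv")   # email, self_reported_gender

# First pass: match by email to self-reported gender in the member database.
merged = reviewers.merge(members, on="email", how="left")

def infer_gender_from_name(first_name: str) -> str:
    """Placeholder for a call to an external name-based gender inference service."""
    return "unknown"  # a real implementation would query that service here

# Second pass: fall back to name-based inference where no self-reported value exists.
missing = merged["self_reported_gender"].isna()
merged.loc[missing, "self_reported_gender"] = (
    merged.loc[missing, "first_name"].apply(infer_gender_from_name))
```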
DR. CLARK HOLDSWORTH: Perfect. Thank you very much. I could stretch this out for way too long, so I'm being cut off now. I want to direct it back to Jason, because he's going to have some closing remarks and some ending housekeeping items for us. But thank you to all of our speakers; I really, really appreciate you taking your time here.
JASON POINTE: Thank you, Clark. And thank you, everybody, for attending today's webinar. Thank you also to Clark and our panel for a very informative, engaging discussion. And of course, thanks to our 2022 education sponsors: Arpha, J&J Editorial, OpenAthens, and Silverchair. Attendees will receive a post-event evaluation via email; we encourage you to provide feedback and help us determine topics for future events. Also, please check out the SSP website for information on future SSP events, such as those shown here, including our upcoming April 13 Scholarly Kitchen webinar, and of course, registration for our 44th annual meeting in Chicago.
JASON POINTE: Today's discussion was recorded, and all registrants will receive a link to the recording when it is posted on the SSP website. This concludes our session today.