Name:
Real World Results on Peer Review: Pilot Study Indicates a Solution
Description:
Real World Results on Peer Review: Pilot Study Indicates a Solution
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/0c82a6fa-8a58-4473-8298-c6b7f79ddef8/videoscrubberimages/Scrubber_1.jpg
Duration:
T01H00M13S
Embed URL:
https://stream.cadmore.media/player/0c82a6fa-8a58-4473-8298-c6b7f79ddef8
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/0c82a6fa-8a58-4473-8298-c6b7f79ddef8/SSP2025 5-29 1600 - Session 3D.mp4?sv=2019-02-02&sr=c&sig=QN4ccqH8cPvf7Q8cTsvbWc7EOPfF%2BxxLlZIWirH32ts%3D&st=2025-12-05T21%3A03%3A20Z&se=2025-12-05T23%3A08%3A20Z&sp=r
Upload Date:
2025-08-16T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
All right. I think we're just about to get started. So for everyone coming in, we have a panel of four here talking about real world results on peer review. We're going to talk a little bit about a pilot study and then generally about peer review. And so, thank you for being here. My name is Jenny Pittman. I'm with Elsevier.
I work in the health and medical sciences group. And I'm going to let everyone introduce themselves. But first I'd like a vote, with a show of hands, because we're here in Maryland and I'm from Baltimore. And so I don't know if you all have had a chance to enjoy some of the food that's known in Baltimore, like crab cakes, for example. Well, one of the key things that you will always find on crab cakes and in many other products is Old Bay, and the SSP group was nice enough to give us some Berger cookies.
And so, I don't know if anyone has had those, so I'd just like a show of hands: who likes Old Bay? See, Jen. And who likes Berger cookies? All right, so you guys have seen it. All right. Thank you for that little vote. So now I'm going to turn it over to Jeff to talk a little bit.
He's our amazing technologist here. Can everyone hear me OK? Fantastic. Old Bay, by the way, that's my vote. My name is Jeff Christy. I'm a senior client services account coordinator with Aries Systems. I've been doing this for about 7 and 1/2 years now, and I work with onboarding clients, making sure they're comfortable using Editorial Manager, and I'm essentially their tech guy from there on out after their site goes live, just answering their questions, making sure that they have the best experience possible.
And I'm really excited to be here today. Thank you. Hi, good afternoon, everyone. I'm Chris Petkov. I'm Vice Chair of Research at the University of Iowa and also editor in chief of Current Research in Neurobiology, an experimental journal that we started in 2019, right before the pandemic. Not a great time to be starting a journal, and we like to think of ourselves as a platform where we could do experiments that may be useful for Elsevier in general.
So I'm here to tell you about one of those experiments today. And hi, I'm Jennifer Regala. I'm associate director of publishing at Wolters Kluwer Health, where I oversee a team of medical news platforms. I also work closely with our society and proprietary titles that are peer reviewed as well. And then prior to that, normally I wouldn't go back in my resume, but it's important, I think, to the presentation that I'm giving today.
I've worked for several publishers and societies over the years, so I've seen a lot of peer review in my time. So I guess I'm here as the peer review elder, if you will. So thanks. And your new SSP board member. Oh thank you. So you can see our names up here. But I do want to acknowledge someone who unfortunately couldn't be here in person.
Her name is Bahar Mehmani. She is the head of peer review at Elsevier. She's incredibly talented, and she's now the president of the European Association of Science Editors. So if anyone was just at that conference, she was there. She is the driving force behind the results that I'm going to take you through right now.
So here's the problem. Not that there aren't a number of problems related to peer review, but these are some of the problems that we were looking at when we looked at this study. We wanted to have better inter-rater agreement. Inter-rater agreement basically was around 30, 31%. So that means every time one of our editors would ask for two peer reviewers, the peer reviewers would give their reviews.
And unfortunately, the rest of the time, besides that 31%, they would not agree. And so, as we all know, that usually extends the process, because either an associate editor or an editor in chief has to weigh in, or you have to go back out and seek additional peer reviewers. Also, there were significant shortcomings to a lot of the peer reviews, so it was really looking at the quality of the peer reviews that we were getting.
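At its simplest, inter-rater agreement here is the share of manuscripts where both reviewers reach the same conclusion. A minimal sketch of that calculation, with made-up sample data rather than anything from the study:

```python
# Percent agreement between two reviewers per manuscript.
# The recommendation pairs below are illustrative sample data only.
pairs = [
    ("accept", "reject"),
    ("minor revision", "minor revision"),
    ("reject", "reject"),
    ("accept", "minor revision"),
]

agreements = sum(1 for r1, r2 in pairs if r1 == r2)
print(f"Inter-rater agreement: {agreements / len(pairs):.0%}")  # 50% for this toy data
```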
Some of our journals in this study were really successful in recruiting new reviewers, and we were trying to educate them with different classes. We have a certified peer reviewer course, but they were still needing additional information and guidance. So that's really the problem statement. And then from our editors, they were having trouble looking at these peer review reports and making a decision.
And they wanted to make the best decision. And of course, from the author perspective, they were not happy with the inter-rater agreement. They were getting the comments back from the reviewers and coming back to the editors and saying, which one do you want me to address? This is not agreeing. So we tried something different.
We wanted to implement structured peer review. So we looked at three article types. We wanted to focus on original research articles, we wanted to look at review articles, and then perspective articles. I want to make sure it's very clear that we only looked at those article types. So some of our journals in this study did still publish other article types, but these were the ones that we focused on.
And then when we were looking at the questions that we wanted to address, we thought long and hard, because when we look at the review reports, reviewers are really great at pointing out things that are typographical and things that copy editors really could be responsible for. But what the editors were looking for was information on: Are the objectives clear? Can you tell me a little bit more about the methodology? And what are the limitations of this study?
So when we were thinking about it, we wanted to make it as short as possible. But we still wanted to get the information that the editors and the authors would need. So here are the core questions and the timing of the study. Just to let everyone know, this was around 230 journals, between July of 2022 and December of 2022.
And it was over 40,000 manuscripts that were received. We analyzed the data and we also set up interviews with authors, reviewers and editors. And this was really because we wanted to socialize the idea of structured peer review. And we wanted to get buy-in, because some of our editors were a little cautious at first about doing this, because for a long time they hadn't done structured peer review.
In fact, they didn't even have questions where there was a rating. And so they were really hoping that their reviewers wouldn't say, I'm not going to review for you anymore. And they wanted the reviewers on board. Also the authors, they had loyal authors. A lot of these journals had loyal authors that were not used to getting the structured peer review feedback.
So the follow-up interviews were really important, and the socialization. So Jeff's going to talk a little bit more about this, but this is what it looked like in Editorial Manager. So most of the journals that were part of this study use Editorial Manager. And so we set up the core questions in there. And in addition to the questions, we wanted to make sure that every reviewer could actually provide additional follow-up.
So what you see here is the actual comments. So not only did they say yes or no, but then the additional comments to the core questions. So then Bahar decided, OK, we've done this over this many journals, let's write a paper. So let's just randomly sample the journals that have basically done this and see what the outcome is and let's publish the results. So she looked at 23 different journals.
I want to be clear, the journals were across all different subspecialties. So health and medical sciences, life sciences, physics, SSH journals. And they were also different impact factor quartiles. And what was really interesting is there were no significant differences between any of the journals in this study between the impact factor or the subspecialties, which was interesting.
We thought there would be. Every paper that was part of this received those two reviews, and we posted on a preprint server and then published it finally in PeerJ. So one of the key things that we wanted to get out of this was, for the journals that were involved in this study, we wanted to improve the reviewer time, because some of them had really long reviewer times.
And so, that's one of the quandaries: we still want quality peer review, but we don't want it to take forever. And so one of the things that really came out of this was that we decreased it by 5 days, and we probably could have decreased it more in terms of the peer reviewer time, but we still felt that the questions that we were asking were really important, so we didn't cut it down in size.
The inter-rater agreement, as I mentioned, went up. So it's 60%; I think it's actually now higher on some of the questions. And this is really important because from an editor standpoint, they were thrilled. The fact that they didn't have to go out for a third peer review or tap one of their associate editors to do that review. And then what I thought was really interesting was they could have just said yes or no on a lot of these questions.
So we made it optional in terms of the feedback, the additional commentary that they could choose to give. And it was really amazing. Not only did almost all reviewers, you see that 92%, provide answers, but they also gave really substantial feedback. So in that study, what was interesting is it was an average of around 323 words in terms of that feedback. So again, the editors were looking at more detailed feedback that would make the paper better.
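Metrics like the 92% completion rate and the roughly 323-word average are straightforward to compute from exported review data. A minimal sketch, assuming a hypothetical export format; the field names and rows are illustrative, not an Editorial Manager format:

```python
# Completion rate and average optional-comment length from exported reviews.
# 'reviews' and its field names are hypothetical sample data only.
reviews = [
    {"answered_all": True, "comments": "The objectives are clear, but the methods need detail."},
    {"answered_all": True, "comments": "Limitations are not discussed; consider a statistical review."},
    {"answered_all": False, "comments": ""},
]

completion = sum(r["answered_all"] for r in reviews) / len(reviews)
lengths = [len(r["comments"].split()) for r in reviews if r["comments"]]
print(f"Answered all questions: {completion:.0%}")
print(f"Average comment length: {sum(lengths) / len(lengths):.0f} words")
```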
So I think there's, oh yes, here's a little bit more information about the inter-rater agreement. So again, our editors had said to us, you focus on copyediting; we don't want that from our reviewers. Even so, we still had one question, because our reviewers still wanted to note things that they thought should be changed, like a sentence rewritten or typographical issues or English language.
But what I think is really interesting is that the lowest agreement was on the things that the editors thought were most important, so they really wanted that important aspect about the limitations, for example, of the study. And so as part of this, though, we actually received more feedback on the limitations of the study. We received feedback that this needs a statistical review, when that wasn't included before.
And so, when we were talking about some of this research, what's interesting is when we talked to editors, they thought that because the reviewers had not mentioned it before, the paper didn't need a statistical review. It wasn't that it didn't need statistical review, it's just that reviewers didn't think to include it. And because that question was there, they actually said, oh, yes, we need a statistical review for that.
So there are some limitations with this study. I think one of the biggest was that we didn't look at the peer reviewers' age, experience, gender, even location. And we couldn't break apart, when looking at the overall metrics, the journals that had other article types besides original research articles and reviews and perspectives; there were other article types that were put into this overarching pilot study.
The author impact: we have had authors that have come back to us and said, I wish I had known. I wish your instructions for authors listed information that this is how we do peer review, and this is the type of feedback that you'll be receiving. So in terms of being more transparent, we realize that it's one of the things that we could do as we expand structured peer review.
Tips and takeaways. So the biggest thing was editors in chief really wanted the ability to modify the questions. Some of them wanted to add questions. Some of them just wanted to reword them. So when we originally set this up, we wanted to get the research out of it. So we basically said these are the nine questions.
And so our editors would like to modify that. They said, OK, we understand the importance of some of these questions, but can they be reworded, or can we add questions to this? We now have that ability, and it's definitely something that we're still trying to monitor, how many questions are asked, because we want to keep that high percentage of reviewers answering all the questions.
In terms of the authors, I mentioned them already: they really wanted to know about the peer review process. And once they knew about the peer review process, they loved it, because they loved the comments that they were getting and the feedback. I think it was a surprise. We heard from authors that were loyal authors, this isn't what I'm used to receiving, you know.
And so that was definitely something we heard. And then we actually think readers would appreciate some of the structured peer review comments. So we're thinking about how we can share that. And from a reviewer standpoint, we have resources, educational resources about how to review. We think that we want to add to that. So if a journal has structured peer review, we can give them webinars that are very specific to, this is how you handle this question.
These are the things to think about when you're getting this question. And Chris is going to talk about his research expanding on this. And the last is really keeping the door open for feedback from all stakeholders. I will say that we were really excited about the fact that editors and reviewers were ultimately really thrilled. And we have more journals adopting this. And so we're sharing this research so that hopefully more journals can benefit from this.
So now I'm going to turn it over to Jeff. Thank you. Hi, everybody. So I am essentially going to go through how you prepare the workflow that Jenny just went over in Editorial Manager, and how you can report on the results of it as well. And I am really excited to use this laser pointer, which you can see right here.
Perfect. So as Jenny just mentioned, you can have a variety of different custom questions that can be configured to various questionnaires that can be tied to different article types. So different questionnaires can be tied to different article types to have that sort of variation. We also have some demo custom instructions right here. This is just a mockup of a review form that I have on my Editorial Manager demo site, and as you can see, we have the red text there showing that these questions are required, so the reviewers won't be able to just skip them, which I hear about more than you would think.
On this form, you can see that we have those pre-configured responses of yes, no, or N/A here. So again, just making sure that we have that standardization that was alluded to earlier, and making them easier to analyze for the editors, ideally. You can also reiterate the questions that you've already posed to the reviewers down here in the reviewer comments to author section.
This is feedback that can be anonymized and be made available to the authors if the paper is sent back to them as a revision. And as you can see here, this is an additional space where the reviewers are actually able to clarify a little bit on why they selected the yes/no response, so they can add some context that can be really helpful to authors in terms of revision feedback.
And right here is just a simple screenshot of the editor form, where editors are able to review the feedback given by reviewers based off of those questions that were on the review form. And if you check this box right here, that's when you're able to select those comments entered by that reviewer to be made available to the author in the author decision letter; they'll be able to take that feedback and use it when submitting their revision.
Anonymized, too. I think I said anonymized like four times there, so sorry about that. So getting to the configuration side of things, we have our question configuration right here. So pretty simple stuff. I just have my question text editable. I have custom instructions that I can input for each and every question on the form.
As you can see here, I want to have the question and response text available in my decision letter, so I checked for having it available in decision letter merging. And for the response type, it's as easy as just using the list response type that we have in Editorial Manager. You're able to set a pre-configured response of yes, no, and N/A, again having that standardization available. And let's see here.
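Editorial Manager itself is configured through its admin interface rather than code, but purely as an illustration, the pieces described here (question text, custom instructions, a list response type, decision-letter merging) could be sketched as a data structure like this; the schema is hypothetical:

```python
# Hypothetical sketch of a structured review question; Editorial Manager is
# configured through its admin UI, so this schema is illustrative only.
question = {
    "text": "Are the objectives of the study clearly stated?",
    "custom_instructions": "Check the abstract and introduction.",
    "response_type": "list",
    "responses": ["Yes", "No", "N/A"],  # pre-configured, standardized answers
    "required": True,
    "merge_into_decision_letter": True,  # mirrors the decision-letter merging checkbox
}

questionnaire = {
    "article_types": ["Original Research"],  # questionnaires can be tied to article types
    "questions": [question],
}
print(f"{len(questionnaire['questions'])} question(s) for {questionnaire['article_types']}")
```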
So when you're configuring your review form, let's say that we have all of our questions configured. OK, from there, you're able to go and add them to the review form, which will be applied to an article type. You can have multiple different questionnaires for multiple article types, or one questionnaire for a whole bunch of different article types. It's totally up to your workflow needs. But here we have the questions that I've configured.
And as you can see, I can select whether they are going to be required for submission, for review, whether the question and the response from the reviewer are going to be visible to the authors, and even if it's going to be visible to other reviewers, which I'm not sure if we covered in the case study; we'll talk about that. But we do have that option.
And as you can see here, we could also have the comment box that I mentioned earlier. If you're not wanting to have the questions reiterated there, you can actually just reconfigure them as custom questions where the reviewers will be able to elaborate after they initially answer yes, no, or N/A. You could even configure it as a follow-on question, asking for clarification on why the reviewer responded the way that they did.
Again, making sure you have as much information as possible from reviewers about why they responded the way they did with their review. So getting into reporting, one of the good things about how standardized these questions can be is that it makes reporting a lot easier. So I built a very simple report here. As you can see, I believe I have, well, my eyes are getting bad.
I have article title, the custom question text, the reviewer name, and their response. I went pretty simple here. I'm just pulling in those yes, no, or N/A responses, but you're also able to pull in those clarifying comments that the reviewers would have followed up with. So at a glance, you can really have a high-level overview of which sort of responses are skewing which way for reviewers, and these can also be scheduled and sent to various folks at various different times with our reporting utility.
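A rough sketch of the kind of summary such a report enables, tallying how responses skew per question; the rows are made-up export data, not output from Editorial Manager itself:

```python
# Tally yes/no/N/A responses per question from exported report rows.
from collections import Counter, defaultdict

rows = [  # illustrative export rows only
    {"question": "Objectives clear?", "reviewer": "R1", "response": "Yes"},
    {"question": "Objectives clear?", "reviewer": "R2", "response": "Yes"},
    {"question": "Methods adequate?", "reviewer": "R1", "response": "No"},
    {"question": "Methods adequate?", "reviewer": "R2", "response": "N/A"},
]

tallies = defaultdict(Counter)
for row in rows:
    tallies[row["question"]][row["response"]] += 1

for question, counts in tallies.items():
    print(question, dict(counts))  # e.g. Methods adequate? {'No': 1, 'N/A': 1}
```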
And in addition, as you can see here, I just have a little bit of a visual aid as well, in case you're looking for something like a trend in a certain response to a certain question, you're able to do that too. And I think I'm going to turn it over to Chris now. I highly recommend the laser pointer. Thanks let's see if I can figure this out. Well, I'll start off with a confession.
I can't believe how many years I've spent as a scientist without having any structure to my peer reviews. I mean, I can't recollect all the papers I've reviewed, but I can recall the very first review I did, which was completely unstructured. I was just asked to review a paper, and then I kind of had to go figure it out. And as Jenny highlighted, I wasn't sure if I was supposed to help them write the paper and edit it; knowing to skip that would have saved me a lot of time.
And then you learn as you go along. You see how the other reviewers are reviewing it, but that doesn't necessarily mean you're learning from reviewers that have got it figured out, that there's any structure to that. So I was delighted to be involved in this tonight. I'm also grateful to Bahar, who unfortunately couldn't be here, for collaborating on this project and helping us to run this experiment in our journal, Current Research in Neurobiology.
So I'd like to tell you a little bit about structured peer review in action. And this was just a series of five pilots that we conducted a couple of years ago that we're now reporting. And I like the emblem for the society, because I'd like to think that our experiment was involving a community in the peer review process. It was very adaptable, as you'll see.
And we were mindful of the integrity of the peer review, that it remained high and that it was very inclusive, as hopefully you'll be able to see. So how did we start? Well, we started with a partnership with our journal, which, as I highlighted, we tried to be an experimental platform where experiments like this could take place. And we partnered with the team from PREreview, which is a peer review training and live review platform, to conduct these five pilots in 2023.
A couple of years ago, our objectives were to evaluate how structured peer review within a live review setting could integrate into our editorial workflow and try to do that as seamlessly as we possibly could, and then to evaluate the speed of review and the rigor of the review, which were a couple of key factors. So we've all been on Zoom. We know what it's like.
So you're probably pretty comfortable with what our live review session may have looked like. It was a Zoom session for 90 minutes where, in this case, we had a PREreview staff member that was chairing the session, just to provide the structure and to guide it. But we think, going forward, one of the reviewers could be appointed as chair to chair it, so it doesn't have to depend on the PREreview team.
And then, of course, we would have reviewers that could show themselves, unmute their microphones and take part in the discussion, or not. And we also oftentimes would have the authors present, so that if the reviewers had questions for the authors, they could answer them immediately, as opposed to a really long interaction through the usual review process. So that was our starting point.
And I'll tell you a little bit about the structure of the review sessions. So these were five manuscripts that were submitted as preprints online, as they were also submitted to the journal in 2023. And they were evaluated alongside our standard articles that were submitted to the journal during the same time period. And for each of these five manuscripts, we started a recruitment campaign through Editorial Manager and through adverts to the scientific community, so that there could be experts to review these papers. And so we started a campaign to make it clear that there was a preprint that folks could be involved in reviewing at a particular session, and the date of that session was set. And then there was a 90-minute live review session chaired by the PREreview team, as I highlighted, and the reviewers discussed with the authors present, and the editors could observe but not take part in the interactions.
But the key was that the reviewers worked, whether they were aware of it or not, around a structured peer review document. So there was a Google document, an online document, where people could work on it even if they decided not to unmute their microphones, and so they're actively working on the review process. They'd all read the paper before they came to the session, already with questions and things that they could put on the document.
And the structure of it meant that they had sufficient time for each of the sections to be populated by the reviewers. And then the other really neat thing from this experiment was that it ended up being more rewarding to the reviewers than we thought that it would be, because it allowed the review to be summarized and put together, again within that structure, and published on Zenodo, with the DOI given so the reviewers could claim credit and be identified as authors of the review.
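Because the finalized reviews live on Zenodo with their own DOIs, they can also be found programmatically through Zenodo's public REST records API. A minimal sketch; the search query string is a hypothetical example, not tied to these specific pilots:

```python
# Look up published reviews on Zenodo via its public records API.
import requests

resp = requests.get(
    "https://zenodo.org/api/records",
    params={"q": "PREreview live review", "size": 5},  # hypothetical query
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit.get("doi"), "-", hit["metadata"]["title"])
```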
And then the work for the handling editor began. The review that was available online was integrated pretty seamlessly within our editorial manager workflow, so that the editor can make a first and then a final decision. And ultimately, these papers were revised and accepted. They ended up being high quality papers. Now, there's nothing particularly special about our journal editorial workflow.
I just want to highlight where the PREreview process was integrated and how this could work, really, with any journal. This is our workflow. And from manuscript submission to the initial editorial evaluation, pretty similar, pretty standard as with other journals. But what we did was that the PREreview session allowed us to really balance this triangle between the editor, the reviewers and the authors, so that it was more of an interactive situation than you would typically get with a standard paper review process.
And so that was the key thing here, that it meant that the authors also could take part in a process that typically they're the recipient of but don't really get to engage in very strongly. Now, Jenny highlighted that with structured peer review, you can have several questions, and we were also mindful that folks weren't overloaded with the number of questions that they had during the review session.
We wanted them to focus on the review, not necessarily the sets of questions that they would be fielding. And so we compacted that, with Bahar actually, into a couple of sections. One was the summary of the research and the overall impressions, and there were a few questions there, very much like you had, Jenny. And then the evidence and the examples, to go into the meat of the paper, and the recommendations for the authors.
So the session started with the roll call and live captioning, and then folks were actively working on the document for that 90 minutes, discussing. But there were some sessions where there was very little discussion and more work on the document, and they worked around it with the structured peer review that they may or may not have been aware of. And the PREreview team guided that in the ways I indicated.
And then the review was finalized after the session and published on Zenodo as I highlighted. Now, here are some of the interesting data I pulled from the report that we're working on submitting actually this weekend. And so one of the things that was crucial was to have enough participants recruited before the session, because time zone issues and commitment issues can change. That's one of the limitations of doing this live review.
Not everyone might be able to join. And so we typically needed to recruit probably double the number of individuals as ended up turning up. And then of course, we evaluated the speed with which the editors could reach a first and a final decision. I thought it was very fast. I was observing, but that's not what the stats show. So statistically, what I can tell you is that it was as quick as our conventional paper decision times in 2023.
And then in terms of the survey demographics, it's interesting. Again, one of the limitations is that it tended to engage folks primarily from the United States. And so we might think about how to implement this in a way to bring a greater diversity in terms of geography. It was fairly well split in terms of gender and ethnicity, and a really good mixture of individuals that had never reviewed before and those that had reviewed a lot before.
And all of them seemed to appreciate going through the structured peer review process. These were the survey results, and it's not a lot of folks; again, a few cases to report here. And I wish that we'd obtained data from the standard paper publications, basically submitted the survey to those folks, so we could compare those numbers. So I don't have a point of reference. All I can tell you is that they're actually fairly high in terms of rigor, in terms of the review being respectful, and just a number of things that we were very interested in.
So fairly high marks. I just can't tell you that they're higher than what we would have gotten with our standard review process. As a scientist, I have to make that disclaimer. So this is what an example review looks like. So this is the published review. And you can see a few of the authors there that can claim the review.
And this is the paper that was published in the journal. And we had five of these. So it was just a delight to see these through. So we know that there are issues with the standard review process. There's a lack of formal training, review times are slow, it's difficult to maintain scientific rigor in this day and age, and reviewers often see their service as unrewarding.
It's just something that they have to do, but it's not something that they take a great amount of pride in, unless, of course, the journal has got a very high impact factor. From these promising five pilots: the structured peer review provided training, whether reviewers were aware of it or not. Favorable review times were what we saw, and the rigor, at least with the survey, seemed to be high, which was exactly what we were hoping to achieve.
We didn't want folks to be blowing off the session just because it was a live review, and reviewers could claim review publication as a reward and recognition for the service. I do think that scaling up is going to be the challenge; it's easier to do a review on your own and then submit that at your own time. But I do think it's an addressable challenge, and we've talked to some of the folks from Elsevier about scaling this up in various ways.
And the pilots paper will soon be available as a preprint, and then we'll send it in to see if it'll get reviewed by someone and hopefully published. Thank you. I was just asked if I wanted the pointer, and I do, just because I want to point it. But anyway, the top one.
Yeah, OK. So that's me. That's about the only time I'll use it. So hi, everyone. I'm Jennifer Regala. Excited to be here to close us out today, before we take some questions from the group. And first of all, I want to say I've spoken at SSP a lot of times before, but I'm nervous this time because my husband surprised me in the audience, and he is wearing a pink shirt, and that's how you can tell who he is.
So if I seem a little shaky, that's why. But I'm happy he's here. And my role today is I'm here to bring it all home. I'm here to bring this conversation that we're having about peer review back to what you guys are going to go home and do with this information, and how you are going to improve the publications that you're working with. We're right by Camden Yards.
So consider me the closing pitcher. I'm old, but I still have a good arm, and I've played for a lot of teams. I'm late career, and I'm here to tell you the stuff I've seen. If you sat and listened to what my colleagues all spoke about here today, though, I think the key takeaways that we have are education, education of everyone who is involved in the process. And that includes your editorial office, that includes your reviewers, obviously editors, authors, and making sure that they're all on board.
There's clear communication, there's efficiencies, and that you're never complacent in those efficiencies, that you are really working your system and making it the best that it can be, that you are being mindful of time, good stewards of the time of all of those people who are involved in that process, and that those processes are very clear, very defined. Recognition in all of this is super important, too: recognition for everyone involved.
And then transparency, and how you want to be transparent and communicating that transparency. And last but not least, structure, which we've been talking about, but innovation in that too; I always encourage you guys just to innovate. And what's right for one publication is not right for another. But what is right for every publication, again, is to be clear about how you are doing everything that you're doing.
So moving along, I mentioned I'm late career. I didn't know my husband would be here, but I did put a family picture here to prove that I'm old and a grandma. I just wanted to have picture proof there. But in my career I've probably worked now, I think, with 500 or more journals, and that's a lot. But the first thing to consider in that is to think about how each one of those, as I already mentioned, is a fingerprint.
Each journal is different. Embrace those differences. And then work with your communities to find that peer review system that works best for all of you. Again, lots of communication is involved there. Peer reviewers are our most precious resource, so make sure you're doing what your community wants, and you're really making the best use of their time, and structure helps with that for sure.
I've seen everything from a peer review returned from an eager medical resident, let's say, that's five pages long, and you're just reading this; I'm not the one making the decision from that peer review, but I'm looking at it thinking, what's someone going to do with this? Or you get two words: it's garbage. And the structure, and providing different opportunities for your community to come together and find a review structure that works best for you all, is really imperative.
Then also in there, be mindful of your peer review system. I was really lucky; I used to work with Jeff back in the day at Sheridan Journal Services. But find your Jeff, who is your person with your editorial solution, your peer review solution. Who's that, and how can you get that person to best serve your publication?
So real things that I've done and I've seen that have really worked. I worked at the American Urological Association for many years, a society publisher who was partnered with Wolters Kluwer, my current employer, and we worked on a Journal of Urology peer review project, and we involved every single person in our community in that. We started with the senior editors of our Journal of Urology editorial team, and we sat there and broke down every single process we had.
I was fortunate enough to have an editor on the editorial board who could draw and map that out. The map of that, when you tried to put it on one piece of paper, was so complicated, with lines crossing here, there, and everywhere; things were going all over the place. I couldn't even keep up with it, and I thought I had understood it until that point. So we thought, OK, this is our opportunity to make this system better.
And that's when we started surveying. We did quantitative surveys, but we did qualitative surveys too, and sat down with our top peer reviewers, the ones who consistently, year after year, were in our top 10 reviewers. We sat down with our most prolific authors and asked them for their pain points. And then we went out to the larger editorial board to go through and see what's going on there. Another thing I did during my AUA years: I do think the education component is key; I mentioned that first up, and we've talked about it today.
Educate your peer reviewers. And also, I don't think scholarly publishing is a place for competition. I think it's a place for collaboration. So we reached out to what was considered a competitive journal, the Journal of Pediatric Urology, and we joined forces with them to put together a peer review training program. It was really insightful.
It was a standing room only situation. We had people sitting on the floor, and people loved it. Our survey follow-up after was amazing, and we did actually see many new reviewers, and those reviewers were good. So it kept us committed to doing that moving forward. There is a paper in Learned Publishing that we published about that experience; I'm happy to share it with anyone after this.
Another thing is participate in every user group imaginable. I hate to keep throwing Jeff under the bus here, but Editorial Manager does an amazing user group. I love that user group. eJP, all of them do it. You should be going to those, and you should be paying attention, and you should be learning from those things. Peer review migrations: if you're doing a peer review migration and moving on to Editorial Manager.
And I do not work for Elsevier or Aries, so this is not a commercial. But if you're moving to Editorial Manager, let's say, make the most of that, and there are experts there; find out how you can maximize that. That does not have to be a headache. That can be an opportunity. Peer review model changes: think about that.
Double anonymous? Are you using open peer review, or are you publishing peer review results, and how can that be used as educational tools? Another thing that is a real-life thing that I've done with several societies that I've worked with is the desk reject marketing campaign. And this sounds crazy, but I'll explain it to you: you teach your editors to desk reject as often as they are able to, but in that, make that editor's rejection as quick as possible, because you're saving that author time.
But make that be a really good review of the paper. It's not just, sorry, you're rejected, because you're trying to do it quickly. It's, sorry, you're rejected, but here's why, and here's how you can make this be an awesome paper for our journal in the future. Another thing is, again, communicate. Don't waste people's time.
My favorite thing to think about with peer review is I had an amazing editor who would say to me, I am an editor on this journal, but when I submit to this journal, where the hell does my paper go. I can't track it. I don't know where it is. Why does Amazon tell me exactly where my grill is and then send me a picture of it on my front porch. But you can't even tell me where my paper is.
And ask the questions of all these constituencies and show you care. And now I'm trying to hurry, Jenny, because I'm marking my own time and I'm talking way too long. OK, so just go back to your office with these practical tips. Again, it drives me crazy when I can't go home with an actual to-do list. But I would say:
When was the last time you talked to your editor in chief about their pain points? Your handling editors, or whatever you're calling your handling editors, your senior editors, associate editors: ask them questions, and ask them often. Make them meaningful. Stop wasting your in-person meeting times on going over peer review data, when you could just be sending those reports out every month.
Instead, spend your time talking to your editors. What about those authors? Are you really talking to them? Are you asking them about their experience submitting to your journal? Also, be looking at what you're sending to your authors with decisions. Again, are you sending them that review that says, this is garbage?
You really need to be taking that opportunity. The reviewers: talk to your top reviewers all the time. You should be talking to your top reviewers every single week and saying, OK, first of all, thank you. That's recognition. But then get something back from them and say, OK, why are your reviews so good? What are you looking at? What are you thinking of?
How can we use what you know and what you do so well to teach others. Ask people who are brand new to your review process the same thing. What are you struggling with here. What's hard about this process. Think about connecting those top reviewers with those early career reviewers who happen to show great promise.
And then last but not least, and I'll point my green pointer over at Jeff. Are you. What did I have. What have I done here. OK, I've totally messed this up. OK, well, it doesn't really matter except for to say, call your Jeff. So every peer review software team is excellent.
I've never seen a bad one. Jeff's amazing; I call him. He's not even fully connected to some of my publications. He's always happy to help. But every Jeff is just like that. Use your teams wisely. Wait, did I totally mess this up? That doesn't matter, because we're good.
It's fine. Thanks. And then what about your publisher? I work for a publisher; use our teams as well to get that done. You shouldn't be doing any of this alone, and you should be working with all of the resources that you have in hand. So with that, I'm going to finish up and we're going to take questions from the audience. I'm going to run around in the audience with a microphone, and we want you all to raise your hands and ask lots of questions.
Thank you. Here in the front. Hello, thank you very much for this great presentation. I'm coming from Paris, France. I meet a lot of physicians. I do a lot of trainings about peer review and medical writing, because we don't speak English very well, so they need some more assistance.
You mentioned that lots of reviewers feel unrewarded when they do so. Don't you think that rewarding the peer reviewer would speed up the time? And also, would it convince them to follow a structured peer review process as you describe? I think this is one of the key elements, the reward. And I'm not talking about money, because those guys do not need money to do the reviewing and they're not looking for money, but at least some consideration for them or their institution when it's about to submit another paper.
Because all those reviewers are also authors, so they review for free and they pay to submit with APCs. Don't you think something could be done there to improve this situation? Thank you. Yeah, no, I mean, really good point. And it's multifaceted, isn't it? I mean, we all find things rewarding in a very different way, but we try to explore that space. I mean, when we started the journal, we even surveyed the community and asked them, so how would you transform things?
And some of the reviewers said, well, pay me for my review. So obviously for them that would be monetary reward. We don't want to go there, but I see your point. And Jennifer, as you highlighted, there are some reviewers that just go out of their way to provide thorough, rigorous, high quality reviews. And I would love it if our journal could knock down the APCs for those reviewers, to be able to submit and get some recognition in that regard.
The other thing, I mean, reading some of the papers trying to get around this issue: reviewers like me sometimes will gravitate towards high impact journals because it's just rewarding. And then those journals, the editors there know that they can count on me to get that review in ASAP, or they're going to get after me. Like Jeff Northcott at Current Biology chases me down pretty regularly.
So that's one way to do it. There are various ways that reward can be there, but I like the idea that, as you said, we just start thinking about what the key stakeholders, editors, authors and reviewers, are really finding beneficial in this process, and what they are finding rewarding, because what's rewarding for an author wouldn't necessarily be rewarding for a reviewer, and vice versa.
So thanks for raising the point. And Chris, you said something really key there when you were talking about how every we every editor has that little spreadsheet of those people they can count on. So how are you combating that. And how are you making sure that you're growing that pool. And the rewards sometimes can be found. I have seen not that a good peer reviewer needs any more work, but saying, hey, you're an awesome peer reviewer, how about you mentor this promising person.
And then that the paying it forward. And also, have you ever done a back of the envelope calculation to when people come to me and say pay us. As reviewers, I have met so many great reviewers and sucked them into the web by doing a back of the envelope calculation to show them how much as a society publisher, for instance, we'd have to pay every one of those peer reviewers, and then that money could not be reinvested into that community.
That usually shuts that conversation down. But then we start in on, then what can we do better to make this better for you? So, Jeff, I think you were going to say something too, or no, were you going to respond? OK, OK. Jenny, did you want to add? OK. Oh, I need the ones far away so I can get my steps. Oh, I can't even, my eyes are terrible. Hi, thank you for this talk.
Adriana Bourdieu from the American Society for Microbiology. So this is a question for Jenny. Hi, good to see you again. So I was really interested in a little bit more about, we talked a lot about the time to peer review as well as the inter-reviewer reliability. But also I'm very interested in some of the comments that you received from actually surveying and doing the interviews with the authors and reviewers, but in particular the authors. As you said, they were wanting to know a little bit more about, OK, what to expect.
But other than that, did you get a sense of whether or not they felt that the way that they were receiving this information was more helpful to them, or did they see a value in that? It's great to see you. So thank you for the question. So as part of a general thing, we do a survey to any author that is published, and so it's considered the author feedback program.
And one of the things that was a limitation of our study was that we couldn't isolate the authors that were giving us feedback. So I mentioned that some of the other article types were still included in these journals, and they were not getting a structured peer review questionnaire. So they were still doing a regular unstructured peer review.
And so all of the feedback was a combination of both of those. But in the interviews, what we found was that the authors wanted additional information on what questions were asked. So, the information. Jeff was great in showing what was pulled into EM, so they would see all the comments. Sometimes the editors didn't want all the comments to be seen, and the authors sometimes were confused, in terms of the comments to a question, about how they should interpret it for the revised manuscript.
So what we found was that we want to make sure that authors know about the questions going into the peer review process, so not only include it in the instructions for authors, but also include it in, say, EM, so that they could actually have a prompt to say, this is how your paper will be reviewed. And so there would be less confusion from the authors when they receive the replies. What was also nice was that we made it optional for reviewers to give that feedback, but there were a couple of questions where we would have said, oh, this shouldn't be optional.
It should be mandatory. So there were three questions that we decided should be mandatory, and we would let every author know that those are the mandatory questions that we're going to be asking reviewers. So when you're writing your paper, think about that. So that was really, I'm hoping I addressed all your questions. Thank you for coming.
Yeah, and those clear instructions are key. Who had the final say on the three questions that were mandatory? How did you get to that point? Sure, actually, it was a combination of the editors along with the comments that the peer reviewers gave. We wanted to make sure that those questions were never not asked.
So I mentioned that 92% completed all the questions. We wanted to make sure that we got to 100%. And most likely, the editors in the program wanted to have fewer questions asked. And so there was confusion about which questions are most important. So, for example, the editors did not want to ask the question about English language or copyediting, because they knew that copyediting would be done after the paper was accepted.
However, reviewers gave the feedback to the editors in chief that they still wanted to weigh in on that. And so, as you mentioned, reviewers are a precious resource, so we wanted them to still feel that their comments were valued. And so it really was a combination of, OK, what are the most important questions from an editor perspective, and then we'll still build up the additional questions that are optional.
But the three most important questions were typically regarding methodology and the limitations of the study. And so Bahar went out and asked for a vote. She rephrased the questions a number of different times, and that's basically how she got to the top three. And I'm sure she'll publish that information. So one of the things that I didn't mention was that she is president of the European Association of Science Editors, so she has published a checklist.
And that checklist has the questions that can be used. So you can just download the checklist; it's freely available to all. And so that was one of the things where she was looking at the questions and trying to get buy-in from the editors, in terms of which mandatory questions should be on the checklist. To sum up that response, I would say thoughtful.
It was a thoughtful process, and that is the key. And that's where you're going to get your success and have smooth processes. So I would encourage that thoughtful thinking. Not too many steps this time. I was wondering, do I grab this from you? OK. I was wondering how applicable you saw this being for other subject areas. So I know that in the social sciences and the medical sciences, there might be differences.
How much of a lift do you think it would be to create one for different subject areas? Is there a need? Just curious on your thoughts. Chris, do you want to go first? OK. So we did do the study across all different journal types. And I will say that SSH was the one that came up in terms of, OK, can we modify these questions? So those editors definitely had the most.
That was where we saw the most requests in terms of being able to modify the questions, particularly for the article types, because SSH articles didn't necessarily follow a very structured article type. And so some of those questions were not always as relevant. And so that's one of the key takeaways in that, and that Jennifer was talking about: you should make sure that it's appropriate for your journal.
What does the editor need to get from the peer reviewer? And so, we did realize after the study, we noticed that different articles within different subspecialties would benefit from either a shorter or a longer question list, appropriate to their article types. So no, I completely agree. Not every structure will be appropriate for every journal or for every type of article that you would have.
And just with the five that we had, the structure worked well for the initial four, but the last one was a review rather than an original paper, and we restructured it. And so I think it's really good, in the era of openness and sharing, that we share those structures online. And as folks start to implement structures that work really well within a particular domain, I think that's going to help out quite a bit.
And as you're hearing between the two of us, we're debating about whether you want to have more or fewer questions and how you actually structure it, just to make sure you don't discourage folks from taking part in the process if they're thinking that this is going to be a lot more work than it actually is. But Jenny, your point, I thought, was really well made, that folks can start to appreciate.
And maybe it gets back to the reward question, that this could be a lot more streamlined of a review than what they're accustomed to doing. My reviews are a lot more streamlined now, but it took many years for me to figure out what it was that editors wanted. And again, coming back to reward, that can be rewarding in and of itself. But I just remember, just last week, a review I turned in, and immediately the editor thanked me for that review, and I thought that was really rewarding, because it just doesn't happen very often.
And that's part of the education. That's where an editorial office educates their editors to reach out and have that personal outreach, a phone call out of the blue, or a really nice email saying your review is really good, and here's why. Pointing someone out and making them feel good goes a very long way. And a lot of people don't do it.
So do we have time for one more, or are we at happy hour? We are at, oh, OK. Well, I don't want to hold anyone back from the free beer and wine. So thank you guys all for joining us today. And you can contact any one of us at any time, especially call Jeff if you need any help. I'll be waiting, and we're here to help. Thanks.