Name:
Addressing problems in peer review_Recording
Description:
Addressing problems in peer review_Recording
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/f324eb57-cf07-436f-a788-0ecb394abccb/videoscrubberimages/Scrubber_1.jpg
Duration:
T00H38M14S
Embed URL:
https://stream.cadmore.media/player/f324eb57-cf07-436f-a788-0ecb394abccb
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/f324eb57-cf07-436f-a788-0ecb394abccb/Addressing problems in peer review_Recording.mp4?sv=2019-02-02&sr=c&sig=6y4%2B1yl%2BaeOI95Q%2BbXxkgPCnWtAqJwLKRk65C%2BHx8HE%3D&st=2024-05-17T01%3A35%3A40Z&se=2024-05-17T03%3A40%3A40Z&sp=r
Upload Date:
2024-03-06T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
Hello, everyone. I hope you can hear me. I don't think you can see me, but maybe that's for the best. Thank you very much to our speakers there. I have some questions in front of me here to pose to them. If there are additional questions, you can share them in the chat. It's obvious this subject has come at a very good time.
I noticed this week in Nature there was another article on exactly this issue, called "Stop the peer review treadmill. I want to get off." If you haven't looked at that, you can have a look after this session. So I've got four very interesting questions here, one for each of our presenters. So maybe I'll start with you, Jasmine.
So we saw in your word cloud that there were lots of problems that authors and reviewers associate with the peer review process, but maybe one of the ways we can alleviate some of those, as you identified, is having an influx of new people into the peer review merry-go-round, called into the environment of reviewing for journals.
Can you think of what concrete actions we could take as journal editors or as publishers in order to widen this pool of reviewers? Do we want to widen it in terms of career stage? Do we want to widen it in terms of geography, or both, or everything? What are your thoughts on how we increase the diversity of the reviewer pool?
I think, so, just hearing the question, how do we increase the diversity of the reviewer pool? Again, I think there's an influx of people, and I also think there are a lot of players already in the systems that we don't utilize effectively, or at all, because we have these standards of what constitutes this, quote unquote, good reviewer.
I think that's more the question: it's not that you can't find people to review. You want someone to be a good reviewer. But I think you have to understand what qualifies a good reviewer in your space. What are you looking for? And start to address that quantifiable thing about that group. Right, so if "good" to you means they get it done fast, or a certain level of expertise, or the time they spend, whatever your qualifications are for "good," I think that's really specific to your space.
I think in order to get new people into the system, you just have to use them. I always think there are tons of people; we just don't use them because we don't consider them good reviewers. So start making some decisions today about what a good reviewer would be. And is there a way to translate the reviewers that we have now, who are currently seen as bad?
How do we translate them into good reviewers? And I think that's so subjective, because when I look at the process, what I hear our authors asking us is that they want it to be done faster. They want, yes, a rigorous review, but they want it done as fast as possible. Right, so we're weighing rigor against time, and we can't maximize both. Tim alluded to this in his comment: what do we gain for the time saved versus how much rigor we're getting?
Right, and that varies for each group. You get people into the system by utilizing the people you have. If you take a look at your systems, a lot of these systems collect information about all your authors. I've worked at organizations where every author had the potential to be a reviewer, right? But we wouldn't utilize those people, because no one valued the input of every person in the system.
So I think it's just translating that into something that's more useful. How do we make these people what you need them to be? Because they are present, and they are here, and they can be effective. I think we're doing a lot of that with efforts towards early career researchers. You know, we say, OK, we know they're not going to have this level of expertise.
So there's that exception. Some people do a pairing of what would be considered an expert with someone who is not as senior in the space. So I think you have to play with some things. But I think what we really should get at is this idea of making what you consider bad, good: what does that look like? Really define that, and show your stakeholders that there are a ton of people; we just don't qualify them as good, so we can't use them.
So I just offer that as a response. I think that's something you can do today: start to reconsider what you are looking for in your reviewers. Just because someone does not have this level of expertise now does not mean they're not going to gain it at some point. So is there a way of using that, right now and today? That's just what I offer as feedback.
I don't know if others have anything to add, but I see that as just one of the biggest problems: we don't trust the people, so we don't use them. It's interesting, because we could build some trust by maybe training them or offering them onboarding programs for the reviewing process. I know some of those programs have been very helpful, but I also know they're not super visible.
So I work for a company in India, and whenever I speak to Indian researchers, within an hour they've asked me, so how do I become a reviewer for a journal? There's a big appetite for it out there in, let's call it the global South, but there's no way for them to express that they're interested in helping to review, and they see no clear path to being qualified enough to aid in the review process.
So there seems to be a bit of a disconnect between the publishers, who want to expand their reviewer pool, and an expanded reviewer pool who want to be reviewers. Something is not quite meshing there in the middle. I don't know how we make that easier. Yes, I mean, I completely agree with what Jasmine is saying.
The skew in who we use as reviewers is very strong. I worked at a journal where one editor picked the same four people for every paper he was given, and we'd say, no, no, they're already reviewing a paper we sent you last week. We can't invite them again because they're already reviewing that.
"Oh, there's no one else out there." No, that's rubbish. There are tons of people out there; you just need to think about how to find them. This is really where something like Prophy, a sort of new product that's coming along, and the other reviewer-suggestion tools out there, are going to enable editors to break out of this hamster wheel of picking the same people over and over again, and to do it in a smart way.
Yes, they published an article, but it wasn't in this journal; it was in another journal, but on the same topic. Yes, this is an article about birds in North America, but someone did very similar work on birds in India, and we should totally ask them to review. But once they're invited, it's up to them. I think somebody at least needs to have published a paper, or have had reviews on their own work, before they're really ready to review.
Unless they're getting exceptional mentoring, because then they've kind of accumulated enough expertise in that area to offer valid opinions. But, you know, you don't need to publish many papers before you can be a great reviewer. Once you get the opportunity to review, though, you do need to put the effort in and do a really great review. If you get invited and throw back one paragraph saying, I liked it, it was nice, then
the editors will be like, well, that was rubbish, I'm not going to ask them again. So you've got to make the most of the opportunities that come your way. Sure. Fred, I'm going to bring you in here. The model you were showing there, which eLife has recently adopted, is super interesting, but thinking of the industry as a whole: do you think over the next few years we will need to review everything?
Or can you imagine a situation where the peer review process is a kind of premium service that only some articles enjoy? Yeah, it's an interesting question, I think. Obviously there are limits to the number of reviewers out there, and there are limits to that. I think in an ideal world you would like to review everything, but that isn't going to be possible.
That's why we still have a senior editor making a decision about whether we will commit to reviewing something, and that decision forms the basis of whether we will essentially publish that review. Obviously there will be various different communities that will aim to review different papers.
I can see how that could be a problem, in the sense that certain papers might get reviews from various different communities. Five or ten years from now, you could potentially have one paper getting a review from Nature and then Cell and then Science, and other papers that don't get any reviews at all. So I think that is a problem that review communities need to think about for the future when they implement these kinds of processes.
And I think it also stems into what we were just talking about. Publishers and their communities need to try and foster situations where as many people as possible, who obviously qualify to do so, feel that they can provide honest reviews. People who are in the early stages of their careers, or otherwise in potentially vulnerable positions, need to be able to feel comfortable writing honest reviews about the work of more senior colleagues or people who have sway in the industry.
So, imagining that the increased throughput of manuscripts is causing a bottleneck with human review: Tim, you mentioned that there are aspects of the review process which are currently automatable or automated. Again, looking at that three-to-five-year timescale, can you imagine that the amount of the process we use computers or AI for will increase, and improve transparency in review?
Yeah, I mean, transparency often depends where you're standing and where you're looking from. For something to be transparent, there needs to be someone trying to look, if you see what I mean. It can be transparent, but if no one's trying to look, then it doesn't matter. So what I mean by that is: sure, you could automate open data, which is what DataSeer tries to do, helping researchers share their data and trying to make that a scalable process.
But then someone has to try and do something with that open data. And it may be that you also automate the reproducibility check stage, which is another step along that line. But that transparency is only of value if somebody comes along later and tries to see what happened with that article.
I have been skeptical for a long time that AI would really be able to help with peer review. So, sure, compliance checks? Absolutely. Did they use PRISMA? That's pretty easy to tell. Did they share the data? That's hard but possible, as we're finding. But things like, is this statistical test appropriately applied to test this question?
That's really hard, and you have to have a lot of expertise, and you have to understand the paper in an abstract way, not just in terms of the words on the page. And I don't think AI models are capable of doing that. GPT is just using prediction: it just uses the model to pick the most probable next word to say. It doesn't understand, at a deep level, what's going on. So I think automated peer review is a long way off.
But I could be wrong; it could happen. We have been surprised by AI recently, I guess. So I have a question; this is the last of my questions, and then we'll move on to the audience questions. So, Adam, thank you for your presentation, very intriguing and interesting. So, the concept of publishing manuscripts on a
preprint server, like you did with PsyArXiv, and inviting comments and suggestions, which I guess in aggregate could be considered to form a review of sorts: that kind of technology has existed for a long time, but isn't super well adopted by readers. What do we have to do to incentivize readers to leave comments and constructive criticism, so that we aren't perhaps so reliant on the peer review process?
I think my answer to the question is: write papers that they want to read and comment on. If someone isn't engaging with your work, that is also information, and that is something I could have learned from posting that paper online if no one had responded to it; it suggests that this might not be useful to people. And in fact, just last week I published kind of the more readable version of a paper that I published last year in PNAS.
And there was less of a reaction to it. And I felt like that was right; I never actually felt like I did a good job framing why that paper was important and what we can learn from it. The fact that people have less to say about it was useful information, which I trust more when many more people have the chance to say something about it and decline, rather than maybe one editor, with their particular proclivities, who says, I'm not interested in this, when the next person down the line might have thought it was the most interesting thing they'd ever noticed.
And I think part of the way you invite people in as well is writing papers in a way that people can understand them. Almost no one reads most papers from beginning to end, often not even the reviewers, in many of our experiences. Why is that? I think it's because papers have to be written in such a format to pass a sort of legalistic review, which isn't actually the best way of transmitting the information to most people.
When I read a paper, most of the time I want the gist. For the work that I'm actually building on, I want to really go in deep and see the analysis they did. But that's not the way I take in most scientific information. And so what I've been doing is writing my papers for the 99% of people who are going to read them, which is: here is what I did, and if you want to take a closer look, you should look directly at the materials and the code and the analysis, and all of that is there for you.
But I'm not going to try to give all of that to everybody at once. And so I've had people write back and say, you know, I read your paper to my eight-year-old daughter and she could understand it, which is exactly what I want, because that eight-year-old might one day be the greatest scientist of the next generation, and she understood that science is something she can do. But if it had been written in the format I would have had to use for a journal,
she would have been like, oh, science is something that you only get to do if you're interested in reading boring words for a very long time. Yeah, so I can see that approach of inclusive writing, or accessible writing; indeed, I read your PsyArXiv paper, and it was hugely entertaining but also educational. So you certainly managed to engage me as a non-psychologist, which is good. But if you're doing something like mathematical research or theoretical physics, where perhaps even the thrust of your argument is equation-based, that's a harder job, right, to pitch at a more accessible level?
It might well be. I don't do those things, so I don't know. But I do have the suspicion that if you are unable to explain to a non-expert what you are doing, then you may not actually understand it to the level that you believe. So there may be ideas where, to really get them, you have to look directly at the equations.
But if that's the only way that you could give people any insight into what you're doing, I mean, how do you really know what you're doing? But again, that's not what I do. So it could be that there are just realms of knowledge production that have to be opaque to others. But I would at least bet that they have to be less opaque than most people think.
Sure, thanks. Yes, sorry, you had a hand up. I was just going to throw in, to Adam's point, that the scientific article is part of a net accumulation of knowledge. And so the people who are going to make the most use of it, in terms of producing the next piece of research, are the people who really do need the details, the people able to understand the formulae in great depth and to reproduce them.
And I think that's true throughout science: it's those details that matter. Sure, it's great that other people can understand it sometimes, or not, but it's the next researcher along the line who needs to read your article, really get it, and then add the next brick to the wall. Yeah, I think what they really need is the materials and the code and the data, which is why I think it's vital that they should have access to those, and why I think it's sort of wild that at many journals you do not have to provide those, or you don't have to provide them in a usable form.
Yeah, which suggests, like, what are we really doing? Although I do think that making that work more accessible to more people allows those people to contribute in ways we might not assume they'd be interested in. So I think there are people out there who have expertise or interest that is maybe outside of the academic mainstream, but who still have something to contribute.
And the more you make it accessible to them, the more you open that up. I've found this myself with making my work more accessible to people. I have people, especially from other disciplines, saying, oh, here's how this would make sense in what I do, in a way that they would never have spoken to me if I had published in a psychology journal. They'd go, oh, that's psychology.
Like, I don't have anything to do with that. Great, we have some questions from the audience. Specifically, I'm going to start with one from Lisa Schiff, and the question is for you, Jasmine. Lisa says: I really appreciated your mention of training and educating reviewers, since that is not typically a part of graduate school programs. For a publisher like PLOS with many journals, is reviewer education conducted across all journals, or is it left up to each journal to do that separately?
Yeah, each journal does it separately, but we do have onboarding for our reviewers, so it's a process they have to go through. And it's different because each journal has a different discipline and needs a different rigor according to that space. So they all do it differently; they all have their own sort of onboarding. And I think there's even more to beef up in that space, with how that looks and whether we're completely giving them everything they need to be successful.
But it is very... Great, thank you. And another question, from Isa, for Fred this time. What challenges, if any, has eLife encountered over the years in conducting collaborative reviews? So I think you are maybe unique in having all reviewers compile their notes and come to a single consensus to communicate to the author.
What kind of issues have you had with that over the years? Well, probably the ones that are sort of semi-obvious. Sometimes there isn't a consensus, so the communication that goes to the authors is that some reviewers felt this, and other reviewers felt that. And obviously the process takes longer.
It takes longer because the reviewers have to meet and discuss in detail. But the benefit of that process for authors is that, although it takes longer, they do generally get a relatively consistent piece of feedback; they don't get conflicting pieces of advice from reviewers, number one. And number two, I think that's one of the main benefits of the process.
And what kind of communication takes place with authors of manuscripts that eLife editors have desk rejected? I guess they get some kind of email which says something or other; what form of communication is that, and what does it say? Yeah, so I think we aim for anybody who's getting rejected to have a reason for being rejected.
Obviously, authors would like that to happen as quickly as possible, so there's a bit of a trade-off there, because someone has to craft a response that is helpful and not simply critical. And I think that's something that our editorial leadership has been trying to foster and work on for some time now.
So the aim is definitely not that you don't get any sort of information. You should get a reason for why eLife has decided not to review your work, and for the most part that would probably be because we don't feel that eLife will do a good enough job of reviewing the work, as we want eLife to become associated with high quality reviews as opposed to high quality papers. That's the aim of this new process.
All right, some more questions coming in towards the end of our hour session. There's one for Adam here as well, which says: unsure what career stage you are at, but is it not important that you be published in a peer-reviewed journal? Oh, yeah.
No, I mean, this is sort of the unspoken part of the conversation we're having: that peer review is also a way by which we administer status to people in the hierarchy in which we live. And if you want to do something outside of that hierarchy, you're going to pay the price. I am just foolish enough to do that, basically. I am willing to. I think this thing is important,
I think someone should be doing it, and I'm willing to face the consequences for it. So part of what I hope to one day do is be able to continue the research that I do and find a different way of supporting myself, other than being a tenure-track professor. That also used to be much more the norm: there were all kinds of people producing science who didn't necessarily have a university affiliation, or didn't all the time.
And I just think there should be more people doing that, and there should be people modeling for younger people that being a scientist doesn't require being an academic. So, yeah, you're 100% right: this is risky, and I don't recommend doing it unless you're willing to take risks. Great, thank you.
Another question, from Regina Reynolds: has the rise of so-called predatory publishers, who often claim peer review but seem not to provide it, made peer review more important or degraded it? And how does the presence or absence of an ISSN affect the credibility of the journal, if at all? And for full disclosure, she says, I am a director of the US ISSN Center.
So I don't know who would like to take that question, but maybe we can start with ISSNs and then what predatory publishers have done to the perception of the review process. Yes, Tim? No, I don't have anything to say about ISSNs, but on the idea of open peer review:
I think there was very broad consensus that if journals published the peer reviews alongside the article, that would really distinguish journals doing proper peer review from journals that are not bothering with a peer review process and are just accepting the articles as they come in and charging the authors. I'm not so sure that's going to work anymore, because you can say: ChatGPT, give me a 1,000-word review of this article.
And there you go, ChatGPT will write out what looks like a review, and then the predatory journal can post those alongside. So yeah, I'm a bit worried about that happening. I'll add something to this too, if you're done, Tim.
I think that we tend to see the predatory space as a negative thing. I don't see it as a negative thing; I think it's just highlighting areas that we should focus on, and we're not having to do the work to figure those spaces and areas out, because they're making it very obvious. So I think it's worthwhile to start considering them. Now, I'm not saying predatory journals are a good thing, and they do kind of intrude upon that trust in peer review;
that's something that degrades it. But the perspectives are different, right? The perspective of the author is that they see a tool that they can use, and it's easy. The perspective of the publisher is that something is coming in and infiltrating our systems. So I think it's a bit of a give and take, where we can learn a lot from what they're doing with these different things, the fake papers.
I think it teaches us a lot about our processes and where we need to beef up compliance, or whatever we need to do. But we're just seeing it as something negative; it's highlighting everything we're doing in a bad way, which it does. But we could use that information to make something positive start to happen. We tend to be very reactive with these things, as opposed to sitting back, taking in what's actually happening, and thinking about ways that we could use this to our advantage.
Right? Not every bad thing has to be dismissed as purely negative; it doesn't have to have a negative impact. So, I don't know, that's just what I offer to that space: maybe we need to shift our perspective on how we're seeing these things, so that we're not just reacting with how do we stop it from happening, but asking how we get involved and make it something that can be used in a positive way, right?
And I hear a lot of that, Tim, in what you're saying about the checks: how can we start using these tools to better what we're doing in some way, versus just, oh my god, it's a negative thing, it's doing this, and we should avoid it, you know? So I'll just add that to the conversation. Great, thank you. To answer the other part of the question from Regina: does an ISSN affect the credibility of the journal?
In my eyes it did, absolutely. And then, when I came around to setting up a whole suite of new journals for a new publisher, I applied for ISSNs through the international ISSN registry in Paris, and it was a very straightforward procedure. Prior to that, I thought ISSNs were a kind of stamp of credibility; yes, they conferred some credibility on a journal title.
And then I found it was just a question of filling in a form: you get an ISSN back, and then six months later you need to have published something and it's assigned to you. So yes, as a reader I always thought they conferred credibility, and then as a publisher I found it was relatively easy to get an ISSN issued. So I'm not so sure.
That's just a personal viewpoint. I'll stop there, because there's a very interesting question I want everyone to respond to before our time is up in five minutes' time, and it's come from an anonymous person. It says: there are some communities that are experimenting with incentivizing reviewers via financial support or APC waivers. How effective do you think these initiatives are, and are there concerns with providing monetary incentives to reviewers?
I don't know who would like to start us off with that one. Are there concerns with offering monetary incentives to reviewers? Yes is the answer. Alison and I wrote a long Scholarly Kitchen post about it. It seems like a great idea, but it would be incredibly corrosive to the system. Incredibly corrosive.
But we had so many points, I'm not sure I want to start going through them in the last five minutes, so I'm going to stop there. We'll go and read your piece after this, so thanks for putting that out. I think also, from a pragmatic perspective, it's quite a big task as well, because of all of the different issues around
being able to essentially pay people when they work at institutions that can't actually accept payments, and issues like that. Sorry, go ahead. No, no, I was just saying that I was actually working on something around this, to incentivize reviewers and to widen the reviewer pools, a while ago.
And we were thinking about a kind of flat fee of $50 per peer review, right? If you're in the US or Europe or Australia, probably the admin required within your university to process that $50 payment isn't worth it. So you could volunteer to put that into a pool of money which could be used to send other people to conferences,
say undergrads from the global South, to a conference somewhere in the world. But equally, if you're at a university in Kenya and someone offers you 50 USD to review an article, that's probably meaningful, right? So there is a way, I think, that payment of reviewers at the right price point can incentivize a desirable improvement in the overall ecosystem.
So I'm hopeful that there is a way that incentivizing reviewers with relatively modest payments could be useful. I don't know if anyone's got any counter views on this. I think one of the elephants in the room is that a sizeable number of papers are never cited or used again, and so it's unclear from the beginning whether they actually represented a contribution.
And so adding additional cost and time to reviewing them doesn't really benefit anyone, except maybe the person who's pocketing the $50. Which is part of why, if I were a researcher, I'd much rather give up 1% of my grants or whatever to fund something centrally that's going to try to do a good job of looking around the field and seeing, what are the important things that people are really relying on,
things that would make a big difference if they turned out not to be true? And we are going to be the people who open up the data sets and check the analysis and see to what extent this paper actually provides evidentiary value, because that's hard for individuals to do; you have to actually figure out what the variable names meant. I'd rather do that than try to make individual reviews slightly better, because the difference between reading a paper and offering some comments, and reading a paper to the extent it takes to actually vet it, is huge.
And I think it's pretty hard to get people over that chasm without a lot of money. Sure. Well, we are at time, I'm afraid. I just want to say thank you very much to all of our speakers, first of all for their great presentations, but secondly for this very interesting Q&A session afterwards. And thanks to you, the audience, for some stimulating questions as well.
I think with that, I just want to say thank you once again to our speakers. Many thanks. Thank you, Chris. Thank you. Thanks. Bye-bye.