Name:
New Directions in Peer Review
Description:
New Directions in Peer Review
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/7ffed894-684a-40f2-91bc-d1efb1903b45/thumbnails/7ffed894-684a-40f2-91bc-d1efb1903b45.png
Duration:
T01H06M35S
Embed URL:
https://stream.cadmore.media/player/7ffed894-684a-40f2-91bc-d1efb1903b45
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/7ffed894-684a-40f2-91bc-d1efb1903b45/new_directions_in_peer_review_2022 (1440p).mp4?sv=2019-02-02&sr=c&sig=Qr2yoF6p7RzFkgfblvLwRT02grsHIC9RvXbhYqYItB4%3D&st=2024-11-20T02%3A38%3A12Z&se=2024-11-20T04%3A43%3A12Z&sp=r
Upload Date:
2024-04-10T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
My name is Ben Mudrak. I'm a senior product manager at ChemRxiv, and I have a really great panel here today. I'm going to let them tell you a little bit more about themselves, but just to mention who they are: Yael Fitzpatrick, John Fisher, and Paige, who is hyperlocal here from HQ. So we're going to talk today about new directions in peer review.
And I want to take just one moment to lay out some of the goals we had for our session, and then I'm going to let each of them speak for just a couple of minutes. They have some slides. Yes, that's a good question. It is. It's pretty chill. All right, I'll do some beat poetry. Sorry, some of you may not have heard the announcement: we're trying to move this session up so we can keep everything flowing, since the last one was able to end a little bit early. I'll take a minute. We've got to work on the music.
You want a martini? All right. Thanks for the suggestion. Thank you, Bob. So, as I said, we'll just have each of the panelists talk briefly at the beginning. I have at least one question I want to ask them, and then I'm hoping that we can continue the trend and have a discussion.
So whatever you're interested in, we can talk about your observations as well. We're hoping to represent the various stakeholders in peer review and their viewpoints with this panel. We'll provide some data about peer review, highlighting some metrics that are really important to track if you're interested in peer review and how well it's working; describe some of the ethical issues that arise in scholarly publishing, and when and where and how peer review is able to address those, if at all; and then discuss the balance between the need for a thorough peer review process that really safeguards and improves the scholarly literature, and the need to streamline in the face of ever-increasing scholarly output.
I think that's the crux of the tension we're all feeling right now. All right, so I'm going to stop for the moment and turn it over to Yael to talk a little bit about ethical issues. Thank you, Ben. So, last session of the day; let's get everybody's energy pumped up by talking about ethics.
Ben, how do I get rid of that? Sorry. OK, thank you. All right, colleagues, we'll make up the time that we gained by starting early.
So, just a few benchmarks to start with. I'm going to be talking primarily about STEM issues, just because that's where my experience is. As we have this continued conversation, I'd be really, really interested to hear viewpoints from folks in the humanities and social sciences and have that be part of the entire conversation. So thank you. I am Yael Fitzpatrick.
I'm the editorial ethics manager for the Proceedings of the National Academy of Sciences, and I am excited to be here talking to you about ethics and peer review. Ideally, ethical issues don't come up a whole lot in peer review. It's not exactly the rainbows-and-unicorns side of scholarly publishing. I'm not sure what is the rainbows-and-unicorns side of scholarly publishing, but the issue of dealing with ethical concerns is hopefully an outlier.
Unfortunately, the reality is that it's not always an outlier, but it's a really important thing to consider. And I would say that, especially in this day and age, when we're living through kind of an unprecedented level of distrust of the truth, it's all that much more important to handle ethical issues and to handle them carefully. The entire process of research and research dissemination relies on an inherent level of trust.
So, just some baseline ideas to throw out there. When dealing with ethical issues, it's always preferable to deal with things pre-publication. That kind of goes without saying, and it's not always possible, but that is the goal. Something else to remember: it might sound simplistic, but the right thing to do often isn't the easy thing to do.
Usually it's not the easy thing to do. That doesn't mean that it shouldn't be done; it's all that much more important. Keep in mind not to jump to conclusions. Remember that there's usually a lot of nuance; there are usually multiple facets to any issue. Research is conducted by humans, and humans are fallible. And oftentimes there might be innocent parties who get caught in the crossfire of something.
Try to keep that distinction between the people involved and just the cold, hard facts of the research and what is at stake. And remember to have empathy. And I'm not saying that everybody should be like warm and fuzzy. Having empathy doesn't necessarily mean that you're compromising objectivity. It's always appropriate to be a human being.
There's no downside to being kind. I don't see Jennifer Regala in this room anymore, but to quote her: it is free to be nice, and y'all, comb your hair. And I think that is really, really true. So there's no downside to being kind, as long as you're staying neutral. You've got to stay neutral, partly, unfortunately, because you need to consider the potential for legal repercussions, which sometimes can come very much into play when you're dealing with serious ethical issues.
And being empathetic doesn't mean that you're going to make a decision because you feel sorry for anybody. It's important to be compassionate and to remember that if you're dealing with a serious ethical concern, you're likely dealing with somebody at a real low point of their career. So just be a decent human being.
So, to run down a very non-exhaustive list of different kinds of ethical issues that you might come across: these are things that I have dealt with at various publishers that I've worked with, but I think they're universal for a lot of people in the scholarly space. The one that we maybe hear about the most is image-based concerns.
There are a lot of problems coming to light more and more with image forensics, through things like PubPeer and various whistleblowers that bring them to light. Text-based concerns obviously are important too; I think in the previous panel, people were talking about iThenticate and different concerns around plagiarism. Competing interests can be a real doozy.
Again, we rely on the inherent trust of the process, and sometimes somebody might not disclose a competing interest when they should. That can be problematic enough to lead to the necessary retraction of an article. Authorship disputes are not very visible to the public, but a pretty prolific concern in the ethics sphere. It's very common that a journal or a publisher will get a complaint from an author saying, you know, I didn't get the credit that I should have gotten here, or I should have been first author instead of fourth author, or I did this but I wasn't credited for it.
So that's a big deal, and I'm going to talk a little bit more about that in just a few minutes. Duplicate submissions: I think most journals have it as a standard that you may not submit the same work to different publishers at the same time. Data sharing is a big deal; I think most publishers also require that researchers share their data.
Sometimes they resist that, and that needs to be addressed. Issues with the ethics of human subjects or animal welfare become very delicate things to deal with as well. And multiple jurisdictions: that's a case where, as I mentioned earlier, there are often multiple sides to a story. At a publisher that I have worked at, there was a case where published research dealt with human subjects from one particular jurisdiction.
We were contacted by representatives of a different jurisdiction saying these authors did not have the rights to do this. It seemed like a really cut-and-dried, very clear case of misconduct. We were actually prepared to retract the article and contacted the authors to alert them, and they came back with a completely different side to the story that turned out, after a lot more assessment, to be completely valid.
OK, some of the big questions that I want to talk about. We don't have a lot of time here, but two things that I think come into play more than anything else are the question of using artificial intelligence and machine learning for image forensics, and the concern about how long it takes to address ethical issues. So first, image forensics. There is a misconception that it should be really easy to just hit a button and find everything that's wrong with an image. That is highly, highly unrealistic.
I wish it weren't. There is a lot of development happening in the software sphere to try to address this issue. It is a massively, massively complex problem; I don't know if we will see it solved in our lifetimes. It's getting better. But there's this misconception. My colleague Lilian earlier was joking about how it's like on Star Trek, where you just say "computer, enhance." That's not real; that doesn't exist.
It's a massive problem. So when people say, well, why did this get through, why is this happening: it's not that we're not trying. It's that the problem is larger than us at the moment. And then, why does it take so long to address things? Believe me, every time I see a post on Retraction Watch saying this paper got retracted in three days, how come these other publishers can't do that too: I agree that things take too long, but it's not always realistic for things to happen that quickly, for a number of reasons. There is a lack of resources. There's just a massive quantity of images, and of issues, that we're trying to address. There's the complexity of a lot of these ethical concerns.
There's the number of people involved. There's the need for multiple levels of review at every stage. And at the end of the day, there's the need to be mindful of the potential consequences for those involved, and from that, the need not to jump to any kind of conclusions. So, thinking very, very long term. Can't talk.
At the end of the day, what can publishers do, and what should they not do? I'd say first and foremost, be as transparent and communicative as possible, with the understanding that our industry relies on a great deal of confidentiality that doesn't necessarily need to be compromised. Be as transparent and communicative as possible with whistleblowers as well. Sometimes a whistleblower might be an early career researcher who was terrified to send you that email, and making them feel like they're being listened to and heard can help to continue developing that safe space.
Then there are whistleblowers like the Clare Francises of the world, whom publishers love and hate, because they will flag concerns that are valid, but in a sea of other things that are not valid, and that sucks up so much time and so much energy. So where is that balance? I don't know if any of us have found it yet. Publishers can and should update the publication record as appropriate, whether with expressions of concern or corrections or retractions, and, in extreme cases, bring a concern to the attention of an institution or a funding body. That's something we should do.
And I was so glad to hear multiple other speakers talk about this today: building partnerships. There has been a tendency in our industry for publishers to be very, very siloed and to keep your cards very close to your chest. A lot of what we do as publishers relies on a very strong level of confidentiality, and we can still maintain that while partnering with one another to better the entire enterprise.
There are people who, I'm happy to say, are part of this conference, whom we've been able to partner with to address ethical concerns shared across publishers or across institutions. So I would say let's keep doing that. And just general awareness: raising awareness of these concerns is a big deal. Making people feel like this is something that people are interested in and concerned about, and please come to us. And then, supporting efforts for improvement.
The holy grail is developing good image forensics software. It's like a moonshot in a way, but building support for that is very, very important. A dark underbelly of something that publishers really should do: be really, really conscious of records, because materials can be subpoenaed in court. There's that old trope: before you send an email, think, would you want it printed out and posted on the bulletin board in your lunch room?
Think of that times 1,000. So just be really, really conscious of what you're putting in writing. Should something maybe sometimes be a verbal conversation instead? Just something to consider. And then, something that publishers should not do, and this might be a little bit divisive: investigate. It might seem like semantics, but it's important. Publishers, by and large, are not investigative bodies.
We don't have the jurisdiction to subpoena lab books. We don't have the jurisdiction to gain the information that others, like an institution or ORI, might have. It's very common that people will say, oh, can you investigate this? So as a publisher, we try to be really mindful of not using that word, and instead say things like assess or evaluate or review.
It might seem like a minor thing, but it's really not. And that feeds into another thing, about authorship disputes. We do not mediate authorship disputes, for exactly that reason. If a researcher contacts us and says, well, I did this experiment and this person was credited for it instead, we don't have the jurisdiction to determine or assess that.
So we tell them that their institution needs to make that determination. And then lastly, I just want to talk about the paradox of intent, because publishers should not consider intent when it comes to ethical matters, but it's impossible for publishers not to consider intent when it comes to ethical matters. It's something that I think is always in the back of your mind when you're evaluating something that might be coming up.
But it can't be something that, at the end of the day, clouds things in an inappropriate way. I think I've probably gone over my time, so I'm going to stop yammering and pass it back to Ben. All right, thank you very much. So we're going to move from the perspective of someone who works at a publisher to an important counterpart: someone who's working at a journal and representing the reviewer and editor mindset.
So, John. Thank you, Ben, and thanks, everyone. This is a new kind of meeting for me; I've never gone to a scholarly publishing workshop, so I was really interested in the earlier discussions and learned a lot. I'm John Fisher, Professor and Chair of the Bioengineering Department over at Maryland.
Just a few miles to the west. And just some initial thoughts to get the conversation going, as a researcher and a reviewer and an editor. My background, to tell you a little bit about me: I've been at Maryland, actually in my 20th year now, which is hard to believe, and department chair for the last seven or so years. I work in the area of tissue engineering, and I'm editor-in-chief of the journal Tissue Engineering, which is one of Sophie's journals at Mary Ann Liebert.
I've also done some society work, and I'm past president of the Tissue Engineering Society. I mention all of this because you see different things in the publishing world from these different perspectives. Societies have one perspective on what publishers and journals should and shouldn't do. As a department chair, I'm always worried about our young faculty.
How are they getting everything they need so that they can build a strong research group and do what they need to do, not only so they can get promoted, but so they can have a really fruitful career? I'm also worried about the graduate students. Graduate students need to be able to publish so they can not only graduate but also get a job. Most of them aren't interested in working in academia.
Some of them are, but it's important for them and for their careers as well. And then, what I do, though no one cares: I'm into tissue engineering biomaterials. We do printing and bioprinting in a variety of different areas: cardiovascular, bone and cartilage, and we've done placental and other tissue complexes, all this kind of stuff.
But that's the area I work in. We've published about 200 papers, so I bring a little bit of that perspective as well. I think it's rather obvious, but it's important to say: why do we need scientific review? Because the experts in the field are well suited to comment on work. I wouldn't say best suited, or always the ideal ones to look at things, but they're as well suited as probably anyone.
They look at things like: is the work significant? Is it going to have a real impact on the field? Is the work of high quality, with increasing emphasis on rigor, statistics, and reproducibility, and now an increasing emphasis on things like diversity, equity, and inclusion in the research team, the research context, the project itself, and the impact.
And, as Sophie knows, our journal is very much worldwide. Our editorial board has members throughout the whole world, and the society that I was past president of has chapters throughout the world. These things are looked at very differently as one moves from the United States to other parts of the Americas, to the EU, to Africa, to Asia. There's more or less emphasis on some of these, and it makes things a little bit complex.
And then again, thinking as a department chair, I'm not excited if I feel like our assistant professors are being put at a competitive disadvantage because folks in other places care a little bit less about some of these things. So that's something to keep in mind as well, if that all makes sense. I'm trying to say it subtly; afterwards I'll be more direct, and sorry for the comments.
So what are some of the challenges? For reviewers and editors, there are just too many journals. I mean, it's ridiculous. I will open up my email and have five requests on any given day to review for journals I've never even heard of. It's too much.
It's unbelievable. I don't even know how it can keep on going; it's really mind-boggling. So what I've done personally is hunker down to the journal that I help out with, the journals we publish in, the journals I think are important to our field.
And I just start saying no to everybody else, which is not great. When I started as an assistant professor, I would review for everything; I think in my CV I have like 50 different journals I used to review for. Now I've just hunkered down, because there's too much. It's really, really difficult to manage.
So I think that's a big challenge for review: making sure you have a good, relevant reviewer database for your journal, that the people you're asking to review are current, they know what the heck is going on, they're still working in the field, and ideally they're still publishing. That's a big, big issue. And then there are the obvious things that go along with bias.
You can have a reviewer that doesn't like the author. I would be kind of open to really pushing toward everything being anonymous, but that's why I put journals down there: there might be biases there as well, because there's an obvious motivation for a journal not to have things anonymous, so they can make sure they publish people who are really high in the field and the journal gets lots of visibility.
So there's some tension there. We've tried really hard to support the younger folks in our field to help grow the field, but you're doing that as a little bit of a disservice to your own journal, because those folks are going to be cited less, and your impact factor is going to be lower for it. So there's some tension there as well.
And then I thought it was important to say: as reviewers and editors, think about what impact we have on the field of science. We establish what's quality work and what's not quality work. I tell my students: when I started in the lab, what counted as a sufficient number of replicates, an n of three, assuming your data was normal, was completely fine 20 years ago.
Now it's at least an n of 5, and you have to show normality or use a non-parametric approach to analyze your data. That's just the way it is, and I think some of those things are good.
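To make that expectation concrete, here is a minimal sketch of the decision John describes: check normality first, then choose a parametric or non-parametric two-group test. The replicate values, the 0.05 threshold, and the specific tests are illustrative, not something prescribed in the talk.

```python
# Minimal sketch of the reviewer expectation described above: test for
# normality, then pick the analysis accordingly. Data are made up.
from scipy import stats

control = [1.02, 0.95, 1.10, 0.98, 1.05]   # n = 5 replicates per group
treated = [1.40, 1.55, 1.32, 1.61, 1.48]

# Shapiro-Wilk tests the null hypothesis that a sample is normal.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (control, treated))

if normal:
    # Parametric route: Welch's t-test (no equal-variance assumption).
    result = stats.ttest_ind(control, treated, equal_var=False)
else:
    # Non-parametric route: Mann-Whitney U test on ranks.
    result = stats.mannwhitneyu(control, treated)

print(f"normality assumed: {normal}, p-value: {result.pvalue:.4f}")
```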
You have really rigorous data set, but now we've got a poor graduate student that's publishing one paper in five years because they need to do five years worth of work to collect all the data that's being asked for in the papers. And that's putting the graduate student in a tough spot as well. I think it impacts the what scientific fields are popular, what scientific fields are diminishing, what scientific fields are growing.
So I think our role is really to kind of define what tomorrow is in science, and that's a really important decision because it affects a lot. And then finally, I think review ultimately is going to select what's funded, what work is going to be funded in the future, and ultimately funding is going to drive what's successful.
So those are some things to keep in mind as well in our roles as editors and reviewers. Those are some initial thoughts. I don't know why I put the obligatory acknowledgments slide for my lab at the end, but I did; that's just what I do. All right, thanks. When I saw that at the end, it brought me back to my days at the microbiology conferences.
Thank you so much for sharing that perspective; I think that was really, really helpful. So I am going to switch over to Paige's presentation in just a moment. Why not? All right, just make sure you can see it.
We'll kind of just open Spotify and listen to some music. All right. So my name is Paige, and I am here at AGU. I've been here for about 14 years. I started in peer review, and now I do data analytics. I think some of you have heard some of my talks. I titled my talk New Directions in Peer Review Data.
So I do a lot of data analysis of our publications data across AGU. The theme of my talk is exploring gut feelings with data, and we'll see why in the slides coming up. A little bit of what John was talking about earlier: there's a lot more research and a lot more journal articles published now.
There's a lot more data and analysis software, and I will show that there is a lot more analysis of science and of peer review. So here, I did a search in Dimensions and looked for papers that had "peer review" in the title or the abstract, going back to 2014. We can see that in 2020 and 2021
there was a big increase in the discussion and analysis of peer review in general, and in the publication of articles about it. I was wondering if it had something to do with the COVID bump, in the sense of whether there were more papers about peer review of COVID research, medical research, review of vaccines, and issues in COVID, but that does not necessarily account for all of the increased interest in and publication of articles on peer review.
And on the slide I jumped over: we see a lot of articles on peer review in medical and health sciences, but we also see them in education, physical sciences, and information and computing sciences as well. For example, we obviously had zero papers on COVID in 2019, and about 800 more articles on peer review in 2020 than in the previous year, but only 266 of those were about COVID or the pandemic.
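As an illustration of the kind of tally described here, a minimal sketch, assuming the Dimensions search results were exported to a CSV with one row per article; the file and column names ("year", "about_covid") are hypothetical placeholders, not the actual export schema.

```python
# Sketch of the year-over-year tally described above, from a hypothetical
# CSV export of the search results (one row per article).
import pandas as pd

papers = pd.read_csv("peer_review_papers.csv")  # hypothetical export

by_year = papers.groupby("year").size()
growth = by_year.diff()  # change vs. the previous year

# "about_covid" is assumed to be a True/False flag on each article.
covid = papers[papers["about_covid"]].groupby("year").size()

# e.g. growth at 2020 would be roughly +800 articles, of which ~266 were
# about COVID, so the pandemic alone does not explain the increase.
print(pd.DataFrame({"papers": by_year, "growth": growth, "covid": covid}))
```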
So at AGU, I look at our peer review data, and some of the trends that we've seen are going to be familiar to everyone here. These are just some examples of what we show our editors, what our editors ask us, and what the data really shows. One of the issues is the share of peer-reviewed submissions from various countries.
It looks like this; we've divided it up by region. These are submissions that were submitted to us in 2020 and 2021 and sent out for peer review. But the share of invited reviewers from those same regions is very much different. Many more submissions are coming from China and being peer reviewed, but we're not asking people from China to help us as much with that extra peer review. And you can see it's reviewers from Europe and the US and Canada who are being asked to review, like John here and his 20 requests per day.
And then I just wanted to show that about 20% of the accepted articles are from China. About 7% of our invited reviewers are from China, and 20% of the accepted articles are from China. So we do have qualified authors from China who are not being asked to review.
That's just one example. In this case, we have a gut feeling that we have a lot of submissions from China, a lot of our reviewers are saying no, we're sending a lot more out for review, and we're requiring more of our reviewers. This is one of the cases where we were able to see with numbers what the discrepancy is between our author pool and our reviewer pool.
And I just wanted to show the agree-to-review rate for invited reviewers from China: 72% agreed to review, and this is for 2020 and 2021, while those from Europe and the US are at 44% and 39%, around the 40% range. So people from China are eager to review. And we do see that if you're asking someone 100 times to do something, versus asking someone else one time, the person you've only asked once is more inclined to say yes.
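A minimal sketch of how these regional comparisons can be computed from an invitation log, assuming one row per review invitation; the file and column names ("region", "agreed" as 0/1) are hypothetical.

```python
# Sketch of the region comparison above: where invitations go, and how
# often each region agrees to review. Columns are hypothetical.
import pandas as pd

invites = pd.read_csv("review_invitations.csv")  # hypothetical export

# Share of all invitations sent to each region.
invite_share = invites["region"].value_counts(normalize=True)

# Agree rate: fraction of invitations in each region that were accepted.
agree_rate = invites.groupby("region")["agreed"].mean().sort_values()

print(invite_share.round(2))
print(agree_rate.round(2))  # e.g. China ~0.72 vs. US/Europe ~0.39-0.44
```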
So, on expanding the reviewer pool: if you can get this data, show it to your editors to see in what regions you are getting the highest agree rates. Another thing that I've been asked is: will increasing your editorial board from certain regions increase your reviewer pool in those same regions? What you can do is look at the invitations that go out, and at the region of your editors versus the region of your invited reviewers.
For example, we have a few editors and associate editors of AGU journals who are based in Africa, and they both invite reviewers. Of the invitations from the editors from Africa, 4% went to Africa-based reviewers, and 40% went to the US and Canada. So we can see that the networks of our journals are still heavily based in Europe and in the US and Canada. For Australia and New Zealand,
only 11% of their invitations went to people in their region, while 42% and 40% went to the US and Europe. China is a little bit better: China-based AEs and editors of AGU journals do bump up their network to China a little bit more, so 20% of their invitations went to China, but they still have a very high request rate to the US and Europe.
And then, this is not surprising: for Europe, 43% went to their own region. For Mexico, 7% of their invitations went to those in the region. For the rest of Asia, where India, South Korea, and Japan are the biggest ones here, 18% went to people in their region. And the US?
There we go: 53%. But if you just look at the totals, you may think that increasing your editorial pool in certain regions will have a different effect than it actually does.
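A minimal sketch of that editor-to-reviewer network breakdown, assuming the same hypothetical invitation log now carries an editor region and a reviewer region on each row.

```python
# Sketch of the breakdown above: for editors based in each region, what
# share of their invitations went to reviewers in each region?
# Column names are hypothetical.
import pandas as pd

invites = pd.read_csv("review_invitations.csv")  # hypothetical export

network = pd.crosstab(invites["editor_region"],
                      invites["reviewer_region"],
                      normalize="index")  # each row sums to 1.0

# e.g. the Africa row might show ~0.04 to Africa and ~0.40 to US/Canada,
# the pattern described above.
print(network.round(2))
```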
So this is one of the things that our editors say, especially in 2020: I feel like I'm having to send out more review requests each year. What I can do is look at the review requests per manuscript sent out for review, and they're right. In this case they're right, and this is just an average. Sometimes with averages it's difficult to intuit what 5.56 means on a daily basis for an editor, compared to what 5.19 meant in 2020. This is the average per article: of all of the articles sent out for review, I took the number of review requests divided by the number of manuscripts sent out.
And this is for journals that require either two or three reviewers, so it would be great to break it down by journal as well.
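A minimal sketch of the requests-per-manuscript metric just described, again with hypothetical file and column names.

```python
# Sketch of the metric above: total review requests divided by the number
# of manuscripts sent out for review, per year. Columns are hypothetical.
import pandas as pd

invites = pd.read_csv("review_invitations.csv")  # hypothetical export

requests_per_ms = (invites.groupby("year").size()
                   / invites.groupby("year")["manuscript_id"].nunique())

print(requests_per_ms.round(2))  # e.g. ~5.19 in 2020, rising toward 5.56
```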
Another thing a lot of editors say: I feel like women just decline more because they're busy. In this case, it's true. Women do decline more. These are the agree-to-review rates per year. You can't see it well, but the first set is 2018, then 2019, then 2020, then 2021. In each year, women have a slightly lower agree-to-review rate than men.
Sorry, have you looked at preliminary data? The question was, have I looked at 2022 data. The tough part of half-year data is that a lot of the manuscripts submitted haven't actually been sent out for review yet, and it maybe takes one or two months to acquire those reviews.
So I haven't looked at 2022 data, especially for this kind of situation. Is there a difference in the number of invitations per person based on invited reviewer gender? That's another question I'm asking: women decline more, so am I maybe not going to ask them as much? What this shows is review requests per person.
I took the total review requests and divided by the number of distinct reviewers invited, broken out by gender. Unknown gender is actually much lower than both of these, but I'm just looking at the records we have gender data for. The same men are invited more often than the same women, and that's across all years. We do see both of those numbers going down a little bit, which means we are expanding our reviewer pool.
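A minimal sketch of that per-person normalization, assuming hypothetical "reviewer_id" and "gender" columns on the same invitation log.

```python
# Sketch of requests per person: total review requests divided by the
# number of distinct reviewers invited, by gender and year.
import pandas as pd

invites = pd.read_csv("review_invitations.csv")  # hypothetical export
known = invites[invites["gender"].isin(["female", "male"])]  # drop unknowns

per_person = (known.groupby(["year", "gender"]).size()
              / known.groupby(["year", "gender"])["reviewer_id"].nunique())

print(per_person.unstack("gender").round(2))
```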
So this is another data point that you can look at. And it's up for discussion: because women's decline rates are higher, does that mean I'm going to ask them less often now? I don't know; something to think about. Right, do female editors invite more female reviewers than male editors do?
If we want to bring more women into the peer review process, should we add more women to the editorial boards? Is the assumption true that adding more women to the editorial boards will increase the invite rates for women? So I broke this up into the invitations from female editors: the proportion of their invitations to men and the proportion to women.
We have 67% to men versus 23% to women. AGU membership is about 33% women. And male editors are not that much different. That doesn't mean we're not going to bring on women editors anymore, but this is an assumption we might make that, for AGU journals, at least in these two years, was not necessarily true.
We're still working with the same pool of people. The same structures are still in place, the same culture of editorship, maybe passed down through those same structures. So this is something to talk about: look at the data and then ask, well, what are the barriers for women participating in the review process? And then I would ask, do women invited to review agree at higher rates when the editor is a woman?
So this is reviewer agree rates by gender of editor and invited reviewer. The male editors are in blue; the female editors are in green. For female invited reviewers, the agree rates are 41% when invited by a male editor and 41% when invited by a female editor. And I show the decimal places, the .21 and the .32, because I wanted to make sure you knew that these were different numbers and it wasn't a typo.
And it's the same for male invited reviewers. AGU has a ton of data here; I'm talking about thousands and thousands of invitations, around 50,000, and we probably have gender data behind about 70% of those. So this is not a small sample size. And for all invited reviewers, independent of their gender, 45% say yes to male editors and 44.98% say yes to female editors. So there really is no difference here.
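A minimal sketch of that two-way comparison, assuming the invitation log also records editor gender; column names are hypothetical, with "agreed" coded as 0/1.

```python
# Sketch of the comparison above: mean agree rate broken out by editor
# gender and invited-reviewer gender. Columns are hypothetical.
import pandas as pd

invites = pd.read_csv("review_invitations.csv")  # hypothetical export

agree = invites.pivot_table(values="agreed",        # 0/1 outcome
                            index="reviewer_gender",
                            columns="editor_gender",
                            aggfunc="mean")

# With ~50,000 invitations the cells are well estimated; printing extra
# decimals (e.g. 41.21 vs. 41.32) shows near-identical rates are real
# numbers, not a typo.
print((agree * 100).round(2))
```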
Oh, and I'll note that the reviewers know the editors' names, the editors know the reviewers' names, the reviewers know the author names, and the authors may get the reviewer's name if the reviewer signs their review. So, I'm not a problem person; I'm an opportunity person. I like to say problems are opportunities, and I really believe it.
So, some of the quote-unquote problems identified: author output might be outpacing current reviewer pool capacity, or rather, it is outpacing it. That's one of the reasons why people say the peer review system is broken. I don't think it's broken; I just think it's been strained by all this output and growth that John was talking about, growth that might not be sustainable.
Yeah, I think we're rewarding growth now, especially in the publishing industry, but we kind of have to take a step back and ask whether we want a publishing society or an ecosystem that grows too fast. My take-home point for you all is to create data-driven plans, whatever that means for your publishing operation. That might mean bringing a person in to do data analytics.
I don't have the background; I'm self-taught, I have an English background, so I think if I can do it, anyone can do it. Create data-driven plans, and don't keep assuming things you've assumed for a long time when you could actually look at the data to see if they're the case. So there's an increased onus on the same reviewer pool.
So explore trends in your peer review ecosystem; identify and mitigate barriers to participation. Stakeholders do make assumptions based on what they remember: "I'm having a really tough time finding reviewers; it's 2021, and reviewers are harder and harder to find." Editors might only remember the manuscripts that came back to their desk, where they invited 20 people and all of them said no. You're not going to remember the ones where you had an easy time finding reviewers, because those don't come back to your desk.
So there is bias in the problems that we remember. Create systems in your department that allow you to verify stakeholder assumptions. Since I have this data set, it's relatively quick to parse the data and investigate various trends. Susan already asked, do you have 2022 data? Well, I could easily pull it, and I already have a system that lets me analyze it fairly quickly.
Changes made to editorial boards and peer review operations may not have the intended effect. So again, investigate the data and the trends first to verify or contradict assumptions, and use published research or the experience of other publishers if you don't have that data. This is not to say that we shouldn't make any changes. If we thought bringing more women onto our editorial board would let us invite more women reviewers, and that doesn't work, it means we need to find more creative ways to find potential solutions.
And if those don't work, do something else. So it's a constant process: use best practices that other publishers are using and integrate those into your system. That's it. Yeah, go ahead. Hi, I have a question for Paige.
Thanks for a very, very interesting presentation. At which point do you collect the gender data? And can you use that process to collect data on other demographics, such as ethnicity and age?
OK, so that's the perennial question that we get asked, and we are doing it differently than other publishers. We do not collect gender, race, ethnicity, or age at the submission level, which does bias our collection, because we get it from our member data. And a lot of our members come into the AGU fold when attending our annual conference.
So we have a lot of data on US and European geoscientists, because they're the ones who sign up to attend the meetings and are in the AGU membership ecosystem. So we have a lot of unknown gender for China-based reviewers. It doesn't necessarily affect what I was talking about here too much.
What I do is match our member database to our publications data using email address. A lot of our reviewers have gender data because they are in the AGU network, and the editors are in that network. And we have a lot of unknowns among our newer authors in the author data; that's why we only have gender for about 70% of some groups.
So for about 50% of our authors, the corresponding and submitting authors, we have gender data, and I think it's up to 80% for our reviewers. One thing that I've done is use a gender API to assess gender from names. And again, for US and European names there's a higher confidence in that gender assessment, I guess because they have more data; there are more names that are clearly associated with male or female.
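A minimal sketch of that enrichment step: join publications records to member records by email address, then fall back to a name-based guess. File and column names are hypothetical, and the open-source gender-guesser package stands in here for the commercial gender API mentioned.

```python
# Sketch of the enrichment described above: merge on email, then infer
# gender from first names for the remainder. Names and files hypothetical;
# gender-guesser is a stand-in for the commercial gender API.
import pandas as pd
import gender_guesser.detector as gender_detector

pubs = pd.read_csv("publications_people.csv")   # hypothetical export
members = pd.read_csv("member_database.csv")    # has "email", "gender"

merged = pubs.merge(members[["email", "gender"]], on="email", how="left")

# Fall back to a name-based guess for rows the member data didn't cover.
# As noted above, this is less reliable outside the US and Europe.
detector = gender_detector.Detector()
missing = merged["gender"].isna()
merged.loc[missing, "gender"] = (
    merged.loc[missing, "first_name"]
          .fillna("")
          .str.capitalize()                 # detector expects "Paige" form
          .map(detector.get_gender)         # "female", "male", "unknown", ...
)

print(merged["gender"].value_counts(dropna=False))
```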
So, thank you. I think we have one more over here and then some from online. I have a question that's either to reinforce or to dispel the myth that there are too many journals, and maybe someone in the audience has the current metrics. In the past, I saw the number of papers sort of trending with the number of journals, the two lines keeping a roughly constant ratio.
Has that changed at all? I really think what we're dealing with is not the sheer number of journals; that's not the challenge to peer review. It's really the number of papers being generated worldwide. The journals are just supporting the demand for papers needing to be published.
And I understand from memory that two or three years ago the curves were pretty consistent, so the journals are supporting the publication need. We have to figure out how to do peer review amid the blossoming number of publications, period, because we've asked the researchers to be productive, and they're being productive. Now we have to figure out how to do it. I think maybe that gives a little bit of the answer: there is a lot more work being done throughout the world, but it's being published and reviewed by a not equally distributed group of folks.
So some folks are seeing more than others, which I think is probably a problem. We have a question from the online group that I'll come back to. It's similar to the previous question: what percentage of women is in the reviewer database? Paige, this is for you. Is it similar to the membership breakdown? Also curious about the geographic data.
Is the reviewer database made up mostly of US and European reviewers? I think one of my slides had the invited reviewer proportions; 7% were from China, and I don't remember the others offhand, but it's about 40% US and Europe. On the first question: women invited to review across all of our journals is, I think, about 25% now, and our members are 33% women.
The discrepancy there is age: we bring in a lot of younger women. I haven't looked at the timeline of the percentage of women; we are increasing our proportion of women each year in our membership and authorship and reviewership. It's just still disproportionate, in the reviewer pool especially, because a lot of the older men get invited to review, since they are seen as the experts.
Well, yeah. So I think that inviting more younger women to review is one of the solutions to the problem that my data presented. I wonder if one aspect of that question might be: is there a database, a list of people, that is sort of the first place people go when they're searching for a reviewer? And is its distribution similar, do you think, to what you're seeing, or are your editors taking the time to add to that sort of preexisting list?
And maybe you can speak to this also, John: what's your process there? Are you going out to the whole wide world, or are you starting from a list that may itself already be somewhat skewed? You want to go first? I know, just from listening to our editors, that they do different things. We have various tools within our editorial system for finding potential reviewers, who either were authors on previous papers or reviewers for similar papers.
So we have those tools, and our editors use them. One thing we also do is encourage our authors to suggest reviewers who are diverse. And women submitting papers do, I haven't done this analysis in a while, but they do suggest more female colleagues.
But then again, you're also looking at the papers, the citations and references, to see what authors are on those papers as potential reviewers. Editors try a lot of different things. And I think it's good to take a step back and look at those systems and practices in place, to see if the ways we look for reviewers create bias.
Yeah, thanks. I would just agree: previous authors and suggested reviewers are kind of the go-to. We've had discussions; I generally want to cull the database so that we have just people from the last few years who are super active, people we know, rather than somebody from a while ago.
But I understand there are some forces that maybe advise against doing that. I agree with what Paige says as well. And I think the distribution of reviewers, their makeup in all different ways, really does have an impact on the outcome of things, and you should definitely be paying attention to it.
Thanks. Angela, you were going to go, and then we have, I think, three more, and we'll do our best to get them all in. I'll be quick. In my 22 years of managing peer review, I have now come to the conclusion that it does not matter what tools we give the editors to find reviewers; they are going to use the people they know, because they don't have a lot of time.
They're going to go back to the ones that they know will say yes, the ones that they know write a good review, and they will burn them out and grind them into the ground until they say, never contact me again. It's just the human nature piece of it. It's super labor intensive, but it seems the way to get people in is through reviewer mentoring programs.
I've also been at conferences where an editorial board meeting is starting and an early career researcher comes up and says, how do I become a reviewer for that journal? And it's like, well, don't just go create a reviewer account; nobody knows you. So I encourage them to email the editor, explain what their expertise is, and ask how to get involved. Those reviewer
mentoring programs are how your associate editors are going to get to these up-and-coming researchers. Otherwise, they're just going to go to the same people all the time. I guess that's not a question, but thank you. I think we have one down here, and then Ana, and then... I disagree with what you said. Thank you.
So I just want to commend all three of you; thank you for these very interesting and diverse talks on aspects of peer review. I also had a comment about mentorship, but my question is for all three of you. Clearly, John, it's easier to answer for you, but are you the first person at your organization to hold this job, looking at this information, whether it's ethics or actual data about the peer review process itself?
And to go with what Angela was saying, John, are you actually looking to mentor people to help bring in those next rounds of reviewers? And for Yael and Paige, are you looking at expanding the team who do what you do, given what you've seen? So for me, yes, I am the first person to hold this role at PNAS.
Most aspects of the role did exist previously; they were just done by five or six different people, and they decided to consolidate them into one person. The one aspect of the role that I do bring to the table that they didn't have in-house before is image forensics expertise, which is a blessing and a curse. And, sorry, it's really heartening to see that they decided this is an important thing, to have a dedicated ethics person.
And I'm seeing that happen more and more at different publishers and societies: they want somebody to be that dedicated person, or that dedicated team. And to answer your last question, I hope that it will be expanded, because my to-do list will never be all checked off. But we'll see what happens. As for me, I'm the second person to hold this position.
But there was a gap after the person before me. So it does seem like sometimes publishers or groups treat this as a nice-to-have, someone doing the data analysis, because we can just use our gut instincts and our guesses to make decisions, right? The person before me was a whiz at statistics and data; she didn't have any publishing experience. So she was the opposite of me:
I had very little experience in data analysis and all of it in publishing and peer review. It would be great to groom more people for this. I think the publishing experience is really good to have, along with that kind of critical thinking and eye for the process, and then learning the tools and statistics on top of that. For data analysis we're not necessarily expanding, but there was a position I oversaw that was a lot of just report running, and she retired, so I'd like to train the next incumbent more on what I actually do in data analysis, ideally with publishing experience as well.
So I think moving in the direction of stronger analytical skills is where we're going. Heather, do you have a response to the question? I'll just say real quick: yeah, there's a lot of mentoring, whether it's in the department or in research groups. We've done this in our societies.
The journal has done it as well; it has a peer mentoring operation. So I think that's all happening. I do kind of get the impression that the younger folks are a little bit more transactional in their scientific careers. They want to know what they get for what they put in, and they don't see that so much in societies or in working with journals and all that.
They're a little bit more, I'm just going to do my work and get funding and publish, and I don't really need to do the other stuff. And that's, I think, a frustration. I guess I'm old now; I guess it's a frustration to the older people. Sounds like a new gut instinct for a data analysis to report on. Yeah, yeah.
I could just go really quickly in response to Don's question. At Delta Think, we did a free News & Views back in August where we analyzed the OpenAlex data set from 2008 to 2021. The growth in the number of journals was increasing up until 2017 and then started to fall, the same time that the number of publishers started to fall. In terms of the growth rate in articles, on average it was pretty solid at 5% to 6% per year until 2018 or 2019, when it started to grow.
And then we see, of course, the COVID bump, which we're waiting to see what happens with. But anyway, go check the August News & Views; I can give you the link if you need it, and we can sort that out later over drinks. I want to do one more question over there, and then, unfortunately, we're going to have to wrap it up. We can have discussion after, though, if you'd like.
Thank you. Thanks to all three of you. I want to go back to the controversial, or I think it was controversial, statement you made at the beginning: that publishers should not investigate. I think there is an assumption there. If institutions are to investigate, there's an assumption that the person is associated with an institution, and that it's an institution you can contact.
And I don't think those assumptions... well, I know those assumptions aren't always true. So I wanted you to unpack a little bit more who you think should investigate if there's a problem. If I understand correctly, you're saying that if an institution investigates, there's an inherent bias there, because they're from the same institution as the researcher in question?
No, what I'm saying is: what if it's a fake peer reviewer? There are paper mills, fake peer reviewers, fake authors, that type of thing. Who's responsible then? That's a good question. I would say if it's something that's not necessarily research misconduct, that might be something that a publisher assesses.
My point about investigation was somewhat semantic: like I said, we don't have the jurisdiction to gather the materials, to subpoena the materials we would need to actually investigate something. That's not to say that we can't collect materials to make informed assessments and reviews with the assistance of experts. Does that help? It helps.
To me, it just brings up the problem of who's responsible, and there's not a great answer right now. As a department chair, it stinks to deal with, because it's super frustrating: you hear from the journal or from the funding agency, there's a modest amount of guidance from the institution, and then you kind of have to work through it.
It's a very challenging thing, particularly the ones I love the most, the ones that start with PubPeer. They're very, very frustrating. Someone out in the middle of nowhere says this one image is wrong, and it can ignite months of investigation and lawyers and collecting records.
And it becomes such a resource drain as well. But like I was saying earlier, there are kernels of truth in all of that sea of noise, and that's such a massive frustration. At PNAS, just two weeks ago or so, we actually retracted an article that had been flagged on PubPeer for image concerns. In assessing it, it seemed initially that the concerns didn't warrant more than a correction.
Then we identified additional concerns with the images, concerns that hadn't been flagged, that were very egregious and very problematic, and they led to the retraction of the paper. So it was satisfying to be able to correct the publication record in that way. But the time and the effort and the resources leading up to that are exactly why we're not taking care of everything in three days.
There's a lot going on. I'll just say, the thing that drives me crazy about the images is that I barely believe images in articles anyway. You're supposed to pick the typical, average image that represents your data, but I don't think that's the practice. So I tend to believe the data way more than the images.
And if you're going to make something up, making it up in the images but leaving your data the same would seem like a very silly way to fabricate, you know. Well, first of all, I would say that in many cases, images are data. No, they are data. But I would never give my grandmother a chemotherapeutic based on a histology image.
I would want to see some data, you know. And there again, with image concerns, there's so much nuance, and it goes back to the question of intent, because there's such a spectrum. I mean, if you're talking about a biology paper that has seven figures, each of them with 14 panels of Western blots, it's not unheard of for there to be innocent error in figure assembly or in record keeping.
I wish that weren't the case, but that's just the reality of the situation. Can that, should that, be considered at the same level as something that was clearly deliberately done to mislead? So there's a spectrum. Thank you all. I want to take the opportunity to thank the panel; really appreciate all your insights today.
All right. I'm going to turn it back over to Sophie to close us out. Thank you, everyone. Excellent panel. To close out the day very briefly: thank you all for joining us in person, and thank you for joining us online virtually. Thank you to all of our fantastic speakers and our moderators for taking the time and presenting engaging conversations that facilitated a lot of dialogue, both here in the room and, of course, on our breaks.
We would also like to thank and acknowledge our sponsors for the event: Atypon, Cadmore Media, Morressier, and Silverchair. If you missed any of the sessions today, do not worry: they are available on the Whova app and will be available shortly on the platform. You can access them any time at your convenience. Finally, we have a speed networking, or I should say community developing, session tomorrow morning at 9:00 AM.
Please join us to say hello and make new and old connections. And our first panel session begins at 10:00 AM tomorrow morning. So for now, have a wonderful evening. Thank you all for your participation, and we will see you tomorrow.