Name:
Rethinking Peer Review: Will New Models Bring New Voices to the Scholarly Dialogue?
Description:
Rethinking Peer Review: Will New Models Bring New Voices to the Scholarly Dialogue?
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/9f2b7e8f-08d6-43fe-b6f6-2d518729fc40/thumbnails/9f2b7e8f-08d6-43fe-b6f6-2d518729fc40.png
Duration:
T00H56M06S
Embed URL:
https://stream.cadmore.media/player/9f2b7e8f-08d6-43fe-b6f6-2d518729fc40
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/9f2b7e8f-08d6-43fe-b6f6-2d518729fc40/nds_2023__rethinking_peer_review__will_new_models_bring_new_.mp4?sv=2019-02-02&sr=c&sig=8K74vP9THt%2Bj3zk5Lg3k7LjxEdHFRVfr2QGz7R5p1hE%3D&st=2024-11-20T07%3A35%3A34Z&se=2024-11-20T09%3A40%3A34Z&sp=r
Upload Date:
2024-08-07T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
I'm Jenny Herbert. I'm an associate publisher at the American Institute of Physics, and I am here today to talk to you about peer review, and specifically disruptions in peer review, which I think goes with everything else we've been talking about for the past day and a half. I'm going to let my panelists introduce themselves in a minute, but just to frame the session: what we're going to be talking about today are new models of peer review that bring more people into the peer review process and that provide more value for researchers.
So without further ado, I'm going to start with Janani. Janani, why don't you tell us a little bit about yourself, and then you can hand it off to Elizabeth. Feel free to share your microphone if you want, or you can come up. Hi, everyone. My name is Janani. I'm a postdoctoral research associate at the University of Illinois.
Today I'll be speaking on open peer review models with public participation. For preprints, public participation is common: authors post their manuscript on a preprint server such as bioRxiv, and readers can comment on it before the authors submit the preprint to a journal.
But the idea of readers commenting on papers is not new. In scholarly journals we already have this idea of readers as reviewers, and the first journal to implement it, and one still published today, was Current Anthropology, published by the University of Chicago. In 1959 they created a peer review model, called the "CA treatment," for their review articles. So what is the idea here?
The idea of the treatment was to gather many perspectives to improve a paper. The author submits a review article to the journal, the paper is provisionally accepted by the editor, and it is then sent by mail to a list of readers. These readers could be experts in the topic of the manuscript, or experts in parts of the manuscript. And here is an example of a paper published by this journal, Current Anthropology.
As you can see in the second image, the paper was published with the names of the reviewers: seven readers reviewed this paper. And to recognize the work of the reviewers, the journal placed a star next to each paragraph where a reviewer had suggested changes. In 1978, Professor Stevan Harnad created another such model, open peer commentary.
It was adopted by the journal Behavioral and Brain Sciences and is still in use today. The idea here is that the manuscript is posted as a target article, and you can nominate yourself to be a commentator on the paper; these papers usually receive 20 to 40 comments from readers. At the time, Professor Harnad wrote a paper discussing this idea: why we should increase the number of reviewers, and why we should invite readers to be reviewers. From the side of the editors, the editors may choose the wrong reviewers, they can misunderstand the reviewers' reports, and sometimes authors may not apply the reviewers' advice. And from the side of the reviewers, there is the problem that they may lack sufficient expertise to review the paper. But there are also criticisms of this model. From the side of the editors, they may say that we have a lack of experts to review: how can we increase the number of reviewers if we don't have enough experts? And from the side of the reviewers, they may feel afraid of reviewing in public.
What I mainly want to talk about today is the idea of public review in open access journals. "Public review" is a confusing term: it has different names in the literature and on journal websites. In the literature it is called open participation, crowdsourced review, or public review. The main idea of public review, as I use it in this presentation, is that any person interested in commenting on a manuscript can do it.
You can be a scientist, but you don't need to be one. Harnad adds that in public peer review we can have both reviewers invited by the editors and interested members of the scientific community. To present these models today, I divided them into two categories to make them simple to understand, and the diagrams that I'll be showing I made myself, based on the editors' descriptions.
I'm not going to give many details, but I would like to call your attention to two aspects of these models. They can be hybrid, combining open and traditional peer review in stages, or they can be fully open. These public peer review models can also adopt other traits of the open peer review process, such as open identities and open reports. I would also like to call your attention to where the review happens, by email or inside or outside the journal, and to the types of reviewers in these models.
The first example is an experiment conducted by the Medical Journal of Australia in 1996. The idea of this experiment was to understand whether authors and reviewers would be open to submitting their manuscripts to an open peer review model, and whether readers would engage with the model on the internet. After the manuscript was reviewed by two experts invited by the editors and accepted for publication, the authors and reviewers were invited to join the experiment, and the editor published the paper and the peer review reports on the journal's website. Authors and reviewers could then interact with each other and with the readers of the journal. The commentators were health professionals, health science professionals, doctors, scientists, and also patients. Readers had to reveal their identities, but identities were not mandatory for the invited reviewers. After all of this open peer review process, the manuscript was published with comments selected by the editor.
In the second example, from the field of education, the opposite happens: open peer review first, and then traditional peer review. After submission, the manuscript is published in a discussion forum inside the journal, and people can register to comment on the manuscript. The readers making comments here are mainly from the scientific community.
Here, identities are mandatory for everybody. After the open peer review stage is finished, the author is invited to submit the paper to a traditional peer review process, with reviewers selected by the editor, and the manuscript is then published alongside the comments. Now we move to examples of models that do not apply traditional peer review at all, only open peer review.
This one is similar to the previous model, because the manuscript is posted in a discussion forum inside the journal and readers can comment. Which types of reviewers do we have here? We have reviewers invited by the editors, reviewers suggested by the authors, and members of the World Economic Association. The members of the association can be economists, people with credentials, or lay people.
They are simply interested in economics, and they can join the peer review process too. After the manuscript is reviewed, the editors make a decision to accept or reject; they also review the manuscript and reply to the commentators. The manuscript is published with the reviewers' comments, and these journals have something else: a post-publication commentary section.
Next is the model of the European Geosciences Union's interactive peer review process. Here we have something extra: after being accepted by the editor, the manuscript is posted on a preprint server of the European Geosciences Union, and on that preprint server the author receives comments on the paper from both invited reviewers and public reviewers.
In this case, identities are mandatory for public reviewers and optional for invited reviewers. After the manuscript is reviewed, the author replies to the reviewers' comments and interacts with the reviewers. The editors make the decision and publish the manuscript together with the peer review reports.
This last one is very close to the traditional peer review process, but there are some differences. After the manuscript is submitted, the editors screen it. Here the editors are academics, and they are patients too: we have patients as editors, and the journal invites both academic reviewers and patient reviewers to review the paper.
They review, the editor sends the review reports to the author, the authors reply to the reviewers, the editor makes the decision, and the manuscript is published with the peer review reports, in a PDF format. There is a small difference in the model of the journal Research Involvement and Engagement, because that journal has an internal review before deciding whether or not to publish the paper.
So what is the main idea behind the structure of these models? As we have seen, we have hybrid models and fully open peer review models. What happens here? If I do traditional peer review first, I can affect the way authors and reviewers interact with each other: if I do traditional peer review first and I accept the manuscript, the authors may say, "I don't care about interacting with public reviewers, because my manuscript was already accepted."
If I do open peer review first and then traditional peer review, the idea is to improve the paper based on the public reviewers' comments, and then reviewers selected by the editors conduct the traditional peer review process. And then we can have only open peer review. The idea of public peer review is more than selecting papers; it is improving papers. So there is something more at stake in the peer review process.
Many authors and editors say that the idea of public review is to combine the strengths of traditional peer review and open peer review. What I found is that combining these strengths means making open identities optional for reviewers, while the decision about publication stays in the hands of the experts, the reviewers invited by the editor.
And what does "public" mean in peer review? It depends on the journal. For the Journal of Instructional Research, for example, public reviewers are still scholars; they are not non-scientists. But in health science journals, "public" can include scientists and non-scientists: health professionals, health science professionals, nurses, doctors, and any type of reader.
And what is the idea of public reviewers? They can contribute to parts of the manuscript based on their own expertise, which can mean specialist knowledge or lived experience. They might also say whether a manuscript is easy to understand or not. Public reviewers can make harsh criticisms of a paper, or they can compliment the authors in public too. Regarding adoption challenges, public peer review has been seen as complementary to the traditional peer review process, and criticism of these models goes beyond the qualifications of the reviewers and open identities.
I also found some results linking adoption challenges to lack of familiarity with the internet and technology. And open identities are a sensitive aspect of these models, because many people do not feel comfortable sharing their identities. It seems that if I am a public reviewer, I feel more comfortable sharing my identity, but if I am an invited reviewer, I don't want to share my identity. For my final words, I would like to call your attention to the human aspect of peer review.
I have a research project with undergraduate students to study peer review memories. Some of them are very funny; some of them are not, because in my perspective they are very violent, like this example here. I love technology, and I love the idea of peer review. But as Professor Angela said, peer review is a system subject to the intellectual imperfections of human behavior.
And Emily Ford, another peer review expert, said that we need to rethink peer review, and we need to rethink privilege and power in academic publishing in order to change peer review, because the peer review process is full of relations of power that we need to change to make it better. And we need to discuss the vulnerability of the identities of many reviewers, for example early career researchers.
They don't feel comfortable; I guess everybody here knows it. And some patients feel like, "I don't have anything to say to a scientist." So we need to listen to people. It can sometimes be uncomfortable, but we have to approach peer review as a human system in order to improve it. My final point is about the idea of what an expert is. I have been researching peer review for eight years, and in everything that I read about public peer review, one of the reasons given against it is that public reviewers are not qualified. But what does it mean to be qualified? What is an expert, and what does an expert look like? For example, in the field of librarianship, there is research about the demography of experts, and they are white and they are full professors. And I am an anthropologist; I am an expert in science communication, and I am a postdoc in information science.
And I know that I don't have the expected image of an expert, even though I have all the credentials, because I am young, because I am a woman, and because I am from Latin America. So we need to discuss what an expert is and what experts look like. Another point concerns the peer review process itself: some public reviewers, such as patients, say, "I don't want to review if the editors do not consider my peer review report; I don't want to be included but not listened to."
So we want a fair review process. We also have to consider what an expert is in these times, and we have new consumers of science: lay readers who are beneficiaries of open access journals. We need to think about these people and guarantee that lay readers can understand scientific information, so that they can have science as a public common.
So, thank you. Next we'll hear from Elizabeth. Do I need someone magic to come help me find my slides? There we are. Thank you. In the meantime, I'll introduce myself.
I'm Elizabeth Marincola. I'm very pleased to be here. I'm based in Nairobi, Kenya, at the Science for Africa Foundation, but I've been involved in open access for a very long time, until 2017 in the United States. I was executive director of the American Society for Cell Biology, and our journal Molecular Biology of the Cell was the very first journal to be listed on PubMed Central as open access, under managing editor Heather Joseph, who I think is well known to many of you.
Then, after publishing Science News magazine here in DC for almost ten years, and having served on the board of PLOS, I became CEO of PLOS, where I remained until I decided, for largely personal reasons, to move to Nairobi, Kenya, where I had the opportunity to work for the Science for Africa Foundation, which is what it sounds like.
It is an organization dedicated to advancing science in Africa. We're a little bit like the NIH and the NSF put together, in that we are the largest single provider of research grants, across fields, for all of Africa, including, I should say, North Africa, because when people in the Global North speak about Africa, they often mean sub-Saharan Africa. But I'm talking about the whole continent of Africa, and across fields.
We also have a large policy operation, where we advise governments and other policy makers on appropriate positions to take with regard to science policy and funding. Lastly, by way of introduction, I'll say that unfortunately African governments, for reasons that I'm sure are readily apparent to everybody, don't have a lot of resources for funding science; it has just not been a priority for most African nations.
South Africa is in many ways an exception, but even South Africa is behind many countries in the Global North in terms of the proportion of GDP dedicated to science. So the Science for Africa Foundation is trying to compensate for that by providing grants for research in African laboratories and sites across the continent. OK, so that's me.
I'm very pleased to be here; thank you for the invitation. We launched Open Research Africa in partnership with F1000 in 2017, when I first got to Nairobi. It was meant to leapfrog the existing opportunities for African researchers to publish in Africa-based publications, because right from the get-go it was a completely transparent, fast, open platform that accepts various forms of submissions, not just the traditional research article, and encourages findability and minability through its open format from beginning to end.
You can see, and this is consistent with other F1000 publications, that all versions of the article are published openly, and the reviews are published openly, because we believe that transparency is a big step towards inclusion. Here I'm just demonstrating that we publish all stages of research, and we encourage people to modify their own research.
Versioning is clear, and peer reviews and comments are iterative, so the entire thinking, from beginning to end (or no end), is entirely transparent. Somebody, maybe Jay, was talking in the earlier session, which was very interesting, about the barriers to research. Well, Open Research Africa attempts to tackle one of the big barriers to research, and that is the delay in being able to publish, the delay before being able to satisfy all the comments of the peer reviewers. We publish first: we make the content available, and after it passes peer review it is then indexed. So there is a barrier for indexing, but the content is not held hostage to meeting that barrier.
It is clearly marked as not yet having passed peer review. We attempt to elevate the status and discoverability of outputs beyond the research article. This includes methods, data, software, anything that can be subject to peer review, and even non-peer-reviewed content, which is, again, clearly marked as such but may still be useful: even conference proceedings, that kind of thing.
So, to talk about peer review in Africa: as you can see, there is a very small sliver, represented by the dark blue piece at about 11 o'clock on this chart, of peer reviewers from Africa. This was touched on before: awareness is a first step towards inclusion, and this is one of the reasons that the Science for Africa Foundation promotes open publishing, because it makes our experts more visible to editors, authors, and publishers in the Global North as well as the Global South.
We represent a very small portion of the peer reviewers in the world. Not surprisingly, the big blue piece is the US, the orange piece is the UK, and the gray piece is the EU; the UK, of course, is no longer part of the EU. In terms of breaking it down within Africa, there is one point that I would like to emphasize, and that is that Africa is not the same thing as South Africa.
The state of research in South Africa is somewhat more advanced, for various reasons that may be intuitive or not: they have greater funding for research, and they are simply more visible and more numerous in the research landscape than their colleagues elsewhere in Africa, including the rest of Southern Africa. But I am talking here about the country of South Africa. We want editors, publishers, and authors in the Global North to be more aware that Africa consists of more than South Africa.
So when you think of reaching out to your African colleagues, we want you to reach out to a diversity of countries. I will point out that even though the Science for Africa Foundation is based in East Africa, in Nairobi, Kenya, as I said before, we represent the entire continent. But I will say that there is a critical hub of very good research, especially in the biological sciences, clustered in East Africa.
Very good work is coming out of Nairobi itself, as well as Uganda and Rwanda, and also West Africa: Ghana, Nigeria, and other places. Somebody mentioned earlier today my colleague Joy Wingo, who is at the center for communications at the University of Nairobi. There are people who are really promoting the visibility and the collegiality of African scientists with the rest of the world.
So when it comes to peer review: why should we care that researchers in Africa are included in peer reviewer databases, and included in one's thinking when authors are asked to suggest peer reviewers or when editors are selecting peer reviewers? What are the advantages of participation in peer review? Visibility, clearly, especially when they are reviewing in an open environment where the review itself, as well as their identity, is known. And they get a first look at new findings.
This enhances professional knowledge and opens up doors for collaborations, and reviewers, as we know, often serve as a kind of farm team when editors are looking for new members of editorial boards. So why are African researchers poorly represented in the peer review community? Well, obviously there are fewer researchers in the Global South, and especially Africa, than in the Global North.
But there are other issues woven throughout these considerations. One is bias. African researchers as individuals may be less well known to Northern editors, so there is no natural gravitation toward them. And there is a somewhat insidious belief that African researchers may know something about, quote, "African diseases," but may not be the people you think of when you think of first-world diseases.
So, for example, an African researcher may be an intuitive colleague to reach out to for expertise in malaria or AIDS, but not so much cancer or heart disease. Yet the good news is that communicable diseases in Africa are greatly declining, thanks to global efforts, starting with Africa itself and joined by the world community, to tackle those difficult communicable diseases that have killed millions in Africa over the last decades and centuries.
But the bad news is, of course, that non-communicable diseases are increasing greatly: the diseases of sufficiency in terms of food, lifestyle, transportation, that kind of thing. More and more Africans have obesity, heart disease, and cancer, conditions with an environmental element that they were not as heavily subject to in past decades and centuries.
So there is a lot of research going on into non-communicable diseases in Africa, to benefit the African people but of course also people throughout the world, and African researchers should not be dismissed in terms of their expertise in these areas. What are some other barriers to peer review for African researchers? There is simply less of a culture of peer review. In the United States, I think participating in peer review is considered a requirement of being part of the scientific community, especially the academic community. That's less true in most of Africa. Arguably, African researchers have more demands on their time with fewer resources. And, of course, there are language barriers: most African researchers speak English, but there is a critical mass of Lusophone and Francophone African researchers for whom English is more of an effort.
So there are language barriers, not to mention native languages, which were also touched upon this morning. Those are the points I wanted to touch on. I look forward to the discussion, and there I am; I look forward to talking to all of you. Thank you, Jenny.
Thank you so much. Next we're going to hear from Alessio. Thank you. I think I'll just point at this and it will appear. There you go. I'll try not to move around; I have a tendency to walk a lot, so I'll try to stick here. If you see me wandering, just stop me.
Thanks, everyone. I'm Alessio, head of journal development at eLife, which I actually joined in January this year, so it's very fresh, very exciting. Trying to stick to the time, since there is a coffee break coming, I'll quickly tell you about a new model of publishing that we launched at the end of January this year.
This is the outline: one quick slide about the organization, since not everyone may be familiar with it; then the new model and how it works; then some preliminary data on the new model. A colleague of mine was here about six or seven months ago, and she told me there was a lot of excitement about seeing some actual data. We want data.
So there you go: I'll have some data, and then I'll be happy to take questions. eLife is a non-profit, independent organization, established a bit more than ten years ago and led by scientists. What we do is publish reviewed preprints in the life and biomedical sciences, and there is a division of the organization, so to say, that develops open source platforms for the dissemination and organization of research and preprints.
Our funders, whom you can sort of see at the bottom of these slides, essentially support us in disrupting and changing scientific publishing for the better. And this is the model that, with that goal in mind, we launched at the end of January this year. The idea here is to change the way we currently publish science. This is not to say that the current publishing system is all bad and doesn't deliver on the goals of publishing.
It is really to say that evidence and sentiment from the community point to drawbacks in scientific publishing, and we are trying to come up with a solution that addresses those drawbacks. So, the first step, and I'll walk you through it, is submission. We consider only preprints for submission, and then there is a decision.
At this stage, the only decision you will see in the model is taken: whether or not we review a preprint. This decision is normally made by three or more experts, active scientists in a given field. It is a consultative stage: they discuss and they decide, yes, we review it or not. Now, we are asked a lot, understandably, what the criteria are.
How do you guys decide what to review and what not? One thing I want to highlight is that this decision should not reflect the quality of the science. If we review something, you will be able to see and judge the quality of the science through the reviews that we publish transparently, and through an assessment that we also publish alongside them. There are really two questions that the scientists address in consultation to decide whether or not we review.
The first is: do we have the expertise? That's a fairly black-and-white question, easy to understand. The second question really comes from not looking at ourselves as publishers, because we consider preprints, and these things are already there; they are published already. We are not publishing anything.
They are there; they can be accessed, they can be cited, they can guide people and scientists, and they can misguide as well. So the question the editors ask is: do we feel there is going to be value in attaching, alongside this published research, an assessment and reviews by experts? If we feel there is value, we'll do it.
And this is what I was just mentioning, written out there, because otherwise people read, and if you read you don't listen, or sometimes you can do both. So after this, we review. And the other major difference now is that there is no decision anymore: there is no rejection. We publish.
We actually do publish now. We publish everything we review, and it's a consultative process: reviewers and editors get together and discuss, again in consultation, before we get back to the authors with two outputs. I apologize: the slide calls it an expert public review, but I should highlight that this is not open review; it is done by peer reviewers, and the authors don't know the identity of the reviewers.
So it is transparent peer review; I'm going to correct that slide. We produce the reviews, and we produce an assessment, of which I will show examples. We call it an eLife assessment, and it is five or six sentences that the editors and reviewers, again in consultation, write together to highlight the strength of the evidence and the significance of the findings, in their opinion.
Before we go public with the assessment and the reviews, the authors have a chance, in this dark blue step, the author response, to look at them, to correct factual errors if there are any, and to upload a response. This is not a revised paper; it's a response explaining how they intend to address the reviewer comments.
If they have arguments to put forward, those will be transparently shown there, alongside the reviews and the assessment. At this point, we publish everything we review as a reviewed preprint, which is already citable at this stage. And now the other major difference is that the authors are in control: we don't decide anything.
The authors have three choices. They can take on board the reviewers' recommendations and send us a revised manuscript, at which stage we will engage with them and update the assessment and the reviews, which is, by the way, what we are seeing happen now: all the authors we have had so far are engaging with this process, which is what we expected based on a trial.
The other option they have, from the moment we produce the first reviewed preprint onwards, whether it's the second or third version of the reviewed preprint, is to say: I'm done with this, I want to publish it as it is; even if I'm not addressing all the comments, I can explain in my response as an author why I'm not addressing some of them; I want you to produce the final version, which is what we call the VOR.
This version, the version of record, is sent to indexing services, and it functions like any other article published in any other journal, essentially for funder requirements. Those are the two major differences. The third option that the authors have, and I am highlighting it there, is that they can take the reviewed preprint and send it to another journal.
That's OK with us, and if it doesn't work at that journal, they can also come back to us and say: OK, I want a version of record now. We are not seeing that happen so far, in zero cases that we know of, but it's an option. So this is how it works. I should highlight a major goal and motivation of this model.
I want to highlight it now, before I run out of time: we want to shift the way we assess scientists and science, from judging their work and impact based on where they publish, on the journal title, to what is actually published in the paper. So in this model, if it works, whether or not you have "eLife" in the citation should matter
zero. If you want to know about the quality, you cannot just assume the quality is of a certain kind because "eLife" is in the citation. That's the whole thing we are trying to dismantle. You have to look at the assessment, and at the reviews too, if you want to know the quality. And the assessment is there because it wouldn't be realistic to expect any common reader, especially if she or he is not an expert, to dig into the reviews.
So this is how it looks. You won't see it from there, but underneath the abstract you have the eLife assessment, placed so that it is difficult to miss even if you're just scrolling quickly. The paper carries an indication that it is a reviewed preprint, with a history linking to each of the versions, the assessment of each version, and access to the reviews.
And this is a table showing the common vocabulary words we have come up with, which we ask our editors and reviewers to use in the eLife assessment: on the left for the significance of the findings, and on the right for the strength of the evidence. And these are examples, so you see those common vocabulary words in bold. What I want to highlight here is that this assessment is short.
It's written in a way that even if you are not an expert, at least at its core, you should be able to understand it and get a first sense of the quality of the science. Yet we think it's much richer than a binary accept/reject decision as an output, and in that sense it's closer to the input of the publishing system, because the input of the publishing system is science. And science is messy; it's a scale of a million shades of grey.
And so this gives you the power, if you wish, to stay closer to that messiness, which I loved and still love as a scientist. That's the idea. As for the benefits, I kind of covered them already, so I'll go quite quickly in the interest of time. Publication time is one of the major benefits. Normally it takes about nine months, roughly a human gestation, for a piece of science that is vetted, meaning seen by peer reviewers, to get to the public so that the public can build on it.
With this system, we get the vetted, peer-reviewed piece of science out there at the moment in one to three months, and hopefully it will get a bit quicker once some things that are now manual become automated. Second, the eLife assessment is meant to be compact yet richer than binary decisions, and in our mind it should help hiring and funding decisions.
The process is transparent. The reviewed preprints, the common vocabulary words I showed you and their meanings as we think of them, the author response, the reviews: all of these things are accessible. The published output, the end outcome, aligns with funder requirements. And the process gives more control to authors, so that with this system, if an author thinks a reviewer's point is out of scope for their specific main message, they will live with an assessment that reflects that.
At the same time, they have the option to publish and not wait three or four years. We hope that with this we promote, as I was mentioning, a way to evaluate science and scientists based on what, rather than where, they publish. And finally, the model: I include this for transparency. There is a publishing fee of $2,000.
The good news here is that the new model, by its design, actually allowed us to reduce the publishing fee from $3,000 to $2,000. I should mention this doesn't generate revenue, and we are also lucky, I should mention, to be able to offer waivers to everyone who is not in a position to pay. So, how is it going? The data. Sorry for keeping you waiting if you were more interested in the data.
These are the submissions we are getting: in light green, the new model versus the old traditional model, which we are keeping alive for the time being for scientists who are interested in it. They are on an increasing trend, so that's good news. How much of this do we send out to review? There is a bit of fluctuation; we don't really have a monthly goal there. But if you average and compare the blue, which is the new model, with the other two curves, on average it's about 30% that we send out to review.
Again, this is to highlight that we are not seeing much of a difference between the old traditional model and the new model in terms of the countries we get submissions from. This is limited to the top 10; we have many more, and I don't have the 2022 data here, but the trend is the same.
Here again, comparing 2022 on the left, the traditional model, with 2023 on the right, the new model, it doesn't seem like we are seeing drastic differences in terms of the areas we receive submissions from. Medicine seems to be dropping a bit, and cancer biology is appearing, but other than that, with some positions moving around, it's largely overlapping.
Editor survey. This is very preliminary; I am still analyzing this data, so I'm just going to highlight what we've had time to look into. We asked our editors. We have senior editors and reviewing editors, about 800 scientists in all. Senior editors decide what to review, together with the reviewing editors, and also have oversight of the process. "Reviewing editors" is a name I personally don't like,
because they don't review papers. They are responsible for deciding, together with the senior editors, what we review or not, and they are also involved in inviting reviewers. Sometimes they act as reviewers themselves, but that's just to clarify the roles a bit. So we asked senior editors: do you support the new model? About 82% are moderately or highly supportive of the new model.
Did you submit? About 55% either have submitted already or plan to. And do you think the eLife assessment is useful? 86% think so. For reviewing editors it looks similar: 80% moderately or highly supportive, and almost 50% have submitted or plan to submit. And I don't remember the exact number for the last question.
I was hoping to read it, but I can assure you it's about 84% or 85%, very close, in terms of whether they find the assessment useful. There is work to do, because as you can tell, we have a number of editors on our board who are not supportive, are not planning to submit, and don't think the eLife assessment is useful, which is fine.
So we will act on that, engage, and eventually part ways if that's what it comes to. There is work to do. Huge thanks to our funders and supporting organizations. I'm listing a very short list of organizations, beyond our funders, who have publicly supported the use of reviewed preprints for research assessment, which we hope is going to happen more and more.
And yeah, thank you for your attention. If anyone is interested in hearing more, especially if you want to experiment with this model or build on it, I would love to hear from you, because this is a collaborative thing. We don't really think we got it 100% right the first time. And it's a dynamic thing; that's how you should think about it, just an attempt to improve things.
But we need more and more publishers and journals and people getting on board with this and experimenting. So if you are interested, I would love to hear from you. That's it. Thank you. All right, I would love to have questions.
Although, Letty, I'm cognizant of time. Where are we? That sounds great. We have two questions, and Tim is getting the first one. Go ahead. This is a question I'd be interested in Janani's answer on.
There's a risk that open identities in peer review, while intended to democratize the process and increase inclusion and diversity, could suffer from the same problem as society in general, where there's deference to known experts and people feel intimidated about standing up and talking, which I do too. So have you got any evidence to show that it actually does bring more participation, rather than reinforcing traditional structures?
So just to repeat the question for her, since English is her second language: Tim is wondering whether public peer review reinforces traditional structures that can keep people from wanting to contribute, or whether it really does help bring more people in. Thank you. I don't think it reinforces them.
I think it's trying to improve diversity and inclusion, but we have to conduct more research to learn how to deal with relations of power in the peer review process, and how we can make people feel included and have a voice in the peer review process. I don't have all the answers I would like to have, but I have a research project to understand this more. And Elizabeth, I would love to hear your take on that as well, because I think it touches a little on what you were talking about
in terms of encouraging a diversity of voices. Yeah, I think much of that dynamic is incumbent upon the people who are more established in the field, or more highly visible, to underscore or endorse or defend, if you will, the opinions, assuming they're not complete outliers, of those who are less well known and don't have as great a voice.
So I don't have any data on it, but it seems to me that can go a very long way in redirecting a discussion. And we encourage people to recognize the expertise of others, not just to advance their own opinions. Fantastic, thank you both so much. Let me see, Letty is at the microphone with a question from the virtual attendees.
The question is, how are peer reviewers compensated at eLife? And I would actually expand that to ask the panel about compensation for reviewers generally. Yeah, thanks for the question. At the moment, there is no compensation. We are open to compensating them, including financial compensation, which is one option. But because we don't generate revenue,
I can already say that if we are going to go down that road, and we are open to it, the publishing fee would increase. Alongside that, I would like to say that beyond financial compensation, it's a broader discussion, in the sense that there are many different ways. We were talking about this even yesterday during the exercise of changing topics at the tables.
And I think you could find ways to give scientists more credit, for example when it comes to promotion, tenure and all of that: consider not only the number of papers you have published and where you have published them, but also your reviewing efforts, for example. So it's a larger discussion, and I think these other options, in addition to the financial one, should be and are being discussed. I don't want to take too much time.
I'm answering this in addition to, quote, compensation in the form of visibility, intellectual engagement, potential for collaboration and so forth. I've long advocated financial compensation for reviewers, for several reasons. One is that, just on the face of it, they're producing work, and in modern society work is recognized with financial compensation.
But more to the point, I think for publishers a very small amount of compensation goes a long way psychologically in procuring engagement: first, the willingness to review, and second, the actual psychological dedication to the work of reviewing. And third, not coincidentally, the majority of people who review are postdoctoral fellows, who tend to be, to exaggerate the picture,
but you get the idea, people in their 40s trying to raise children on a very small stipend. It's a very difficult stage of life, and I think it's only appropriate that such people get compensated for peer review. So for a combination of these reasons, I really think it's in the interest of publishers to offer financial compensation for peer review.
And it can be surprisingly small; it can be a symbolic amount. It does not have to correspond to the number of hours at a fair hourly wage and that kind of thing. Even $50 a review, or something along those lines, I think will go a long way toward encouraging willingness to review and also engagement in peer review.
Jane, do you have any comments on payment and peer review? I would like to encourage editors to support new voices in the peer review process, and to reflect and think about what an expert is. And please invite me to review; I'm a researcher and I love to review. I have two research projects that engage undergraduate and graduate students in the peer review process, and they are very excited about it.
It's beautiful to see how much they want to change scholarly communications and be part of this. So I guess that's my call. Wonderful. Well, I think that wraps us up for today. I actually volunteered to organize this session specifically because I wanted to try to find someone to come up here and say that we should pay peer reviewers. And yeah, I didn't succeed in that.
And actually, Jane studies peer review, and as we worked together on organizing this I thought, perfect: I can't find anyone who seems to study paying peer reviewers, but certainly Jane will know people. And I asked Jane, and she said, no, I don't know anyone doing work on that, which is why you don't have anyone on the panel
talking about paying peer reviewers as their whole presentation. So thank you so much, Elizabeth, for saying that so well. And maybe if you come back to New Directions next year, we'll talk about paying peer reviewers. So with that, thank you.