Name:
Credit Where It's Due: Reimagining Peer Review Incentives in a Changing Academic Landscape
Description:
Credit Where It's Due: Reimagining Peer Review Incentives in a Changing Academic Landscape
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/d1ddcde0-db2d-44c2-a1e2-2bb2177f4606/thumbnails/d1ddcde0-db2d-44c2-a1e2-2bb2177f4606.png
Duration:
T01H05M10S
Embed URL:
https://stream.cadmore.media/player/d1ddcde0-db2d-44c2-a1e2-2bb2177f4606
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/d1ddcde0-db2d-44c2-a1e2-2bb2177f4606/GMT20251007-181258_Recording.cutfile.20251007222006212_1920x.mp4?sv=2019-02-02&sr=c&sig=WKiWPSvos0q30vCVwwIRU%2BsoWKQZ8mAoFJPSlAm6msw%3D&st=2026-04-09T15%3A14%3A20Z&se=2026-04-09T17%3A19%3A20Z&sp=r
Upload Date:
2026-04-09T15:19:20.2639719Z
Transcript:
Language: EN.
Segment: 0.
OK, all right. We're going to go ahead and get started. Hi, everybody. I'm Lauren Kmiec. I'm the deputy executive editor for the Science family of journals published by AAAS. And I'm really excited to be here with you today to bring you this panel on peer review and incentives in a changing academic landscape.
So I'm going to give a very brief intro to our topic. And then the panelists will introduce themselves. And then the format here is just a panel discussion. So peer reviewers are a key pillar of the scholarly publishing landscape. But today's peer review system is under intensifying strain. Individuals must balance this work alongside their research, teaching, and administrative responsibilities.
Expectations and standards vary widely between publishers and across disciplines. In this panel, we will explore how scholarly publishing stakeholders can support the reviewers of today and tomorrow, what incentives and recognition frameworks are most valuable for our communities, and how AI can be used responsibly to help reviewers. So let's get into it.
I'm going to turn it over to my colleague Valda to introduce herself. Hi, I'm Valda Vinson, and I'm executive editor of the Science journals. And I won't say any more at this point. My name is Lauren Collier Spruill. I'm an organizational psychologist working in industry, and I received my PhD from Michigan State University.
Another alum here. I'm Ryan Johnson, the head of research services in the Georgetown University Libraries. Hi, I'm Sarah Muncy, the managing editor of the American Historical Review, which is the main journal put out by the American Historical Association. All right.
Terrific So we're going to start with an overview of how the role of peer reviewers has evolved in recent years, and what new pressures are shaping reviewer participation today. So, Sarah, why don't we start with you. So I'm talking from the perspective of a historian, but the humanities kind of writ large with peer reviewers, we're doing more with less. We have a smaller pool to pull from.
At the same time, we're having lots of impressive, engaging scholarship being produced that deserves peer review. The pool is smaller, and the burden is being put on people who are in specialized backgrounds or in niche areas, who are being asked to review more and more. And so it's harder and harder to get that person to say yes, to get that review actually submitted, especially when you're doing it, you know, fivefold.
What we're also seeing are shifts in the type of content that we're publishing. Historically, traditional articles have been our bread and butter. But as we start to publish in different areas, digital humanities, round tables, other types of content, reviewers are having to shift what they understand a review to be.
What does that look like? What are the expected deliverables? They're engaging with it in a different way. And so the expectation of a reviewer is different. And that's partially because of this shift in the landscape that the humanities are publishing in right now. OK, thanks. Valda, could you tell us a little bit about how things are either different or the same at a scientific journal?
Yeah, I think there are a lot of similarities, especially what you said about reviewers having to expand what they're considering, and a smaller pool of reviewers. One problem that's been around for quite a while is more and more multidisciplinary research. Reviewers are actually asking for more different technologies to be applied to a problem, but then we have to have reviewers to cover every technology in the manuscript.
So that means we need more reviewers. It's a little bit of a not-so-virtuous cycle. So there's a problem with that: we really encourage multidisciplinary papers, but they do come with their challenges. And then more recently, I think, we're really getting into an area of mismatch between the author pools and the reviewer pools. So, I mean, one problem is that huge proliferation of journals, all requiring finding reviewers.
But another issue is that we have science emerging in Asia, for sure. We have science really emerging there, a lot of science coming out, a lot of science being submitted. But the reviewer pools in those areas have not really been built up yet. So we're using existing reviewer pools to review a changing dynamic of submissions. I'll stop there for now.
OK, so moving on. I'd love to ask Lauren this next question, since you studied peer review incentives during your graduate work. So, Lauren, in your experience, what is the most meaningful type of reviewer incentive? Is it financial compensation or something else? And how might this vary across disciplines, career stages, and geographies? Yeah, happy to answer this question.
This was something that we actually took to practitioners in the field of organizational psychology. For context, about half of us end up going applied, and half go into academia. And so what motivates people to engage in the peer review process might be similar or different to what's happening in other fields as well.
One of the biggest ones that we were hearing from people was that they were engaging in peer review, because they were intrinsically motivated to do so. That's why I'm peer reviewing. As somebody who works in industry, it's not in any part of my promotion or evaluation for work. So that's something that draws me to do this, mainly because I feel like it's important, as somebody who has graduated in that field, to contribute to future knowledge in that way.
However, there are other ways where, particularly for those who are in academia, the stakes could be more heightened, shall we say. When people are applying for tenure track positions or getting promotions in that area, it could potentially be to the benefit of fields in general to consider both the quantity of somebody's engagement in the peer review process and its quality.
Some people really do invest time into becoming the best possible peer reviewer that they can, but that's not necessarily reflected in who is ultimately selected for tenure track positions, which can create some sort of a mismatch in terms of people who might be skilled in the art of teaching or mentoring students, and those who might ultimately be selected solely based on that publication record, particularly if you're in an R1 or something that's going in that direction.
In addition, some other people mentioned that they wanted it to be more a part of the grant selection process. Time spent on peer review means that you're taking time away from applying for grants or doing your own research, and it would be nice to ensure that people are still rewarded for engaging in that work in a tangible way, one that affects the bread-and-butter work that academics in particular are evaluated on.
And then lastly, one of the ones that resonates with me most as a person working in industry right now is having some form of subsidization of professional dues and/or conference attendance. Those of us who are intrinsically contributing to the field, where there's not really another way to reward that in practice, could benefit from having that academic fellowship with our academic counterparts if we were able to come to these conferences and be rewarded for doing good reviewing work in that way.
Or we could be part of these professional organizations through those subsidized means. And so those are the key ones that emerged from when we spoke with members of the organizational psychology field. Thanks. Anybody else for this one? All right, let's keep moving. So, Ryan, I'm going to come to you first with this one.
Since you've spent some time on tenure committees, how should peer review work be factored into tenure and promotion decisions? Traditionally, we know that service and research are assessed separately in these evaluations and weighted differently. But should that change? And if so, how? Well, for clarification: at Georgetown, the librarians are academic professionals.
Before Georgetown, I worked at the University of Mississippi and Washington State University, at both of which we were tenure-line faculty. I was tenured at both institutions, was on tenure review committees, and actually wrote tenure standards. So I have experience doing this at those institutions. And typically, peer review is considered service at almost every place, which means it's not particularly valued, because the reality is that you have to have service.
But usually that's the extent of it: if you don't have anything, that's a problem; if you have something, that's fine. So to change that, to increase the value, would require both universities and the faculty in those universities to change what they value. Now, that's difficult, because, I mean, I've sat in sessions where university administrators talk about how important service is, and they walk out of the room and the tenured faculty look at them and go, no, it just isn't, you know, it's not.
And so you have to change the culture of those researchers to increase the value of peer review in particular, to shift it from the service side into the research side. If it's considered research, then it's considered valuable in that faculty evaluation process. Now, that's a cultural issue that has to happen in universities among the faculty.
That doesn't mean that journal editors and publishers can't encourage that consideration for their authors and their editors. But that's what it would require, and that's what would increase the value of it in this form. Which I think would be really important, to have good peer review, because as a consumer of research, which I am, because I'm working with students, I consume research and I teach students how to consume research.
Good peer review is essential, so we can trust what we have and teach students how to effectively use it. If you think of students as both early career researchers and the public, which undergraduates really are, good peer review is important for them to understand what it is. So if we can change that model of peer review, it changes the consumption of that research, and the consumption of your own work as well.
So I think there would be an added benefit to that. Thanks. I think that, at the moment, we're all considering peer review as service: the institutions, the journals, the authors, the reviewers. If we could reframe it as really part of the research process, I think we need to reframe it for the authors as well.
A problem we have, a challenge we have, is that most authors see the real work as doing the research. Once it's done and they're writing the paper, they're entering the chore phase; now they're doing the work they have to do. They have to write the paper because they have to. They don't see the importance of communicating that paper to others.
They don't necessarily see the review as part of that whole scientific process. And I think the more we can do that, the more we get to a place where science is transparent and is trusted. Because, you know, why trust it if you're just slapping stuff out there? Thanks. And Sarah, I think during our planning call you had some thoughts on whether standardization of peer review might help.
So could you elaborate, please? Yeah. I'm speaking again from the humanities perspective: there is not strong standardization, so it's a little bit harder to justify moving it to the research area when different universities and institutions are viewing it differently. But if there were kind of a mass movement of making it be research, because it is research to write a review, that would allow more acknowledgment and recognition of the work that is integral to a review.
And, to a later question, we have some ways. If the reviews were published, where they were treated as a publication, that might facilitate that change, because then it becomes a publication, even if it's not a peer reviewed research publication, like encyclopedia articles, which do have value. And I understand that the publication of reviews is problematic in a plethora of ways. But if they were published and counted as a publication in some way, that would change things.
I think it would change the way people approach writing the reviews, and it might change the way people think about them. So that's actually a perfect segue; that's our next question. I was going to bring up open peer review. It comes in many different flavors, but right now, at least, it seems the open-but-anonymous model has been gaining a lot of traction.
So I was going to ask the panel: does publishing the reviews alongside the paper change the way that peer reviewers are valued? And maybe on the flip side of the part you just addressed, Ryan, what if it's open but anonymous? Even if reviewers' individual identities are not revealed, is there still merit in that for the folks doing the reviews? I think there's merit.
I think there's also merit in the consumption of the research, because, particularly for early career researchers, if you can see the reviews, then you can see how the research develops and grows. I can think of reviews I've received where you get multiple reviews that sound like they were written about different articles; I mean, you have to wonder whether the two people read the same thing.
And so I think that's a way for people to learn and to learn about reviewing, but also learn about the research process. So I think there's an argument to that. Knowing how to review something is not necessarily always taught in the way we assume in programs, and that's a way to teach someone the actual transparent way to do a review. Obviously, that depends on the discipline, but that assumption can't really be made in the same way anymore.
Yeah, I mean, it's interesting; I'm very interested. I hope that, as people try these different models now, because I think most publishers are discussing where we should go, some kind of scientific experiments that can actually produce meaningful results will get done, because I think it's hard to know exactly what's going to work.
You know, ideally you could say, let's put signed reviews up. That way the reviewer gets credit, and it's transparent. But then the downside is that people say early career researchers will be intimidated by that, which is a real concern. And with AI, we have the possibility that identifying reviewers may be trivial before too long.
So, yeah, this is one where I really am cogitating; I don't know what I believe. Yeah, the conversation is moving pretty quickly. And at least in this room here, we're all talking about it and thinking about it. And you're right. I just wanted to emphasize the point that you made: today's anonymous reviewers may not be anonymous tomorrow, and that's maybe not at the forefront of our minds, but it should be.
And we're going to get to AI in a minute. But I did have one other question before we do. And starting with you, Lauren: what are the unseen or intangible benefits of being a peer reviewer? So there's definitely a couple that come to mind, some in the realm of self-development, where as a reviewer you get a front-row seat to what's coming up next in the field, potentially.
So you get to see what the latest in content domain knowledge might look like. I know that my advisors would often discuss some of the papers that they were reading, saying, oh, there's an excellent idea coming up; I can't tell you what it is because I'm trying to respect confidentiality, but there's something that you're going to want to look out for in 2022, or something like that.
And so just being able to see where the field is going is a huge reward for being a peer reviewer. Another one, in the social sciences, is that you might become familiar with new analytic techniques that might be more experimental and have not yet been published. One of my other advisors actually discovered one that way, and became so enthralled with what one of the authors had done that he decided he was going to teach us an entire class on it back when it was relatively brand new.
And then one of the other things that I think goes beyond the academic landscape is that learning how to give good developmental feedback is a huge boon when you are engaging in this process. This is something that can help both people who are advising students and those of us who supervise direct reports in the workplace. It's just an overall good skill to develop in general: how to provide helpful communication and feedback to people.
So those are the main ones that I see. I think one other is that it expands your understanding of the literature in both your field and related fields. Particularly if you're very focused in one field, as a peer reviewer you might come across related literature, which will expand your understanding of what's out there and show you new things that you should be reading or looking at.
And, I mean, we teach students to look at bibliographies to understand a field; I think as a peer reviewer, you do that too, because that is, or should be, the most current research. And somebody else might have seen something that you didn't, or wouldn't, because it's in a journal you don't regularly read, or in a field you don't regularly read. But all of a sudden you're made aware of it, and being a reviewer expands your understanding of the literature.
So, back to an earlier point you made about the research becoming more multidisciplinary: an area where we see benefit is in reviewers. Especially in history, we've got historians, but also people working in museums, and educators, and digital humanities folks, who are getting the chance to offer up their expertise, have some influence on academic content, and be recognized for what they can bring to the table.
And so that is maybe not something that we've had before, but it allows them to be a part of that peer review conversation. And then the one thing I would add is the value of being brought into that network. It's not obvious, but think of the number of times we ask someone to review a paper and six months later that person submits a paper. It may get rejected, but they had the courage to submit the paper to Science, you know, and that's intimidating for a lot of early career researchers.
And if it's another journal, a society journal, say, they may be wondering, is that journal a good fit for me? Should I submit? And when they get asked to review, they're like, oh yeah, that journal thinks I'm part of their community. So I think it also has that effect of allowing scientists to feel like part of this community.
All right. So, moving on to AI tools and technology, everyone's favorite topic these days. I know that we can't get through a conference talk without getting into AI to some degree or another, so let's do it. In the past year, the prevailing narrative has shifted from "should we allow AI in peer review" to "how best to do it."
So, Lauren, turning to you first: how should we encourage responsible use of AI in reviewing while acknowledging that confidentiality is still essential? I think this is where the journals will be best poised to take the lead. As we've discussed among ourselves as panelists, people are going to use AI. The cat is out of the bag at this point.
It's a technology that is readily available to all of us today. However, I think if journals take the lead in spelling out how people can use AI in this process, that will hopefully get us around some of the other things I wanted to discuss today, which are some of the downsides of relying too heavily on this new technology. If journals say how one can go about using it in the peer review process, where the limits are, and what things reviewers probably should not do if they're using it to augment their peer review, I think that will be helpful going forward.
I also think that AI more broadly can be helpful for things that require maybe more rudimentary feedback, where it's like, OK, there's some sort of error going on here, or where maybe you can reword your feedback to be a little bit more kind if you're feeling very spicy about a review that you're sending out. I feel like that's an appropriate use at this current time.
I think some people have a lot of faith in the technology. I use it a lot in my discipline and at work, and I don't quite think the technology is there yet, where you can just funnel something into AI and get an answer that you feel fully confident in. One of my coworkers the other week actually tried to analyze some data using an internal AI tool, and it hallucinated on her and told her something that was completely factually inaccurate, which only she would know because she did all of the interviews.
So I would not go so far as to put full faith in AI as it stands today. In addition to this, when you're inputting any type of intellectual property from an author into a publicly available model, you run into the problem that you didn't ask the author for permission. And so you don't necessarily know that they would have been OK with that being part of their process.
And then secondarily, that public model is training on everything you're putting in right now, in case that was not clear. So you don't necessarily want findings that have not been vetted or validated from an official peer review process to then be appearing to the general public as an answer to something unwittingly. There's a whole lot of potential intellectual dangers associated with that.
I do know that some consultants and some firms are also working to make internally available models; we work with one at my job, where we're keeping our intellectual property internal. But I think that would once again be potentially a journal-by-journal solution, or even a field-by-field one, and who is to say that all of those become standardized in a meaningful way? So there are benefits to using it appropriately and responsibly.
There could be pitfalls if it is not used in those ways, but I can't wait to hear what other people on the panel are thinking about this one. I think one of the problems when we're talking about AI is that there is no single AI solution. I mean, what ChatGPT does and what Elicit does and what Research Rabbit does and what Gemini does are different. And so when you talk about an AI solution, the question is what you want the AI solution to do.
And so I think there might be AI tools that would be appropriate in a review process, perhaps ones that would help people write in another language, or phrase things better, those kinds of things, rather than creating or evaluating data. So apparently I'm too far away from the mic; not being heard is usually not my problem.
But I think if you're going to do it, you need to be clear on what AI tools you mean and what you want to do with them, because just to say "AI" currently is almost meaningless; there's so much variation in that mix. And I think consistency across publishers is going to be increasingly important. This is slightly off the peer review question, but authors will submit an article to one publisher and then maybe to another publisher because it was rejected. If the first publisher allows AI to be used in a particular way, and the second publisher doesn't.
All of a sudden that article becomes anathema to the second publisher. And that's going to make the research process and the writing process difficult for a lot of people, both in the peer review question but also just in the construction of articles, which is a slight variation. I just wanted to reemphasize something that we discussed during our planning, and actually at my table earlier this morning.
Just that publishers have an opportunity here to set the tone to normalize disclosure of AI use. And we were talking a little bit more about authoring rather than peer reviewing. But I think the point stands for both. The prevailing attitude in most communities is still, this is something I need to sweep under the rug and pretend like I didn't do, and make sure it's written well enough so that an AI detector won't, you know, flag me.
But if we are allowing it, and of course each publisher needs to set their own guidelines on that, we should be public about it, and we should communicate regularly with our authors about what's allowed. We should have disclosures required early and often in our submission process.
All right, off my soapbox here. So, Valda, turning to you: from your perspective, what types of tools would be most valuable for peer review? And then we'll go to Sarah to see if it varies across the disciplines here. Yeah, I think, at least initially, as AI is developing, the places where it can maybe help us the most as part of the review process are in the sort of technical aspects of review.
So I think there are lots of things that we kind of hope the reviewers are checking, but we're certainly not sure the reviewers are checking, and then we're left trying to do editorial checks on all of them. So, things like figure checking, which we're doing at the moment: we're using Proofig and checking all figures for duplications and alterations.
And that's really helpful, because expecting reviewers to pick that up is a complete throw of the dice, or quite an unlikely throw of the dice. You might get lucky and get a reviewer that's really good at image recognition, but chances are not so much. So Proofig is one. We've also been doing a trial with another company called DataSeer, where they go through the manuscript and pick out all the reagents, all the code, all the data sets, and tell you whether the materials availability statement is appropriate.
Conflicts of interest, they'll look at a little. But the materials: literally a list of all the materials, these ones have IDs, these ones don't; all the data sets, this one is deposited but not at the best database. So I think the kinds of things it's been finding for us are really onerous for a human to find. I mean, to go and write down a list of all your reagents and figure out if they all have IDs or not.
Who wants to do that? So I think AI could be really helpful with that kind of thing, and hopefully it can move quite soon to things like, and I know there are people already working on things like this, statistics, for example. Are the statistics appropriate? Is it all well enough done? Have they got error bars on all their figures? There's this whole list of things where we make ourselves long checklists to check for them all.
And AI could do it very quickly. So I think that's where there's a lot of promise. And I think it's similar in the humanities: there's plenty of this kind of checklist that comes with a review, and something that can make that more efficient would be highly desirable. But another tool, and I don't know if tool is the right word to use here, but so many of the reviews that I encounter that are just not that good, that are not as helpful, come from a place of misunderstanding what reviewers are supposed to be submitting.
I get a lot of questions from younger scholars who don't really know what to do. They think, I've been asked to review this article, I've written up this paragraph, now what? And so I have to explain, OK, this is what we're actually wanting from you. So documents I can provide them, resources I can provide them, that walk them through the process are really essential.
That's an area where I think there could be a lot more development. And then, perhaps this is some kind of pie-in-the-sky idea, but there are so many systems and so many accounts, and so many of our reviewers we lose to "I don't want to make another account, I don't want to use this other system." So some kind of universalization of that process would be wonderful.
We'll see. Anyone else have thoughts before we move on? All right. Let's go to, well, anybody, actually. Let's discuss if and how technology might help us expand our reviewer pool, harkening back to the answer to that first question, where we were talking about how the reviewer pool tends to evolve more slowly than our current community of authors.
So how can we leverage current technologies to help us with this? Or can we? I mean, immediately you could imagine help with language editing, but I'm not sure how far that gets you.
I wonder if you could do, well, there are structured reviews, there are forms for structured reviews. I think we use them at one of our journals, and I know that early career researchers tend to like that, because it guides them through the review. And I haven't really thought about this much at all, but I'm wondering if there is space for a more interactive structured review, so that it wouldn't just be a static form, but you would actually be interacting.
If the paper is this, then that. So that could be something. I thought about it as Sarah was talking about the training, and wondered, you know, could you give on-the-spot training to early career researchers? So doing a structured review and at the same time saying, this is what is needed in this portion of the review.
Yeah, so anybody here with a software development background want to take that on? Any other thoughts on this one? OK, we'll keep moving. All right. So we're going to have plenty of time for questions, but I do have one more question on my agenda, and I'll go to everybody with this one. If you had your magic wand and you could wave it, and, you know, poof, today you could implement a single reform to help peer reviewers in your community or globally.
What's one thing you believe would have the most substantial positive impact. Well, from this end. Yeah you know, I think Megan said something interesting this morning. She mentioned publish or perish and then she had published with purpose. And I think that's what we're trying.
I mean, it's a really big challenge, but if we can break down this Publish or perish and build up a publish with purpose and the way that I'm trying to tie this in is that we need some. It's all about recognition. It's all about the incentives. It's all about the fact that scientists believe, you know, to get tenure, they need to publish papers in high profile journals to get a job.
They need to publish papers. And how's that measured? It's measured by impact factor, or the h-index of the author, or number of citations. So, those metrics: I would agree with everyone who is trying to come up with all sorts of solutions, which I think are not going to work. But I believe that we, the publishing industry, need to confront that.
That is a real problem. And how are we going to confront it? What can we do to shift the needle, so that the work that the scientist does is what matters, so that they really are publishing with purpose? I don't know how to get there, but we can all put our heads together over drinks later. Lauren. This could be for my field, or for other fields where there is a practical applied component.
You know, those of us who are volunteering our time essentially for free are not being evaluated for our work in any way, shape, or form when we are engaging in the peer review process. Just a little something to go with that intrinsic motivation could be helpful. So, going back to what we actually found in the research that we did for organizational psychologists: maybe even subsidizing being able to go to a conference and fellowship with other people who are doing the research that we would love to bring back into our organizations.
I know I was going through our SIOP website the other day, looking for an employee voice program type of research that I could bring to work with me. And I think it would be amazing to be able to go to conferences, particularly for those who are earlier career, where maybe you don't have the funding, or your organization is not paying for you to go to these conferences, to be able to reconnect with people from grad school, reconnect with our advisors.
And also have that bidirectional communication with the field. We could bring in questions that we do need answers to in organizations, and then they can give us the research back, which can then go beyond the confines of the paywall that we often find ourselves facing in industry. So I think maybe a little bit more of that would be helpful. If you're in a field where you have a potential practical application too, that could be something to consider.
I think that the question of publishing with purpose is really important in this. I mean, do we really need a new Science journal and a new Nature journal every year, as the one who buys them for my library? I understand that was an extended metaphor, but there are new journals every year, and I wonder if we really need more journals; maybe we need a different way to think about publishing, and kind of how we make our research available.
Because, to be realistic, libraries don't have the money to buy all the journals anymore. So maybe we need fewer, better journals, or different-model journals, rather than more journals, which fits into all of this. I understand that goes a little bit beyond the scope of the panel, but it kind of also fits within the conversation, because, like I said, I'm predominantly a consumer of your research rather than a producer of it, and it's becoming increasingly difficult to acquire it all to make it available.
So we need to rethink kind of the whole process, perhaps. So, I think, to circle back to points that we brought up earlier: the audience for scholarly publishing continues to expand, but the academy is still such a solid foundation of that, and I would love to see a single reform, be that a collective and mass movement, to make peer review recognized as a part of the research process and not just service.
Now, I just wanted to pull out one more point that came up. Ryan, I think you brought this up during our planning call, and it gets back into the tenure, promotion, and evaluation discussions: there needs to be some real, serious acknowledgment of the time commitment that it takes to do this work thoroughly and well. And I think we all feel that that's lacking and could be improved.
So, thank you very much. Let's take questions from the room. This is from Mara, and she says: Thank you for mentioning the importance of not assuming it's acceptable to put others' work into generative AI. How do you recommend journals communicate guidelines, as folks may not be thinking about their use of AI in this way?
And what should those guidelines be? That's an excellent question. And I can only really answer for, I guess, what my personal philosophy is on this, which is that if you did not get express consent from the author to input any element of their work into AI, it should not go into any of these publicly available models.
There has to be consent. That way people know where their intellectual property is going, and, as we know, it's being used to train up the model and then could be an output for other people. So there is a concern there. I do think that it will be up to the journals to provide these standards. And I would say that putting maybe de-identified information from the review that you're writing into AI, to reshape how it could be framed to make it more developmental, could be different.
I don't know necessarily how easy it is to do that, since you are, you know, probably talking at length about the paper in the review that you're giving. But I do think that journals should probably, at this point in time where AI is now, state: do not put any element of the intellectual property into the model, until there are solutions, or some sort of internal model process that goes with the field or with the journal, such that you can safeguard that intellectual property and know that it will be used for, say, positive purposes.
We'll go with that for now. And logistically, I would say that journals need to look at their workflows and see what the touch points are with reviewers. Because we know all too well that, you know, you can put it in your policy, you can put it into your information for reviewers, you can write it in the sky, and people will say, oh, I didn't know you said that.
So I think you need to try and put it in their face right when they're going to do their review. So probably you want something in your letter that goes to reviewers, but they've probably forgotten that by the time they hit the link to go to the review; they might download the paper without actually going to the review form. So you've really got to think through: at what point in my process can I guarantee that I have their eyes, at the point that they're about to do something?
Because otherwise you have no guarantee that you actually spoke to them, in my experience. I think that goes for instructions for authors too, doesn't it? Yeah, keep putting it in their face at every stage. Jessica. I'm Jessica Miles, and my question is for Lauren CS, but anyone else who wants to chime in. This panel has mostly focused on peer review in the context of manuscripts, but a few of the panelists raised that peer review happens in other areas.
Lauren, you mentioned grants, and obviously that comes with study section. Ryan, you mentioned things like P&T; obviously, you know, people are reviewing dossiers and the like. So, Lauren, I was just curious if any of these other examples of peer review came up in your research, and whether or not the folks that you spoke with distinguished between manuscript peer review or, you know, study section, P&T, et cetera.
That's a great question. For the purposes of the study, we focused mostly on asking about that manuscript feedback, but some people organically did mention their experience with grant reviewing. I think that, at least in our field, the manuscript is the heavier lift for a lot of people, just because there's a lot more content that you have to go through, essentially.
But it does take time and effort to provide a thorough grant review, and you do think about it in terms of outcomes and outputs as well: if you did a not-as-good job on somebody's grant review, that means that they're not getting funding to continue their research. It is kind of a high-stakes situation, and it should also, I think, be incorporated into whatever reforms we are looking at with peer review.
In our field, too, we have white papers, where maybe you're working with organizations and putting together documents like that; those require reviews as well. Book chapters, too. There are all sorts of areas where, as we're expanding our view of what peer review can look like, maybe rewarding those elements would be a helpful part of that as well.
I'm curious if anybody else has. P&T reviews are an inordinately time-consuming thing, because the dossiers are huge in general. I mean, that's not rewarded either; it's just part of kind of doing what you do, though it typically is acknowledged in the faculty role that if you're on that tenure committee, or you're tenured, you have to do that.
And so, if there were a way for an AI tool to do that for you, that would be lovely, I think, for everybody who has to read them. So, my name is Michelle Wilson. I'm the head of open scholarship services at the University of Maryland. So I'm a librarian, and I come from a library publishing background: the library acting as a publisher and working in partnership, especially with a lot of student journals.
And as you're all talking about how we kind of inculcate a different culture, one that encourages academics and researchers to value the peer review process, to have this better appreciation of ethics, to understand how to do a good review: that's a lot of what we try to do as library publishing partners. We're working with the students not only to provide them with the technology platforms and improve the quality of the journals in their appearance, but also in their practice and the way that they understand the publishing process, and to build publishing ethics and an understanding of the publishing process into their work, starting as undergraduates.
And I wonder if any of you have ever thought about incorporating student journals into your catalogs, or sort of fostering that, or continuing to think about including undergraduates or graduate students in your publishing processes and thinking about them as your early career researchers, as a way of beginning this process earlier, before they are subject to tenure and promotion requirements and it becomes a lot harder to incentivize them and to get them to appreciate the benefits of this system and this work on its merits.
Well, what I will say is, at the University we do publish student journals in our repository, as you do as well, and so we do work with our students in that way. I wonder if the law review journals might be a great model for this: those articles are typically student run and student reviewed, which is a little bit different form of modeling.
But it is a student-run kind of publication model as well that would fit into that question. Well, I think one of the things that happens a lot of times with student journals in universities is that they don't get a lot of editorial mentorship. They're working with a faculty advisor who might be sort of moderately overseeing their work and making sure that they're doing whatever they need to be compliant as, like, a club with the school, but they're not getting mentorship to learn about what it means to create a formative review, to develop editorial policies, et cetera.
And so, in the way that editors and other people at scholarly societies work with the journals that they support as professional publications, I wonder if that's ever anything that anybody considers in terms of helping to bring up that next level of scholars. It's interesting, because we do get requests. You know, generally our audience is sort of graduate student and above, but absolutely, I see the value of starting a training process earlier.
We get requests, but normally, you know, from this university or that university, and we aren't resourced to be able to say yes, yes, yes, yes, yes. So, no. I have a special place in my heart for Maryland. But I'm wondering: is there any sort of overarching meeting that these students go to? Because if there were something like that, that's a place where you might invite an editor to come and give, you know, a session or so. It would be great to be able to build some kind of interaction, but.
We're not resourced to do a whole lot more than what we do. So, yeah, we don't have any journals that come work with us, but in another life I worked for a library publisher, so I have a soft spot for the student journals. But we do have programs with student interns, who then take the editorial guidance that they get from us back to their individual journals. However, I would love to talk with you more about whether there are any Maryland students who are in this sphere, right?
That's actually a great idea, because we've also had internships, and I've never thought of connecting them to student journals, which is a way to take it back to their institution. A lot of publishers do have trainers that come onto campus to work with researchers, particularly early career researchers and faculty, and I wonder if they can make some of those opportunities available to those student journals as well.
I mean, I think typically I see those mostly from the commercial publishers: Elsevier, Wiley, Springer, yada yada, and T&F. But that might be it: if you're coming in to talk to the faculty, maybe you could add an hour for the students.
I think, as well, while you're there, because you have trainers that come on and do that. Oh, yeah. Excuse me, there's a question from online. This is from Michael: We're very good at saying what we don't want peer reviewers to do, like integrity checks.
And we keep emphasizing that AI cannot replace their job. But how do we frame this positively? What do we expect peer reviewers to do? This will help answer questions like: how do we support reviewers, and what training can we give them? Well, I could give a talk on that at your university. But I think basically we're asking reviewers to engage with the manuscript, to make sure that it's technically sound, that the, you know, results support the conclusions.
And those are all within the area of the reviewer's expertise. The reviewer is not necessarily an expert on ethics, although if we have a paper with specific ethics concerns, we'll go to an ethics reviewer. So we're asking the reviewer to give us what best matches their expertise in reviewing the paper.
Technically, along those grounds, we're asking AI to do things that maybe not every reviewer is completely trained on, but that are all needed. They don't know our checklist, and we don't want to give them the checklist to have to go through. So we're asking AI to do those things that are more data checks, integrity checks. But when we talk about data integrity checks, we're not talking, you know, does the data support the conclusions; that's what we're asking the reviewer for.
We're more talking, you know: are the materials there? Can you tell exactly what reagents were used here? Can you tell exactly what data was used? Do you know where the data is? Can you access the data? So those more logistical things are what we would be asking of AI. Maybe even, before you can use AI in this process, completing some form of training, so that you understand where the boundary lies with AI.
I think some of the hesitancy I'm seeing around AI, even at work or in broader society, is that a lot of people don't quite understand how it works and what the boundary limit even looks like with using those forms of technology. Maybe engaging with that. I know that nobody wants to add more to their plate at this point, where it's like, oh, now I have to learn how to use this tool first. But I think it could benefit people who are hoping to use this tool for reviewing purposes if they have a little bit more familiarity and practice leveraging it more broadly.
And then also, for that reviewing purpose, what the training looks like is TBD. But I do think that there could be some benefit to that. Yeah, I've been standing here a while. Sorry, guys. Thanks for your patience. No, no worries. You know, the phrase that keeps getting said over and over again,
and I appreciate that, is publish with purpose. We also did this at APS, you know, using the alliteration, and we launched it to our community. And our editors loved it. They were like, really, this is why people should publish with us. And I think that our audience is so limited. But it's been said, like, numerous times.
I'm like, this should be something that we should group together to promote, specifically from a nonprofit society standpoint, you know, like, hey, we are different in a way, and we do provide this community. So it's more of a call to action for everyone. But, like, you know, if I had a voice like, you know, AAAS's, I might get a little bit more. Comms director, I'll be in touch, Megan. Yeah, we'll take it under advisement.
So, actually, my name is Jonathan Schultz. I actually sat down, because I thought Michael's question was kind of what I was going to ask. But thinking about it some more: you know, AI is not there now, but you could imagine that in a couple of years it might be able to tell us whether the results match the conclusions, things like that.
So could we get to a place, in a couple of years, where we as editors don't necessarily need actual reviewers, and a lot of it is done by AI? And if not, what is the irreducible human aspect of the review that can't be replaced by AI? Or have you been thinking about that? I'm just curious. That's a really good question.
And, I mean, I don't know, speculating wildly, you know, maybe we'd want a tool that is specifically designed to do review, if you think of the vast expanse of topic areas that have to be reviewed. But AI surprises me all the time, so who knows. The quick answer is, I can imagine it getting to where it could write a really great review that would probably be as good as what you got from 80% of your reviewers.
And the point is that, most of the time, you're not getting one of your great reviews. You know, I mean, there are those reviewers that just see something more; they see beyond that. And so, I don't know, it's going to be interesting to see. Or will we just be able to provide these AI reviews to experts and say, here's the paper, here are the reviews.
Is this all kosher? I don't know; it's going to be interesting times. But yeah, let's ask the question in two years' time and see what's happened. I mean, at the moment we're all talking about how AI is going to take away jobs, and so far, apparently, it hasn't. Right? Yeah. So maybe it's just early days, but yeah.
I do wonder: have we found everything that there is to find, given that these language models are trained on things that currently exist? We could use it to maybe augment peer review processes. But I'm wondering, if it were to make, say, a final decision on what goes into a journal versus not, is it going to overindex on things that are similar to things that are already in place?
Right? How do we advance the field if we're basing our future selections on things that are already within the data set? That would be my question to you: could we completely eradicate humans? So, yeah, just to speak to that: I think that is what we have to be really concerned about, that we narrow ourselves down, and it'll seem like we're going in a perfectly good direction.
And just to speak to that, I was just at a meeting recently on chemistry and medicine, and there was a lot of talk about a very large-scale screen that was done to find new targets. It doesn't matter what. The interesting thing is that they did find new targets; they did find, you know, new antibiotic targets, outside of anything that's known. But they were all targeting the membrane.
Who knows why. Somehow the training data set sort of directed things in one direction, and so I think it's something we really have to be cautious about. Yeah, I have a follow-up question for Lauren. This is Anne Stone; I'm an independent marketing consultant, and I am curious.
You outlined some of the benefits of peer review: intrinsically, you're interested, you're curious, you want to contribute to the advancement of the field. So that may be the answer to what humans do differently from AI, right? But we know that there are plenty of people in the research community who feel that the peer review process and publishers exploit the reviewers.
And I wondered what your, like, sort of short list, to go with all of your benefits of peer reviewing, might be to respond to them, the people who feel like the publishers are exploiting them and their time. Well, I feel like it's a little bit of a complicated question, and I'm probably going to say something a little controversial.
There is the option of compensating people accordingly for the work they do. What that might look like could be financial, could be otherwise. I know for me, it would be great to have access to papers I've published and not pay $30 for them when I have misplaced the proof from years ago. I do think that there could be more done to give some sort of acknowledgment or compensation to people who are doing that work, usually of their own intrinsic motivation, particularly if it's not currently recognized as much within the field of academia as you're kind of rising in the ranks.
Could there be some sort of symbiotic relationship between reviewers and the journals, such that they're getting some sort of tangible benefit, in addition, for doing that work. That's something I often wonder as somebody who does it for free. Yeah Yeah. Hi, guys.
So, I'm Mike Dinatale from the ACR. Hi, Lauren. So it's been a really interesting discussion, really interesting and engaging for me. One thing I'm hearing from this discussion is this race to a foregone conclusion where, you know, generative AI could replace peer reviewers in our workflows and processes. And what I haven't heard is a discussion of the actual costs of that, given how expensive all of the generative AI tools presently are for us as publishers to implement, and perhaps even the discounted rates that we're receiving on those tools presently as these companies compete for dominance in the marketplace.
But I'm going to go beyond that and suggest, perhaps, that as publishers thinking about AI in peer review in the future, one of the values we could add to our communities is actually ensuring review by human beings in this dystopian AI future we're racing towards. So that's just the thing I wanted to raise; I hadn't heard anything about that perspective. It's a good point.
Do you mind if I just hop in and say one thing? I know I'm not really supposed to as the moderator, but yeah, great point, Mike. And it does touch on a point that our panel made earlier a little bit, in that I think we need to not lose sight of the fact that doing the review work, getting access to new ideas before they're out there, new methodologies, maybe work beyond your narrow discipline, will make you a better scientist.
If you are one, or a better historian, or, you know, whatever your discipline may be. And that's something we could lose as well if we go only AI. So I do agree with you. Yeah, I know. I mean, I think for now most of us are not even allowing AI, so there's kind of a big jump to get to, and I absolutely agree with you that there are costs to all of this.
And it's really interesting, because people are embracing AI by saying, oh, it's going to make things so much cheaper, we're going to be able to do more with fewer people. So far, what it's done is added steps, and that step may not require as many people, but it requires people. So, so far.
Yeah, that's what I've seen. The environmental impact of AI is not one that we've raised, but I know I've talked to faculty and researchers who have looked at that and decided that that's their reason not to use it, which is legitimate. And I also admit I've talked to publishers, and I'm not sure if you're in the room or not, and asked them how their use of AI in their tools matches with their commitment to the UN global sustainability goals.
I got that name wrong, I apologize, but it is an issue. I mean, I live in northern Virginia, the land of data farms, and, you know, we look around and they're talking about modular nuclear power plants to power these things. That might be OK, but I don't know. I think there are a lot of implications with AI that go beyond just the use, and how we fit that into our views of the world.
You know, if that's important to you, then I think that needs to be a factor, because these things may destroy the world in a variety of ways, not just by taking over the world; the environmental impact alone is kind of scary in a lot of ways. So I wanted to add that I think AI can augment peer review in the humanities, but humanities peer reviews should be written by humans.
Great note to end on. Thank you so much, panel. Appreciate it. Thank you. Thank you. OK, guys, another quick break. Stretch your legs. Get the coffee.
Do your thing. Also, if you're starting to think about dinner, in case you missed the announcement: on your handouts is a QR code to a Google Sheet where you can sign up for a hosted dinner if you're interested. It's pay your own way, and we're all going to meet at the restaurant when the time comes. But if you're interested in joining a group informally for dinner, please check out that list.
Otherwise, be back in a few minutes for our next session. Thank you.