Name:
Publishers and Funders: Future of Manuscript Review and Proposal Evaluation
Description:
Publishers and Funders: Future of Manuscript Review and Proposal Evaluation
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/9962e240-c275-4dba-80c1-694e7192e821/videoscrubberimages/Scrubber_1.jpg
Duration:
T00H36M48S
Embed URL:
https://stream.cadmore.media/player/9962e240-c275-4dba-80c1-694e7192e821
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/9962e240-c275-4dba-80c1-694e7192e821/SSP2025 5-28 1500 - Industry Breakout - Prophy.mp4?sv=2019-02-02&sr=c&sig=K0Sjt1enyJyv56C5wLnRfyEG5gbFPU81L%2BF68BcmWyc%3D&st=2025-06-15T22%3A47%3A04Z&se=2025-06-16T00%3A52%3A04Z&sp=r
Upload Date:
2025-06-06T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
OK, good afternoon, everybody, and welcome to the session where we are trying to look into the future of publishing, but through the prism of two sides of the research triangle. The research triangle essentially has three edges. There is funding, which is an outlook into the future: we will do X, Y, and Z. There are the university researchers, which is the present: we are doing X, Y, and Z.
And then there is publishing, which is the past: we did X, Y, and Z, and these are the results. And then the story continues to go around in a circle. I have been preaching at events like this one that we cannot just sit here and discuss the future of scholarly publishing without involving the other sides of this triangle. And this year, we decided to do it ourselves.
So we proposed a breakout session where we would bring together funders and publishers, which I thought was a very cool idea. What I was not aware of, though, were the plans of the current US administration, which at exactly the same time started its crusade against, among many other things, research funding. In our small environment that led to a disaster, meaning that every US funding agency I talked to would say: no, we're not allowed to.
We cannot even think about going somewhere and speaking about anything; it's complete chaos. And the non-US funding agencies would say: yeah, sure, but we are not going to travel to the US at this time. So it was extremely difficult to put this panel together, and I am extremely grateful to you for agreeing to be here.
And in view of our current political situation, you may say: OK, who cares about 2030 if you don't know what's going to happen a week from now in 2025? To this, I have a story for you. I do my research in particle physics, and in particle physics we build experiments that, in the best case, involve about 25 years of planning. To put things in context, we are now discussing the next major collider experiment, which should start, if all goes well, around 2045.
And there is a plan for a major upgrade of it in 2060, and then a scientific program all the way into the 22nd century. So what I am trying to say is that science is a long game. No matter what happens in the current cycle or with the current administration anywhere, it is a long game. And that is why it makes sense for us to set aside the battles we are having over the future of research publishing, research funding, and research in general, and think about things five years from now.
What is it going to look like? With this purpose in mind, I invited three distinguished people. So, starting from my side: first is Sara Roy, representing the publishing side. She is Director of Open Science and Publishing Innovation at the American Institute of Physics, where she drives AIP's open science strategy, focusing on developing new publishing models and sustainable business strategies to accelerate AIP's mission toward research-focused open science.
On top of that, beyond AIP, she is a person many of you may have heard of, because she is one of the people behind the Declaration to Defend Research Against Censorship, the action that is going on right now in which we are trying to say no. (I am a researcher myself, so I am speaking about this as part of that effort, a very small part of it.)
No, we are not going to sit silently and watch it; we will be protesting; we will be taking our countermeasures. So thank you, Sarah, for doing this. My next panel member is Rebecca Kirk, Associate Editorial Director at PLOS. She, too, focuses on open science, and that is not surprising: these are people who think ahead about reforming science and bringing transparency to the whole research trajectory.
She is also part of a sustainability-in-higher-education initiative. Thank you. And last but not least, a representative of the funding sector: Tyler Diorio is the Chief of Staff at ResearchHub. And ResearchHub is something that demonstrates that you do not have to rely on a government's willingness to fund, or not fund, research.
You can do it; you can bypass government and do it yourself. It is a different type of funding agency. Tyler did his PhD in biomedical engineering at Purdue University with a focus on neuroimaging, and over the last three years he has been building ResearchHub. It was at the very last moment that the co-founder, Patrick Joyce, could not join, and Tyler was very kind to step in on very short notice.
There was also supposed to be a fourth person sitting over there, but at the last moment he could not join either. So, in this landscape, let me turn to the story itself. First of all, we will be doing this in the following way: as we discuss each question, everybody is welcome to cast a vote on each of the subjects.
You will see a QR code on each of the slides, and those of you who miss it, or who would rather do it later, will find the same voting in our app: if you open the app and go to our session, you will find the voting procedure there. We are interested to hear what you want to say, and we are, of course, going to make this information public so it can be reflected on and used for policies. So, without further ado, let me start the presentation with research question number one.
Namely: what is the main challenge today in scientific funding or in scientific publishing? Coping with the volume of submissions; properly assessing the quality of submissions; the speed of assessment; evaluating the impact of funded research or published manuscripts; or simply administrative burden and process inefficiencies? I will start with Sarah. Please give me your opinion.
OK, cool. So from the publisher perspective, I think the first two are essentially the same challenge: the unprecedented volume of submissions is making the ability to assess quality really challenging. And that ties to the funder one, so let me jump to the funder answer.
If I'm wearing my funder hat, the biggest challenge is going to be evaluating the impact of funded and published research. I think those two are linked. The challenge with volume is, I think, fundamentally undermining peer review and making the gatekeeping role, or the stewardship role, of publishers very, very hard to perform. And consequently, funders have less and less to fall back on in terms of understanding the actual ROI on what they're funding.
So, as a quick high-level answer, I'm choosing one, two, and four. I think we can talk that way; that's OK. I didn't know we could choose all of the answers, sorry. That's fine; we can change the rules of the game as we speak. I mean, for me, research integrity is non-negotiable.
So the top two are definitely tied together and are really important. But the thing that jumped out at me, in terms of a really huge challenge we are facing, is that element of evaluating the impact. If you think about impact as the positive impact on society more broadly, you can see how that suddenly becomes an almost impossible question to answer.
We've seen how perverse incentives in how we measure this have led to problems, ultimately because the volume of publishing and citations became the way we measure impact. I think how research influences policy is one really important element of thinking about impact: hard to measure, but really important. And then there is also the reuse.
What are the stepping stones to future research? Thinking about the other artifacts of research, things like data and code, and how they are reused by others to inform future research trajectories beyond the initial research question: those are elements of that societal impact, rather than the impact factor, I suppose. Thank you. Yeah, I agree entirely.
For me, the biggest one is evaluating the impact of the funding, because when we fund research we are not just creating these final publications, which are great, or even all the stuff that comes downstream of that; we actually create scientists who go out and act as nodes of information, interacting with all the other researchers in the world. Trying to track that, I don't currently know how you do it, but I think providing forums for discussion, at a minimum, is a good way to start.
But it is about building towards: what are we actually trying to fund? How does it affect science in the long term? And where are the nodes that actually bring about those changes? Thank you very much. I think we all agree that all of these things are, to some extent, challenges. So let's jump to the future. If the funding and publishing sectors could collaborate on one initiative five years from now, which would create the most value, in your opinion: common standards for researcher identification, and hence recognition of contributions to open science; joint solutions for addressing reviewer fatigue; a unified review recognition system;
or, finally, measures or dedicated mechanisms for reproducibility? So let's start with Tyler on that. Yeah, so for me it's enhancing the contributions to open science. I think one of the biggest issues we have in science is the reproducibility crisis. It is really difficult, even when I was doing my PhD, to recreate work that is so fundamental to your thesis; you kind of have to take their word for it at a glance.
And I think that mostly stems from the fact that the information available to researchers is just inherently limited. Even if you have the full text of the final publication, or the open-source code, or the phenomenal appendices that take you 20 hours to read, it is still not enough, because it does not tell the story of the progression of the research: how the researcher arrived at that method, the pitfalls encountered along the way, the whole story.
So I think if you incentivize contributions along the way, as the research is progressing, you can provide feedback through open peer review and discussion, and then, when the publication comes out, you have this nice, full story, in my kind of idealistic world, showing ideation, methodology, failure, failure, failure, success. Thank you.
Thanks. I think it will probably surprise no one that I'm going to choose open science, given that I work for PLOS. But there is that aspect of ensuring recognition of these outputs; thinking about the contributors throughout the research cycle is really important. The State of Open Data 2024 survey actually highlighted that the primary blocker to engaging with open data is the lack of credit.
So if that is the primary blocker and we've agreed that open data is important for reproducibility, if we can remove that primary blocker in some way, then we should be able to really advance things really rapidly. And five years isn't long. Like 2030 feels like a long way out. But if we could address that credit aspect within five years, I think the value we would see would be phenomenal.
Thank you. OK, I want the room to answer first. How many picked A? Can you raise your hand? B? OK. C? You can do what I did and vote multiple times. D? E? And so many of you didn't vote. So when I think about this, my heart says B, right? Open science is going to enable a lot of these things.
That's kind of my lens most of the time. But I'm going to take a different tack here. I think the fundamental challenge that publishers and funders are really struggling with right now is quality assessment, and the giant difference between peer review as done by a publisher, peer review as done by a funder to award grants, and peer review done internally in government agencies,
which, up until very recently, actually focused on reproducibility. And I think if we got to a world where you didn't have that problem, because reproducibility was something we actually spent a lot of time and a lot of resource on, and that's a world where you have far fewer papers (the world we're in now can't do that), then you're actually getting into a space where we're really focusing on the science, and the science is enabling more science.
So even with open science in my title, and PLOS being a former home of mine, I actually don't think open science in the current paradigm we live in solves for this. I think open science as it exists right now is actually making some of this worse, because it's perpetuating more and more digital objects being out there that are often not reviewed, not interoperable, and not discoverable.
So I think open science has a place. But until we solve the rigor of the peer review problem, and embedded in that is recognition and fatigue and identification, I don't think open science is the solution yet. So if I could collaborate on an initiative with a funder, it would be to work on building reviewer networks that are empowered to review at that higher level, where they are looking at the data, they are looking at reproducibility.
It's more than just what we do now, limited as we are by our resources. Thank you very much; that's a very interesting perspective. So let's move on, continuing on the same topic. What kind of information sharing, within or between sectors, would be the most beneficial: quality scores of reviewed grants or reviewed manuscripts being shared with the public;
availability of expertise in specialized fields; reviewer performance data; or, finally, suspicious behavior of researchers and/or reviewers? And again, let's start with Rebecca. I'd love to be optimistic and cheery and pick one of the others, but I actually think that, in terms of information sharing, the suspicious-behavior one is already really showing its benefits.
So I'm kind of cheating because we know it works. I mean, I think we are increasingly facing attempted manipulation of the whole publication process at scale. It is something that is a huge risk to our industry. But more importantly, I think it's a risk to science and the scientific endeavor. And I think the sharing between publishers, between funding agencies, between all of the actors in this space, done appropriately with all of the correct measures around careful data sharing, obviously, is an incredibly powerful tool.
And I think if we can continue to do that, if we can collaborate in this way. We have a real opportunity over the next five years to actually do something meaningful to clean this stuff up, and that is going to be for the benefit of research just writ large. So I picked the not very cheery one, but I think it's an optimistic outcome. But that's completely fine.
The goal is not for you to pick any particular answer; the goal is to keep the discussion within a certain frame and still let you improvise. So, Sarah? Oh, would you say the same? Thank you. Let me add a little bit more. The only reason I say this is that I had the privilege, at UKSG, of having a conversation between funders, libraries, and publishers on this very issue.
And the number one thing that became very apparent is that there is no easy mechanism for funders, publishers, and institutions to share with each other how these behaviors are perpetuating. There are issues around privacy; there are issues around not wanting to attack people's reputations without a tremendous amount of evidence; and there is no one external body that everybody looks to.
So everybody kind of just wants to toss the hot potato and walk away, and the burden that puts on publishers is, I think, becoming unsustainable. So, Tyler, you have the mic. Yeah, I mean, agreed; we see this issue a little bit with ResearchHub. For those who don't know, we are a discussion platform for scientific research, and occasionally there will be users who come on with some bad intent.
So knowing who those users are ahead of time would be phenomenal. I do think another issue, too, is getting quality scores on funding. This actually matters a lot to nonprofits and private individuals who fund research, because they might have a loved one, or some other motivation, for wanting to fund a specific disease like Alzheimer's. They have the money, but they don't necessarily know where to put it in order to get the best outcome.
And so they have come to us in the past, because we can connect them with researchers who help them craft a sort of proposal area and even find individual people to target. But frankly, they have a really hard time telling a good researcher in the field from a truly exceptional one; you really need to know the field to pick out the 100th percentile. So I think providing that context, through open peer review of grants, could actually be really helpful too.
Thank you very much. To this part of the discussion I should add the following observation, which I think is very important. Science used to be very elitist, very meritocracy-based. Reputation was everything. Research integrity problems in that kind of science were impossible, because you lose your reputation once and it is gone forever; it was a small-world network of ten or twenty thousand people.
Everybody knew each other, if not directly then through one handshake. You cannot cheat in that network, because it becomes known to everyone. Today we are speaking about millions of researchers, and there are many people in this world who are researchers in my field whom I have never met and with whom I share no coauthors or common friends. That shows how wide this has become.
And that is why the reputation mechanism, which used to be the main driver of trustworthiness in science, is essentially gone, simply through the size of the enterprise. Yet we are still operating as if it were there, just by inertia, and that is what brings these kinds of problems into the game. So, moving to the next question: industry advancements by 2030. I know it is almost impossible to predict.
We may hit the singularity by 2030, after all. But assuming we do not hit anything really drastic, which industry advancements are likely to change the sectors to the biggest extent? AI-led research and quality assessment; blockchain-based or similar systems of contribution recognition; a very fast grant approval system,
where, as a provocation, I put two weeks, taking my inspiration from the Fast Grants initiative of the COVID time, when grants were supposed to be approved within two weeks and paid out immediately after; incremental forms of publication, nano-publications, or whatever you want to call them, completely different kinds of publication, Instagram-like or Twitter-like, but still real research, not fake science;
and finally, novel mechanisms of financing science. Which of these do you see as the biggest change in the sectors? Who wants to start? OK, yeah, I think it's a mix of E and F for me. One of the things we're trying to do is to fund pre-registrations, which are open-access grant applications, experimental plans. And I think this is a newer way of funding science, because it starts with peer review at the very beginning of the research, not at the very end, when you've finished everything and can retrospectively defend it.
It puts everything out front immediately. And I think, in order to do that, we have to have a different type of publication that is incentivized for researchers. Sure, there is a big incentive for researchers to write this pre-registration if they can get funding through it, but it would be awesome, too, if they could get citations that actually help their career, and if future researchers could go and look at those incremental publications.
And then one separate anecdote: when I was doing my PhD, it was a lot of computational work, and that is a pretty niche area. So I was not able to find a lot of really helpful information in the literature that actually translated into good projects. I found it on discussion forums where people talk about code, and in the issues on their GitHub repositories, and you cannot really cite these things, except maybe anecdotally to your peers. But they were, I would say, as important to me as the most important publications in my field for actually being able to create those simulations.
And so I think probably my answer is a bit of E and a bit of F as well. At PLOS, at the moment, we are conducting a research and design project, funded by the Gordon and Betty Moore Foundation and RWJF, so it is a really good example of funders and publishers working together.
We are looking at identifying the components of a new publishing model, one founded on open science principles with a business model to support it, so whatever that looks like in terms of novel mechanisms of funding science and funding publishing. We have held initial convenings; that is the stage we are at. We have been talking with researchers, funders, and institutional leaders, and the discussions have centered on the shift beyond research articles to a new model of publishing.
Would that be described as micro-publishing? That remains to be seen, but it has been described by the people at the convenings as something called a knowledge stack: all the different bits of knowledge, stacked together, that create the research outcome, where you can then publish, showcase, and give credit to all of those individual pieces that go into answering a research question and advancing science.
So it is early, which is why it sounds a little lofty and inconclusive, and that is actually the point of the research and design stage of the project. We will be reporting on it as results come out, but I am excited about it, so hopefully there will be more to say soon. I think, in terms of advancements, from the publisher perspective it has to be AI: as a tool for bad actors and as a tool for best practice, on both ends.
I think we are going to be talking about that quite a bit at this conference. From the funder perspective, I think F is actually interesting, though maybe not novel mechanisms for financing science so much as novel players financing science. In the US especially, we are starting to see overtly political organizations finding ways to finance science and launch journals.
Is that a flash in the pan, or is it something we will see more of going forward? Without necessarily giving it a normative judgment, there is going to be, I think, a politicization of science funding that, at least in the US, we are going to have to grapple with. So that would be my answer from the funder side.
Thank you very much. I only want to note here that this kind of incremental publication goes hand in hand with a completely different approach to contribution recognition, because if you are publishing little bits of information and many people contribute before the result emerges, how do you recognize that? Who gets the credit? That is how the blockchain buzzword appeared there.
So, approaching the end of the story: what metric, what indicator, would best signal successful cross-sector collaboration, again by 2030? Expertise availability; reproducibility rate; a collaboration score; the rate, or amount, or share of crowdfunded research; or, finally, research discoverability, the ability to find what you need to find?
So again, who wants to start? Yes, definitely reproducibility rate for me. If a publisher and a funder, or any kind of cross-functional, cross-sector collaboration, could demonstrate to the community how to do that in a way that is meaningful, I think that could be a real paradigm shift for the industry. But alas, that does not seem likely.
Yeah, I don't love metrics. Metrics tend to become targets and tend to not be used in the way that we intended. So I struggle a little bit with the whole process. But in terms of reproducibility, if we could give credit for reproducibility and give credit for providing the information that allows things to be reproducible, and we could think about that in a cross-sector way that's measurable and gives that credit.
There would be real value in that. But yeah, metrics become targets, targets become perverse incentives, and everything goes a bit sideways. Yeah, I agree totally. I think it is Goodhart's law: when a measure becomes a target, it ceases to be a good measure, or something like that. But yeah, agreed totally.
I think it is all fundamentally the reproducibility rate. If the science we are doing is not reproducible, then we are producing negative science on net, because everything built on top of the initial irreproducible research is invalid and probably needs to be redone, or at the very least reanalyzed. I would love to see that happen through crowdfunding of research, but ultimately, however we can do it, I think reproducibility rate is the target.
Thank you very much. Finally, this was supposed to be an optional question, but we have a few more minutes. It is probably one for the audience, but you will have to start: what kind of initiative would motivate you, personally, as a player in this game, to contribute to cross-sector improvements?
Simply established dialogue between sectors, knowing that they are no longer separated and silent but actually talk to each other: would that encourage you to join the conversation? Working groups on specific shared challenges; pilot programs testing collaborative approaches; development of voluntary shared standards; or, finally, research on improving review quality and efficiency, something which is clearly dear to my heart at Prophy.
So I suggest you start, and the audience is welcome to use the microphone to offer a perspective if you have one. Thanks. Yeah, for me personally, and this is thinking about an individual driver, it is working with others on shared challenges. I do this in two different ways. As an SDG Publishers Compact Fellow, which is a mouthful,
I have been working with people across funders, librarians, publishers, and editors to really start to drive meaningful change, or at least it feels meaningful, and we are trying to achieve it by 2030, which was the target here, so that is helpful. And SSP is also a home for me, the DEIA committee in particular. I really feel that the different perspectives brought to the work done by that committee, and the things I have learned from all of the different people I have met along the way, have been personally valuable.
So, yeah, as a selfish motivation, I think that is probably my personal driver. For me, established dialogue between sectors is where I have been working for a while now: engaging with institutions, funders, publishers, and service providers to figure out how we can deal with the collective, complex-systems problem that we have.
That is personally the biggest motivator. Yeah, I am a big fan of just doing stuff, so any kind of pilot program: if you want to try things, definitely feel free to reach out to us; we are happy to make them work. But in general, just testing things in the open, seeing how they work, and making sure you are not damaging science in the process would be a good way to try.
Thank you very much. Tyler, please. Hi, I am Tyler Beck from NCATS, a part of NIH. My choice would be C, pilot programs, and I have some pilot programs I am working on right now, trying to find collaborative approaches. But my real reason for standing up was that I want you to think about the word "voluntary" in D, because one of the things I am seeing at NIH right now is that you may not need to be so quiet when it comes to things like this: standards need to be standards, not voluntary standards.
This is actually a time when it may be possible to push for true standards in the way we do publishing, because of the administration. We have to take advantage of some of the things we are dealing with now to push for something stronger than voluntary standards. Thank you. Does anybody else want to say what would motivate them personally to contribute to cross-sector improvements?
I actually have a question that I am really interested in the answer to, and I realize I may be jeopardizing my turn at the open mic. If you are a funder, you pay the money, you donate money, and then research is done. At the end of that process, the scientist writes up a story, a manuscript, and submits it to a publisher. The publisher then sends it to peers to review.
That is the quality process. Why doesn't the funder, who has invested money in this bit of research, have a say in the perceived quality of the output? Why is the funder not involved in the review? Because they don't want to be. Yes, yes, the answer is that they don't want to be; that would essentially mean increasing their capacity at large.
I mean, you are completely right. That makes sense, because then funders could review the impact of their grants as part of the process, as they see things being published from research that stemmed from their funding. That would be one example of an overhaul of the system and of much, much better cross-sector collaboration.
So it is a wonderful idea to my taste, but it does have an element of, sorry, marking your own homework. There is an incentive to prove that you made the right decision in the first place. And if the people who decided where to put the money then also determine how well that money was spent, and there are no other actors in that process, you end up with a circular loop where people just pat themselves on the back and say: didn't we fund this
very well, let's put it out there, with no other funders or other actors checking it. But I also think you just have to be really frank about capacity. Funders can barely handle everything they have to do now; the reviewers for grants are already incredibly taxed. Actually, I personally think that objection would go away, because individual scientists are so egotistical they have no problem looking at somebody else's work that they funded and saying: this isn't good.
So I actually don't worry about that, the quality component; there is just no bandwidth. I mean, funders do not want to take on any more work. I wish it were an ideological thing; I think it is just that nobody wants to take on more work. Yeah, please go ahead. Yeah, so I actually have a somewhat counter experience with some of these nonprofits, which are non-traditional funders.
They are not the NIH or the NSF, but these people care deeply about the research that comes out of their money. It is not that they want to be the gatekeepers of the final outcome, but they want to understand: if they put a million dollars into Alzheimer's research, what happens to it? What research is done? How many students have come out of it? What incremental gains have been made?
And for them it matters a ton. Maybe it is because they want to go to the golf course and tell their friends they have done something cool, but more likely it is because they actually care very deeply about this disease and want to show that they have made progress against it. I will just build off what Tyler was saying, which is that I think we are seeing a push for the same thing: look at the metrics. And I agree,
metrics are a bad word, but look at what the outcomes have really been for the research we have been funding at the NIH for decades, and at how we can show that the research resulted in outcomes that are tangible and palpable. It is not easy; it is not something built into the system we have, which really just leads to publications, right?
We have to look at things like how many drugs were developed based on basic research done 25 or 30 years ago. We cannot look at the most recent work for that, because it takes that long to get from initial work to a drug. So it is difficult, and we are seeing pushes toward that at NIH that we have not seen before. Yes, thank you for that perspective.
Is there anybody else? I mean, we are somewhat over time, but I do not think there is another session until four, so does anybody else want to add something to the discussion? If not, then I would like to thank everyone. There was supposed to be a slide with a thank-you, but it got eaten by the IT technology. So thank you very much for coming. Panel, thank you.
That was an incredibly interesting conversation. Thanks a lot.