Name:
New Directions in Tech
Description:
New Directions in Tech
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/25be1c36-2946-4d53-b87b-db7eff37ecbd/thumbnails/25be1c36-2946-4d53-b87b-db7eff37ecbd.png
Duration:
T00H58M21S
Embed URL:
https://stream.cadmore.media/player/25be1c36-2946-4d53-b87b-db7eff37ecbd
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/25be1c36-2946-4d53-b87b-db7eff37ecbd/new_directions_in_technology_2022 (1080p).mp4?sv=2019-02-02&sr=c&sig=dKfXdVyPJCenAelUOlJMxhW4KfnaWQTC4DdJ5WIcA6k%3D&st=2024-11-20T04%3A21%3A31Z&se=2024-11-20T06%3A26%3A31Z&sp=r
Upload Date:
2024-04-10T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Good morning, everyone. We are going to get started. Welcome to day two of the New Directions seminar. Today is a short day; it's a two-session day and we break at lunch. I am Sophie Rice. I'm the Vice President and Executive Editor at Mary Ann Liebert, Inc., publishers.
I'm also the co-chair of the SSP Education Committee and have had the honor and pleasure of serving as a lead of the New Directions seminar this year along with a fantastic team. I'd like to thank, of course, Simone Taylor, Alexa Colella, Heather Staines, Jeff Lang, Dina Camus, Jordan Schilling, Ben Mudrak and Walker Swain.
So thank you all so much. They are the team that created this entire seminar, so kudos to them, and of course to Mary Beth Barilla, who has guided us along the way. So thank you, Mary Beth. I would also like to thank and acknowledge our sponsors for this event: Cadmore Media, Morressier and Silverchair.
Your support means more than we can express, so thank you so much for supporting the entire seminar. As a quick reminder, just some housekeeping: the Twitter hashtag for the meeting is #SSPND2022, so if you are live tweeting or tweeting anything from the event, that is the hashtag to use. For those here in person, the Wi-Fi network is on the little white cards right there on the tables.
Also, please remember to silence your mobile devices, and if you are logging into the seminar using a laptop here in the room, please remember to mute your laptop. Attendees are encouraged to wear masks at all times while indoors unless actively eating, drinking or presenting. Virtual attendees, if you need technical assistance at any time during the seminar, please use the Ask the Organizers feature under the Community tab in Whova or email the organizers. Closed captions have been enabled for all sessions, so please use the CC button on your Zoom toolbar to activate these.
Finally, we would like to remind everyone of the SSP code of conduct. We are committed to diversity, equity and providing an inclusive meeting environment, fostering open dialogue free of harassment, discrimination and hostile conduct. We ask all participants, whether speaking or in the chat, to conduct themselves in an orderly, respectful and fair manner.
For more information on SSP's code of conduct, please see the FAQ in the Whova app. And now for the first session of the day, New Directions in Tech. I'm very pleased to introduce Jeff Lang to kick us off. We're going to take seats here over at the table. Make sure everyone's here. Good morning, everyone.
Good morning. All right. Thank you again to Sophie for setting everything up today, and to Mary Beth for all of the arrangements. It's really been a fantastic seminar, and I hope you've been enjoying the day. This is the new tech trends session this morning, and we have three rising technology leaders joining us to talk about the technology landscape and recent advances in our industry.
This is especially relevant given that the conference is about the new possible. I was really amazed yesterday listening to the panelist John Fisher casually describing how he grows human tissue in his lab. It seems fascinating and futuristic to me, and these entrepreneurs are building tools that are equally magical in my mind. And yet eventually we'll be reflecting back on this as just the way that we all conduct our business.
Each of these companies has a different take on automating parts of the manuscript workflow. Their tools read the manuscript and make decisions, in much the same way that a computer used to be a person who manually made calculations. I wonder how the terms editor and reviewer will change as a result of some of this work. The panel is going to help us understand the potential for this technology and help you think about how to include it in your manuscript workflow process.
To my left here, your right, is Dr. Leslie McIntosh. Leslie is the founder and CEO of Ripeta, a company formed to improve scientific research quality and reproducibility, now part of Digital Science. Ripeta leads efforts in automating quality checks on research manuscripts. She's an academic turned entrepreneur: she has served as the Executive Director of the Research Data Alliance US region and as the director of the Center for Biomedical Informatics at Washington University in Saint Louis.
And I hope we have some of my colleagues on the line here as well; I'm seeing some folks on video. So please also welcome James Harwood. He's the founder of Penelope.ai. In 2015, he was tired of reading long author guidelines and poorly reported research articles, so he saw an opportunity for software to help authors adhere to journal and scientific guidelines by giving immediate, personalized feedback.
He studied neuroscience at Oxford and works closely with the EQUATOR Network to promote transparent reporting in clinical research. And Dr. Anita Bandrowski is the founder of SciCrunch. She also has a background in bioinformatics and neuroscience, so we're getting some trends here already, and she runs SciCrunch.org, the Neuroscience Information Framework, the Antibody Registry and the Research Resource Identification Initiative.
So we're going to have a conversation today. You're going to hear a bit about each of these technology leaders, they're going to talk about their own organizations a little bit, and then we're going to start to talk about how their technology is changing our industry. So can we have a first look at our slide changer here?
There we go. This is all of us on the panel, and first up is going to be Leslie. All right, thank you for having me; thanks for having me on the panel. I managed to get one slide to you and forgot to put my name on it or my Twitter handle. So, at mach, and I think Ana knows it and puts that out.
But thanks for being here. So Ripeta actually comes from the Italian word; it's the polite form of "repeat." If you want to ask somebody to repeat what they said, you say "ripeta." You don't have to say it in Italian, but the name was available, and that's very important. I was very much into reproducibility when I started the company, about making science better. I came from informatics, and we've really honed in on two things now with Ripeta. It's much broader than reproducibility, because it has really moved into trust in science.
For those of you who are Americans, think about it like a Carfax report, right? You want to look at that before you actually buy a car, so that you can see if it has an engine and if the engine has any problems. On the left there, that's what we do, one paper at a time. That's the pre-check, the automated way of doing peer review. But it's not the peer part yet.
It's the pre-peer review. We've been talking about the issues and challenges of finding peer reviewers. Let's make it easier, so they can get to the science of it and not have to worry about things like: are the conflicts of interest in there? Are the ethics statements there, the data availability statements, the things that are very, very important to an overall picture of integrity but not necessarily fun to check off all of the time.
And we've automated that. What we've now done, because we became part of Digital Science, is take those algorithms we developed and apply them to Dimensions. We now have them over 22 million documents in Dimensions, about to be 33 million as we expand to the preprints in Dimensions as well, so that you can get insights. I'm going to tell you one of the coolest insights we found recently.
What country do you think actually does the best, by institution, in having their data availability statements, their data statements, the structure of their publications? If you're thinking it's in North America, you're wrong. If you're thinking it's in Europe, you're wrong. If you're thinking Africa, you are correct. It is Ethiopia.
Ethiopia leads for institutions in all of these. And we found that institutions in other countries that work with them, like in England, are doing better too. So we're seeing great things, and we can see this through that data. So that's my introduction. Thank you, Leslie. And since this is the new possible and the tech trends session, we're pushing the envelope a little bit.
We have some of our panelists joining us remotely today as well. So, James, you should be on the line and showing up here, and we should have your slide as well. Take it away, please. Hi, thanks. It's a pleasure to join from currently gray, blustery England. And yes, I guess my story is that as a researcher I hated author guidelines; I loved research.
As soon as it came to doing anything sort of administrative, I'm very lazy, and I didn't want to read through these very long, confusing author guidelines that could be thousands of words and took a ton of time to wade through. So I saw an opportunity for software that checks the manuscript against journal guidelines and gives immediate feedback, and that's how Penelope was born.
Some journals, like BMJ Open, have it as a pre-submission check, so it's something you can do even weeks before you decide to actually submit to the journal; you can come and use the tool. Other journals have it as part of their submission workflow, so every single submission gets checked, and anything that's an important possible failure gets flagged.
So the author can act on it immediately without having to wait for an editor or any other reviewer. Either way, authors receive a report that looks something like this: they get a marked-up version of their manuscript with a little summary of all the checks that ran and the things that may need attention at the very top. Then if you were to scroll down, you'd see some more extended feedback.
We have over 100 different checks now, so we try to cover as many editorial checks as possible, including structure, abstract subheadings, title page elements, ethical declarations, referencing, tables, citations. We also recommend reporting guidelines for medical research when they're appropriate. And journals can configure everything, so they can choose exactly which checks they want to run and the wording of the feedback they want to give.
We have a publisher-pays model, which is $1 per manuscript, and we have an author-pays model, which is free for the journals. The value for the publishers is that they get better quality submissions that meet their guidelines and faster publishing times, and the value for lazy researchers like me, or just researchers in general, is that it saves some time. They don't have to read through the lengthy instructions, and it also prevents bounce-backs when you've not done something that you should have done because you hadn't seen it.
Yeah, so it's a win-win. And I think that's it for me and my introduction. Great, thank you, James. And Anita is joining us as well, and here is Anita's slide. She was on just a moment ago; it looks like we might be losing Anita. Oh, well, maybe she's reconnecting. Why don't we go ahead and come back to this slide later?
We'll advance one more here and start our conversation, and you can meet her soon as well. So, Leslie, could we start the conversation here, thinking about this difference between what people should be doing and what software should be doing? What are some of the benefits that you see of automation in manuscript workflow and review? So time saving, time and reputation, I think, are the biggest things with automation.
And one of the things I want to be clear on is that what we do, and I think all of us do, is really computer augmentation of human decision making, right? This is not a computer trying to make a decision. It is giving information to editors or to authors to say, look, is this really what you want to have? Is this your study objective? You know, why don't you read this over. Or: we couldn't find an ethics statement.
That saves time so that the editors or the authors can focus on what they really need to focus on, which is hopefully the science, if it gets to that point. And if not, then they don't have to look at it. Just to give you an example, we were able to process in 4 minutes what took humans 400 minutes.
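To illustrate the corpus-level side of this, where the same automated checks are aggregated across many documents to surface insights like the Ethiopia finding above, here is a small Python sketch. The record fields and numbers are invented for the example and are not Ripeta's or Dimensions' actual data model.

```python
# Minimal sketch of aggregating per-paper check results by country.
from collections import defaultdict

def summarize_by_country(records):
    """records: iterable of dicts like {"country": "Ethiopia", "has_data_statement": True}."""
    totals = defaultdict(lambda: [0, 0])   # country -> [papers passing, total papers]
    for r in records:
        totals[r["country"]][1] += 1
        if r["has_data_statement"]:
            totals[r["country"]][0] += 1
    # Share of papers with a data availability statement, per country.
    return {c: passing / total for c, (passing, total) in totals.items() if total}

sample = [
    {"country": "Ethiopia", "has_data_statement": True},
    {"country": "Ethiopia", "has_data_statement": True},
    {"country": "UK", "has_data_statement": False},
    {"country": "UK", "has_data_statement": True},
]
print(summarize_by_country(sample))   # {'Ethiopia': 1.0, 'UK': 0.5}
```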
Now, this is not to say that those humans are going to be out of jobs. It's that, especially with the large volume of papers, they can then focus on what they really need to focus on, to hopefully improve science. Are there any risks that you're concerned about here, or is this all good news? I always think there are risks.
One of the things, and the reason that I emphasize computer-augmented but human decision making, is that some of the algorithms are trained on things that may also look like what a young researcher would do, maybe a single author or only two grad students. You know, the science isn't as clean as it could be. And we do not want to discriminate against those younger researchers, or researchers in countries or areas that don't necessarily have the same support.
So we don't want to have those biases, and there may be some that we're not looking at. We try to get a global community to look at this too, so that we can have different perspectives on the way that we're looking at things. So there are some risks. The risk right now is that I'm uncovering a lot of things that I don't want to see, as in the publication data, in the manuscripts, already.
Well, we just introduced an author check, and this is to understand whether the authors are real, let's put it that way, which not all of them are. Are they perhaps too closely connected to the reviewers and to the editors? There are a lot of things we have to clean up in science, and those processes really aren't in place. So I can quickly find those things out, but actually addressing them takes time.
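A simple way to picture an author check like the one Leslie describes is as a proximity test between the people on a paper and the people reviewing it. The sketch below is an illustrative heuristic only, with made-up fields, and is not Ripeta's actual algorithm.

```python
# Minimal sketch: flag reviewers who share an affiliation or a recent
# co-authorship with one of the manuscript's authors.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Person:
    name: str
    affiliations: Set[str] = field(default_factory=set)
    recent_coauthors: Set[str] = field(default_factory=set)

def proximity_flags(authors: List[Person], reviewers: List[Person]) -> List[str]:
    """Return human-readable flags for editor review; the editor still decides."""
    flags = []
    for r in reviewers:
        for a in authors:
            if r.affiliations & a.affiliations:
                flags.append(f"{r.name} shares an affiliation with author {a.name}")
            if a.name in r.recent_coauthors:
                flags.append(f"{r.name} recently co-authored with {a.name}")
    return flags

authors = [Person("A. Author", {"Univ X"})]
reviewers = [Person("R. Reviewer", {"Univ X"}, {"A. Author"})]
print(proximity_flags(authors, reviewers))
```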
This sounds to me like when any of you have gone through the process of adding additional tagging to some of your XML and manuscripts: sometimes you find that the data you thought you had isn't quite as clean as you wanted. James, is that something that you're seeing as well, specifically about data cleanliness, or just this,
what people are finding now that they have these better tools to interact with the manuscripts? Probably, yeah. Well, I guess it's really exciting that automation sort of opens up all these doors; previously it wouldn't have been feasible to delve so deeply into all of these checks. But now we can cross-check names against existing data sources.
And we can do similar things with funders and grant codes. So I guess automation is great because it speeds things up, but it also opens new doors and means that we can do new stuff that we couldn't previously do. Yeah, so back to this benefits question: what kinds of things are enabled now that you just weren't able to do before this technology?
Well, I want to maybe just unpack what we mean by automation, because those of us on the panel are all into checking the content of manuscripts and research integrity; we're kind of aligned. But there are also other automated tools that I just want to give a nod to, whether it's automatic journal recommendations, peer reviewer recommendation systems, automatic fraud detection or image duplication detection, file type converters or metadata extractors, and all these things.
I think I would count all of those as automation. And what's interesting to me is that not only does it allow publishers to speed up and enhance their current practices, but it also allows them to start offering new services and maybe catch authors at earlier time points than they previously would have been able to. So if you offer some of these tools for free and you make them accessible before submission even begins, then suddenly
you can sort of start interacting with customers much earlier, because it's not really costing you anything to do that. So you get new things in terms of the depth of service, and you also get newness in terms of the breadth and the time span of service, I guess. So these are tools that are going to be interacting with staff at the publishers. They're going to be interacting with the volunteers, be they the researchers, or cases where the editors are volunteers as well.
They're going to be potentially interacting with the authors as they're submitting manuscripts. Are there any considerations there about having these tools interact directly with all of these stakeholders? Yeah, that's something that I've thought about a lot. I have always wanted Penelope to be an author-facing tool, because to me it makes sense to give feedback early to an author so that they can act on it.
And when I first started working with publishers, I think that was quite a new idea, and they were really thinking of it as an editor- or reviewer-facing tool, perhaps because they just hadn't done it before. But now we have features where you can do the configuration differently for the different roles, so you can give one set of feedback to authors.
And then maybe an editor is more interested in fine-grained feedback, and for a reviewer, there might be some things that you want to flag that you maybe don't want to flag to an author. So if there are really genuine concerns about dodgy statistics or fraud or something, and you're worried about authors maybe gaming the system, then you maybe just want to keep those kinds of checks for the internal team.
So yeah, if you've got all these different user groups, you can definitely offer a tailored service to each of them. And I think that's important, because there are so many different things that we're checking for, and if what we ultimately want is behavior change, it's appropriate to give different kinds of feedback to different kinds of people at different times and in different ways to get that behavior change.
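One way to picture this role-based configuration is as a visibility map from finding categories to audiences. The sketch below is a hypothetical illustration; the roles, categories, and findings are invented for the example and are not Penelope's actual configuration model.

```python
# Minimal sketch of routing check results to different audiences: authors see
# actionable guidance, while sensitive integrity flags stay with the internal team.
from typing import Dict, List

FINDINGS = [
    {"check": "missing_ethics_statement", "severity": "author_fixable"},
    {"check": "abstract_over_word_limit", "severity": "author_fixable"},
    {"check": "possible_statistical_anomaly", "severity": "internal_only"},
]

VISIBILITY: Dict[str, List[str]] = {
    "author":   ["author_fixable"],
    "editor":   ["author_fixable", "internal_only"],
    "reviewer": ["author_fixable", "internal_only"],
}

def report_for(role: str) -> List[str]:
    """Return only the findings this role is configured to see."""
    allowed = VISIBILITY[role]
    return [f["check"] for f in FINDINGS if f["severity"] in allowed]

print(report_for("author"))   # the statistical anomaly is kept for the internal team
print(report_for("editor"))
```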
Can you talk a bit more about behavior change? Which behaviors are you thinking of? Um, I guess, well, for me the most salient example is that we do a lot of work with reporting guidelines. These are guidelines for clinical research, typically. The way that authors generally use them is as a checklist on submission.
So it's sort of a way of declaring that you have complied with these guidelines. And what we found is that prompting authors to fill out these checklists at the point of submission does increase the number of authors who submit the checklist. So that's great, but they don't actually go back and edit their work. It was a bit disheartening, really.
So then what we realized is, OK, for an author to actually act on this advice and on the results of this check, they need time, and we need to catch them at a moment in their research where they are motivated and able to act on that feedback. So for some things you need to get them earlier. But when you're talking about, say, formatting the abstract, I think that can be done later, at shorter notice, and it's less of a burden on the author.
And they also know that they're probably going to get turned away if their abstract's in the wrong format, so they're more motivated. Does that answer your question? That was a really good example. Do you have any thoughts as well, Leslie, around how these new technologies will be interacting with people and stakeholders in publishing?
Taking it from a different perspective: like James, I had started out thinking I would target researchers and work with them on improving things. I've really moved more towards working with publishers, working with funders, working with institutions, at a higher, or I should say a different, organizational level. One of the things that I have found is that obviously there's behavioral change between the different stakeholders.
When we started putting data together for funders, they started looking to see which publishers were adhering to their policies, and then it changed things. I won't name the one funding organization, but they were very specific: well, then we need to go to these publishers, because they have our data shared just like we wanted, in line with our policies. So it changed their behavior and what they were doing.
It wasn't something I was after necessarily, but I'm just saying it happened. Excellent. And I think Anita has joined us now as well. Thank you so much. We're going to step back for a second to her slide so that Anita can introduce herself and her group. Hi, so sorry about that. I had some technical glitches along the way this morning, a little bit unexpected.
So SciScore is fully functional and working within eJournalPress and Editorial Manager, and one of the things that we're doing is helping AACR and a lot of our other colleagues in the journal space to enforce some of these guidelines.
So, for example, the American Physiological Society actually does not give the full report to the authors, unlike AACR; AACR does give the authors the reports right away. What the American Physiological Society does with their journal submissions is they actually go ahead and reprocess those for the authors as an editorial checklist or review.
And we are starting to see what the differences are between the different delivery methods being used for these reports. So thank you, and sorry again for my technical glitches this morning. In any case, the SciScore tool looks over the manuscript and creates a report which also has a score, and the intent of that score was to give editors a very easy number to enforce.
I am actually very pleased that our colleagues at AACR have now begun to enforce the score as a means of improving compliance with these checklists. Of course, that doesn't happen right away with a journal; the journal has to trust that the score is adequately representing better reporting. So what has happened is that AACR, at least, basically tells authors, as a compliance-checking mechanism, that they may not publish with a score lower than a four, which actually does do that compliance checking.
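A score-threshold policy like the one Anita describes can be pictured as a simple triage rule in the submission workflow. The sketch below is illustrative only; the threshold value, labels, and function names are hypothetical, not SciScore's or the journal's actual implementation.

```python
# Minimal sketch of enforcing a rigor-score threshold before a manuscript proceeds.
MINIMUM_SCORE = 4.0   # hypothetical policy: no publishing below a score of four

def triage(manuscript_id: str, rigor_score: float) -> str:
    """Decide whether a manuscript can proceed based on its automated rigor score."""
    if rigor_score >= MINIMUM_SCORE:
        return f"{manuscript_id}: proceed to review (score {rigor_score:.1f})"
    return (f"{manuscript_id}: return to authors with the report "
            f"(score {rigor_score:.1f} is below {MINIMUM_SCORE})")

print(triage("MS-0001", 6.2))
print(triage("MS-0002", 2.8))
```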
But it certainly didn't happen at the beginning; it took over a year, actually, for us to get to that point. So, Anita, I hear you and the other panelists all talking about building trust: making these tools available at first, sometimes for internal projects, eventually making them available to external users, the authors, and then perhaps also relying on their output to make decisions, or to help the editors and the reviewers make decisions.
And I know you've all spoken to me, when we were having our planning session, about not just the potential for helping the workflow within the publisher, but helping the interaction of the scholarly research process across the industry, with a specific focus on research integrity. That's been a big topic for folks who are paying attention to Peer Review Week this week and taking a look at all the recent publications in the Scholarly Kitchen about research integrity.
So I'm curious about your thoughts here on that build-up of trust. How are we going to use the trust that we're building in these tools to advance research integrity? So that is exactly what this will take: it will take the use of the tools. When I started using the first version of our tool, I didn't let any of the authors whose papers I was working on know.
I became the methods reviewer for one of our journals, the British Journal of Pharmacology, and I looked at every single paper coming through. I looked at the methods section and I began to run the tool. Then I basically reformatted the output into a peer review report. Essentially, I'm also a scientist, so it wasn't that hard for me.
I dealt with the authors one on one. We went through hundreds of papers during that year that I did this, and at the end of that process my tool had gotten better, to the point where it was actually producing a report that, while not better than what I'm able to create, was within the right ballpark. So that takes time.
And that is what happened for me when I began to trust this tool. Once I was basically able to trust this tool, I came back to the editorial board and said, look, I now trust this tool; it does roughly what I do. And that was, of course, the first version. We've improved it since then, but that is how I started to trust that the tool was actually giving me reasonable results.
At least in terms of our tool. I haven't played with Penelope at that same level, and I haven't played with Ripeta, but I trust our tool because I had to use it in my own editing, essentially, for this one journal. Then I was able to give it to the journal editors and they began to also trust the tool. As we moved on to the next journals, the next society publishers, they began to use the tool, which is great, but it takes a little bit of time for them to get to know the reports, to understand the reports, and then to trust that those reports are actually giving them something that is of value and is reasonable.
And I think, you know, it's a process. Building trust is a lengthy process. It's not something where you see one report and you're like, wow, this is really amazing; you need to see that consistently. So if anybody on the call or in the room is considering something like this, I would say pilot the tool that you would like and work with the reports, work with the authors, and after some time you should get to know what the tool does.
You should understand that there will always be mistakes. I mean, these are automated tools, but I would say there are probably mistakes that we are all aware of in peer review in general, so I don't think this is unusual. But I do think that if you're thinking about it, consider it just a pilot project, so that you can get to know and love it, or not love it and try a different tool.
I don't know, but you really ought to try it for some time, look at the reports, and see what the tool is actually saying about your journal. So, James, as we're starting to develop this trust, and as people start to understand what these tools are capable of, what kinds of benefits are accruing to the publishers, and broadly to everyone engaged in this research process?
Well, as I was thinking about this topic, I think I would maybe break it into a few different categories. So automation can improve research integrity by acting as a gatekeeper or by facilitating the gatekeepers, and I think image duplication software would be a good example of that. Plagiarism checks, which we're all very familiar with, are a good example of that too.
Then, on a bigger scale, automation opens up possibilities for doing these mass audits or mass monitoring, like Leslie was just talking about, and if you take that data and go to funders, then that's sort of lobbying, in a way, or putting pressure on funders to try and get behavior change there. And the thing I would maybe add to that is that
these tools that we've got are also a nice gateway for authors to then learn, if you create a pathway. So you have your automated check; it says, oh, you've not done a sample size calculation, you need to have done one. But then what next? So then you have to think, well, where am I going to send that author?
What do they need now to make sure that they do a sample size calculation in their next piece of research? So I guess trust in the software is really important, but the software is being used in different ways. The automation is being used in different ways, and it can shine a light on the system as a whole so that we can trust or not trust the system.
And it can also be the gateway to self-improvement or education. Yeah, I've now completely forgotten what your question was, but that's what I decided I was going to talk about. I think you're right on track here. And so as we're making these investments, what's the opportunity cost here?
What if we're not making these investments? What do we miss out on, or what's the risk if we don't do it? So I'm going to get very philosophical here for just a second, because science has changed, the world has changed. The internet was introduced. Open science has come about. Everybody knows that at this point.
There are biases in algorithms that we're all very aware of. And from a very high-level standpoint, the thing that I worry about is that what's going to happen to science is what happened to journalism. We have a new system upon us. We have processes that are not codified. We have an ecosystem that operates much like being a professor operates: you know, a little bit in silos.
And we're a lot of introverts, and we come here to socialize, and that's lovely; I love it. But what's the thing that's going to be lost? Truly, if I'm going to go to an extreme, it is trust in science. That's the opportunity that is lost, and it is going to be so much harder to regain that than to try to nip this right now.
And that's back to why we look at automation: not to take over completely, but to augment our ecosystem so that we can move forward at the rate that we need to move forward. And honestly, I think COVID has shown that. I mean, people moved at different speeds. We did have some automation there, which was great, and it was able to highlight where these tools could really help.
But it would have been nice to have been able to sort through all of those papers beforehand, instead of having each group in silos going, all right, which COVID papers are we going to trust and which ones aren't? So that was an opportunity lost, but I think it also made clear what we could have been doing if we'd had the technology, or if people were using it for that purpose. Excellent. So we're going to try to save a little bit of time to take some questions from the audience.
I hope you're all thinking about questions you have for our panelists today. But before we wrap up: you're all folks who are working with sometimes really big organizations, sometimes medium- and small-sized organizations, and I know that we can be a little challenging to work with sometimes. So I'd like to ask you, and I'll start with Anita: what's been most surprising or challenging for you in your work with publishers?
Oh, gosh, the speed. The speed of publishers is somewhere between glacial and, I think, is there a slower term? So adoption of new technologies is a very slow process. We were ready for a slow process; we understood that it was a slow process. We weren't quite as ready for glacial processes as this actually turns out to be.
But I wanted to really quickly highlight a bit of our work, based on what Leslie has said. A group that I helped to run brought together nine different AI tools on top of the COVID data set. Actually, we put those not on top of the entirety of the COVID data set; we put those reviews on top of the preprints.
And that was really quite amazing, because on medRxiv and bioRxiv there were about 25,000 or so reviews that went up over a few years, and they went up into public places. They went up into a hypothes.is window which was created for us by the archives, and then they went up onto Twitter, and we had over a million views of those tweets. We had massive engagement with the community over those reports.
They weren't necessarily the best reports, but there were, you know, eight different tools for most of that time that were actually engaging with authors and potentially reviewers. We don't actually know whether a particular handle belongs to the authors or to the reviewers. But even where the archives will not let you post a comment, which is where I think it should probably have gone, the ability to just take what I would call the wrath of the scientific community, which we were definitely quite willing to do for the COVID epidemic,
and we're not willing to do that for everything, but we were willing to do it for the COVID epidemic, really demonstrated that, and that level of engagement really demonstrated the fact that people are ready to see some of these tools in action. And I know that lots of people were engaged around that literature, and I'm just very proud of my particular group and my community that we were able to do that.
That's excellent. And Leslie, do you have any other thoughts? What's been challenging or surprising about working with publishers? So besides time, and all the time which it takes to process things, the thing that surprised me a lot, coming from the research world, is that I didn't understand the publishing world from the other side at all.
When I started talking to people, I was like, well, where does this fit in? As a researcher, I didn't even know that that piece of the workflow existed. And to be on the other side going, oh, you guys try to be friends with the researchers? I didn't know that. So I'm seeing it from a different perspective, which obviously influenced how I originally thought about my company, too.
And I'm not criticizing anything. I'm just saying that it's a little opaque to researchers what this process is, and especially if you don't have a mentor saying, here's how the process works, you miss it. And so that surprised me. And it's a great group; I love working with you guys.
It just surprised me. Yeah, well, we had fun at the very least. James, how about you? Anything that you've learned working with publishers? I would definitely echo that. And I think I found a few different barriers, really. Some of them are technical: it's technically been challenging to integrate different technologies together.
Then there's understanding how publishers work, understanding the procurement process. To draw a comparison: the London startup community is pretty thriving, there are lots of startups, and if your company is trying to sell to the NHS, for instance, there are schools that will teach you how that procurement system works, who the people are, and what you need to do.
They'll introduce you to funders who are super interested in that market. But if you go to an investor who doesn't know what scholarly publishing is or how it works, and you explain it to them, in my experience they run a mile, because it's bonkers to them. They can't understand the economics of it or why it works that way.
So I guess it's likely that for the next few years, innovation in this area is going to come from people like us: people who have come out of academia themselves or have come out of the publishing industry themselves. We're not that sort of shiny unicorn Silicon Valley picture of a startup. So if we want to foster more innovation in this space, we need to reduce the technical barriers and make it easier for everyone to understand how the different industries work.
For instance, funders are still a bit of a black box to me, even though I almost understand publishers now. And maybe we need to create new support systems to kind of nurture innovation in this space. Digital Science is one example of a group that is doing that, but it would be nice to see funders doing it. I mean, in the UK it would be nice to see the Medical Research Council having an innovation wing that looked at this kind of stuff.
But yeah, those have been the challenges that I've encountered. So there are some opportunities here, maybe for the funders and other groups to be helping out, and I don't want to forget our library community; maybe they have some role here. What are some concrete things that the scholarly industry could be doing?
Are you asking one of us? I thought I'd go with James first, and then we can get everybody. Well, one thing that I wish had existed before I started: like Leslie, I was surprised at how little I understood about publishers and what they did and how they operated. And I'm still surprised that there isn't better communication between the different pieces of the system.
I feel like researchers should be trained in how publishing works and how publishers work, and also how funders work. We should all better understand the system that we're all collectively trying to improve, and I think that would foster more dialogue between the different players as well. I think I was hearing some grumblings of agreement in the room here as well.
Leslie, what do you think? I agree, and I'm going to focus on one aspect instead of just repeating what James just said. When you are a younger researcher, and the professor yesterday was talking about this, the younger people don't understand why peer review and why societies. And we're sitting here in a society building, and I didn't understand that either.
And that needs to be illustrated. Why do societies exist? Why do publishers exist? What do we offer the community? There's a lot of criticism, a lot of criticism internally, about why publishers exist and what they do. And I do think that is the strength of it: it is a core pillar of science and of strengthening science.
Are we trustworthy? You know, you find your community of people; that's how you end up building trust, not just by the paper. We can automate checks on the paper all the time, but ultimately it does come down to the people that are behind it. Why are you a society? Why are you a publisher? That needs to be communicated not only to researchers, but to the greater society, if you will.
Why do we exist? Because it's a black box to many people. And yet I think this is the reason that we also have trust in science: we can go to these places when we want to understand what's going on in our world. There's an opportunity here. Excellent. And Anita, thoughts from you?
I fully agree with both my colleagues, but I think one thing that publishers specifically could improve is their willingness to try things that are different. I know that everyone is super busy, that everyone has all of these competing things drawing their attention away. But if you are more willing to try things with some of the new technologies that are out here, then I think it would also foster the next generation, because the next generation could see that there is success, and that the translation from a great project to a successful startup to a tool that's actually used in the publishing industry is kind of a trickle right now.
And, you know, I think that part of it is that a lot of publishers just aren't willing to try new things. So I would compel all of you to do just one little thing, just try something new and novel that could potentially improve the workflow. I know that there are a lot of technical barriers along the way, but I think the willingness to try things is really
necessary for a startup ecosystem to develop more fully. An incremental process, getting people to dip a toe in the water, that sounds useful. So the last question that I have before we turn it over to the audience, and I'd like to hear from all of you: where do you think these tools will be in the next 10 years? If we're looking back, what will be the impact on peer review and on manuscript workflow? Leslie, would you give us a start?
Well, I guess because I'm now with Digital Science, that kind of tells you where things are probably going, and we were able to integrate with Dimensions. I just think the aggregated data and the insight from data is where I'll be and where we'll be in the future. I certainly see us using the data to build trust in science and to get insight, and I'd like for that to continue.
So, Anita, how about you? I actually do have quite a bit of hope that we will see more of these tools becoming part of journal workflows and being kind of a standard part of those workflows. I'm not sure exactly what that's going to look like, but I do not think that even the most stubborn publishers are going to be able to get around this at some point.
I think everyone has now kind of embraced ORCID, and I think more and more of these kinds of innovations, because really FundRef, ORCID, these are all innovations, as well as the plagiarism checks and others, are now kind of becoming part of the publisher workflow. I think we're going to have more of that happening in different parts of the ecosystem, but I think it will be slow.
I don't think that in 10 years we're going to have just a huge, massive change, but I think we will have some change, and more of these things will become standard. All right. And James, last thought here: what does the landscape look like in 10 years? I think we're maybe going to see a convergence.
We've got authors who are going to expect to use nice, modern, easy-to-use tools and software as they're submitting to journals, and publishers are maybe being put under pressure, as other sorts of media outlets have been. And then from the academic side, we're seeing the rise of metaresearch as a field, and more and more people sort of putting a magnifying glass on how the system works.
So I hope that we'll see those two things join up, really, and then in the future we'll have a publishing system that's maybe more biodiverse, filled with more innovation and more people trying to solve problems in different niches and flourishing. Excellent. So now I'm going to open it up. Jackie, do we have some questions online? Yes, we have two questions.
So I'll present the first one, give you a chance to respond, and then the follow-up. With screening technology increasingly deployed by journals, and with the availability of technology to assess novelty and likely impact of research, what does the panel think will be the future role of peer reviewers? Anyone want to take it?
I'll start. I'm going to stick with the idea that it takes a while to change things. So if we're talking about the next 10 years, I think the role of peer review is still going to be to have a human look at the research, the science, whatever field it is and whatever term you use, but to not have the editors and the peer reviewers focus on all of the other checks.
So hopefully, I guess my hope is that we can still leverage the important peer review system, hone in on what reviewers could be looking at, and get their insights from these experts. Any other thoughts online? I'm just going to have to, excuse me, absolutely agree. I think that there's a lot that AI can do, but it certainly cannot do everything.
We're very far away from replacing the peer review system, but we can complement the peer review system with automated checks. We can leave to the robots the things that the robots really are attuned to do and do very, very well, while giving the peer reviewers more time and space to think about the kinds of things that humans are really good at doing. So I think it's going to be more of an overall picture, a concert of different sorts of things
that peer reviewers don't always have time to do right now, which can be picked up by these robots. But we will not replace them, and I would even argue, I don't think we would want to replace them in any way, shape or form. It's a good distinction. I always think about it like those superhero movies where the hero puts on their special suit and then all of a sudden they can run twice as fast.
Or they put on their glasses and they can see X-ray vision through walls and stuff like that. So I think the role of automation and these kinds of tools is to help the reviewers and editors do their job, whether more easily or faster or with more depth, but not to replace them. Yeah, that's how I feel about it, too.
Excellent. Jackie, go ahead with our next question. So the next question is: do you see 100% of the information that your tools capture appear in the final work, or do you see a lot of data loss along the way? I think I should take that one. So we actually do monitor what happens after the peer review process, after everything is finalized, after typesetting and everything else.
We are encouraged that there's definitely something trickling through the system. It is not 100% of the things that we know should be there. I would put it this way: our most eager authors will actually do a tremendous job of changing their manuscripts to improve their scores. And given that we actually give a score, we can definitely find out what the average score is during review and what the average score is
once the papers are published and part of the published literature, and we do do that. And we see, for example, with the British Journal of Pharmacology, that they are currently hosting papers that collectively make them the second-highest-scoring journal, because they're being asked to do this. But is the British Journal of Pharmacology at a ten, whereas everybody else is at a one?
Well, the answer is no. They are just a bit better than the rest, which is kind of how these things go. It's slow; it's not everyone, it's not everything, but it definitely does help. We see about a one-out-of-ten-points difference with consecutive reviews of the manuscript.
And afterwards, we do see those changes reflected in, for example, more research resource identifiers being available in the manuscripts, which is one of the checks that we do. And we certainly see more blinding, more randomization, and more of these rigor elements getting into the appropriate places. So it is definitely having an impact, but it's not going from 0 to 100; it's more like from 0 to 25. We have a long way to go, but I think we're taking some nice steps with our partners.
Does anyone have a different perspective? If not, I want to make sure we get to some folks here. Don, you've been waiting patiently. Hopefully I have a little bit of a different perspective. So, Don Samulack, Cactus Communications; we're from within the industry, 20 years of activity. I'd be remiss if I didn't come to the mic and bring Paperpal and Paperpal Preflight into the discussion, integrated across hundreds of journals and the like.
I'll also say I came to the mic to talk about productivity and the whole question of what AI is. Ditto to everything that's been said; we think about all these things all the time. It was a great panel, a great discussion. One thing that I thought was missing from the future aspect of things is that the acceleration of papers is going to overwhelm the publishing community if the publishing community doesn't adopt technologies.
And so, yes, the adoption is slow, but quite honestly the technologies, whether it's AI or other future technologies, are going to help save the productivity cycle of the publishers, and they're also going to augment the productivity cycle of the authors. So our Researcher.Life platform is a productivity platform: productivity tools for authors to accelerate their paper, the quality of the paper, but also their paper generation, the way they think, the way they access data, the way they generate their papers.
And theoretically, that's going to keep moving the curve up and up and up, and the publishers have to keep pace by adopting technologies to keep up with that curve. So really, I'm just coming to the mic to say that technology is our friend. It is not making decisions for us; it is just helping us do our jobs.
We will not be able to hire enough people in publishing to keep up with the publication curves if we don't leverage technology to augment our decision-making processes. That's something John was mentioning yesterday from his perspective also. So I think our final takeaway here is: reach out to your friendly neighborhood technologist and ask them how you can be using these tools. I want to thank Leslie in the room, and Anita and James online.
And thank all of you for participating. Thank you.