Name:
Reimagine Peer Review with KGL Smart Review®
Description:
Reimagine Peer Review with KGL Smart Review®
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/3a71d7ec-0b82-4b1b-99e8-47c8cac2f347/videoscrubberimages/Scrubber_1.jpg
Duration:
T00H29M45S
Embed URL:
https://stream.cadmore.media/player/3a71d7ec-0b82-4b1b-99e8-47c8cac2f347
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/3a71d7ec-0b82-4b1b-99e8-47c8cac2f347/SSP2025 5-28 1330 - Industry Breakout - KnowledgeWorks Globa.mp4?sv=2019-02-02&sr=c&sig=ZbL5mLEP%2FjdqARI0Pex8uMMUrLJDLZ%2BrpTDv8BTznug%3D&st=2025-07-01T21%3A20%3A11Z&se=2025-07-01T23%3A25%3A11Z&sp=r
Upload Date:
2025-06-09T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
All right. Hello, everyone. Thank you so much for coming and joining us today. I'm Alex Kaylor, Director of Editorial, and I'm here to show you Smart Review, which is our application for automating and managing peer review workflows, including best-in-class research integrity checks.
I'm joined today by Tim Vines, founder and CEO of DataSeer, and Adam Day, founder and CEO of Clear Skies. These are two applications that are integrated into Smart Review, so we'll be showing you how those integrations work as well. Smart Review, overall, is an application that, as I said, automates and manages manuscript submission workflows.
This includes submission checks as well as specific checks for research integrity. We've got integrations with third-party applications, and we are also integrated with another KGL application called PeerDash, which I'll show you a preview of as well. So, why Smart Review? First, I should say that KGL as a company does a lot of different things.
We provide end-to-end services. A lot of you may know us for the services that we perform in production and copyediting, but we also provide peer review services. For the last 20 years, we've been providing peer review services for a variety of different societies and publishers, and in that time we have experienced the wide variety of submission check workflows that the different societies in our industry use.
What we've seen over time, of course, what we've all seen together over time, is that there are increasing threats to research integrity, and this is resulting in the need to do more checks during the peer review process. At the same time, we're seeing increasing pressures on our budgets, so we can't spend as much time doing these checks. We're trying to balance doing more while spending less.
So with Smart Review, we're building a framework to simplify the most tedious checks, to ensure quality and accuracy in 100% of the checks that we perform, and to really open up space and time in submission check workflows for staff to incorporate new and more complex checks and spend more time analyzing manuscripts. So, why Smart Review? In unassisted submission check workflows, editorial staff need to navigate multiple panes. For most editorial staff, this is what your day looks like: you've got your laptop open, you've got a submission check protocol that could be anywhere from 10 to 20 pages, and you've got manuscript metadata that you're viewing in a manuscript management system.
You've got the manuscript file itself that you need to analyze, and you might also be keeping a separate file of submission notes for an email that you might be sending to the authors. So staff are bouncing back and forth between all of these different panes, and when you train staff, you're hoping that they're going to follow every single step in the protocol, not skip any steps, and not miss anything.
It can be a lot for staff to keep track of. What we've done in Smart Review is create a single interface to organize all of the work that staff in the editorial office perform. You've got your checklist, you've got the manuscript file, the file inventory, and the metadata. All work is organized onto a single screen, structuring the work that the staff do on a repetitive basis as they're checking in submissions.
Taking a closer look, you can see that the submission checklist itself and the manuscript file are shown side by side for easy comparison. Typically a checklist could be anywhere from 5 to 15 pages long, depending on the size of your checklist, and staff are reading through a Microsoft Word document, or maybe an Excel file, that has all the different steps of the checklist. In Smart Review, the checklist gets loaded so that you click through each step one at a time. You've got the directions for whatever staff are assessing, the action that they might take, the manuscript note that they might make, and any email text that might get included in the submission letter to the authors. And since staff are clicking through each part of the checklist step by step, it guarantees that nothing is going to be missed.
Also built into this, you see there are some buttons at the bottom: there's a place for notes, and there's a place for the email to be built as staff move along through the checklist. And again, you can see that the manuscript file is pulled up here as well, so these are side by side in the same pane. You can use Control-F and search features to search through the manuscript while you're going through the checklist.
Another feature of the Smart Review interface is that checklist progress is shown as staff work, flagging steps orange if they are not complete. Again, this ensures accuracy at scale. If you're moving through hundreds of submission checks per month and you want to make sure that every single submission check is done properly, Smart Review helps make that happen.
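To make that concrete, here is a minimal sketch of how a checklist step and its completion status might be represented; the field names and the Python structure are illustrative, not taken from the product.

```python
from dataclasses import dataclass
from enum import Enum

class StepStatus(Enum):
    PENDING = "pending"      # not yet reviewed; would be flagged orange in the UI
    COMPLETE = "complete"    # reviewed and closed out

@dataclass
class ChecklistStep:
    directions: str                  # what the staff member should assess
    action: str = ""                 # action taken on the manuscript
    note: str = ""                   # internal submission note
    email_text: str = ""             # text to include in the letter to the authors
    status: StepStatus = StepStatus.PENDING

def incomplete_steps(checklist: list[ChecklistStep]) -> list[ChecklistStep]:
    """Return the steps that would still be flagged before sign-off."""
    return [step for step in checklist if step.status is not StepStatus.COMPLETE]
```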
Why Smart Review? Just taking a look at one of the kinds of checks that staff do, for instance author list validation: typically, you have the manuscript file, you have your manuscript management system, and you're trying to compare the author list formatted in two different ways in two different systems, hoping that staff don't miss anything when they're making sure that the author list matches in both places.
In Smart Review, we're basically ingesting the manuscript file, ingesting the metadata, and comparing them, making it very easy to see immediately, in one step, whether there are any discrepancies between the metadata and the manuscript. For each of the types of checks that we've automated, in addition to showing them side by side, you can also see those errors highlighted in the manuscript itself and in the metadata.
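As an illustration of the kind of comparison being described, here is a small sketch that flags discrepancies between the author list parsed from the manuscript and the one in the submission metadata; the normalization and function names are assumptions for this example, not the application's actual logic.

```python
def normalize(name: str) -> str:
    """Normalize an author name for comparison (case, punctuation, spacing)."""
    return " ".join(name.replace(".", " ").split()).lower()

def author_list_discrepancies(manuscript_authors: list[str],
                              metadata_authors: list[str]) -> list[str]:
    """List authors who appear in only one source, or whose order differs."""
    issues = []
    ms = [normalize(a) for a in manuscript_authors]
    md = [normalize(a) for a in metadata_authors]
    for name in sorted(set(ms) - set(md)):
        issues.append(f"In manuscript but not in metadata: {name}")
    for name in sorted(set(md) - set(ms)):
        issues.append(f"In metadata but not in manuscript: {name}")
    if not issues and ms != md:
        issues.append("Same authors, but the order differs between the two sources")
    return issues
```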
Here you see an example of a discrepancy between the manuscript title at the top, in the manuscript, and at the bottom, in the metadata, where it's highlighted. It makes it very easy and clear for staff to identify checklist errors quickly. So what have we automated? Really a wide variety of things, too much to list in one place, and of course we would love to show any of you a demo if you'd like to see the system in action and see all of the different things that we've automated. But to name a few: lots of elements of the manuscript-versus-metadata match; statements and disclosures, where Smart Review is able to read the whole manuscript and pull all of the various statements and disclosures into one place; and a lot of different checks around figures and tables, the way they're formatted, whether they're labeled properly, cited in order, and referenced properly.
Anonymization is a very time-consuming step in most checklist processes, and we've automated this as well, seeking out conflicts of interest. There are a variety of automations and checks around citations and references too, making sure they're in order, cross-referenced properly, et cetera. Overall, our Smart Review framework enables quick adaptation to evolving technology and best practices. The manuscript files and metadata are ingested directly from a manuscript management system, be it ScholarOne or Editorial Manager.
Smart Review performs automated checks on the information that we've ingested from these systems, and then we've also incorporated third-party checks; our API can integrate with any third-party application. We've brought two examples of those here today, and you'll see how they're integrated with Smart Review. But really, Smart Review is built as a framework that sits on top of a manuscript management system.
It integrates with any third-party application, and it's built to be able to evolve over time. We know that we're going to continue to develop a variety of different automations. I think some of you may have seen others here today, and we'll continue to see more applications coming on the market over time. So Smart Review is really built to integrate any of the applications that come on the market and to work with the information from the manuscript management system, but all within a single structured environment, so that from the perspective of editorial staff it's one application, one place to process all of the different information and inputs that come in.
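To show how a framework like this could keep third-party checks behind a common interface, here is a minimal sketch; the class and method names are hypothetical and only illustrate the plug-in pattern, not the actual API.

```python
from typing import Protocol

class IntegrityCheck(Protocol):
    """Common shape every third-party integration is expected to expose (illustrative)."""
    name: str

    def run(self, manuscript_id: str) -> dict:
        """Return the vendor's findings as a plain dict the checklist can render."""
        ...

class ExampleDataCheck:
    """Stand-in for one vendor integration; a real one would call the vendor's API."""
    name = "data-availability"

    def run(self, manuscript_id: str) -> dict:
        return {"status": "action_required",
                "summary": "No data sets found in the manuscript"}

def run_all_checks(manuscript_id: str, checks: list[IntegrityCheck]) -> dict[str, dict]:
    """Fan out to every configured integration and collect results for the checklist."""
    return {check.name: check.run(manuscript_id) for check in checks}
```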
So the result, in the end, is technology-assisted decisions, accurate outcomes at scale, and enhanced quality and integrity. And this can really be customized for any society or any workflow. The checklist itself is specific to your journal: your journal's workflow, all of its steps, is loaded into Smart Review, and then the information, the third-party applications, and the automation are put on top and integrated with the workflow that's specific to your journal or journals.
You can have one standard checklist for your whole portfolio, or you can have different checklists depending on the needs of each specific journal. The Smart Review integrations provide flexibility for our customers: they provide best-in-class research integrity tools and the ability to easily integrate with new tools that you're interested in working with.
We set up the agreements with third parties so that you don't have to, and you can try out different tools that are available on the market. And then we help you seamlessly incorporate these tools directly into your submission check protocols, providing concrete action steps to take based on the analysis provided. That's one of the other questions a lot of people have as these new tools come on the market: what do we do with these signals?
Smart Review provides a place to incorporate the data that comes out of these research integrity tools directly into your checklist, so your staff know exactly what to do with the information. All in all, I think Smart Review is a system that really shows the interplay between technology, workflow, and people. We're using technology to automate checks, but all the checks are being performed by staff themselves, in the context of a specific workflow for a specific journal.
So Smart Review brings together the technology, the people, and the workflow all into one space and one interface. All right, excellent. I'm going to hand it over right now to Tim Vines from DataSeer. Thank you.
Thanks, Alex. So, in a former life I was a managing editor for a Wiley journal called Molecular Ecology, and one of the things that gave me the most gray hair as a managing editor was trying to assess whether or not an article that had arrived had met our open data policy. And so, years later, we've made DataSeer to address that personal pain.
And I think it's one that many other journals feel. What DataSeer essentially does is automate the process of assessing compliance with a journal's open data policy. This is complicated, as you're probably aware if you've tried doing it yourself, because authors do all sorts of things with their data, and on top of that, there are all sorts of things that could be required.
There are many different levels of open data policy across the industry: the authors may be required to make their data available on request, they may be required to put it in a repository, or they may be required to state that it's present in the manuscript. And then, of course, is it there? They may say that they've put it there, but is it actually there?
So there are multiple layers of checks: does the data accessibility statement say what it should say, and if it says X, did the authors actually do X? This is complicated. It means that the editorial office has to spend minutes going through the manuscript, perhaps going through supplemental files, perhaps visiting repository web pages, to work out what the authors have done and whether that meets the standard of what the journal requires at whatever submission stage we're at.
And so we've decided to automate this check, and that's what DataSeer does now. There are a bunch of different pieces on this screen; I've tried to make it deceptively simple, but there's a lot here. Bottom left: data availability. Often there is one data availability statement in the manuscript and a different one present in whatever submission system is being used.
And sometimes they contradict, so it's up to the journal to decide which one is correct. Both of these are unearthed by DataSeer. Over on the right, there is a checklist which goes through the different elements that the editorial office would be expected to check. Does the data accessibility statement say that all the data are in the manuscript?
Yes, that's true. Did the authors submit their data to a repository? No. And so it performs these basic checks. Up here, the action summary: this is where it gets really cool, because we've got powerful large language models that not only do these assessments on the right, but then compare that assessment to the journal's policy expectations and say what the authors need to do to meet it.
Either they've met it and there's no action required, or, as in this case, the authors have said that all their data are in the manuscript and the supplemental information; that's what they say they have. We couldn't find any data sets in the manuscript, and there were no supplemental files listed, so it's probably not true that the data are in the manuscript, and they need to provide their data sets. So you would send an email to the authors saying, hey, you haven't provided your data set.
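Here is a minimal sketch of the kind of policy comparison being described, where automated findings and the journal's policy are handed to a language model that drafts the action summary. This is not DataSeer's implementation; the prompt, field names, and the `llm` callable are assumptions for illustration.

```python
def assess_against_policy(llm, policy_text: str, findings: dict) -> str:
    """Compare automated findings with the journal's open data policy and draft
    the action the authors need to take. `llm` is any callable that takes a
    prompt string and returns generated text (a stand-in, not a specific API)."""
    prompt = (
        "Journal open data policy:\n"
        f"{policy_text}\n\n"
        "Automated findings for this submission:\n"
        f"- Data availability statement: {findings['statement']}\n"
        f"- Data sets found in manuscript: {findings['datasets_found']}\n"
        f"- Supplemental files listed: {findings['supplemental_files']}\n\n"
        "State whether the submission meets the policy. If it does not, write a "
        "short action summary telling the authors what they need to provide."
    )
    return llm(prompt)

# Example use with made-up findings:
# action = assess_against_policy(my_model, policy, {
#     "statement": "All data are available in the manuscript and supplement.",
#     "datasets_found": 0,
#     "supplemental_files": 0,
# })
```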
What this has done is simplify the process of assessing open data down to: do the authors need to do anything or not? The last thing I want to highlight here is this top-right button. Journal offices are complicated. There are always exceptions, and there are always unexpected features of the manuscripts that come in.
If you click on that button, you can talk to the LLM and say, well, why did you make this choice? Why did you say that the data is in a repository when it wasn't? And it can say, well, it's because I saw this, and I saw that it was in a Cambridge repository. But that's not a valid repository, so OK, let's update the rules. And then the rules are updated and the system moves on as if it has just learned.
It's instant. You don't need to come back to DataSeer; you just re-educate the LLM. And so, over time, it really comes to understand what you're doing in the editorial office. OK. Thanks so much, Tim. So, the workflow that Tim just described: I'll actually take it back one slide. If you look at this checklist on the right, a lot of these steps would typically be part of the checklist that the editorial office might be performing for a particular journal. In this case, the integration with DataSeer lets us cut those steps out of our checklist and condense them down into essentially just three steps instead: we review the extracted data availability statement, we review the recommended action, and then we paste that action directly into the submission email
that's going to go back to the author. So I think DataSeer is a great example of how Smart Review has automated a number of steps, but when we work with our third-party integrations with our partners, we're able to automate even more. The checklist that your editorial office uses would then be customized and set up to accommodate any third-party integrations you have. All right.
The next one that we're going to take a look at is the Papermill Alarm. So, Adam, why don't you come on up? I spent some time recently agonizing over what the company slogan should be; some of you may have been there before. The thing is that we detect bad science, and we're very good at that.
But no one really wants to read bad science; what they want is good science. And so one of the slogans that ended up on the cutting room floor was "Clear Skies: where the bad science isn't," and that's why it ended up on the cutting room floor. What we ended up with was "your partner in research integrity." And the thing is, I think fundamentally, science is actually all about people.
It's about people interacting and people working together, and it's been wonderful to work together with the people at KGL. So what we need are tools that support people in doing their work, and that's what the Papermill Alarm does. Very simply, we're talking about triage.
So let's say a paper comes in, or actually, here we're looking at a whole journal. Articles come in, they get triaged within seconds of arriving in the submission queue, and we give them a rating, and the rating informs what the editor might want to do. One thing that we know that's quite interesting is that even before editors used the Papermill Alarm, their rejection rate for articles that trigger alerts was much higher than for other articles.
So we already know that peer review on its own is a powerful tool for dealing with paper mills. It's kind of what I was saying: this is really all about people. Science, I think, happens when people interact with science, and I think research integrity in some ways starts and ends with peer review, because that's where the humans look at the science. So we get an alert rating, and this is how it's presented in Smart Review.
And that gives us some sense of how serious the risk is: green means we don't see a risk, orange means we see some risk, and red means we are very sure that there is significant risk. Then we can click through to Oversight. Oversight is in some ways short for human oversight; it's about people again. Here we can see the different reasons why this particular article has been flagged.
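For clarity, here is a minimal sketch of mapping a risk score onto the traffic-light rating just described; the score range and thresholds are made up for illustration and are not the vendor's model.

```python
from enum import Enum

class Alert(Enum):
    GREEN = "green"    # no risk signals detected
    ORANGE = "orange"  # some risk; worth a closer look
    RED = "red"        # strong signals of significant risk

def triage(risk_score: float) -> Alert:
    """Map a model's risk score in [0, 1] to an alert rating (illustrative thresholds)."""
    if risk_score >= 0.8:
        return Alert.RED
    if risk_score >= 0.4:
        return Alert.ORANGE
    return Alert.GREEN
```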
And I should say that, again out of respect for human beings, we've censored some of this because it includes a real person's details. There's a red finding here, as you can see in the top corner. That is something we call Keystone, and Keystone means that we found an individual who has a high rate of connection to problematic behavior in their history. Then we can go and look through the references on this article, and we can intuitively find some irrelevant references.
If we go down the list, we've got a little relevance metric there. Then we can go and look at the author history as well, which is another list of articles like this, and we can see which articles caused that author to be flagged. And if we want a really nice visual way to explore the data, we can go through this visualization, which helps us see which of these articles are the most significant and the most problematic, so that we can direct our investigation and handle it very quickly.
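The talk doesn't explain how the relevance metric is computed, so here is only a crude stand-in to show the idea of scoring each reference's relevance to the citing article; a real system would use something far stronger, such as embeddings or a trained model.

```python
def reference_relevance(article_text: str, reference_title: str) -> float:
    """Crude illustrative relevance score: word overlap (Jaccard similarity)
    between the citing article's text and a reference title."""
    article_words = set(article_text.lower().split())
    reference_words = set(reference_title.lower().split())
    if not article_words or not reference_words:
        return 0.0
    shared = article_words & reference_words
    return len(shared) / len(article_words | reference_words)

# Surface the least relevant references first for review:
# suspicious = sorted(references, key=lambda r: reference_relevance(article, r))
```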
And that's really the important thing, because the costs in dealing with paper mills are in the investigations; that's where all the time cost is. So what we want to do is avoid those investigations, which means either getting peer review to handle the problem, or doing fast investigations, getting to the point very quickly and then moving on to the next article.
OK, I'll hand you back to Alex. Thank you. Thanks again, Adam. So this is just taking a look at how we integrate the Papermill Alarm into the workflow for a society or for a journal. If you get a red alarm, you'll go and look at the oversight report that Adam just showed, where you'll get more information on
why this was flagged red and what the history is for this particular author group, or for some of the authors in the group. Or you might get an orange flag, which is still a warning signal. In either of these cases, the protocol would tell you first to go look at the oversight report, and then, built into the protocol, we'll have additional investigations. Based on what you see in the oversight report, there are other potential steps you can take as well,
digging deeper into the authors themselves. If you get a green signal, then you just proceed with your checklist as normal. One of the things that's really nice about this particular integration is that we can't afford to do a deep dive on every author who submits a paper, for every single manuscript. If we did, we could spend 10 or 15 minutes on every paper just going to the institutional website to check whether this author really exists.
They might have a non-institutional email address. We could be looking at other manuscripts they've previously published, trying to see if they've used this email address before. We could be looking at Retraction Watch, seeing if they've previously been retracted. We can't do that on every single paper, but with this tool integrated, we can trigger the staff to do additional checks and then pass that information on to the editor and say, look, do we just want to desk reject this one?
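A small sketch of how the alert rating could be routed into extra checklist steps, as just described; the step names and the mapping are invented for this example, not taken from the product.

```python
# Extra checklist steps triggered by each Papermill Alarm rating (illustrative).
FOLLOW_UPS = {
    "red": ["Review the oversight report",
            "Run additional author checks (affiliations, email history, Retraction Watch)",
            "Summarize findings for the editor and consider desk rejection"],
    "orange": ["Review the oversight report",
               "Run additional author checks as warranted"],
    "green": ["Proceed with the standard checklist"],
}

def steps_for_alert(rating: str) -> list[str]:
    """Return the checklist steps to add for a given alert rating."""
    return FOLLOW_UPS.get(rating, FOLLOW_UPS["green"])
```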
So it's a way to get the editorial office staff involved in helping to triage out some of these papers, so we're not sending all of them to the editors and not sending all of them to peer review. Another integration that we have in Smart Review is with PeerDash, which is another cool application.
PeerDash is an application that takes in information from the manuscript management systems and consolidates journal data across the portfolio. One of the challenges we often have in the editorial office, if we're managing multiple journals, is that you've got to log into this journal's site to see how many submissions are in its queue, and then that journal's site to see how many are in that queue.
Since we're already connecting everything to our API, and we manage a lot of journals, what we've done with PeerDash is bring all this information from the APIs of each of the different journal sites into a single place, where we can look at the status across the portfolio. We can also report on journal performance across the portfolio from one single interface, rather than from the various different journal sites.
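A minimal sketch of that aggregation idea: pull one number, the submission queue count, from each journal's system into a single view. The URLs and the `fetch_queue_count` callable are stand-ins; real manuscript systems each have their own APIs.

```python
from typing import Callable

def portfolio_status(journal_api_urls: dict[str, str],
                     fetch_queue_count: Callable[[str], int]) -> dict[str, int]:
    """Collect the number of submissions waiting in each journal's queue.
    `fetch_queue_count` is whatever call retrieves that count from a given
    manuscript system's API; it is a placeholder here, not a real endpoint."""
    return {name: fetch_queue_count(url) for name, url in journal_api_urls.items()}

# Example shape of the input (hypothetical URLs):
# journals = {"Journal A": "https://api.journal-a.example/queue",
#             "Journal B": "https://api.journal-b.example/queue"}
# counts = portfolio_status(journals, my_fetch_function)
```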
Wrapping up, and I want to leave just a little bit of time for questions: overall, what are the benefits of Smart Review? Checklist automation certainly improves accuracy and consistency, and it saves time on rote tasks. We've been using Smart Review now for the whole first half of this year.
How much time you save is really going to vary depending on the checklist in place, but we're very proud to say that on some of the checklists we've deployed this on, we've saved as much as 15% to 20% in checklist time through the automation of some of these tasks, while leaving room to do other checks. The user interface that I showed you really simplifies the workflow for staff, again ensuring that they can do everything they need to do in one place and not have to navigate between a whole bunch of different screens.
It also ensures compliance and quality. One of the biggest challenges when you work off a checklist is telling staff: please don't skip any steps, please read every step, and do them all in order. This way, we're clicking through one step at a time and making sure nothing gets missed. Also, on some of those checks that are really tricky, like those author list comparisons, we're just bringing the comparison up and making it easy.
These are the kinds of checks where, when humans do them, it's easy to make an error, and Smart Review takes that error out. Research integrity: Smart Review helps uphold research integrity, first by incorporating specific research integrity checks, and second by saving time and allowing staff to dive a little deeper when they need to on some manuscripts.
And finally, the API: it unifies multiple systems and it enables integration with any third-party application. As our industry evolves and new applications continue to come onto the market, journals want to be able to take advantage of them over time. And I think a lot of journals, a lot of our customers, are looking at all the applications.
They're ready to jump in, they want to get involved, but they don't know which one to use. Is it the best one? Is it going to work for us? Smart Review really provides a way to try out these new technologies as they come on the market, to improve your workflows over time.
So thank you so much. I want to leave some time for questions. Mike Groth here is our director of marketing at KGL; he'll take questions if anybody has them, or I think we have a few questions lined up if needed. Thank you. Any questions from the audience?
Well, I just think it's really good to highlight that, as well as looking for the bad actors and the errors from authors, which we all know authors need help with, you're trying to raise the quality of transparent and reproducible science. So I think you pulled things together really well. So it wasn't a question. That's OK.
Thanks very much. Yeah, I mean, I think it's a really great point, Adrian. For instance, another application we're integrated with, which we don't have here today, is ImageTwin, which I think a lot of you are probably also familiar with. When you stack all of these applications together, at KGL we're automating some of the basics of the formatting checks.
But then when you start to stack all of these different applications together, you're really bringing in powerful tools to protect your journal and to save time. And actually, as you were saying, as we use the applications, they are getting better as well, because they're learning from us. Smart Review is learning, the Papermill Alarm is learning,
DataSeer is learning. Each of these applications is learning from us as we work with them. So it really is a situation where we're making our staff more effective and more accurate, but this technology is also going to learn from us over time, and Smart Review really provides a way for a lot of that learning to happen all together, in one place.
That brings us to the end of our session. We're at time, but feel free to come up and ask any questions if you have them of me, Adam, or Tim. And again, we would love to schedule a demo for any of you if you'd like to see this live. Thanks so much again for coming.