Name:
SSP Innovation Showcase (Summer 2023)
Description:
SSP Innovation Showcase (Summer 2023)
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/e826cb90-8573-4409-85b1-ce8e4e828ea2/videoscrubberimages/Scrubber_1.jpg
Duration:
T01H07M11S
Embed URL:
https://stream.cadmore.media/player/e826cb90-8573-4409-85b1-ce8e4e828ea2
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/e826cb90-8573-4409-85b1-ce8e4e828ea2/GMT20230714-155908_Recording_gallery_1920x1032.mp4?sv=2019-02-02&sr=c&sig=QIygeOahGHlGZPGxgNmlPrHXCMVQUVvoiZRjenIvIQU%3D&st=2024-11-21T18%3A58%3A10Z&se=2024-11-21T21%3A03%3A10Z&sp=r
Upload Date:
2024-04-10T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Well, it's 12:00, so why don't we get started?
Hello everyone, and welcome to the Innovation Showcase hosted by SSP. We're happy that you could come and join us today. Before we get started, do yourself a favor and pull out your cell phones. You're going to need them, and I'll tell you why in a moment. But we want to remind everyone today of SSP's code of conduct. If you want any further information about the code of conduct, you can scan this QR code.
And that's why you'll need your phones to start with. So I'm David Myers, and when I'm not volunteering for SSP, I'm the CEO of Data Licensing Alliance, the first marketplace making it easier and more efficient to license STM content for AI and machine learning. But on behalf of and as a member of the SSP community, we're happy to present this showcase, where each speaker will have about 10 minutes to present. After all the presentations are done, you as participants can ask questions.
We will be short on time, because we have six presentations today, each one will be 10 minutes, and we only have an hour. So I encourage you to please use the chat functionality for any Q&A; the panelists will be happy to answer those as we go along, and I will direct any other questions when and if appropriate. Further, each panelist will provide you with a QR code that contains their contact information at the end.
So if you want to individually connect with them at a later date, you'll be able to. And lastly, please put yourself on mute as a courtesy to all panelists and participants. Thanks a lot. So without further ado, we have six companies, as I mentioned: Cactus, Hum, Kriyadocs, Morressier, Origin, and Xpublisher. And now our first presenter is Jay Patel, head of business development at Cactus.
Thank you, David. Happy Friday, everyone. Really appreciate you taking the time out to join us. So today I will be talking about our Paperpal solution, and basically how we're using AI to improve manuscript submissions, both for authors and for publishers. Cactus is a global technology company.
We develop and provide editorial and author services, science communications, and AI and mobile solutions. We have been serving researchers, publishers and societies for the last 21+ years, both through human experts and technology solutions. Our solutions run the gamut from Paperpal to Mind the Graph to R Discovery, our mobile application. And all of these products are meant to benefit and support authors and editors throughout the publication journey.
I'm not sure who said this, but this is really our mission statement for AI, and it's basically that a tool is only as good as the human who is using it. And in all honesty, you can't take the human out of the process, no matter how good the technology gets.
You still need humans to be involved in the process for feedback, for training and for refinement, and also to generate context and reasoning out of what the machines are doing. The key take-home message from my presentation today is how we at Cactus, along with other folks in the industry, are leveraging AI to address challenges faced by the publishing industry.
Now, we all know that new challenges arise all the time, and at this moment the industry itself is facing quite a few challenges. They range from pressure to reduce publication time to the increase in volumes, which is most likely going to go up even faster than it has in the past, mostly thanks to large language models and generative AI.
There's the persistent problem of paper mill submissions, the increase in deceptive practices, as well as synthetic or generated content. And of course, another issue that has existed and continues to exist is tracking and identifying retracted literature and ensuring that that literature is not cited in future manuscripts.
So I guess the question is, how do we leverage Paperpal to address these challenges? But before we get into that, I really wanted to speak about what Paperpal actually is. Paperpal is a standalone solution which we have built over many years. It's not something that we just sort of put together in the last year or so.
It's a product that has really had different iterations over the past seven years, in different forms, utilizing different technologies and providing different checks. Paperpal just happens to be the latest and greatest iteration of a lot of those solutions that have existed in the past. So Paperpal really focuses on real-time language support and review.
We have integrated technical and integrity checks that are tailored both for authors and for editorial teams and publishers. And we have also developed a robust and ambitious roadmap to support new checks and content types and to address new challenges that may arise. Our vision for Paperpal is to create a suite of checks and services that will extend well into the future, as new challenges arise.
All right. So Paperpal includes 30+ checks, covering both language and technical checks, with many more to come. The whole focus here is to help authors improve the quality of their submissions, but also to help editorial teams make sure that what is being submitted and accepted matches what the journal is looking for, and to really reduce the time it takes the editorial office to review submitted manuscripts.
All right. So currently Paperpal is being used by over 500 journals. To date we have uploaded and assessed well over 210,000 manuscripts. It's been used by over 110,000 authors, and the conversion rate to download is on average 10%. And these are just some of our partners.
And once again, I'd like to thank Duncan MacRae from Wolters Kluwer for being such an enthusiastic partner and also providing us some great feedback. As you can read here, he told us that when their editors first saw Paperpal Preflight in action, they were blown away. And we continue to hope to wow both authors and editors, and to help them save time and improve the quality of submissions.
So how does Paperpal actually address the challenges that we face today? This is basically the review screen that an author or even an editor would see when they submit their manuscript for review. And we actually presented a study last year at the peer review conference, where we worked with Wolters Kluwer, looking specifically at how Paperpal Preflight would impact rejection rates.
So we took papers and we put them into three different buckets. One was no checks at all, another was with basic checks, and the third was with premium checks. And what we realized is that between no checks and premium checks, we were able to reduce the rejection rate by 80%. And this is something that we've seen with other journals and publishers: when they start using Paperpal, as the quality of submissions improves, the rate of rejections comes down.
All right. So if you've been active on social media for the past six months or so, you've seen technologists and researchers talking about, oh, I've created a whole paper using ChatGPT. So we decided, why not go ahead and make one of our own, so we can test it with Paperpal?
So here's a paper that we actually generated using ChatGPT. And when we ran this through Paperpal's checks, our solution, of course, found several serious red flags. And that comes as no surprise, I think, to all of us. All right.
Some of the issues it found were things like: the abstract is too short; the abstract is not structured the way it should be. It also found missing supporting citations as well as a missing ethics statement, and that the manuscript does not follow the IMRaD model. So while Paperpal will point out what's wrong and what can be fixed, it's also built to be a co-pilot for the author or for the editorial teams, where it will say: here's the sort of revision you should make, and here's why you should make that revision.
So it's not just, hey, you got this wrong. It helps train authors by guiding them along the path, saying here's the revision you should make and here's why you should make it. So it helps them improve their writing style over time as they engage with Paperpal more and more.
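To make the co-pilot idea concrete, here is a minimal sketch of what a checks-plus-suggestions pass over a manuscript could look like. All names, fields and thresholds here are hypothetical illustrations of the pattern, not Paperpal's actual implementation.

```python
# Illustrative only: hypothetical names and thresholds, not Paperpal's code.
IMRAD = ["introduction", "methods", "results", "discussion"]

def run_checks(manuscript: dict) -> list[dict]:
    """Run a few submission checks and pair each flag with a suggested fix."""
    issues = []
    abstract = manuscript.get("abstract", "")
    if len(abstract.split()) < 150:  # arbitrary illustrative threshold
        issues.append({
            "check": "abstract_length",
            "finding": "Abstract is too short.",
            "suggestion": "Expand the abstract to summarize aims, methods, "
                          "results and conclusions.",
        })
    if not manuscript.get("citations"):
        issues.append({
            "check": "supporting_citations",
            "finding": "No supporting citations found.",
            "suggestion": "Cite prior work that supports key claims.",
        })
    if not manuscript.get("ethics_statement"):
        issues.append({
            "check": "ethics_statement",
            "finding": "Missing ethics statement.",
            "suggestion": "Add an ethics approval / consent statement.",
        })
    sections = [s.lower() for s in manuscript.get("sections", [])]
    missing = [s for s in IMRAD if s not in sections]
    if missing:
        issues.append({
            "check": "imrad_structure",
            "finding": f"Manuscript does not follow IMRaD; missing {missing}.",
            "suggestion": "Restructure into Introduction, Methods, Results, "
                          "Discussion.",
        })
    return issues

# A ChatGPT-style stub manuscript trips several checks at once.
for issue in run_checks({"abstract": "Short text.", "sections": ["overview"]}):
    print(issue["finding"], "->", issue["suggestion"])
```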
OK, great. So one of the new features that we've been working on, and it's very close to being introduced, is our Cactus detector. This is going to be the newest check in the Paperpal family, and it was able to correctly determine that the article was written by ChatGPT and that it was generated.
And we really do look forward to testing this with our partners, and we'd love to get real-time feedback from our partners, from authors, from editors, because that's really the only way we can keep improving this and keep meeting the new challenges that arise for publishers. Some of the other checks that we are looking to introduce are data reproducibility, scope match, article type discrepancy, as well as methods, retraction and reference checks, and fabricated text detection.
Those are on our roadmap, and we should be looking to roll them out within the next six to 12 months as testing progresses. And finally — oh, hey, it skipped ahead. Sorry about that. So how do you deploy Paperpal? As I mentioned before, Paperpal comes in a couple of different flavors.
There's one that is author-facing; we also have one that looks at pre-submissions; and there's an editorial-facing version of it. There is no integration needed — it's a standalone solution — so you can keep using what you're using without having to change to a new system. It can also be used post-acceptance, in the automated copy-editing process.
And as I mentioned, we do have Paperpal for editorial offices as well. So I hope you found this educational and informative, and I look forward to interacting with you and answering your questions. Thank you. Thank you, Jay. Next is John Challis, senior vice president of business development for Hum.
Right. Well, happy Friday, everybody. I am going to introduce you today to Alchemist, which is Hum's new suite of data-fed AI tools for scholarly publishers. AI and data are close friends, in that it's hard to do AI without good data. Hum is a data company, but we've built this suite of tools that allow our clients to take advantage of what AI can do.
And I'll walk you through some examples of the first tools that have been released. Sorry — this is what we're going to talk about today: why we should care about this, sort of the strategic imperative around first-party data. I'm going to introduce Alchemist to you, and then I'll talk a little bit about how you would actually leverage this.
And so we'll go through a couple of use cases. I keep clicking the wrong button. OK, why should you care about first-party data and AI? Research publishers have successfully managed the transition from print to digital, but today, for at least two reasons we'll talk about in a second,
the strategic imperative has turned to audience understanding: who will build and have a direct, meaningful relationship with readers, many of whom, of course, are or could become reviewers and authors? Aggregators — and I'm using the term broadly to include any entity that's leveraging its existing scholarly audience — would like to own that last audience mile.
They already have an audience, and their business model, if they have one, is to tax other content producers to access it. And aggregators are working hard to make that a reality. But we believe that having your own relationships, driven by valuable experiences and not just putting your content on the internet, is the only way independent publishers will remain competitive. And we believe there's a window of opportunity where publishers that embrace data quickly and fully can reset their competitive environments, while others will fall behind.
So leveraging data will be the biggest driver of success or failure in research publishing in the next decade. There are two era-defining challenges that are disrupting scholarly publishing and creating an enormous new challenge and opportunity. First is open access.
Open access means it's now critical to influence individual researchers. For publishers, that can be from hundreds of thousands to tens of millions of people, depending on what you're publishing. Either way, publishers' focus now shifts from the 5,000 or so librarians that used to be responsible for 90% of subscription dollars to influencing the world's millions of researchers at an individual level.
And publishers need deep audience data, because they need to compete for and recruit authors directly under the open access model. The second reason is AI: because publishers can collect so much data about people and content and topics and organizations, they're particularly well placed to use a key tool, the large language model, or LLM. Commentators in scholarly publishing have been focused on AI's challenges to publishing.
But there are some big immediate opportunities to leverage AI to understand your audience and content, and to communicate and personalize more effectively: to cascade manuscripts, to surface special issue topics, to reveal content collection opportunities, to flag research integrity issues, and so on. AI can be a publisher's best friend, but to work it needs data.
And the best data, from a competitive point of view, is first-party data. So Hum has created Alchemist, a suite of easy-to-use tools that make our data platform more powerful. In order to explain what's particularly innovative about this, I have to go into a little bit of detail about how LLMs work and how other CDPs work. In the old world, content was tagged, and those tags rubbed off onto people as they engaged with particular pieces of tagged content.
In order for a tag to be associated with a person, that person would have to have interacted with a piece of content that had that tag. That approach, of course, has a few issues. For topics that don't appear very often, there's not a lot of content tagged with that particular tag, so you won't find many people associated with that topic. It's also not good for predicting audiences for emerging concepts, and it misses the potential to capitalize on the semantic understanding that today's LLMs have.
In the new world, each piece of content has an essentially infinite number of things it is and isn't about. And as people interact with that content, Alchemist is able to apply embeddings to those people. As people's interests change over time, their affinities are constantly updated, and people can be understood to be likely interested in topics that have not appeared as tags on any content they've already read, but are related to those they have looked at.
So, for example, let's say you wanted to pinpoint an audience of people interested in the potential effect of aspartame on human cancer rates. In the old world, only the small number of people who read articles on that precise topic would be tagged as having an affinity for it. But in the new world, someone who has shown interest in carcinogens, and also someone interested in dietary science in humans, would be inferred to have potential interest in this topic, even if they've never read an article on aspartame ingestion and cancer.
And a human wouldn't have to come up with that set of criteria — AI would take care of that. Sorry, I'm having a little trouble. Here we go. There we go. So Alchemist does embeddings, which is kind of the thing that AI does, on three types of objects: content, topics and people. In that way it's very different from other CDPs, and it's kind of an extension of how LLMs are usually used around just content itself. An embedding, which is, as I say, the language of LLMs, is just kind of a multi-dimensional array of what something is or isn't about. And what makes Alchemist unique is that it produces embeddings for all of these types of first-party objects.
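As a rough illustration of the idea John describes — scoring a person's affinity for a topic by comparing embeddings rather than matching tags — here is a minimal sketch. The embedding function below is a random stand-in; with a real encoder, semantically related topics would score high even when the exact tag never appears on anything the person has read. None of this is Hum's actual code.

```python
# Minimal sketch of embedding-based affinity scoring; hypothetical data.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (e.g. an LLM encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def person_embedding(read_texts: list[str]) -> np.ndarray:
    """A person's profile: the mean embedding of content they engaged with."""
    v = np.mean([embed(t) for t in read_texts], axis=0)
    return v / np.linalg.norm(v)

def affinity(person: np.ndarray, topic: str) -> float:
    """Cosine similarity; the topic never has to appear as a content tag."""
    return float(person @ embed(topic))

reader = person_embedding([
    "dietary carcinogens and cancer risk",
    "artificial sweeteners in human nutrition",
])
print(affinity(reader, "aspartame and human cancer rates"))
```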
So let's look at some examples of functionality that you can drive with this — we'll talk about all of these in some detail: content tagging, infinite affinities, personalized content recommendations, segmentation based on inferred interests, and then audience deep search. So, content tagging. Alchemist understands complex topics.
It discerns patterns and themes within content, it recognizes the intricate relationships between different academic areas, and it can tag content. This tagging is consistent and complete, across your entire content corpus. And this is in addition to, not instead of, any existing tagging or taxonomies that you currently have.
Then there's understanding people's interests in infinite depth. Alchemist understands the ebbs and flows of interest as time passes and as people engage with more content. Affinities are scored by topic, and because topics are understood semantically, even topics that don't appear in content or are new to the world — like COVID in 2019 — can be scored. Personalized recommendations: if you understand people's interests in infinite depth, you're able to do the same thing for users.
So you can match users with content they're likely to engage with, but that they haven't yet seen. AI-driven content recommendations at the level of individual profiles are now an out-of-the-box feature in Hum. Segmentation based on inferred interests: because Hum can apply topical affinities to people, it can be used to create audience segments with those affinities as criteria.
This means you can create, within seconds, a segment of people interested in aspartame and cancer, and then combine it with other criteria — like, I'd also like them to be a senior researcher, I'd like them to come from Germany, I'd like them to have published with us before. And those segments are created in seconds.
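A sketch of what combining an inferred affinity with ordinary profile criteria might look like, with purely illustrative field names rather than Hum's API:

```python
# Hypothetical sketch of segment building on inferred affinities.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    country: str
    role: str
    published_with_us: bool
    affinities: dict  # topic -> inferred score in [0, 1]

def build_segment(profiles, topic, min_score=0.7, **attrs):
    """Select people whose inferred affinity and profile attributes match."""
    return [
        p for p in profiles
        if p.affinities.get(topic, 0.0) >= min_score
        and all(getattr(p, k) == v for k, v in attrs.items())
    ]

profiles = [
    Profile("A. Meyer", "Germany", "senior researcher", True,
            {"aspartame and cancer": 0.82}),
    Profile("B. Chen", "US", "postdoc", False,
            {"aspartame and cancer": 0.91}),
]
segment = build_segment(
    profiles, "aspartame and cancer",
    role="senior researcher", country="Germany", published_with_us=True,
)
print([p.name for p in segment])  # -> ['A. Meyer']
```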
And then there's audience deep search: you can now search for a group of users in the same way you search for content. So paste in a description of a webinar, or a special issue description or title, and immediately find all the people in your audience, whether known or unknown, who would potentially be interested in it. You can do the same thing with a manuscript abstract to find potential reviewers, for example. There are tons of other possible use cases. These are the next three we're working on.
Because we're tight for time, I'm only going to talk about one, and that's the special issues generator. Hum is able to see gaps in the scholarly record for a particular publisher, as well as places where there is high topic engagement and low amounts of content. And so it's able to use those patterns to generate a series of insights on the connections between content items, and make recommendations — including, if you want, a title and description of the special issues you might want to publish.
So Hum does this so that you are actually able to act on the data you're collecting. Hum collects data, it structures it, it has tools to interrogate it and get insights from it, and we've built these tools that let you action it. And we help democratize data and AI by building these tools in a way that makes them easy to use, so that they can sit on every desk. You don't have to have a data scientist to do this.
If you're interested in learning more, we'd love to be in touch. You can visit us at hum.works, or you can take a picture of this QR code and reach out. Thank you, John. For our next presentation we'll actually have two presenters: Ravi Venkataraman, the CEO of Kriyadocs, and Yvonne Campfens, executive director of Stichting OA Switchboard.
Thank you, David. Hey, it's great to be here from Chennai and to collaborate with Yvonne all the way across in Europe. So happy to be here today. We're going to talk about an interesting topic, which is opening up. OA, we know, is all about being open. But what does opening up mean? Let's find out.
So if you look at what Kriyadocs is: Kriyadocs is an ecosystem for scholarly publishers that manages publishing workflows end to end, from submission to review to distribution. And the OA Switchboard is a mission-driven, community-led initiative designed to simplify the sharing of information — actually metadata — between stakeholders about open access publications throughout the whole publication journey.
Yvonne, can you tell us a little bit about the challenges of the landscape? Yeah. What we've seen over the years is that there's no consistent use of metadata and PIDs, which makes it really, really challenging to interpret and to connect to your research
if you're a research funder or an institution. A big challenge in the publishers' landscape is the lack of interoperability between the systems — editorial, production, distribution and so on — and the new business models and the policies and mandates from funders actually bring complexity and inefficiency. A lot of communication and exchange of information is needed, and that's highly challenging.
The consequence is that well-intended policies and agreements can be confusing and are not always effectively implemented, and hard-negotiated agreements are not always realized to the full. And last but not least, progress in the development of new business models is slow. Now, how is the OA Switchboard, as a community initiative
that's been around for a couple of years, benefiting its stakeholders? If we zoom in on one of the stakeholder groups — because it's really by and for research funders, institutions and publishers — the publishers behind the initiative all want to support a smooth and compliant author journey, and want to report on publication output to relevant institutions and funders. So joining the Switchboard benefits them by improving workflows, facilitating publication arrangements and increasing publication visibility.
And it acts as an intermediary — that's the efficiency part that's well known from other industries; I always like to compare it with SWIFT in banking. Working together on standardized exchange of factual information is efficient. Thank you, Yvonne. So the Switchboard has built a really efficient solution, and what Kriyadocs has done is build a great solution for journal publishing all the way — starting from when the authors submit their manuscript, going through the whole peer review process.
Once it gets a decision — let's say it's an accept decision — it goes to production through a bunch of various tools and value-added steps, and at the end, once it's ready to distribute, we provide it to the hosting platform as well as distributing it to a lot of third parties. And in talking with Yvonne over the last few months, we realized that, hey, we have a lot of customers on the platform, and a lot of them are publishing, but they're having challenges in terms of getting this message out to their community.
And when I heard about the Switchboard, it made me think, hey, how can we make it easier to get to the Switchboard? The Switchboard is a common language that everybody can speak, in terms of spreading these messages to institutions, to libraries and such. And so we needed to come up with a way to allow our publishers to get on and reach the world. What we have is something we call the Kriya Universe: a set of what we call last-mile connectors.
And these are of various kinds, starting off with content indexers like Clarivate and Dimensions, hosting platforms like Atypon and Silverchair, and repositories like Dryad, and then finally identity platforms like Ringgold and ORCID. But we also have some lookup platforms, like the Funder Registry or Reviewer Locator, and that made us think, hey, how can we look at integration with the Switchboard so that we can bring it into our universe of connectors?
So what Kriyadocs did is effectively collaborate with Yvonne's team and look at a way by which we could take the content that's already on the platform and build a connector: we take the data that's exported to us after acceptance and then use that data to connect to the Switchboard. Now, the Switchboard uses a JSON format, whereas the content that's coming in is in XML, in JATS. And so there's a need to translate that content to fit into the Switchboard and send that signal to everybody else.
So what we did is build an integration where, upon approval of a particular manuscript — let's say it's the version of record — a P1 signal is sent to the Switchboard portal, at which point we track that particular message, confirm it was sent, and give a notification to our publishers that this has happened. And on the Switchboard side,
their portal receives that signal, processes it, and sends it to everybody who is interested. So signals go out to the relevant research funders and institutions. That's what we've implemented as part of phase one. In phase two, we're also going to be supporting E1 messages, which allows publishers who are adopting OA models and transformative agreements to provide this service to their authors.
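As a rough sketch of the JATS-to-JSON translation step Ravi describes, here is a minimal example. The JSON field names are illustrative only; the real P1 message schema is defined by the OA Switchboard and is not reproduced here.

```python
# Sketch of translating accepted-manuscript JATS XML into a JSON message.
# Illustrative field names, not the actual OA Switchboard P1 schema.
import json
import xml.etree.ElementTree as ET

JATS = """<article>
  <front><article-meta>
    <article-id pub-id-type="doi">10.1234/example.001</article-id>
    <title-group><article-title>Example article</article-title></title-group>
    <contrib-group><contrib><name>
      <surname>Doe</surname><given-names>Jane</given-names>
    </name></contrib></contrib-group>
  </article-meta></front>
</article>"""

def jats_to_p1(jats_xml: str) -> str:
    meta = ET.fromstring(jats_xml).find("front/article-meta")
    message = {
        "type": "p1",  # illustrative: the "article accepted/published" signal
        "doi": meta.findtext("article-id"),
        "title": meta.findtext("title-group/article-title"),
        "authors": [
            {
                "given": c.findtext("name/given-names"),
                "family": c.findtext("name/surname"),
            }
            for c in meta.findall("contrib-group/contrib")
        ],
    }
    return json.dumps(message, indent=2)

print(jats_to_p1(JATS))  # the payload a connector would POST to the portal
```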
And as we looked at our own community, we said, hey, our community is taken care of, but let's also look at other publishers who might have a challenge in terms of getting onto the Switchboard. And so we're also imagining a phase three, where publishers who are not on our ecosystem have the option to send us metadata in their JATS-compliant XML, and we would then route it to the Switchboard.
And the way that would work is we'd first verify the XML, make sure that it's valid and compliant, and then also go through a list of rules that the Switchboard has in terms of completeness of information, so that when it gets sent to the funders, they have all the requisite information that's necessary. So this is what we have built. What we've done with Yvonne is enable publishers to comply with funder requirements, ensure seamless data flow through solid interoperability with publishers' workflow systems, and finally provide an efficient and cost-effective way for publishers to connect to the Switchboard API.
Yvonne, do you want to add any other advantages from this particular scenario? I think also meeting the reporting requirements of ESAC and Jisc. A lot of people who make deals have these reporting requirements, and this enables them not only to automate it in an efficient way, but also to comply with the industry standards that are developing — and more to come on this topic later, in terms of reporting requirements.
Fantastic. So if you would like to learn more about us, please do scan the QR code. What we have today is a ready-to-go solution for you to get up and open up. Thank you. Excellent. Well, thank you, Ravi and Yvonne. Our next presenter is Samantha Greene, head of content marketing for Morressier. Thanks so much.
All right. So happy Friday, everybody, and happy halfway point of this webinar. I am the head of content marketing for Morressier, and at Morressier we are all about restoring trust in the scientific record, with industry-leading fraud detection, multi-source identity verification and automated workflows. We do this at all points of the research lifecycle, so our partners and the publishers we work with can scale and diversify their published outputs with confidence.
And today I'm here to talk to you about our vision for the future of research integrity. Now, I look at research integrity as something between a balancing act and a pressure cooker. On the one side, we have intense market pressure for all stakeholders: publishers face pressure to publish more content especially; authors — we've all heard the publish-or-perish slogan; and editorial teams need to publish faster and move faster through that peer review process.
But on the other side, we have these barriers that make it not so easy — whether that is multiple vendors, manual processes or legacy workflows — and all of these can really hold us back from relieving some of those pressures on the different stakeholders. And that's why we at Morressier really don't think there's a solution to research integrity without a holistic approach.
So that approach has to be proactive. That means integrity checks throughout the publishing process, both earlier in the research lifecycle and at all stages of the submission and peer review workflows. The solution has to be integrated. Research integrity is a massive issue, and there are countless forms of research misconduct, with new forms emerging all the time.
So our approach is really about bringing together best-in-class technology in one platform, one dashboard, for our publishers to be able to access the best technology in the industry. We also have a diversified approach. Morressier got its start in the conference research space, or early-stage research, and we see a huge amount of value in diversifying integrity and how it is embedded throughout the research lifecycle, not just in the journal article.
If we can embed more integrity checks into conference research and conference proceedings, that will have a trickle-down effect, a cascade effect, for those journal articles. And our approach, lastly, is connected. Fraud detection and plagiarism detection are incredibly critical tools, but a huge piece of research integrity is about disambiguating the identities of authors and reviewers and their affiliations, and really understanding the community and who we're working with,
to remove all potential conflicts of interest, bias and so forth. Now, all of these principles are a key part of our approach to integrity — our triple-strength solution, as we call it. We verify authors and content using multi-source identity verification. We prevent fraud with early alerts, alerts that can be embedded throughout the publishing workflow.
And we protect: we have that proactive approach I was talking about, with a comprehensive dashboard that gives you the ability to analyze trends, track different types of emerging misconduct, and so on. And all of these come together to really save editorial teams time. It saves the time spent manually checking for quality or integrity issues, and allows peer reviewers and editorial teams to perform a much deeper level of review and content evaluation.
And it looks kind of like this: lots of different checks. These are 25 of the integrity and pre-flight checks that are part of our program. We've got dozens of checks; they all exist in a single dashboard, and we're adding more checks each month. In the next quarter alone, we're poised to add checks around image manipulation, more enhanced content checks, things like that.
In my 7 to 10 minutes, I obviously can't talk about all of those checks, so I wanted to highlight a few key features. And to do that, I really wanted to start with the dashboard, because I think this is such an impactful way for publishers and editorial teams to set strategic directions. You have the ability to analyze at the portfolio level, the journal level, the volume level, and drill down into the article level; to see how different journals are performing against each other and what trends are emerging; and to set journal policies that can help you mitigate future risks.
I also think at this point it's really important to note that these checks are a way to support editorial decision-making rather than dictating it. Each publisher has the ability to view the pass/fail scores of different checks, but then drill down and see the nuance, see the context, and set the thresholds and triggers that work best for their program. So I thought it would be good to highlight a couple of different checks.
The first is retractions. Obviously the publishing process can take a really long time, and not all research misconduct comes down to ill intent — a lot of times it could be mistakes or errors from that pressure to move faster and publish more. With this particular integrity check, we're able to track citations of retracted papers, where the retraction might have occurred just in the time between submission and publication, and the author may not have been aware of it. So with this, we're able to stop the spread of misinformation and mistakes and really correct the scientific record in real time.
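A minimal sketch of the shape of such a retraction check — screening a reference list against a retraction index — with a stub index and illustrative field names rather than Morressier's implementation:

```python
# Sketch of a retraction check. A real system would query a live retraction
# database; here the index is a hard-coded stub.
RETRACTED = {"10.1000/retracted.42"}  # stub retraction index

def flag_retracted(references: list[dict]) -> list[dict]:
    """Return the cited works that appear in the retraction index."""
    return [r for r in references if r["doi"].lower() in RETRACTED]

refs = [
    {"doi": "10.1000/retracted.42", "title": "Withdrawn study"},
    {"doi": "10.1000/fine.7", "title": "Valid study"},
]
for hit in flag_retracted(refs):
    print(f"Flag for editor review: {hit['title']} ({hit['doi']})")
```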
The last check I wanted to spotlight is AI content detection. As we've already talked about in this webinar alone, this is a really complex issue and one that is being talked about a lot. But with what we've created in our integrity checks, you're able to get a probability score of how much of a submission might have been created by AI.
And then you can go into the paper, analyze it further, and see in the text exactly where and how likely it was that it was generated. Now, I think policies for appropriate use when it comes to AI content are still evolving. But no matter where different journals and publishers land on this, being able to identify AI use in submitted content is going to be an absolutely critical first step. It's a really rapidly moving zone, and we're constantly improving this particular check as we learn more and as we test and use it more.
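To illustrate the kind of scoring described here — a document-level probability aggregated from per-passage signals, so a reader can see where in the text the likelihood comes from — here is a toy sketch with a stub classifier; a production system would call a trained detector model instead.

```python
# Toy sketch of document-level AI-likelihood scoring; the classifier is a
# stub, not Morressier's actual detector.
def classify_passage(text: str) -> float:
    """Stub: return P(AI-generated) for one passage. A real system would
    call a trained detector model here."""
    return 0.9 if "as an ai language model" in text.lower() else 0.2

def score_document(passages: list[str]):
    """Aggregate per-passage scores and keep the flagged passages."""
    per_passage = [(classify_passage(p), p) for p in passages]
    overall = sum(s for s, _ in per_passage) / len(per_passage)
    flagged = [(s, p) for s, p in per_passage if s >= 0.5]
    return overall, flagged

overall, flagged = score_document([
    "We measured the samples at 300 K.",
    "As an AI language model, I cannot verify these results.",
])
print(f"overall AI likelihood: {overall:.0%}; flagged passages: {len(flagged)}")
```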
So to close, I really wanted to reinforce the impact that this type of integrity program has. It eliminates the risk of damage from preventable retractions. It helps you reduce risk by analyzing your full portfolio, with nothing getting left behind or falling through the cracks. It prevents revenue loss from systemic integrity issues. And it saves time.
Manual quality checks will become a thing of the past for reviewers. And that is my close. We are currently trialing this integrity service, so anybody who's interested can reach out via the QR code at the end and take a look firsthand at what it can do. Thank you, Samantha. Our next presenter is Jason Roberts, senior partner at Origin.
Hi, everybody. Greetings from Toronto. I'm going to talk to you today about Origin Reports. Origin Reports is a subdivision of Origin Editorial. And what is it? Basically, in a nutshell, it is a browser-based reporting tool that we designed for editors, editorial offices and publishers, and we designed the reports with the utility of the end user in mind.
These reports are literally hundreds of charts and tables that instantly output data in the way that editors, publishers and editorial offices want to receive this information. Quite often there's a challenge in producing data in a style they can interpret quickly. We've removed that challenge, and the idea is that all you do is feed your data from your submission system into Origin Reports.
And you can play with it there and then — I'll talk about the user interface in a little bit — or you can pre-program your reports, and at a given interval it will simply output them for you. You don't even need to go into Origin Reports once you've set it up. What we're hoping to achieve with this tool is really to aid interpretation, in that so much of the challenge in reporting right now is literally just obtaining the data and then manipulating it into a form you can interpret.
So we're moving beyond that phase by doing all the hard work for you. It's a challenge for many editorial office folks to create reports, particularly if they have to download data into Excel. Many, many of the people in my line of work do not know how to use Excel at its most powerful; they might not know how to use a pivot table.
So we get rid of all of those challenges. And the beauty of it is that it is system agnostic. So if you're on Editorial Manager, we support that; we're also building out to other systems — we're currently building out to ReView by River Valley Technologies — and we can build out to any system. So what is unique about this? Well, first of all, its design: it's really easy to use.
We designed it with the end user in mind, and the end user was originally just Origin Editorial — it was an internal tool. But we've had so many other people look at it and go, wow, we wish we could get our hands on that, that we've decided to put it out there on the market for anyone to use. It is intuitive in that regard, because most of us, certainly at Origin, are not technology people.
It instantly updates when you feed in your data. So if you have questions that you want to challenge your data with, you can just answer them there and then, which is particularly useful, say, if you're at an editorial board meeting. In the past, when I would go, I would have my slides already set, and then somebody would ask me a question like, well, how many authors from China wrote a systematic review?
And then I'd have to go back home after the meeting and get that data run for them. We can move beyond that here, in that we can now just provide that in a live environment. The data interpretation element that I referenced at the start of the presentation is really that we've designed the charts in particular to give you quick shortcuts to interpretation. So instead of complex tables, the charts will actually point you at the information you need to know.
In particular, I'm finding that this is useful with things like the spread of data. You can see patterns, but also patterns that show you your consistency. And this is really important if we're to understand the inefficiencies in editorial office workflows and, in turn, the service that you deliver to your authors, which is ever more critical in an author-as-customer future.
A neat thing is that we have these portfolio-based reports, so that if you have multiple journals that you're in control of, you can literally copy and paste, if you will, the reports to the other journals. And again, once you've designed them once, you need never go in again. So, a quick detour for a second: who is Origin, if you're not familiar with us?
Origin is the largest independently owned provider of editorial office services in North America. We work with hundreds of editorial offices, and so I think we're uniquely qualified to talk about this: we've seen all the different reporting contexts that we could ever possibly imagine. So what is the problem we're trying to solve here?
Why is reporting so hard? Well, first of all, it's time-consuming. To give you an example, for one of the clients I used to do reporting for, it used to take me a week every quarter to run their quarterly reports. I had to extract the data out of the system, and I had to clean it up before I could use it.
And then I had to create all the different charts in Excel and hand them over. That could take me a week. It now takes me less than five minutes, because the data is pumped into Origin Reports. I've already pre-programmed my reports and designed them so they look exactly how this particular society client wants them. There are 32 different charts and tables I can give them.
In the past, I could only ever give them six for each journal. Now it's 32, and it's all done within five minutes. And now I can spend my time interpreting the data, and it's been revelatory. We've been able to see patterns and behavioral changes in the authors over the last few years that you simply couldn't see in the morass of data we had beforehand. That's the whole point of this tool: to try and expose things that maybe were hidden before.
We often find that data is poorly presented. We've seen what journals have done before, and we find that they're using the wrong metric to report on something, or they are literally just reporting it incorrectly. We know of actual inaccuracies in the reports provided by the submission systems — I'm not going to name them, but for instance, in one particular case, if an author has two institutions in two different countries, it can change the count of the number of submissions.
There are little things like that, and we actually correct for them so that you, as the user, don't have to. We're also, like I said, trying to add a level of sophistication that maybe hasn't been there in the past with journal reporting, in that we can do things like report the spread of data. We're trying to move away from generic data, so that you can change your parameters to get more accurate information.
A classic example I would give you is reporting turnaround time: a lot of generic reports include things like immediate decisions, which you probably wouldn't want to include if you're in the editorial office. It's a challenge to display results. It's a challenge to understand things like a relational database — if you have to program your reports, you might need to know, well, how do I connect the reviewer database to the manuscript database?
And if you make the wrong connection, you can get different results. So we've done all the hard thinking on that. We know which is the preferred way to report, and we've come up with standards that mean you don't have to worry about it. Trust us: we've already tested this and it works. One of the neat things we also report on — and you can turn this on and off depending on whether you want it — is parameters and data inclusion criteria.
So basically, how did you actually report something? When people come to us and say, we don't understand why the results are different, it's often because nobody knows how things were reported before — they never wrote it down. So the interface might look like this. You can see your results on the right there, and you've got this whole control panel on the left. Here I can control for things like: if I'm reporting total submissions, maybe I want to exclude certain manuscript types — perhaps I want to get rid of letters to the editor, for example.
And then you can change the visual display: I can add column totals if I want, and I can change the colors — I could change them to the color code for your journal, to personalize it. You can then also add greater nuance to your data. So maybe somebody asks, well, I only want to know about certain article types. Great, I'll filter for the ones I want to see. Every time I do this on the control panel on the left, the data updates on the right.
And then you can also filter things by country, by article type, by decision type, by editor, or by type of editor, if you wish. You can exclude editors by name, depending on what you're trying to report, and you can instantaneously change the date parameters. So if you want to report annually, or if you want to report by quarters, you can just do that at the click of a button.
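As an illustration of the kind of parameterized filtering described here — exclude manuscript types, restrict the date window, regroup instantly — here is a plain pandas sketch with made-up data; it is not Origin Reports' actual interface.

```python
# Illustrative sketch of the filtering a reporting tool does behind the
# scenes; plain pandas, not Origin Reports' API.
import pandas as pd

subs = pd.DataFrame({
    "type": ["Original Research", "Letter to the Editor", "Review"],
    "country": ["China", "US", "Germany"],
    "decision": ["Accept", "Reject", "Accept"],
    "submitted": pd.to_datetime(["2023-01-10", "2023-02-02", "2023-04-15"]),
})

def total_submissions(df, exclude_types=(), start=None, end=None):
    """Count submissions per country with configurable exclusions and dates."""
    mask = ~df["type"].isin(exclude_types)
    if start:
        mask &= df["submitted"] >= start
    if end:
        mask &= df["submitted"] <= end
    return df[mask].groupby("country").size()

# Exclude letters to the editor and report Q1 only.
print(total_submissions(subs, exclude_types=["Letter to the Editor"],
                        start="2023-01-01", end="2023-03-31"))
```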
And then there are many ways to skin the cat. So if you don't like a bar chart, you could maybe use a map, or a donut or bubble chart, whatever. You can change the x- and y-axes; you can flip things around in the tables. All of these things are designed at the click of a button, for your utility. Very quickly, because I'm almost out of time: some of the use cases you might come across. For editorial offices, this is obvious — just general reporting, but also maybe things like reporting on poorly performing editors. You might also want to report on evolving reviewer behaviors, because they are evolving.
And so you might need some historical context to understand what's happening now. For publishers, maybe you've got to go to a meeting where you've got to do four different board reports. Now you can just do this with a click of a button and have your reports ready to go — no time-consuming preparation; it's all ready for you. And for societies,
it's particularly useful if you're monitoring your editors' performance, maybe if you're paying them for performance-related activities. We have a whole suite of report cards — you can even do individual report cards, so that every editor can see where they have to improve their performance. So finally, how can we help you?
Well, we can design some specific reports for you if you wish. We can do deep-dive analysis for you, if you don't want to do it yourself. And finally, just to mention that there are cross-journal reporting options coming, and further link-outs to other systems. So basically, these are all the reasons why you should be using Origin Reports.
And you can go play with our sample data there if you wish. Thank you very much for your time. Please take a photograph of that QR code and go visit us. Thank you. Thank you, Jason. Our next and final presenter is Florian Kistner from Xpublisher. Hello, everyone, and happy Friday.
I want to show you how the fully cloud-based business process ecosystem Xpublisher can digitize and transform publishing processes, and even processes around them. Let me give you an overview of my presentation. I want to start with a simple three-step publishing process: create content, manage content, and publish your content.
We have designed specific solutions for all three of these steps: Xeditor, our web-based XML editor, enables anyone to create structured content; the digital asset management solution manages your content through the whole asset lifecycle; and our multichannel publishing solution allows fully automated production and publishing in various output formats and to multiple channels.
And these are not some third-party systems; they are all our own developments, so we guarantee seamless integration. And beyond that, we have a lot of functionality that comes out of the basis of the fully cloud-based business process ecosystem, which makes all three of these steps even more efficient, but can also be used for processes around them: our powerful and flexible metadata management to enrich your content, and a workflow engine with a graphical editor to model and enforce your best-practice workflows for efficient collaboration on your content.
And last but not least, the large variety of benefits of true software as a service: super secure, fully accessible, intuitive, highly available, and easy and fast to configure and implement. I have a detail slide for all of that. Let's start with the first step of the publishing process: create content. The basis for efficient publication is XML.
I think by now nearly everyone has understood and would agree on the major benefits: the output is structured and machine-readable, and you have an industry-wide standard. But there are also still a lot of challenges in creating XML. You either need a lot of technical proficiency — basically the ability to code — or an external vendor, probably somewhere far away, where you don't know what happens to your data and content.
Our solution for that problem is Xeditor. Xeditor is a fully web-based online XML editor which allows anyone to create schema-valid XML without any technical proficiency. The editing interface looks and feels like Word, but it behaves completely differently: it guides an author or editor through the document and the schema, and creates the XML code in real time, driven by the rules in the background.
It only allows the author to insert elements or attributes that the schema allows, so we guarantee that only schema-valid output can be created. Talking about schemas: out of the box we support the whole JATS family, which makes the most sense for scholarly publishing, but also others, and we can even create completely custom schemas if a client wants that.
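A toy sketch of that rule-driven editing idea — the editor consults the schema and only offers insertions that are valid at the cursor. The tiny allowed-children map below stands in for a real JATS DTD or XSD; it is an illustration, not Xeditor's implementation.

```python
# Toy stand-in for a schema: which child elements each element may contain.
ALLOWED_CHILDREN = {
    "article": ["front", "body", "back"],
    "front": ["journal-meta", "article-meta"],
    "body": ["sec"],
    "sec": ["title", "p"],
}

def insertable(parent: str) -> list[str]:
    """What the editor would let the author insert inside `parent`."""
    return ALLOWED_CHILDREN.get(parent, [])

def try_insert(parent: str, element: str) -> bool:
    ok = element in insertable(parent)
    print(f"insert <{element}> into <{parent}>: {'allowed' if ok else 'blocked'}")
    return ok

try_insert("sec", "p")    # allowed: stays schema-valid
try_insert("front", "p")  # blocked: the editor never produces invalid XML
```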
So to summarize: with Xeditor, anyone can create schema-valid XML without any technical proficiency or the need for an external vendor. That said, with the ability to easily create XML internally, you're also able to manage all your content internally and win back control. For that, we designed the digital asset management solution,
where of course Xeditor is seamlessly integrated. But you can manage not only Xeditor XML files in the DAM, but also images, Word or other Microsoft Office documents, videos, InDesign files — you name it. All managed in one place. And now you have the chance to have a real single source of truth, which you can create, edit, manage, collaborate on, share, work through the whole asset lifecycle, and bring to some kind of final state.
The DAM has functionality for professional license and copyright information management — to ensure, for example, that images aren't licensed twice and to avoid copyright infringements. There are also very powerful tools for metadata management and collaboration on top of that; I have separate slides for those. So to summarize: with Xpublisher's digital asset management, you can win back time and control over your assets, avoid redundant work, and save resources and costs.
Before I go to the publishing part, I want to talk about the metadata and collaboration functionality — features which come out of the basis of our business process ecosystem and make especially the DAM solution much more powerful. Let's start with metadata. Of course, XML files have metadata, and that can be handled with Xeditor.
But all files have metadata, like file size and name; images have image properties; and assets in the DAM have copyright and licensing information. In addition to that very important but, let's say, standard metadata, with Xpublisher we can easily create custom metadata forms: just create a blank form, drag and drop in text fields, radio buttons, drop-down menus, whatever; name them intuitively; arrange them in size and order; define which of these fields are mandatory, and therefore enforce data quality within the system from the beginning; and decide for what kinds of assets — based on file format, category, or location —
you want to have this form available, and therefore ensure data quality based on your organization's needs. In combination with our integrated, intelligent full-text search, this increases findability a lot. So find your assets instead of searching for them. Another super cool functionality, out of the box and out of the basis of our business process ecosystem, is our integrated workflow engine.
The graphical editor allows administrators to model whatever workflow an organization works with — publishing-related, for example peer review, review loops, copy editing, proofreading, and many more. You can assign tasks to individual users, roles, user groups or departments. And it doesn't only allow intuitive modeling of the workflows; it also makes them executable in the system immediately.
So how do our clients benefit from the workflow engine? Well, it can ensure that internal processes are followed, and therefore process quality increases — and with it the quality of your content and work in general. Organizations can establish best-practice workflows to increase efficiency and collaboration across teams and departments. It helps to avoid email ping-pong: every change, content edit, approval and signature is documented in the system with username and timestamp.
Unwanted changes can be undone easily with our time travel functionality. And last but not least, it helps to reduce idle times a lot: easily assign tasks to teams instead of individual users, and find substitutes. We can also enable push notifications in the browser and our mobile app, plus email notifications, and implement escalation mechanisms. And as I already mentioned, that's not only usable for publishing-related processes, but also for every work process around them
that you want to digitize. We internally, with our 450 employees, use this for our travel expense reports, application processes and more. So to summarize: our integrated workflow engine allows you to enforce best-practice workflows and avoid idle times, and therefore enables easy and efficient collaboration on your content.
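As a minimal sketch of what an executable workflow with team assignment, logging and escalation can look like — hypothetical names and states, not Xpublisher's engine:

```python
# Sketch of an executable workflow: states, a team assignee, an audit log
# with username and timestamp, and a simple escalation timer.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

TRANSITIONS = {"submitted": "copy_editing", "copy_editing": "proofreading",
               "proofreading": "published"}

@dataclass
class Task:
    state: str = "submitted"
    assignee: str = "editorial-team"  # assign to a team, not one person
    due: datetime = field(default_factory=lambda: datetime.now() + timedelta(days=3))
    log: list = field(default_factory=list)

    def advance(self, user: str):
        new = TRANSITIONS[self.state]
        # Every change is documented with username and timestamp.
        self.log.append((user, datetime.now(), f"{self.state} -> {new}"))
        self.state = new

    def escalate_if_overdue(self):
        if datetime.now() > self.due:
            print(f"escalation: notify substitute for {self.assignee}")

task = Task()
task.advance("editor.a")
task.escalate_if_overdue()
print(task.state, task.log)
```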
Back to the overview. Now we have created machine-readable content with Xeditor, enriched it with metadata, collaborated on it through workflows, and managed it in the DAM — and now we want to publish, highly automated. That's what the publishing solution is built for, and where the magic happens. We take the single-source-of-truth, machine-readable assets — like the Xeditor document or images — and run them through integrated production services like Antenna House print technology or InDesign Server to create, fully automated, multiple output formats like HTML, EPUB, InDesign and PDF for different publication channels.
So digital could be a website, a storefront or an open access platform, and print could be magazines, journals, books. The idea is to reuse your single source of truth — high-quality content — and publish it to multiple channels at the same time, thereby opening up your organization to completely new publication channels, highly efficiently and cost-effectively thanks to automation. And again, no external service for typesetting is required: everything runs through automation, end to end.
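A small sketch of that single-source, multichannel fan-out; the renderer below is a placeholder for production services such as Antenna House or InDesign Server, and the function names are illustrative only.

```python
# Sketch of single-source multichannel publishing: one XML source fans out
# to several output formats in one automated run.
from pathlib import Path

def render(source_xml: str, fmt: str) -> str:
    """Placeholder renderer: a real pipeline would call a production service
    here (e.g. XSLT plus a PDF formatter)."""
    return f"[{fmt} rendering of {source_xml}]"

def publish(source_xml: str, channels: dict[str, str], outdir: str = "dist"):
    Path(outdir).mkdir(exist_ok=True)
    for channel, fmt in channels.items():
        out = Path(outdir) / f"{channel}.{fmt.lower()}"
        out.write_text(render(source_xml, fmt))
        print(f"published {source_xml} -> {out} ({channel})")

# One source of truth, several channels, no manual typesetting step.
publish("article.xml", {"website": "HTML", "ebook": "EPUB", "print": "PDF"})
```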
So that was a quick overview of the publishing process with Xpublisher. But I want to finish with some more generic but very important benefits of our cloud system. First of all, high availability: you and your colleagues can access your content anywhere, anytime. Certified data protection and security, with our European background, a lot of clients in the governmental area, and certifications which we are currently the only company worldwide to have.
I'm more than confident to say that we are the most secure cloud system worldwide. And I see time is running out, so let's finish with one sentence: the platform, the business process ecosystem, is highly configurable through a no-code, low-code approach. This enables us to let our clients start within weeks and avoid lengthy and costly implementation projects.
I'm more than happy to demo parts of this. Follow the QR code or reach out to me through email, LinkedIn, or our website, xpublisher.com. Thank you, Florian. Well, now it's time for a short Q&A. We're a few minutes over, but if anybody has any burning questions, please either throw them in the chat or just unmute yourself and speak.
And we'll just give it a few moments. In the meantime, as I mentioned, pull out your phones and scan the QR code, and you'll be able to connect with any of our six presenters — or actually seven — and follow up with them independently. So we'll just give it a second here, to give you some time to scan the codes.
In the meantime, if you can't unmute, please just type in your question and I can direct it to one of the panelists. If not, you can follow up with them directly.
OK. I want to thank all the panelists and, of course, you for your participation today in the Innovation Showcase. Coming up next, we have an open access presentation in our training series.
So please note that there's one on the 19th and one on the 20th. And with that, this concludes our session today. Thank you all for being here, and we look forward to seeing you at the next Innovation Showcase. Have a great day. Bye bye.
And I believe that's it. So, presenters, thank you very much. I didn't see any other questions come through, so.