Name:
SSP Innovation Showcase (Winter 2023)
Description:
SSP Innovation Showcase (Winter 2023)
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/4c45c1aa-fff0-474a-ab04-0fe1047ea87d/thumbnails/4c45c1aa-fff0-474a-ab04-0fe1047ea87d.png
Duration:
PT00H55M40S
Embed URL:
https://stream.cadmore.media/player/4c45c1aa-fff0-474a-ab04-0fe1047ea87d
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/4c45c1aa-fff0-474a-ab04-0fe1047ea87d/2023 SSP Innovation Showcase GMT20230201-170055_Recording_ga.mp4?sv=2019-02-02&sr=c&sig=ZrNKzGjU1cLMe3Lzne0FQCfKdhuI%2FGwLP32%2FQbGx6Hk%3D&st=2024-10-16T01%3A54%3A35Z&se=2024-10-16T03%3A59%3A35Z&sp=r
Upload Date:
2024-04-10T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Well, it's 12 o'clock, so we should probably get started. Hello, everyone! Welcome to the Innovation Showcase hosted by SSP's Advancement Committee. We're happy you could join us here today. Before we get started, do yourselves a favor and pull out your cell phones; there are going to be QR codes you're going to want to scan. Next slide. Before we get started, again, I want to remind everyone today of SSP's code of conduct.
If you want any further information about this code of conduct, you can scan the QR code that you see on the screen. So, I'm David Myers, and when I'm not volunteering for SSP I'm the CEO of Data Licensing Alliance, the first marketplace making it easier and more efficient to license STM content for AI machine learning. On behalf of, and as a member of, SSP's Advancement Committee, we're happy to present this showcase, where each speaker will have 10 minutes to present.
After all the presentations are complete, you, as participants, can ask questions. If you have any further questions, please use the chat functionality or the Q&A, which I will direct to the appropriate panelists. Of course, you can always ask questions in the chat as we go along, and the panelists may, in fact, be answering you in real time. Further, each panelist will provide you with a QR code that contains their contact information, should you want to individually connect with them at a later date.
So today we have four great companies. The presenting companies are Copyright Clearance Center, KnowledgeWorks Global, DataSeer, and Straive. Our first presenter, from CCC, is Charles Hemingway. Hello, everybody, and welcome. First of all, thanks for the opportunity to share what's going on at CCC with the broader audience; we appreciate it very much. And I'll just start off by saying this is an incredible challenge: it's hard for me to even explain how to make toast in 10 minutes, and I'm going to try to discuss something fairly complex. So welcome; we'll get started now. We'll talk about the latest service that's become available from CCC. It's called OA Agreement Intelligence, and it is a service that makes it easier for publishers to model agreements and share them with their buyers. Right now in the ecosystem, that's emerged
as a significant difficulty for publishers, and we're aiming to help them with some new tools. Let's talk about some basic truths: institutions, publishers, and sponsors of all kinds in the market are working, trying to collaborate around deals. It seems to be easier for large publishers and a little bit more difficult for small publishers, especially, to get attention and to get time with the buyers.
So this is a great, true statement, because if we were all working on this, you know, in a vacuum, it wouldn't be super complicated. But by nature the ecosystem is fluid, right? Nothing's ever exactly perfect in terms of data; the data moves and changes, and there are outside influences which need to be accounted for. So modeling agreements and negotiating agreements is, for publishers, an incredibly complex undertaking. And what I'm imagining, and what this means to me, is that a lot of the work that used to go into subscription offers and subscription renewals has decamped from there and has moved into this space of OA
agreement management. So it's a similar kind of undertaking, but it's still very, very different, and publishers don't really have what they need to work with. And at the base of it all is data, right? We can't have meaningful dialogue with buyers and sponsors if we don't have a source of truth that we can all agree on, that this was our institution's open access output, and from there there can be conversations about what an appropriate agreement looks like,
what a relationship looks like. And that's led to some compounding problems for publishers. Poor data quality and lack of tooling have led to what is perceived as a lack of transparency in the market. And I say perceived, because I know firsthand that publishers want to do business transparently, but because of, you know, the kind of chronic bickering over data, it feels like publishers are being resistant to that tenet of negotiation.
And so for the publishers who we know closely, this is somewhat representative of the process they go through to put a deal on the table, put an agreement in front of a sponsor. Now, if this were a dance step, it would probably be more of a foxtrot, because publishers are going to line up, probably doing steps 5, 6, 7, 8 two, maybe three times before there's some agreement about, first, the quality of the data,
and then, you know, following that, the terms of the deal. So this is not sustainable for a publisher that has hundreds of agreements. This is incredibly taxing for publishers that want hundreds of agreements; it's incredibly daunting to imagine doing this volume of work. So what we are doing to support that is launching a service called OA Agreement Intelligence. It's an automated service to do agreement modeling. It uses enriched data: we have an AI that does
enrichment using the Ringgold hierarchies and CCC's incredible backlog of transaction data. And it helps publishers build sensible models that can be shared easily, in a tool that can be used by non-data-experts. So, VPs of sales, sales admins: we're building a tool that, you know, regular folks can use, even if you're not a data scientist. And what it's going to do is break the process down into a few easy steps:
prepare a baseline of data, model your agreement, and then tweak, adjust, and analyze. So let me show you what that looks like. Whoops, operator error, sorry about that. Step one is preparing a baseline. That really means segregating the data that you need to model forward. Some publishers and some users will look at a whole data set and gain really interesting insights about the business at large, right?
Who should we be pursuing for agreements? There are some great insights at that level, but for the purposes of modeling, it allows you to segregate and draw bright lines around your data to make sure that, if you're trying to offer a deal over two or three years, you're working from the proper data set to build a forward-looking model. The second step is really all about creating scenarios: what-if scenarios, trials, time travel.
What will the deal look like under certain conditions? There's a very elegant set of tooling which allows you to compensate for increases of OA content in the portfolio, increases in output from the partner or sponsor, decreases in certain parts of your portfolio, i.e., we'll have less going into hybrid journals and more coming out under open terms. So there's a really dense set of tools to allow you to create those what-if scenarios, do that time travel, and understand:
how do we get to a place that's fair for us and for our partner? And the last step is analyze and adjust. This is a tool where you can continue to tweak the model. You can easily see whether it is or isn't what you've been discussing with the client. And my favorite part of that is, there are really elegant visuals and graphics that you can take and move into your proposals,
as well as the baseline of data that you used. So when we talk about working in transparency with your buyers and market partners, I can't think of any better way, right? You build your proposal, you give them the data that you based it on, and that, to me, says transparency. So it's an exciting new platform. We've got a handful of publishers that are touching it and trialing it right now, and we're excited to have more get in and touch it.
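To make the three-step flow a bit more concrete, here is a minimal sketch of the kind of what-if scenario modeling described above, assuming a segregated baseline and a couple of simple growth parameters. All names, fields, and pricing logic here are hypothetical illustrations, not CCC's actual OA Agreement Intelligence implementation:

```python
# Hypothetical sketch of "what-if" agreement modeling over a segregated baseline.
# Field names, growth assumptions, and pricing logic are invented for illustration.
from dataclasses import dataclass

@dataclass
class Baseline:
    institution: str
    articles_per_year: float  # historical annual output in the segregated data set
    avg_apc: float            # average APC across that baseline
    oa_share: float           # fraction of output currently published OA

def project_deal(b: Baseline, years: int, output_growth: float,
                 oa_shift_per_year: float, discount: float) -> list[dict]:
    """Project OA article counts and spend under one what-if scenario."""
    rows = []
    articles, oa_share = b.articles_per_year, b.oa_share
    for year in range(1, years + 1):
        articles *= 1 + output_growth                       # partner output grows
        oa_share = min(1.0, oa_share + oa_shift_per_year)   # hybrid-to-OA drift
        oa_articles = articles * oa_share
        rows.append({"year": year,
                     "oa_articles": round(oa_articles),
                     "spend": round(oa_articles * b.avg_apc * (1 - discount), 2)})
    return rows

# Compare two scenarios against the same baseline (all numbers illustrative):
base = Baseline("Example University", articles_per_year=120, avg_apc=2750.0, oa_share=0.45)
conservative = project_deal(base, years=3, output_growth=0.02, oa_shift_per_year=0.03, discount=0.10)
aggressive = project_deal(base, years=3, output_growth=0.06, oa_shift_per_year=0.08, discount=0.15)
print(conservative, aggressive, sep="\n")
```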
So I would definitely welcome any contact from this group. Anybody that is living under this kind of rule, that has to do this sort of thing, I'd love to talk with you, love to show you exactly how it works up close, and take your feedback. And with that, I'll turn it over to our next presenter. Thanks, Charles. Thanks very much. Our next presenter is Tom Beyer from KnowledgeWorks Global. Tom? Hi, everybody! I'm Tom Beyer, I'm the Director of Platform Services here at KnowledgeWorks
Global for PubFactory. And I'm going to talk a little bit today about a module that we added to our publishing platform a while back. It's called PubGen, and it basically allows you to enhance your platform with freeform content. So I mean to talk today a little bit about the distinction between traditional content and freeform content, what we mean by that, some drivers and benefits for doing this, and then look at a few concrete case studies. So, with traditional content,
what we're really talking about, in terms of a publishing platform, are the books and journals that, as a publisher, you're used to publishing. That content goes through a relatively long workflow process: it comes out of peer review after it's been accepted, and then it goes through copy editing and composition and the construction of the e-deliverables before it gets delivered to the hosting platform. So, as a hosting platform, we get a package of XML files and PDFs and images and supplementary material, and we go ahead and put that onto the platform.
In contrast, freeform content is authored directly in the platform. There's a very minimal workflow process; there's a QA process, but beyond that there's minimal workflow. And it can consist of really any kind of content that can be embedded in a web page: video, audio, interactives, text, images, whatever you want. It often will want to be interconnected with your traditional content, with the articles and books that you're publishing, and it often will have a higher level of design polish than the traditional content, just because it's very easy to take advantage of any of the things that you can do in a web page, and oftentimes the nature of the content is such that you want that sort of higher level of polish.
So that sort of begs the question: how is it different from just a traditional web page? You know, as a publishing platform, we've allowed our users to create web pages from the start. They can create as many web pages as they need, but those are really treated very differently from the publishing content. With this freeform content, we treat it as real published content. It can be tagged with the same subject terms, it can be searchable, and, as I said, we provide some tooling to make it easily cross-linked, both among different
elements of the freeform content and, more importantly, with the traditional content. It can have formal contributor tagging, and we can even register DOIs for this content.
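As a rough illustration of what treating freeform content as "real published content" can mean in data terms, here is a minimal hypothetical sketch. The record shape, field names, and DOI placeholder are invented for illustration and are not PubGen's actual data model:

```python
# Hypothetical record shape for first-class freeform content: same taxonomy as
# journal content, contributor tagging, cross-links, and an optional DOI.
from dataclasses import dataclass, field

@dataclass
class FreeformItem:
    title: str
    body_html: str                         # anything embeddable in a web page
    content_type: str                      # "news", "blog", "podcast", "video", ...
    subjects: list[str] = field(default_factory=list)      # shared subject taxonomy
    contributors: list[str] = field(default_factory=list)  # formal contributor tagging
    related_dois: list[str] = field(default_factory=list)  # cross-links to articles
    doi: str | None = None                 # optionally registered like an article

def publish(item: FreeformItem, register_doi: bool) -> FreeformItem:
    """Minimal QA-then-publish flow: check required fields, optionally mint a DOI."""
    assert item.title and item.body_html, "QA check: required fields present"
    if register_doi and item.doi is None:
        # Placeholder DOI minting; a real platform would call a registration agency.
        item.doi = f"10.9999/freeform.{abs(hash(item.title)) % 10_000}"
    # A real platform would also push the item into the same search index as the
    # journal and book content, so it surfaces in the same results.
    return item

post = publish(
    FreeformItem(title="Daily news: transplant trial results",
                 body_html="<p>...</p>", content_type="news",
                 subjects=["Nephrology"],
                 related_dois=["10.9999/example-article"]),  # placeholder DOI
    register_doi=True)
print(post.doi)
```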
So that's really what we're talking about here. And what were some of the drivers that led us to do this? Publishers wanted to better integrate content that was produced by different parts of their organization. And we also got a lot of feedback from our publishers that they needed to be able to respond more quickly, potentially handle one-off pieces of content, or handle new and emerging content formats. So that's really where the idea for this came from. But it has had a lot of add-on benefits. There's the speed at which you're able to get content up, and the flexibility: we can really handle pretty much anything that, like I said, you can put in a web page. And then it provides significant benefits to your publishing platform, in that it really improves SEO and discoverability.
It provides extra content that is often interconnected with your traditional content and brings people to the platform in different ways. And, finally, it can consolidate disparate platforms that you've got, and therefore reduce your infrastructure overhead. So let's look at some concrete case studies, so you can get a sense of what I'm really talking about here. The initial impetus for this really came from working with publishers who had news operations and wanted to better integrate that news
content with their traditional publishing content. So one publisher that we initially started working with was the American Society of Nephrology. They have Kidney News, a traditional monthly news magazine, but they also wanted to incorporate daily news, because they know everybody's online, and waiting a month was too long for a lot of their needs. Similarly, the American Veterinary Medical Association had a news roundup in their flagship journal.
But again, with everybody online, they wanted to be able to add daily news and also break it up, because that news roundup had really been treated as a single article. By adding the daily news you get a much more granular set of news items, and again improved discoverability and SEO. So here we're just looking at that Kidney News Online site. If we step into a search result, you can see that we can return both the news magazine results and the daily news; they're totally integrated. And if we look at a daily news item, we can see that it's tagged with the subject taxonomy and has all the affordances of a web page. In this case it's very simple, it's just a text item, but it can really be whatever you want.
Similarly, with AVMA, we're looking at a mix of journal articles and news posts, and again those news items can include contributors, a full publication history, images, all of the things that you can do with a standard web page. Another big use case for this has been blogs.
A lot of our publishers have blog platforms, and they're a great way to promote content, but because they're on a separate platform it can be harder to cross-link the content. So we worked with one of our partners, Brill, to take their blog and move it onto the publishing platform, and this enabled us to automate that cross-linking. It consolidated platforms and provided significant discoverability and SEO benefits to the publishing platform. So let's take a quick look at that. What you're looking at now is the homepage for the blog.
You can see everything is tagged with their quite rich subject taxonomy. It covers all of their various different imprints, and you can see they've categorized their blog posts by different kinds of things: interviews, podcasts, that kind of stuff. And if we look at one of those blog posts, you can see that we've got a nice multi-column layout; we can do a lot of the things that you would expect with a blog. So, just a couple of different examples there. And what else can we potentially do with it?
Here, what I'm showing is that we've again got the blog being returned within search, and in this case we're actually seeing both the original research article and the blog post with the authors about it, mixed right together. So, depending on your user population, they can go in at whichever level they're interested in, directly from this one place. So what other kinds of things can we do with this kind of freeform content? Sponsored content is something we've talked about.
Podcasts, as I mentioned, are something that a lot of our publishers are doing, and we're starting to incorporate those onto our sites. What we're seeing here is video; embedding video is another common example. And then, finally, one of our publishers, the Journal of Neurosurgery, has a monthly journal club, where they get authors together to talk about articles of interest
from that month, and then post that webinar. That's another great example of the kinds of things that you can do with this freeform content. So hopefully that was of interest, and certainly, if you want to discuss further, there's the QR code; you can also contact me directly. So thank you very much, hope that was interesting, and now on to the next speaker. Great, thank you, Tom. Before we get on to our next speaker, who is Tim Vines from DataSeer,
I just want to remind you all to post any questions in the chat or in the Q&A; that icon you can find at the bottom of your screen. So, without further ado, Tim Vines from DataSeer. Great, thank you so much. Okay, yes, so this is DataSeer. Let's get straight into it. This is a piece of scientific research; I'm going to use, kind of as a metaphor, a piece of cake to hopefully communicate what I'm trying to say
here. The reason I picked a piece of cake is that it's made up of multiple layers. There's the article, which is kind of like the icing on the cake: it's the summation of all the other parts that went into it, and it sits on top of all that. Then there are the data sets, code objects, protocols, the metadata that binds it all together, and then the lab materials. So all of this comes together to make the evidence that is communicated in the scientific article. And this is how the research process goes: funders pay for the research. They pay the researchers,
they pay the university overheads, they pay for the materials. The researchers then do the research; they make the cake. They then send the article to a scientific journal; after peer review, hopefully, the journal publishes the article; and then the scientists tell the funder about the article in their end-of-grant report. And you may notice that for the last three steps the rest of the cake has disappeared, and we're only really addressing the icing. And this is a problem. Because if you divide the amount that we spend on research across the OECD every year, something like 425 billion dollars, among the two-and-a-half to three million articles that get published each year, funders are spending between $200,000 and $500,000 on a piece of research: on all the salaries, the lab equipment, and so on.
And when the rest of the cake, most of the cake, goes missing, when all the substantial parts of it never become public, it means that public investment in science is largely being wasted. It's also a problem because it means we can't do a lot of the stuff that we want to do; researchers can't leverage those other outputs to do new things. We can't actually verify whether an article is correct without the data and code:
we need to rerun the analyses to see if we get the same answers the authors did. We also can't repeat research from scratch if we don't have detailed protocols and a list of the lab materials that the authors actually used; we can't expect that it would work in the same way. And without the metadata, we can't find and reuse data sets effectively. So the absence of these outputs actually really hurts science. The solution, as you've probably all intuited, is to ensure that all research outputs become public: all of the outputs associated with a particular slice of research, made public together. And why do we care about this particularly now? Well, as you've probably all heard, this Nelson Memo just came out, and it has two pretty strong requirements in it.
One is that all published articles funded by US government agencies are accompanied by their data sets; that is, the icing must have the rest of the cake underneath, or at least one part of it. And these agencies also have to monitor their progress. This is a bit further down in the Nelson Memo, it's a bit of a scroll, but they do actually have to monitor how they're doing with open data. And, as you're probably aware, this policy has set off a landslide of open science initiatives, because everyone wants to conform with whatever the US government is doing, since it's such a large actor in this field.
And this is where DataSeer comes in. Perhaps unsurprisingly, we have two solutions. We do baseline assessments, and these answer the monitoring problem: we can help a stakeholder understand what's going on in their corpus of articles. In your journal, what proportion of authors share any data at all at publication? What proportion share code? What happened after you brought in a new policy? And the compliance checks are for individual articles: this is where we meet researchers at their article and help them understand how a broadly worded policy applies to their article, and what they need to do to comply with a stakeholder's open science policy. So, baseline assessments. We answer questions like: does data sharing change through time?
Are researchers now sharing more data than they used to? Did an open science policy I brought in have any effect? And what's going to happen going forward: the things I'm going to do in future, new policies I might bring in, new incentives I might introduce, are they going to have any effect? A really good example of this is the work we've been doing with PLOS on the Open Science Indicators. We've been monitoring, across their entire corpus from 2019 onwards, what has been happening. Here, the dark blue line is PLOS's articles, and the dotted line is a comparator set drawn from PubMed Central, and you can see that, on average, PLOS is doing quite a bit better on having data shared in a repository.
However, 30% is still not a very high value, so there's definitely work to do. Code sharing is a bit of a different picture: it's low across both the comparison set and the PLOS corpus. But without this information, PLOS couldn't make any decisions about what to prioritize and what to do. And so we see this as being absolutely essential for most stakeholders in the field at the moment, because without good data on what's happening, how are you going to go forward in any meaningful way?
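As a toy illustration of what such a baseline assessment computes, here is a minimal sketch that compares data-sharing rates through time for a corpus against a comparator set. The records and field names are made up for illustration; this is not DataSeer's schema or method:

```python
# Minimal sketch: fraction of articles per year with data shared in a repository,
# computed for a publisher corpus and a comparator set. All records are dummies.
from collections import defaultdict

def sharing_rate_by_year(articles: list[dict]) -> dict[int, float]:
    """Fraction of articles per publication year with data in a repository."""
    totals, shared = defaultdict(int), defaultdict(int)
    for a in articles:
        totals[a["year"]] += 1
        shared[a["year"]] += a["data_in_repository"]  # bool counts as 0 or 1
    return {y: shared[y] / totals[y] for y in sorted(totals)}

corpus = [{"year": 2019, "data_in_repository": False},
          {"year": 2020, "data_in_repository": True},
          {"year": 2020, "data_in_repository": True}]
comparator = [{"year": 2019, "data_in_repository": False},
              {"year": 2020, "data_in_repository": True},
              {"year": 2020, "data_in_repository": False}]
print(sharing_rate_by_year(corpus))       # the "dark blue line"
print(sharing_rate_by_year(comparator))   # the dotted comparator line
```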
And it's difficult and annoying and time-consuming, and therefore it tends not to get done, and also because they know at the same time that the stakeholders don't really know what the author should be doing. They can get away with not doing anything. So we change that dynamic. We meet researchers at their article, and say, to comply with stakeholders X and Y and Z's policies, you need to share. The following data sets on the following repositories, the code objects need to go here. You need to put the protocols and protocols. I/O. Whatever combination is applicable, and because it's then very clear to the researchers what needs to be done.
The pi doesn't piece be doing on that leg work. They can actually get one of their grad students to do it so much lower. Burden on the lab, and because it's clear to them what they should be doing, it's also clear to the stakeholders what should be done, and so it's much easier for them to monitor compliance at the individual article level and the great benefit of doing this as well, is that We're then able to join these things together. If we worked with the authors to say, Okay, this data set needs to go on that repository. We can also record that method data and share it as event data on data side of crossraft. So that that linkage is then visible to the world.
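To illustrate the kind of article-to-dataset linkage being recorded, here is a hedged sketch of an event-data-style assertion. The JSON shape is deliberately simplified and hypothetical; it is not the actual Crossref Event Data or DataCite schema:

```python
# Simplified, hypothetical event-data-style record asserting that an article
# references a deposited data set. Not the real Crossref/DataCite schema.
import json
from datetime import datetime, timezone

def make_link_event(article_doi: str, dataset_doi: str, repository: str) -> str:
    event = {
        "subj_id": f"https://doi.org/{article_doi}",  # the published article
        "relation_type": "references",                # article points at its data set
        "obj_id": f"https://doi.org/{dataset_doi}",   # the deposited data
        "source": "compliance-check",                 # hypothetical source name
        "repository": repository,
        "occurred_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(event, indent=2)

# Placeholder DOIs for illustration only:
print(make_link_event("10.9999/example.article", "10.9999/example.dataset", "Dryad"))
```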
Okay, so the example here is what we've been doing with Aligning Science Across Parkinson's. This is their blueprint for collaborative open science. We've been working with them for almost two years now, and they are focused on all aspects of the outputs associated with the articles.
And so this is one of the graphics that we've created with them. Assessing the first version of the preprint, when the authors have just finished their research, this is the proportion of things that they're sharing; and then, after our compliance check, the compliance is way better, especially around data and code being shared. There's still work to do around materials and protocols, but obviously we're going to make progress on that going forward.
Okay, and I think that's all I have. Thank you very much. Great, thank you, Tim, for that presentation. Our final presenter is KG from Straive. KG? Thanks, Dave, and thanks to the Society for Scholarly Publishing for this opportunity. Really great, interesting ideas which we have heard over the last 30 minutes or so. Just a quick background on Straive: we are a leading provider of technology and solutions to publishers and information providers. And what I want to present to you today is something which we've been working on to enable journal transfers.
So let me get this slideshow going. Okay, hang on a bit. So, we all know what the challenge with transfers is, right? Everybody does that today. A good percentage of manuscripts get rejected not because of poor quality of science, but because they are probably not suited for the journal to which they are submitted. And at the other end, within the publisher's portfolio, there are a set of journals which do not have enough manuscripts. So the whole question is: how can we better manage the portfolio and look at these submissions to find the right home for them, while at the same time making the author a part of the decision journey, helping retain their loyalty and improve the author experience?
Now, if you're going to do this at scale, you can't manage it with Excel files and those sorts of workarounds. It needs to be something which takes away the human effort of management and tracking, and provides enough data which then enables decision-making in terms of how to build the whole transfer strategy. So that's what we wanted to do as part of this exercise. In conversations with customers, this came about as a challenge, and we said: how can we build a solution
which addresses this? And that's how the Transfer Desk Suite came about. The whole idea of this solution was that it is customizable, and it is peer-review-system independent. That provides for the situation where publishers are running two different peer review systems and moving journals between the two; it's certainly possible to do that. It makes the author a key member of the decision-making process, and that, we believe, is critical in terms of improving author experience and satisfaction. It automates the whole transfer workflow, so it removes that effort from the editorial assistant:
they don't need to sit on weekends looking at Excel files to see what came in or where to offer the transfer. It is scalable: we started one customer with 10 journals; today we are at about 1,100 journals on that same platform, processing close to 20,000 submissions. So you could do 10 journals on it, or you could do 1,000 journals. And the most important thing, I would say, is the data: at each point there is data which gets generated, and using that data there are decisions which can be taken.
I'll give a couple of examples as we move forward. How does this whole thing work? Basically, the input to TDS is an email. What customers do is, when the rejection letter is being configured on the system, if the editor feels that this manuscript is suitable for a transfer, the transfer email is then pushed to TDS, and that creates a record in the system. What then happens is the journal recommendation: based on that record, the system looks into the journal recommender module to find three to five journals which would be suitable candidates for this manuscript.
This journal recommendation could be a simple cascade list, a matching table (if it's journal A, it should be B, C, D, E), an ML-based algorithm, or some sort of hybrid. We could develop that, or many customers have their own journal pathways and we could integrate those. But the final goal is that a candidate list is created, and that becomes the transfer offer which goes out to the author: the system sends out an email to the author listing the journals.
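A minimal sketch of the simplest of those recommendation strategies, the static cascade or matching table ("if it's journal A, offer B, C, D, E"), might look like the following. The journal names are placeholders, and an ML-based or hybrid recommender would replace the table lookup:

```python
# Toy cascade/matching-table recommender: rejecting journal -> candidate journals.
# Journal names are placeholders; this stands in for the recommender module.
CASCADE_TABLE: dict[str, list[str]] = {
    "Journal A": ["Journal B", "Journal C", "Journal D", "Journal E"],
    "Journal X": ["Journal Y", "Journal Z"],
}

def recommend(rejecting_journal: str, max_offers: int = 5) -> list[str]:
    """Return up to five candidate journals for the transfer offer email."""
    return CASCADE_TABLE.get(rejecting_journal, [])[:max_offers]

print(recommend("Journal A"))  # candidate list that becomes the transfer offer
```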
And there is enough information for the author to go in and look through, to see the details if they need more information. The outcome at that point in time, for them, is to make a choice: out of, say, the three to five journals which are provided there, they choose a particular journal. Once a journal is chosen and they press the submit button, the editorial assistant gets an email to say this author is willing to make this transfer. The actual process of the transfer is something which happens outside, in the peer review system. If you're transferring within the same system, it's much simpler; if you're transferring from one system to another system, there are a few more steps.
But the editorial assistant does that outside of the system. Once the transfer is done and the author acknowledges that transfer, we complete the loop: we have the editorial assistant notify the system that the transfer is completed, so that the system can track and round-trip from a workflow perspective to completely capture that information. I talked about reporting and dashboards: at each stage we have the ability to track that data, so you could look at what's the percentage of transfers, what's the arrival rate of manuscripts, how many emails have been opened.
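Here is a small sketch of that round-trip tracking as a simple state machine, with invented stage names; the real Transfer Desk Suite workflow (emails, reminders, peer-review-system handoffs) is more involved than this:

```python
# Hypothetical state machine for one transfer record, from rejection email to
# closed loop. Stage names are invented for illustration.
from enum import Enum, auto

class TransferState(Enum):
    RECORD_CREATED = auto()   # rejection email pushed into TDS
    OFFER_SENT = auto()       # candidate journals emailed to the author
    JOURNAL_CHOSEN = auto()   # author picked a journal and hit submit
    TRANSFER_DONE = auto()    # editorial assistant completed it in the PRS
    LOOP_CLOSED = auto()      # completion signal captured for reporting

NEXT_STATE = {
    TransferState.RECORD_CREATED: TransferState.OFFER_SENT,
    TransferState.OFFER_SENT: TransferState.JOURNAL_CHOSEN,
    TransferState.JOURNAL_CHOSEN: TransferState.TRANSFER_DONE,
    TransferState.TRANSFER_DONE: TransferState.LOOP_CLOSED,
}

def advance(state: TransferState) -> TransferState:
    """Move a transfer record to its next stage, enforcing the workflow order."""
    if state not in NEXT_STATE:
        raise ValueError("transfer already completed")
    return NEXT_STATE[state]

s = TransferState.RECORD_CREATED
while s is not TransferState.LOOP_CLOSED:
    s = advance(s)            # each stage change is trackable reporting data
    print(s.name)
```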
So with all this sort of data, some of the decisions which we have to make, as part of discussions with customers, would be: when do you want to send this email out? Because TDS actually gets the email right at the time the author is receiving the rejection. So sometimes, if there are delays, this email could reach the author a little bit early, so you want to time that. At the same time, you don't want to allow too much time, such that by the time your transfer offer goes out the author has already made up their mind and found another journal. So it's about trying to find that balance; it's an art
rather than a science, you just need to experiment. Similarly, it could be about reminders: do we set reminders at 3 days, 5 days? What we have seen is that when the first email goes out there is a big increase in transfer requests coming through, and then the next bump happens when the reminder goes out. So it is possible that people would have missed the first email. But if we wait too long, say 10 days, to send out a reminder, in that timeframe the author has probably made another decision.
At the same time, you don't want to bombard them with an email every day. So fine-tuning whether that should be 3 days or 5 days is another thing which becomes useful. The system is also customizable: we have even tried sending out emails in native languages to see if that improves transfer rates, and you could do A/B testing on the email template of the transfer offer itself to see if that provides benefits. So the whole idea is: generate that data, use that data to improve,
try different strategies, and see how your transfer system works. This is a quick example of how the screen looks; this is a static one at this point in time, but basically this is the main dashboard. You can go into the detail; all the different reports are available. You can download the data as well, and integrate it with standard BI applications to run reporting. So that was a quick curtain-raiser about the system itself.
We've got a case study on this; the link is here, and you can download that, or use the QR code to send us your messages. As we've discussed with different industry stakeholders, there are new ideas which come in, so I'm happy to engage in conversation. Thanks again for the opportunity. Thank you, KG, really appreciate it. Well, now it's time for the Q&A. If you have any questions for the panelists, again, please post them in the chat or in the Q&A
section at the bottom of your screen. So we'll take a minute there for Q&A, and while you're thinking about your questions, there were two already that came through, so maybe I'll just review them. The first one was: is CCC using ROR institutional IDs as well as Ringgold? Given these are OA agreements, and ROR is an open registry supported by many OA institutions, should we be thinking and doing more with ROR than a paid service like Ringgold?
And Charles answered: excellent question. As of today, the Ringgold data offers the density of hierarchies needed to enrich records for modeling. It does not require that publishers use Ringgold in their internal systems, etc. The second one: is CCC seeing any consistency in the types of OA deals? I know there are discount deals, multi-payer deals, subscriber membership deals, and all-you-can-publish deals. And Charles again answered: good question. We see about a half a dozen common agreement constructs that create, quote-unquote, families of agreement types.
There are growing cases of multi-payer and subscribe-to-open (S2O) agreements that we are supporting. So those are the questions we have so far. Do any of our participants have any others? We'll give it a few moments for you to give it a thought and type your question. Oh, looks like we've got a new one. The question is: I like the DataSeer Aligning Science Across Parkinson's example.
Do you think other funders will adopt these baseline checks and workflows? What's the uptake? Oh, yeah, absolutely. Right now we're getting a lot of interest in the baseline assessments. I think everybody is digesting the fact that this mandate from the OSTP is pretty serious, and everyone's sort of scrambling to find out where they are
with respect to open science. ASAP is kind of unique in terms of being a very new funding agency, and therefore they were able to set the stage around open science and make it a condition of funding; they're at kind of a platinum level in terms of how they're approaching this. But absolutely, yeah, we're working with quite a few other organizations, some journals, and an increasing list of funders to put these requirements on the individual articles to help improve compliance.
We've got about five clients for that service, and some of them are doing both. Great, thank you, Tim. Keep those questions coming if you have any. Looks like we've got another one. This is for KG: can you clarify how papers are transferred? If an author agrees to a transfer, is this handled through TDS, or are the two journals communicating directly? That's a great question. TDS is more of a workflow platform, so the transfer itself is something which happens on the peer review system.
If all your journals are set up on the same peer review system, there are transfer pathways which those systems offer. If you're using two different peer review systems, then you'd have to download the manuscript and re-upload it into the other system, and that's something which happens outside of TDS itself. But based on feedback, what we did implement was a completion mechanism: once the transfer is completed, that signal flows back through an operator trigger, so that TDS captures the fact that this transfer was offered to the author, the author accepted it, and the transfer was completed. So it becomes one place for getting all the data. Thank you, KG.
Any other questions? Here we go. This is for KGL: does KGL data integrate with other discovery layers, making an article and its related blog posts visible in a university library's collection, for example? Yeah, it's a good question. In the case of blog posts, they're typically in front of the paywall, so they're entirely discoverable by any indexing service
that is scanning the site, Google or whatever. But we can, in fact, package these up and distribute them to downstream services just the way we do journal articles. So if there's a downstream service that wants to take that content, our export mechanisms can package it up and send it on. Right, thanks, Tom. Well, now we have kind of a question that came across, and maybe I'll lob it up to all four of the panelists. This is a more general question, which is: what are some of the biggest challenges to developing these new technologies
and innovations? So maybe from each of your perspectives you can give a kind of quick comment, and I'll start in the right order, starting with CCC. Off the top of my mind, aside from the usual resource and money constraints that play into everything we do, it's mostly about feedback. You know, the clients you're building for are running at 103 to 110 percent.
So getting their time, having meaningful dialogue about what's needed and what's useful, is harder than ever. I think getting time with your buyers is incredibly difficult. Thank you, Chuck. Yeah, I would totally agree with Chuck. We schedule regular meetings with our publishing partners to talk at that higher level, because basically we don't want to add anything to the platform that isn't highly needed by our publishers, and there's no reason to build technology and just hope people come; we really want to be solving problems. So we do, as part of our regular managed service of the platform, meet with all of our publishers on a regular basis to discuss
not just what the day-to-day is, but where they want to be 6 months, 9 months, 18 months out, and try to make sure that the platform is ready for them when they get there. Thank you, Tom. Tim from DataSeer here. I could talk for hours about this, because we're a startup, right? It's hard in lots of different ways. I think one of the key problems that's out there is
that it's hard for potential clients to take the time and the risk of trying out a new product. So even if you've sort of validated it, you know, you've asked, is this a problem you have, and they've said yeah, there's still that next step of: okay, well, are you willing to pay for a solution to this problem? Is this important enough for you that you're going to be able to take time out of your day to work on it with us?
And I think we're very lucky, actually, in that scholarly publishing has this very wide range of actors and stakeholders with different levels of engagement around these problems. So we've been able to find a few early adopters and evangelists that have helped us define and hone our products. But, yeah, I recognize that's not always the case, and it's a big challenge. Thank you, Tim. KG, can you bring it home? I would say two things, right, David. One is, of course, as you look at some of these: are we looking at fads versus trends, right? Because you're going after something,
and what might happen is that something else could completely take that out. So I think that's important to determine early on, as you make the investments. And the second thing, to what Tim mentioned also, is about change management, right? Of course there is always resistance to change when you're bringing in something new. So how do you get those champions who are able to show the value, those early adopters? I think that's important to get that going. The inertia aspect of it is important: you get a couple of people on, and the juggernaut goes on, but the first two people are important to get that feedback back in. Great, thank you. Alright, we've got another question that just came in: with the growth of open data, open access, and open science,
what are other big pain points or challenges for small to medium-size publishers? It's great to see the work PLOS and DataSeer are doing, for example, or CCC, but how can smaller society publishers also be part of these programs and pilots? Can SSP help? So I'll open it up to the field, including anybody from SSP who wants to comment. Can I answer? Okay, we're getting into philosophy for a second
here, but for open science, one of the big problems that we see is that there are two main actors: the funders funding the science, and the publishers sort of shepherding the science. On the publisher side, they have the moment of attention from authors ("do this, or we won't publish your paper"), and they have articles that can be changed before they get set in stone as the version of record. But the publishers don't actually experience any economic value from open science.
That is, for an article that's completely reproducible, shares all the data, the code is just beautiful, and one that's completely irreproducible, they get the same APC, the same subscription revenue. So, you know, maybe the publisher would naturally be able to put some money towards it, but it's a heavy lift. On the other hand, there are funding agencies, like I said, that are spending hundreds of thousands of dollars on the research, and all those outputs are going missing. But they only ever hear about articles in the end-of-grant report, by which point they've been the version of record for two years.
Nothing can be changed. And so we need to realign this, as a way to help publishers and authors understand what the funding agencies want at the peer review stage. And that's where small societies can help, because they can say to their funders: look, we are with you on this. We want to help your grantees comply with your policies, and how can we do that?
And I think that's true for all publishers. And I think there's also perhaps a business opportunity there, where you say: okay, this is going to cost some money for us to do this work for you. But anyway, sorry, that was kind of a long answer. No, no, this is great; this is why we're here. Thank you. Okay, anybody else? Yeah, from the publishing perspective, that's really kind of one of the benefits of the platform. We have a variety of sizes of publisher on the platform, and certainly our bigger publishers are often leading in terms of the enhancements they're pushing us to do.
But we're very careful, in working with them, to ensure that those enhancements are done in such a way that all of the publishers on the platform can then benefit. And that means that the smaller publishers can benefit from those enhancements to the platform, especially with a lot of the reporting requirements that Chuck was talking about with open access, and open science in general. A lot of that kind of stuff is just hard for the smaller publishers; it's hard for them to understand what the issues are.
And when we work through those issues with our bigger clients, we can then turn to the smaller ones and say: hey, here's what you need to do, and here's how we can help do that. So I really see that as being one of the benefits of being able to provide a common publishing platform technology to our publishers. With our smaller publishers especially, it's really about being able to hold their hand through that, help them prepare their data, develop the workflows that they need, or, where we can, help automate that process as well.
And that's also why I like to participate in forums like this, because then I start learning about things and can direct people as well. It's very much part of our ethos, in the way we work with our publishers, to try to especially help the smaller ones that don't have the time to keep abreast of this kind of stuff. Thanks, Tom. Anybody else? I'll throw one in. I'll agree with Tom on the point of the platform effect, right? Getting onto common platforms, because you have shared investment: it's not up to each publisher to exhaust their resources building every tool and gadget, it's shared across the group, best practices emerge, and you can ride the coattails of the bigger players if you're a small publisher.
But where we're seeing small publishers and mid-size publishers struggle is in, you know, the market of agreements, the deal-making market, if you want to call it a market. They're suffering to get the attention, right? While we can wander in and talk about Project DEAL with, you know, the country of Germany, a small specialty society might find it tough to get an audience with that buyer. So I would say it's probably good to have mid-size and small publishers gather under an SSP tent, form committees around the issues that they're struggling with, or get involved with those committees, and be scrappy, right?
See if there are ways that you can collaborate and work together to construct group agreements, or just explore every possibility, right? That's really where I see the smaller publishers getting a little bit trampled right now: getting that time with the bigger buyers. Right, I agree. And, you know, from my perspective at DLA, I agree that it's the agreements and a common platform that they can operate under that will really help, especially the small and medium-sized publishers, and especially if most of their offering is OA, because we're really talking about a different use case. We're talking about AI machine learning rather than subscriptions.
And you potentially can monetize that, because it's a different use case: rather than a subscription for humans to read, it's for machines to mine. So getting on these platforms and experimenting is probably one of the best things that they can do. Okay, any other questions before we wrap it up? Okay. Well, I asked you all to pull out your phones at the beginning of this webinar, and now you have the QR codes at the bottom.
These are the contact QR codes for CCC, for KnowledgeWorks Global, for DataSeer, and for Straive. I would ask you to connect with them directly if you have any further questions. I want to thank the panelists for being on this showcase today, and of course all of you for your participation in today's showcase. This concludes the session today. Again, thank you all for being here with us, and have a great day.