Name:
Solving for OA/UX: The Powerful Potential in Improving User Experience (UX)
Description:
Solving for OA/UX: The Powerful Potential in Improving User Experience (UX)
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/5a330a31-f719-4873-a6e7-6aff8cb70aea/videoscrubberimages/Scrubber_21.jpg
Duration:
T01H00M33S
Embed URL:
https://stream.cadmore.media/player/5a330a31-f719-4873-a6e7-6aff8cb70aea
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/5a330a31-f719-4873-a6e7-6aff8cb70aea/session_4d__solving_for_oa_ux__the_powerful_potential_in_imp.mp4?sv=2019-02-02&sr=c&sig=JGDt6ghVlJiT8KTu9nI%2FxTrY6aCEwGymhJa8OMdF8uo%3D&st=2024-11-20T06%3A27%3A22Z&se=2024-11-20T08%3A32%3A22Z&sp=r
Upload Date:
2024-02-23T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
I just want to remind everyone of SSP's code of conduct here and the core values before we get started. Um, so welcome to our session, and just to introduce the people who we have here today. Right next to me is Jamie Carmichael, who is the Senior Director of Information and Content Solutions at Copyright Clearance Center. And then we have Jason Price, who is the Research and Scholarly Communication Director of the Statewide California Electronic Library Consortium.
And then we have David Haber, who is the Publishing Operations Director of the American Society for Microbiology. And we're going to talk today, well, they're going to talk today, about different aspects of the user experience problem in the open access publishing landscape. So the experience for authors, publishers, funders and institutions to enable open access and open science is improving, but it's still challenging and suboptimal in many areas.
So we've asked ourselves whether there's more that publishers can do for users. Can libraries, publishers and service providers work together to develop best practices for open access publication delivery systems? So some of these challenges predate the digital turn and open access. Um, but some of them are peculiar to the open access landscape.
Things like determining eligibility for funding, invoicing and payment are fractured, and data is still fragmented. There's no single source that covers all disciplines, uh, from which we can pull data to give us an institutional view of where our authors are publishing, how much of that publication is open access versus paywalled, or provide a view of trends over time.
We have to pull this information from various publisher dashboards, bibliometric databases or service providers. There's also right now no single source to handle payments, whether those are being made APC by APC or, um, in bulk or under a transformative agreement. Now, accurate aggregated data is critical to inform future decisions.
So the data fragmentation issue is a big one and also the process for eligibility, invoicing and payment. Um, in some cases, it just takes too long. In one case that I can personally speak to, it took me one month, 12 days and 34 emails between researchers, the library department, fiscal officers and the publisher to get the APC funding approved to get it allocated between authors from different institutions, do various exchange rate conversions and internal transfers and just get it paid for a single article.
In our presentation today. Um, we're doing some audience interaction, so we'd like to ask you to use your device to go to menti.com. And enter this code. I'll leave it up for a second. I still see phones going. So OK.
Looks like all of us are there. So if you could answer this question, what is your level of pain in managing open access?
OK, it looks like responses are slowing down. So based on your responses, which are wide-ranging survey data, um, it looks like most people have at least moderate pain in managing open access, and then on either side, very severe to mild, sort of the next bracket, and I'm pleased that some people have no pain. I'd like to know who you are.
Please raise your hands. You can't be anonymous anymore. Um, so another question for you. In your opinion, which stakeholder experiences the most pain in open access, publication or management?
OK, so clear winner there. We think that the authors have the most pain. Well, that's a great segue to our first speaker, Jason, who will talk about a few things. But Jason, if you could tell us from the author perspective, how has the transition to open access, and perhaps also the transition to digital publishing, changed the author's experience of submitting a paper to their journal?
What's, what's their pain? So at the consortium, we're pretty far removed from authors, even further than the institutions, who have their own sets of authors, but I did spend about 10 years as a graduate student doing research and teaching at Indiana University in the Department of Biology, and I'm going to leverage that a little bit. Mostly what we get at the consortium is looking at the outcomes of these agreements and seeing weird numbers and thinking something's going on here, but we don't know what it is.
But I'll try to provide some insight from the author perspective. So one maybe obvious message, but it's really important to remember is that every additional decision is a pain point. My section we kind of coined from bad to worse because there are so many more decisions that authors have to be making and trying to make educated decisions with little background on what's going on.
And that's really tough. So anything that causes friction, waiting, or delays in approval or information is an issue. So, quick story. One of the most easy-going faculty in the department where I was a grad student, for a long time everybody loved to go talk to him, super patient, liked to think about ideas no matter how crazy, he would be there for you.
He was also famous for the chain of expletives that would come out of his office whenever he was submitting a paper, and I actually went back and visited 20 years later and the expletives happened again. I was visiting somebody else in his lab and was like, oh my gosh, I've been here before. And so, you know, it is really frustrating for many of them, and the delays are a big issue. So one thing that we've started to do is to reduce one of those delays, maybe not a decision point, but at least the delay, by automatically approving articles that get submitted and are found to match under the agreement, rather than saying, OK, your article's been submitted for approval of funding, and they have to wait and have to follow up.
They look at email, all those sorts of things, right? So that means that the metadata and the accuracy of identifying those authors is really important. What's the acceptable error level? Partly that's an administrative burden for libraries and consortia that we don't want to take on, but it really is also thinking about the authors, not wanting to put them in further limbo after their paper has been accepted, often trying to figure out what's going on.
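To make the auto-approval idea above concrete, here is a minimal, hypothetical sketch of that kind of matching logic; the field names, identifier values and agreement structure are illustrative assumptions, not any particular publisher's or consortium's system.

```python
# Hypothetical sketch of the auto-approval matching described above: if an
# accepted manuscript's corresponding-author affiliation ID matches an
# institution covered by the agreement, approve funding immediately instead
# of queueing it for manual review. All values below are made up.

from dataclasses import dataclass

@dataclass
class Manuscript:
    doi: str
    corresponding_affiliation_id: str  # e.g. a Ringgold or ROR ID captured at submission

# Institution IDs covered by the consortium agreement (illustrative values only).
AGREEMENT_INSTITUTIONS = {"ringgold:1234", "ror:00example"}

def route_funding_request(ms: Manuscript) -> str:
    """Return 'auto-approved' when the affiliation ID matches the agreement,
    otherwise flag the request for a human administrator to review."""
    if ms.corresponding_affiliation_id in AGREEMENT_INSTITUTIONS:
        return "auto-approved"   # author is notified right away, no waiting
    return "manual-review"       # mismatch or missing ID still needs a person

print(route_funding_request(Manuscript("10.0000/example", "ror:00example")))
```

The point of the sketch is simply that the decision hinges on accurate, persistent affiliation metadata; if that identifier is wrong or missing at submission, the request falls back to the slow manual path Jason describes.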
Um, so you know, a second pain point, and I really will focus on two, I guess, that are more specific, um, is really to publish open versus not to publish open. That is actually a pain point decision. You know, how do we know this? Well, even though none of our agreements require author payments, we see uptake ranging from 90% down to 25%. That level of variation, you know, why is that?
What's going on there? We're trying to figure that out. Is it risk that they or their institution might have to pay extra? We think that might be part of it. We have some agreements that ask authors if they have funding available to contribute to help offset the costs of that publishing. And it may be that they're being driven away by that.
But you know, it could also be just friction in general. Will it be more delay? Is there more I have to figure out? If they go down that path and they see part of it says, you know, you'll be notified after X time, they may bail back out and go back to the other option. So, um, at that decision point, it's a lot of trying to read and understand. What we're not doing necessarily at that decision point is providing a reason why they would want to publish open.
And I think remembering that, taking data that maybe even would be context specific, imagine at the journal level: in this journal, if it's hybrid, articles that are published open get this many more downloads or citations, that sort of thing. So that's a point that clearly is having an impact. And another one I'll bring out, but there are others down the line for sure, is which Creative Commons reuse rights license should I choose? Do I go CC BY, or the letter soup?
Right, NC and ND and NC-ND? What does that mean? Maybe we can consider avoiding the need to choose where possible, because obviously that's one less decision point they need to make. And certainly some societies and publishers have decided to go down that route. But then, you know, for the cases where choices do matter and we really want to provide that option.
How easy do we make it to understand in the interface? Is it a bunch of reading and trying to understand all these various things? Or if they say, OK, here's my license and I do a checkbox, how does that affect what license comes up, the visual pieces of it? Those sorts of things. That's one way to potentially make that decision easier. But they are, you know, having to have all of these steps along the way that may affect or delay their article.
They want to get back to the ideas. They want to get back to the research. And so just reducing those decisions and then when those decisions are necessary, making them as easy as possible is really key. Thank you, Jason. I realized I forgot to introduce myself when I was starting. Please forgive my bad manners.
My name is Willa Tavernier, and I'm the Research Impact and Open Scholarship Librarian at Indiana University. Um, but we are going to move on to our next speaker. David, in your view, what is the central problem and what is the pain from the publisher perspective? OK, so I'm going to tell a little story. And it's about this badge. OK, it says I'm David Haber.
I'm affiliated with the American Society for Microbiology, and it says I'm a production manager or content architect. OK, I've proofread this three times. This is a printed object. Now, when I log in to SSP, in my profile it says I am the Director of Publishing Operations. My name is David Haber. I'm associated with the American Society for Microbiology, and somehow I'm not a member.
I don't know. I think I'm a member. I'm pretty sure I'm a member. I'm a member through my affiliation. Now, if I go to the manuscript submission system, they know me as Marino. My email address is bob@roberts.com and I'm affiliated with Cadmus Communications, which is now... OK, I also paint my house a lot.
And I live near Sherwin Williams, and so I have a pro account there. And so to them I'm known as Dave Haber, and they have an institutional subscriber account to ASM, so maybe I get some benefit through that organization; because of a pro account, maybe I can subscribe to some cool ASM content, right? Meanwhile, to the CEO at my company, I'm known as Dan Farber, because when he was introduced to me, he confused me with someone else.
So the other day he was saying to me, hey, Danny boy, help, help me make publishing faster. Let this author experience be unified and quick and easy. And I'm thinking, I don't even know who I am. So how can I tell who Liz Beck is, or Michael Eisner, or Ron Mexico? I can't disambiguate any of these people at all. So if I can't tell who I am, how am I as a publisher going to tell who these authors are and set the experience for them to get them to publish quickly? So from my publishing perspective, that's like the core issue right there.
So that's my story. OK, so we have, sorry, we have another question for you to look at. So please answer this one. Rank the following elements of open access management in order of most difficult to least difficult. Go ahead, and if there's anyone who has just come in, the code is at the top of the slide.
I feel like we're playing that game with the hats and the cat under the hats. They're all moving.
The answers have hovered between 47 and 50, so we'll give this some time. It's like a five-way tie.
OK, responses seem to have slowed down. So the rankings. First, so most difficult, author awareness of funding eligibility. The second is modeling and measuring open access deals with reliable data. The third is author awareness of funding mandates. The fourth is scaling open access agreements, and the fifth is, well, my personal pain point,
billing and reconciliation of article processing charges. So thank you very much for those. We're going to move on to Jamie Carmichael, who's going to talk about a study that Copyright Clearance Center did, the state of metadata. And so Jamie's going to tell us what this study discovered about the pain level.
Jamie? Thank you. I will just start by saying that prior to Metadata: The Musical today, I thought this was a very creative, compelling look at metadata challenges. My perspective has changed. Kudos to Heather and the cast for pulling that off. That was incredible. This is still a very valuable resource. It's complimentary, and it echoes a lot of the themes that came out of that production this morning.
You can access it through the QR code, but based on where we sit as an intermediary between publishers and their customers to facilitate open access, we had a hunch about where the breakages and complexities are in the flow of metadata across the research life cycle. But we wanted to open that up and hear other perspectives and voices to understand really could we pinpoint where the problems are to be able to take baby steps toward solutions?
Right maybe this problem won't seem so overwhelming or insurmountable if we could chunk it out into very, very practical visualizations. So some of the themes that came out of that research and that are reflected in this map actually align with the very first question we asked, there was overwhelming consensus among the stakeholders who we interviewed that the researchers carry a huge administrative burden in having to not just assert metadata or persistent identifiers, but to then reassert it as they go through their process, because we are not doing a good job collectively of capturing that and persisting that.
And that goes way upstream prior to publication, even as early as Grant application. So that was a really big takeaway. Institutions, right, because metadata inconsistencies persist throughout the life cycle. Institutions have these laborious manual workarounds to reconcile funding, eligibility and billing. And as Willa pointed out earlier, to reconcile unstructured data across disparate publisher systems for comprehensive analysis to really understand where are they doing their publishing.
And publishers: the metadata breakages interfere with business transformation. Right, it contributes to high operational costs, and it complicates fulfillment of open access deal terms and analysis of deal performance to inform future decisions, which is one of the categories that we had asked you about, the level of pain you're feeling. And last but not least, let's talk about the impact on funders.
Missing metadata, so registered grant IDs or institutional affiliation, makes it very difficult for funders to track research outputs; difficult and costly, I would say. It presents potential barriers to open access uptake, problematic impact tracking and incomplete analysis to inform future investments. So this map is just a very granular look at these challenges to create a shared understanding of where things are breaking down.
So that together, individual stakeholder groups can make incremental changes to improve the overall state of things. This is a living project. We plan to update this as things improve. We want to share, you know, the progress that's happening across this realm, because this is not the only presentation at SSP that's covering this. It is a very big issue.
So now we want to talk a little bit about best practices. And I'm actually going to start with David, who I think spoke about an identification and disambiguation problem. Now we have a plethora of persistent identifiers out there for researchers, for projects, for research organizations, for articles, et cetera.
What role does the ecosystem of persistent identifiers play, and what are the best practices for the industry around that? And of course, Jamie and Jason, you can jump in on this question if you want. So I really think one of the issues with persistent identifiers is whenever you come up with one, someone else says, oh, we need a new standard and we need a new one.
And so what ends up happening is you have no ability to determine who owns, who controls, the metadata associated with a given identifier. So let's talk about authors, because that's simple. There's this thing ORCID, right? They're the church. They have the author data. The author controls their data, supposedly. And yet as a publisher, we get requests to change names, right?
Um, we get requests on an ad hoc basis to change names in certain articles for whatever reason, and we'll do it. Does that mean that it was changed in ORCID? No, maybe it was. Maybe it wasn't. But if we can establish as an industry that, hey, there's this thing called ORCID. It's the house of author information.
And if it changes there, it just cycles through the entire life cycle, whether it's grant applications, whether it's PubMed, downstream aggregators, whether it's publishing systems. Like, if that information can cycle through seamlessly, and authors know. Like me with SSP, right? Like I have multiple affiliations. I don't know, half the time, what they are, where they are. I'm a confused person.
I'm like anyone else. So like, I don't know what data is out there about me and who controls it. I know I certainly don't. But if we as an industry can say, OK, so we have this kind of identifier, maybe it's ORCID, maybe it's some ISNI thing, like who knows what it is. But if we can establish that this is the place where we go for this bit of information, we're fine.
We go to Fundref to get IDs about funders, we go to Crossref to get IDs about articles. So we have some establishment of certain identifiers. But when we're talking about individuals themselves, I think it becomes difficult and confused, and everyone's sort of running around on an ad hoc, one-off basis trying to sort through this.
Um, I think about persistence through the, through the actual process as well. So, you know, whether it's David, Dave, Danny boy, or was it Sherwin, the author's going to be careful about getting their name and their institution correct as they're putting their paper together. So why then can it be that somehow, when after acceptance they're indicating their institutional affiliation, which affects whether or not they're identified as eligible for funding, they come up with a completely different institution by mistake?
It's only, you know, 1%, 3%, whatever small percentage it is. But that shouldn't be happening. We already had that data and it was very likely right. And it's just not being passed along. And so that is, you know, a huge challenge for us that I think we need to solve because we know we've already got that data. And even if it's whatever it is to begin with, if it's wrong, it's going to get caught in the system.
But if you keep entering a new one that's not dependent on the other, (a) you add the opportunity for mistakes, and (b) maybe there is somebody who says, well, I really want to publish this open access, maybe I can figure out an institution that is close to mine, match it and get funded. Right? And we don't know that it's happened; that's probably not happening, or only rarely, but there's the concern about it.
So I think making sure that key pieces are passed on is so important. I just want to support what both of you are saying. We just worked with a publisher to analyze their configuration settings in their upstream submission and peer review systems. And we found about 7 or 8 different opportunities for them to just reset something, make a new value a default, to improve the open access experience.
Because think back to, you know, the variability across the journal settings at that level. Right, not all editors are as aware of open access implications as we all might be. And so, you know, having them make corresponding author, versus contact author or submitting agent, the default value helps trigger the right rules in open access platforms like RightsLink.
So that's a very important exercise. I was going to talk a little bit about that later, but it seemed relevant now. You really need to look at the way you're configuring your upstream systems, because metadata drives absolutely every aspect of open access automation. That definitely makes sense. I know when we were going onto one platform, we were required to provide the Ringgold ID for our institution.
And you know, when we went into Ringgold, there were so many in the hierarchy. Um, because Indiana University has a Ringgold, Indiana University Bloomington has a Ringgold, the Department of X, Y, Z has a Ringgold, and the Ruth Lilly Medical Library has a Ringgold. We want anybody from any of those different institutions, however they may put in their affiliation, to be caught in the system,
for the data to come back to us. So it would be very important, um, in a service provider or publisher system, to recognize that if somebody enters School of Education, Indiana University Bloomington, we want all of that to resolve back to Indiana University Bloomington. Um, but another question for the panel, and perhaps Jason can speak a little bit to this first: how do we solve the problem of information deficit, information asymmetry, across our institutions and with our researchers, um, lack of education?
What, what would be a best practice or best practices to address that deficit, reduce the pain, and maybe reduce the number of decisions that researchers have to make? Yeah, my take on this is maybe different from some others, but we do serve a lot of smaller institutions, faculty that don't publish often, but I think for any range of faculty authors, expecting to be able to educate them on what's going on in this crazy world that we can barely understand, and that they would take time for that, is a lot to ask.
And probably not ideal. I prefer, in the case of the information deficit, to basically simplify as much as we can, make it easier to understand, recognize that lots of words buries things and is hard to wade through, and we just have to simplify the interface and the process. Um, you know, the idea that education should be a part of this is hard to take, because I just don't think there is that time or that understanding of what's evolving and different in every place, how it works and how it doesn't work.
Some will learn from experience, others just need to be able to make those clear decisions as easily as possible along the way, which I know that's not really an answer. It's a cop out answer. Right it's complicated. And and if it's complicated, trying to put it into a few images or words makes it really hard. But we have to try to do better at that.
I mean, we are not close. And I think there's a lot of just stuff that's completely unclear. And it's clear to me why it's unclear, but we're not getting to the point where we're saying, OK, we need to go test this with authors, we need to do some usability studies. We need to watch them go through and understand where are they getting stuck, what are their questions?
And I don't know that that's happening as much as it should. And it has to happen more now because of all the additional decisions we're adding in. I think this is Jason. David? yes. Sorry no, it's OK. It's all right. What's your name again?
Yeah, precisely. Oh, OK. So sorry. I got it. No, this is me. So this is just an example of what my authors see. My paper's been accepted. Great, but there's so much text on this slide. This is that decision point where the author has to decide: do I publish OA or not?
OK, you've got two radio buttons there, but it's not even that clear between those two radio buttons that if they choose exclusive license to publish, are they choosing to publish it closed? That's not clear from this text. It's not clear who is paying. Somewhere buried in that long paragraph is no additional cost. Well, what does anybody think when they say no additional cost?
They think, well, somebody's paying for it. What's going on here? You know, is there a payment being made? What does that look like? Um, it just is an awful lot of text and an awful lot of things all at once, and not clear decisions. And to be fair, you know, I have one other example I think I'll get to show that is from the same publisher, and they've done a great job of the other side of it.
But this kind of approach is what authors are seeing in an email and they just want to get it done. And this is not helping them do that. So this next one is really just that reminder that we can make better kinds of images that provide data that say this is why you should publish open access. And having those out there on a white paper someplace where some authors might somehow see it is not the same as presenting it to them when they're making the decision.
And we ought to be able to do that and do that better. And I like the potential here to make that context sensitive because it's going to vary a lot from journal to journal publisher to publisher, all those sorts of things. Then that last piece here is what I think really is that best practice example that I mentioned earlier. So at the top, you can see that basically they're there. They know it's Creative Commons.
They're making this choice. If they choose a license, they can then check the no remixing and these other options, and as they check those various options, they then see below what the actual license they're choosing is. Rather than having to understand what the license is and choose backwards, they're actually saying, here's what the function is, and because they're checking those different checkboxes, it then gives them, OK, this is the license you're choosing.
And if they want to find more information, those are the links that are cut off on the bottom; they can go and read more. Certainly, you know, an occasional, you know, 5%, 1%, whatever it is, will want to dig in and understand. But ultimately, this is the kind of interface that we want to be pursuing, I think, for making it easier for authors to make this decision. And I think that it would be fairly easy to standardize this across publishers, because Creative Commons has a license chooser, a plain-language, question-by-question tool.
I employed it in a class with undergraduates where we were making an open educational resource. It's very easy to understand the way they've done it, and I think it should be fairly easy for any publisher to embed it in their submission system. And it would be, you know, across the board, people would see the same thing, and I think they would get accustomed to it. So thanks for that.
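As a rough illustration of the question-by-question approach just described, here is a minimal sketch of checkbox-to-license logic in the spirit of the Creative Commons chooser; the function name and any form wiring are hypothetical, and only the CC BY 4.0 license family naming follows the actual licenses.

```python
# Minimal sketch: derive the CC license from two plain-language choices,
# the way the Creative Commons chooser asks them, so the author sees the
# resulting license update live instead of decoding the "letter soup" first.

def cc_license(allow_commercial: bool, allow_adaptations: str) -> str:
    """allow_adaptations is 'yes', 'share-alike', or 'no'."""
    parts = ["BY"]                       # attribution is part of every CC BY-family license
    if not allow_commercial:
        parts.append("NC")               # NonCommercial
    if allow_adaptations == "no":
        parts.append("ND")               # NoDerivatives
    elif allow_adaptations == "share-alike":
        parts.append("SA")               # ShareAlike
    return "CC " + "-".join(parts) + " 4.0"

# As the author toggles options, the interface can show the result immediately:
print(cc_license(allow_commercial=True, allow_adaptations="yes"))          # CC BY 4.0
print(cc_license(allow_commercial=False, allow_adaptations="no"))          # CC BY-NC-ND 4.0
print(cc_license(allow_commercial=False, allow_adaptations="share-alike")) # CC BY-NC-SA 4.0
```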
Um, best practices for publishers? Sure, so I just wanted to share some insights that were crowdsourced from our customer success team at CCC as well as our publisher partners. Um, we use these pillars in our publisher onboarding for RightsLink for Scientific Communications. But really these are platform agnostic and just good practices for anybody with gold OA programs or thinking about going in that direction.
So the very first one: prepare your teams. Teams that are not aligned are going to slow your progress in meeting customer needs or complying with funder mandates. So whether it's defining your business models or setting your deal parameters with the sales teams, adopting technology, communicating early and often to get buy-in and to reinforce why you're doing what you're doing is really mission critical.
I'd also recommend revisiting your current submission and production workflows as part of an end-to-end mapping of your author experience to see what changes your teams might need to make. I know that publishers are actively working on this, and I think it's a step in the right direction. Give yourselves time to implement these changes, and just be prepared that you may encounter some resistance to policy change or change of behavior.
Right? But challenging those things is OK and very necessary to achieve a better user experience, particularly for your authors and institutions. The second one: evaluate and enrich manuscript metadata. So if scalability of open access is a priority to you, I mentioned this before, you have to know that metadata and persistent identifiers drive automation.
Your ability to scale hinges on the quality of that data from spawning the appropriate workflow to showing the proper pricing or discounts or matching the manuscript to eligible funding options. Right we've talked about this a little bit, but this one is huge. And it goes beyond the user experience. There are serious financial and reputational risks if your deals are underperforming due to poor quality metadata.
So I really can't stress that one enough. You might use two or more submission systems; you know, we've seen that before. Compare and contrast your configuration settings, this is what I was saying before, so that you build a uniform, scalable author experience. Number three: define and operationalize your agreements and workflows. We recommend just taking a step back in order to automate your deals, right?
In order to automate and simplify operationalizing your deals. Take a step back and see how complicated or complex your agreements are becoming. And I'm not suggesting that you know, you come up with a cookie cutter approach. I think the nuance is important in nurturing the individual relationships you have with your customers, but document your overall program and your deal specific requirements to see what's necessary and what's just extra complicated and really not worth the time because it can be quite a headache when you're trying to operationalize that.
Optimize the author experience. OK, so this one is very obvious, but as Jason said, authors are not experts in open access publishing and that's OK. We don't want them to have to make decisions that they don't want to be thinking about. Remove them from the funding process, but promote that funding has occurred based on your agreement with their institutions.
This is very possible with technology. We have had this kind of workflow for three years now and it's a win-win. It's a win for the authors. We just notify them that, you know, you've been approved for funding and you don't have to lift a finger. Congratulations go work on your next project.
Also, just realize that, you know, whatever system you use, whether it's homegrown or a third-party open access platform, it's just one part of your author's journey. I sound like I'm repeating myself here again, but reviewing messaging from marketing through submission and acceptance is just critical to a satisfied author at the end of the day. Onboard your institutions and funders. We do have funders actually interacting with the system
and approving requests. We have hierarchy logic where, if the funder has agreed to cover open access charges under the terms of the grant, you know, why try to get that money from the institution? Right, let's prioritize the funder first. And so, you know, I think there are good signs that the technology is progressing in the right direction to meet the needs of the different stakeholders.
And give self-service tools wherever possible. So Jason mentioned that, you know, he doesn't really want to review every funding request. He just wants to set that to autopilot because he trusts what's coming through from the very first few cases. And that's something that we strongly encourage because not only can you automate the experience for the author, you can also automate the experience for the institution.
And finally, use your data. Right, your data is going to tell you a very important story to inform your future business decisions. Assess it to understand what's working, what's not working, where should I be changing my deals, where should I be simplifying things. Shared platforms like RightsLink are built on the notion that data is secure and standardized for a unified institutional experience.
We're working with more than 35 publishers now to create a consistent experience for the institutions and authors using the platform. But that's, you know, that's still a subset of the market. Increasingly, we're integrating with other third-party systems like the OA Switchboard or Oable, so that institutions can access their data wherever they prefer to do their business, right? They don't have to come to our destination.
We recognize this and we really want to build a better network to help the whole operation run more smoothly. So those are some insights from doing this for about a decade now. So, Jamie, the last thing you spoke about was integrating with other systems. Do any of the panelists have thoughts about best practices surrounding interoperability between the systems of all the players in the ecosystem, including library or University run repositories?
Have you given any thought to that aspect? Well, these guys are thinking. I'll just say again that metadata is key to system interoperability. Right you have to be able to pass standardized values back and forth in order to make that, you know, that experience seamless, depending on, you know, depending on the nature of the relationship between the two systems.
So I don't know if you want to add anything. Yeah, sure. So metadata, right. What's fascinating to me about this is, um, the way we've constructed the entire workflow. The workflow process is such that, uh, the same pieces of metadata are coming in at different points, and sometimes they conflict, and sometimes it's just what Jason was saying earlier, right?
And so if you can establish, at this point in the workflow, this is where we're going to trust this piece of data, um, and that's going to inform the funding agreement here, then, you know, it sets up your entire production process, it sets up your entire agreement process, and it sets up how you're going to approach the data you have. Because, and I'll speak out of turn:
our data is awful. And the reason is because we have so many systems talking that were all founded for different purposes. And because of that, we cannot get a good handle on who is who, what is what, when an editor is an editor and when they're an author and what that means and how they're different, and if this person is affiliated with this institution and we have an agreement with this library to do this thing, whether it's preprint repositories or whatever. Like, it's really difficult, because we have not established well where we're going to get this piece of information, and because we got it at this point, it means it's primary. And it's the way the industry has grown and built; we just sort of topple things on top of each other, and you have this house of cards and it's always going to fall.
Um, and you know, that's OK. You can build it again, but it just makes it hard. Um, but sorry, I was struck, actually, that maybe it's a little less OK, because more of the metadata problems in the past were maybe about discovery. This is about cost and payment. And so this is a time when that metadata becomes more important.
I mean, better for it to topple than never get fixed, maybe. Right, but yeah, I can hear that. So I was thinking about interoperability earlier today: we manage agreements that have 50 or 60 institutions on them each, and we have them from five different publishers. One publisher uses Ringgold, another publisher uses ROR, um, so some interoperability question there. And being able to compare those, or being able to know what's going on as we're looking at it, is helpful, because you get an email domain or you get whatever institution name is in their dropdown, you get those kinds of things that you want to be able to edit and update.
But ultimately we need to say, yes, these are the institutions that are participating, and we need to be able to look at that. And so my experience with ROR has been that, oh, this is an open infrastructure system, I can go out there and figure that out and make sure that they've got those IDs right. Whereas with Ringgold, as a library consortium that doesn't have a Ringgold account, I can get access sort of temporarily this way, but it's hard for me to sort through that system and figure out what's going on.
So, you know, the interoperability is important, but also just being able to have some better sense of what's going on. There are a lot of pieces along the way in the system. And if I knew, though, for all of my institutions, that these were the appropriate range of names and IDs that fit, because sometimes you need multiple for our institutions, then that would be really helpful.
Instead, I'm having to sort of put them together for each different agreement, because it's a different set of institutions here and there, and I just never know if it's right, and I know we're missing some, and that might mean a higher payment for that institution. And whether those articles are ever identified as associated with them could be broken, and we could not know that. Fixing that seems really important. Yeah, just to add, I think understanding the relationships between all those identities is really important.
I think that's part of the value of the Ringgold data set, which is, you know, we're up over 650,000 IDs now. There is an open layer to Ringgold; I'm not sure how well known that is. You can map to ROR top-level IDs through the ISNI, which is an ISO standard, so that does exist. It's actually powering some of the eligibility messaging through the OA Switchboard.
Right? So Ringgold is behind the scenes; they take in RORs and they're able to do the match through ISNI. So I agree that it might be OK to have a multiplicity of IDs as long as they are interoperable. Right, right. And so the publishers we work with aren't getting us that; they're saying, we use this one, here's your list and here's your IDs, and well, is this right, is this wrong?
I don't know. Having them be able to come to us with that and say, well, here's the overlap and here's what we're looking at and here's how we can make sure this is right, that would be helpful, because we're, you know, tangential enough to this that it's hard to know what's going on. But knowing that is important.
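To illustrate the ISNI-based matching Jamie describes, here is a small, illustrative sketch of joining a Ringgold-style list and a ROR-style list on a shared ISNI; the records and identifier values are made up for the example and are not real registry data.

```python
# Illustrative crosswalk: if a Ringgold record and a ROR record carry the same
# ISNI, treat them as the same organization, so a consortium can reconcile
# publisher institution lists that use different ID schemes. Example data only.

ringgold_records = [
    {"ringgold": "1234", "isni": "0000 0001 2345 6789", "name": "Example University"},
    {"ringgold": "5678", "isni": "0000 0004 9876 5432", "name": "Sample College"},
]

ror_records = [
    {"ror": "https://ror.org/00example", "isni": "0000 0001 2345 6789"},
]

def crosswalk(ringgold_recs, ror_recs):
    """Join the two lists on ISNI; report Ringgold entries with no ROR match."""
    ror_by_isni = {r["isni"]: r["ror"] for r in ror_recs}
    matched, unmatched = [], []
    for rec in ringgold_recs:
        ror_id = ror_by_isni.get(rec["isni"])
        if ror_id:
            matched.append({**rec, "ror": ror_id})
        else:
            unmatched.append(rec)  # needs review before the agreement list is final
    return matched, unmatched

matched, unmatched = crosswalk(ringgold_records, ror_records)
print(matched)
print([r["name"] for r in unmatched])
```

The unmatched list is the part Jason is asking publishers to surface: the overlap, plus the entries that still need a human to confirm before billing and eligibility depend on them.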
Well, I want to tack on to this, because this is interesting to me. So from a publisher perspective, that affiliation data is published data, right? And published data is defined by various weird style rules you may have from the halcyon print days of the past. Right, so like, you know, a copy editor's job: they see Ohio State, they have to put "The" in front of it or else they get slapped on the wrist.
Right, and that informed how those institutions now have identifiers. Right, and we don't necessarily capture those ROR, Ringgold, whatever identifiers you want at submission. Like, we try, but it's all, you know, the author types something. It sort of matches sometimes. But then we default to what's in the manuscript, right? Because that is what's submitted.
So really what it comes down to is, this copyediting style and these print styles are defining how these affiliations are, like, codified, and then on the back end these technologists have to come in and try to match stuff. Right? And that seems to me bonkers. So in my mind, the easy solution is figuring out a way to capture affiliation data in such a way where you're not worrying about format.
This institution has defined itself by this ID, and this is what it's called. And who cares if there's a comma here, or this is capitalized, or it's missing a word, because this is how the identifier has defined what that's called. So therefore you remove these kinds of old, ancient print style guide weirdnesses that come out when you publish things, and then you can push it down earlier into submission, and then you solve the problem of
the manuscript file versus whatever some crazy person entered in the submission system. That makes sense. Do we have any questions from our audience? Yes, please. If you can use the mic, that would be helpful. Sort of commenting on the discussion about the crazy ways that organization names can appear and whether or not there's a standard you could use that would be automated:
I assume you are asking for ORCID for your authors as they're submitting their paper, and ORCID is using ROR now. So if you can use an API to query that ORCID, it should pull in the ROR for their institution, and that might help publishers deal with this very problem. Yes, OK. But here's the problem. Yeah, here's the but, and it's a big but, right?
Submission systems a lot of them are based on person or profile, right? Like SSP. So it got my role wrong. And so if I put an ORCID ID in a submission system, but I'm using the manuscript file as the source of all affiliation data, that doesn't necessarily mean I have an ORCID ID associated with that author unless we have excellent production vendors who do that kind of matching for you.
But still, that's a problem and that causes delay and there's going to be mismatch, right? Because the ORCID ID is authenticated at submission. Great spectacular. But any sort of ORCID ID that's inserted in production may not be authenticated. So you can't rely on that kind of data that way. Now, you know, technology companies will come along and say, oh, yeah, don't worry about it, we got you covered.
But that's not how it works. And it's a mismatch between what's being submitted and what's being captured at the submission metadata stage. And it points directly to what we were talking about earlier. In my experience across the agreements, across the publishers, the degree to which ORCID is required varies. Some say you absolutely must.
Some say you're encouraged; some, I don't know to what extent it's really pushed. Furthermore, we get reports of all of the articles published across, you know, our many institutions. Those reports are 5 or 600 articles a year, and an ORCID ID is nowhere to be found in those reports. So we can't find that on the back end. We can't necessarily look and say, oh, is this ORCID ID mismatched with the institution? Do we find it some other way? So though that might help on the front end, if it doesn't come out that way on the back end, then it's hard for us to figure that out.
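For readers curious what the audience member's ORCID-to-ROR suggestion could look like in practice, here is a rough sketch of querying ORCID's public API for an author's employment records and pulling the disambiguated organization identifier; the JSON parsing follows the v3.0 public API layout as best understood here and may need adjusting, and the commented-out iD is ORCID's long-standing sample record.

```python
# Rough sketch: ask ORCID's public API for an author's employments and collect
# the disambiguated organization identifiers (which, per the discussion above,
# increasingly come from ROR). Response parsing may need adjusting.

import requests

def affiliation_ids(orcid_id: str):
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/employments"
    resp = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
    resp.raise_for_status()
    results = []
    for group in resp.json().get("affiliation-group", []):
        for summary in group.get("summaries", []):
            org = summary["employment-summary"]["organization"]
            disamb = org.get("disambiguated-organization") or {}
            results.append({
                "name": org.get("name"),
                "id": disamb.get("disambiguated-organization-identifier"),
                "source": disamb.get("disambiguation-source"),  # e.g. ROR, RINGGOLD
            })
    return results

# Example with ORCID's sample record (requires network access):
# print(affiliation_ids("0000-0002-1825-0097"))
```

As the panel notes, this only helps if the ORCID iD is actually collected and authenticated at submission and then carried through to the reports that consortia receive.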
Do we find it some other way? So though that might help on the front end, if it doesn't come out that way on the back end, then it's hard for us to figure that out. So I've heard a lot of ways in which we can use the new technology to improve these systems, right? So I keep coming back to thinking, what if instead of relying on input from the author, we do some machine reading of the actual manuscript and then pull data from there and ask them to verify rather than input?
And what are your thoughts on that? That technology exists. I mean, I can tell you definitively, in ScholarOne and Editorial Manager, they are doing that when the data is available and formatted correctly, because different journals have different policies. We did a deep dive into this in the last month. Um, so that exists.
But I just want to reinforce that it's very important for this validation to happen during submission. We've been asked if we can implement disambiguation at the point of acceptance, which is traditionally where the platform comes into the picture. But that's too late, because now the tail is wagging the dog and we're going to have source data from the upstream systems out of sync with the downstream systems, because like I said before, we're integrating with the OA Switchboard, we're pumping data out, and you don't want to get into that mess.
So we are working diligently with our submission system partners, right? That's how we trigger the rules in an open access system like RightsLink. We work with Korea docs, we work with EM, we work with ScholarOne, Scholastica, HighWire. We work with pretty much everybody, and we're trying to help them understand that they are at a critical point in the open access workflow.
The systems were designed for different use cases years ago, but this has to be a priority for them. So we are advocating for that. We encourage you all to advocate for clearer author messaging. Everything is very configurable and customizable. You don't want to overdo it with, you know, a full paragraph of directive. But it's super important to do that mapping. And you know, I'm inspired and encouraged by this conversation today, and I want to be able to, you know, pump out some best practice tips on re-evaluating your configurations in those upstream systems for the purposes of transforming the industry toward more open access.
I think we have to do something like that. Any other questions? We have some more time. I'll ask one, Willa, and I'm a ringer from SSC, so just self-disclosing that. So, Jason, I wanted to go back to something that you had said a little bit ago around, um, you know, educating the author is not the solution, and that there are solutions, but, you know, everybody's working sort of in a silo on their own solution.
So the question really is, from your perspective, how do we open up the door and really encourage cross-stakeholder collaboration? And, you know, going further towards that, you know, you mentioned something about author testing, and it's like, did you read that message? No? OK, maybe we need to put it in blue. Um, so as a service provider, that is very interesting to me. So how do we open that door?
How do we make it inclusive? How do we really say, we really want your feedback, and it's not just performative, like we actually are going to use it? Well, there seems to be general agreement from the audience, at least today, that the author pain point is the place where we need to look. So it seems to me that if you want to bring in cross-stakeholder conversations to work on this, start with good data that says, here's what we're learning and seeing, by investing in those usability studies; that's more than just asking questions.
It has to be about watching this process and getting a sense of what's going on. I can say that it is one of our biggest dilemmas, for these multi-million dollar read and publish agreements that we're trying to support, promote, and show what the value is from them, that we don't know what is going on in the authors' minds, and we don't know whether it's friction, misunderstanding or an actual position choice that is affecting what's happening.
And so we seem to be dancing around that, when there is a more or less time-tested way to figure that out. So I would argue that if there was some effort that could be brought in where basically, instead of us educating the authors, the authors educate us, by actually watching what they're doing and seeing where it goes wrong and asking those questions, then, you know, if the need is to streamline and to fix it, and we have data to work from to do that, then we want to work together and say, oh, yes, I understand that, I'm in support of that.
Oh, that's what's going on there. So the ability to really understand this: it's opaque to so many of us that are trying to serve these authors. And we can't really ask them, not one by one or with surveys; we're just not going to do it that way. So I think it would pull together stakeholders if we could basically get a good study, design that study together, maybe, about what are we learning, what do we need to know, and what are we figuring out
that ought to drive the improvements that we're talking about making. Um, as an administrator, for example, just today I got an opt-out report from one of our deals showing that about 9% of our authors are opting out. Um, but I don't have any information on why. So the question then becomes, do I reach out to each of these authors and ask them why?
Or is there a better way? From the publisher system side, is there a way to tell at least at which point they are, you know, deciding not to go that route? I have a thought, unless you wanted to. I have a hunch that it might be because they are not aware that they are eligible for funding when they're submitting their paper.
The other crusade that we're on is to move that messaging way upstream, whether it's, you know, in an author portal or at submission; at acceptance is too late. If they're making their license choices at submission, they need to understand if, you know, there's funding available for them, or am I going to have to pay out of pocket, which just isn't feasible for some authors.
So, you know, I don't want to speculate, but that could be it. And so that's another thing we're working on with our submission systems, to pull that data up as early as possible to help authors make that decision, if that's what they choose. Thank you all. Thank you all for sitting with us and engaging with us through this discussion, and a big thank you to our presenters.