Name:
NISO Update 2 Recording
Description:
NISO Update 2 Recording
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/f61aa536-9bdc-4c4c-ab88-85bb590640c6/videoscrubberimages/Scrubber_1.jpg
Duration:
T01H10M39S
Embed URL:
https://stream.cadmore.media/player/f61aa536-9bdc-4c4c-ab88-85bb590640c6
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/f61aa536-9bdc-4c4c-ab88-85bb590640c6/NISO Update 2.mp4?sv=2019-02-02&sr=c&sig=jGyJkAad0JGqcU1HBmHAnj%2Ff5rXLxhuh7EwoINL3JAk%3D&st=2024-11-23T09%3A13%3A54Z&se=2024-11-23T11%3A18%3A54Z&sp=r
Upload Date:
2024-03-06T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Hello, everybody. I'm Nettie Lagace. I'm the Associate Executive Director at NISO. And welcome to the second of two NISO update sessions here at NISO Plus. The NISO update sessions are an opportunity for people who are working on current NISO projects to tell you a little bit about their project: what they're doing, what the status is, what's next, et cetera. I like to think of these sessions as sort of a cornucopia of NISO work.
As you may know, NISO covers all kinds of different areas in information interchange, and I think that's one of the strengths of our program: we've got so many different things that might appeal to different audiences. That's to say these projects might have something to do with each other or might have nothing to do with each other, but that's part of the plethora of our solutions.
So I'm really pleased to have our speakers. These are all co-chairs of working groups or standing committee members for different projects. And the way that we've set this up, at least the way we did for the one on Tuesday, which worked pretty well, was we went through the different sessions alphabetically by project name, saved questions until the end, and then opened the questions up to everyone.
I can also set up breakout rooms. We did that last year at NISO Plus, and that was also a nice way to get direct interactions with the different projects. But I guess we'll see how we are doing on time and decide at that point. I have told all of the speakers that the talks are to be 10 to 13 minutes, and I'm going to jump in at 11 minutes to say you've got 2 minutes left, and then at 13 minutes, sadly, I'm not sure how I will do it, but I will cut people off if I need to.
But I don't like doing that. And I'm not even sure I can. Well, I'll figure out how to do it if need be. So, very interested: if you have questions, please put them in the chat. There is also a document. Let me just give you the link to the document and put that in the chat.
Oops. That's just a place for you to stow any, I don't know, things you want to share, or questions or resources that you want to distribute. It's just a place for us to share notes. So without further ado, I'm very happy that we've got Bill, Alan, Tony, Bobby and Linda here to talk about their projects, and we can start.
Bill, are you ready to go first off the bat? I'm ready. All right. So Bill Kasdorf is here to talk about a relatively new project, with output coming soon, called Content Profile/Linked Document. So, Bill, the floor is yours. All ready?
Let's see here. OK, are you seeing my screen? Yes, great. I've got to get my Zoom hardware out of the way here. Yeah, this has been a really interesting project, and we've had a terrific working group. I'll give you the names of all the people at the end here.
But oops, I am. You're not seeing it cut off at the top, are you? Good, because it is on my screen, because, I know, I think you might have your Zoom control at the top maybe. Yeah, yeah. OK, so, you know, what really prompted this initiative is the recognition that the way we consume scholarly content isn't just text and isn't just articles.
Of course, it still is largely articles and largely text. But increasingly what we really need are text, images, data, metadata, media, you know, basically all kinds of stuff. And we don't necessarily consume that all as documents. Sometimes it's just granular parts of a resource. And another aspect of the current ecosystem is that we pretty much use web technologies for almost everything we do.
So basically this new standard is called Content Profile/Linked Document, CP/LD for short. And what it defines is a flexible, extensible format, based on standards, that enables combining content, data and semantics. It's designed to be a machine-readable, self-describing markup, and it can be used to exchange data and content between systems and APIs and services.
So as I think I alluded to a minute ago, it's really built entirely on web standards. And the reason for that is that these are standards that we all use every day: researchers, authors, publishers, hosting systems, service providers, all of them. The technologies and specifications and standards on which CP/LD is built are all web standards. So the basic document and its structure is in HTML.
It's rendered with CSS. JSON-LD is used for metadata, context and narrative semantics. So for example, in HTML you've got a structural element called a section, but its narrative semantics are: this is an abstract, or this is a conclusion. The abstract may just be a paragraph, but the semantics say this is a conclusion, this is a hypothesis. So JSON-LD is used for putting that kind of semantic intelligence into a linked document.
Schema.org is used for content semantics, what my friends at Access Innovations, and not only them, call the "aboutness" of the stuff. Web Annotations is another standard from the W3C. It's not commonly used, but increasingly it is, and it's extremely useful. It's been around for quite a while. It's used to reference arbitrary locations within the content.
And then all of this can be packaged up in a package specified by the W3C Publication Manifest. That's quite a new specification from the W3C; it's probably not quite a year old at this point, so it came along at a good time for this initiative. So here's what a linked document looks like. Sorry for the blurry graphic; I was frantically trying to redo it and ran out of time before the session started.
If I had been the second speaker, I could have had this looking better for you. But anyway, you've got the content, you've got the structure; all of that is tagged in HTML. And then you've got the narrative semantics that I was describing a minute ago and the data semantics, and that is RDF expressed as JSON-LD.
So the title in HTML is an h1; the narrative semantics say it's a title. The author's name may be just a paragraph in HTML, but the narrative semantics define it as an author, and you've got some data semantics providing the ORCID that more specifically identifies that author. Similarly, the abstract is an h2 in a section. In other words, that establishes a section within this h1 section, and that section has the heading and a paragraph, all of which is a section.
And the narrative semantics say that's an abstract. The data semantics might be that this is about proteins. And the introduction similarly. So what you're looking at there is the basic structure of a linked document.
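To make the structure just described a bit more concrete, here is a minimal sketch, in Python, of a linked-document-style fragment: plain HTML carries the structure, and an embedded JSON-LD block carries the narrative and data semantics. The property names and values are illustrative stand-ins drawn from schema.org, not the CP/LD specification's actual vocabulary.

```python
import json

# Hypothetical narrative and data semantics for a minimal "linked document".
# The terms here are illustrative schema.org stand-ins, not CP/LD's own names.
metadata = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "name": "Example Article Title",
    "author": {
        "@type": "Person",
        "name": "A. Researcher",
        # placeholder ORCID, identifying the author more specifically
        "identifier": "https://orcid.org/0000-0000-0000-0000",
    },
    "about": "proteins",  # data semantics: what the abstract is about
}

# Plain HTML carries the structure; the embedded JSON-LD carries the semantics.
html = f"""<article>
  <h1>{metadata['name']}</h1>
  <p class="author">{metadata['author']['name']}</p>
  <section id="abstract">
    <h2>Abstract</h2>
    <p>One-paragraph abstract goes here.</p>
  </section>
  <script type="application/ld+json">
{json.dumps(metadata, indent=2)}
  </script>
</article>"""

print(html)
```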
A content profile is then a way to specify what a linked document for a particular purpose is. And it can be highly specific. It could be a textbook, it could be an exercise within a textbook, it could be a description of a research project; it could be anything. The content profile basically specifies how the basic bones of a linked document can get assembled to accomplish a particular purpose. And so I bet almost everybody on this call is thinking, wait a minute, we've already got JATS.
And that's just fundamental to scholarly publishing. What are you doing creating a different format? Well, we're not creating a different format. JATS is the Journal Article Tag Suite, and it is a very powerful, very rich vocabulary and schema for describing journal article content. And it's not designed to be consumed directly.
It's always rendered into a different form. So when you're looking at a journal article online, you're either looking at a PDF or you're looking at HTML. You're not actually looking at the JATS itself. The JATS itself is for machine processing, and it's extremely useful, which is why it's the lingua franca of scholarly publishing. That's not going to go away. The point is that the JATS article XML can be converted to CP/LD along with other stuff that you might want to associate with it.
So what that means is that now you can package up content, any arbitrary chunks of content, plus the data and the semantics, all of that wrapped up in a Publication Manifest package, and the recipients will be able to consume that as a unit. So that's basically where we're at, we think. And it has been piloted with a couple of publishers.
So we know this works. It's not finalized yet. But as Nettie mentioned, it's getting very close to final. And here's the group of people that put this together in the working group. That's me, if you want to contact me. But I'm going to go back and leave the roster on the screen, because you'll see that we got a really good representation of people from various sides of the scholarly publishing ecosystem to participate in it.
So we had some big guns on this working group. That's right, I'll take questions at the end. Yeah, just an aside: we are finalizing this for availability in a public comment period, we hope in the next couple of weeks or very soon. So it will be something that we hope people will send in comments on, which the working group can then consider and respond to.
Thanks for adding that, Nettie. I wasn't sure whether to stick my neck out about how soon that was going to come out. I mean, it's complicated, but we're in the final stages and we hope it will be out soon. I can see it around the corner; I just don't know where that corner is going to pop up.
Exactly. Yeah, it's pretty complicated, but it's not a huge, long thing. So I would encourage folks on this call to give it a look and give us some feedback when it comes out. Thanks very much. That's it for me. Thank you. Thank you, Bill.
So we will have time for questions at the end. Bill, you did great, great timing there. So next up is Alan Jones, who is going to talk about the IS-CDL project, Interoperable System of Controlled Digital Lending. So, Alan. Yep, thank you.
Let me just do this. What are you guys seeing? We see what you think we ought to see. OK, excellent. Hi, everybody. My name is Alan Jones. I'm from The New School, and I co-chair the, I'm sorry,
Interoperability of Systems for Controlled Digital Lending working group. Just to give you a little bit of a sense of the way that the group is actually taking the concept of controlled digital lending: we're not just thinking in terms of systems, but in terms of the practice of controlled digital lending, potentially between two or three different systems. One of them may be a digital repository.
The other may be an inventory management system, and the third may actually be a broker type of system that manages those requests and supplies from different libraries as well as different patrons. I just want to give a shout-out to the group. It's a fairly large group, but these folks have basically been working at this for over a year now.
And we feel that we're making a lot of progress in terms of being able to put out a reference framework that we believe we can build some conformance around: best practices, as well as recommendations for different types of interoperability, so that CDL systems, as they come to market, will be able to work with not just newer library service platforms but also older ILS platforms.
So I do want to talk a little bit about the four different models that we've actually seen in the market as the market has started to develop. One of them is the traditional course reserves model, where you're physically taking things off the shelves and sequestering them. There is really no inventory management per se; inasmuch as that process is manual, it's not technically managed.
A second model tightly integrates the inventory management system, what you might call your ILS or your library service platform, and is really focused around much more institutional circulation. So think about this in terms of a particular institution that has a collection and wants to generally circulate these CDL objects, or digital objects, to their patrons instead of their physical books. This is that model.
There is a third model that's really focused around interlibrary lending, and there are two different types of interlibrary lending models that seem to be coming to market. One, the third, is where there's a shared infrastructure, and the fourth is where all of these different institutional CDL systems that I've been talking about are actually talking to each other, processing lending and supply requests.
So what we've seen is that there's a pretty common workflow between most of these controlled digital lending systems. There are many discovery issues that have to get addressed. Certainly access controls, and the management of this owned-to-loaned concept, the fact that you are only circulating the number of digital copies that you have physical copies of within your collection; how that's actually controlled is a really important aspect of controlled digital lending.
There's the repository piece, in terms of the file management and the storage of these digital assets, as well as some of the authentication and authorization problems. You can see how in the first or second model, which is much more institutionally based, you don't have these types of authentication problems, because you're all using the same single sign-on apparatus.
However, when you start exchanging digital objects with other institutions and other universities or other libraries, that becomes a much more mixed bag in terms of trying to figure out what's the best way to make sure that someone at another institution actually has access to something within my collection. Other issues that we've seen: there are some solutions that are much more mobile-based, and others that are much more browser-based.
And one of the things, particularly within interlibrary lending: if you have a mobile-based solution, you don't necessarily want to ask for a browser URL for that controlled digital lending object; you probably want it to work within your ecosystem. So how do you actually let the supplier know what it is you have, so that you can more seamlessly interoperate? And then there's the loan management: the actual checking in and checking out of these types of items, and releasing the digital object, or releasing the physical object when the digital loan expires.
So with these types of things, you can see that there are many different places along this workflow, whether it's checking the validity of the patron, checking the availability of the physical item, checking out the actual item, checking it back in and resolving it. So there are any number of different places within this workflow where an ILS solution, or even a CDL digital repository solution, can be querying or talking to these inventory management systems.
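One concrete piece of that workflow is the owned-to-loaned check mentioned above. Here is a minimal sketch, in Python, of the rule an inventory or loan management component might enforce; the record fields and function are hypothetical, not drawn from the draft framework.

```python
from dataclasses import dataclass

@dataclass
class Holding:
    # Hypothetical holdings record for one title at one library.
    owned_physical_copies: int   # physical items sequestered for CDL
    active_digital_loans: int    # digital copies currently checked out

def can_lend_digital(holding: Holding) -> bool:
    """Enforce the owned-to-loaned ratio: never circulate more digital
    copies than physical copies owned and sequestered."""
    return holding.active_digital_loans < holding.owned_physical_copies

# 3 physical copies owned, 2 digital loans out: one more loan is allowed.
print(can_lend_digital(Holding(owned_physical_copies=3, active_digital_loans=2)))  # True
print(can_lend_digital(Holding(owned_physical_copies=3, active_digital_loans=3)))  # False
```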
So as I said, those four models all have these types of steps in common. What I'm going to do is talk a little bit about some of the recommendations that the group has begun to put together, so that we can get a sense of some of the challenges that we're going to have as we all begin implementing these within our institutions. For some, we're actually looking at the discovery problems of holdings, and what it looks like when you're mapping the number of physical items to the number of digital objects.
Having something like a single-part volume on a holdings record is easy; you know how many items you can check out versus how many you can't. However, with a serial or a multi-part or a multi-volume set, there are real challenges in terms of enumeration and chronology, especially if you think about how the holdings record is really telling an aggregate of what you have for a particular title.
It is going to be a challenge to make sure that that owned-to-loaned ratio is tightly managed. There are other types of issues in terms of description, in terms of where you actually put the terms of use within your records. If you load your MARC records into any type of universal union catalog, you may have issues in terms of where that data is supposed to be stored, so that across all of the partners that load their controlled digital lending objects,
we make sure these are all in the right place, and we're not transforming all of these different descriptors based on the system that they're actually leaving. Now, there's an emerging format coming out called OPDS that merges bibliographic data as well as terms of use and presentation info into a single feed, and particularly with the use of an application called Library Simplified,
this has really come into use. There are also issues with our old friend OpenURL. You can see here that one of the things with OpenURL is that it doesn't tell us the preferred format that somebody might be requesting. It almost always assumes that you're asking for an electronic document.
So if you need to actually convert that document from physical to electronic, we want to see if we can get some of the interlibrary lending protocols that are much more recent, such as ISO 18626, to inform some of the use cases that are going on within the OpenURL spec. The OpenURL spec was last revised in 2010, so I think a bit has happened since then.
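To illustrate the gap being described, here is a minimal sketch of an OpenURL 1.0 (Z39.88-2004) key/value request built in Python. The standard keys identify the cited item and the referrer, but none of them expresses a preferred delivery format such as a CDL loan; the extra parameter at the end is purely hypothetical, added to show where such a hint would have to go.

```python
from urllib.parse import urlencode

# Standard Z39.88-2004 (OpenURL 1.0) key/value pairs describing a book citation.
params = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",
    "rft.btitle": "An Example Monograph",
    "rft.isbn": "9780000000000",                 # placeholder ISBN
    "rft.au": "Researcher, A.",
    "rfr_id": "info:sid/example.edu:discovery",  # hypothetical referrer ID
}

# None of the standard keys above expresses a preferred delivery format
# (print, e-book, CDL loan); this extra parameter is illustrative only,
# not part of the OpenURL spec, and shows where such a hint would have to go.
params["svc.format"] = "cdl-loan"

print("https://resolver.example.edu/openurl?" + urlencode(params))
```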
So we may actually want to create some work items and have that group take a look at some of those proposals. You'll see here that ISO 18626, which is a protocol used between requesting and supplying libraries, already has the use case for controlled digital lending, as well as e-book lending, in its accepted version. What we're looking for now is to see the industry actually adopt this and use the taxonomy that's articulated in these use cases to begin promoting controlled digital lending between requesting and supplying libraries.
Within presentation and delivery, certainly there are going to be accessibility issues that have to be planned for. There's a subgroup that's looking at different types of accessibility controls, as well as access and preservation file formats and technical specifications. There are certainly challenges once you've digitized this material; you've seen some of the functionality in Amazon and Google around whether you can actually search inside or look inside.
I think people would just be happy to deliver the book at this point. But these are some questions that are beginning to arise as we begin thinking about controlled digital lending as a deployed service. The other thing that I just want to mention is that there is a bit of overlap with the collective collection lifecycle project, in that, focusing on the material in scope, there are also beginning to be conversations about whether we can coordinate scanning efforts for particular titles or areas of collections across a large consortium.
Can we also have a shared infrastructure in terms of sharing assets? This type of thing is definitely something that other groups will take on. Authentication and authorization, as I said, is probably one of the biggest areas here. You have the simplest, manually managed process of sequestering here; with general circulation, you can see we've added the integrated library solution to actually check availability.
Usually we're using the SIP protocol to do that, an oldie but goodie. Part of the reason is that we wanted to guarantee as much backward compatibility as possible, so that's certainly going to be a recommendation that comes out of the group. And then, as you start (90 seconds), as you start thinking about what this looks like across a network, other methods of authentication and authorization actually become much more prevalent.
So, for example, whether we're using one-time passwords, or hashed passwords, or tokenized URLs, or other types of authentication mechanisms, those are going to be some of the recommendations that come out of the group.
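As a rough illustration of the tokenized URL idea, here is a minimal Python sketch of a supplying system handing out a short-lived, signed link for one digital loan, so a patron at another institution can be granted access without a shared single sign-on. The host name, parameters and shared secret are all hypothetical.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Hypothetical shared key between the supplying system and the broker.
SECRET = b"shared-secret-between-supplier-and-broker"

def tokenized_url(item_id: str, patron_ref: str, ttl_seconds: int = 3600) -> str:
    """Build a short-lived, signed URL for one loan of one digital object."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{item_id}:{patron_ref}:{expires}".encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    query = urlencode({"item": item_id, "patron": patron_ref,
                       "exp": expires, "sig": signature})
    return f"https://cdl.supplier.example.org/loan?{query}"

print(tokenized_url("book-12345", "patron-at-another-library"))
```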
I'll leave you with this, just to put up the calendar as far as where we are right now. We're in the midst of drafting our recommendations and collectively putting together the report. In April, we're hoping to have a public comment period, and then in July, we're hoping to actually release the full report. So if you have any questions, I certainly hope that you raise them at the end of the hour. Thanks. Awesome, thanks, Alan. Sorry to jump in. No, no, no, that's fine.
So our next alphabetical project, careening along the road of NISO work, is Manuscript Exchange Common Approach, and we've got Tony Alves here from HighWire to tell us all about it. Well, we're already on M. Yeah, let's see. And I don't think you actually have any snow there in Hopedale, do you?
There's no snow. It feels like spring outside. Whoops I'm. There we go. So one moment. Can you see my screen yet? No, not yet. Oh, OK.
Too much technology. I thought I hit share, and then I guess I didn't. Now you can see my screen? In a moment. There we go. Yep, we see an internal view. Here we go. Yep, looks good. OK, great.
So my name is Tony Alves. I'm a senior vice president of product management at HighWire Press. There, I lead a team of product managers that oversees a suite of platform products addressing the whole scholarly publishing infrastructure. But for today, my purpose here is that I'm involved in promoting industry standardization focused on system-to-system communication protocols and other industry shared services.
And as part of that, I serve as co-chair of the Manuscript Exchange Common Approach NISO standing committee. Manuscript Exchange Common Approach, or MECA, is a NISO recommended practice that facilitates the exchange of manuscript files and of data from one system to another. I'll discuss what MECA is.
I'll review the history and the publishing workflow challenge it was meant to address, I'll identify real-world implementations, and I'll finish with a preview of the ongoing work of the MECA standing committee. I've been involved with MECA since 2017. A straightforward description of the MECA recommendation is that it is a documented methodology that describes how to create a package of computer files and how to transfer the contents of that package in an automated, machine-readable way.
So the purpose of MECA is to establish a common, easy-to-implement protocol for transferring research articles from one system to another, so that the different systems don't have to develop multiple pairwise solutions for each and every system that they need to talk to. The magic of MECA is that it lays out an easy-to-follow map to accomplish this, and the MECA specification fully describes how a software system should structure files, assemble those files and then transmit them.
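As a rough sketch of what "a package of files plus a manifest, transferred machine-readably" can look like, here is a short Python example that assembles a ZIP archive with a manifest listing its contents. The file names and the manifest markup are simplified placeholders; the MECA recommended practice defines the actual package structure and schemas.

```python
import zipfile

# Simplified placeholder contents; the MECA recommended practice defines the
# real file set and manifest schema.
files = {
    "article.xml": "<article><!-- JATS-style article metadata and body --></article>",
    "transfer.xml": "<transfer><!-- source and destination system info --></transfer>",
    "figure1.tif": b"\x00\x00",  # stand-in binary asset
}

# A manifest that lists every item carried in the package.
manifest_items = "\n".join(f'  <item href="{name}"/>' for name in files)
manifest = f"<manifest>\n{manifest_items}\n</manifest>"

with zipfile.ZipFile("submission-package.zip", "w") as pkg:
    pkg.writestr("manifest.xml", manifest)
    for name, content in files.items():
        pkg.writestr(name, content)

print("Wrote submission-package.zip with", len(files) + 1, "entries")
```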
So, real quick: in 2017, John Sack from HighWire contacted representatives from a bunch of different submission system vendors, and we all got together and collaborated on a common methodology for transferring manuscripts between our various systems. At that time, I was at Aries Systems. In 2018, the same group submitted a proposal to NISO to codify the file and data transfer protocol; NISO accepted the proposal and the working group was created, and that's who you see in this table. What the initiative did was produce a set of guidelines and best practices that publishers, manuscript submission systems and preprint servers, or really any system at all, can utilize to transfer data and files.
The recommended practice was approved on June 26 and published July 6th of 2020. The primary objective was to alleviate author frustration. Authors are frustrated by redundancy of effort; they have to repeat tasks and duplicate efforts during the manuscript submission process, especially when an author is asked to resubmit their rejected research to a different journal. And similarly, reviewer frustration
was cited as a major concern, and it was a driving force behind the early discussions around the need for a common approach for transferring manuscripts and related peer reviews. As for use cases: along with relieving author and reviewer frustration, MECA also addresses other primary use cases, such as transferring papers to and from submission systems and preprint servers, transferring papers from collaborative authoring tools
and other manuscript preparation services, and transferring manuscripts to and from other systems and services like third-party peer review, AI and ML services, production vendors, and repositories. So anybody can use this set of protocols. The MECA team focused our work around several principles. One principle was to let journals set the rules of what is transferred.
So it's up to the journal to decide what gets transferred. The MECA team defined what data and files could be transferred, but only the minimal data needed to start a submission record in a system would be required. We wanted to define a minimum viable product in order to get the project off the ground quickly, and we wanted to be sure that it could be expanded for future use cases; I'll talk a little bit about that near the end.
We wanted to design a protocol that was based on best practices and industry standards, because we wanted there to be a low barrier of entry for the use of MECA. And it's useful to understand that MECA is a technical recommendation. It's not code, it's not software, it's not a database, it's not a central hub. It's not a service like Crossref or ORCID.
It's a specification. So I've covered why, and what MECA is. And here are a couple of real-world implementations. The cell biology transfer network is a coalition of cell biology journals that use MECA to transfer articles between journals that are published by different publishers using different submission systems. The Journal of Cell Science, the Journal of Cell Biology and Molecular Biology of the Cell offer manuscript transfer options to authors when manuscripts have been declined.
So they're all working together to do a cascading workflow. When a manuscript is submitted to one of the journals and it's declined, the journal editor can offer the authors the opportunity to transfer their manuscript and reviewer comments between the journals' submission systems. The authors are given the option to update their manuscript files prior to the resubmission.
This manuscript transfer option doesn't just alleviate the burden on the author; it also alleviates the burden on reviewers, who may encounter the same manuscript from different journals. And then another implementation: the preprint servers bioRxiv and medRxiv.
They utilize MECA for their B2J and J2B offerings. B2J means transferring from their preprint server to a journal, and that's meant to save authors time in submitting papers to journals by transmitting the manuscript files and metadata directly from bioRxiv or medRxiv to one of many participating journals. So again, the authors don't have to spend time reloading manuscript files and reentering author information into different systems.
J2B means that the journal can transfer a submitted manuscript to a preprint server, which is becoming more and more common, and again, it just saves time for everybody. Also, bioRxiv and medRxiv have a service called B2X, which facilitates the transfer of a preprint that's been uploaded to one of their preprint servers to a third-party service that can help the author improve their manuscripts.
Services like that do things like perform peer review or check for funder and data compliance. Those services are independent of bioRxiv and medRxiv, but MECA is used to transfer files and data from the preprint server to any of those sorts of third parties. So ultimately, I think the MECA recommended practice can be seen as a successful collaboration with stakeholders from various areas of the publishing ecosystem.
It provides a framework for manuscript exchange that has a low barrier to entry. With the initial recommendation, the working group recognized that there's still work to do, and so we've committed to work together to evolve this recommended practice. You can see here the NISO standing committee; there are members of the original team plus additional members, and we're always looking for more people to work with us on this.
As for ongoing work, the standing committee is focused on three activities; let me get them all up there. First, it's outreach, like what I'm doing right now. Another important activity is our investigation into moving from an FTP transfer method to an API solution, and we are looking for assistance with that. If anybody out there wants to help us with that, they would be a really welcome addition to our standing committee.
And then a third activity, which is, I think, really interesting and exciting because I've spent two decades in peer review, actually longer than that: it's examining other initiatives around peer review, like transparent peer review, community and preprint peer review, and post-publication review, in order to incorporate those different types of use cases into the MECA recommended practice.
We have developed a review XML schema based on JATS, and we want to expand that to incorporate lots of other peer review models. (90 seconds.) I'm done. Oh, great. Thanks, Tony. Wonderful, so that's all about MECA. Next in our alphabetical list is the Open Discovery Initiative.
And Bobby Latham from Springer Nature is here to tell us what's going on with ODI. OK, you can see me, right? Yes, OK. And I'm going to share my screen. OK, let me put it in display mode.
And we see slides. OK, so you see the slides now, right? Yes. Oh, OK. There we go. OK, so you see what we can see. OK, so good morning, everyone. Thank you for attending the NISO update presentation today. I would like to thank the NISO members for the opportunity to present at this conference.
So let me introduce myself. I'm Bobby Latham, Discovery Services Manager at Springer Nature. I work closely with the discovery vendors, link resolution service providers and other third-party vendors to create clear targets for the Springer Nature content. So this is the agenda for today. I'm going to talk about the history of ODI, the goals and benefits, and how each party has its own responsibilities,
and why your participation matters, whether you are a librarian, a content provider or publisher, or a discovery vendor. So I would like to start with a quote, and you might have heard this previously from someone: Google can bring you back 100,000 answers, but a librarian can bring you back the right one. This is from Neil Gaiman. So let's move on to the next one.
So we have 10 years of history. ODI was proposed at ALA Annual in 2011. As you see, it took more than a decade. In 2014, the first recommended practice was released by the working group. Then I joined the committee in July 2018. The standing committee began the revision in 2017, and the updated recommended practice was released in 2020.
OK, so the goals of ODI. It is important for librarians to work closely with the discovery system vendors, because their services are important for the users, and these systems have become more complex for librarians these days, and it is hard to explain to librarians what is or is not included in their system. So the ODI recommended practice provides a significant opportunity to understand what is indexed, where it comes from and how it is used.
It ensures that the coverage meets the librarians' needs, and it determines what usage statistics should be collected for librarians and for publishers. So that was the primary goal: defining the models for fair linking from discovery services to the publishers' content, and what usage statistics should be collected for librarians and for the content providers.
So, the benefits. Anything we do, we need to find the benefit. For librarians, ODI makes it easier to understand which resources are included in the discovery system; finding the relevant content is not that simple, and ODI makes it easier. From the content provider's perspective, participation in the discovery service makes content more valuable and discoverable, increasing the usage and decreasing the likelihood of cancellation.
And when it comes to discovery service providers, participation increases transparency, improving customer satisfaction and retention. So this was released in 2014: the technical recommendation for content providers and discovery providers for data exchange, including the data formats, methods of delivery, usage reporting, frequency of updates and rights of use. So this is a model by which content providers work with the discovery service vendors,
with fair and unbiased indexing and linking. It's a way to assess conformance by content providers and discovery providers. Now we have included the librarians as well. So from 2014 until now, 2020, where we are now: promote educational opportunities about adoption of this recommended practice, including how discovery systems fundamentally work. It also provides support for the content providers, like us publishers, and the discovery service providers during adoption.
And it provides a forum for ongoing discussion related to all aspects of discovery platforms for all stakeholders, the three major ones: content providers, discovery providers and librarians. I just tried to put together the shapes and everything; it didn't come out well, anyway. So each party has responsibilities to the others. Discovery vendors, librarians and content providers: I have been repeating over and over that these three parties work together.
So we are all in this together. Understanding the problems: discovery service providers, content providers and librarians are all part of the process, and each group has an impact on the others. Nothing can happen if all three don't participate. So communication needs to happen between and among each group, and your participation is so important for us. And now we're going to look at the discovery service provider's role, and then go one by one: the librarian's role and then the content provider's.
So, the responsibilities of discovery service providers. As you can see, the discovery providers need to provide clarity on what is included in their indexes, both in terms of titles and of larger marketed databases. Discovery service providers should also provide monthly reports that include the number of records in the central index, the number of records full-text searchable in the central index, the number of records abstract-searchable in the central index, subject-searchable, and then articles that are free to read.
And the date of the most recent marketed product update is also important. So, transparency. Transparency is about what is included in the discovery system. I will show you the collection and title level details in my other slide. Here you can see the transparency at the collection level of metadata; that's the first one, and the title level is the second one.
So libraries and their end users need to understand what is available through the discovery systems in the central index. Important fields that need to be included are: provider marketed product title in the knowledge base, number of articles in the knowledge base, number of unique records in the central index, the percentage of full-text searchable records in the central index, and abstracts searchable in the central index.
As I said earlier, there are also the marketed product update and the reports, and you can see the collection level and title metadata; I don't want to go into further details. This is the same idea: do not discriminate based on the business relationship in generating the result relevance or the link order, fair linking, allowing librarians' preference in establishing which platform to link to, and a statement of neutrality in algorithms.
That's important. As you can see, we provide the content, and then when the search results come, the linking can look like biased linking; that's what we've noticed. So the business relationship between the content provider and the discovery service should have no impact on the result relevance or the link order for the end users. The COUNTER reports provide usage of one provider's content by all customers.
These reports allow content providers to see whether customers are using their content and clicking on the links, and whether only licensed institutions are getting access. Since content providers are losing control of their metadata within the discovery system, this happens; it actually becomes very important for us to see how our material is being used, because we provide something,
they show it differently, and then we receive complaints, and customer service gets a lot of complaints. So seeing the statistics from all mutual customers can tell the content providers if the content is being found. And again, as I have talked about over and over, and as stated above, it is important that the core metadata is made available for indexing:
a core set of metadata elements and the content items, that's full text, abstracts and other things. The full range of metadata improves the discovery service for users. Information such as coverage, content provider, type, and what is provided to the discovery service vendors are all details that can assist librarians in aligning the user to content, along with the minimum amount of data necessary for OpenURL resolution.
So libraries need to have OpenURL to link to the platform of their choice. OK, I'm going further into the details. The presence of data points that support direct linking should not supplant OpenURL access, which would remove the librarian's ability to choose which platform to direct users to. So for materials to be discoverable in the discovery system,
at a minimum, the following need to be included when applicable: title, author, author identifier, publisher name, volume, issue, page, identifier, component title, component title identifier, item URL, open access designation, full-text tag, content type, content format, language, indexing date, and abstract. The recommended practice includes examples of all these fields, illustrated; you can see them on this screen.
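As a rough illustration of what that minimum field set might look like when a content provider hands a record to a discovery service, here is a small Python sketch. The key names and values are informal stand-ins for the fields just listed, not the recommended practice's normative element names or required formats.

```python
# Informal stand-ins for the minimum fields just listed; these are not the
# recommended practice's normative element names or required formats.
discovery_record = {
    "title": "Example Journal Article",
    "author": "Researcher, A.",
    "author_identifier": "https://orcid.org/0000-0000-0000-0000",  # placeholder
    "publisher_name": "Example Publisher",
    "volume": "12",
    "issue": "3",
    "page": "45-67",
    "identifier": "10.0000/example.doi",        # placeholder DOI
    "component_title": "Example Journal",
    "component_title_identifier": "0000-0000",  # placeholder ISSN
    "item_url": "https://platform.example.com/article/10.0000/example.doi",
    "open_access_designation": "hybrid",
    "full_text_tag": True,
    "content_type": "journal article",
    "content_format": "HTML",
    "language": "en",
    "indexing_date": "2021-02-01",
    "abstract": "One-paragraph abstract supplied for indexing.",
}

for field, value in discovery_record.items():
    print(f"{field}: {value}")
```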
And when it comes to the librarian's role: when establishing a training program for the discovery system, one needs to take into account that there are multiple user communities. Library public services staff need to know how to take best advantage of the searching features and to be made aware of any potential configuration problems.
You can also develop a training program to meet different users' needs, and review all the system upgrades. Sometimes libraries have an outdated system that is no longer supported; make sure your system is up to date, and learn more about what is available in the discovery system. In 2014 we made recommendations between the content providers and the vendors,
but in 2020 we are highly recommending that librarians participate and learn about these systems, so that we get fewer and fewer complaints, and so that the content the users expect gets added. Releasing the conformance statement is important: complete and publish a library conformance statement (I can show you at the end of the presentation if you have time) and follow up with your vendor partners on their conformance statements.
And if you need more information, go to the NISO website to learn about the library conformance statement. (2 minutes.) Oh, OK. I did not realize I took so long. These are some resources you can see. OK, this slide took more than an hour compared to the other slides.
You know why? Right, so I was getting pictures of each of the people. I'm glad that I was able to pull their pictures from online. They did all the ODI hard work, and this should be visualized. So I brought this slide here; as you can see, Lara and Rachel are the co-chairs for this committee. OK, so these are some resources.
I don't want to go into details, but you can find the benefits of participating, the ODI conformance statement, and the FAQ and available implementation guide. They also provide an implementation guide and reference guide for the content providers on the NISO site. And yeah. Here, OK.
I don't know if we have time; I can show the librarian's conformance statement. If not, that's fine. Yeah, you've got a minute and a half, so. OK, yeah. Let me see how I can do it. Escape. Oh. OK, OK. You're sharing PowerPoint, not your browser.
Oh, man, I can't do it then. Or, like, you could just... if you Google NISO ODI, you'll pull up the standing committee page, and from there you can link to the library conformance area, which is an area of particular focus right now. OK, yeah, it's OK, I don't want to complicate things. We have less than a minute, so if you go to the NISO site, or type NISO ODI into Google, you'll see the pages, and then the content providers have released their conformance statements, and the discovery vendors and librarians too, so you can see how they released them and what they cover; basically you're learning about the system and how you cooperate with the discovery vendors.
So all these details are available. There are more than 30 fields, I think, I saw it. So please learn more about it and release the conformance statement. It is important so that you can access the content from the discovery vendors, whatever we provide. OK, and so that's all. And attending the NISO Plus conference will help you focus on identifying concrete next steps to improve the information flow.
So it is a plus for your libraries. In the same way, releasing a conformance statement for ODI on the NISO site should be a plus for librarians. That's all. Thank you for listening, and have a good rest of the day. Thank you, Bobby. It's a great presentation, and you can stop sharing. There we go.
Thanks. So, the proverbial last but not least: we have Transfer, last on our list, first in our hearts, with Linda from... I'm sorry, Linda, how do you say your surname? It's Wobbe. Wobbe? I will remember that. So, Linda Wobbe from SCELC.
Yes, let's see if I can get my technology to behave itself. And let's see. We know... we see the presenter view. OK, let me swap. There we go. Now, look, I think we see what you want. Yeah, OK, let me... it's so funny. You
can't control your stuff. Oh, right. Well, anyway, greetings, everybody. I've learned so much from this presentation. I really am delighted to be sharing this little slot with the people in front of me, because it all seems related. It has been really fun. I'm Linda Wobbe. I work at SCELC, which is a library consortium based in Los Angeles.
You see, this little picture is implying that it's like always 80 degrees. Well, it's not. And I am the assistant director for external relations here at SCELC, and I serve as co-chair of the NISO Transfer Standing Committee. Yeah, let me see if I have any control over the next slide. Yes, indeed. So what is Transfer?
Transfer has to do with journals. Journal publishers and societies may decide to switch where their titles are hosted from time to time, and to make sure that the complex steps involved in these transfers are addressed by both transferring and receiving publishers, the Transfer Code of Practice, a NISO recommended practice, was developed. The code is voluntary, governed by a standing committee representing all stakeholders, and has been endorsed by more than 90 publishers and societies.
The Transfer Alerting Service is a database of those registered transfers that can be searched; it is hosted by the ISSN International Centre in Paris, and it's receiving publishers that post the information about those transferred titles. A little bit of history: the Transfer Code of Practice was created by a UKSG working group, which published its first version in 2007; the work was transferred to NISO in 2015, and it's regularly updated. We are on version 4 right at the moment and working on version 5, and I believe I started working with this group in maybe 2017 or so.
So I've been through a bit, like everybody else. I'm trying to give credit to the amazing team of standing committee members. My co-chair, Sophia Anderton, is from a society publisher, and we have representation from all the major journal packages. I realized earlier this morning this list is actually not complete: we have representation from the ISSN International Centre in Paris, but also the Library of Congress ISSN Center, and the Elsevier representation I don't think is listed here yet, but I will fix that, and librarians and societies.
And so it's a pretty amazing group of people. Previously our focus, as many people have said, has been on promotion, as we go out and promote and pester publishers to sign up. If you have seen me at your table in the exhibit hall at ALA, where I'm heading next, just sign up to endorse so you don't have to see me again.
So we updated the Transfer website. We worked really hard to create materials for publishers to make it as simple as possible to know how to register those transferred titles. There's a video; it's just a few minutes, very, very simple. And the slides will be distributed.
There are lots of links. I'll drop a bunch of links into that shared document for people: to the FAQ, the slide deck, the video, and all the other fancy things we have on our website now. But more recently, the last couple of years, we've been working on revising the code of practice and drafting. So this slide lists some of the areas where we've been focusing attention: of course, transformative agreements and open access publishing, and waivers and APCs,
and what happens when a title was going to be published open access on one platform or publisher, and then it moves to another one where that title is not going to be open access. So there are lots of little details: subscribing institutions, defining who really paid for a subscription, perpetual access rights, backfile content, and updating downstream services with corrected links.
And some of these topics are addressed in the current version; it's not like they were completely overlooked, but we need to provide more clarity for the stakeholders. I think it's really interesting that most of the committee, maybe all of the committee, is involved because they want to improve the code. So this is the second time around for me for a revision, and this has been an amazing experience.
It's so collaborative; everybody has a voice. We had a subcommittee looking at the comments we've received. Every time I give a presentation, and I've done a few, I receive a flood of new comments about how to improve the Transfer Code of Practice. Well, we are diligent: we recorded all of the comments we've received since our version 4 was produced on the NISO site.
But that isn't really great for figuring out how you want to revise the text. So we transferred all the comments over to a Google spreadsheet and grouped them together by category and/or by existing section of the code. You can see some of it here, though it doesn't really depict the highlighted sections; this is a boring spreadsheet, believe me.
But you can see the highlighted categories: the transformative agreements, the perpetual access, the article-level metadata, the flips, and then the flip from open to not-open or not-open to open, and journal-level metadata. So we went through all the comments and we drafted new text for those.
And now we're going through the existing text and seeing if it can be improved, and incorporating the new content. Everybody on the committee has signed up to draft new text. At our every-other-month meetings, we're improving the text and building consensus. Since this is a recommended practice, it's not prescriptive
so much as it is helping everybody, the transferring publishers, societies and receiving publishers, know what needs to be discussed. Then they can use MECA, apparently, to actually transfer the content. Let's see, a little benefit for publishers and societies: if there's any publisher in this room that publishes journals or receives journals and has not endorsed, think about doing that.
Succinct checklists for both transferring and receiving publishers help you not overlook the important assets and information, suggest timeline benchmarks, and let you do things the way that works best for you; it's not prescriptive. And there's a link to the 90 publishers. In addition to endorsing, don't forget to actually register your received titles in the Transfer Alerting Service.
The Transfer Alerting Service is a database, now hosted at the ISSN International Centre, that lists all of the transfers that have been posted there since the beginning of time, since the beginning of Transfer. All the information librarians need is in one place: the past URL and the new URL, when the transfer was effective, the exact volume, which publisher has the archive to provide the perpetual access; and folks can sign up to receive the most recent postings, which are released on a regular basis.
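As a loose illustration of how a library might use the alerting data just described, here is a small Python sketch that matches posted transfers against a library's own holdings to flag links that need updating. The field names and values are informal placeholders, not the alerting service's actual schema.

```python
# Informal placeholder records; not the alerting service's actual schema.
posted_transfers = [
    {"issn": "0000-0001", "title": "Example Journal A",
     "previous_url": "https://old.example.com/a",
     "new_url": "https://new.example.com/a",
     "effective_date": "2022-01-01"},
    {"issn": "0000-0002", "title": "Example Journal B",
     "previous_url": "https://old.example.com/b",
     "new_url": "https://new.example.com/b",
     "effective_date": "2022-03-01"},
]

subscribed_issns = {"0000-0002"}  # this library's holdings (placeholder)

# Flag the transfers that affect titles the library actually subscribes to.
for record in posted_transfers:
    if record["issn"] in subscribed_issns:
        print(f"Update links for {record['title']}: "
              f"{record['previous_url']} -> {record['new_url']} "
              f"as of {record['effective_date']}")
```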
And you can search the database here. It is so cute, kind of old school. If you hit the Browse button, you get the most recent transfers. I'm sorry, this isn't a brand new screenshot; this is just an example. And part of what I like to remind people is that if you click the link on the title name, then you get a whole bunch more information, including the bits and pieces that I explained before.
And these are some links that I'll drop into that shared document: to the Transfer site, the alerting service, the code of practice. And I look forward to the questions if we have time. Great, thank you, Linda. And indeed, we do have time. We have about 8 and a half, 9 minutes for questions.
I think we could start by just doing questions all together. I'm going to spotlight the speakers so that they are in the middle, I guess, so to speak. Let's see. Maybe I can only spotlight... you've got your video on. Can speakers start their videos? Oops, let's see.
And Bill. And Bobby. Great, I think that is everyone. So we've got our speakers and our projects. Does anyone have any questions for any of them?
Or we could also split up into breakout rooms; that might make it a little bit easier to talk about the specific projects. We still have some time for that. What do people think? Oh, let's stay together. Or do you want to stay together? That's fine, too.
OK, all right. Cool, let's see. You can unmute yourself and ask; that's fine. Or if you want to put a question in the chat, you can do that. I think we all did such a great job explaining our projects. Yeah, I think that might be the case, that you've said it all, and that's totally OK too.
If that's the case, then we can move on to our break. I don't want to hold anyone back. But as people were talking, I had questions. Yeah, then it's time to break it down. I do have a question for Bobby. Yeah, Bobby, do most of the discovery providers participate in ODI?
That's a good question. Not most of them. Yeah, some participate. You can see the links available on the NISO site; if you type NISO ODI, you'll see the links. You can see the list of participants from the librarian side and the discovery vendor side. And then OCLC and Ex Libris have released their conformance statements.
You can see the link there. Yeah, I think it also depends on how you define a discovery provider, because I think that definition is certainly growing. Is Google Scholar a discovery provider, for example? And don't you wish that it conformed? And here at NISO Plus I saw a talk about, I don't have the name, it's a product from Cactus Communications and I'm forgetting what the name is, but it's an app that's aimed directly at end users for discovery purposes, particularly for open access.
So I personally would say that that definition of discovery certainly is broadening out quite a bit, which is interesting. Yeah, I will share the slides with you so you can see the resources page that has more information relevant to what we released in 2020, the participants, and who has released a conformance statement from the provider end.
Yeah, any other thoughts? Reactions? All right. Well, you know, from my vantage point, as the NISO staff member who tries to oversee the work and tries to make it easy for people to participate and provide their expertise in these projects,
I'm always really amazed at the energy and the interest that is shown by working group members and particularly co-chairs to pull these things together and keep them moving. So I'm really grateful for all the work you do, and especially the time you've put in to making these presentations to talk about your work; it really makes a difference. And thank you so much.
Thank you. Thanks, Nettie. Take care, everyone. Bye bye. Bye.