Name:
Knowledge bases and next steps: new and upcoming
Description:
Knowledge bases and next steps: new and upcoming
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/370b7cc4-0753-41f8-a0a2-bc50db9a32cc/videoscrubberimages/Scrubber_1.jpg?sv=2019-02-02&sr=c&sig=ZGktu3U%2BePrpXsaBrrjC4FjiCzD%2Bl6mlBkNHxx%2Ftsx0%3D&st=2024-12-21T14%3A10%3A54Z&se=2024-12-21T18%3A15%3A54Z&sp=r
Duration:
T00H42M49S
Embed URL:
https://stream.cadmore.media/player/370b7cc4-0753-41f8-a0a2-bc50db9a32cc
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/370b7cc4-0753-41f8-a0a2-bc50db9a32cc/37 - Knowledge bases and next steps - new and upcoming-HD 10.mov?sv=2019-02-02&sr=c&sig=AErbzjTZDBFDS4K2GkWJC%2FwSqRrRJMrBnUZsgMTLP08%3D&st=2024-12-21T14%3A10%3A55Z&se=2024-12-21T16%3A15%3A55Z&sp=r
Upload Date:
2021-08-23T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
[MUSIC PLAYING]
PETER MURRAY: Good day, everyone, and welcome to this session on knowledge bases and next steps. My name is Peter Murray and I will be the moderator for this session. During this session, we have two topics. The first is the NISO KBart validator app. On this topic, we have Ben Johnson and Davin Baragiotta. Ben is the Provider Relations Engagement Manager at Ex Libris, a part of ProQuest.
PETER MURRAY: He's been working on KBart for nine years, including two years as co-chair of the KBart standing committee, so he certainly brings a lot of knowledge and history to the topic. Davin leads the Information Technology Team at Consortium Erudit. He is a recent addition to the KBart standing committee and the developer behind the NISO KBart validator app. Ben, let's start with you.
BEN JOHNSON: Thank you, Peter. As Peter said, Davin and I are very excited to bring this to you. I've been waiting a long time for an app like this and let's get right into it. So in order to know anything about the validator app and understand what we're talking about, I hope you know about KBart already. It's a NISO recommended practice about transmission of electronic resource metadata between content providers, libraries, and knowledgebase suppliers.
BEN JOHNSON: It's designed to be easy for the provider to create, as well as easy to consume by knowledge bases and libraries, and by people of all technical abilities, so your salesperson at a content provider or your less technical library staff. There is a KBart endorsement process to vet the quality and the overall adherence to the recommended practice, and so that's key in this whole validator app paradigm.
BEN JOHNSON: So the KBart standing committee, we spend a lot of time validating files that we receive for endorsement against specific criteria in the KBart recommended practice during this endorsement process. That's very manual validation, with manual feedback, manual typing, and manual comparison, and it involves multiple volunteers, which can cause a lot of delays.
BEN JOHNSON: And this ease of implementation that I was talking about for the KBart recommended practice also introduced some costs, because it's tab-delimited text. It's less standardized, and so there can be a lot of little-- there's a lot of different ways that files can go wrong. So each point in the supply chain has a need for validating KBart files.
BEN JOHNSON: We need consistency in feedback whenever we possibly can. And formalizing that feedback and creating more of a uniform usage of the recommended practice is of critical importance to us. We want faster and more, dare I say, on demand kind of feedback, particularly for content providers who are generating files. As they're developing those files, they can have their questions answered more quickly and be able to develop much faster.
BEN JOHNSON: So automated validation should be a tool to help drive more widespread adoption as well, since it should be easier to implement, at least that's what we're hoping. There haven't really been many attempts at this kind of validation before. I believe there was one in France by the BACON consortium, but that was fairly limited to just their content.
BEN JOHNSON: So this is really kind of a new thing for us. So the folks that are going to be using the validator: the KBart standing committee, obviously. We have the need to streamline this endorsement feedback process. For those parts that can be automated, we want to be able to focus our time on the parts that can't. Then content providers creating their own KBart files, as I already mentioned. Knowledge base suppliers also provide feedback about KBart files to content providers, or files that look like KBart files.
BEN JOHNSON: So it would be useful for them to be able to use this tool as well. And for librarians, for transparency and oversight reasons throughout KBart, and also because librarians can fulfill these different roles as well, as a content provider and a knowledge base supplier. So this is the first official NISO application that I'm aware of.
BEN JOHNSON: It's currently in early development. It's going to be open source, and Davin will talk more about that; he is the person that's been doing all this development. Davin?
DAVIN BARAGIOTTA: Thank you, Ben. So, yeah, I want to dive into a quick demo to give you a sense of what the validator would be and how to use it. So the main concept in the validator is a list of validations that are run against a KBart file. You can run it against one file or multiple files in a directory. So I'll show you that right away.
DAVIN BARAGIOTTA: Here, I have a list of files for my own organization, Erudit, and we'll perform a validation test on the whole titles package. So here, you just-- yeah, the validator is written in the Python programming language, and to launch it, you just have to launch the script and give it the file name. So let's make it run, and forget about the output in the terminal.
DAVIN BARAGIOTTA: What's important is to look at what's meant to be read by humans. So this folder has been generated by the validator. You can see the version of the validator, and the results are stored in the file. Here you can see that it's the same name as the input file, with the date of the results. If you open this, it's a CSV-like file, but tab delimited. You open this with your favorite spreadsheet software.
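To make the workflow Davin describes concrete -- point the script at a file (or, as shown later, a directory) and get back a timestamped, tab-delimited report next to the input -- here is a rough sketch. All names here (`write_report`, the `validation_results` folder, the report columns) are illustrative guesses, not the actual validator's API:

```python
import csv
import sys
from datetime import date
from pathlib import Path

def write_report(kbart_path: Path, rows: list[dict]) -> Path:
    """Write validation results as a tab-delimited file named after the input."""
    out_dir = kbart_path.parent / "validation_results"  # hypothetical folder name
    out_dir.mkdir(exist_ok=True)
    out_path = out_dir / f"{kbart_path.stem}_{date.today().isoformat()}.txt"
    with out_path.open("w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f,
            fieldnames=["validation", "scope", "documentation", "rp_reference",
                        "result", "explanation", "values"],
            delimiter="\t",
        )
        writer.writeheader()
        writer.writerows(rows)
    return out_path

def main(argv: list[str]) -> None:
    """CLI entry: validate one file, or every .txt file in a directory."""
    target = Path(argv[0])
    files = sorted(target.glob("*.txt")) if target.is_dir() else [target]
    for kbart_file in files:
        # real validations would populate the rows; this sketch writes an empty report
        print("report written to", write_report(kbart_file, []))
```

The report being plain tab-delimited text, as in the demo, means it opens directly in any spreadsheet tool.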
DAVIN BARAGIOTTA: And let's have a look at the structure of it. Like I said, the basic concept is validations, so here are the 29 validations currently implemented. I'm not saying that they all work well-- this is why we are doing tests-- but we'll discuss these validations in a few seconds. Let's look at the structure. So each validation has a certain scope. There is a direct link to the documentation for each validation.
DAVIN BARAGIOTTA: I'll talk about this later on. There are the recommended practice references that are covered by each validation, and the results. That's the important part. Finally, there's an explanation, mainly when there is an error, so the user can understand what's wrong. And there are the values that were checked, so obviously, if it's OK, you don't need to check the values. The values that were validated just get displayed there.
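The report row he walks through -- validation name, scope, documentation link, RP references, result, explanation, values -- together with the result kinds discussed below (OK, error, warning, manual check) could be modeled roughly like this. Field and enum names are guesses from the demo, not the actual source:

```python
from dataclasses import dataclass, field
from enum import Enum

class Result(Enum):
    OK = "OK"                 # validation passed
    ERROR = "error"           # clearly violates the recommended practice
    WARNING = "warning"       # software cannot decide; provider should review
    MANUAL = "manual check"   # needs a human, e.g. "is this provider name right?"

@dataclass
class ValidationRow:
    validation: str           # e.g. "print_identifier_format" (invented name)
    scope: str                # "file name", "column", "row", "group of rows", "all files"
    documentation: str        # link to the documentation page for this validation
    rp_reference: str         # sections of the recommended practice covered
    result: Result
    explanation: str = ""     # filled in mainly on errors
    values: list[str] = field(default_factory=list)  # values to check when not OK
```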
DAVIN BARAGIOTTA: So the first set of validations-- well, first, there's no specific order, but all these validations are run against the file. So first we look at the name structure. Is it well formed? Do we have a correct provider name, region or consortium, package name, and date format? And also the file extension. As you can see here, there is an error in my file with the file extension.
DAVIN BARAGIOTTA: So this would be a request for change if we were in the endorsement process, because the file extension should be txt. That's super explicit in the recommended practice. But here, you can see the value checked is csv. Of course, there are some parts the validator cannot tell. Is, I don't know, Erudit the correct name?
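A file-name check along these lines -- name structure plus extension -- might be sketched as follows. The pattern is a loose approximation of the KBart naming convention (provider, region or consortium, package, date, .txt extension); the real validator's rules on allowed characters are surely more precise:

```python
import re

# Loose sketch of the KBart convention: Provider_Region_Package_YYYY-MM-DD.txt
FILENAME_RE = re.compile(
    r"^(?P<provider>[A-Za-z0-9]+)_"
    r"(?P<region>[A-Za-z0-9-]+)_"
    r"(?P<package>[A-Za-z0-9-]+)_"
    r"(?P<date>\d{4}-\d{2}-\d{2})\.txt$"
)

def check_file_name(name: str) -> tuple[str, str]:
    """Return ("OK" or "error", explanation) for a KBart file name."""
    if FILENAME_RE.match(name):
        return "OK", ""
    if not name.endswith(".txt"):
        # the case from the demo: the extension was csv instead of txt
        return "error", "file extension should be txt"
    return "error", "name does not match Provider_Region_Package_YYYY-MM-DD.txt"
```

Note that, as Davin says, a regex can only verify the shape; whether "Erudit" really is the right provider name stays a manual check.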
DAVIN BARAGIOTTA: So say, for me, it's OK. There is no wrong character in it. But just make sure that's what you recognize as a provider name or region, because it's really what you meant it to be. So that's the idea of the manual check. We might change this result value later on. The other kind of tests are based on the columns. So the KBart phase two recommended practice is really structured around columns, and it gives guidance for each column.
DAVIN BARAGIOTTA: The validator only performs format tests on the columns. That is, for a single column, do all the values in it have a correct format? For the whole demo, I will use the print identifier format because it's complete, it's well documented, et cetera. We'll go back to the documentation in a second. For instance, does the print identifier follow the ISSN format, et cetera?
DAVIN BARAGIOTTA: The value might also be empty. And now, the validator is just looking at one column. So if it's an empty value, well, it's fine. It's legal to have one. Another kind of result might be a warning. The warning is when the software is not able to say, yeah, it's OK, I know for sure that this validation passed, but it doesn't know for sure either that it's an error.
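A single-column format check of the kind described -- a valid ISSN or an empty cell, judged one column at a time -- could look like this sketch. It covers serials only (the real validator presumably also handles ISBNs for monographs), and the function name is invented:

```python
import re

ISSN_RE = re.compile(r"^\d{4}-\d{3}[\dXx]$")

def issn_check_digit(issn: str) -> str:
    """Compute the ISSN check character from the first seven digits."""
    digits = issn.replace("-", "")[:7]
    total = sum(int(d) * w for d, w in zip(digits, range(8, 1, -1)))
    rem = (11 - total % 11) % 11
    return "X" if rem == 10 else str(rem)

def validate_print_identifier(value: str) -> str:
    """Return "OK" or "error" for one cell of the print_identifier column."""
    if value == "":
        return "OK"      # emptiness is a row-level concern, not a column-level one
    if not ISSN_RE.match(value):
        return "error"   # e.g. "03785955" or "ISSN 0378-5955"
    if value[-1].upper() != issn_check_digit(value):
        return "error"   # well formed, but the check digit is wrong
    return "OK"
```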
DAVIN BARAGIOTTA: So it gives you a warning. So the manual checks and the warnings have to be revisited. The idea is that we might have some suggested changes, but it's at the discretion of the content provider to apply them or not. So it's not wrong, but it's a warning. Maybe it would be better if the content provider double-checks it. Finally, the other set of validations are the row validations.
DAVIN BARAGIOTTA: The idea of row validation is multiple-column validation. So if there are two or more columns involved in a validation, then it's a row validation. For example, each title, each row, has to have at least one identifier. The KBart standard, the recommended practice, has two kinds of identifiers: the print identifier and the online identifier.
DAVIN BARAGIOTTA: It's correct to have either of them empty. But both of them? That's wrong. So we need an identifier, and this validation just looks at whether there is at least one. The previous column validations checked whether the format was OK, so they don't detect when online is empty and print is empty as well-- they will validate it as OK, because empty values are OK.
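A row-level check of the kind being described -- at least one of the two identifier columns must be filled in -- is short; a sketch, with an invented function name and KBart's actual column names:

```python
def validate_row_has_identifier(row: dict) -> str:
    """Row-level check: at least one of print/online identifier must be present.

    The per-column format checks accept empty values, so only a multi-column
    (row) validation can catch the case where both identifiers are missing.
    """
    if row.get("print_identifier", "") == "" and row.get("online_identifier", "") == "":
        return "error"
    return "OK"
```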
DAVIN BARAGIOTTA: But this one checks if the two are empty; then it's an error. So that's the idea behind row validation. There's lots of things that we can code there. Currently, we are at 50% coverage of the current recommended practice, so there's a lot to be added. Finally, there is a group-of-rows validation. For a single title, for example, you can have multiple lines in a KBart file.
DAVIN BARAGIOTTA: One case is when you have a coverage gap, so you will have multiple lines to show what content you have in different periods. Or suppose that content has different access conditions-- some parts are under embargo and others are free to access-- then you will have multiple lines. So right now, what we do is group these lines by the title ID.
DAVIN BARAGIOTTA: And this validation checks, for all the rows grouped by title ID, is the print identifier the same? If not, it's an error. And if it's the same, it's OK. But discussing with the team and the KBart standing committee, I've discovered a use case I didn't know about, where content providers say, yeah, I gave you the same title ID, but the print identifier has changed.
DAVIN BARAGIOTTA: For example, when a title changed its name over its history, it might have different ISSNs, but the content provider provides the same title ID because all the content under both names is accessible on the website under that title. Anyway, so I will ease the restriction. It will just be a warning: there is a different print identifier. Is this OK?
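The group-of-rows check just described -- group by `title_id`, flag titles whose `print_identifier` varies, downgraded to a warning for the title-change use case -- might be sketched like this (invented function name):

```python
from collections import defaultdict

def validate_print_identifier_per_title(rows: list[dict]) -> dict[str, str]:
    """Group rows by title_id; flag titles whose print_identifier varies.

    Returned as a warning rather than an error, to allow the legitimate case
    where a title changed its name (and so its ISSN) but keeps one title ID.
    """
    by_title = defaultdict(set)
    for row in rows:
        by_title[row["title_id"]].add(row.get("print_identifier", ""))
    return {
        title_id: ("OK" if len(idents) == 1 else "warning")
        for title_id, idents in by_title.items()
    }
```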
DAVIN BARAGIOTTA: So this gives you a sense of what a validation report is-- one report per file. And, yeah, I wanted to demo that we can run this against multiple files. So it's the exact same command, but you just run it on the directory instead of a file. Then you run it-- same display, but for each file. And finally, there are specific validations run across all the files.
DAVIN BARAGIOTTA: So I'll show you the result for that. And so you see that now I have results for each and every file there, and there is a new one called all. That means it's a report for validations on all the files together. If I double click that, it should be here, voila, and open this. So obviously, it's the same structure. But now, I have a new validation, one that we have when there are multiple files.
DAVIN BARAGIOTTA: It looks at whether the content provider names in the file names are consistent. So it's OK here, and I can give you proof. Close that, sorry about that. Effectively, all the input files have the same Erudit content provider name, and that's what we validate. So maybe, later on, we will have other ways to launch the program, maybe a Python library to embed in Python software.
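The all-files check he demonstrates -- a consistent provider name across every file name in the directory -- is tiny if the provider is the first underscore-delimited segment of each name, as in the naming convention. A sketch (function name invented):

```python
def validate_consistent_provider(file_names: list[str]) -> str:
    """All-files check: the provider segment of every file name must match."""
    providers = {name.split("_", 1)[0] for name in file_names}
    return "OK" if len(providers) == 1 else "error"
```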
DAVIN BARAGIOTTA: But what would be better is having a web service, to interact with the service through an API, but this is future development and we're absolutely not there yet. I want to talk to you about documentation. This is a local file. The documentation is built with Sphinx, software that is used in Python communities generally. It can automatically build the documentation when there is a new version and publish it publicly on the web.
DAVIN BARAGIOTTA: For example, on Read the Docs. So that's the plan, but for now it's [INAUDIBLE] because we are just in development. The structure of the documentation: there is one documentation page per validation. And here, the first section is the recommended practice. So you have the exact excerpts of the recommended practice that are concerned with this validation.
DAVIN BARAGIOTTA: So for the print identifier, of course, there is the print identifier section, and also some other parts that come from the general section. And what's cool, I think, is also the examples section of the documentation. These examples are like the contract between the developer and the behavior the software has to implement. The automated tests will run these examples.
DAVIN BARAGIOTTA: And if you have this in a KBart file, then you should have these values in the print identifier column. Then it should be OK. Otherwise, if you have such values, it should be an error result. So I think it's super explicit what we expect as values. And as you can see here, it's an ISSN, but it's just like a direct one.
DAVIN BARAGIOTTA: But I have here some real examples. So this one's a real ISSN. These two empty lines are OK. Like I said, that's correct, because this validation doesn't look at multiple columns; otherwise we would have had to address the online identifier and say these are forbidden. So all of these are OK values. And then we run the tests, the automated test suite-- I'll go back to this in a second.
DAVIN BARAGIOTTA: The tests make sure that the validation logic really behaves the way it is expected to in these examples, and that's cool. Finally, the validation logic here, this is documentation extracted from the source code. So that's like the developer talking to the community, saying, hey, that's the way I've implemented things. So reading this should fit, of course, with the RP and with the examples.
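The documentation-as-contract loop he describes -- example values in the docs doubling as the automated test fixtures -- could be wired up roughly like this, runnable under pytest. Everything here (the stand-in validator function, the example list) is invented for the sketch; the real project layout loads the examples from the documentation's partial KBart files rather than inlining them:

```python
import re

def validate_print_identifier(value: str) -> str:
    """Stand-in for the validator's real function (name assumed for the sketch)."""
    if value == "":
        return "OK"
    if not re.fullmatch(r"\d{4}-\d{3}[\dX]", value):
        return "error"
    digits = value.replace("-", "")[:7]
    total = sum(int(d) * w for d, w in zip(digits, range(8, 1, -1)))
    rem = (11 - total % 11) % 11
    check = "X" if rem == 10 else str(rem)
    return "OK" if value[-1] == check else "error"

# Each pair mirrors an example from the documentation page:
# (value as it would appear in the print_identifier column, expected result).
DOC_EXAMPLES = [
    ("0378-5955", "OK"),     # a real, valid ISSN
    ("", "OK"),              # empty is legal for a single-column check
    ("0378-595X", "error"),  # wrong check character for these digits
    ("ISSN 0378-5955", "error"),
]

def test_print_identifier_matches_documentation():
    # pytest collects this test_* function automatically; a failure here means
    # the code and the documented examples have drifted apart.
    for value, expected in DOC_EXAMPLES:
        assert validate_print_identifier(value) == expected
```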
DAVIN BARAGIOTTA: So that's it for the documentation. I just gave you information on the examples. So now, in this case, for the print identifier, there's an OK and an error example. We might have some warnings. One can argue that ISSN10 should be more of a warning than an error. This kind of discussion-- we want to make sure that we all agree in the KBart standing committee on what the validator does before releasing it.
DAVIN BARAGIOTTA: Finally, I'll give you a short example of the tests. The tests will be run automatically, but here I run them manually with pytest, so this is a classic test run. We've run the tests, and what the validator did there is really make sure that the examples in the documentation are correctly implemented. So really, the test source code loads the files, which are like partial KBart files, and says, these are my inputs.
DAVIN BARAGIOTTA: Is it OK? If it's not OK, then the test doesn't pass. And as a developer, if it doesn't pass, I have to fix this up. Voila. So these are the tests. To give you just an idea of the roadmap: right now, we're in development. As you can see, there's lots of validations to be added, and the current validations are not all documented and covered by test suites, so these are things to do.
DAVIN BARAGIOTTA: We're also planning, ideally, alpha tests with the KBart standing committee crew. So once it's ready to be tested, they test it, give feedback, and make sure that the validation-- After the alpha-- well, we want to release the code base. Right now, we have some concerns about the license. We want to make sure that once we release it publicly, we have the right license.
DAVIN BARAGIOTTA: And then we'll release the code base on GitHub and the documentation on Read the Docs. Finally, we think September might be good to go with beta tests. There are question marks because it's volunteer-based development, so it's hard to commit to firm deadlines. But we plan to release to the community. I'll give you a sense of what the contributions can be in a second.
DAVIN BARAGIOTTA: Finally, we hope that in one year we'll have a final version where NISO is comfortable with the tool, and the community too. So I skip the [INAUDIBLE] part, but maybe we might have other recommended practice coverage and a web service, but all this has to be discussed. So where can you contribute, you or your organization? Well, maybe sometime in September.
DAVIN BARAGIOTTA: There's the tool to [INAUDIBLE] your KBART file. Proofread the documentation. Is it correct? Are the examples correct? Maybe we could have more examples. Or maybe there are new validations that should be there-- some use cases, or maybe edge cases that we didn't discover. And if you can code, you or someone in your organization could provide some code.
DAVIN BARAGIOTTA: So Ben, if you want to give the final word.
BEN JOHNSON: Yes. Thanks for tuning in. And so if you want to learn more about the tool as Davin suggested, things that you might want to also contribute to, here's a couple of resources for you. And if we have additional ones during the presentation, we'll put them on the chat. And that covers it. Off to you, Peter.
PETER MURRAY: Great. Thank you, Ben and Davin. Our next topic is package ID. And on that topic, we have Athena Hoeppner and Christine Stone. Athena is the discovery services librarian at the University of Central Florida. Among her roles in NISO activities is co-chair of the Information Policy and Analysis Topic Committee.
PETER MURRAY: Christine is director of product management for discovery and delivery at Ex Libris, a part of ProQuest. For NISO, she has co-chaired the Information Discovery and Interchange Topic Committee with me since 2018 I think, Christine, and has been active in many other NISO efforts. We're starting with Athena.
ATHENA HOEPPNER: OK. Terrific. Let me share my screen. And all right. So yes. We're going to talk about package collection identifiers. And let me just launch right into it. We're going to use these sort of journalistic questions to frame the conversation. So what do I mean by package identifiers?
ATHENA HOEPPNER: What I'm looking for as an ultimate result is the same ID for a given package across all ERMs and knowledge bases, and perhaps other places too. So we hope to have something that's unique, persistent, structured, and that really can be used like an identifier would. My model examples might be ISSNs, ISBNs, ORCIDs. These are things that we in this information science profession are familiar with.
ATHENA HOEPPNER: And I think that they could be useful for package IDs as well. So where would we use them? I mentioned ERMs and knowledge bases before. But they could also go on invoices, on licenses, on discovery metadata, anywhere where it would help with consistency and clarity, and support automation. So why do I want these? From the library's perspective, these are three of the problems.
ATHENA HOEPPNER: And this certainly isn't all of the problems. But these are three of the problems that I face as I try and use knowledge bases and ERMs. I need to match the licenses and invoices with what I'm going to enable in my knowledge bases and in my ERM. I need to be able to migrate from one ERM or knowledge base to another. So if I'm looking at, for instance-- we started with SFX back when it first came around, around 2012.
ATHENA HOEPPNER: And then later we migrated to EBSCO's holdings and linking management and LinkSource. And that process was really quite an undertaking. And now we're looking at migrating into the Alma ERM. And that is going to be that process all over again, because the packages that we're migrating are not called the same thing in each of those places. And I'm going to show you some examples of what that's like here in a second.
ATHENA HOEPPNER: Problem three is understanding which combination of all these different packages that show up in a knowledge base match exactly what our current and historic entitlements are and are going to provide the best user experience when we present those options to our end users who know nothing about our licenses or invoices. All right. So launching into example, problem one.
ATHENA HOEPPNER: Here's an invoice. And this is from EBSCO who is our subscription agent. And it gives me the package title, the list of titles. It's 37 pages long and includes 249 titles. So it's actually one of our smaller invoices and smaller packages for this sort of thing. All right. So I need to match that to my holdings in linking management in EBSCO.
ATHENA HOEPPNER: And I can search-- I can go in and look at the vendor. It's Emerald Publishing Limited. Not too hard to find. There's 161 packages for them. And I can start listing them. So here they are with the package names and title counts. All right. So I can also search by keywords in the packages and I enter Emerald eJournals Premier.
ATHENA HOEPPNER: I start seeing a list here. And it starts getting to be more likely. But it's still over 100 packages long that I'm looking through. Some of them are obviously going to be for a subject subset. Some of them look like they may be for a specific consortia that's not us. It lists the title counts. But none of them match what was on my invoice. So some of them are complete.
ATHENA HOEPPNER: Some of them are variable. And so this is all looking at not mentioning the specific years and details of our holdings. I'm just trying to figure out which of these to turn on. Now as an experienced librarian who's been dealing with this package since we first started subscribing, I'm able to figure it out and know what to turn on. If I need to delegate, it gets really complicated.
ATHENA HOEPPNER: This is how many things we have. It's a problem of scale. We have 1,516,008 total titles turned on. That's in 1,015 packages. So if I'm trying to delegate or trying to manage this even as an individual, it gets to be a bit much. And there's a problem of duration too. So here's a look at Wiley in the holdings and linking management interface.
ATHENA HOEPPNER: And I know from previous experience it looks very similar in SFX. It probably looks similar in Alma when we migrate to that. Each of these packages is for a different year of the database model. We've had the database model for a number of years. Before that, we had the Wiley-Blackwell. So our license goes back into the late 1990s, early 2000s. And it's been persistent from then.
ATHENA HOEPPNER: So how do I know how to accurately reflect that duration of years? So it gets complicated trying to figure out what I'm supposed to turn on. And part of the problem is with the fact that we use package names instead of some consistent identifier that we can look up exactly what that means. So moving on.
ATHENA HOEPPNER: So how would I use these package IDs to address some of these problems, some use cases? I can enable a package in a link resolver using a package ID. Or I can figure out which set of MARC records to download. Or I could enable some discovery sources in a discovery index. For instance-- let me try this again. When the vendors send their data set to be loaded into a discovery index, if one piece of the metadata is the package IDs that the content is included in, that could be useful in some interesting ways.
ATHENA HOEPPNER: And we want to establish some maybe predictable hierarchy of what the packages represent. Let me give some examples of this. All right. So case number one, enabling a package in a link resolver. Let's pretend that this EBSCO invoice came with a package ID. And I've made something up. This is just completely out of the blue I came up with one night.
ATHENA HOEPPNER: This probably isn't what they will look like. But I've created a package ID. And if I was able to look at that package ID on my invoice, match it to a package ID in my holding and linking management, voila. Easy to turn on the right thing. Sometimes it might be not quite so clear as that, particularly if there's all of these different variations. Some of the packages are variable and some of them are complete.
ATHENA HOEPPNER: And what represents what I need? Well, this is where perhaps a hierarchical option might help. If the Emerald eJournals Premier-- ignore the backfile there-- had a sort of default starting package ID, and then the specific subset I'm interested in could have an ending portion that was hierarchical, we could start to sort these things into some logical presentation. Instead of just, here's a list of all the things in alphabetical order and you can't really tell what relates to which overall collection.
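To make the hierarchy idea concrete -- purely invented syntax, in the same spirit as the made-up ID on the slide -- a base package ID could take dot-separated suffixes for its subsets, so that subset relationships and sort order fall out mechanically:

```python
# Entirely hypothetical ID syntax: a base ID for the full collection,
# with dot-separated suffixes for subject or consortial subsets.
FULL_COLLECTION = "EMER.EJP"       # e.g. Emerald eJournals Premier (invented)
SUBJECT_SUBSET = "EMER.EJP.HLTH"   # a subject subset of it (invented)

def is_subset_of(package_id: str, parent_id: str) -> bool:
    """True if package_id sits under parent_id in the hypothetical hierarchy."""
    return package_id == parent_id or package_id.startswith(parent_id + ".")

def sort_by_collection(ids: list[str]) -> list[str]:
    """A plain sort now groups each subset directly under its parent collection."""
    return sorted(ids)
```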
ATHENA HOEPPNER: Here's another example of why I might want hierarchy. This is ABI/INFORM, a very large aggregator database. We have, I think, all the sections of it. But I'm not sure that all the sections have been turned on correctly by the staff member who is maintaining this. Because, let's see, we have one package here with 7,676 titles. I'm not entirely clear if that is the overarching one that includes all these subsections.
ATHENA HOEPPNER: It's really unclear. And I would have to dig into title list and title counts. And it gets muddled. All right. I'm not going to belabor any more examples. I want to hear from you what you were thinking. But I'm going to transition this over to Christine so she can talk about the knowledge base provider perspective.
ATHENA HOEPPNER: I'm going to stop my share.
CHRISTINE STONE: OK. Thank you, Athena. OK. So I'm representing the knowledge base perspective here. Athena already pointed out a lot of ways in which package IDs can make her life as a librarian easier. And a lot of this also applies to us as a knowledge base provider.
CHRISTINE STONE: Specifically, package IDs can help us by serving as a disambiguation point. Knowledge bases generally serve as an intermediary between different systems and parties. A knowledge base serves as a basis for librarians for electronic resource management. But it also serves discovery systems, link resolvers, and other systems and processes. It involves obviously the library, the content providers, the knowledge base providers, and maybe other parties too.
CHRISTINE STONE: So package IDs in this context simplify the communication in that supply chain, how to verify things like data feeds, and also how to track changes that happen quite often and in different packages. Knowledge bases usually tell a discovery system what items like books, articles, et cetera are available to the user. So they are really the basis for the availability calculation. And package IDs if they were added to the discovery data could also help with the rights flow from the knowledge base to the discovery system.
CHRISTINE STONE: It could support recording packages in the licenses. And package IDs can also support more innovative features in knowledge bases, such as the provider zone in Alma. That's our library system that allows direct updates of knowledge base packages by the provider. It all comes back to uniquely identifying packages and the content across different stakeholders in a really complex environment. And I think Athena pointed out quite well how complex this environment can be for the individual stakeholders.
CHRISTINE STONE: And that applies for libraries. It also applies for knowledge base vendors obviously, and often also to content providers. So it generally supports more accuracy, more clarity, and also more transparency in the supply chain. You already heard in the previous presentation about KBART. KBART in short is about optimizing the data transfer between content providers and knowledge bases.
CHRISTINE STONE: It really provides a common format used for sending data from one point to the other. KBART automation was added a couple of years ago to allow for an automated process for updating library-specific holdings in a knowledge base. That's already successfully used by a few providers, including [INAUDIBLE], with different knowledge base providers. When the KBART automation recommendations were published, there was already an idea of a second phase that has currently not started.
CHRISTINE STONE: But there was a discussion about updating individual sub-packages. Currently KBART automation is really focused on a general package that includes, or contains, all the content of one provider. And it activates whatever the library has access to. But there was an idea of also extending this to packages, meaning that it would also be able to automatically update sub-packages like [INAUDIBLE] packages or other sub-packages that a content provider offers.
CHRISTINE STONE: So for this, we really need to have some disambiguation of packages. It's an important prerequisite. And package IDs therefore would provide a good basis here too for that next phase to move on this KBART automation. Now the road is not perfect. And of course there will be challenges with actually implementing package IDs.
CHRISTINE STONE: I think it's important to raise the challenges right at the beginning and make them also part of the discussion of the work item. When I discussed this with our content team, they were very enthusiastic about the idea of having the package IDs. But they also voiced some concerns over how difficult or easy it would be to implement. And Athena already pointed at scale.
CHRISTINE STONE: And that certainly also applies to the knowledge base. We have masses of packages in the knowledge base. So applying package IDs retrospectively to a knowledge base is going to be an operational challenge. So there's always something to consider. A simple solution is often better than a more perfect but more complicated one. So there's always something that we need to take into account when we're doing such a [INAUDIBLE] item.
CHRISTINE STONE: So again, the main challenge for us is really the size-- to apply the new package IDs to everything that's in the knowledge base. We need to compare the different titles of the packages in order to actually map new package IDs. And in a knowledge base where you have more than 20,000 packages and between 3,000 and 4,000 providers, that's just really, really a challenge. Another challenge is that packages sometimes grow in a knowledge base over time.
CHRISTINE STONE: Libraries, for example, added or contributed things, publishers may change the way they treat journal histories, or there are conflicts between perpetual access and other offerings, and so on. So that can lead to a package not having exactly the same content in the KB as on the provider website, even if it's just a difference in numbers. That is really confusing.
CHRISTINE STONE: So it will definitely be a lot of operational work to adjust any knowledge base to new package IDs. But Rome was also not built in a day. I'm very positive about this work item. I really think that this is overdue. We should really go and work on it and get it done. We just need to keep in mind that there are challenges, and they need to be somehow mitigated as part of the work item discussions to make the recommendations doable at the end of the day.
CHRISTINE STONE: So when are we going to have the package IDs? There is of course a NISO process to bring a project forward. First of all, we need to submit a proposal. That's what we are working on at the moment. That's the work item title, the background, the problem statement, the statement of work, including the scope, the partners, participation, and timeline.
CHRISTINE STONE: There's an approval process. I would not expect this to be long, actually, provided the proposal is clear and realistic in its scope and timeline. Then we need to form the group. And obviously we need the volunteers who are prepared to dedicate some time to this project, schedule meetings, and do the work, and once ready, submit the proposed recommendations for public comment.
CHRISTINE STONE: Then we of course need to reply to comments and submit the final draft, which then goes through the approval process with the NISO topic committee. I already mentioned it in the context of the challenges we're going to face with this new project. And I would like to emphasize it again. The goal is a practical solution that is workable by all stakeholders and provides benefits to all stakeholders.
CHRISTINE STONE: That's the type of recommendation at NISO that can really be successful. So who should be involved? I should say everyone. Of course knowledge base vendors, discovery vendors, content providers, electronic resources librarians, metadata librarians and experts, and identifier wonks. These are the obvious stakeholders.
CHRISTINE STONE: But there might be others who are interested in this project and would like to contribute and help driving it. And we really need you. We need volunteers. Don't forget that this is only successful if we have enough people who are contributing and who are willing to work on it. And we hope that you are interested. Which brings me to the end of the presentation.
CHRISTINE STONE: Let's start the discussion. And we are looking forward to hearing from you. And back to you, Peter.
PETER MURRAY: Thank you, Athena and Christine. So this concludes the prepared recording part of the session. We will now move to the Zoom meeting for reactions and discussions of these topics and related ideas. Our plan is to be together in the Zoom meeting for about 10 minutes for general questions and discussion of these topics. We will then open up two breakout rooms in Zoom, one for each of these presentations where you can ask specific questions and explore each topic more fully.
PETER MURRAY: We're planning for about 20 minutes in the breakout rooms and then come back together at the end of our scheduled time for final thoughts and observations. One of the things we'll be doing as we gather at the end is to identify one or two main ideas from this session as we identify areas of new standards, best practices, or publications, or educational topics such as the package ID.
PETER MURRAY: We also want to find people interested in taking ideas and, as Christine said, drafting a more concrete proposal for the NISO topic committees to consider. With that, we'll close out this recording. And we'll see you in the Zoom meeting. [MUSIC PLAYING]