Name:
Making the business case for investing in metadata Recording
Description:
Making the business case for investing in metadata Recording
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/e085de7c-74c5-4f59-b80f-521580f034c0/videoscrubberimages/Scrubber_3.jpg
Duration:
T00H41M24S
Embed URL:
https://stream.cadmore.media/player/e085de7c-74c5-4f59-b80f-521580f034c0
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/e085de7c-74c5-4f59-b80f-521580f034c0/Making the business case for investing in metadata 2-NISO Pl.mp4?sv=2019-02-02&sr=c&sig=hZ%2BV7fDxAvwFNSAN%2FA33wIL7wOueMyxGiC7F544tRew%3D&st=2024-12-26T22%3A47%3A32Z&se=2024-12-27T00%3A52%3A32Z&sp=r
Upload Date:
2024-03-06T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Hello, everyone, and welcome to the session on building a business case for metadata.
Now, as we all know, metadata provides context and provenance to raw data and methods, and it's essential to both the discovery as well as the validation of scientific research. Metadata is also necessary for finding, using and properly managing scientific data, and it plays a crucial role in increasing the quality of data within information systems. So today what we're going to do is discuss the relevance of metadata within the research lifecycle, from research organizations to funders to publishers and researchers, and why it is important to research.
We'll have Josh Brown from MoreBrains Cooperative, who will provide us with insights and examples into how a business case for investment in research infrastructure can be built using metadata. Heather from Access Innovations, Inc. will provide her take on how metadata contributes to knowledge management and how to measure its value. Michelle will bring her expertise on metadata and its impact on research outputs.
And Julia will round it out with her take on metadata and how publishers can influence, and benefit from, the uptake and implementation of metadata within publishing systems. So you're in for a treat, because these are really good speakers with a lot of knowledge in this field. I'm now going to hand it over to Josh. Good morning. Good afternoon.
Good evening, everyone. My name is Josh Brown. Thanks, Melroy, for the introduction and for the opportunity to speak at NISO Plus today. I'm going to be talking to you about a way of valuing metadata, and it's actually about the opportunity costs of poor metadata re-use, which we use as a way of illustrating that value.
I'm one of the four full members of the MoreBrains Cooperative, and I'll be talking to you today about some projects that we conducted analyzing metadata re-use in the Australian and UK research systems. Now, the key thing about our approach is that we are inferring the value of metadata by exploring the costs of not using it effectively. Just keep that in mind as we go through this presentation.
So I'd like to start with a little bit of context for our work. We focused on identifiers. You could say PIDs, persistent identifiers, are a keystone form of metadata. They are a source of a huge amount of information in and of themselves: the presence of a PID in a metadata record implies a relationship between the thing it identifies and the subject of that record. But there's also a huge amount of metadata stored in registries about the things that are identified.
There are multiple ways you can derive value from PIDs, but today we're going to focus on number one, which you can see highlighted on this slide: metadata re-use. That is where the structured metadata stored in registries is pulled into other information systems throughout the information ecosystem. And this little infographic on the right of the screen shows our vision of how this flows through PID registries between funders, researchers, institutions, publishers, and content platforms.
The key thing here is that not re-using that metadata actually comes at a price. Our research has shown that there is a huge amount of duplicated and, frankly, wasted effort and expertise across the research system, which could be eliminated by more effective use of PIDs plus their metadata. So we were able to assign a value to that in time, and therefore money, and to a large extent the knock-on effects on national economies, given the scale of the research sector in the countries we were looking at.
So, just for some context, it's a pretty terrifying volume of effort being wasted. We focused on time and, by extension, the cost associated with the re-keying of metadata into applications like research management systems or grant reporting platforms. It's a well-identified and well-documented challenge, and it has immediate resonance. The scale of it, as you can see on this slide, is pretty significant, with estimates of the share of researchers' time spent on this kind of work ranging from 10% to 42%.
So it's a pretty tangible thing that's well researched. Now, the other potential benefits I mentioned on an earlier slide are no less important; quite the opposite, they may be even more strategically valuable in the long term, and I'll come back to those in a little while. The first step was developing this method for valuing the impact of PIDs on the research information system, which was conducted in the UK as a cost-benefit analysis.
We started by calculating the time spent on basic, everyday metadata-entry tasks. The key thing here is that this is work that happens repetitively, so bear that in mind. Drawing on previous research (the citations are on this slide), we identified that the time taken to enter project or grant information is about 10 minutes.
Typically, that's just a basic description of the grant or the project, not the full thing, not everything that went into a grant application, for instance. The length of time needed to enter publication information averages out, based on previous research by Rob Johnson and the team at Research Consulting, at about 6.73 minutes. Then you scale that up across the volume of activity.
So if we look here: 36,000 grants issued a year in the UK, and nearly a quarter of a million publications per year from UK-affiliated institutions, a number that continues to grow. Looking at data from the Dimensions portal and the UK's Gateway to Research, you can see the scale of that gets pretty big pretty fast. So we measured this. We looked at the number of researchers using Higher Education Statistics Agency data, and I'm not going to go through this table line by line, so please don't feel you need to read it.
There's a DOI link to the full report if you want to go through it all. With the help of Paul Clayton, a forensic accountant working at Jisc, we built forecasts. We estimated these savings, offset them against a reasonable adoption curve, and took into account the costs of a UK support program to drive PID adoption and the costs of those implementations and integrations in systems.
Offsetting all of that, we arrived at savings of about £5.7 million at the end of five years, which is significant but not compelling. It does mean, though, that you recoup the cost of a PID integration.
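As a rough sketch of the scale-up arithmetic just described (the volumes and per-task minutes are the figures quoted in the talk; the hourly staff cost is a placeholder assumption, and the report's actual model is considerably more detailed):

```python
# Back-of-envelope version of the UK calculation described above.
# Volumes and per-task minutes are from the talk; the hourly staff
# cost is a hypothetical placeholder, not the report's figure.
GRANTS_PER_YEAR = 36_000
PUBLICATIONS_PER_YEAR = 250_000
MINUTES_PER_GRANT_ENTRY = 10.0
MINUTES_PER_PUBLICATION_ENTRY = 6.73
HOURLY_STAFF_COST_GBP = 25.0  # assumption for illustration only

total_minutes = (GRANTS_PER_YEAR * MINUTES_PER_GRANT_ENTRY
                 + PUBLICATIONS_PER_YEAR * MINUTES_PER_PUBLICATION_ENTRY)
total_hours = total_minutes / 60
print(f"{total_hours:,.0f} hours/year, roughly "
      f"£{total_hours * HOURLY_STAFF_COST_GBP:,.0f} in staff time")
```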
So, exploring this methodology further, we were able to work with colleagues in Australia, Melroy included, to do a cost-benefit analysis there, which let us expand our understanding, extend the method, and go a little beyond what we were able to do in the UK context. And again, you can see the scale of the Australian research system is significant: about 6,000 grants that we could identify from Australian funders alone, not including international money flowing into the Australian system. The volume of publications is getting close to that of the UK and growing even faster. But the significant difference in methodology here is that we were actually able to find out a little more about how many times that metadata is re-used.
For the UK study, we had assumed it was only used once, because we wanted to be very conservative and we didn't have evidence to support any higher numbers. So here we did a survey. We asked repository managers and research managers at universities across Australia and, from 27 responses from 23 unique institutions, it averaged out at data-entry tasks being conducted 3.25 times for grant information and 3.1 times for publication information.
This built on the work previously done in the paper I cited earlier about the time spent entering project information, which found that project data is often input into systems as many as ten times. So, looking at this, we move on to another scary table for you to digest; again, you've got the link there.
So you can do that at your leisure. But it comes out at a truly horrifying 38,000 days' worth, AUD 24 million, of time and effort wasted every year on repetitive data entry, which is, I think, pretty astonishing, and a much more compelling number. We then decided to revisit the UK analysis with our extended methodology. As you can see here, those figures come out at something really quite astonishing: in the UK system, nearly £19 million is wasted in staff costs every year, just by multiplying the number of entities by the number of re-uses and the number of minutes, then setting that against average salaries. So we've got £18 million a year, 55,000 person-days a year, being wasted on repetitive metadata entry. Now, when we offset that against the costs of the same process I talked about before, the approach proposed in the UK to drive PID adoption, you come out at net savings over five years of £45 million, which I think is a really significant finding, and we're really looking forward to seeing how this plays out.
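To see the shape of that revised estimate, the earlier sketch can be extended with the survey's re-use multipliers. The constants are re-declared so the snippet stands alone; the working-day length and hourly cost remain placeholder assumptions, and the report's model covers more task types and staff categories, so its totals differ:

```python
# Extended sketch: each entry task is repeated the average number of
# times the Australian survey found metadata is re-keyed.
GRANTS_PER_YEAR = 36_000
PUBLICATIONS_PER_YEAR = 250_000
MINUTES_PER_GRANT_ENTRY = 10.0
MINUTES_PER_PUBLICATION_ENTRY = 6.73
GRANT_REUSES = 3.25          # from the survey quoted above
PUBLICATION_REUSES = 3.1
WORKDAY_HOURS = 7.5          # assumption for illustration only

total_minutes = (GRANTS_PER_YEAR * MINUTES_PER_GRANT_ENTRY * GRANT_REUSES
                 + PUBLICATIONS_PER_YEAR * MINUTES_PER_PUBLICATION_ENTRY
                 * PUBLICATION_REUSES)
person_days = total_minutes / 60 / WORKDAY_HOURS
print(f"{person_days:,.0f} person-days/year spent re-keying metadata")
```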
But we want to say that there are things beyond this. One of the limits of our work so far is that it takes a very, very simple approach: just metadata re-keying, literally the time spent banging on keyboards by research information experts, research managers, trained researchers, and trained librarians, putting in data that doesn't need to be typed in. That introduces errors, wastes time, and causes distractions.
So that is something that we can quantify relatively straightforwardly. But I'd like to talk about some of the other benefits we mentioned earlier. The first is automation, and this is where simply the presence of a PID in a metadata record or an information system triggers an action. Now, this is much harder for us to quantify with the evidence that was available to us in the projects I talked about.
But the key thing here is to think about examples. Grant IDs could be associated with ROR IDs for institutions and funders, ORCID iDs for investigators, RAiDs for projects, and so on. Now just think about some examples of that automation. If those things were linked consistently and reliably, you could sort harvested publication data by grant DOI, or send a notification whenever a new association between two PIDs has been made.
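As an illustration only (this is a hypothetical sketch, not a system from the projects described), both of those automations amount to simple operations once the PIDs are reliably present in the records:

```python
from collections import defaultdict

# Hypothetical harvested publication records carrying PIDs in their metadata.
publications = [
    {"doi": "10.1234/pub-1", "grant_doi": "10.9999/grant-42", "orcid": "0000-0002-1825-0097"},
    {"doi": "10.1234/pub-2", "grant_doi": "10.9999/grant-42", "orcid": "0000-0001-5109-3700"},
    {"doi": "10.1234/pub-3", "grant_doi": "10.9999/grant-7", "orcid": "0000-0002-1825-0097"},
]

def sort_by_grant(pubs):
    """Group harvested publications by the grant DOI in their metadata."""
    by_grant = defaultdict(list)
    for pub in pubs:
        by_grant[pub["grant_doi"]].append(pub["doi"])
    return dict(by_grant)

known_links = set()  # (grant DOI, publication DOI) pairs already recorded

def notify_new_associations(pubs):
    """Print a notification whenever a new grant-publication link appears."""
    for pub in pubs:
        link = (pub["grant_doi"], pub["doi"])
        if link not in known_links:
            known_links.add(link)
            print(f"New association: grant {link[0]} -> publication {link[1]}")

notify_new_associations(publications)
print(sort_by_grant(publications))
```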
So the value of automation can go beyond time saved to include harder-to-quantify things like more complete information and more timely information processing. Another example here is aggregation and analysis. There's a huge amount of data held in registries, and aggregating this information about entities and the relationships between them, at the institutional or national scale, can provide a range of strategically crucial analyses and insights.
So think about the coverage and completeness of registries continuing to grow, making them more and more valuable as a source of increasingly authoritative information. Then think about knowing all of the grants and people associated with a particular funder or a particular program, and how that would increase the likelihood of capturing data about outputs linked to those entities and improve strategic decision making.
Think how that could enable you to track the ground truth of a project and follow it through time to discern a real, evidenced chain of impact into the future. That supports evidence of return on investment in research and innovation expenditure, but it also enables that expenditure to be managed more efficiently and effectively.
So that's my final slide. It just remains for me to say thank you for listening, and do stay in touch; that's my email address. I'd also like to thank the sponsors of the projects I've been talking about today: Jisc in the UK, and the Australian Research Data Commons and the Australian Access Federation in Australia. Thanks for your attention. Thank you for that, Josh.
And now Heather will give us a quick insight into how metadata contributes to knowledge management and how to measure its value. Thank you, Melroy. Yes, I'd like to talk today a little bit about metadata's role in knowledge management and ways of measuring its value: specifically, how does metadata support knowledge management, and how can you express the return on investment in metadata?
And I'll be speaking specifically about semantic metadata. So without really realizing it, you're surrounded by semantic metadata. Think taxonomy. You probably have a place in your house that looks something like this, and this is an implementation of a very simple taxonomy.
So you know where this goes in that organization system, and where something like this goes. But what about this? In case you aren't familiar with it, this is a spork, a hybrid spoon-and-fork utensil that somehow manages to persist in our world. In your flatware-drawer taxonomy, you would have to expand your vocabulary, or that insert tray, to accommodate this utensil.
With semantic metadata, we label things or concepts with words, and in doing so, we put a handle on them for retrieval. As an example, your earliest memory probably coincides with the time in your life when you learned to speak, when you were able to attach a word or words to some event. You are able to retrieve that memory because it has words attached to it. So what is metadata's role, and specifically semantic metadata's role?
We label things with words. If we don't have a word for something, it's missing a label, so we're missing that data, and so we're missing information. Ultimately, semantic metadata is how we find information. Like I said, words help us to organize our thoughts. They're verbal symbols of our knowledge, and we can organize that knowledge from a random collection of thoughts by using various knowledge organization systems.
These range from simple systems, like a controlled vocabulary, to more complex and comprehensive systems like taxonomies, thesauri, and ontologies. All these systems have a common point: they help us to label and organize our knowledge to make it more useful, and to enable effective storage and retrieval of that knowledge. Ultimately, it's about how we find information. A taxonomy is a controlled vocabulary for a subject area with its terms arranged in a hierarchy.
The purpose of a taxonomy is to index or describe the subject matter of a document or collection of documents. It's the list of words that we use to label that content. A taxonomy is a central part of a knowledge management system, and it provides the most efficient way for users to access content. By using the controlled vocabulary terms of a taxonomy to label concepts in a clear, consistent, and standardized way,
we can represent materials on those concepts and store and retrieve them efficiently. We remove them from a dead-end miscellaneous folder, or from other forgotten files, and make them available for dissemination and use as knowledge assets. So how is a taxonomy used in knowledge management? A taxonomy reflects the concepts in a document or collection of documents that are important to stakeholders. The taxonomy is used to describe the subject matter of documents, what they are about.
In taxonomy work, we talk a lot about this. The taxonomy is the basis for indexing or categorizing the content, and using a well-designed taxonomy results in more efficient retrieval, leading to better productivity and less user frustration in searching. It saves time and money, not to mention the searcher's nerves. And a taxonomy directs the search to targeted knowledge. So the metadata for this spork challenge should be structured with multiple broader terms.
So spork goes under both forks and spoons, and a search for either forks or spoons will also retrieve sporks. It still doesn't solve the physical problem in your flatware drawer, though.
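As a minimal sketch of that retrieval behavior (the terms and structure are illustrative, not from any production vocabulary), a polyhierarchy can be modeled as a map from each term to its broader terms, with search expanding downward from the queried term:

```python
# Tiny polyhierarchical taxonomy: a term may have multiple broader terms.
broader = {
    "spork": ["forks", "spoons"],  # two parents: the polyhierarchy
    "forks": ["flatware"],
    "spoons": ["flatware"],
}

def narrower(term):
    """All terms whose broader-term chain passes through `term`."""
    found = set()
    for child, parents in broader.items():
        if term in parents:
            found.add(child)
            found |= narrower(child)
    return found

def search(term):
    """A search on a term retrieves the term plus everything beneath it."""
    return {term} | narrower(term)

print(search("forks"))   # {'forks', 'spork'}
print(search("spoons"))  # {'spoons', 'spork'}
```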
So let's talk about measuring the value. Traditional return-on-investment models are calculated using actual historic data about income and expenses: think things like infrastructure, hardware, software, furniture, and labor. These are things that leave a financial paper trail. We can get hard numbers for them, and numbers tend to inspire confidence. However, we don't always have hard numbers to use in our calculations for metadata. As an example, what is the return on investment for an exercise machine?
You spend money on it, several hundred or a few thousand, and then do you get money back when you use it? Does it generate revenue? Maybe, but only if you're the owner of a gym. Possibly the return on investment, the value of that machine, is in the improved performance of the user: the user gets stronger, faster, maybe thinner, but they don't get money back.
So is the investment in an exercise machine still worth the expense? A lot of people seem to think so, because sales are good. Another option for expressing value, an alternative to ROI when building a business case for metadata, might be the Total Economic Impact methodology developed by Forrester. They suggest using this phrase exactly as it is, with the blanks filled in, and it can be a challenge to fill in those blanks.
So: we will be doing [this project or activity] to make [a certain pain point] better, as measured by [some kind of metric], which is worth [the estimated payback]. I filled it in with an example: we will be doing a semantic metadata project to make search better, as measured by increased per-article sales or decreased customer complaints, which is worth an estimated 80% increase in sales.
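That Forrester-style sentence is essentially a fill-in-the-blanks template; as a trivial sketch, with Heather's example values and hypothetical variable names:

```python
# The Total Economic Impact statement as a fill-in-the-blanks template.
tei = ("We will be doing {project} to make {pain_point} better, "
       "as measured by {metric}, which is worth {payback}.")

print(tei.format(
    project="a semantic metadata project",
    pain_point="search",
    metric="increased per-article sales or decreased customer complaints",
    payback="an estimated 80% increase in sales",
))
```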
Now, we might chuckle at that and say that's a little ambitious, but why not?
Another way to look at the value of metadata is to measure the opportunity cost. The opportunity cost of a particular activity is the value or benefit given up by engaging in that activity rather than in an alternative activity. Effectively, it means that if you choose one activity, for example making an investment in the stock market, you're giving up the opportunity to do something different, say buying a car.
Opportunity costs are perhaps the most obvious way to measure the value of metadata. These include time saved searching. We know that knowledge workers' time is of great value, and up to 30% of it (some people estimate more) is spent searching; those searches are successful less than 50% of the time.
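As a hedged worked example of what those figures imply per worker (the salary and annual hours below are placeholder assumptions, not numbers from the talk):

```python
# Illustrative opportunity-cost arithmetic using the quoted figures:
# up to 30% of time spent searching, under 50% of searches succeeding.
ANNUAL_SALARY_USD = 80_000  # assumption for illustration only
ANNUAL_HOURS = 1_800        # assumption for illustration only
SEARCH_SHARE = 0.30         # share of time spent searching (quoted)
FAILURE_RATE = 0.50         # at least half of searches fail (quoted)

search_hours = ANNUAL_HOURS * SEARCH_SHARE
failed_hours = search_hours * FAILURE_RATE
failed_cost = failed_hours * ANNUAL_SALARY_USD / ANNUAL_HOURS
print(f"~{failed_hours:.0f} hours (~${failed_cost:,.0f}) "
      f"per worker per year on failed searches")
```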
Other opportunity costs could include time to market, the reduction of duplicate effort, and customer satisfaction. That's all I wanted to share for today, so thank you; I appreciate it, and I'm looking forward to hearing from the rest of our speakers. Thank you for that, Heather. And now we're going to hear from Michelle about the impact of metadata on research outputs.
Michelle, over to you. Yes, thank you for the invitation to speak on the impact of metadata on research outputs. This is one of my favorite conferences to speak at. I wear quite a few hats in the library and publishing industries, and I'm always up for pursuing an interesting project, especially one that allows me to improve how metadata works in the scholarly publishing ecosystem.
The business case for metadata is elusive; Heather's already alluded to that, for sure. But I think it can be made, in part, by looking at how metadata affects research outputs from the perspective of a number of participants in the scholarly communications ecosystem. Just a note on terminology before I go any further: today we're speaking about research data, which is the focus of this presentation and can be broadly defined as raw data associated with a more formal output.
And it is often used, as Melroy alluded to earlier, in the validation and reproducibility of scientific research. That being said, what I have to say here can actually be applied to multiple content types in our ecosystem as it exists today. So, some perspectives on metadata in research outputs: I'm focusing on the four that you see listed here, and I will elucidate them in just a second.
They are all relevant to finding and/or using data, and/or to making business decisions about how data should be handled in our ecosystem. The first is an end user. This person searches for data, and they care about whether they can find it easily and whether it pertains to their research or other needs. Researchers are also end users, but specifically they are looking to use research data and roll it into their publications.
Can they make their publication impactful? Can they measure how useful a set of data is for their research? Third, a funder: for a funder, metadata can demonstrate how research was used, and they want to know whether it demonstrates the viability of funding similar or related projects in the future. And then finally, from a content provider's perspective, metadata about data can facilitate the very basic need of keeping a publisher or other provider of content in business.
How do you keep your business running with metadata? So the question is: what metadata matters? Everybody in the ecosystem, all four of the users we just highlighted, feels the impact of metadata, both positive and negative. But measuring the impact of any one piece is difficult. What is the value of a title? Of an author, a DOI, a subject term, or any other piece of information that you attach to a given set of data?
To this end, I'm going to take us out of thinking about data sets for a couple of minutes and dip back into the world of books, to explore what providers of research data can learn from books metadata. Why providers of research data? Because the end users, be they someone searching for discovery, a researcher, or a funder, all fundamentally see a product once it has been produced and is available for consumption in one way or another.
And on top of that, NISO so often deals with publishers of data, publishers of content, and other types of media. So we're speaking to you today: you, as providers of research data, can also learn something from books metadata. I have a quick test case here, a project that I'm working on with Lettie Conrad; we were sponsored by Crossref to measure the impact of metadata on books in Google Scholar.
We're measuring both book- and chapter-level metadata in this case, and we are focusing on end-user discovery, the other side from the provider who's supplying the content to begin with. While providers of content, of course, are the ones making the business case, end users drive the usage that metadata enables, which is another reason why we focus on them.
We found that certain pieces of metadata actually do matter: DOIs in particular, which I'm sure Josh can appreciate, as the DOI is our primary industry-accepted system identifier. DOIs are especially helpful for title-level searching, but DOIs alone are not enough; titles and other names are also key for finding books within a search string. However, consider book chapters, and here's a connection to data sets.
Chapters are not exactly appendages, but there is a parent-child relationship between book chapters and the book at the title level. Yet book chapters do not get a correlative boost in discoverability from chapter metadata. Our hypothesis is that the weak impact of chapter metadata arises from a lack of systematic handling of metadata specifically for book chapters, and from gaps in the standards needed to create connectivity between chapters and their parent books.
So how do you link chapters one, two, three, and four with the parent object, the title of the book, and its associated metadata? In other words, the argument is that we need to handle book chapters better and more systematically to make a difference for end-user discovery, and the same goes for the outputs that exist alongside a main publication: data sets, video, et cetera.
So, given the state of things in our industry, what should an organization consider with metadata and research outputs like data sets? First, what are your organization's key outputs, and how do they relate to a version of record, be it a book, article, book chapter, data set, or something else? Do you need metadata for data sets, articles, books, and videos all at the same time?
The reality of creating metadata for different formats and keeping that data linked and in sync requires using many different types of relational linkages in the metadata schema, and in the metadata records themselves; relation types such as "is published in" or "is cited by" are key for this. So how do your schemas and IDs accommodate that currently? Do they at all? By the way, DataCite has a great set of information on this, and a link will be provided in the notes for these slides.
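To make that concrete, here is a hedged sketch of DataCite-style relatedIdentifiers expressed as Python dicts. The DOIs are invented, and the exact field names and relation types should be verified against the DataCite schema documentation just mentioned:

```python
# A dataset pointing at the article it underpins, and a chapter
# pointing at its parent book. DOIs are fabricated examples.
dataset_record = {
    "doi": "10.5555/example-dataset",
    "relatedIdentifiers": [
        {  # dataset -> the article it supplements
            "relatedIdentifier": "10.5555/example-article",
            "relatedIdentifierType": "DOI",
            "relationType": "IsSupplementTo",
        },
    ],
}

chapter_record = {
    "doi": "10.5555/example-chapter-03",
    "relatedIdentifiers": [
        {  # chapter -> parent book: the parent-child link discussed above
            "relatedIdentifier": "10.5555/example-book",
            "relatedIdentifierType": "DOI",
            "relationType": "IsPartOf",
        },
    ],
}

# A consumer can follow the typed links to reunite child and parent objects.
for rel in chapter_record["relatedIdentifiers"]:
    print(f"{chapter_record['doi']} {rel['relationType']} {rel['relatedIdentifier']}")
```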
Second, where do you find friction in your system that slows the flow of metadata and discovery? Is it in creating item-level metadata? Is it working with your vendors? Is it getting good information back about content usage? Focus on what you can control inside your organization, and on what you send out to your clients, vendors, and customers.
Third, what metadata are you providing across channels? Is it sensitive to the needs of each channel while still being robust? What we found with our Google study is that the standards we currently have, which work for traditional means of dissemination, don't work with Google Scholar, and Google Scholar is not integrated well with many of our traditional pathways for disseminating information. Fourth, what information is missing from your pipeline for making business decisions about investing in metadata?
What data do you need to make business decisions? Is it feedback from your clients, industry support, or other guidance that doesn't currently exist? And then fifth, what data needs to flow back into your organization? What analytics are missing from the data that you currently get? So these are the truths about metadata.
In order for a positive impact to be felt in the discovery of research, it is necessary to feed large amounts of good data into the ecosystem and keep it flowing. The experience of good metadata is frictionless: it provides a path of easy discovery to the end user. When metadata is doing its job, you will not see what it does for end users, researchers, and funders; discovery of information should be easy and painless, maybe even enjoyable.
For content providers, when metadata is doing its job, the analytics are robust, the denials are correct, and usage can be accurately measured. For all those reasons, the ROI of metadata is very important to quantify, but also massively elusive to pin down when figuring out what you need to do for your organization. Thank you. Thank you for that, Michelle.
And now Julia will talk us through how metadata and publishers can work together. Julia, over to you. Hello, everyone. So I'm going to finish off the session by talking a little bit about the role publishers can play in investing in metadata, and offering a publisher's perspective on building out that business case.
So I think the somewhat short and glib answer to the question of how publishers can contribute to and benefit from the use of metadata in publishing is: how can we not? As has been clear from all the other presentations today, there's so much at stake when it comes to getting metadata right. And I think it's a reality that metadata are central to modern publishing, just as they're interwoven throughout our modern lifestyles with the technologies that we use.
And they really play such a crucial role in providing that contextualization for digital content in their many varied forms. And as we have increasingly disparate digital objects, it's essential that as much as possible, we're using consistent and persistent metadata so that we can ensure each piece of content, no matter what it is, no matter how granular it is, is able to retain that context wherever it's encountered and by whomever is encountering it, whether that's humans or not.
So I think the importance of metadata is only going to increase as we add more varied research outputs and as we see the complexity of the research environment grow. In many ways, I would argue that the production, distribution and maintenance of high quality metadata, well-structured metadata, is one of the most essential roles that publishers have to play in the modern publishing environment.
Unfortunately, though, it's something that is all too easy to overlook, and as previous presenters have spoken to, in essence, the better we do metadata, the more invisible it is. So while on the one hand, the business case for metadata largely makes itself without metadata, we cannot support the integrity of the research record, nor can we effectively drive discovery, access and the impact of our content.
On the other hand, it's work that, when we're doing it well, is really easy to overlook, and the better our processes and systems around metadata get, the less visible they are to the many stakeholders throughout the research environment. So this is an area where the role that publishers play is all too easy to undervalue. And I suspect a surprising number of people, in all sorts of roles throughout the environment of research and publishing, may still view metadata as somewhat distant from the core process of publishing.
But in reality, it's embedded into every step and that's only going to increase. So I think when we're building out these business cases in our organizations for the necessary investment, it's really crucial to show how capturing and generating and sharing well structured metadata in the right ways at the right times is really essential to fulfilling so many of the use cases that we have and really importantly showing the impact of that to our customers and to stakeholders.
So I would argue that the business case from a publisher perspective is primarily twofold. The first part is the role that metadata play in enhancing the researcher experience, their experience as both authors and readers. The second is, increasingly, the role metadata can play in navigating the growing complexity of compliance: thinking about requirements, and also demonstrating the impact of content for customers and stakeholders.
So, lots of different things going on throughout that entire research journey. If we think about the researcher aspect and how we can improve the researcher's experience: obviously, as Josh talked about earlier, there's a huge amount at stake here. So much time is being spent, and wasted, on re-keying of data. So I think the really critical piece here is looking at how we can collect that metadata as far upstream as possible.
How do we do that once, in a consistent way that uses recognized industry standards, so that the metadata can then flow all the way through the publication process and downstream to post-publication? And as part of that, how do we make sure that we are using standards to ensure metadata is being passed effectively from system to system throughout that whole process?
There's so much time that can potentially be saved here, which would be so valuable to the researchers themselves, but it would also reduce the risk of errors in the publication process. If we're thinking about the reader aspect of the research experience, obviously how we use metadata has moved on a long way from the days of searching for journal articles through library catalogs, for example.
We're working in a much more complex environment in terms of supporting discovery, access, and impact, one where you're having to work with multiple different organizations all the time to make sure that we are pushing our content out, making it easily accessible, making it easily discoverable through Google Scholar, through abstracting and indexing, through working with library institutions, for example. And we have to make sure that we're continuing to invest in all of those relationships.
The more comprehensive and useful our metadata become, the better able we will be to support these reader journeys, to make sure that readers can find the most relevant content for their research needs and do so easily. And as part of that, we can drive and demonstrate the impact that research is having to our customers who are purchasing access to this content, or who are benefiting from access through open access and read-and-publish deals, for example.
Now, turning to the second business case: I think we're all aware of just how complex the role of compliance is becoming when it comes to publishing. Authors are expected to navigate a lot of different requirements. If you have a multi-authored paper, there are going to be requirements from different funding bodies and potentially from different institutions, for example around opportunities for support with publication fees.
And we really need, as an industry, to challenge ourselves on how we can use metadata to support this compliance, how we can use it to help us navigate that complexity, and do so in a way that is easy and intuitive for researchers and also helps to support the funding agencies and the institutions who are supporting this content. As part of that, we need, again, to really challenge ourselves on what systems and workflows we could adopt or build, particularly thinking around funder metadata and the role that institutional identification has to play, so that we can support that exchange of impact information.
How can we help make it easier for organizations to see the impact of the content they are funding? What's happening to that content? How is it being used in this increasingly complex environment? I think we are all going to benefit from maximizing our ability to support this compliance and showcase the impact that content is having throughout this entire process.
And any efficiencies here are going to become more and more important, because I think this is only going to get more complicated. So, turning briefly from the why to the how: I think it goes without saying, but it can't be overstated, that we need to work together on this. We need to make sure that the metadata we are producing, and the time we're investing into those metadata, are worthwhile.
So the metadata need to be high quality and well structured. We need to make sure they're compliant with the FAIR principles, and, as much as possible, that we are ensuring the portability of metadata throughout the research and publication ecosystem. Now, to do that, we're all going to need to continue to invest in our data governance and in the expert resources we have to collect and share these metadata.
And consistency, consistency, consistency. We need to learn from previous examples: try imagining publishing without DOIs; it's pretty hard to envisage in the world of journals, for example. We need to get to a point where many other forms of metadata are as ubiquitous and as common as some of those core identifiers we're already using.
And I think this is where the role of organizations like NISO is so critical, in helping us to come together to address those challenges. Because, as I say, this is going to take resources, but the more we can act together and work collaboratively, the more we are all going to benefit. And exactly as some of the other presenters were saying earlier, there's a lot at stake.
There's a lot of time that's being wasted and not used to the greatest effect at the moment. So this is a worthwhile investment so that we can really make sure that we're supporting that full environment and that we're demonstrating the impact of content to all of the actors in this space. So with that, thank you very much. And I believe we're going to open it up to questions.
Thank you, Josh, Heather, Michelle, and Julia, for this lovely presentation. And to our audience attending this session: please join us in the conversation that follows, so that we can talk about this and take it further.