Name:
Standards for data management plans Recording
Description:
Standards for data management plans Recording
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/ebaf59ab-aae2-44bb-8db9-4f5c86200429/videoscrubberimages/Scrubber_3.jpg
Duration:
T00H38M16S
Embed URL:
https://stream.cadmore.media/player/ebaf59ab-aae2-44bb-8db9-4f5c86200429
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/ebaf59ab-aae2-44bb-8db9-4f5c86200429/Standards for data management plans-NISO Plus.mp4?sv=2019-02-02&sr=c&sig=WRwb7LeLO5sqg%2FwXYUPIIDAa%2BqARGzJCs%2FtinqiwqUI%3D&st=2024-12-21T14%3A28%3A56Z&se=2024-12-21T16%3A33%3A56Z&sp=r
Upload Date:
2024-03-06T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
GEORGE WOODWARD: Thank you for joining us for this NISO Plus session on planning for standards for data management plans.
GEORGE WOODWARD: My name is George Woodward from Oxford University Press. I'll be the moderator for the session. We have three terrific speakers: Michael Cooke, Senior Technical Advisor in the Office of the Deputy Director for Science Programs, US Department of Energy Office of Science; Maria Praetzellis, Product Manager for Research Data Management at the California Digital Library;
GEORGE WOODWARD: And Jennifer Gibson, the Executive Director of Dryad. We look forward to you joining us for the live discussion and hope you enjoy the presentations. Over to Michael.
MICHAEL COOKE: Thank you very much. It's a pleasure to be here at NISO, so I look forward to the engaging conversation with the community and hope to share some of the perspective from a US federal agency about data management plans.
MICHAEL COOKE: Our data management plans began with the OSTP public access memo, usually called the Holdren memo, in 2013. This aimed to increase access to the results of federally funded research, and to ensure that, to the greatest extent and with the fewest constraints possible, scientific research results were made useful to the public. That includes peer-reviewed publications and digital data. The requirements of the overall memo applied to many of the larger funding agencies:
MICHAEL COOKE: it included free public access to publications with a 12-month embargo period, and it required recipients of grants and contracts to develop data management plans for how they will share their data. This led to the development of the 2014 DOE Public Access Plan; in effect, the policy that we implemented with that plan shared some high-level data management principles and then some specific data management requirements.
MICHAEL COOKE: The principles were related to the vision of the Holdren memo. We want to enable scientific discovery and accelerate the process of discovery at the Department of Energy through this data sharing. We want data to be shared and preserved, and enable validation of scientific results, and we want cost management of maintaining the data to be reasonable and justified.
MICHAEL COOKE: The data management plan requirements follow from those principles. Of course, the plan that is provided by the applicant, the person or PI running the project, is to explain how they'll share and preserve data that would enable validating research, have a plan for making the data associated with publications accessible, and have a plan for how the availability of their data management resources will enable that sharing, and address potential limitations, say due to privacy concerns, security concerns, confidentiality, or other limitations that prevent open and public sharing.
MICHAEL COOKE: Within the Office of Science, our data management plans are reviewed as part of the proposal merit review process. We get the outside reviewers, subject matter experts, to comment and provide information about whether that's helping fulfill the scientific objectives and really enabling that validation according to the best practices in a community. There might be additional requirements or review criteria for data management plans depending on the specific way we're asking for applications.
MICHAEL COOKE: For instance, we might require depositing data in a very specific repository, and proposals can include requests for funding to implement their data management plan, and that's another consideration during the review process. So the full DOE policy and Office of Science policies are online with a lot of additional guidance about how to put together a data management plan. We recently updated that guidance to help the community generate better data management plans and understand better how it fits into our process.
MICHAEL COOKE: We have new suggested elements of data management plans online as well as updated guidance for reviewers. Now this became effective at the beginning of 2022. We note that we didn't change any of the formal data management plan requirements that are in our solicitations. We just added guidance to help with these aspects of it. So our suggested elements offer some guidance for what to address in a data management plan to make it responsive.
MICHAEL COOKE: It's a framework for putting together a data management plan that will satisfy the requirements, and I'll go through what that framework is, because we think it's a way for a researcher to help align with best practices in data management as they're putting together their data management plan. We also improved our guidance to reviewers. We want them to consider whether that data management plan is suitable and supports validation of the proposed research,
MICHAEL COOKE: and we want that guidance to connect our suggested elements to the data management plan and requirements themselves. And of course, we're aiming for constructive feedback. We want to continue to improve the data management plans we're getting from the community. We even integrated this with our review system. PAMS is our automated system for handling reviews online. We send emails to reviewers, we include a link to this, they certify once a year that they're familiar with our guidance.
MICHAEL COOKE: So we're really integrating that into the overall process for handling our data management plans. Here are the suggested elements we provide to researchers to aid in that development. So once again, this is a framework to try to build the plan that will be responsive to those four broad requirements we have in our review process. We take a look at the data used or generated,
MICHAEL COOKE: take a look at the standards or any formats that will be used or considered for data management. How related tools, software, and code will be handled should be part of the plan, explaining how data will be accessed, shared, and preserved, including how long it will be preserved and in what forms it will be shared. Of course, if there are any limitations or considerations for security or the integrity of the data, they should be addressed.
MICHAEL COOKE: The explanation of who will be managing the oversight of the data management should be addressed. And of course, a brief justification that ties into the requested funding in order to implement that data management plan. We think these are the sensible pieces and clearly standards connect back into the best practices within a community and are a very important part of that discussion.
MICHAEL COOKE: On the reviewers' side, we're providing guidance that ensures that applicants can see exactly what the reviewers are being asked to consider, so that we're getting useful feedback both to the agency, about whether the data management plan fulfills our requirements, and from the reviewers back to the applicant, so they can continue to improve their data management plans. Those links between the suggested elements and the requirements are a key factor.
MICHAEL COOKE: But so is this context of providing constructive feedback compared to best practices in the community that helps applicants continue to align their DMPs better. The overall idea is if we have better reviews, that will lead to better data management plans, and in the long run, better data management, we think, is enabling better science. I'd like to take a step to the side a bit, just to note that we do view, at the DOE, data as a very important pillar of our overall community research support efforts.
MICHAEL COOKE: We're highlighting some of our data efforts by designating them as Public Reusable Research Data Resources or PuRe Data Resources. There's a broad variety of things we support in this area, not just data repositories, but knowledge bases, analysis platforms, other activities that all help make data publicly available. And we're not just highlighting them, but trying to improve our stewardship of these important data pillars as well.
MICHAEL COOKE: These are the currently designated PuRe Data Resources in the Office of Science. They cover many of the different scientific domains that we support in the Office of Science. There's the Atmospheric Radiation Measurement data center, Joint Genome Institute, the Materials Project, National Nuclear Data Center, Particle Data Group, and the Systems Biology Knowledge Base that are all serving important roles in our communities at the Office of Science.
MICHAEL COOKE: We think this is helping the community broadly as well because this is helping our awardees advance their science. We're highlighting our authoritative providers of data or capabilities. The data is easier to find and reuse across the community. And of course, we're hoping that these help accelerate your research. We're supporting your data by making sure it's better shared and preserved through these resources.
MICHAEL COOKE: And we hold those resources to very high standards in data management and operations and monitor their scientific impact. We're hoping that this also provides options for responsive data management plans for funding proposals. And of course, we want to recognize your impact. So using these resources can help streamline your participation in that open science ecosystem.
MICHAEL COOKE: We emphasize the FAIR data principles (findable, accessible, interoperable, and reusable) within these resources, and use persistent identifiers to enable linking data back to connected scientific results, to really recognize the impact the data is having on scientific publications. In 2022, OSTP released new guidance that aims to build upon the 2013 Holdren memo.
MICHAEL COOKE: The memo focuses on ensuring free, immediate, and equitable access to federally funded research, and it expands some of the concepts in the Holdren memo. In particular, it removes the 12-month embargo on access to scholarly publications, with the aim of immediate access upon publication; it puts a timeline in place for sharing the data underlying a publication so that it's available immediately as well; and it also addresses having a timeline and a plan for access to other data supported by federal research that might not underlie a peer-reviewed publication.
MICHAEL COOKE: It also requires use of persistent identifiers for a number of research outputs (publications, data, software) in addition to researchers and awards. Most agencies are working toward providing their new plans to OSTP, and for the DOE, that submission is due February 21. 2023 is also the Year of Open Science. This includes having a US government-wide definition of what open science is, shared here: the principle and practice of making research products and processes available to all, while respecting diverse cultures, maintaining security and privacy, and fostering collaborations, reproducibility, and equity.
MICHAEL COOKE: There's more information about what the agencies are doing on open.science.gov, and I encourage you to check for information as the Year of Open Science progresses. One of the first pieces of news out of the Department of Energy is an enhanced web page about persistent identifiers. They're important because they bring together information across a broad spectrum of research products and connect them for greater discoverability and re-use.
MICHAEL COOKE: So you can explore our web page for some visualizations about the impact that persistent identifiers have, the services that DOE offers to assign persistent identifiers, and we think it's very important in the scientific ecosystem to have these help provide appropriate credit, through citation and identification of contributors, for all of these important elements of research. So please visit the osti.gov/pids website for more information about how our services can help you assign DOIs to your research outputs.
MICHAEL COOKE: You'll also find more information about ORCID iDs, as we're leading the US government ORCID consortium, and about some of our efforts to assign DOIs for awards through our award DOI service. So I'd like to end on a question back to the community. Given that the Nelson memo is setting a new vision for US government support of open science, that we're in the Year of Open Science, and that we're looking ahead to broader and more immediate access to some of the scientific output,
MICHAEL COOKE: I'd like to understand how you and your community see this picture, imagining it as sort of a Rorschach test. If you stare for a bit, is it that you're seeing the integrated elements of infrastructure - the data, the networking, the computing - that are enabling research in a more open way? Are these elements of a FAIR, data-connected ecosystem that helps your scientific domain more rapidly
MICHAEL COOKE: advance science? Is this open data shared across domains that's enabling interdisciplinary efforts and new science in new spaces? Is this the way we're connecting data and tools and models and publications in an ecosystem with persistent identifiers? Or is it the ecosystem of persistent identifiers that's enabling us to recognize the impact and the contribution that all those pieces make to the overall enterprise of advancing science, making sure we're giving credit where it's due by connecting the data to the scientific advances it enabled?
MICHAEL COOKE: So, I'd like to leave with the question: how do this community's standards enable elements of that picture? I'm very interested in feedback from the NISO community. Thank you very much for having me here today, and I look forward to our discussion.
GEORGE WOODWARD: Beautiful. Thank you, Michael.
GEORGE WOODWARD: And I think that's sure to be one of the questions that we'll pick up on in the live discussion, which I know we're all looking very forward to. From here, I'm going to hand over to Maria to take us forward.
MARIA PRAETZELLIS: Thank you. All right. Thanks very much.
MARIA PRAETZELLIS: And thanks for having me today. So I'm going to talk about data management plans through the lens of the DMPTool. I am with the California Digital Library, so we're part of the University of California. We work system-wide with all of the UC campuses, but we also work internationally on a lot of programs that are really centered around research data. Besides the DMPTool, which is really what we're going to talk about today, we also work closely on data publication and data metrics, actually with Jennifer Gibson from Dryad, who's going to talk after me.
MARIA PRAETZELLIS: We do a lot of work around persistent identifiers, and I'm going to talk a lot about persistent identifiers in the DMPTool. Primarily, we work on ROR, which is the Research Organization Registry. We also do work in digital preservation and in data and software skills training. So jumping into the DMPTool: we're a free tool, open source, and community supported.
MARIA PRAETZELLIS: We've been around for over 10 years now and really the goal of the DMPTool is to provide a means of communication between data librarians and researchers and to give data librarians a way to reach their research community at scale. That's increasingly important given sort of the quickly changing landscape that Michael talked about due to upcoming changes around data sharing for federal agencies.
MARIA PRAETZELLIS: So the DMPTool provides funder templates for all of the big funders. Institutions can customize them and make sure that their local requirements and guidance are reflected in their researchers' DMPs. But what I'm going to talk about primarily today is our work around standards for DMPs: how we have structured the data management plan, and how we're using PIDs (persistent identifiers) within the DMP so that, as Michael said, we can give credit, we can track impact, and we can check for compliance, all through the use of persistent identifiers within a DMP.
MARIA PRAETZELLIS: So a term that I'm going to use a lot, and that some folks might be familiar with, is machine-actionable data management plans. This is something that the larger community of tool providers in the DMP space have been talking about for many, many years now. Basically, what we're doing is transforming the static, usually two-page narrative document into a structured document that is interoperable.
MARIA PRAETZELLIS: So we can share that information machine to machine. And the reason machine-actionable DMPs are really important is that, as I was saying, they can help librarians and administrators provide guidance to their researchers at scale with fewer resources. They can ensure compliance: as these new requirements roll in, we need a mechanism to track the outputs of research.
MARIA PRAETZELLIS: So using machine-actionable DMPs can facilitate that. They can promote research integrity by providing transparency for research, reproducibility, and data security. They're also really helpful with tracking impact: helping grant administrators and universities track the impact of their institutional research programs through the use of machine-actionable DMPs. One primary standard for machine-actionable DMPs came out of an RDA working group.
MARIA PRAETZELLIS: It was released in 2020. That RDA working group really coalesced around providing a metadata profile so that we could structure the DMP and transform it from just that narrative document into a structured, interoperable document. So this great group of people, together with the larger community, put together a standard that's now been adopted by many data management plan services.
MARIA PRAETZELLIS: The DMPTool uses it, which is what I'm going to dig into today. But there are many other providers around the world that have also taken this standard, this community-developed standard, and implemented it in their tools, so that we can help facilitate and move the community into the use of machine-actionable DMPs. So for the DMPTool, I'm going to talk specifically about what we've done around structuring data management plans.
MARIA PRAETZELLIS: And this has really been our focus of work for the past several years in terms of feature development. So we now have the ability within the tool to link a plan to the eventual research outputs that a project has generated. So, for example, a researcher could connect it to eventual journal articles that might be published, maybe preprints, data sets, protocols, you name it, anything with a DOI, linking it back to the data management plan.
MARIA PRAETZELLIS: We can export data management plans as structured JSON files that are compliant with that RDA common standard I just mentioned. You can do that through the UI if you want to, or, most suitably, through the API that we've developed. And what I want to really focus on is the use of persistent identifiers within the DMP. Like Michael said, we're really using these as the glue to facilitate the tracking of research over time and the promotion of credit, all through existing PID infrastructure embedded within the DMP so that we can track it over time.
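As a rough illustration of what pulling one of these machine-readable plans over an API might look like, here is a minimal sketch in Python. The endpoint path, authentication scheme, and plan identifier shown are assumptions for illustration, not the documented DMPTool API, so check the current API documentation before relying on them.

```python
# Hypothetical sketch: fetch a DMP as RDA-common-standard JSON over an HTTP API.
# The base URL, bearer-token auth, and plan ID below are assumptions for illustration,
# not the documented DMPTool API.
import json
import requests

API_BASE = "https://dmptool.org/api/v2"   # assumed base URL
TOKEN = "YOUR_API_TOKEN"                  # assumed bearer-token authentication
PLAN_ID = "12345"                         # hypothetical plan identifier

resp = requests.get(
    f"{API_BASE}/plans/{PLAN_ID}",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
plan = resp.json()

# Pretty-print the structured plan so the common-standard fields are visible.
print(json.dumps(plan, indent=2))
```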
MARIA PRAETZELLIS: So for example, this graphic here on the right: in the middle there is a data management plan, and it's changing over time. You can see all of the people on the left; those are represented by ORCIDs, and all that rich metadata that's within their ORCID profiles is contained there. It's also connected to the eventual publications that resulted from that work, and to any data sets.
MARIA PRAETZELLIS: So that's all hooked in through the use of persistent identifiers. And I thought it would be fun, since we're talking standards: I don't usually go in and show XML, but I think it can be useful when you really want to look at what the standards for data management plans are. What does it look like when it's expressed in this kind of format?
MARIA PRAETZELLIS: So I'm just going to take a brief second to look at one data management plan, just to point out a few things, some best practices that we've followed to incorporate persistent identifiers within the plan. You can see here you've got the contributors highlighted. Here we go: we've got the contributor types using the CRediT taxonomy, and the ORCID iDs. This was a big project.
MARIA PRAETZELLIS: So there are a lot of contributors. We've got the sponsor; in this case, this was the field station, and it's expressed as a ROR ID. This is where the research team actually conducted their research, so it's controlled through the use of the ROR registry. Then these are the related identifiers: all of the project outputs that were associated with this data management plan.
MARIA PRAETZELLIS: So they're connected through that related identifier, connecting back to the DOI of the eventual output. And then, just jumping down, we've got the funding information, so the funder ID, the Crossref Funder Registry (FundRef) ID, is also included in there. I'm really excited about grant IDs, so I'm really happy that DOE is taking those on. Once we have grant IDs for more projects, that's really going to help facilitate the kind of connections that we're trying to bring about through this work.
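To make those pieces concrete, here is a rough sketch of how they might sit together in a machine-actionable plan, written as a Python dictionary loosely following the RDA DMP Common Standard. The key names (especially the related-identifier and sponsor extensions) are approximations rather than a verbatim DMPTool export, and all identifiers are placeholders.

```python
# Rough sketch of the structure walked through above, keyed loosely on the
# RDA DMP Common Standard; field names are approximations, not a verbatim export.
machine_actionable_dmp = {
    "dmp": {
        "title": "Example field-station research project",
        "dmp_id": {"identifier": "https://doi.org/10.48321/EXAMPLE", "type": "doi"},  # placeholder DMP ID
        "contributor": [
            {
                "name": "Jane Researcher",
                "role": ["Investigation", "Data curation"],  # CRediT taxonomy roles
                "contributor_id": {"identifier": "https://orcid.org/0000-0000-0000-0000", "type": "orcid"},
            }
        ],
        # Sponsor (the field station) identified by a ROR ID, so it is controlled by the ROR registry.
        "dmproadmap_sponsors": [
            {"name": "Example Field Station", "sponsor_id": {"identifier": "https://ror.org/00x0x0x00", "type": "ror"}}
        ],
        "project": [
            {
                "title": "Example project",
                "funding": [
                    {
                        "funder_id": {"identifier": "https://doi.org/10.13039/100000015", "type": "fundref"},  # a Crossref Funder Registry ID
                        "grant_id": {"identifier": "DE-EXAMPLE-0001", "type": "other"},  # hypothetical grant ID
                    }
                ],
            }
        ],
        # Project outputs linked back to the plan by DOI.
        "dmproadmap_related_identifiers": [
            {"identifier": "https://doi.org/10.5061/dryad.example", "type": "doi", "work_type": "dataset"},
            {"identifier": "https://doi.org/10.1234/journal.example", "type": "doi", "work_type": "article"},
        ],
    }
}
```

This kind of record is what an RDA-compliant JSON export of the plan shown on screen would roughly resemble.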
MARIA PRAETZELLIS: So just jumping back to my deck here, hopefully you can still see it. The work that we're doing right now is really focused around scaling data stewardship, because of all of these upcoming changes to federal requirements around data sharing. I'm thinking about the new requirements from NIH, and from DOE as well, and I know this is also the case in Europe and other countries, as we're shifting toward a more open environment for research data.
MARIA PRAETZELLIS: We need a way for data stewards to be able to scale their services. So one approach that we're taking is really using machine-actionable DMPs with PID enabled infrastructure so that we can track research over time and we can allow these relationships to be shared with the larger research data ecosystem in an open, interoperable manner.
MARIA PRAETZELLIS: So just a graphic here to show you how we're doing this. In the middle, we've got the machine-actionable DMP, and we're actually using those connections through the larger DOI infrastructure. So we can ping external funders through their APIs; for example, with the NIH funder API we can check on funded awards, and the same would be available in Europe and in other countries as well. We can also ping external publishers.
MARIA PRAETZELLIS: So, like Dryad: we check to see what research outputs might be within their repository, hook that back in, and connect it to the overall project. We're really using this as a way to track all of the latest information and metadata about projects and related outputs, all connected back to the data management plan.
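One concrete way to see such connections once they have been registered: DMP IDs are typically DOIs, so if a plan's DOI is registered with DataCite, its related identifiers can be read back from the public DataCite REST API. This is only an illustrative sketch (the DOI below is made up), not the DMPTool's actual harvesting pipeline.

```python
# Illustrative sketch: read the related identifiers registered against a DMP ID (a DOI)
# from the public DataCite REST API. The DOI below is a made-up placeholder.
import requests

dmp_doi = "10.48321/D1EXAMPLE"  # hypothetical DMP ID

resp = requests.get(f"https://api.datacite.org/dois/{dmp_doi}", timeout=30)
resp.raise_for_status()
attributes = resp.json()["data"]["attributes"]

# Each related identifier points at an output (dataset, article, etc.) linked to the plan.
for related in attributes.get("relatedIdentifiers", []):
    print(related.get("relationType"), related.get("relatedIdentifier"))
```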
MARIA PRAETZELLIS: So just quickly, I think it's sometimes useful to think concretely about what a workflow would look like. This is a potential workflow for a research office or a grant administrator. We've worked out where they would gather data for funded projects and upload them into the DMPTool to register them with a persistent identifier, so they get that DMP ID. They then use the DMP ID when they cite the research, and they also use the ability to ping external systems to gather that information.
MARIA PRAETZELLIS: And all of that is then updated within the DMPTool so that we can track associated outputs. It's similar for a PI, or it could be a data steward or data manager, whoever is managing that component of the research. In this case, they could create their DMP in the DMPTool, and it's really important that we encourage researchers to include ORCIDs for all PIs and the core research team.
MARIA PRAETZELLIS: You can download a PDF of the DMP to submit with your grant application, and it's important to generate a DMP ID. You can do that in the DMPTool and then utilize it as a citation in your research outputs, and that will connect all of these things together. So I'll stop there; I'm sure we'll have lots of time for questions and further discussion.
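Sketched as code, that workflow might look roughly like the following skeleton; every function here is a hypothetical stand-in for a manual step or an external service call, not part of any real DMPTool client.

```python
# Hypothetical skeleton of the workflow described above; the helpers are stand-ins
# for manual steps or external service calls, not a real DMPTool client.

def create_dmp_with_orcids(project, team_orcids):
    """Draft the DMP, attaching ORCID iDs for the PI and the core research team."""
    return {"project": project, "contributors": team_orcids}

def register_dmp_id(dmp):
    """Register the plan for a persistent identifier (a DMP ID) and return it."""
    return "https://doi.org/10.48321/D1EXAMPLE"  # placeholder DMP ID

def cite_dmp_in_outputs(dmp_id, outputs):
    """Include the DMP ID as a citation / related identifier in each research output."""
    return [{"output": o, "cites": dmp_id} for o in outputs]

def harvest_related_outputs(dmp_id):
    """Ping external systems (funder APIs, repositories) for outputs linked to the DMP ID."""
    return []  # stand-in for the API calls sketched earlier

if __name__ == "__main__":
    dmp = create_dmp_with_orcids("Example project", ["https://orcid.org/0000-0000-0000-0000"])
    dmp_id = register_dmp_id(dmp)           # a PDF would be downloaded here for the grant application
    cite_dmp_in_outputs(dmp_id, ["dataset", "article"])
    print(harvest_related_outputs(dmp_id))  # periodically update the plan record with what comes back
```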
MARIA PRAETZELLIS: We blog about this work a lot on our blog, and we've got a wiki as well. If you're interested in signing up as a DMPTool participating organization, you can shoot us an email and I can send you the forms to do that. So thank you very much. I'll stop sharing here.
GEORGE WOODWARD: Wonderful. Thank you for that, Maria.
GEORGE WOODWARD: Very interesting. Our final speaker for today is Jennifer Gibson. Jennifer, over to you.
JENNIFER GIBSON: Hi, everyone. Thanks very much for inviting Dryad to be a part of this conversation. It's a good one, and timely; I look forward to your feedback. So indeed, I'm Executive Director of Dryad, have been in the seat for just about 14 months now, and I'd like us to review together the importance of using data management and sharing plans.
JENNIFER GIBSON: I've introduced the verb "to DMSP," but honestly it doesn't roll off the tongue, so I'm not sure it's going to take off. To start off, I'm going to go ahead and give myself away and tell you that, with my presentation, I'd like to make the point, first of all, that the DMSP is essential to unlock the power of open research. So we're circling around the Year of Open Science and increased traction for open research together here.
JENNIFER GIBSON: And I want to emphasize that these steps are really essential to unlock the power of open research, as well as to the benefits of improving research integrity, advancing research in future, capturing the institutional investment, and monitoring the impact of research. All of these things are enabled through the tools and discussions that we're having today. The second point that I'll make is that the "S" part of the DMSP, the sharing, is essential.
JENNIFER GIBSON: It's essential to fully leverage our investments of time and money in research, and to fully leverage those things, that data's got to be made as open as possible and as closed as necessary. And the third point that I'd like to leave you with is that curation is essential to sharing. It's just not enough to toss things over the fence and hope for the best, and public funders, as we're hearing, are converging around this with their new policies.
JENNIFER GIBSON: So I'll take just a few minutes, and hopefully I'll make these points in a convincing way. Before I do, let me remind you again of what Dryad is. You may know Dryad because we've been around for a while; it will be 15 years this year. But we're constantly changing, and today I would characterize us as an open data publishing platform.
JENNIFER GIBSON: So, no longer a data repository where data goes to lie down and have a nap, but a platform where we use modern technologies to help bring the data to life, and also, as we're talking about today, to help the data travel as widely as possible and get it into the hands that would benefit from having access to it. Dryad is also a multi-stakeholder community of institutions and publishers and societies, all committed together to this vision of the open availability and routine re-use of all research data.
JENNIFER GIBSON: And I know you'll agree that the outcome, the impact, of all of this work we're talking about today is re-use and accelerating research. The Dryad platform today has got over 50,000 research data publications, in association with over 1,000 journals and thousands and thousands of institutions. So we're really growing every day as a platform for research in all domains.
JENNIFER GIBSON: And we are a leader in research data. We curate and publish just research data; we no longer publish other research objects. Our processes are such that the data is interconnected with other research objects as well as other systems, in the same way that Maria invoked the PIDs and systems a moment ago, and our process is fully curated. If we take a bird's-eye view
JENNIFER GIBSON: of the research data management process, Dryad fits foremost and solidly on the left-hand side of this diagram. This is a diagram you'll be familiar with, right? We've got the planning and design phase and the collecting and capture phase of data, all the way through to sharing, publishing, discovering, reusing, and citing. So Dryad helps researchers to manage, store, preserve, share, publish, discover, reuse, and cite data.
JENNIFER GIBSON: But with the movement toward sharing data earlier in the process, as we're seeing with these policies, we could also be a resource for collecting and capturing data earlier in the process, whether that data is intended for public sharing straight away or whether it should be kept private and worked on a little longer before going live. That said, Dryad is a platform for open data only. We publish data under a CC0 license.
JENNIFER GIBSON: We don't have any extra access restrictions on our platform. The glue (and I'm actually quoting Maria here; she was so helpful to chat with about this a couple of weeks ago), the glue connecting our process, what Dryad does, with data management and sharing plans, is the metadata. High-level metadata,
JENNIFER GIBSON: invoking community-supported ontologies, is essential for unifying the researcher's plan and associated funding and institutional support with the published outcomes in Dryad. At Dryad we invoke the Research Organization Registry (ROR) for institutional affiliation, the Funder Registry for funder information, and the OECD classification for research classification, and we collect these bits of metadata for every single data publication that's submitted to us. And ROR and the Funder Registry in particular,
JENNIFER GIBSON: I think the three of us are making this point, are really powerful tools for connecting investments and plans with outcomes, and we can see this highlighted specifically in regard to the NIH policy, which took effect a couple of days ago, and an associated project with that policy. If you're not familiar with GREI, it stands for the Generalist Repository Ecosystem Initiative, organized through the NIH Office of Data Science Strategy to help the generalist repositories and data publishing platforms develop common approaches to support NIH-affiliated investigators in complying with the policy.
JENNIFER GIBSON: You can see on the slide the objectives of the program. They're wide-ranging, and they include the adoption of consistent metadata models. One of the things that the group is coalescing around is the need to collect, and then report on, institute-level funding for the NIH, so that when we begin assessing the reach and impact of this policy and new policies coming, we can do that at an appropriately detailed level.
JENNIFER GIBSON: So now, pausing to say: why is all of this important? Why are we so deeply engaged in this? It's clear that the metadata connects the data with the funding and institutions and helps us to do cross-system searches, yes, but it also helps to make the data findable. Especially these community-supported metadata standards pave the way for data to travel to other systems and to be discovered using their funder, institution, grant ID, research classification, or related research objects.
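As one hedged example of that kind of cross-system discovery, a community registry such as DataCite can be searched for datasets tied to a particular funder identifier. The query field names and parameters below are assumptions for illustration; check the current DataCite (or Dryad) API documentation for the exact syntax.

```python
# Hedged sketch: use a community registry (here DataCite) to find datasets connected to a
# particular funder identifier. The query field names and parameters are assumptions for
# illustration; consult the current API documentation for the exact syntax.
import requests

funder_doi = "10.13039/100000015"  # e.g. a Crossref Funder Registry identifier

resp = requests.get(
    "https://api.datacite.org/dois",
    params={
        "query": f'fundingReferences.funderIdentifier:"{funder_doi}"',  # assumed field name
        "resource-type-id": "dataset",                                   # assumed filter
        "page[size]": 5,
    },
    timeout=30,
)
resp.raise_for_status()

for record in resp.json().get("data", []):
    attrs = record["attributes"]
    print(attrs.get("doi"), "-", (attrs.get("titles") or [{}])[0].get("title"))
```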
JENNIFER GIBSON: Community-supported metadata like this is what empowers research, and when the data is open, it empowers open research and helps researchers to carry out their work without impediment. Funders see this and, as already discussed, are introducing policies for data sharing as well as management. So at the National Institutes of Health, the emphasis is on maximizing appropriate sharing of data through the terms of the policy.
JENNIFER GIBSON: And the Office of Science and Technology Policy is working to improve research integrity and reproducibility through immediate open deposit of data, as well as through the other facets of their policy. So sharing is essential to the DMP. Funders are also converging on the importance of curating data before it's shared; again, it's not enough just to make it open, or to put it somewhere and hope that folks are going to be able to work with it.
JENNIFER GIBSON: And with respect to data quality assurance, the NIH says that data should be of sufficient quality to validate and replicate research findings. At Dryad, our team of humans opens each file to ensure that it can be opened and read by another human, not to mention the machines, and they ensure that the data is adequately described for someone else to use
JENNIFER GIBSON: and to build on it. This is curation. So metadata are really the key to bringing together the DMP and the data. But in the interest of using the tools and technologies that are available to us in 2022 to accelerate the pace of research and more readily translate research into benefits for our world, DMPs are evolving into DMSPs, and DMSPs are recognizing that sharing must be done with care and attention, with curation, to be effective.
JENNIFER GIBSON: And that's what the standard should be. I'll stop there. Thanks very much. And I look forward to the discussion.
GEORGE WOODWARD: Wonderful. Thank you, Jennifer. And a big thank you to all of our speakers. As Jennifer said, it's a very timely topic and there's obviously much to talk about, so we look forward to seeing you in the live discussion.
GEORGE WOODWARD: And thank you for listening today.