Name:
The importance of metadata and open science on research outcomes-NISO Plus
Description:
The importance of metadata and open science on research outcomes-NISO Plus
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/3cf82728-d4ac-4166-9d77-85f72f8352e1/thumbnails/3cf82728-d4ac-4166-9d77-85f72f8352e1.png?sv=2019-02-02&sr=c&sig=DXr%2Be1iqX6cBYE%2BjSOGjFZV%2ByqONkb6MJNIZY5m8i0o%3D&st=2024-11-21T23%3A06%3A06Z&se=2024-11-22T03%3A11%3A06Z&sp=r
Duration:
T00H33M42S
Embed URL:
https://stream.cadmore.media/player/3cf82728-d4ac-4166-9d77-85f72f8352e1
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/3cf82728-d4ac-4166-9d77-85f72f8352e1/The importance of metadata and open science on research outc.mp4?sv=2019-02-02&sr=c&sig=LBcCEVshMyh4i3WX6Yh82kvhZ%2FbbjELgooabqMh%2Fhu0%3D&st=2024-11-21T23%3A06%3A06Z&se=2024-11-22T01%3A11%3A06Z&sp=r
Upload Date:
2022-08-27T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
[MUSIC PLAYING]
JONATHAN CLARK: Hey. Hello, and welcome. Welcome to this session. It's the importance of metadata and open science on research outcomes. My name's Jonathan Clark, and I'm a managing agent for the DOI Foundation. I'm really delighted to moderate this session today. We have three wonderful speakers for you coming up just now. And without further ado, I'm going to hand over to them.
JONATHAN CLARK: The first is Carly Robinson from the Department of Energy. Carly, over to you.
CARLY ROBINSON: Great, thank you. Sharing my screen, so hopefully you all should be able to see that. Wonderful to be with you all today. To kick things off, I'm going to talk about the importance of open science, quality metadata, and persistent identifiers. Again, Carly Robinson. I'm the assistant director for information products and services within the US Department of Energy's Office of Scientific and Technical Information.
CARLY ROBINSON: So we're all working in the space of open science. I think we can appreciate the importance of open science and quality metadata, but I did want to highlight a couple of things that I think of when I'm thinking about the importance of open science. The first is that open science can allow for better and more reproducible research. I think it's always the intention that we're putting high-quality science out there for folks to take a look at.
CARLY ROBINSON: But by making science more open, it is kind of-- folks are much more able to delve deeper and reproduce that science, which is really wonderful. Open science also allows for others to participate in scientific discovery and connect to new areas of study. There's a lot of cross-discipline work that's going on, international participation, and open science really enables that kind of broader participation, as well as citizen science.
CARLY ROBINSON: And open science also increases visibility and discovery of research results, which can help increase the pace of scientific discovery. And that's one area where quality metadata is really key. It can really help enable discovery and increase visibility. So quality metadata is also incredibly helpful for connecting research objects throughout the research lifecycle.
CARLY ROBINSON: So think about connecting funding to researchers, to instruments, to peer review, to research results, research organizations. And we've been doing this for a long time, often in the context of text-based information that's connected to research outputs and other research objects. But quality metadata these days can also include persistent identifiers or PIDs.
CARLY ROBINSON: You can kind of create all of these connections, not only with the text-based information, but with these persistent identifiers, as well. And I'll talk a little bit more about persistent identifiers in a couple of slides. But to focus on metadata quality and potentially metadata curation, I think there are a number of factors that we can consider. And I just want to highlight a couple.
CARLY ROBINSON: But quality of metadata is really in the eye of the user-- whoever is using that metadata for whatever purposes they need. But one thing that you can consider is the completeness of that metadata for your purpose. So is all of the metadata that you need included in that information? Does it include abstracts or funding, if that's the type of information that you need?
CARLY ROBINSON: And of course, that can be completely dependent on the metadata schema that either you're using or the metadata schema that was used to create the information. Also, availability-- so is the metadata openly available? There was, historically, a long period when you had to pay for access to abstracts. But these days, much of the metadata is more openly available. Also, conformance-- so are metadata fields within the schema that you're using being applied consistently?
CARLY ROBINSON: So for example, you can think about organization name disambiguation. So I'll use the University of Michigan as an example. You, as a researcher, might use the University of Michigan. And someone else might use U of Mich or U of M, or the Regents of the University of Michigan. So it's really great if you can kind of have that type of disambiguation built into the quality of metadata.
CARLY ROBINSON: And also, credibility-- just kind of understanding where the metadata has come from, if it's been curated, and understanding the source of that information. So as you're using this information for whatever purpose-- for discovery or other reasons-- if that metadata that you or your organization-- if you're not finding it to be high enough quality for your purposes, you might choose to enhance or curate that metadata to meet your needs.
CARLY ROBINSON: So you might add more information or include some of that disambiguation in there. And that can create higher quality metadata for different purposes. So going back to persistent identifiers and how they can really help enable higher-quality metadata, when I talk about a persistent identifier, I'm thinking about a digital identifier that's globally unique, persistent, machine resolvable, has an associated metadata schema, identifies an entity, and is frequently used to disambiguate between identities.
CARLY ROBINSON: So examples of persistent identifiers that we use are digital object identifiers, or DOIs, ORCID identifiers for people, and Research Organization Registry, or ROR, IDs for organizations. So those are just a couple of examples of persistent identifiers. And there are a lot of benefits to assigning and using PIDs. They can enable research to be more open, discoverable, and accessible, because the metadata associated with that persistent identifier is openly available and discoverable through the landing page that the persistent identifier resolves to.
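The "globally unique, machine-resolvable" properties of these three PID types can be sketched with simple syntactic checks and resolver URLs. The resolver bases (doi.org, orcid.org, ror.org) are the real public resolvers; the regexes are deliberately simplified approximations of the full grammars (they skip, for example, the ROR checksum), intended only to illustrate the idea.

```python
import re

# Simplified syntactic patterns for three common PID types.
PATTERNS = {
    "doi":   re.compile(r"^10\.\d{4,9}/\S+$"),              # e.g. 10.1234/abc
    "orcid": re.compile(r"^\d{4}-\d{4}-\d{4}-\d{3}[\dX]$"),  # 16 digits, last may be X
    "ror":   re.compile(r"^0[a-z0-9]{8}$"),                  # 9 chars, starts with 0
}

# Public resolver bases for each PID type.
RESOLVERS = {
    "doi":   "https://doi.org/",
    "orcid": "https://orcid.org/",
    "ror":   "https://ror.org/",
}

def pid_url(kind: str, value: str) -> str:
    """Validate a PID string and return the URL it resolves at."""
    if not PATTERNS[kind].match(value):
        raise ValueError(f"not a well-formed {kind}: {value!r}")
    return RESOLVERS[kind] + value
```

For instance, `pid_url("orcid", "0000-0002-1825-0097")` yields the resolvable profile URL for that ORCID iD; the same call with a malformed string raises instead of producing a dead link.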
CARLY ROBINSON: PIDs are also stable, persistent links that allow for metadata to be updated as needed. And by linking persistent identifiers within that metadata, it's much easier to create those connections throughout the research lifecycle. So connecting those research output DOIs with researchers through their ORCID ID or their research organization or affiliations with ROR IDs.
CARLY ROBINSON: And so it's really great to have those interconnections by using persistent identifiers. So I wanted to dive a little bit into my specific use case at the Department of Energy's Office of Scientific and Technical Information, or OSTI. So to see where we fit within the Department of Energy, there are a number of DOE program offices that fund about $12 billion each year in R&D. That funding goes out to our national labs and to grantees, through contracts at universities and other institutions.
CARLY ROBINSON: And from that funding comes research output. So you can think journal articles, software, data. We estimate there are about 50,000 R&D outputs coming from DOE funding each year. And that's where my office comes in. We collect, preserve, and disseminate DOE funded R&D results. And we disseminate those to the public, to the DOE, and to other federal agencies.
CARLY ROBINSON: The history of our office goes all the way back to the Manhattan Project, but most recently we were codified in legislation in the Energy Policy Act of 2005, which says that the Secretary shall maintain a publicly-available collection of scientific and technical information through our office. And we have a number of core functions, but a couple that I wanted to highlight in this context: we provide and use persistent identifier services to make DOE-funded research more discoverable and to have higher quality metadata.
CARLY ROBINSON: And to provide the highest-quality metadata that we possibly can with the associated DOE-funded research results, we have a metadata curation team. We have, at any time, between 10 and 15 folks working on metadata curation. So to quickly highlight our persistent identifier services-- I won't go into any detail on these-- but we are offering persistent identifier services, assigning DOIs to research outputs, particularly technical reports, data, and software.
CARLY ROBINSON: We also, through a pilot project, provide the Award DOI Service where we're assigning Crossref DOIs to awards that come from DOE. And we also are working to associate persistent identifiers with people and researchers. We do that through a number of contexts. And we also lead the US Government ORCID Consortium. And then, we're also trying to associate persistent identifiers with the organizations that we collect in our metadata, and we are doing that through mapping our internal organization authority to various organization persistent identifiers.
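Assigning a DOI to an output means registering metadata alongside it. As a rough sketch, here is a trimmed, DataCite-style record for a dataset DOI; the field names follow the public DataCite metadata schema's general shape, but this is an illustration, not OSTI's actual submission format.

```python
def dataset_record(doi: str, title: str, creators: list[str],
                   publisher: str, year: int) -> dict:
    """Build a trimmed, DataCite-style metadata record for a dataset DOI."""
    return {
        "doi": doi,
        "titles": [{"title": title}],
        "creators": [{"name": c} for c in creators],
        "publisher": publisher,
        "publicationYear": year,
        "types": {"resourceTypeGeneral": "Dataset"},
        # Related identifiers are where cross-output connections live
        # (e.g. links to the journal articles that use this data set).
        "relatedIdentifiers": [],
    }
```

The `relatedIdentifiers` list, empty here, is the hook for the relationship curation Carly describes next: each entry would name another PID and its relation type.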
CARLY ROBINSON: On the curation side of things, we have a system called E-Link. And that's the system that researchers and national labs use to submit their DOE-funded research outputs to us. That's a requirement of their funding-- to submit those to OSTI. And they do that using E-Link. And so once that information is submitted to us, it goes through an enhancement, or metadata curation, process. And that's kind of built into the E-Link system that we see on the administrative side.
CARLY ROBINSON: And so if we notice any kind of issues with the metadata that was provided, maybe a misspelling or something weird with the title, we can update that. We can also include optional metadata that might not have been provided by the submitter. So I'm not going to go into this in a lot of detail, but I just wanted to zoom in on kind of one of the curation screens that we have.
CARLY ROBINSON: So for example, here, we can update the title. Sometimes, if there's a subscript or superscript that doesn't come through as well as we'd like, so we can update that. We can add a description. We can add category codes or keywords, add descriptors, things like that. And once one of our curators has had the opportunity to take a look, they can mark that as complete, so that we know that it's gone through the curation process.
CARLY ROBINSON: One thing that I did want to highlight, specifically, that we do in our metadata curation process is if a research output is submitted to OSTI, we will look and see if there are any associated research outputs through related identifiers or relationships. So in this example, this was a data set record that was submitted to OSTI. And we know that this data set record is associated with other data sets and a couple of journal article records.
CARLY ROBINSON: And so we get this information from a couple of different sources. We can look at Crossref and DataCite. We use Scholix, as well, to find these relationships and add these related identifiers. So this data set DOI record that we have in our system is connected with these other research outputs. And we might not have those associated research outputs in our collection.
CARLY ROBINSON: And so in that case, we can add metadata associated with those. So this data set had a number of related DOIs. We can add metadata about those. So we can add title, author, things like that. And we do that through a system that we have aptly named the curator to add that information and make that available. And of course, all of this-- everything that we're doing by using persistent identifiers and curating the metadata-- we're working to make all of this information publicly available through our search tools.
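The relationship lookups described above can be pictured with a simplified link record. The field names approximate the general shape of Scholix link-information packages but are trimmed for illustration; real Scholix responses carry more structure (multiple identifiers per object, titles, creators, and so on).

```python
# A simplified Scholix-style link record (illustrative shape only).
LINK = {
    "RelationshipType": {"Name": "IsReferencedBy"},
    "Source": {"Identifier": {"ID": "10.1234/dataset.1", "IDScheme": "doi"}},
    "Target": {"Identifier": {"ID": "10.5678/article.9", "IDScheme": "doi"}},
}

def related_dois(links: list[dict], source_doi: str) -> list[tuple[str, str]]:
    """Collect (relationship name, target DOI) pairs for one source DOI."""
    out = []
    for link in links:
        src = link["Source"]["Identifier"]
        if src["IDScheme"] == "doi" and src["ID"] == source_doi:
            out.append((link["RelationshipType"]["Name"],
                        link["Target"]["Identifier"]["ID"]))
    return out
```

Each pair this yields is a candidate related identifier that a curator could attach to the data set's record, which is how the data set ends up linked to journal articles it might not otherwise share a collection with.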
CARLY ROBINSON: So osti.gov is our primary search tool to find DOE-funded R&D outputs. And so this is just an example record where we've kind of added some information and are trying to provide as high-quality metadata as we possibly can. And so with that, I just really appreciate your time. And once it's time for questions, I'm happy to take questions.
CARLY ROBINSON: Thank you.
JONATHAN CLARK: So thank you, Carly, for that great overview to start with. Next up, we have Kristen Mueller of the Melanoma Research Alliance. And she's going to share her perspective as a research funder on this topic. So thank you very much, and over to Kristen.
KRISTEN MUELLER: OK, thanks. I'm just going to share my slide. Yeah, and as Jonathan said, I'm from the Melanoma Research Alliance-- MRA, I'll refer to them during my talk. I actually just finished up with them at the end of January, and I am now in a new position at the Arthritis Foundation. But I'll be speaking about my work there. So for my talk, I really just wanted to focus on an important ingredient for funders in the pursuit of science.
KRISTEN MUELLER: And it really goes along with what Carly was talking about, but it's persistent identifiers for funders. So I think it's really critical for funders to adopt PIDs. And in the case of funders, I'm really talking primarily about ORCID IDs and DOIs for now, for several reasons. And so really, for one, it's for the good of the research community.
KRISTEN MUELLER: It reduces the administrative burden on applicants and awardees. So for instance, it makes it easier for applicants to fill out their applications and to populate different fields. And in collecting progress reports, they can more easily upload research outcomes. It's also quite important for the research community at large.
KRISTEN MUELLER: And so there, using DOIs, ORCID IDs, codes in general, will increase the transparency and discoverability of the funder's research grants. And for MRA in particular, this is also important because we are working with a large melanoma patient community. And so it's important for them, too, to be able to discover these things. And then, of course, for the research community and for funders more broadly, it helps make the research outcomes from these grants more accurately identifiable.
KRISTEN MUELLER: And I think when MRA was thinking about adopting PIDs in our workflows, I think the last point here was really the most compelling, was that it really would allow us hopefully to capture more complete, timely, and accurate data that could inform our scientific strategy and then to report our advancements to our key stakeholders-- so to our board, to our medical advisors, and to our patient community. So from here, I'm just going to jump to a few real-world examples that demonstrate how MRA is adopting PIDs in our workflows.
KRISTEN MUELLER: And so this is just an example of an MRA-funded researcher and her ORCID profile. And so you can see, because we have all of our applicants and awardees use their ORCID ID when they apply for funding from us and we joined the ORCID consortium for funders, we're now able to write the awards to the researchers' ORCID profiles. And so you can see that here.
KRISTEN MUELLER: And because we're in the consortium, it's saying that this information is coming from a trusted source-- from the Melanoma Research Alliance. Another example is DOIs. So in 2021, MRA was actually the first organization in Proposal Central to adopt DOIs for all of our awards, going back to 2007 when MRA was founded. And so if you click on the DOI link, it's really great that anyone can now access a pretty complete awards record for everything that we fund.
KRISTEN MUELLER: And so just, you can see the DOI up here. You can see this goes to this particular researcher. This is her institution. This is the title of the award. You can see the award amount, the start date, the end date, the key personnel associated with this award and their ORCID IDs, as well as a summary of the research that is being funded with this award. And then finally, it was exciting to see, actually, now some of these DOIs for our awards appear in publications from the researchers that we're funding.
KRISTEN MUELLER: So here, you can see in the acknowledgments section of this paper from this year, one of our researchers is actually, as they should, acknowledging the Melanoma Research Alliance and this team science grant. But they're importantly including that DOI. And so people can click on this and they can be taken to a screen, like I just showed you in the last slide, to get any information that they might be interested in seeing about that particular award.
KRISTEN MUELLER: So I think this all highlights how, as long as these persistent identifiers are being used and the appropriate metadata is associated with them, it really is enhancing the discoverability of the research from MRA. Now, finally, to end, I want to turn to the last point that was on my slide about reasons why funders might be quite interested in using PIDs in their workflow.
KRISTEN MUELLER: And so this is just-- I'm not going to go into the details of this slide-- but a slide that summarizes a recent portfolio evaluation that MRA carried out. So every three years, MRA evaluates its research portfolio. And we did so in 2021. And we evaluated, as part of that project, the first 189 grants that MRA funded.
KRISTEN MUELLER: And what we were trying to determine, really, were a variety of different outcomes. So we were interested in what we're calling quantitative metrics. So these are things like publications, follow-on funding, patents, clinical trials. But we were also interested in more qualitative metrics, like the impact of MRA awards on the career trajectories of funded researchers and also the research outcomes' impact on the wider research community in melanoma.
KRISTEN MUELLER: And so getting these types of data was extremely time intensive and labor intensive. And MRA is a tiny organization. We're 10 people. And so we spent a great deal of time going through progress reports, searching Dimensions, NIH RePORTER, PubMed, Google Patents, and ClinicalTrials.gov to try to pull out this information, along with a survey that we sent to all of the 150 PIs from the 189 funded grants that we were evaluating.
KRISTEN MUELLER: And we pulled out all this information to then be able to analyze it and report back to our board of directors and, essentially, to our public constituents. And all of this would be made so much easier if we were able to more reliably work with PIDs-- so DOIs for the other grants that our researchers have received, and a DOI for an MRA award in publications.
KRISTEN MUELLER: Just all of this would be so much more traceable, so much easier, and just so much more, then, in the future, discoverable by us and by others. And so I'll end just thinking about critical next steps for getting funders to adopt PIDs. And I think it's really essential to make it clear what their value proposition is. I think people are, of course, excited and want to commit to open science, but I think some of these more specific examples of how it could really benefit an organization are needed.
KRISTEN MUELLER: I'd also say that grant management platforms are key. So we were able to do all of this-- to incorporate ORCID IDs for applicants and awardees into our workflow, to assign DOIs to all of our grants-- really because we're using Proposal Central. Because they adopted these things, we were able to do it. Because we're just 10 people, we're not able to really spend a lot of time trying to write code and put it into our system, something like that.
KRISTEN MUELLER: And so I think just working with grant management platforms is essential so that funders across the spectrum can incorporate PIDs. I also think it would be useful if DOIs now became the standard grant IDs. Because right now, for every award, we're giving the institution and the researcher a DOI, but also their grant ID. And so it just causes confusion in the system when you go to do things like citing an MRA award in a publication.
KRISTEN MUELLER: And then, of course, I think it's essential to highlight the success stories within the community to really allow other funders to understand why it's so important and beneficial to use things like PIDs. And then finally, I think it's essential to continue creating the infrastructure for adopting grant DOIs across the research spectrum. So for publishers, when an author is going to input all this information, they need a place to put the DOIs for their grants. In grant applications, researchers should be able to easily pull in, for instance, their pending support just by using the DOIs for their active awards. And institutions should be able to more easily connect awards to researchers.
KRISTEN MUELLER: So I think, hopefully, going forward, more researchers will begin to adopt PIDs. And I'll stop there, and we can move on to the next presenter.
JONATHAN CLARK: All right, thank you, Kristen. Now, next up, our final speaker in this session before we move to the discussion, is Steve Pinchotti from Altum. And he's going to cover the role of technology and how it plays into research outcome reporting. So over to Steve.
STEVE PINCHOTTI: All right, thank you very much, Jonathan. Thank you, Carly and Kristen for amazing presentations. I'm excited to be here today. Thanks for joining our session. Yes, once again, my name's Steve Pinchotti, the CEO of Altum. And Kristen mentioned us a little bit in the previous presentation about Proposal Central. So we run Proposal Central. It's the largest independent grant-making platform for research funders globally.
STEVE PINCHOTTI: So what we're going to talk about today is certainly the future of research outcomes. Our perspective comes from a little earlier in the lifecycle, but I think it's very relevant for everybody in the session today. So I'll be at a little bit of a higher level in my talk today, talking about research outcomes as a whole. So research outcomes really are the building blocks of our society.
STEVE PINCHOTTI: Researchers learn from each other. They share information. Scientists and researchers are very curious by nature, want to learn, want to advance knowledge. And these outcomes are used to explore new areas of research across the globe in all different facets of research, whether it's health and biomedical, energy, agriculture, all different walks of life. So by nature, these things have helped evolve civilization and our species for many, many millennia.
STEVE PINCHOTTI: And why do we do this? Well, we love to share our discoveries. So if we were doing research and we found the cure for something or we found this great intervention, we want to share that information. We want to tell people about it. It's obviously going to help the planet, help everyone that's doing this research. So how it would typically be done in the past is someone might go to a conference and share those discoveries.
STEVE PINCHOTTI: Certainly scholarly publishing has been the core way that information has been disseminated for centuries. So information wants to get shared. Another really important reason to share outcomes is because funding, overall, for research is limited. Although it is approaching a staggering number-- $2.5 trillion worldwide every year-- there are still limited resources. If you take the case of rare diseases, there are thousands of rare diseases, but there are only so many funds that can be applied to research those specific diseases.
STEVE PINCHOTTI: So there's still sort of a competition for funding. There's a need to focus. And there's really no need to reinvent the wheel. So if someone has already found something out-- learned a particular part about biology or something else out there, then another researcher doesn't have to go study that. Or maybe they can build on that in a different way. So when the outcomes are shared, less recreating the wheel, the pace of scientific discovery accelerates.
STEVE PINCHOTTI: And in the case of COVID and the pandemic that we're all living in, this is an amazing real-life use case at our fingertips. So talk about a global collaboration to figure out how to get to the genetic makeup of this virus, and how are we going to attack this? How are we going to cure it? How are we going to come out with vaccines and other medicines to treat this?
STEVE PINCHOTTI: Really, has been an unbelievable global collaboration to get to where we are today. Still have a long way to go. We'll still be dealing with this for many years. But this is an example of what can happen when information is shared at a very rapid pace. And in terms of just overall research outcome sharing, we're still in the very, very early days.
STEVE PINCHOTTI: As I mentioned, traditionally, it's been mostly about manuscripts and publishing. And I think where we're moving to is where publications are moving from being the thing in the scholarly communication world to being a thing. They'll still be around, still be very important. But we're seeing different things happen, all these different manifestations around the push for open access-- get the information out there faster, abstracts, manuscripts.
STEVE PINCHOTTI: Earlier in the process, registered reports where someone can register what their experiments are going to be. So more information is being shared each and every day. And we're going to keep improving the way that these artifacts are made available to researchers worldwide. So we're talking about things-- and Carly had mentioned this earlier-- things like code, and software, and data, and samples, and many other things in addition to the manuscript.
STEVE PINCHOTTI: And ultimately, what that will mean is that researchers will spend less time replicating their experiments and verifying results. So something that they want to test out that they read about will be much easier in the future. And I know it's still a long way to go, but that's where things are headed. And to take a step back, we think about a lot of the advancement that we've had as a civilization, it's all come down to infrastructure.
STEVE PINCHOTTI: The automobile was invented around 100 years ago. But back then, it was tough to drive from one part of your country to another part because the infrastructure, and the roads, and highways were not there. So things we take for granted today, like massive highway infrastructure, and gas stations, and now electric charging stations-- all of those things have to be in place to enable goods, and people, and information to move faster.
STEVE PINCHOTTI: Now we all have in our hands GPS communications and mapping technology. And you just get information with the smartphones. The iPhone was only invented less than two decades ago. These things are all relatively new, but the more the infrastructure is there, all this information will move much faster. And we're seeing that happen each and every year-- the pace of innovation, the speed that's happening out there is enabling much greater information to happen and move at a much faster pace.
STEVE PINCHOTTI: And so for research outcome sharing to happen at scale globally, these things need to improve-- the infrastructure needs to improve. So it will continue to evolve. Every year technology improves and the ability to share data sets, and samples, and all the things that I was mentioning will become as easy as a click of a button.
STEVE PINCHOTTI: And that should be the goal-- how do we help accelerate research? Think about it around like a 10 times improvement. If you're a research organization or research funder, how would you improve laboratory results? Or how would you improve what we're doing on a level of 10x-- 10 times improvement. And that could be 10 times the amount of productivity, 10 times reduction in time.
STEVE PINCHOTTI: If you think about it at that scale, you start to think about, what are the things that really need to be in place to have that happen and facilitate that kind of speed so that drugs, and treatments, and things out there could happen instead of 10 or 15 years, down to one year or less? And so you really need this global sharing. You need all the things we're talking about so far-- this open access-- and you really need these standards, these inter-operable protocols, and standards, and open APIs so that all the systems that are being developed out there can talk, and integrate, and interface with each other in much faster ways than they can today.
STEVE PINCHOTTI: And so this brings me to the point of the infrastructure of the research ecosystem is what we've been talking about-- so all these persistent identifiers, and organizations that have been working on this for decades and decades, and NISO with all their standards work. This work needs to continue to evolve. And it will. And it will continue to improve.
STEVE PINCHOTTI: Persistent identifiers are key for all this global metadata sharing. Standardization and adoption will accelerate. And data will just become more consistent. And all of these things-- some are relatively new, like we were talking about like ROR-- and some have been around a lot longer, like ORCID and the work that Crossref has been doing. But tremendous work by many organizations, and this will continue to standardize, and evolve, and will accelerate the information people can share and leverage for all the research that they're doing.
STEVE PINCHOTTI: And one way we're doing it is something called Altum Insights. And this is just an example of how data can be aggregated together. And we're one of many-- Kristen mentioned Dimensions. There's a new product out there, open source, called OpenAlex if you want to get a lot of manuscripts. Certainly, Web of Science and Clarivate.
STEVE PINCHOTTI: There's lots of organizations out there that are aggregating this data. But this is really critical, because research organizations want to spend less time aggregating all this information and more time analyzing it. Altum Insights was initially developed for Dr. Fauci's team. And they were struggling with, we have to fund all this research. How do we connect it to the outcomes?
STEVE PINCHOTTI: How do we connect it to the outputs, whether that's patent, a publication, a product, a clinical trial? And they were spending way too much time manually curating all this data. So this and other tools will help researchers and help research organizations get this information faster and ultimately expedite the analysis that they can do, because there is less time pulling all this information together.
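Programmatically, connecting funding to outputs through an aggregator like the ones Steve names often starts with a filtered API query. A minimal sketch against OpenAlex's public API is below; the base URL is real, but the `grants.award_id` filter key is an assumption about its filter vocabulary, so check the OpenAlex documentation before relying on it.

```python
from urllib.parse import urlencode

# Sketch: compose a query asking an open aggregator for works that list a
# given award ID among their grants. The filter key is an assumption about
# OpenAlex's filter vocabulary, shown for illustration only.
def works_by_award(award_id: str) -> str:
    """Return a URL that asks OpenAlex for works citing this award ID."""
    params = {"filter": f"grants.award_id:{award_id}"}
    return "https://api.openalex.org/works?" + urlencode(params)
```

A query like `works_by_award("MRA-123")` (hypothetical award ID) is the kind of one-liner that replaces the manual portfolio searches Kristen described, which is the point of standardized grant identifiers in the first place.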
STEVE PINCHOTTI: So this is a really exciting time. There are many organizations working on this. And I think we're going to continue to see this evolve in some really exciting ways. But ultimately, the future of where all this is headed is artificial intelligence and machine learning. And that may have been a really bold claim 20 or 30 years ago, but today, given the things that we're seeing, this is just obvious to many folks out there. With the pace of innovation and discovery in AI and machine learning algorithms, we'll continue to see generations of products become more and more capable.
STEVE PINCHOTTI: So like I said, we'll spend less time aggregating all this information and searching for it, and we'll just be working with it in our workflows and our work streams. It will just be embedded into our daily lives. And all of this information that's being generated every year from this $2.5 trillion a year of funded research will be woven into our lives all through the research ecosystem.
STEVE PINCHOTTI: And it'll ultimately allow the researchers and research organizations to just save millions of dollars and lots and lots of time and money overall in the ecosystem and ultimately accelerate innovation and knowledge. So many exciting times ahead. Thank you very much for your time today. And I look forward to the Q&A session.
JONATHAN CLARK: Great, thank you. And a big thank you to our three speakers in this session-- to Carly, to Kristen, and to Steve for their really thought-provoking talks. I hope you've got lots of questions, and comments, and things ready, because we're going to move over and start the conversation right now. [MUSIC PLAYING]