Name:
New Directions : Changing Research Landscapes
Description:
New Directions : Changing Research Landscapes
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/6fdccc2f-ba81-43c7-b564-0338b4b7ff33/thumbnails/6fdccc2f-ba81-43c7-b564-0338b4b7ff33.png
Duration:
T01H02M29S
Embed URL:
https://stream.cadmore.media/player/6fdccc2f-ba81-43c7-b564-0338b4b7ff33
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/6fdccc2f-ba81-43c7-b564-0338b4b7ff33/GMT20241002-123520_Recording.cutfile.20241002145329889_galle.mp4?sv=2019-02-02&sr=c&sig=GOCP0YiRPu2GVzoxp3ZPBqN939Puqm%2Bx%2F389wO0tlP4%3D&st=2025-04-10T19%3A19%3A10Z&se=2025-04-10T21%3A24%3A10Z&sp=r
Upload Date:
2025-03-07T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
9:15 and we actually do start right up at the top of the hour. So thank you all for being here. Welcome to day two of the New Directions seminar. Before we get started, in case you haven't met the amazing Jenny Herbert, she's going to be taking over. We've been co-leads for the last couple of months on the seminar, but she's going to be taking the front seat, and I will be behind her for whatever she needs for the next year going forward.
So please, everyone, welcome our new New Directions lead, Jenny Herbert. All right. Thank you. My first act as New Directions czar: I'm going to read you administrative updates for the day. So let's go ahead and get into it.
I am shorter than Letty, so I'm going to stand back a little bit. All right. Welcome to day two. Thank you so much for continuing to be here and for joining us at New Directions in general. I think what's so special about New Directions is the fact that it's this nice little intimate seminar and not a giant conference.
So all of you being here and continuing to engage is really what makes it so special. Another thank you to our working group volunteers. What's really important to New Directions isn't just the people in this room, but the people who put together all of these awesome panels. So thank you to everyone on this slide. Everyone here has a full-time job and is doing this as a volunteer.
So a big thanks to them. And if you have been here and you've thought to yourself, this is so interesting, but I wish they covered X, let us know. And also, maybe join us. We would love to recruit you. So get in touch if you're interested in getting involved yourself. All right.
Thank you to our event sponsors. Similarly, we could not do this without our event sponsors, namely Cadmore Media, DCL (Data Conversion Laboratory), Dimensions, Origin, and Silverchair. And then, going through my PDF, here we have housekeeping items. The Wi-Fi password should be written on your table.
I think looking at what's on your table will be much more fruitful than hearing me say that, though. The meeting hashtag is #SSPND2024. Please use it so that we can see when you're promoting the meeting. Please silence your devices as a courtesy to our speakers and your fellow attendees. And if you're taking any calls today, please try to do it outside and a little bit away, so that you don't accidentally have the rest of us listening in if you're doing it in the hallway.
Closed captions will be enabled for all sessions. And similarly, if you are asking a question, please try to do it into a microphone, so that our folks online in particular can hear you, and anyone in the room who might not be able to hear can hear as well. There are still spots open for the Library of Congress rare books collection tour.
So at 3 PM today, there's going to be a tour of the Library of Congress rare books collection. If you would like to join and chat with other attendees while you look at rare books, please do. There are still spots, so feel free to get in touch with Susan, who is over there in blue, and she can hook you up. And if you are missing a pair of headphones, please go find Melanie.
She is sitting at the front desk near all the swag. So if you don't have a pair of headphones, or if you have headphones, please make sure that they're still in your bag. So please go check and ask Melanie about them. Once again, another reminder: if you are in person, please speak into a microphone. And if you're asking a question, please identify yourself by name.
That way everyone here knows who it is that's asking an awesome question, and it's also helpful if you share where you're coming from, so we can know a little bit more about the perspective that you're bringing when you ask your question. All right. Virtual attendees, welcome. We're so glad that you're here. Thank you for joining us.
If you need any technical assistance, please post a message in the chat or email SSP at sspnet.org; again, that is sspnet.org. The chat is probably going to be the fastest way for us to be able to help you, though. I promise I'm almost done. There's still time to purchase SSP and The Scholarly Kitchen merchandise, either online or at the registration desk. Please buy merch.
First of all, it looks awesome. Second of all, if you don't buy merch, Melanie has to take all of it home with her. So support SSP, keep Melanie from having a 50-pound suitcase, and buy some merch. Also, there are aprons that you can buy, which is very appropriate for The Scholarly Kitchen. They're not physically here, but you can still buy them and they will ship them to you.
So buy an apron. And lastly, a reminder of the SSP Code of Conduct for today's meeting. We are committed to diversity, equity, and providing an inclusive meeting environment that fosters open dialogue and the free expression of ideas, free of harassment, discrimination, and hostile conduct. We ask all participants, whether speaking or in chat, to consider and debate relevant viewpoints in an orderly, respectful, and fair manner.
For more information on the code of conduct and how to report a violation, see the Code of Conduct tab in Whova, which I think is now in Zoom; you can find it under Logistics. If you are trying to look for that and you can't find it, feel free to again get in touch with Susan, who is over in the corner there, and she can help you. Lastly, a reminder that the annual meeting call for proposals is now open.
The deadline to submit is November 4th, 2024. Go to the SSP website for more information. And we want your feedback. So when you're done here, we are going to send you out a survey. You can also just access it right now through this QR code. Please tell us everything you like and don't like. And if, after submitting it, you find other things you want to share with us, things you're not a huge fan of or things you really like,
feel free to submit it multiple times. We truly want all your feedback, so please share it. And with that, we can actually move on to the content. So thank you. All right. Oh, boy. Can everyone hear me? Great. Welcome.
My name is Anna Hatch. I'm a program officer in scientific strategy at HHMI. You heard from my colleague Michelle yesterday. As my title implies, I'm coming to research communication and scholarly publishing from the lens of the recognition and reward system. Before I started at HHMI, I was the program director for the Declaration on Research Assessment, called DORA for short.
As many of you know, DORA is famous for its call to do away with journal-based indicators for the purposes of research assessment, and instead it really offers structured narratives as a way to capture and describe a broader range of scholarly achievement. So, you know, my work at DORA really centered on community building and creating resources to rethink and reimagine assessment.
But now, as part of my role at HHMI, I'm focused more on implementation, so really thinking about how we put these principles into practice. Part of that is leading a working group focused on assessment, and the other part has been developing a training program on peer review using preprints that Michelle and I have been co-creating.
So that's a little bit about me. For our panel today, simply put, we're going to talk about the ways that research is changing: the ways research is being done, the ways it's being communicated and how that's evolving, and also the ways that research funding is changing too. I'm really excited to be joined here today by Morris Johnson, who's a social scientist at the FDA; by Jennifer Kemp, who's a director at Stratos, or Strategies for Open Science;
and by Jenny Peng, who is the executive publisher at Oxford University Press. So we think, with our panel, we have a lot of different stakeholders within the research landscape covered. But we also look forward to hearing questions from you, and we'll reserve time for that at the end. I know you heard a little bit about me, but I think we also want to hear a little bit about the panelists too.
So, Morris, do you want to introduce yourself and share your background? Sure. Hello. Morris Johnson Junior, just in case my dad is watching. I am a social scientist. I've been in the public health research field for 15 years. Currently, I'm at the FDA in the Center for Drug Evaluation and Research.
Previously, I was in the Center for Tobacco Products. And before that, I was a researcher for a private social science research firm, where most of my clients were the FDA, the Agency for Healthcare Research and Quality, CDC, HRSA, and so forth. So, a lot of social science research. A lot of my work is in qualitative and quantitative research; I'm more on the methods side of things.
Survey development. But a lot of my work is also focused on evaluation, doing qualitative and quantitative research. A lot of my work looks at health disparities, and I have experience from the academic side of things, from the private research side of things, and from the federal government side of things. And I have to say, my thoughts here today are my own. They do not reflect the FDA or the agency,
HHS. I have to say that. So thank you. Hi, good morning to everybody in the room. Can you hear me? And hello to everybody on Zoom. So, like Anna said, I work at Strategies for Open Science, which also includes open scholarship. I just want to be clear that we're inclusive of both those terms.
Before I worked in consulting, I worked at Crossref and Springer Nature and HighWire Press, and I started my career as a librarian, so I feel like I cover all of the biases. Hi, everyone, I'm Jenny Peng. As Anna mentioned, I'm executive publisher at Oxford University Press. I lead our publishing team that oversees our open access and owned journals portfolio.
I also work on open access and open research strategy in the US. Prior to joining OUP, I worked at Wiley. I have also worked at a contractor for the National Institute on Aging and at the Association for Research in Vision and Ophthalmology. So, a blend of experiences as well. I'm also on the board of directors for CHORUS, which sits at the intersection of funders, publishers, librarians, and service providers, helping funders track compliance with their policies.
So thanks for having me. Thank you. So I think the first question I have is going to be a wide-ranging look back over the past decade or so. And I'm wondering, Jennifer, since you have the consultant's view of the landscape, if you could give us your take on what's changed over the past decade.
Yeah so I'm not going to limit myself to just a decade, but I'll try to keep within a few years of it. So one of the things that always jumps out at me when I think about this is kind of the size of our community. So it's one of those things that a lot of us say that, particularly in scholarly publishing, it's a very small world. And I think that's still true.
But I think the tent is bigger and there are more people in it than there used to be, and more people have different kinds of perspectives. And I think we have a long way to go on some of that, but I think that is a change in the right direction. And related to that, I think of some of the conversations that we were having yesterday, some of the topics that we'll talk about today, and how very, very contentious they can be and how very, very contentious those discussions were for a long time.
So some of you probably don't remember back that far. But for those of you who do, you know, some meetings, librarian and publisher meetings, were pretty raucous for a while. And if you look back at the Scholarly Kitchen, for example, in some of the discussions in the comments on posts that are older than, I don't know, a decade maybe, or more, things got pretty heated.
And I think now, I'm sure behind the scenes some of those things still happen. But I do feel like we're able to have conversations, with more people involved from different points of view, that are more constructive than they used to be. So I think that's a positive thing. I'll just mention AI quickly for now; of course, I know we'll come back to it later, and there was a really good discussion on that yesterday.
And COVID, of course, I have to mention that, and the effects on publishing and the increase in the pace and the need to get things out faster. And related to that is preprints, definitely one of the developments of the last 10 years. arXiv, of course, has been around for more than 30 years, but particularly with COVID, though starting before that, there was an increase in preprint servers and in the amount of content that is published as preprints, whether or not it ends up later in journals or elsewhere.
And then related to that, I think, is open access. And I think it is within the last 10 to 15 years, probably, that that topic, which was so contentious and certainly remains so for a couple of reasons, has really taken hold. So it's less a question of if, and more a question of how and when and how much and things like that. And then finally, related to that, of course, are funder mandates in particular.
So for the context of this meeting here in DC, it's of course necessary to mention the OSTP memos, both the Holdren memo, which is a little bit more than 10 years old now, and the more recent Nelson memo. So I think that's where I'll start and stop for now. Thank you. I'm wondering: you talked about this shift in discussion and conversation tone, starting off more contentious but shifting to something more constructive.
What do you think motivated that shift? Yeah, that's a good question. I mean, I think that acceptance, that sort of inevitability of, you know, this is happening. Winning over some hearts and minds, maybe, and people realizing that they can work together to figure things out, even if they don't always want to. But, you know, practicalities come into play at a certain point in all of these things.
And they are and will with, you know, AI and other things, where people figure out: this is actually happening now, so we have to have a response to it; we have to figure out a strategy. And I think that is probably most of it. Hopefully it's more than that, more like what Sarah was talking about yesterday, the difference in an approach to partnerships.
I do think that there is more coming together in the community and less, you know, segmentation among the stakeholders. At least that's my hope. Jenny, you're a publisher, and you mentioned that part of your portfolio includes open access. Do you see that partnership building and changing for the better over the past 5 or 10 years or so, especially around open access?
Yeah, I would say, traditionally speaking, in a subscription-based model, we worked really closely with libraries on that particular model. But we find that, with the acceleration of open access, the set of stakeholders that we work with and partner with, to come up with these business models and to have a sustainable way forward for our journals, has gotten a lot more diverse.
And because of that, there's some additional complexity, I think. And right now, what we're seeing is a lot of experimentation from different communities. And so part of our job as a publisher, in being stewards of our society partners' journal content, is ensuring that we're exploring all of these different types of models and figuring out a way forward that will represent the diversity of our community.
But I think what's been interesting to me: I actually asked somebody who worked very closely on the rollout of compliance with the Holdren memo whether conversations were as tense then as they were around the Nelson memo, and he said yes, if not more so, which was interesting for me to hear, because I do see there being some backlash against publishers, I would say.
And so I think we're still kind of navigating that tension, and hoping that we can be in these rooms, having these conversations, to navigate it. Thank you. Yeah, please. I'll just add, from a researcher's perspective, what I've seen is the impact of open access: now I have to account for that in my budgeting.
You know, if I need to publish something, I need to ask: all right, what journal am I going to go to, and do I have room in my already limited budget to get this published? And so I think that's one of the challenges: when you have to do your research, what's the avenue you're going to have, especially if it's a federally funded project, to get that out and disseminate it?
How many times can I disseminate it? Where can I disseminate it? Can I cut it into different angles, or do I have to just pick one, if I only have the budget for one publication? Absolutely. Thank you for that. Any other reflections we want to make over the past 10 or so years? All right.
We're ready to start looking towards the future. OK, so, Morris, in the next 15 years, what do you predict will change or see changing? So this is a very loaded question. You know, I think, as was noted, the use of AI and how it's being applied is going to have an impact from the start of a research project to how it's disseminated. I think we're trying to figure out how to account for it, how to note when you used it and why you used it.
Do you believe in the methods of that algorithm for that particular AI? Is it transferable? If somebody else wanted to do that same exact project, would they get the same results? We're trying to understand that now with this application, because I think right now the understanding of AI is still a black box: what's going into it,
and what are we getting out of it? And so, trying to understand how to leverage that technology to develop the research questions, conduct your research, and then disseminate it, and how you account for it at each phase, is going to be something we're going to be grappling with over the next few years. Yeah. Do you have any ideas about how to account for that?
So, you know, I think being up front and just noting that you used it is the first step, just being accountable and saying: we applied this framework. You know, say I'm doing a systematic review and I have over 100 articles. Did I use AI to summarize those articles to start, or did I go through each one and write it up? That's something readers should know.
Because having three people on the team review and summarize, and then coming together to see whether you have the same findings, versus using some technology where you put all your inputs in, get an output, and then maybe edit it: those are two different approaches. You could get two different interpretations. And being clear about that, I think, is key to doing the research.
And from a publisher perspective, or from a reviewer perspective: are you OK if the AI was used to do that? You know, I think there's going to be a tug and pull going forward. And I've seen, in my experience trying to put papers through review, that it is a mixed bag who's going to be your reviewer: whether it's somebody who's very focused on methods and wants to question every method you used, or somebody who's more conceptual and asks, did you test this?
Did you look at this? Did you look at this? And so I think we're going to get a mixed bag going forward, and I think the best way to address that is just to be straightforward, noting when you are using it and how you're using it. Absolutely. And Jennifer, I know that you think a lot about attribution. Do you want to speak on this?
Yeah, I do, actually. I want to follow up on two points there. First, maybe we can poll the people in the room. So first, can you raise your hand if you're a publisher? Because I know not everybody is. OK, can you keep your hand up if you have a policy for authors about acknowledging use of AI? All right. Excellent, that's great.
That seems like it's almost everybody. So yeah, we were talking about this when we were discussing the panel: there are a lot of policies about acknowledging use of AI. And like you said, you want to be transparent about it. What level of detail is needed? How are the tools named? Is there a standardized approach to this? And standards came up yesterday, as of course they always will in these conversations.
So it sounds like most of you folks are familiar with CRediT, the Contributor Roles Taxonomy. Is there an option to extend that to include AI, for example? How will citation style guides address these issues? So I think it's one of those things where everybody wants this information included, but moving towards something that is a little bit more standardized, maybe machine-readable, interoperable, and getting that into the metadata is also going to be important.
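As a purely hypothetical illustration of what such a machine-readable AI-use acknowledgment might look like, here is a minimal sketch; the schema and field names below are invented for this example and are not an existing standard.

```python
# Hypothetical sketch only: a structured, machine-readable AI-use
# acknowledgment that could travel with article metadata. The schema
# and field names are invented for illustration; no such standard
# currently exists.
import json

ai_use_disclosure = {
    "tool": "ExampleLLM",             # hypothetical tool name
    "version": "2024-06",             # which release of the tool
    "contribution": "summarization",  # what the tool was used for
    "stage": "literature-review",     # where in the workflow it was used
    "human_verified": True,           # whether authors checked the output
}

# Serialized this way, the disclosure could sit alongside CRediT-style
# contributor metadata so that indexers and aggregators can query it.
print(json.dumps(ai_use_disclosure, indent=2))
```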
But the other thing I want to follow up on, to this point, is that I think one of the good outcomes of all these discussions about AI, or at least the potential for one, is this focus on accountability and transparency in general. So we're all very interested in understanding when AI is used, and how, and things like that. And there's no reason to limit that focus just to AI, right?
There are, you know, issues of retraction. So this is something that has maybe changed over the last 10 or 15 years, but also not that much. There are still a lot of retractions that aren't done at all, or aren't done in a timely way, and there's still some discomfort around the idea of retraction. So while we're taking this opportunity to talk about AI and transparency and just getting things onto the record, it's good to take the opportunity to talk about things like retractions, and other information that can be made available, as well.
Thank you. And Jenny, I know in our earlier conversations, when we were figuring out what different tools can be used with AI, you had some thoughts. How are you thinking about that as a publisher? So, the perk of being on the second day of a conference is that you get to hear what came before, but also you aren't the first one to say it.
So I think there are a few things, and it will sound a little bit repetitive for people who were here yesterday. But I think as a publisher, in thinking about scientific communication particularly, when we talk about funding mandates that are trying to encourage equitable access to scientific information, I think about this access component of it.
And what does that actually mean, right? Because if you have access to the original article, that doesn't necessarily mean that the public finds it accessible, whether digitally or from a comprehension standpoint. And so, in thinking about how we actually democratize access to knowledge, there's huge potential for the way that AI can summarize and distill key elements of scientific research to make it more comprehensible for the public.
Also, the publishers in the room probably feel very strongly that the research integrity issues we're seeing are monumental. And so, what are some of the safeguards that we can put in place using AI to help us, both from a prevention and from a detection standpoint, to minimize some of the labor and administration related to that?
Absolutely. Thank you. I'm wondering: there's a lot of attention, of course, that's rightfully focused on AI. But besides artificial intelligence, what else do you see changing over the next 15 years? Do you want to start, Jennifer? Sure, I can start. I think, and this does relate to AI,
earlier sharing in general. I mentioned preprints earlier, very much driven in recent years by the COVID pandemic; I think that will accelerate, as it should, and not just preprints, in the sense of writing up the text of research, but also data and code and software. And that relates to AI, because if you're going to acknowledge that AI was used, does that become a software citation?
You know, data citation and software citation are still difficult things for everybody to wrap their heads around and decide what the best way is to do them. But I think that's very necessary, and the kind of thing that will increase; at least the attention on it will increase. So again, maybe this is an opportunity, while we're talking about AI and what goes into that, to put a little bit of that focus on data and software citation.
The other thing that I think we'll see more of, and I hope that we see more of, is again related to data, and that's FAIR and CARE. So, you know, we've been talking about the FAIR data principles for a long time, and I think that there is a lot of interest in them. I see a lot of interest in that, but it is hard to assess. Right? It's hard for a lot of publishers and other people to assess:
is this really FAIR? How do we assess that? And I think CARE, the CARE Principles for Indigenous Data Governance, is related to that, but less familiar, and one of those things that, again, I think people see and they think: OK, this is something else that we have to add to our plate. This is something that's of interest, we think it's important, but we're not really sure how to do it.
So I think that all of these things are related. They do tend to be in the data realm of things, but I think they are related to getting information out there earlier and to putting some focus on these related outputs. Being able to tie them together in the metadata is very important. Right? You want to know the whole breadth of outputs from your funding: here's the original data, maybe there's a preregistration, the code that was used, the AI that was used, the preprint, the journal article, and getting all of these things tied together.
Because without that, they are sort of siloed, and it's hard to see the full breadth of all of the work that went into this and what all the outputs are. So I would be interested to know if there are people who have experiences that they want to share, when we get to the Q&A, particularly around FAIR and CARE. Oh, sorry.
Could I just jump in here? Actually, this is a really good opportunity to bring up an initiative that OUP is partnering with Silverchair on, called Sensus Impact. Currently we're just piloting it with OUP content, but the goal is to aggregate information from multiple publishers and multiple platforms to deliver impact metrics back to funders.
Right? Because there's information that helps them track compliance, but things like usage metrics, altmetrics potentially in the future, data reuse, things of that sort, it's all kind of segmented right now. So this is a neutral industry initiative to try to get all of that information delivered back to the funders, to be able to really assess, you know, what is the value of this initiative versus another.
Does the version of record deliver more than another version? These are things that we're hoping this dashboard can answer. But I also wanted to comment really quickly on FAIR and CARE and things of that sort, because, you mentioned what the last 15 years looked like and what the next 15 years look like.
I think in the last 15 years, we did really see an emphasis on the opening up of the research ecosystem, you know, not only from an open access standpoint, but in international collaborations within the research community. But as geopolitical dynamics change, we're actually seeing some closing down of some of those collaborations and closing down of the ecosystem.
I think part of that closing down is that previously there was this emphasis on centralized resources, but now, more and more, there are localized resources and infrastructure being built. What makes this really challenging is that if it remains localized, then it isn't necessarily discoverable to other research communities. So interoperability, I think, is really important in thinking about this, and flexibility.
Thank you so much for bringing that point up. Yeah, it's not enough to be open if it's not also discoverable. And I want to jump back to one of the points that you made, Jennifer, which caught my eye because I've been thinking about this a lot from the lens of a research institution: we want to know about research outputs, and not just the journal article. Articles are very important, but we want to know what outputs get created along with the article.
And I think what would be great, and I'm putting this out sort of as a call for those who have the technical skills that I do not, is a citation format that's able to connect articles with the outputs that are the foundation of the article. So a citation that can connect any data sets to that article, or protocols, or open peer reviews, would be hugely helpful, I think.
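As a rough sketch of the kind of linked citation record being called for here, assuming a structure and DOIs invented purely for illustration (no such citation format exists yet):

```python
# Rough sketch of a citation record that ties an article to the
# outputs underlying it. The DOIs and the structure are invented
# for illustration; this is not an existing citation format.
article_citation = {
    "article_doi": "10.1234/example.article",
    "related_outputs": [
        {"type": "dataset", "doi": "10.1234/example.dataset"},
        {"type": "protocol", "doi": "10.1234/example.protocol"},
        {"type": "peer_review", "doi": "10.1234/example.review"},
    ],
}

# A reference list built from records like this could render the
# article citation followed by its linked outputs in one place.
for output in article_citation["related_outputs"]:
    print(f"{output['type']}: https://doi.org/{output['doi']}")
```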
I totally agree, and I have had this discussion with her; it's really interesting. I don't know how much you have to share of what your thinking is, but it seems like one of those things that's almost so basic: why weren't we always doing this kind of thing? And I completely appreciate that
there are very practical, logistical, kind of cultural standards for different disciplines and things like that. But I think it is a really interesting thing. You know, you're looking at a list of references, a bibliography, and you see the citation, and then you realize: oh, OK, this paper actually had this data set, and things like that. It's about making those things visible. And, you know, I mentioned the metadata.
That's hugely important. You know, that's more for the machine users; but for actual readers looking at a paper, to see that all in one place, I think, would be really valuable. Absolutely. And I think there are two benefits to having a citation that can connect different related research outputs together. One is exactly like you say:
it improves research communication, particularly for researchers who want to take a deeper dive, or want to be able to use a data set, or look at a method and try it in their own lab. But also, I come from the recognition side of things, and it's super useful to be able to include all of these outputs in a citation, because it allows for more granular credit, because the authors might not be the same on every output.
So when you're able to link and rearrange things in different ways, you're able to convey a more nuanced picture of who did what within a research project. I just want to add, to the previous question about the changing landscape: if we talk about data and making data available, I think that's great.
And, you know, particularly if somebody is using a public data set, knowing where that came from matters. But I think one of the challenges is also the many proprietary data sets out there. What do you do with those? What can be shared about that data? How does somebody follow up with that data? Because there's a lot of funding that went into it that's going to keep it proprietary for a while.
But even if the data is available, I think one of the challenges is that you could write the best methods section of your paper, and it doesn't matter how transferable it is without actually seeing the program. The hardest part of the data is the data cleaning. And so, getting access to the programming that was used for the cleaning is something that you don't see as often, and that I would like to see more often, particularly if it's a public data set and you're putting out those results, because the decisions on how you clean your data and what you're going to do with your data really impact the outputs that come out of it.
And I don't see that many people sharing the programs they used to clean the data, to show what happened, so that you can repeat it on your end. If I could just jump in briefly, just to make sure that we're clear on being inclusive across disciplines: we were talking in our prep call about qualitative data, not just quantitative data, which has its own additional set of challenges.
And also, you know, not just journals and preprints, but also books and other outputs. One of the biases I didn't admit to up front was that I'm very much a books person in many ways. So if you are looking at your output at an institution level, and you have an author who contributed a chapter to a book but isn't listed as the editor of the book, and the book doesn't have chapter-level information in the metadata, for example, then it's kind of hard to know that you have faculty who contributed to that book.
So, you know, different formats and content types have some of their own challenges, but they're very much in the mix in all of these topics here. Yeah, just to add to that as well: I think a lot of what Morris is saying about data sets and research data actually applies to publisher data as well. We're talking about all of these different types of cool AI tools that are going to ingest all of this information, but actually, a lot of indexers and aggregators will tell you publisher metadata is still very much in need of some cleaning up.
So there are certain actions that publishers can take right now to help anticipate some of those tools of the future: using things like persistent identifiers and, you know, making sure that your XML is tagged correctly and clearly.
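As a minimal sketch of the kind of cleanup being described, assuming a record structure invented purely for illustration, a publisher might check that metadata records carry well-formed persistent identifiers before they reach indexers:

```python
# Minimal sketch of the kind of metadata cleanup described above:
# checking that records carry well-formed persistent identifiers
# before they reach indexers. The record structure is invented
# for illustration.
import re

DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")
ORCID_PATTERN = re.compile(r"^\d{4}-\d{4}-\d{4}-\d{3}[\dX]$")

def validate_record(record: dict) -> list:
    """Return a list of problems found in one metadata record."""
    problems = []
    if not DOI_PATTERN.match(record.get("doi", "")):
        problems.append("missing or malformed DOI")
    for author in record.get("authors", []):
        orcid = author.get("orcid")
        if orcid and not ORCID_PATTERN.match(orcid):
            problems.append(f"malformed ORCID for {author.get('name')}")
    return problems

record = {
    "doi": "10.1234/example.article",
    "authors": [{"name": "A. Author", "orcid": "0000-0002-1825-0097"}],
}
print(validate_record(record) or "record looks clean")
```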
Absolutely. So, we've been focusing a lot on research outputs, and I think it's time to make the natural transition to how we recognize research outputs, because of course sharing outputs and the recognition and reward system are intimately connected with each other. So, looking at you, Morris, as the researcher on the panel: how would you change research recognition and researcher recognition? This, I would say, has always been the hardest part of the work you do, throughout my career. From the contractor side of things, if you were hired by a client to do something, how much can you put yourself forward and recognize the work that you did, compared to putting your client's name on it?
Who were the players involved on all sides of things, from leading the project, to managing the project, to doing the data cleaning, to doing the data programming, to writing the actual report, and then writing the manuscript that came from that report? There are so many players involved. I think, one, from a researcher perspective, if it's understood that a manuscript is being developed from a project, you need to
have that conversation at the beginning: who are going to be the contributing writers of this paper? I don't think that conversation happens often enough at the onset. I think if you do that, you mitigate a lot of the challenges that happen at the back end. And then I think we need to have a better understanding of what we can put in our acknowledgments.
Who do we acknowledge, and when do we acknowledge them? I feel like there's always a focus on acknowledging your funders, and, if it was a collaborative project, acknowledging the organizations that helped you with that project. But you never really see acknowledgment of the person who did the data work, or who was the project manager throughout that whole project over the past five years.
When do they get recognized? How do they get recognized? Should they be recognized? So I think having an understanding, trying to fully flesh out who has been involved, and when and where, and trying to better articulate that in these manuscripts, maybe at the back end, maybe having a more expansive acknowledgments page, so that you can say: special thanks to X, Y, Z, who did the programming for this; a special shout-out to this person who actually united these 20 care coordinators to come together and work on this project.
There's a lot of work that goes into research, and I think it always just comes down to who is the head, by default. But that doesn't mean that person necessarily did all the work; they just led it. And I think there needs to be a better balance between those two things. And I think the best way to start is, at the onset, before the project starts, to say who's going to be the lead on these papers and what the intention is.
Because that's going to help things on the back end. And I've seen these clashes happen in all settings: in academia, within the federal government, within private consulting. Somebody is always going to feel burned at the back end. And how do we try to mitigate that? If somebody is feeling burned, I don't think there's enough leeway to say: I understand that,
let's try to accommodate your perspective. I think there's this old-school perspective of: I was the head PhD, I'm on it, and that's it, conversation over. And I think there's a generational push going forward to be more inclusive. And if we're going to be more inclusive, we need to also recognize everyone throughout the whole process.
I think that's such a great, excuse me, such a great plea for something like the Contributor Roles Taxonomy, CRediT, because it does exactly that. What I'm not sure of, and I'd love to hear from anybody who knows more on this, is what the uptake is on using that taxonomy, and whether there are any analyses out there of what the common roles are, maybe which roles that are an option in the taxonomy are used less, and how familiar people are with it.
Because that's exactly the kind of thing that it's designed to do. And it could, in theory, be expanded to include additional roles, not necessarily just publishing roles, like you say; I know it does include some of that, but it does seem like there's potential for expansion. So I am curious whether there are any analyses out there.
I'm sorry, I didn't get a chance to look for that before the panel. Yeah, I think, you know, it's interesting, because we use the journal article as the baseline, right, for the research output, and it's still very much the main form of currency within the research ecosystem. And a big part of that is this culture of publish or perish. And a lot of us, I think, because we all belong to the same ecosystem, can't really explore other forms of research outputs for researcher recognition, perhaps as much as we'd like, because of that hierarchy.
Right. And I think that because there's such an emphasis on the research article, and on authoring research articles, with various incentives, you see a lot of research integrity issues: gift authorship, ghost authorship, who gets to be listed as an author. I think it's also very important to consider this beyond that, in thinking about international collaborations and how this also fuels neocolonial research practices, with helicopter research and things of that sort.
So, you know, I think at some point we really need to keep talking about research assessment and how we change that. Absolutely. And, you know, I can share: from my perspective, we don't care about the venue where our researchers publish. We want them to publish wherever they would like to publish and wherever they feel comfortable publishing. For assessment purposes, we care about the discoveries that they make.
That's what's important to us. So, to send a signal within our community, we removed journal names from the bibliographies for our researchers. And this was something that we rolled out slowly, over about 18 months. First, HHMI holds science meetings for our scientists every so often, and for presentations at those science meetings, we asked presenters to remove journal names from their citations and use the PMID instead.
We're able to use PMIDs because we're a life sciences institute. I don't know the exact number, but I feel like 99.99% of our publications are indexed within PubMed, and if they're not, we let researchers use DOIs. So, yep, on talk slides it would be authors, year, PMID or DOI. And we got generally positive feedback from the community.
So that was expanded to poster slides. And then, once that change was in place, we started integrating it into our research assessment processes. There are two ways that we support our researchers in this. We created a citation style in Zotero that replaces journal names with PMIDs. And in our online system, we created a web form for inputting bibliography entries that has the journal name automatically removed.
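HHMI's actual Zotero style isn't shown here, but as a rough sketch of the substitution it performs, assuming a record structure invented purely for illustration:

```python
# Rough sketch of the substitution described above: render a citation
# with a PMID (or DOI fallback) in place of the journal name. This is
# an invented illustration, not HHMI's actual Zotero style.
def format_citation(ref: dict) -> str:
    authors = ", ".join(ref["authors"])
    # The journal name is deliberately omitted from the rendered string.
    if ref.get("pmid"):
        identifier = f"PMID: {ref['pmid']}"
    else:
        identifier = f"doi:{ref['doi']}"
    return f"{authors} ({ref['year']}). {ref['title']}. {identifier}."

ref = {
    "authors": ["Author A", "Author B"],
    "year": 2024,
    "title": "An example discovery",
    "pmid": "12345678",
}
print(format_citation(ref))
```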
And with this, we want to send a signal to our researchers that when we're doing assessment, we're not thinking about this journal hierarchy; we're thinking deeply about the research that you've done. But for our reviewers, getting materials without journal names on them also sends a strong signal that HHMI does not care about where the researchers are publishing, because we really want to know what the discovery was.
So it reinforces what we're telling them at the start of the review meetings. Can I respond to that? Yes, of course. So, as a publisher, I mean, I understand the need for experimentation, and I understand the philosophy behind removing the journal name. But just based on what we see from the publisher side, journal brand matters.
I think there was a slide yesterday, perhaps in the competition, cooperation, and consolidation conversation, yes, about the Nature portfolio, the pyramid of that portfolio, and how these new entrants into subject disciplines have become really formidable competitors for established journals.
And that's because of the power of the Nature brand. And so I think, you know, publishers will go where researchers are. Right? If there's a change in how they want to publish, we'll meet them there. But what we see is that journal brand and reputation take a long time to build; it takes a long time to build that trust, and that journal brand becomes a trust marker for the research community.
And so oftentimes those are the journals that receive the submissions. The researchers are voting with their feet, in some ways, to show that the journal does still matter. Yeah, and I think, you know, our motivation in doing this, like I said, is that we want researchers to publish where they would like to publish. We don't want them to feel intense pressure to publish in certain places.
We want them to feel comfortable publishing wherever they would like to, and to know that when they're under assessment at HHMI, it's the research that matters to us. And I think the tide is turning a little bit, too, because you hear strong desires and calls to action from the community: we want to be judged more on the work that is produced, with a broader picture of research achievement than what's captured in a brand or in the journal impact factor,
what's captured in a number. And numbers feel good, right? They're concrete; they're very easy to compare. But there's a lot behind a number that you don't see and that it's not capturing. So I think these structured narratives give an avenue to describe contributions in a more nuanced way.
And this is a roundabout way to say that there are tons of global initiatives right now that are working on rethinking assessment. Certainly there's DORA; that's the one I'm most familiar with. But there's also CoARA, which now has more than 400 universities in Europe working on research assessment reform. There's FOLEC in Latin America, also working on assessment from the lens of getting away from the journal impact factor, and from the sense that a lot of work from South America isn't indexed by Web of Science and the other major indexers.
There's HELIOS, there's tefen, there's petai; there's a lot of organizations. So this movement is growing, and I think it's something that's going to have to be worked on through partnership with all stakeholders. OK, we have 10 minutes left in the panel. I think there's a lot that we can continue discussing, and I see someone already up for questions.
So let's open it up, please. Schultz, I think, and this is directly to you. So, well, I don't want to be too contentious. You talked about this idea of removing the journal from the submissions that you're receiving at HHMI, and I understand that, because you're trying to look at the research. So let me take that a step further, because the journal isn't just a name, right?
We're making the claim that there's work that goes into reviewing that paper and publishing it. Right? So why even care for them to submit it to a journal? Why not just have them submit the paper directly to you, since you're doing a review of the work? What's the point of having them submit it to a journal when you're doing a review of that research independently anyway? So I'm wondering: why don't you just take it a step further, or are you planning to take it a step further?
Really, really good question. I think part of that comes down to the fact that we want our researchers to share their research how they would like to share it; authors should have control of that. My colleague Michelle, who you heard from yesterday, is really thinking deeply about research communication and sharing at HHMI, so I think she can give you a more nuanced answer.
But how we can share information is definitely top of mind for us. Yes, Kasdorf. I'm going to ask a follow-up question instead of the first question I was going to ask, because you answered that one already; it had to do with how citation isn't enough. Just knowing that something is cited isn't enough; what you need to be able to do is actually find that thing and use that thing that is being cited. So you did.
Thank you, Jennifer, for picking up on that. So the implication is that we're actually getting closer and closer to being able to reproduce the research, because now you potentially have access to all the things you need to actually do that, except the incentive. Now, it's completely reasonable that the incentive of the community is contributing to new knowledge, right? That's why we do what we do.
But is there any progress on this kind of intractable problem: even though we're now getting to the point where it's theoretically possible to reproduce some research, who's actually going to do that? And the reason that I'm bringing that up in this particular panel is that it strikes me that probably the only party that can deal with that is the funder. In other words, if you can actually fund reproducing the research, then there's somebody who's got an incentive to do it.
But without that, it's just not going to happen, even though it potentially could happen. And certainly in your field, you know, the biological sciences, it can be really critically important, because that's the ultimate validation of the research. Right? Peer review is great, but it can't compete with: we actually tried this, and did we get the same result or not?
I mean, I think there's a massive amount of duplication in the research ecosystem, right? And some of it we're not even aware of, because those negative results don't get published. The corollary question I was going to ask: what about negative results? Yeah, and I think this goes back to research assessment: what is a valuable research output?
Right now, a lot of the time the focus is on novelty or impact. But some of the most impactful things are ones that just lay the groundwork for where to explore various research topics in the future. And I think one of the recurring patterns that has come up these last two days is around how we change a culture.
And that's very challenging. I think oftentimes one of the questions to ask is: who isn't in the room in those conversations? Funders are a key aspect of this, and so are institutions and publishers; and sometimes the office of research, which helps translate these things and educate the research community on them, is not part of the conversation. And I think what's really, really important is that we all sit in a room and have these conversations.
And I'm speaking from a social science perspective, not life sciences. But I would say my training, as part of a doctoral program, was: what are you going to contribute that's different, right? It's never: I have an idea, but that's been done before; I have this other idea, but that's been done before. It's in your training that if you're going to do research, it needs to be new.
And that speaks to the culture. You want to come up with a project, and it's like: I have this idea. Oh, that was done already. But mine is slightly different. But it's not different enough. And I think that is the key thing we need to figure out. We need to get back to the old-school scientific process. You know: I've finished it.
Let's see if we can repeat it. Repeating it is not a stressor. And I agree there needs to be more of that; I think we have to have funders really get back to that. We need to have a threshold before we say, let's get a new angle. And I just wanted to note, from a cultural perspective, that I was taught to always get a new angle, even though I might have a similar idea to one somebody else had.
If it's not different, it's not going to be published. You know, I need to find something that's a new result. And if it's null, is it even worth trying to publish? We need to fix that culture. My dissertation had null results, and it took knowing how to flip that and say: this is actually a good result. I think we need to focus on this: if you do have a null result, don't think that you didn't do good research. You did good research; say what you did, learn how to articulate what you did and why, and interpret those results and translate them for the community.
But I think we're always stuck on needing to get a significant result, and that's just a challenge that we need to overcome. I think the importance of this culture change also hinges on the fact that if AI is to be successful, there need to be negative results within the training data it's using. And wouldn't that be useful as a sort of article type? You know, you have your editorials, and people will look for review articles; what about a type that is null results, or a study that was reproduced?
Would that be a useful thing to have in the ecosystem for people when they're doing literature searches? Absolutely. We have about 1 minute left, so a really quick question. Yes: Matthew Salter from Athabasca Consulting. I'd like to ask a question about data. On one of the projects I'm working on, we find, maybe unsurprisingly, that it's hard to incentivize researchers to collate their data and make it available: they have so much else going on, and they don't get credit for doing that from a lot of their institutions.
And there are a number of ways you could address that. But I'd particularly like to ask the panel and the people in the room for their thoughts, and maybe we can discuss this over coffee or whatever, about the idea of data authorship in a paper: having a separate category that recognizes the people who contributed the data, if they were different from the people publishing the paper, or people within the study who made a big contribution through data.
And whether you think that's a good idea, and what publishers would think of that. It would be really helpful to have some thoughts on that. I just want to jump in: I agree, and that's kind of what I was getting at earlier about even the programming, and just knowing who did this programming and why, and understanding the decisions behind the data. There's a lot of work that goes into things, and that attribution is part of the conversation.
And I think we focus more on just finding that result and publishing it, and not on speaking to the process of doing the research and getting those pieces out. And I did want to note something else that's related: I do think we need to also focus more on, before a research project is conducted, putting out the research protocols, publishing those and having them available, so that people are accountable: hey, you said you were going to do this.
Did you do it? So that if there wasn't a result, you could actually follow up on it. And I think that's a piece a lot of people aren't doing; they're not publishing the research protocols, they're just putting out what they did. And I think we need to fix that as well. I think the conversation on this is critical as we look at the next 15 years.
Right? It's going to be all about data. So we've got to clean up our data, be ready for that change, and make sure that it's discoverable. So, looking back at FAIR and establishing those best practices early on to make that happen. Absolutely, I think that's the perfect sentiment to end the panel on. Thank you so much to all the... I want to make one statement, I'm sorry.
Yeah, go ahead. When we were talking about the changing landscape and what we can do: I also think about the translation of the research. We're talking about the data, and we're talking about other materials that are being developed, but I think what we can also see in the next 15 years is putting together more two-page summaries.
I think we always focus on the abstract, and that doesn't mean the community can translate that either. Creating infographics that translate what these results mean and what they're saying, so that somebody in the community can follow them and say, OK, I get why this research happened and what the results were, is another push that we can make in the future.
Sorry to cut that off. No, absolutely. I saw a really great example of that from the Scholarly Communications Lab, based in Canada, where they had taken their research article and had an illustrator create a one-page illustrated graphic walking through it. And it was really easy to read. I still read the article, but I was able to orient myself much more easily when I was reading it.
OK, I know we're 1 or 2 minutes over, so I just want to thank the panelists for a really engaging discussion, and thank the audience for their questions as well. Thank you so much, everyone. Thank you. Thank you.