Name:
From Impact Factor to Impact Framework: Transforming How We Evaluate Research
Description:
From Impact Factor to Impact Framework: Transforming How We Evaluate Research
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/96ffdf63-37d1-4ac4-92ab-539c20f2b847/thumbnails/96ffdf63-37d1-4ac4-92ab-539c20f2b847.png
Duration:
T00H54M58S
Embed URL:
https://stream.cadmore.media/player/96ffdf63-37d1-4ac4-92ab-539c20f2b847
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/96ffdf63-37d1-4ac4-92ab-539c20f2b847/GMT20251008-153020_Recording_1920x1080.mp4?sv=2019-02-02&sr=c&sig=Grm0tC0NNgNQhPQ7dWRd32Mo9fTUF%2B8yujiJ1lTzpO4%3D&st=2026-04-03T12%3A15%3A44Z&se=2026-04-03T14%3A20%3A44Z&sp=r
Upload Date:
2026-04-03T12:20:44.0614457Z
Transcript:
Language: EN.
Segment:0 .
Go for it. Go for it. Hey, everyone. Hey, everyone. We're going to get started with our final panel of the New Directions Seminar. So if you could please just take a seat, and we'll get started with our last panel of this 2025 New Directions Seminar.
Thank you so much for your attention and participation, both in person and online. It's been a pleasure chatting with all of you and connecting throughout these two days, and we're really pleased to be your final panel of the day, so we'll get started. We're going to be talking today about moving from impact factor to impact framework. That was meant to be a provocative title, and we're really going to dig into more than just the JIF.
But it's something that is still primary as far as current journal metrics go. It's going to be a roundtable, panel style, so there will be thought-experiment discussions and also a dissection of the current metrics and methods being used in scholarly publishing. I'm one of your co-moderators, Jamie Devereaux with SAGE Publishing, accompanied by John, and I'll let him introduce himself and we'll go from there.
Thanks, Jamie. And thanks to everyone at SSP for organizing such a great conference, and to each of you for hosting us. I'm John Gerstell, the publishing director at the American Political Science Association, and along with Jamie, I was the co-organizer of this panel. I'm very happy to hand it over to Marie McVeigh. Marie McVeigh; I'm an ethics expert at Elsevier.
As recently as a month prior to that, I was at Mary Ann Liebert in journal submissions, peer review operations, and publication integrity; prior to that at Clarivate, doing product development and journal selection. And that goes all the way back to a stint at Elsevier doing CiteScore strategy, and prior to that, many years at ISI/Thomson Scientific doing JCR production and quality control, as well as journal selection for the foundational parts of Web of Science.
That's it. We have a wealth of knowledge on this panel, so I encourage you to think about questions as we go through our prepared topics, and we're going to invite the group to contribute near the end. But I'll turn it over. We have three of us in person, and we have three virtual panelists as well.
So Ana, may I turn it over to you for your introduction, please? Absolutely. Thank you so much, Jamie, and thank you so much for the invitation to be here. I'm really excited to be part of this discussion. I am a program officer for scientific strategy at the Howard Hughes Medical Institute. In this role, I work on initiatives to improve research integrity and accelerate discovery through innovations in academic publishing and researcher assessment.
Previously, I was the program director for the Declaration on Research Assessment, called DORA for short. So I come to our team and to open science through the researcher assessment lens. And that's, yes, that's my introduction. Thank you for having me. Thanks, Ana. Dmytro? Hello, everyone, and thank you for inviting me to this forum.
I'm Dmytro Ilchenko from Clarivate. I'm leading the Institute for Scientific Information, which is currently responsible for various metrics, methods, evaluation techniques, and other R&D across our business. Before Clarivate, I did rankings at SHS, so I'm very familiar with all the pros and cons of whatever a rank or ranking is.
And before that, I held several academic positions in universities, including researcher roles and deputy vice chancellor roles. So I'm happy to share my experience and happy to contribute to the discussion today. Thank you so much. Over to Meredith. Hi, everyone. I'm excited to be joining you today, and I wish I could be there in person, but next year. I'm Meredith Mesurier, and I'm currently the executive vice president for the publisher market at Digital Science.
This is a role I've been in for just about half a year, and my job is to make sure Digital Science is doing everything it can to support publisher needs and goals. Prior to that, I was in editorial and publishing roles for quite some time. I started my career at Elsevier doing content handling in an editorial role, spent quite a bit of time at Springer Nature, mostly focused on portfolio management and strategy, and was also at T&F doing portfolio strategy.
Really excited to be here today. Thank you. We really do have across-the-board experience and a wealth of knowledge, as I mentioned, so we're excited to jump in with our questions. But just before that, we have a question for you all that we're going to pose and have populate in a word cloud that we're going to bring into the discussion. So Susan and Bob, if I might ask you to put that up.
So the question is: what does achievement in scholarly careers mean to you? The results should populate in a kind of keyword word cloud. And there's a Menti code at the bottom. Do people need that? 81532813. Do we need to repeat that for everyone?
Are we good? OK, OK. Sorry, and now we've come to Spain. We're going to let that go in the background. Oh, here we go. Look at that, beautiful: impact, benefiting humanity. Awesome. So please go ahead and answer that.
We'll jump in with our first prepared question and let everybody get their responses in, and we'll bring that into the discussion as we move forward as well. So the JIF, the journal impact factor, has truly been a cornerstone metric for decades. And 1975, I believe, was the first Journal Citation Reports, so we're coming up on the 50th anniversary of that report this year. Over many, many years within our field and industry, it's been evolving in how it's presented and contextualized. So our first question, and I'm going to pass it to Marie, is: from your perspective, what role should the JIF continue to play in research assessment, and how can it coexist with newer frameworks and indicators?
I'm going to say that, to me, there are two parts to this question. There's the question of journal metrics in assessment: what they have been and what they will become. But I think there's a better and more interesting question at the heart of this, which is about the role of journals in assessment. Journal metrics have been used because metrics about journals have been useful.
Journals are useful. Journals have been a critical part of how research is structured, discussed, and distributed. But that is changing. Journals are a broader and a different thing than they were in 1975, thank you for bringing that in. They're not subscription-only. They're not content-driven only. They are not the only creators or hubs of topics and of the communication communities around those topics.
They now include platform-level collections of tens of thousands of articles. And so the journal itself tells you less and less about the individual article, because it's a bigger and bigger thing. Citation metrics have been used because citation metrics have been useful. Citations had been an unobserved signal. They were a thing created by scholars in the process, and for the purpose, of doing scholarship.
And that is also changing. Look at the title of this whole meeting, which is that the incentives have changed, and people are changing their behavior as driven by that incentive. The overwhelming majority of citations are still exactly what they have always been: they are a call-out. They identify the shoulders of the giants that you're standing on.
They are pointers to critical prior and concurrent works that are shaping your thinking. But I can speak from the trenches of journal selection at ISI and Thomson and Clarivate, as well as from JCR production and data development, and from my roles in publication integrity at Mary Ann Liebert and now at Elsevier: this is changing, because incentives are driving behavior. So I want those facts to drive our behavior.
And I would argue that the journal impact factor and other journal metrics are not a cornerstone anymore. I would call the journal impact factor kind of a landmark. And I want to give you a fun fact. In the city of Philadelphia, there is a 37-foot-high bronze statue of William Penn standing on top of City Hall. It's a local landmark.
Interesting word. For about 90 years, it was the tallest spot in the center of Philadelphia. There was an unwritten agreement that no building should be higher than the hat of William Penn on top of City Hall. That was how we navigated: you could find your way through Center City because you always had that high point on the horizon.
That is changing. There are now a dozen or so buildings around William Penn, within a couple of blocks, that are higher than that. It doesn't change that the statue is there, or the height that it has always been, but it changes how we can use it, and it changes what other markers are around it that can be used. A really nice analogy.
Dmytro, I'd like to bring you into this question. I know we've talked a little bit about broadening into this framework perspective, and Marie has laid some groundwork for that. Let's hear from you now. Exactly, yeah. And as Marie mentioned, the journal is not the only metric at all. We have always been advocating for a profiles-not-metrics approach as a principle of responsible research evaluation.
And as part of this effort, last year we at the Institute for Scientific Information published what we call a framework for evaluating the broader societal impact of research, which can provide a holistic view of an institution, of a researcher, and, in the context of this discussion, of a journal as well. So let me discuss and explain a bit more about this framework.
So the framework itself was created to account for significant challenges in evaluating the societal impact of research, and I will briefly touch on three major challenges. The first one is that there is no one-size-fits-all approach. Societal challenges are diverse: pandemics, environmental degradation, climate change, growing socioeconomic inequalities, et cetera.
This means that measuring impact in each area needs its own unique approach. An indicator that may be a relevant signal of technological impact, for example patent citations, will be inappropriate for assessing impact on human capital development or policy making. For this reason, the framework that we developed categorizes societal needs into eight societal facets, which we derive from the very well-known PESTLE model: political, legal, economic, human capital, medical, et cetera.
The second challenge is the lengthy time lag between research being published and its actual societal impact. It's a very well-known fact that it can take 10 to 20 years for fundamental research to reach public use. It may take less time for applied research to impact society, but there is still a time lag. Funders, research managers, and general managers can't wait this long to measure whether the research is on track in terms of their expectations and targets and goals. That's where the concept of potential societal impact comes in: it measures a proxy for what the impact can be. Traditional retrospective metrics should also be in place, so within this framework we try to balance both retrospective indicators and forward-looking indicators, which are especially useful for content that hasn't had a chance to accrue many citations or other signals of impact.
And the final, third challenge that I would like to outline is the need to balance quantitative and qualitative methods. Any quantitative method, for example traditional bibliometric citations, media mentions, or policy citations, gives a bird's-eye perspective on impact. Very often it's very hard to zoom in with quantitative methods to a specific peculiarity, a specific aspect of the societal or broader impact.
That's why qualitative methods are still super important, for example use-case surveys or reviews by experts, and balancing both types of methods is essential for a fair evaluation of any research, including research published in journals. As a result, we are now working on filling this framework with the right metrics and methods and testing their applicability at the institutional level, the researcher level, and the research project level.
And in the future, I would be happy to see how these concepts are applicable to journal-level evaluation. Thank you. Excellent, thank you. I think before putting that to the panel, perhaps we'll go to the next question, over to Ana, and then we're going to start bringing in the panelists to each chime in.
But over to you, John. Thanks. So for Ana: what are the most promising frameworks or metrics you've seen that move us toward a more holistic evaluation of research impact? Thank you, John. So going back to the word cloud that we asked the audience to respond to earlier, the primary word, the one that was very large, was impact. You know, scholarly achievement is impact.
And I always get stuck on this question, because how do we define impact? What does research impact mean? For me, from my experience at DORA and from speaking with different researchers and different communities in different contexts, I've come to the conclusion that impact can change depending on what is being assessed, who is being assessed, for what reason they're being assessed, and in what place they're being assessed. And, you know, trying to reconcile that with quantitative metrics, I think, can be limiting, because numbers are really good at telling you a specific thing, but they can't tell you everything. And so I really take Dmytro's point that there's no silver bullet that we can use for researcher assessment.
And one of the qualitative methods that I've been really struck by, and that I think has been brought to life over the past five years or so, is this emergence of structured narratives, often referred to as narrative CVs. It's something that the Royal Society in the UK started developing in 2020, called the Résumé for Researchers, and it's certainly taken hold among a number of the research funders at DORA.
You know, HHMI also uses structured narratives in our assessment. And what I like about structured narratives is that you're asking researchers to comment on scholarly achievement in a specific topic area. So there's consistency, right? Everyone's given the same question: how would you describe your achievement?
And, you know, it could be scientific impact; it could be impact on the broader community, so kind of getting to the societal impacts that Dmytro was talking about. So there's consistency across responses. It also gives researchers themselves agency to describe what they think their achievement has been. And then reviewers are able to look at these narratives in combination with the other evidence that's provided within the application or assessment materials, and make a judgment on that.
So I certainly see these structured narratives as useful in getting at impact in ways that quantitative metrics are limited in capturing. Thank you. Does anyone else want to jump in on the question of newer frameworks? All right.
The next question is also for Ana. Shifting away from sole reliance on the JIF requires cultural buy-in. I think this gets to the point of the exercise that we did in the roundtables yesterday. What strategies have been effective, or could be effective, in changing researcher, funder, and institutional mindsets? Yeah, I can highlight two examples from HHMI that I think speak well to this
and, you know, speak well to our goal to de-emphasize journal names in researcher assessment. So the first example I want to talk about is replacing journal names on researcher bibliographies with PMIDs for assessment. We're able to do this for a couple of reasons. One, we support scientists in the life sciences; most if not all of our research is indexed on PubMed. So a replacement of journal name with PMID
is something that we're able to do within our research context. And this was a process; it was introduced slowly. HHMI holds several science meetings for our researchers throughout the year, and we initially asked our presenters at those meetings to replace journal names on their slides with PMIDs, and we got early positive feedback on that.
And then we ended up removing journal names on posters as well, so now our scientists will list author names and the PMID. And the next phase of this was replacing the journal names on bibliographies with the PMIDs. This has been in place for about two years now. Feedback has generally been positive; advisors have expressed that they hope it helps them focus on the assessment of the science without being distracted by where it was published.
So that's sort of the first big example. And then the second big example: maybe at first glance you wouldn't think this is about changing how we think about assessment, but I'm here to argue that it can be a powerful tool. That is a peer review training program that I'm leading called Transparent and Accountable Peer Review. We use this training to teach early career researchers at HHMI how to write constructive, collegial, and public peer review reports on preprints. As someone who's been working in the assessment space for eight years, I'm really excited about this, because we're able to teach our early career researchers how to articulate impact outside of the journal framework. Preprints don't have an impact factor, so when we're telling them to evaluate the article, we're forcing them to really think about what the strengths and weaknesses of the science presented are.
So that, I think, is a big learning tool. I think it's great as a way to stimulate cultural change among the younger generation of scientists, but there are a couple of other audiences that we target with this as well. As part of the program, our participants write these public peer review reports with their research mentors, so we're also introducing this idea of preprint peer review to their mentors.
And before the reports are posted on PREreview, we send them to the preprint authors themselves. Our trainees are putting a lot of work and effort into these peer review reports on preprints, and we want them to benefit the authors, so we send them to the authors anonymously on our participants' behalf. So that's the second example. It's not quite as direct on assessment, but I do think it's a good example of cultural change in thinking about how we shift mindsets about the ways that we talk about impact. Specifically, something we've seen with the training is that in practice reports, people will make comments like, this article is ready to be published, or it meets the standards for publishing. And that's when we ask the participants questions like, well, it is a preprint, so it is published; but can you tell us a little bit more about what it means that it's ready to be published? What are the strengths that you'd like to draw out? Where does this work sit within the larger context of its field?
Thanks, John. Thanks, Ana. Before we get to the next question, which is targeted specifically at Meredith, I was wondering if anyone else wanted to chime in on the question of strategies or new practices. I could just say, I mean, thinking about the primacy of the journal impact factor for many years: you know, it's 50 years old and we're all very used to it.
And for that cultural change around, you know, broadening out and thinking about things in a bit of a different way, I mean, we could just draw on change management principles generally, right? If you want people to change, you need to show them why that change matters to them and what they get out of it. And so I think, as Ana said, it's really interesting to see such a prominent funder moving away from that.
Like, we're judging a researcher based on the impact factor of the journal in which they publish, which, of course, is an aggregate metric and doesn't tell you anything about their specific paper. And if more funders do that, and if institutions do that, and if publishers continue to not have just the journal impact factor be the only metric they're emphasizing around their journals; and things have come a long way since DORA on that.
But I think it really does take collective action across the entire ecosystem to show the value for the researcher, and not just solely focusing on being in a journal with a high impact factor. That really ties into the theme of our conference about incentives. Meredith, thank you for that.
And really tying that back to our underlying thread that's been running across the panels. Anything else to add? I think it troubles me to think that the journal is the problem. I don't think that the journal itself is the difficulty. I think judging a journal by a journal impact factor only is the difficulty.
And I would argue, as I brought up at the start, that I think a journal is important and a citation metric is important. What does a journal tell you about an article in that journal? It tells you that the article has been through a process that the journal has committed to, with people who have put their reputations and their scholarly careers on the line to maintain that process and that reputation, that they care about their topic, and that they have extended that work to this article.
In a purely theoretical sense, there is absolutely nothing preventing Nobel Prize-quality work from being published in a predatory journal. But would you read it? Would you find it? Because that journal has not committed to a process that we have recognized as an exchange and a community value of the scholarly community: to read each other's work, to collect and comment on each other's work.
So I think the journal does matter. And I think eliminating the journal's commitment to the article does a disservice to the article as well. Now, I think we can all agree that water runs downhill and that there's no one metric; those are kind of de facto, and we've all set up the straw man of the impact factor for my entire life.
But that's not the point. The point is, what else? And yes, cultural change is a long, slow migration. And we've all seen it. We saw it happen to get here, where the journal impact factor became this kind of completely unhealthy, disproportionately influential metric. If we got here, then we can get away from here as well. And I have a much more radical thing to say about that.
But we need to move on; I will bring it up later if I can. If you're joining virtually, you can drop questions in the chat, and we'll be getting to those in a few minutes. And if you're in the room here, start to think about it. But I have a question for Meredith, and then other folks can chime in.
We should recognize that research can have an impact beyond citations, for example policy influence, clinical applications, or community engagement. How should evaluation systems capture and reward these contributions? Yeah, I mean, I'm not sure that there's one size that fits all, but it would be good as a community to start thinking about some ways that we can do that.
You know, in my role at Digital Science, and I don't want to just plug Digital Science, one of the things that attracted me to the company was that I always thought Altmetric was interesting, something that gave a bit more context around how an article is used. It doesn't capture everything. I think that we have to decide for particular fields what the most important things are, right?
So obviously, you know, certain fields are going to have impact in different ways. Medical research has a much different impact than, for instance, materials science or something like that. But generally speaking, I do agree with Marie that we can't throw citation impact out, right? Citations really do reflect that
your work is being read, noted, and hopefully built upon. But yes, bringing in these other things in a context-dependent way, where we can actually align to the greatest degree possible, so that they become recognized. I think that's one of the reasons why the journal impact factor has such a strong hold: people pretty much get what it is, right?
It's easy to understand, it's easy to use, it's easy to compare. And it's nice to have nuance, we all agree, but we also need to have some sort of framework within which we can operate. And so trying to balance that is also really important. I have a question from the online group that I'll go ahead and ask.
Now, this is from Megan McCartney of Wiley, and it's directed to you, Ana. How are you measuring outcomes, whether replacing journal names is working or reducing bias? Yeah, that's a really good question. So while we've heard positive feedback from our advisors, we haven't measured outcomes yet. And part of this is because it wouldn't change our thinking.
We removed journal names because we wanted to help make sure that our science is being measured based on the contents of an article, the discovery itself. So we haven't done that yet; we have heard positive feedback. And really it stems from our motivation to make this change, which is that we think it's a good thing to do. We don't want our scientists to feel pressured to publish in high-impact journals.
We want them to be able to publish wherever they want; HHMI doesn't care where our researchers submit their work. So part of it was to relieve pressure on our scientists. And we also wanted to send a signal to our advisors that we really want you to be focusing on the discoveries when you're assessing the contents of their application materials. Oh, good, thank you for that question. And another question for you.
We've heard that grant reviewers, and department chairs or panels who evaluate promotion and progression at an academic institution, are not experts, and rely on impact factor and journal reputation to evaluate the person who's either applying for the grant or for tenure. How would this change affect the evaluation process? Do you mean removing the journal name in that context, or replacing the journal name with a PMID or something?
Yes, OK. Thank you, Judy; I see the response in the chat now. Another really good question. I'm putting my sort of DORA hat on with this answer. What we had recommended at DORA was taking the bibliography and replacing the journal name with two to five sentences from the scientist explaining what the impact of the paper was and how they contributed to that article.
Knowing that if you were to do this to your bibliography all at once, it would be a pretty big lift. But if, each time you add an article to your bibliography, you're also adding in this short blurb of, here's the distilled take-home and here's how I contributed to that, that would be a useful tool for evaluators. And then, I think...
Let me go back and read the question in the chat: either applying for a grant or seeking tenure. Yeah, you're also going to want to think about the composition of the review panel. So in a department, if you're missing expertise, figure out how you could bring that expertise into the department from somewhere else to be able to assess that candidate.
And to get back to Meredith and something that you were speaking to a little bit, about having the impact factor as a framework, and having some sort of framework as we're moving into the new normals and things, particularly with the mass proliferation of journals, in OA and across the board, some of questionable quality.
Do you think that, because of this, there is a renewed interest in the impact factor, you know, CiteScore, as an objective measurement of quality? Maybe we'll start there and you could speak to that. Yeah, and I'm going to be really direct: yes. I've been involved in many open access journal launches, and you did say some are of questionable quality, and that's absolutely true.
But I think a lot of people working in the open access space, who don't want to be publishing journals of questionable quality and are doing it because they have a real passion for accessibility to science, feel that kind of constant nagging in the back of their brain of, like, open access equals low quality, right? That was kind of the characterization, to some degree.
You know, this has changed, but I think the conversation around open access journals is more complicated now, and most people realize that and realize that there are a lot of quality open access venues. But, like, how do you separate yourself from that? And, you know, I'll be really frank: the impact factor is a very useful tool for that.
And, you know, we would see it over and over again: you would launch a journal and you would do all the work to build a great editorial board, to have a great mission, to make sure it was speaking to a community, to have the editorial team out there recruiting papers. It was a heavy lift, and then the impact factor would come, and all of a sudden people would be interested, because they could just understand it. Right? And it's also something that is just very visible to everyone.
So coming back to having your editorial board members out there trying to recruit papers, to be able to talk about the quality of the process, the quality of the journal, the vision of the journal: they can't reach everybody, right? But an impact factor is something that you can put on your website, and it means something. And it means the same thing for every journal, more or less.
And so I think the challenge that we face, when we start thinking about how we round it out beyond something that is purely a citation metric, and beyond something that, quite frankly, can be somewhat gamed, is that it needs to be something that can be prominent, that can be understandable, that can be globally accessible. Right? And that's where, when I come back to this, it's nice to be able to think about things at the individual level.
And we should do that more, right? Because papers in a journal are not all the same, and if you look at the distribution of citations that go into an impact factor, there are some papers that are very highly cited and some that are not, in every single journal. Right? But I think that we really can't reinvent the wheel for trying to understand every journal, every paper.
And so we need to agree on some sort of framework. I don't have a really clear vision of exactly what that is, because I do think it will vary across fields. But yeah, the impact factor, I don't think it should go away. I just don't think it should be the only thing. We're going to open it up soon to questions, so please have those in the back of your mind.
I do want to come to something that Marie has mentioned to me before, about the journal impact factor being kind of a conversation between researchers. Building off what Meredith said, I wonder if you could speak to that as you like; that's what came to mind when she was speaking. I wonder if you'd like to comment on that. In the years that I worked on journal impact factors, for me, one of the things that I think about is that dynamic, that dialogue between article impact and journal impact; that is a conversation and a dynamic.
The fact is that journals with high impact factors will attract more high-impact articles. I would also argue that the fundamental value of having a journal impact factor is that you have passed a selection process and been indexed in major indexes. That's the recognition of the journal component of the journal impact factor. And then there's just a number after that.
Here's where I will bring some heat. What makes people trust that impact factor, or any metric, is both transparency and accountability. I did a lot of work across a lot of years to make those impact factors transparent, and at this point in the product you have those article citation histograms; that was 18 years of development work, because it matters, because you have to understand the texture that's underneath that number, not just the number.
My next battle, as a publisher and as a consumer and a producer of metrics, is how we make that metric accountable in this environment, where any and every metric can and will be gamed. As long as you are measuring that outcome, people will play to that outcome, and they will pursue the outcome goal, not the behavior goal. So how is the impact factor accountable? When we publish an article with the best of intentions and it's wrong,
we as publishers are accountable. We have to issue a correction. We have to issue a retraction. We have to cope with that. We have to issue 100 retractions because there was a hole in our system and we didn't know it. What has the impact factor, or any journal metric, done to be accountable when you're wrong? Do you fix it?
And I don't mean the annual update. I mean, you're wrong, and do you fix it? If my impact factor is published and it's too low, and I lose subscriptions or I lose submissions because I am judged by that impact factor, where is the accountability of the metrics provider to fix that metric? And Dmytro, if you want the example that I have in my mind, we can talk.
Could I just add to that? Because I couldn't agree more with what you say, Marie. And I'll just say, from my perspective, I've never worked for Clarivate, but I see the evolution from, perhaps in my earliest days of publishing, the impact factor kind of separating the haves and the have-nots, right, the elite journals from the non-elites, to there being a lot more interest in really doubling down on that quality aspect of the impact factor.
And I think we're all, really, as an industry, thinking about how we respond to the junk that has gotten published, because we all know that a lot of junk has gotten published. And, you know, working in major publishers, no one wants to publish the junk, but then it does get published, and trying to deal with that is very painful. And I think we are starting to get better as a community in terms of having that accountability.
But I would argue that we still have some ways to go. Can I just jump in to add to that? I think it was so interesting hearing your perspective, Marie, and your perspective, Meredith, just now, because it kind of gets me thinking about, well, who is the journal impact factor for? And ultimately, I think it was designed for libraries, to help them decide which journals to purchase.
It really is speaking to the journals. And I think that's the key tension, right? As a metric, it is very useful for journals, but for researcher assessment it's off; it's not appropriate, because it's speaking to the level of a journal and not even the level of an article or the researcher themselves. And navigating that tension is really hard to do.
I mean, that really ties in. Something that came up in the panels yesterday was that one person doesn't have to be the expert on all things, and one metric doesn't have to be, you know, the be-all and end-all for researcher assessment, research assessment, internal assessment.
So I think this conversation has really evolved, even just in the past three years, and we're looking forward to seeing it continue to evolve. We have about five minutes left, so I'd like to open it up to the room. I think we have at least one more question in the chat. Did we get to this? There's a question in the chat from Jennifer Alberghini. It says there's been a lot of talk recently about the problem of predatory journals.
Is there a concern that moving away from these traditional markers might increase this problem? And have there been ideas about how these concerns might be mitigated? I mean, I'm happy to take that, because I guess it kind of follows a bit from the open access journal question. I think there is a concern, right? Like, if we said, you know, we don't like the journal impact factor.
I absolutely agree with Ana that some of the issue around it is conflating the measurement of a particular researcher, or a particular research output, with a journal. But if we throw out the journal impact factor because we don't like the idea that it distills everything that's published in a journal down into a single metric, and that it's a metric that can be, to some degree, gamed...
And I also agree with Marie that every metric can be gamed. You know, then what do we have? Like, what do we have that's the source of truth? And I don't think that we have really quite yet gotten to the point of having confidence in a set of other things that give you that stamp of quality or believability, you know.
But I don't know; that's my opinion. I do think that there is some degree of risk. It's an established metric for a reason; it's been around for 50 years for a reason. It's not perfect, but nothing is perfect, and we should add to it, as opposed to moving wholesale away from it. Yeah, I just wanted to add quickly: we should move beyond a single metric, of course. But we also need to drill down into a single metric, right, and understand what constitutes it, why it's so high, why it's so low, et cetera. Drilling down to the level of the individual contributing records can tell the real story behind this or that high or low value. Without that drilling down, we would never understand what a predatory journal is, right?
Because they could potentially have a high, well, or I would say medium, journal impact factor, but only when you look inside it, inside the citations, inside the citation records, can you understand why we have concerns. So I completely agree: go beyond, but not only in terms of the breadth of the metrics, also in terms of the depth of the analysis of each metric. This is the way forward, I think.
Interesting. Thanks, Dmytro. Yeah, and this is related. Hi, Stacy Burke, American Physiological Society. We obviously publish physiology, which is, you know, a subdiscipline. It's a very narrow focus, but it affects a lot of different sciences. So we've been integrating the Field Citation Ratio, I think I got that right.
That was one of the metrics. And my question is, how do we get to that depth that you're talking about when we're talking about a variety of different industries in one metric? How do we get to that deeper understanding, and how do we relay that? And with the DORA hat on, Ana, as well, I know that they've got this research assessment framework that they've been working on.
And I'm just wondering if that is, you know, hoping to help lessen reliance on the impact factor, or if anybody can talk about that, because I really do think it starts at the institution. I think before we talk about doing away with this metric or that metric, or using this metric or that metric, we have to say that the problem arises from the need for research assessment.
And unless we address it from that direction: what do we want to assess? What do we want to value? I'm pointing to the word cloud, which is not here. Until we decide what we want to value, and therefore what we want to look at and what we want to look for, people will use whatever hammer they find around to pound whatever nail they think they're looking at.
And the impact factor was a very useful hammer for a very long time, and that's how it ended up in that situation. A dean needed to evaluate a new hire in virology versus a new hire in political science. I can't read the paper in political science with the same depth that I could read the paper in virology and say which one of these is the better paper.
I can't do that. No one can do that. And reading the scientist's statement, or the scholar's statement, about what their paper's value is, that's also not giving me something that I can lean on. So they go to the librarian and they say, how do I evaluate these things? And they're like, well, we have this thing that tells you what the best journal is.
Oh my God, a number. Yay! Let's run with that, because it looks objective and we think we know what it means. So then the deans start using that. So now people will optimize their behavior towards the number that's being used to evaluate them. Now publishers are saying, why are people not submitting? Because you don't have an impact factor. Well, how do we get a better impact factor?
So publishers begin to optimize and you turn that cycle. And I have literally witnessed that cycle. So the question is, how do we solve the problem or address the problem of research assessment in a way that's going to let people have this set of tools, not making the tools live out there in the world, but giving them something useful. Metrics are used when they are useful, and they're less used when they're less useful.
Really coming back, again, to the incentives and everything we've been talking about these past two days. And we're at time. This has been a really rich discussion, and I think it could be the start of a much longer one, but it really is, you know, going to be solved by everyone in this room. I look forward to continuing the discussion both today and next year at this time, when we'll have new things to discuss around this and other topics.
I'm going to turn it over to co-organizer Ginny, who's online, to close us out. And thank you all for your participation, and to our panelists for your time and effort today. Awesome. Thank you so much, Jamie, and thank you so much to everyone on the panel we just had; that was truly excellent. One of my colleagues is watching the conference virtually today as well, and she was messaging me on Teams about how much she enjoyed this panel.
And I have to echo that; this was so excellent. Thank you to everyone who joined us for New Directions this year. It has been such a pleasure to be able to think so deeply together with you over the past day and a half. Recordings will be available to all attendees, whether you're virtual or in person, within the next 48 hours, so look out for that. Thank you again to everyone who contributed today, from our amazing working group and our phenomenal speakers to SSP program director Susan Patton, Bob at the AGU, Cadmore Media, Data Conversion Laboratory, Digital Science, and Silverchair.
And to all of you who have participated and fostered progress in scholarly communication through thoughtful conversation, which sounds really cliché, but truly, I feel like the thing that's valuable about conferences is the opportunity for all of us to come together. It's what takes us away from just reading a blog post or something. Thank you to... oh, I'm done thanking people.
Sorry, I'm rereading. Please complete an evaluation form so that we know what to keep and what to refine in future years. If you would like to be involved in planning sessions yourself next year, if you thought this was so great you can't get enough, you can email or LinkedIn message me, Jamie, or the folks at SSP. If you're thinking, I have no idea how to email you, I think the SSP email's on there, so feel free to email them and they will get it to the right place.
If you are in DC, you are hungry, and you are interested in digesting (see what I did there?) what you heard today, stick around; Jamie and Kristen will be heading out for lunch. So Jamie and Kristen, please give a big obnoxious hand wave so everyone knows exactly who you are. You can follow them.
They will make sure you are fed. Everyone will be paying for their own meal, but you'll get to hang out with people, so that part will be great. Similarly, if you would like to learn more about the AGU's net-zero energy building design, the sustainability practices that were utilized in the building's renovation and daily operation, and the four key principles that inspired the building's renovation,
head to the lobby at 1:30. So if you've thought this is a really beautiful building, learn about how it's not just beautiful but also extremely functionally designed. And lastly, if you are with us online and you still have a few more minutes, jump over to the break room and chat with us afterwards; we can talk about what we discussed today and what we found enjoyable.
We can share any lasting pet pictures. And with that, those are all of our announcements for today. Thank you again for coming. I will stop talking so you can all go eat. Thank you.