Name:
New Directions 2019 | Advancements and New Directions in Scholarly Indexing
Description:
New Directions 2019 | Advancements and New Directions in Scholarly Indexing
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/958360d1-a624-4d45-a2c0-da35f96220b3/videoscrubberimages/Scrubber_45.jpg
Duration:
T00H46M57S
Embed URL:
https://stream.cadmore.media/player/958360d1-a624-4d45-a2c0-da35f96220b3
Content URL:
https://asa1cadmoremedia.blob.core.windows.net/asset-be23cd95-96e7-46d7-8ff5-6ac5e3d7ce86/ND1904- Advancements and New Directions in Scholarly Indexing.mp4
Upload Date:
2020-07-09T00:00:00.0000000
Transcript:
Language: EN.
Segment:1 Introductions.
[MUSIC PLAYING]
SCOTT DINEEN: So good afternoon. I really want to start by thanking Sophie Reisz and Tom Thrash for organizing this panel. So I'm going to tell a lie about Tom first, to start off with. So Tom calls me up and he says, "Scott, I would like you to be on this panel." I've known Tom a long time in this industry. He said, "The panel is going to be on the topic of COFF-- call off the impact factor."
MARIANA BOLETTA: OK, I can leave, right?
SCOTT DINEEN: Wait a minute, it gets better. I said, "Tom, COFF-- call off the impact factor-- we can't do that. That spells CWOF, not COFF." Researchers make decisions about where to publish, even about what research to pursue, in part, based on their understanding of how they'll be evaluated and what the metrics will be for evaluation.
SCOTT DINEEN: And certainly, important decisions are made about tenure, and promotion, and peer recognition, based on metrics. And so this panel is going to deal with some of those topics, including why the impact factor is still so dominant, how that impact factor and other metrics are changing. Is the article level metric an important aspect we should look at? We've heard today about the fact that the preprint may become the new normal.
SCOTT DINEEN: And how does the citation of the preprint factor in? So I'm very pleased to welcome the panel. We're going to have a 30-minute discussion and leave 15 minutes for questions. I have immediately on my right, Carl Leak, who is a Life Sciences librarian at George Mason University. To Carl's right, we have Mariana Boletta, who's the acting Executive Director of the Web of Science from Clarivate Analytics-- Deputy now, thank you.
SCOTT DINEEN: Some of you will know Adrian Stanley from his role as past president of SSP. And you may need to stand up, Adrian. His current role is Managing Director of Publishers at Digital Science. And we're also very fortunate to have researcher Jesse France from the Naval Research Laboratory.
SCOTT DINEEN: Jesse has also served as a topical editor on one of the OSA journals. And I'm Scott Dineen from the Optical Society. I'll be moderating-- not really answering a lot of the questions, but if publisher issues come up, I could likely speak to those. So as you can imagine, we had an interesting time planning what we were going to talk about in just 30 minutes on this topic.
SCOTT DINEEN: And we've come up with three questions. We're going to try to talk around these three questions. And the first-- I'm going to read out. And Mariana has agreed to field the first question.
SCOTT DINEEN: So what's the scope of scholarly indexes and metrics that do influence a researcher's reputation or a journal's reputation? What are the metrics and indices that we should be talking about today on this panel?
MARIANA BOLETTA: So I suppose this is working. It's working, OK. So I did not stand up. I am not leaving. But I'm not here to defend the impact factor, obviously not. So what I want to say is that out of the 60 years that the Web of Science has been in existence, I dedicated 20 of those years of my own career to it. And I worked in various iterations under ISI, Thomson, and now Clarivate, always in the same capacity-- evaluating journals and making those hard decisions.
MARIANA BOLETTA: What is worthy enough of being covered and what would be most useful for our users. So if your journal is not covered yet or was rejected, you can blame it on me or my team. But anyway, joke aside-- it's a very hard mission. And the mission changed over the years, obviously. When I started 20 years ago, that pyramid of trust that starts with the honest researcher, the author, the editor-in-chief of the journal, the publisher of the journal, and so on, was still strong.
MARIANA BOLETTA: It was standing and had very few crumbling bricks in it. Things have changed, as you know-- not only with the information overload, with the emergence of the electronic journals, but things have changed with integrity as well. So obviously, we had to change with it. So our mission changed in that now, we don't only evaluate and are very careful to select the best, but we also have to be very careful that we provide a service that is trustworthy and useful, like you said, to our users-- be it the researcher, the research manager, the funder, or whoever the user is.
MARIANA BOLETTA: And I think maybe you wanted to address that, Adrian, too. That it really all depends on, if we talk about metrics, who those users are. And obviously, the metrics will be used differently-- sometimes misused-- like the impact factor often is. And I think one of the questions is that, right? How not to use metrics like the impact factor. So in any case, I wanted to frame the discussion and maybe explain very briefly-- or really, jump-start the discussion-- in saying that the Web of Science provides two components-- two very important ones.
MARIANA BOLETTA: One is the bibliographic-- the content component. And obviously, this is what we are selecting for. So my team has been selecting journals for coverage in the Web of Science. We do not select journals for the journal citation reports specifically. That's a different team. So in order to provide a trustworthy and meaningful metric, obviously, you need the data for that.
MARIANA BOLETTA: And we provide that. So as the only real citation database that objectively selects journals and other resources in-house-- because we select proceedings and books as well-- we think that we provide that basis for a meaningful metric. And it's not only the impact factor. As you know, the Web of Science is not only an index of content. It provides information on citations for authors, funding, and many other metrics.
MARIANA BOLETTA: So an index, in that respect, is not simply a list. Even though you would be surprised-- many of our customers just call or write in and ask for it: I want an impact factor. But they don't realize that we just don't provide an impact factor offhand. First, the journal has to meet the quality criteria. So I think this is what we are going to talk about.
MARIANA BOLETTA: What are those important metrics? When do they become meaningful? How we should not use them or what is needed. And how can the community in general offer suggestions and give us ideas on what is needed? Because the session is called New Directions, I want to take this opportunity to say that we've grown with the demands of the market.
MARIANA BOLETTA: We are in transition now-- not only in leadership; we're also in the middle of developing a new platform for the evaluation process. We have grown the team. There are new features offered both in the Web of Science and in JCR and InCites. So there are new directions. And there is hope, I think, also with other providers. Obviously, we can learn from each other.
MARIANA BOLETTA: And there was a question about preprints. I don't know if there is time to address that one. But we also revived the Institute for Scientific Information. And there are experts there and analysts. And they are looking into all these new developments, including preprints.
SCOTT DINEEN: Thank you. Adrian, this might be a good portion for you to take up. Other than the services that Mariana has mentioned,
SCOTT DINEEN: how would you round out the scope of metrics and indexes that you think are important, or influential, or emerging?
ADRIAN STANLEY: Yeah, thanks everybody. And obviously, talking from my perspective of Digital Science and a product we develop-- Dimensions-- which really isn't an apples-to-apples comparison, and we try not to be. We created a linked research knowledge network that connects publications, citations, grants, clinical trials, policy documents, and patents, to do a sort of broader evaluation of the whole research community network.
ADRIAN STANLEY: So when we're thinking about what metrics are, we're looking at a very different, bigger picture for science and scientists. And our policy is to index more, and collate more, and make the links, but let the community decide what's important and what's not important. So trends like seeing data citations, making data available, and the broader impact of alternative metrics for things like the REF in the UK are all very different measures that people are using.
ADRIAN STANLEY: And we try to make the tools and data available for other people to do that assessment.
SCOTT DINEEN: Is ResearchGate a place where researchers go to find important metrics? How far does that go in terms of the reach of--
ADRIAN STANLEY: That's a good question. I mean, Altmetric tracks all the readers and saves from Mendeley. So it's mendelian in that sense. But in the same way, where are people reading, digesting, using it-- any downloads?
SCOTT DINEEN: Jesse, when we were having our planning calls for the meeting, you made some observations about the researchers you work with and what metrics they're aware of.
JESSE FRANCE: Yeah, I'm speaking, by the way, from two perspectives. As Scott mentioned, I have been an editor for several publications from the OSA-- The Optical Society. So I've been doing that for six years. And I'm continuing to do it. And I'm also a scientist who's publishing. And I see how people use metrics to select where to publish and also how it affects people's careers, and promotions, and that kind of thing.
JESSE FRANCE: And from where I stand, I think there's a very limited understanding of metrics. Most people rely on the impact factor at the journal level and the h-index at the author level. And it's very much limited to that. And I know some people at this meeting are involved in developing metrics that are much more informative and much more useful. But among the scientists that I work with, it's very much limited in terms of what's used.
JESSE FRANCE: And I think the education level, the awareness level of scientists is very low. I work at a government lab. It's probably better in academia. But even there, people I speak with in academia-- I think their understanding of metrics and their use of them is very limited compared to where the field is and what has been developed in this industry.
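A minimal sketch of the author-level h-index Jesse refers to, using hypothetical citation counts: an author's h-index is the largest h such that h of their papers have been cited at least h times each.

```python
# Minimal h-index sketch; the citation counts below are hypothetical.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # prints 3: three papers with >= 3 citations
```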
MARIANA BOLETTA: I want to say something. I agree that it's misunderstood, that it's poorly known. So our approach now-- I mentioned the new workflow. I brought some pamphlets there to explain the extent of our coverage in the index and everything that's offered. The point that I want to make is that it's not just that number alone. It's not just the metrics.
MARIANA BOLETTA: Because everything that we cover in the core-- at least in the Web of Science, this is how we do it-- is carefully, consistently, and continuously curated. What we want to make known to the community, to the researchers, and maybe to the research manager, and all the other personas that are involved in the publishing landscape is that just being covered-- because the material was curated and a decision was made on its value and integrity-- should be an aspirational goal in itself.
MARIANA BOLETTA: I know it's very hard to sell that to the authors. But some of them get it. And a lot of the major publishers that we have talked to get it. Before, there was only one option-- either they were covered or not. You had to get an impact factor or not. Right now, we have this multidisciplinary collection that we started in 2015.
MARIANA BOLETTA: It has a poorly-chosen name, and we probably will change that name. It's called the Emerging Sources Citation Index. That's where we put in journals. And I think when we discussed it with the panel before, there was a question-- what do we do with the journals that really don't have an impact factor? And there might be an author who is a very-- probably not a Nobel Prize winner, but has a very important contribution in a very narrow field.
MARIANA BOLETTA: I don't know-- birds from Antarctica or whatever. And such a journal would have very little chance of having the citation performance so that it eventually gets an impact factor. And we recognize that. And that's why we want to stress the fact that just being curated and being accepted into the Web of Science in this quality collection the way that we call it in-house-- the Emerging Sources Citation Index-- is an aspirational goal in itself.
MARIANA BOLETTA: Yes, there is no number attached to it. There is no impact factor-- let's put it this way-- attached to it. But that author would see his work in the Web of Science. And the publisher would be able to tell the author. And the EIC will be able to say, send us material. Because your journal is actually covered in the Web of Science. So that is a stamp of approval in itself. It's not a metric, but it is a stamp of approval and value because it's been curated.
MARIANA BOLETTA: And it's not just the result of an algorithm.
SCOTT DINEEN: So Carl, in your role at George Mason University, I know you deal with an extensive array of data to help drive decisions. I assume that might be what you want to lead with.
CARL LEAK: Right. And I can also piggyback on the fact that we also play a role in trying to work with faculty to see some of the limitations that there are with the impact factor. And what we're finding is that within the library field, there is a disparity in expertise in bibliometrics overall among librarians and in who can provide a service. So a lot of libraries have a branch that knows about the impact factor, but they don't provide a service.
CARL LEAK: And that can vary depending on the support you get from your overall library team. But what we try to do is try to win small battles, because we know that the researchers and faculty depend a lot on the impact factor. And that might be all they see-- or the h-index. But what we try to do is widen that view by talking about some more relative metrics, as opposed to absolute ones-- like, for example, percentile rankings.
CARL LEAK: Where we can use those to say, all right, outside of the impact factor, this might be-- even though it's not the end-all-be-all, a journal ranking in the top percentile of a group of journals might give us some inroads into making those connections with faculty and helping them see there are other metrics outside the impact factor. But we know that we don't have a lot of time and real estate with faculty because of their busy schedules and ours.
CARL LEAK: So we just try to pinpoint, based on what their goals are, just small things over time and that we can introduce them so that they know that there are other things out there. And over time, hopefully our goal is that they embrace some of these concepts.
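A minimal sketch of the relative, percentile-style ranking Carl describes, using hypothetical impact factors for a single subject category; the numbers and category are illustrative only.

```python
# Percentile rank of a journal within its category (hypothetical data).
def percentile_rank(value, category_values):
    below = sum(1 for v in category_values if v < value)
    return 100.0 * below / len(category_values)

optics_category = [0.8, 1.1, 1.4, 1.9, 2.1, 2.6, 3.4, 4.0, 7.5, 10.2]  # hypothetical IFs
print(percentile_rank(2.1, optics_category))  # 40.0: higher than 40% of the category
```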
SCOTT DINEEN: Thank you. That brings us to the next question. It was a good lead-in to it. There are journal-level metrics, and there are article-level metrics. We wanted to maybe talk a little bit about the merits of journal- versus article-level metrics. Certainly, there are high-impact papers that are published in journals that don't have high impact factors. Maybe Mariana or Jesse-- you might want to take the lead on this one.
JESSE FRANCE: Thanks. So the journal I was previously an editor for-- Applied Optics-- I think it's about halfway down in the JCR rankings in its category in optics. It has an impact factor of around 2. So it was somewhat lower. But there were occasionally articles in there that were kind of foundational for their fields and got a lot of citations and are in this somewhat lower-- or moderate, I should say-- impact factor journal.
JESSE FRANCE: But they're articles that everybody in the field knows about. And there is also another kind of article that maybe metrics don't capture as much-- articles that get seen by the right people and maybe don't get referenced. And the articles referencing them don't even get a lot of references. But they're in a field where they provide a key piece of information.
JESSE FRANCE: And these are very important articles. And they're sitting in a journal that does not have a very high impact factor, yet they're still of significant importance in the field. And I don't know that there is a way for metrics to capture that kind of importance. But I'm a strong believer that there's a place for journals with very strong metrics and with weaker metrics.
JESSE FRANCE: I don't think a journal has to have a very high impact factor to be meaningful. A high impact factor doesn't reflect the total value of a journal. And I think some people are aware of that. But I also think there's a tendency to overlook these journals that have a somewhat lower impact factor, even though they can be important.
MARIANA BOLETTA: I totally agree. And the way that we evaluate journals-- we take that into account, actually. I'm a linguist by profession. So words are important to me. Impact does not only mean impact factor. So we're talking here about various impacts. We're talking about the intellectual impact, the impacts for the career of the researcher, the social impact, the economic impact, and so on, of everything that's published.
MARIANA BOLETTA: So you probably are aware that we have a new workflow now that is much more transparent. And you will see where I am going with this. So we have 24 quality criteria. And if a journal meets those 24 quality criteria, it qualifies to be covered in the Web of Science. Four of the criteria-- the last stage of the evaluation-- we call impact criteria.
MARIANA BOLETTA: And those four are the estimated impact factor-- obviously, we use it as any other person does. We calculate the estimated impact factor based on our data. We look at the citation performance, not just the citation activity-- also the activity, obviously-- because some journals have activity, but they're not performing at the level that is expected for the top quartile of the respective category.
MARIANA BOLETTA: We look at the citation performance of the editorial board. But-- and this is the answer to your question-- ultimately, because it is a team of experts in their area, and most of them know their collection very well, and they know their field, and they're looking for hot topics and for important things. But those impacts that I mentioned before-- not only the intellectual impact of creating networks of authors and institutions, and so on-- but the other impacts.
MARIANA BOLETTA: The impact for science in general, the importance for the well-being of society, for promotions, economic impact are very focused and important topics. So the last criterion in that decision-- editorial decision-- is content relevance. So obviously, some journal will always be in the bottom quartile. And there will be journals at the very bottom.
MARIANA BOLETTA: What we want to do is find now-- and I mentioned the Institute for Scientific Information-- the think tank. We have specialists there who are working now on benchmarks to help us make that decision, so that it's not just the gut feeling of the editor: I know that this topic is very rare, important, focused, niche, and new. And consequently, even though the journal is not well-cited and it doesn't have a high impact factor, I still want it to be there in the main collection and have an impact factor.
MARIANA BOLETTA: However, take a journal where the trend shows that, over the last 15 years of its publication, it had an impact factor of 0.02-- meaning, it was cited one year zero times, and the next year, two times, and so on. Such a journal, obviously, doesn't have impact in every sense of the word impact. And such a journal-- even though it meets the quality criteria, and there's nothing wrong with it--
MARIANA BOLETTA: obviously should not belong in the elite collection of journals. And then such a journal can move into the general population of the quality collection. But it's not an elite journal. So in other words, suppose we have to choose between an international journal on a global topic in biotechnology that ranks at the bottom, and a journal in a very rare niche field, or hot topic, or whatever.
MARIANA BOLETTA: Obviously, the journal with the one important article that really breaks the glass ceiling of the impact factor will be covered. However, the other one will not be, and most likely will be moved within the collection. And that's another principle that we've been following over the last 60 years. The collections are not static. They are dynamic.
MARIANA BOLETTA: They are moving. We want the collection to be stable, but we also want to reflect the actual performance of the journal.
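A minimal sketch of the standard two-year journal impact factor that underlies this discussion, with hypothetical counts; this is the textbook formula, not Clarivate's internal estimation process.

```python
# Two-year impact factor for year Y: citations received in Y to items
# published in Y-1 and Y-2, divided by the citable items from those two years.
def two_year_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal cited zero times one year and two times the next, as in the
# example above, ends up with an impact factor close to zero.
print(two_year_impact_factor(2, 100))  # 0.02
```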
ADRIAN STANLEY: Just to add one point on that, Scott: at the individual article metric level, there are new measures like the field citation ratios that normalize within fields. So those, I see, are being used more for individual assessment, normalizing in different ways, by the bibliometricians and people like that.
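A minimal sketch of field normalization in the spirit of the field citation ratio Adrian mentions; this is a simplified illustration with an assumed field baseline, not Dimensions' exact formula.

```python
# Field-normalized citation ratio: an article's citations divided by the
# average citations of articles in the same field and publication year
# (baseline values here are hypothetical). Above 1.0 means "above field average."
def field_citation_ratio(article_citations, field_average_citations):
    return article_citations / field_average_citations

print(field_citation_ratio(12, 4.0))   # 3.0: well above its field's average
print(field_citation_ratio(12, 30.0))  # 0.4: below average in a heavily cited field
```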
SCOTT DINEEN: Is that consistent with what you're saying as well, Carl?
CARL LEAK: Yes. I will say, those are very important. And I can talk about George Mason specifically here. Because what our mission was-- we have seven institutes at George Mason that focus on various disciplines. And what we've found is that a lot of times, the administrators of these centers wanted to know metrics about their researchers there. But the problem was, these centers are multidisciplinary.
CARL LEAK: So what we had to do was to use metrics that normalized that work in context with the field that they studied in, so that it would be balanced. Because otherwise, you will have sort of a lopsided image of one researcher at the bottom and one who looked to be more prolific in their research. Whereas, within their field, this one researcher was probably just as relevant, if not more so than someone else.
CARL LEAK: So that's very relevant in terms of what we do and as librarians, how we have to keep things in context when we communicate with researchers and administrators as well.
SCOTT DINEEN: So given how much organic growth has brought us to the point where we are now, it's understandable things are messy. But what if we were to start from scratch? What if we had the opportunity to create a fair and effective system of scholarly metrics for evaluation? Where would we start? I wonder if everyone could have a chance in the last about nine minutes we have to weigh in on this question.
SCOTT DINEEN: Jesse, you're at the end of the table. It might be nice to go down in order.
JESSE FRANCE: It's a good question. I don't know where to start with this. I think the other panelists might have more perspective of how to do that than I do. One thing that I will point out is that what I think is important from my perspective is that whatever metrics we use are used in context. And I think it's really a mixed bag when people are using, say, an author's h index to determine whether they should get promotion or get a particular grant.
JESSE FRANCE: I think that there is a real mix of using the metric properly and putting it into context and having a nuanced understanding of what the metric really means. And on the other hand, I've seen some egregious misuse of metrics, where I know a group within a government lab-- I won't say which-- but it's a US government lab where they have a rigid formula for promotions in which, for every article, you get points based on the impact factor of the journal you publish it in.
JESSE FRANCE: And those points are divided up by the number of authors. And so contributing authors are kicked off of publications. It's an absurd misuse of what metrics really tell you. And I guess in answer to the question, I don't really know how I would restructure or design a new metric. But I would say that what I think is most important, from my perspective, is that metrics give you a lot of useful information.
JESSE FRANCE: But I think they have to be put into a bigger picture and understood in the larger context. So yeah, that's my take on it.
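A minimal sketch of the rigid promotion formula Jesse describes, with hypothetical numbers, showing why adding a contributing author lowers every author's score under such a scheme.

```python
# Per-author promotion points as described: journal impact factor divided
# by the number of authors (all values hypothetical).
def promotion_points(journal_impact_factor, num_authors):
    return journal_impact_factor / num_authors

print(promotion_points(4.0, 4))  # 1.0 per author
print(promotion_points(4.0, 5))  # 0.8 per author once a fifth contributor is added
```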
ADRIAN STANLEY: Yeah, I was actually joking with a colleague who does this-- Mike Taylor, who is, I think, on the screen somewhere. And we're trying to figure out-- obviously, there's pockets of people trying to do exciting research and look at these areas of challenges. And it can differ in different disciplines. But we thought the only way you can really do this was probably get all the right stakeholders in one room together, lock them there for about a year, not let them out,
MARIANA BOLETTA: Wait for the white smoke.
ADRIAN STANLEY: Feed them a little bit. But wait for the white smoke. But it really is, I think, about a broader set of metrics, but how they're being used by all the different stakeholders, normalized, open, available, in different ways. If I was going to do this again, but--
SCOTT DINEEN: I think we need experts like Carl to help make sure things are fair and normalized. But Mariana, maybe your final words on this question.
MARIANA BOLETTA: You probably know what I will say. No, I agree 100% that the main thing is that it's used in context. And just to mention two of the misuses that we see on a daily basis: there's a misconception in some markets-- and they're not poorly-informed markets-- where they think that we actually assign the impact factor the way that a teacher gives a mark on a paper. That's one of the misconceptions.
MARIANA BOLETTA: And the other outrageous thing that I've seen-- one of them-- is some institution or organization in Germany that decided on some sort of calculation where they say, OK, half of the research published in Germany is in German and the rest is in English. So what we will do is we will take the impact factor, divide it by 2-- or multiply it by two, I don't even know. And that will be the impact factor of a German journal.
MARIANA BOLETTA: So misuse is ridiculous. It should be looked at in context, definitely. And I agree with Adrian as well that it should be a combination of factors, not just one-- several. And like I said already, one of the directions in the Web of Science group is to look at these different dimensions that we can add to the impact factor.
MARIANA BOLETTA: So the answer is, if there were no impact factor, I would invent it. Because it's very simple and easy to use-- provided that it's used correctly. And I think it's here to stay. But it has to be improved.
CARL LEAK: And very quickly I will say, as a librarian, that from the standpoint of organizing information, if we could do it all over again, it would be great if there was more of a partnership with people like the people on this panel, and librarians, and faculty members, and researchers, and also accrediting boards within their disciplines, to try to understand what research goals are based on the disciplines. Because that also drives what kind of metrics would apply to that discipline.
CARL LEAK: So if there is a way to get more intercommunication among all these different professions and disciplines to have people understand-- if this is where your discipline is going at a university, or wherever, or within a discipline, then these metrics apply best for what you want to do. Because a lot of times, we try to learn these different skill sets. And then we want to go apply them, but they don't really match what the outputs are.
CARL LEAK: So I think if we could start over, then that would be what I would like to see-- just more fluid communication among these different areas so that people can see: we do this kind of research, we want these kinds of outputs, so these metrics work best for us. Because by the time it gets embedded within a discipline, it might be too late-- they're using it the wrong way and, all of a sudden, it becomes a part of practice and accepted.
CARL LEAK: And it's hard to correct when it gets to that level.
ADRIAN STANLEY: Just a quick shout-out for something we're actually doing within SSP, too. We talk here about the sort of administration, research, and publishers. But we also should be thinking about what funders want out of research they fund. We have a funder task force that is looking at things like that.
SCOTT DINEEN: Yeah, I think SSP certainly has a role in helping people coordinate and communicate much better. Look, we've barely scratched the surface on this topic. But please thank our panelists. We're ready to take questions for the next 15 minutes. But we can thank them. [APPLAUSE]
SOPHIE REISZ: I guess I'll start off with a question from our virtual audience-- Mike Taylor. Hi, Mike. His question is, how do the panelists make a judgment between an article in a journal with a big impact factor, but no citations, versus an article in a journal without an impact factor, but no citations?
SOPHIE REISZ: How do you tell an author they published in the wrong journal?
ADRIAN STANLEY: You want to go first, Carl? [INAUDIBLE]
CARL LEAK: I hate to go first just to say that I don't know. But in that case, I'm not sure. And that gets to the point of gleaning reputation just because a journal has a high impact factor. And I think that goes back to what we were talking about earlier where the zero doesn't necessarily mean zero. And there is more that has to come into play as to what does that particular piece of work mean within that discipline.
CARL LEAK: There are other ways to figure that out as opposed to zero citation. So I'm not sure. I don't think I would ever tell someone they published in the wrong journal. So I can't touch that from there.
ADRIAN STANLEY: I'm just going on the-- are there other metrics aside from that? If it has 20,000 downloads and a huge metrics score, that has some value, even if it doesn't have a citation. Mike, do you actually have the answer to that question?
SCOTT DINEEN: What other questions do you have for the panelists this afternoon?
AUDIENCE: Hi. Anne Stone, TBI Communications and also on the Organizing Committee for Transforming Research with Mike Taylor. So a follow up to that-- for Carl, how many times are you asked within your role to support promotion and tenure evaluations? And what kind of education does that opportunity afford you to re-evaluate metrics that could be useful?
AUDIENCE: And does that enable you to have those conversations about what matters to your discipline?
CARL LEAK: So unfortunately, we don't get a lot of inquiries for that reason. Because a lot of times, we get asked to do something without knowing what the reason is when people ask for these metrics. What I can say is that we do see it as sort of anecdotal. And I don't mean this to be statistical or something that's a guaranteed number. But we do find that faculty coming up for tenure and promotion-- I will say, the first time around, that first promotion-- we are getting asked for support for metrics for those reasons.
CARL LEAK: And that's coming primarily-- as far as we know-- from the Health Sciences area. And I'm in the Life Sciences. But we do have a lot of people in the Health Sciences asking for these kind of metrics. And the reason why it is, is because the current dean of our College of Health and Human Services came from the NIH. And that's where a lot of the bibliometrics-- a lot of the things that we've learned came from NIH with the work that Chris Belcher is doing.
CARL LEAK: So when Dr. Lewis came as Dean of the new college, she pushed that on the faculty. So a lot of our inquiries outside of that school-- we don't know the reason for why they're doing it. But we're doing it and we leverage that time to try to tell them about the limitations of some of those metrics that they use.
ADRIAN STANLEY: Just going back to that first question of how would you do things differently and thinking about you as an audience of publishers. One thing-- it's not necessarily metrics- and impact factor-related. But I would take a leaf out of Anne Michael's book and have more people on your staff as publishers who've done data science courses and understand data, how to analyze it, and how to tell the stories from it. Because I can only see data and metrics becoming more and more important-- whichever form they are in.
ADRIAN STANLEY: So that's one thing I would do differently if I'm looking at the future.
SCOTT DINEEN: Yeah, that topic came up earlier today, that decisions between librarians and publishers and all other parties are going to need to be more data-driven. And we're going to need to get on the same page about what is fair and reasonable in terms of those types of data exchanges, for sure. There was another notion that came up in our rehearsal calls that maybe I'll throw at the group. Jesse, you mentioned that the notion of the article itself is changing.
SCOTT DINEEN: We've talked about the lifecycle of the article, maybe, including the preprint. And I know in physics publishing, that's been the norm for a long time. But there's also the data set that's published with the article. And all kinds of other ways that the article is not what it used to be. Are there any thoughts on how that may affect metrics or what might influence the new, better system of metrics?
JESSE FRANCE: Well, I don't envy you all in having to develop metrics for the situation. Because as Scott said, we discussed this a little bit-- the whole notion of what an article is, is changing. Because of the preprint-- in physics, we have arXiv. And I know there are similar things in other fields. And then between the preprint, the article in the journal, and the final published article itself-- it's not this clear single publication event of what an article is.
JESSE FRANCE: And I know that different metrics handle this in different ways. And from my perspective, it's kind of a mess right now. You see citations to the arXiv preprint and then citations to the actual article. And I think that's hard to track and probably a big challenge in this field.
MARIANA BOLETTA: I don't know if it's known that the Web of Science also has a Data Citation Index. So we obviously collect that information. We have a dedicated team to do that. And talking about an article and data sets-- yes, sometimes they are cited themselves. But oftentimes, they are included in a data paper, and we created an article type for that, I believe two years ago. And the accepted format of a data paper doesn't have to follow the format of a research article.
MARIANA BOLETTA: So it usually includes information about the source, the data set, and so on. So some of the citations are captured there.
JESSE FRANCE: And who initiates that format? Who decides an article has that format?
MARIANA BOLETTA: We have a team of bibliometricians who decide on a journal profile. So the journal profile is analyzed and then, for instance, in a new journal, they will decide. Also at the recommendation of the editor-in-chief and so on-- so we collaborate with them. And it's decided which articles are called articles, reviews, meta-analyses, whatever. So it depends on the article type. But we have the new article type-- data paper-- which is obviously citable and included in the calculations.
ADRIAN STANLEY: And just on the point on preprints-- I think Crossref were definitely doing some work on how to map preprints to articles. But I think it's a community discussion again. Like, where do you go from preprints, or conference posters, and other types of early publications when that work-- how close is that to the finished article?
SCOTT DINEEN: And Jesse alluded to the accepted manuscript, which wasn't part of the new normal we talked about before, but needs to be included in that whole chain, including how the DOIs are minted and how things are connected together. So certainly, agreeing on proper citation procedures, advocating for them, and then enforcing them in our different roles is one practical thing we all really can do to help with metrics collection and fairness.
SCOTT DINEEN: Are there any other questions from the group?
SOPHIE REISZ: This is my own personal question, for Carl, specifically. How much does the impact factor of a journal determine your decision whether or not to include that journal title in your library collection?
CARL LEAK: So it's not something that's used consistently, but we do use it a lot upon request. And the point at which we use that for collections-- I'll speak for myself as a subject specialist for the Life Sciences-- is when we have an ask for a publication that is very expensive. The faculty is trying to make the case for it. I will use the impact factor as just one way to include that when I have to justify that purchase.
CARL LEAK: But we don't have a systematic approach to using that across the board for collections. But that's just one way that I do use that.
MARIANA BOLETTA: I have a question for you, Carl. Am I allowed to ask a question?
SCOTT DINEEN: You are allowed to ask a question.
MARIANA BOLETTA: Thanks for asking that. Because I wanted also to find out from Carl and maybe from the audience-- when an author submits an article, are they more worried that they will submit to the wrong journal because it's not a trusted journal, even if they know that the article has a good chance of being accepted? Or are they more driven by a white list of trusted sources-- the way we offer it with the Web of Science?
CARL LEAK: So I'll take this time to say that I don't speak for George Mason in this particular instance. This is my personal opinion. Unfortunately, the question hardly comes up in terms of when faculty publish. What we do find is that the faculty-- at times, they have to publish somewhere. And their frustration or their anxiety is not based around either of those criteria, unfortunately.
CARL LEAK: It's based upon, for example, when there is a cost assessed for them to publish, they ask if the library supports funding for them to pay that cost. But for us, we don't get a lot. Now, I'll tell you where it does happen is at the student level. Graduate students or undergraduate students alike-- where they have assignments or projects, where they have to look at journal impact factors for potential places to publish as they prepare their graduate projects.
CARL LEAK: So we do help them in that instance to get a sense of the journal impact factor and the important journals in their field. And we use it as a segue to introduce other things, such as the Eigenfactor, that might address some of its limitations, to try to get a full scope of what makes this a so-called good place to publish as they prepare their projects for publication.
SCOTT DINEEN: Please thank our panel one more time. I believe we have a short break next. Thank you. Thanks to all of you for participating. I really enjoyed it.
SOPHIE REISZ: Excellent job, everyone. Thank you so much. Again, a round of applause for this fantastic topic and panel. We have a few minutes before our next discussion. So please feel free to take 10, 15 minutes. And then we'll start our next panel.