Name:
Only You Can Prevent Research Integrity Fires!: What Practices Contribute to Improving Research Integrity and How Can You Help?
Description:
Only You Can Prevent Research Integrity Fires!: What Practices Contribute to Improving Research Integrity and How Can You Help?
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/ecb7989a-134d-481e-935a-06b08b50aac4/videoscrubberimages/Scrubber_1.jpg
Duration:
T00H59M43S
Embed URL:
https://stream.cadmore.media/player/ecb7989a-134d-481e-935a-06b08b50aac4
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/ecb7989a-134d-481e-935a-06b08b50aac4/session_5c____only_you_can_prevent_research_integrity_fires!.mp4?sv=2019-02-02&sr=c&sig=nmyyfgZHKnJ5ls%2FINYaeF%2BUiSoYaP5%2Ft6UE5lfwhhVo%3D&st=2025-04-29T19%3A19%3A31Z&se=2025-04-29T21%3A24%3A31Z&sp=r
Upload Date:
2024-12-03T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
There's not much around here anymore. But once, all this emptiness was rich with scholarship, successful businesses, stellar reputations. Over there used to be a great publishing house that held some of the best sources for papers. It was rich with new ideas, scientific breakthroughs.
Over here was a beautiful, lush library where patrons could seek access to trusted resources. Back there was a great rolling quad where students and faculty could discuss their research. But not anymore. Not since paper mills, citation boosting, manipulated images, fake author schemes. Somebody got careless with research, and a small fire started. People didn't think much of it at first, but it grew and it destroyed. Once lost, it could take generations to rebuild. Think before you carelessly accept that paper and take the life out of research integrity. Put out research integrity fires before they destroy the scholarly research ecosystem. Only you can prevent research integrity fires.
So hello, and welcome to this afternoon's session, Only You Can Prevent Research Integrity Fires. My name is Todd Carpenter. I am the executive director of NISO, the National Information Standards Organization.
And I am playing the role of Smokey Bear. Throughout this session we will be providing what I hope will be a fun and entertaining tour of research integrity, and a number of strategies and tools that can help you prevent research integrity fires. Joining me are Patrick Hargrett, Hylke Koers, Vincent Lizzi, Marie McVeigh, Ivan Oransky, and Jodi Schneider, who are going to act out a variety of skits for you as we discuss research integrity fires. Much like the Ad Council advertisements of the '50s, the '60s, and up to this very day, we'll be providing a number of short vignettes about how we can protect research integrity.
And I hope you will all enjoy this. It has been an experience putting this program together. Preventing forest fires can be a challenging activity. Part of what we're trying to do here is protect the scholarly research ecosystem. We can do this through education, through tools, and through metadata sharing, and we'll touch on many of these as we go.
So let's start. Who doesn't like a campfire? I'm going to tell you a little story about a researcher I knew some time ago, back in a simpler time. Now, all the researchers were having a grand old time in the spring, getting ready to publish their research results.
And there was one early career researcher in the field. He was working hard, but the bear cub didn't think about the implications of trying to skirt the system. Somebody got careless. Someone thought they could cut corners. They thought it wouldn't hurt anyone, but oh, no. That fire is spreading and it's ruining our reputation.
Sometimes research integrity issues are what cause us to question our expectations of the research integrity process. But obviously things have changed a lot. Not every researcher today is as conscientious as he was back in the day. Some people do make honest mistakes, and we need to correct them. But some people are using new tools, generative AI systems, image manipulation, networks of reviewers, and other methods to try to get ahead without the merit of working to achieve their accomplishments.
And these activities have costs for all of us. They waste resources as we seek to monitor and police this ecosystem. And each of us is trying to do our part. It starts with education and outreach. Organizations like COPE, STM, SSP, Crossref, and NISO have been working on building an infrastructure to protect research integrity. Monitoring groups like Retraction Watch and numerous scholars in our system are studying researcher practice and highlighting research integrity issues.
It's something that we all contribute to. It's something that we all can benefit from. So we'll start off with our first vignette. I'd like to bring the first group to the stage. No, not in front of the... OK. I love the forest of knowledge. There are so many fun things to do here.
And so much to learn. Oh, OK. The set's up, the tent's all set. Let's start a fire and enjoy some nice s'mores. But the fire scares me. Don't be afraid. A fire is a very useful tool for making light and heat and for cooking.
It's perfectly safe as long as we follow proper precautions and use the right tools. Like what? Well, first of all, before you build your fire, you should carefully select the location to ensure it's a good one, away from plants, brush, or other materials that could easily catch fire. A nearby stream would be perfect, so you have water at hand if you need to put the fire out quickly.
And when you've found a good spot, you should look for some stones and put them in a ring to keep the fire from spreading over the ground. Finally, make sure you have some tools at hand to put the fire out quickly, like a fire extinguisher, a fire blanket, or a shovel to cover it up with dirt. We always need to plan for the best, but be prepared for the worst.
How do you know all of this? Well, the Park Service has some very helpful information on what you should do before, during, and after your camping trip when it comes to fire. Their website is a great place to find information and resources that help us every step of the way. Howdy, folks! So, how's your camping area?
Good evening, Ranger. Did you come here to have s'mores with us? Maybe, but I'm mainly here to check and ensure you all are safe and keeping others safe, especially when it comes to making fires. We did a lot of research. It took a long time: we reviewed the park rules, brushed up on our fire-making process, and brought along some tools to make a fire and put it out safely.
Sounds like you're doing all the right things. Still, can I see an ID from one of you for our records, please? We like to trust, but verify. Don't be so rude. Hang on. One fire getting out of hand can cause many problems for people and animals.
That's why we have rules and procedures and why we educate folks. So, Mr. Ranger, what do you do to prevent forest fires? Don't be so rude, my daughter. That's quite all right. It's a good question, really. And there is a lot that my colleagues and I are doing, some of which will be visible to you. But there's also a lot that's going on behind the scenes.
We provide some resources to folks who come to the forest and to those who depend on it as a natural resource. We check in on campers like yourself to make sure they understand and they follow the rules and that they have the right equipment. Most of the time, that's really just a routine check, but sometimes we have to go through their belongings to make sure they're not planning to make fires with gasoline or something crazy like that.
But we also have tools and systems to check that there are no wildfires brewing. We have watchtowers in the forest, and we use satellite images and a network of fire spotters. And of course, we have a whole network of Rangers and firefighters who collaborate a lot and can easily contact each other to share their knowledge and their experiences. All of that so that we can continue to enjoy the forest of knowledge.
Thank you, Mr. Ranger. So did we pass inspection? I can't stay, so I'll give you all an A plus. So just like the forest, scholarly communications is a precious ecosystem, and we need to be constantly vigilant to keep it safe. Plan for the best, but be prepared for the worst.
We should equip researchers with guidelines, guidance, and tools, the buckets, the shovels, the rings of stones, to uphold the integrity of their research publications and keep fires from spreading. We should trust but verify, and publishers in particular should make sure they have an understanding of who they're dealing with, to prevent misbehavior and also so that the people who are creating the forest fires can be held accountable.
And of course, we need to be on the lookout both at the level of individual interactions, much like how a forest Ranger inspects the belongings of someone that they don't quite trust, but also at the level of watchtowers and satellites that survey the forest as a whole. Over there, you might see a pattern of fabricated data and plots that could be indicative of the activity of a paper mill, or over there, a questionable network of fake reviewers.
That could be the start of another fire, only visible from a distance. A pattern of multiple similar submissions might be a bad actor playing with research integrity matches. The Integrity Hub is such a watchtower, combining the views and the insights from many different perspectives to spot patterns that may not be visible to any individual on the ground.
And sometimes the start of a fire can be very small, like embers flying through the air. Each ember could rapidly spread a fire if it's not put out immediately. So we need many eyes on this to notify us quickly and to prevent them from spreading. The Integrity Hub offers such eyes through applications and resources: for example, a tool that checks for signals of paper mill origin or for duplicate submissions, and resources that help you select the right tool to check images in submitted manuscripts.
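To make that concrete, here is a toy sketch of what one such duplicate-submission signal could look like: comparing the word overlap of titles and abstracts across submissions and flagging suspiciously similar pairs. The submission IDs and the threshold below are invented for illustration, and this is not how the Integrity Hub's tools are actually built.

# Toy duplicate-submission screen: flag pairs of submissions whose
# title+abstract word sets overlap heavily (Jaccard similarity).
def word_set(text: str) -> set[str]:
    return {w for w in text.lower().split() if len(w) > 3}

def jaccard(a: set[str], b: set[str]) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def flag_duplicates(submissions: dict[str, str], threshold: float = 0.8):
    """submissions maps a (hypothetical) submission ID to its title + abstract text."""
    ids = list(submissions)
    sets = {sid: word_set(submissions[sid]) for sid in ids}
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            score = jaccard(sets[ids[i]], sets[ids[j]])
            if score >= threshold:
                yield ids[i], ids[j], round(score, 2)

# Example with made-up submission IDs and text:
subs = {
    "journal-A-0123": "Effects of compound X on cell proliferation in vitro under hypoxic conditions",
    "journal-B-0456": "Effects of compound X on proliferation of cells in vitro under hypoxic conditions",
}
print(list(flag_duplicates(subs, threshold=0.6)))

In practice a real screen would normalize the text far more carefully and compare across journals and publishers, which is exactly why a shared watchtower is needed.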
Another metaphor that I think is apt: no one can do this by themselves. We need many eyes on research integrity, from partners spread throughout the research ecosystem, partners who can share what they see and what patterns they notice, so that everyone else in the network can react and protect our scholarly record. We need collaboration amongst the firefighters, and we need to be able to communicate and collaborate and act quickly.
This is why collaboration is key for the STM Integrity Hub. At its core, it is a collaborative, community-driven program that fosters knowledge sharing and co-creation. In that sense, while tools and infrastructure are a key part of it, it's more than that. It's also about consistent legal approaches, policy frameworks, and business practices that are consistent across the community. So while only you can prevent forest fires, you cannot do it alone.
So in the next vignette, we will talk about the CREC project and NISO's new recommended practice on the communication of retractions and expressions of concern. No matter what we do to prevent research integrity fires, sometimes they do get started.
We need to limit their spread to minimize the damage that they do to the rest of the researchers and others in the ecosystem. Once a paper is identified as problematic, we want that fact to be clear to everyone who comes across it. If a paper is unreliable, or its methods or results have been called into question, other researchers need to know, so that their own work doesn't enlarge the fire, either by depending on unreliable results or by citing the paper so that it continues to reach more and more people.
Many parts of the research forest can be affected, even by someone who doesn't mean to spread the fire and didn't start it. So we will bring together some researchers to highlight this. I was a child prodigy. Now I'm a researcher.
Gee, it's great to get out of the lab for a bit. Sometimes I get so busy with our research trees, I forget how large and beautiful the research forest is, and how we all depend on other projects and other trees. It's a whole ecosystem out here. It sure is nice and we deserve a break.
Our project is well on its way. We have to start getting ready to publish. I was looking through some of our notes and I found a paper we read a year or so ago, the one that kind of started us checking further into the consequences of their results. Oh yeah, by Professor Heimerdinger. We tried for months to replicate his results, and we did get close, but there were some really amazing findings in there.
Like really amazing. Super amazing. You could even call them fantastical. I could not make up data that good. The linear regressions, I mean, r-squared of 0.999999999. Who gets results like that? Whose results are so consistent? But it was peer reviewed and published, and we really did use it.
Our work really does stand on its own. We were super careful, and our results weren't exactly conflicting with Heimerdinger's model; we just took a different branch and built up some evidence that supports it. He was working on such an arcane part of the field. It could be that his effects were really specific, just to that part of the model.
It's still pretty important to our work, so we will need to include it in our references and discuss its relationship to our own work. It's cited by so many other papers, including a lot of the other papers we used. Did you spill something on your jeans? Why am I smelling smoke? I don't know. Yeah, smoke.
We should take a look at that paper again. Do you have a copy? Yeah, here it is. It's kind of dark here. Does this help? Yeah, yeah. Thanks. What's that?
Oh, before we include it in our references, let's search a bibliographic database and see some of the literature related to it. What's this? The paper title has changed. Now it says "Retracted" before the title. That didn't used to be there. Let me see that.
Open the PDF attached. It can't be. It just... I just printed this copy a few weeks ago. Look at the PDF. It's watermarked with the word "Retracted." And the title has also been updated. It says the date of retraction was just a few days ago.
And now there's a link to a notice. What does this mean? A problem paper is a fire. Every reread, every reuse of that paper is like taking a burning stick and spreading it around, increasing its damage.
And it was funny in rehearsals. Fire! Fire! A project organized by NISO called CREC is nearing completion. CREC stands for Communication of Retractions, Removals, and Expressions of Concern. The effort has the double purpose of identifying a paper whose editorial status has changed, and also of propagating that information to the stakeholders.
So that they know. All of the stakeholders in the research forest need to know when something gets retracted. And it's not just showing where the original fire started. That's important. But there also is this issue of propagating awareness to the places it spread, like all the people who cited Doctor Jagermeister's paper before, who need to know that it shouldn't be relied on.
So CREC includes recommended practices for publishers, aggregators, and platform services about the creation, content, transmission, and display of item metadata. Metadata? Metadata, like the musical? Yes, that is another way that metadata can make the information system work. On the slide, so let's concentrate here.
CREC outlines the necessary metadata, as you can see, for items that have some important changes in their editorial status, like an expression of concern or a retraction. It includes recommendations for the necessary components of notices of retraction, notices of editorial concern, or even the very exceptional cases of article removal. The primary aim of the recommended practice is to establish best practices for metadata creation, transfer, and display for both the original publication and the statement of retraction, removal, or expression of concern, with the goal of facilitating the timely and efficient communication of information to all relevant stakeholders, including these ones.
Although retraction remains relatively rare, it's not really as rare as we'd like it to be, and the rates seem to keep going up. I have to continually ask Ivan, what's the number now? What's the one-in-how-many? It just keeps going up every time I ask. So what that means is we really need to follow best practices. The need for these best practices for metadata transfer and display has increased along with the growth of retractions.
Removals, again, are still rare and should be justified; don't do them unless you really have to. And expressions of concern. It's really crucial that researchers who discover a publication be able to find out about its status, ideally before they incorporate it in new work. And that can only happen if all the copies, all the copies of the publication, are properly marked.
So the CREC recommendation also extends guidance so that publishers can consider what actions to take beyond their own systems: aggregators, original authors, their institutions, maybe peer reviewers, editors, editorial boards, preprint servers, associated data repositories, and so on. So ultimately, we have to fully extinguish that fire, right, before we can be sure that it won't reignite.
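For anyone who wants to act on that guidance today, here is a minimal sketch of the kind of downstream check this metadata makes possible: asking the Crossref REST API whether any retraction, correction, or expression-of-concern notice has been registered against a DOI. It assumes the publicly documented "updates" filter and "update-to" field behave as described in Crossref's documentation, and the DOI shown is hypothetical; verify against the current docs before relying on it.

# Minimal sketch: look up editorial-status notices registered against a DOI
# via the public Crossref REST API (filter and field names per Crossref's docs).
import json
import urllib.parse
import urllib.request

def editorial_updates(doi: str) -> list[dict]:
    """Return notices (retractions, corrections, expressions of concern) that update `doi`."""
    url = "https://api.crossref.org/works?filter=updates:" + urllib.parse.quote(doi)
    with urllib.request.urlopen(url, timeout=30) as resp:
        items = json.load(resp)["message"]["items"]
    notices = []
    for item in items:
        for upd in item.get("update-to", []):
            notices.append({
                "notice_doi": item.get("DOI"),
                "update_type": upd.get("type"),   # e.g. "retraction", "expression_of_concern"
                "updated_doi": upd.get("DOI"),
            })
    return notices

if __name__ == "__main__":
    print(editorial_updates("10.1234/example-doi"))  # hypothetical DOI

Publishers, aggregators, and repositories each running a check like this, and displaying the result prominently, is what keeps every copy of a flagged paper properly marked.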
And every copy of the misinformation has to be properly updated. So we need to... well, we've lost our fire; I'm standing on the fire here, so you can just imagine that going on. Our paper is saved.
So it's important to know that retractions are a tool, and like every tool, they can be used to help and support research integrity. But not everyone is always careful about protecting their research ecosystem. Not everyone is a good actor in our community. Not everyone adheres to research integrity safety practices. We need to be concerned about those people too.
They could just as easily burn down our research integrity ecosystem and destroy the underlying trust that we have spent decades and centuries trying to build. Some of these people only care about themselves and their own careers, and they don't care about the destruction or the implications of their actions. It's for this reason that we need tools like Retraction Watch.
We need awareness building and signaling mechanisms so that people can understand what's happening throughout the ecosystem. I mean, who doesn't like sitting around a campfire, roasting marshmallows, eating s'mores, and keeping warm on a cool spring night in the Green Mountains? I sure do. Not all campfires are bad; you just need to make sure that you keep them safe.
Sometimes there are good reasons why a paper is retracted. A researcher might find an error in their methodology or data set. A post-publication reviewer might spot a flaw in the analysis or an error in the software code. We all make mistakes, and the scholarly record is even stronger when errors are identified and corrected. Far from it being a point of shame, researchers should help fix the problems, and they shouldn't be shunned or punished for their honesty and rigor.
This process helps to secure the scholarly record by finding problems and correcting them. However, this is certainly not the case for all retractions, expressions of concern, or withdrawals. There certainly are bad actors who care little about their research ecosystem. They start small fires and will do little to help prevent their spread. And we need a group of monitors who can pay attention to all of these actors.
And this is where Retraction Watch comes in. So with this, I'm going to pass to Ivan to talk a little bit about Retraction Watch, what it does, and how it can help. Thanks very much, Todd. I think I'm the only person in this ensemble who's playing himself exclusively. But I did audition for the part of the bear.
I made the mistake of shaving that morning, though, and I guess that disqualified me. Maybe next time. But thanks for the chance to talk a little bit about what Retraction Watch does, and a little bit about what Retraction Watch doesn't do, because I think that's probably just as important.
So, in the vein of spreading awareness: obviously we've been at this since 2010. Some of you may have known about it since then. Some of you may have just been hearing about us today. Welcome, and thank you. So what we do is report on retractions. That's literally what it says on the tin. We do that using journalistic methods, so we have a bunch of different techniques.
We obviously do some research. We interview people. Some of you in the audience have heard from us. We submit public records requests and do those sorts of things that journalists do. In terms of spreading awareness, I was happy to read a study that came out last year, and I'll just mention it.
It turns out that when we report on a retraction, if you exclude those big ones that everyone is writing about or everyone hears about, your garden-variety retractions, if you will, there's a correlation between us reporting on it and it being cited less often after it's been retracted. We're very happy to hear that; we didn't set up a trial ourselves to test it, but it turns out that's the case.
So I do think that even for the scientific literature itself, there is some value in raising awareness like that. We also report, though, on anomalous behavior that may or may not lead to a retraction. There's a story brewing right now in Spain about someone who has just been elected rector of a major university there. We had reported a couple of years ago that he basically likes to cite himself a lot: he creates PowerPoint presentations, puts them onto his university server, and then cites them in future PowerPoint presentations.
And anyone here from Google Scholar? It turns out you all pick that up as a citation. This guy has the highest h-index in Spain, for a strange reason. So there are no actual retractions associated with that, but it's the kind of thing we report on. We try to hold players accountable. Some of you may have felt that yourselves, and I would say I'm sorry, but I'm not, actually. And I don't think you'd want me to say that.
We hold players accountable, and often those are institutions, whether they're publishers, whether they're universities, whether they're individuals, any mix of that. We also have, though, a doing-the-right-thing category. So as Smokey was just saying, it is really important to, I would say, embrace, honor, and cherish good behavior, even if it is done at a potential career cost. We created, and we continue to maintain, the Retraction Watch Database, which hopefully you're all aware of.
That was acquired, so it has been part of Crossref since last September. It's freely available, it's completely open, but our work on it continues. And we're just delighted with that agreement, which allows it to be open and also gives us significant financial stability. We also serve as a source for larger journalistic outlets, whether we're asked to comment on something or asked to partner with them on reporting something out.
And so sometimes people, certainly those who haven't heard of us before, might say, well, OK, we'll just ignore them, whoever that person is calling from, one of our reporters or something like that. They're welcome to do that, but it often doesn't go well, either in the pages of Retraction Watch or somewhere else, where someone picks up the story and wonders why they weren't talking to us. And of course, we speak at conferences, universities, et cetera.
And so we have become a sort of press-friendly source who can talk about these issues. I just briefly want to say some things we don't do, because there's sometimes some confusion about that. So in terms of our role in preventing forest fires, we don't actually comment on the forest fire. We don't look at that fire and go, that's a great-looking fire. I mean, sometimes we're tempted to say, that's a great-looking fire.
Be a shame if anything happened to it, kind of thing. But we don't actually comment on them, and we don't ever advocate for a retraction. As awareness of our work has grown, I think some people are justifiably confused about that. We even see some coverage from around the world that says things like, Retraction Watch retracted this paper, and I'm like, that's not a thing. But OK, you do you. We don't do that.
In all seriousness, I think it's really important for editorial and journalistic integrity that we don't. And we don't comment on any cases we haven't reported on, even when people ask us to do that; sometimes that's not all that helpful. The other thing we don't do, and this is a little bit of an opportunistic addition here, is we don't report on as many cases as we'd like.
Jodi was maybe asking me to say what the number was, so I'll say it now: about 1 in 500 papers is now retracted. As many of you may know from reporting in Nature, based on our database and other sources, there were actually well over 10,000 retractions last year. We have a staff of, depending on how you count them, about three FTEs, and only half of them are actually on the journalism side.
So we're obviously not going to report on all of them. And honestly, they're not all that interesting. And so we'd love to even do more. My last bit here is just things you can do to help Retraction Watch, and most of them are things that many of you are already doing or may know about. We love hearing stories, whether they're on the record or off the record.
Any direction, we love them, particularly if they have documents attached. And we love our scoops. We are journalists. You can encourage other people to do the same. You can also encourage people to talk to us directly. I think they checked for tomatoes and eggs outside the door, but this may get me pelted with whatever you have left in your pockets:
We prefer to talk to the people doing the thing or not doing the thing rather than spokespeople. We understand the need for that, but we do always prefer that. We have a Google form where you can let us know about retractions. Obviously, if it's a bulk retraction, which has happened a lot lately, probably easier to just send them all in a spreadsheet and we'll happily receive them that way.
And just send us news items of interest. We have a daily newsletter that about 17,000 people are signed up for; always happy to have more. It's free, and we have a Weekend Reads feature that gathers all those sent-in items, and if we think the community would be interested in them, we'll include them. And then finally, since I have the podium, I will say: if the prompt is things you can do to help support Retraction Watch, you can do that quite literally if you'd like.
We are a nonprofit and I'm a volunteer, so thank you. And I'll give it back to Smokey the Bear. You might think that all fires start the same way, that they all spread in the same manner, and that fighting each one is done the same way.
But you might be surprised that every fire is quite different from the last. Perhaps this fire was started by a child playing with matches. Perhaps this fire was caused by a careless camper. Perhaps this researcher was trying to push a corporate agenda. Perhaps that researcher was trying to push a political agenda. There are also those who are concerned about their promotion dossiers and believe that no one will notice if they game the system just once, to push themselves into a faculty position.
It might be the wind, it might be the humidity, that causes fires to grow and spread. It could be that the publishers weren't sharing data. Alternatively, it could be that there was no consistent way to signal that a paper shouldn't be shared, allowing retracted outputs to continue to live on in other repositories. Studying these factors can help us understand the scope of the problem.
What are the criteria that lead to its spread, and how can we minimize those conditions? So joining us today is Doctor Jodi Schneider, who'll be telling us a little bit about some of the research that she's been doing to study the spread and process of research integrity challenges, and the work she's been doing to track their impact. Learning more about these processes will help us understand the problems we face and how we can avoid some of them.
So, Jodi, tell us a little bit about your research and some of the things you've been doing to study this delicate ecosystem. Thanks. Thanks, Smokey. I started studying retraction because an undergrad came to me and wanted to write a paper. We took an example that had already been published; we just wanted to replicate that paper. And the thing that we were replicating
was about a paper that was retracted and still getting cited. It was published in 2005 and retracted in 2008. Already by the point that we started studying it, it had more citations after retraction than before. It turned out that it was worth following up on. We got in touch with the original authors we'd found out about this from when we wrote our paper about it.
It had been 11 years since it was retracted, and it was still being cited. And that just shocked me. Why would people be citing something like that? Well, it was the only really good paper to cite for the things it was being cited for. It was a human trial. It was unique. Nothing else could provide that evidence.
It was really, really problematic. I looked at it and I said, well, from the publisher page, I can't tell it's retracted. From the databases, I tried to get to the retraction notice. I tried eight different databases. Embase was able to get us through from the publication to the retraction notice; everything else, PubMed included, you get this big linking error. Linking errors all around,
lots of different databases. And so I just couldn't. I haven't been able since then to stop studying this issue of how we stop people from inadvertently citing retracted papers. There's lots of documentation from many, many other researchers who have looked at how bad the situation is in some fields: for half of the papers that are retracted, you can't tell from the publisher site or you can't tell from the databases.
I recently read a paper where they looked at anesthesiology and talked to researchers who were citing those papers after they were retracted. An overwhelming percentage, something like 80% of the people, had no idea the papers were retracted, and yet they had tried to do the right thing. They had looked at the databases. Many of them had checked things.
And so that's the situation that we are still in. I really don't want us to be in that situation. I want really good data everywhere. I would love for nothing to need to be retracted, but that's never going to happen. So what's the next best thing? We need to know what's retracted. And thank you for the Retraction Watch Database, the best data we have.
The best thing that's happened this year is Crossref acquiring a license to open that data and make it available. So it's really for everybody now to use. And I would encourage you to think about it: if you're not already using this open, free, available Retraction Watch data, you should be. So anyway, I could talk the rest of the day about this problem.
So you can find me at some later point if you really want that conversation. My husband will be grateful. So, Jodi, could you tell us if you've noticed any changes in the data that you've been monitoring as you've been doing this research? Perhaps it's too soon, but I like to think we're making a difference in this process. Have you noticed any positive impact? Is it getting better?
I mean, with the Retraction Watch data being open: in February, my PhD student, who's working on this, and I wrote a paper about four different multidisciplinary data sources, just looking at the consistency between them. No, they're totally inconsistent. Now we're looking at 11 different databases. Looking back, things have gotten a little better since the previous pull.
We had pulled data in April 2023, a little bit before the Crossref acquisition of the Retraction Watch data. So it's getting a little better. The PhD student came to me and said, well, between February 14 and February 28, there's been this massive change. He was looking, in that case, at Web of Science. And he's like, what's going on? And yes, it was a load of a bunch of new data.
So, data consistency and metadata are being improved. There are places where the metadata is getting better, but we want that to be universal. And this, again, is why you should really make sure that you're checking against it and using it, for transferred papers, for aggregators; there are lots of places where things can get lost. I remember the first time I asked Ivan how many papers were retracted.
It was like, oh, one in 2,000. Then it was one in 1,500, one in 1,000, one in 500. It just keeps going up. It used to be this little dusty corner that we thought we could ignore. These days, the silver lining of the many problems that we've seen recently is that we know we can't ignore them. We have to pay attention to what's retracted. We have to make the data easy.
We have to make the display better. And I'm excited that the standard that's recommending how to do this is on its way out the door. The last meeting was this morning; we're very, very close to finalizing it. So, well, you've done a lot to help advance the study in this space. What's next, now that we have things like the Integrity Hub and the community?
Where should we go next? What's on the horizon? Implement the recommendations. Implement the CREC recommendations. Implement the CREC recommendations. If you haven't heard of CREC, the CREC website is there, the draft standard is there. Hopefully this summer the final version will be out the door, but it's not too soon to start looking at it and having conversations.
You've seen that a number of the folks here have been involved in that work, and so all of us are good people to talk to about that. Well, thank you so very much, Jodi. Thanks, Smokey. And I'd like to close by, first of all, thanking my merry band of actors.
We have some time left for questions. I don't know if there's anyone monitoring the Zoom room. Yes? Excellent. So if there are people who have any questions, we have some of the leading experts in the world of retraction study and of trying to prevent research integrity fires. So if anyone has any questions, we would welcome them.
Happy to get started with a question from the chat: editorial offices are notoriously overburdened and understaffed. What are best practices for prioritizing research integrity in a proactive way? Marie?
No, we don't have that; that's not what we have prepared for this. I would say that I do pre-publication research integrity at Mary Ann Liebert, and we have the standard array of both integrated and externally invoked tools for simple identifications. One of the things that I think is probably our most important first line of defense is educating the editors and informing the authors and reviewers of your policies and the needs of the journal, and why these things matter.
I find that empowering the editors to use the online or integrated tools in our peer review system helps: iThenticate, proofing tools, other things like that, the unusual activity detection and algorithmic surfacing of anomalies. Don't I wish that any of those tools could just make the decision for them, but they can't. And so I think that our critical endeavor is to engage their best judgment.
I've had many an editor ask me, so this is a paper mill, right? And I'm like, I can't tell you that definitively. I can tell you that we have a series of circumstantial indicators, and you start to say, how likely is this event? Not very likely, but there could be an explanation. Well, how likely is it that this is also true of this paper? Well, that's also unlikely. And for them to co-occur is kind of unlikely.
It's a recommended reviewer. They took an hour to review your original submission with 15 tables and 25 Western blots. How likely are these things? And then when you start to accumulate the unlikely events, you start to have more confidence in the fact that this is not a submission that was created or reviewed in good faith.
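As a back-of-the-envelope illustration of that "accumulating unlikely events" reasoning, here is a toy calculation. The indicator names and probabilities are made up, and real indicators are rarely independent, so treat it as a heuristic, not a screening tool.

# Toy illustration: if several independent, individually "unlikely" indicators
# co-occur on one submission, the chance of an innocent coincidence shrinks fast.
from math import prod

indicators = {
    "recommended reviewer returned a full report within an hour": 0.05,
    "reviewer email is non-institutional and newly created": 0.10,
    "figure elements reused from an unrelated submission": 0.02,
}

p_innocent = prod(indicators.values())
print(f"Chance all three co-occur innocently: {p_innocent:.4%}")  # 0.0100% with these toy numbers
# Low enough to warrant a closer human look, never an automatic decision.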
And in that same vein, I know that as a system we really want to provide authors what we've always provided authors, which is: here is what you could do to improve your paper, here's why I'm rejecting your paper, here's what I see as a critical weakness. When you are dealing with a bad-faith actor, that's the worst possible thing you can give them. They have not given you a good-faith submission where you owe them a good-faith editorial rejection. They have given you a bad-faith submission, and the more information you give them about how you detected this, the more information they walk away with
to slip past your defenses the next time. And I think that's a fundamental switch. We all want to act as though there's nothing wrong, and as though everyone were a good actor. The people who voluntarily retract their papers because there was a mistake, and I did that in a lab, voluntarily retracted a paper, they're not the problem. The problem is the ones who don't retract the paper, the ones who swear that they didn't duplicate that image, or "we lost our original data."
How tragic. Sometimes that's true, but sometimes it isn't. So you have to go on what your instinct has taught you about the quality of the research and on the circumstantial evidence that develops around it. A follow-up, just for the mic. Thank you. So how should you then phrase your rejection of such a paper?
Would it just be the one word, reject, and then move on? Depending on how far into the process you've gotten, sometimes that's the answer. If you're pulling that up at the desk, and you're pulling it up relatively quickly, then a simple rejection is probably your best bet. The further downstream you are issuing that rejection, the more delicate the balancing act becomes, and again, the more you are exposed,
in a circumstance where you don't want to assume bad faith but you cannot assume good faith. So yeah, it's a bit of a dance. Thank you. I had a question that was kind of related to Jodi's research about how hard it can be for researchers to spot papers that have been retracted. I'm not going to ask you to name names, but is this a problem across all of the publishers, or is this a problem that is clustered in certain groups of publishers, by size, geography, or other things?
I don't actually know; it's a great research question. The work that people have done, it's really manual work to study this. So for instance, there's a paper about mental health fields, and there are papers in other specific fields, I think radiology and cancer research. So it's only been studied a little bit. But thanks for the great research question.
Yeah, what I'll say is that I don't know that it differs by publisher in terms of discoverability, but there are some pretty significant differences by publisher in terms of visibility in particular, or the way that they flag, and what they do in terms of overriding HTML pages, to get into the weeds a little bit, or making things disappear without letting anybody know, or including a notice. And the publisher that does all those things badly is, I'm sure, represented here, but everybody does them badly to some extent.
And so I think that's part of the importance of CREC, which I was happy to also take part in: let's standardize this. And I actually think everyone sort of has at least a gut instinct about the best way to do things, but it hasn't really coalesced, and that's why I think CREC is really important. Hi, my name is Jennifer.
I'm with the Permanente Federation. I have a question, but I'll start by thanking Ivan. At one time Retraction Watch covered a journal that I once published, which may seem counterintuitive to be thanking you for, but sincerely, thank you, because it drives the appreciation for that work forward. It's very meaningful. So it's not the mark of shame, I guess, that some people might think it should be. But I have a question, perhaps for Jodi,
I'm not really sure. I know we see authors who will do kind of whatever it takes sometimes to game the impact factor, or sorry, their h-index rather. I'm wondering if any of the results of what CREC is working on for those forthcoming guidelines will have any bearing at the individual researcher level, or if there's any transparency given around that. Thank you.
I don't think that there's much in CREC for individual researchers, and the gaming of metrics is a completely separate and really important problem, and I think that's a problem that can get fixed by incentives and probably by nothing else. There was a panel yesterday on deep impact, and I think if we really want to fix incentives, we have to think about how researchers are evaluated. That's an international, I mean global, multinational problem.
The systems are different every place. It's really hard to fix. The advice in general for individual researchers is: if you're walking in the forest and you're talking about the paper you're going to cite, by Doctor So-and-so or whoever, go look in Zotero, in Papers, in EndNote, in a library system that's using Third Iron. And ask Ivan for the latest version of who's using the Crossref Retraction Watch data from the database, because that's where individual authors can really more easily find out. From the publisher side, help authors by doing checks of bibliographies, ideally at multiple stages.
Papers can get retracted 40 years later. It's not the author's fault if they put the paper in, and then, in the two or so years that the thing had to get through the process, the paper they're citing has been retracted. Publishers really can help at that manuscript-surveillance point, and the data is freely available now; there's no reason not to use it.
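As a rough sketch of what that manuscript-stage check could look like on the publisher side, here is a small example that screens a list of cited DOIs against the openly licensed Retraction Watch dataset distributed via Crossref. The file name is a placeholder, and the column names ("OriginalPaperDOI", "RetractionNature") reflect recent copies of that CSV, so confirm them against the copy you download.

# Sketch: screen a bibliography's DOIs against the Retraction Watch CSV.
import csv

def load_flagged_dois(csv_path: str) -> dict[str, str]:
    """Map each flagged DOI to the nature of its notice (retraction, EoC, etc.)."""
    flagged = {}
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            doi = (row.get("OriginalPaperDOI") or "").strip().lower()
            if doi:
                flagged[doi] = row.get("RetractionNature", "")
    return flagged

def screen_bibliography(cited_dois: list[str], flagged: dict[str, str]) -> list[tuple[str, str]]:
    """Return (doi, notice type) for any cited DOI that appears in the dataset."""
    return [(d, flagged[d.lower()]) for d in cited_dois if d.lower() in flagged]

# Hypothetical usage:
# flagged = load_flagged_dois("retraction_watch.csv")
# print(screen_bibliography(["10.1234/some-cited-doi"], flagged))

Running a check like this at submission, at acceptance, and again before export to aggregators catches papers whose status changed while the manuscript was in the pipeline.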
If I may add to that before we move to the next question, about that visibility: that link is the visibility of retraction, corrigendum information, et cetera. And at the risk of this sounding a little bit like a plug, another project that I work on is called GetFTR, which is basically, or primarily, around entitlement signaling. But we've recently extended those capabilities such that, whenever folks click on a link that is mediated via GetFTR, there's a check in the background to see if that article is known to be retracted in the Crossref Retraction Watch database.
And then that's also signaled to the researcher at the point of click. So that's supplementing the efforts that, of course, still need to be done by publishers, aggregators, et cetera, but it's basically another pathway to get visibility for the fact that the work has been retracted. Stephanie Dawson, ScienceOpen. I had a question about preprints, about how people are dealing with retractions of preprints, because this is something that's much more author driven.
There's much less editorial checking before those go online. And what we see, and we have our own preprint server on the platform, is that one of the biggest problems is young researchers posting things without permission of their PIs and then coming in a panic, saying, I have to take this down right away, my PI is going to kill me, I'm going to lose my position. We have these really frantic young researchers, and we have already given it a DOI, and we're like, OK, well, we can retract it.
This is also not going to look great; can you talk to your PI? But this is the situation that we run into on a weekly basis. We improved it somewhat by putting in a rather high retraction fee: if you want us to retract your paper, please make sure you have got everything, and it's really clear you're going to have to pay money if you ask us to take it down. Because people really think, could you just take it off the internet, can you just get rid of it?
Because it's actually a problem for me now. And I was wondering if anybody else is seeing this issue with retractions and preprints. Yeah, it's sort of evolving. We do include them where we see them. Some preprint servers do make them literally disappear, so that makes them difficult to find.
But we do include what a lot of preprint servers call withdrawals rather than retractions; we have some editorializing we do about that. So we do include those in the database. If it's got a DOI, it's citable, in that sense, so something needs to happen to signal it. And frankly, some people are sometimes surprised to hear me say this, but I would be perfectly happy with a world where retraction actually wasn't a thing, where you just had some correct signals, some accurate signals, about what had happened to that thing that lives at the DOI. But we're not in that world, and I don't think that's going to happen before I retire.
So it's an evolving thing, and I think that every preprint server sort of has its own approach. I should, by the way, since Stephanie is in the room, also disclose that as part of my actual day job at the Simons Foundation, I am the program officer for a grant to Cornell Tech for arXiv. So I'm kind of a big believer in and proponent of preprint servers; a sort of conflict-of-interest disclosure there.
But I think that it's evolving, and I'd actually love to see something. It's not that it's incorrect, because if it uses the word retraction, or what have you, it would be, I think, sort of covered by that. On the other hand, it'd be great to have a secondary, if you will, another look at how preprint servers can deal with this.
All right. Yeah, hi. I'm Megan. I'm a scientific editor with Elsevier. And just a question. We talk a lot about, and it's what was just being said, the different reasons for retractions. Sometimes it's that a researcher discovers they made an error and they come forward and voluntarily ask to retract.
And sometimes, of course, we know it's not; it's someone acting in bad faith. I know that the Retraction Watch database does include the reason why something has been retracted, but is there any thought about creating separate categories, a sort of retracted-for-an-honest-mistake at the researcher's request versus a forced retraction from the publisher?
I mean, the quick answer is not explicitly, because we would rather have, again, granular kinds of explanations. And some people have said our reasons for retraction are too granular, which we appreciate. We would rather have a narrative, and a retraction notice that includes the right sort of information that allows people to tell the story, because it's often nearly impossible to tell whether a retraction was forced. There are actually retraction notices that say the authors retracted this, or even retracted this voluntarily.
But my definition of voluntarily doesn't actually include an "if you don't do this, we will retract it," or an "if you don't do this, your institution says we're going to fire you." That's not voluntary. And was it honest error? All of these things. So it's just complicated. And so we would rather have a fulsome notice that we, and others, can pull from.
There's some really cool work, I think, taking our taxonomy and overlaying it onto whatever people want, or nesting it in other ways, and we welcome that. Again, it's all open; people should do whatever they want with it. But I think it starts with having a fulsome, detailed retraction notice, and then we can figure it out. All right.
Well, we have reached the end of our time. I hope you've enjoyed our little merry experience. Thank you all so much for attending. Thank you for participating in this scholarly publishing conference. And look for the CREC recommendation, which should be published in the next, hopefully, fingers crossed, few weeks.
So thank you all, and have a safe trip home. Bye. Yes, yes, that was so good.