Name:
Ethics in Publishing
Description:
Ethics in Publishing
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/31b4d2cf-c581-4407-aa97-79343039c53b/thumbnails/31b4d2cf-c581-4407-aa97-79343039c53b.png
Duration:
T01H00M22S
Embed URL:
https://stream.cadmore.media/player/31b4d2cf-c581-4407-aa97-79343039c53b
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/31b4d2cf-c581-4407-aa97-79343039c53b/GMT20220728-150130_Recording_gallery_1760x900.mp4?sv=2019-02-02&sr=c&sig=%2FtikLtFdhrR7GdHLdm0JapAZvdYwcKhwtl6uyP7p2k4%3D&st=2024-11-22T18%3A41%3A58Z&se=2024-11-22T20%3A46%3A58Z&sp=r
Upload Date:
2024-02-23T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Thank you and welcome to today's Ask the Experts panel. We are pleased you can join our discussion on ethics in publishing. I'm David Myers, SSP Education Committee member and lead publisher at Wolters Kluwer. Before we start, I want to thank our 2022 Education Sponsors: J&J Editorial, OpenAthens, Silverchair, 67 Bricks, and Taylor & Francis F1000.
We are grateful for your support. A few housekeeping items: phones have been muted, so please use the Q&A panel to enter questions for the panelists. Our agenda is to cover whatever questions you have, so don't be shy about participating. This one-hour session is being recorded and will be available following today's broadcast. A quick note on SSP's code of conduct and today's meeting.
We are committed to diversity, equity, and providing an inclusive meeting environment, fostering open dialogue free of harassment, discrimination, and hostile conduct. We ask all participants, whether speaking or in chat, to consider and debate relevant viewpoints in an orderly, respectful, and fair manner. It is now my pleasure to introduce our moderator, John Warren, director and associate professor of the Master of Professional Studies in Publishing program at George Washington University.
John formerly held the positions of director of the George Mason University Press, marketing and sales director at Georgetown University Press, and director of marketing and publications at the RAND Corporation. He has a master's in international management from the School of Global Policy and Strategy at the University of California, San Diego. John is a frequent speaker at international conferences and has authored several articles on digital publishing and other topics.
And now over to you, John. Thank you, David, and thank you for inviting me to be the moderator for today's session, and thanks for all your work helping to set this up. So I'm going to introduce our three amazing panelists. We do want to have a lot of questions from you in the audience, so please give us your questions. We have a couple of prepared questions, but we want to hear from you.
That's the whole point of Ask the Experts. Before we start, though, I do want to mention a couple of upcoming events. SSP and the publishing program at George Washington University have determined that we'll have an Ethics in Publishing week in mid-October this year. There are a couple of events I'll mention briefly here, and I'll put the links in the chat later.
The first is the Three Ethical Challenges in Scholarly Communication panel on October 12th that SSP is putting on. And then we have the GW Ethics in Publishing conference, sponsored by SSP and IU Presses, which will be held October 14th as a hybrid conference in Foggy Bottom in Washington, DC, as well as virtually. We'll have both virtual and in-person panels, and the call for presentations is available right now, which I'll put in the chat in a few minutes.
So let me introduce our three amazing panelists. We have Dr. Chhavi Chauhan, who is director of scientific outreach at the American Society for Investigative Pathology and director of the continuing medical education program at The Journal of Molecular Diagnostics. She is a leader of the Women in AI Ethics collective, program manager for the Women in AI Accelerate and Raise programs, and an expert on the AI Policy Exchange.
She's a biomedical researcher, an expert scholarly communicator, and a mentor in the fields of scientific research, scholarly publishing, and AI ethics, especially for women and minorities. She's co-chair of SSP's Diversity, Equity, and Inclusion Committee and was named to the 150 AI and Analytics Leaders and Influencers 2021 list and the 100 Brilliant Women in AI Ethics 2022 list.
We also have with us Chris Graf, who is research integrity director at Springer Nature. He has a long-standing commitment to excellence in research and publishing, research integrity, and open research. He is chair of the governance board of the STM Integrity Hub and a member of the program committee and chair of the poster and award committee of the 2022 World Conference on Research Integrity.
And last but not least, we have Amy Kullas, who is director of ethics and IDEAA (inclusive diversity with equity, access, and accountability) at the American Society for Microbiology. Amy is responsible for oversight and management of ASM's organizational ethics policies and procedures. She oversees and facilitates the reporting processes, discussions, and resolution of allegations relating to science ethics, misconduct, and harassment.
She received her PhD in molecular genetics and microbiology from Stony Brook University. So as I said, we have a few prepared questions, but let's start with each of you briefly describing, and briefly, because we could take the whole hour on this for sure, your background and your current work on publishing ethics, and what some of your current priorities are.
So why don't we start with you, Chhavi? Thank you so much for that wonderful introduction and for the invitation, SSP. Just in the spirit of ethics, to set the record straight, I am a past co-chair of the DEI committee. Just to complement the wisdom from the other panelists, I will be focusing a little bit more on tools and technologies that will hopefully complement the expertise of my fellow panelists.
As background, I'm a researcher who transitioned into scholarly publishing. So if you want to ask me anything about automation of editorial processes, which is mostly the use of robotic process automation, the most rudimentary form of artificial intelligence, that would be a good thing; or how AI is now writing content and how, as publishers, we could deal with that; the use of new tools and apps like the Dilemma Game to explore ethical concerns within the research community and editorial offices; and perhaps how to peer review and vet the data that is being generated by emerging technologies, also in the current political landscape.
I think I will stop there. OK, thanks. How about you, Chris? Great, thank you for the opportunity to share a few things. So, as per my previous introduction, I lead research integrity and editorial excellence at Springer Nature.
I also serve on a wonderful collaborative model for addressing research integrity with the STM Association, called the Integrity Hub. I think I'll just say one or two things. I think we could probably all recognize that a large proportion of the ethical and integrity concerns that we think about as a research publisher are about the research itself that we publish and the people who did the authoring of it.
So basically they happen a long way upstream from where our work is as publishers. They happen where the research itself is done, and we just become aware of them through the publishing process. And so while I think it's true that we can do, and are doing, more as publishers to address these problems systematically, and the Integrity Hub from the STM Association is going to go some way there, it's also true that other actors have a responsibility to address these issues too.
Those would be the organizations that fund, employ, and set policy for research and for researchers. So I guess that's where I'm briefly bridging to the Integrity Hub and the collaborative effort led by the STM Association, which is a beautiful thing, because a broad coalition of those who are able to act and are interested in acting is, I think, what we need: going to the source by working with universities and funders as well as publishers, and, where publishers share a concern, bringing publishers together into a coalition that can make change happen and deploy shared knowledge and shared infrastructure to address the problems that we also share.
So, yeah, a couple of thoughts for you there; upstream and collaboration are my main themes. Thank you. Thanks, Chris. And you also served as co-chair of the Committee on Publication Ethics, COPE, earlier as well. Amy, how about you? A little bit about your background and current priorities as well?
Absolutely. Thank you so much to the organizers for the kind invitation to participate in today's webinar. At the American Society for Microbiology, I have worked to centralize ASM's processes, policies, and communications relating to scientific ethics and misconduct. Today, ASM's ethics resources, policies, and procedures are centralized in order to provide greater uniformity in responding to ethics issues across the society.
So the ethics department works to maintain the integrity of the scientific record within ASM's 16 journals by developing, enforcing, and informing the community about our policies. We design and implement policies and educational modules to really help our authors conform to best practices in publishing ethics, and we engage with the editors-in-chief of our journals to investigate allegations and manage the ethics decision processes for those allegations.
And more recently at ASM, DEI efforts and activities have also become aligned within the ethics department. DEI and ethics are really powerful cultural components, especially in the area of STEM, and they are intricately linked. So integrating both ethics and DEI is really exciting at ASM, as it provides maximum impact and synergy for these programs. I'm happy to discuss further how we took more of an organizational approach.
We had really begun with, and were initially focused on, publishing ethics, but we have really expanded this area within the society as well. So thank you so much. Great, so let's start with a question that maybe each of you could address, and why don't we go in reverse order: we'll start with Amy, then Chris, then Chhavi. I think we've seen a lot of attention on ethical issues in publishing in recent years.
Is it your sense that ethical issues are increasing? Are there more complaints being reported, or more attention on these issues, or is it all of the above? Amy, what do you think? What's your sense of this? Great question. I would say that in recent history, yes, there has been an increase. I think part of that has to do with academics transitioning back into the labs, and with universities working through investigations at the institutions themselves, and there is an increased awareness of these situations in the broader scientific community as well.
And so I think those two factors are leading to an increase in awareness and in reporting as well. Great, thanks. How about you, Chris? Right, are ethical issues increasing? I know it's true that my team, the research integrity team at Springer Nature, is seeing a 15% year-on-year increase in the number of cases reported to it.
If you do the maths, it's about one case for every 300 published papers, and about one case for every 1,000 submitted papers. Maybe half of those result in retractions, so maybe there's one retraction per couple of thousand papers. But of course, the retracted papers, and even the cases, aren't papers from the year that you're publishing in; they're from years gone by.
So those are the numbers. I think it's absolutely true that journal teams are increasingly burdened by ethics problems, and I'm happy to be on a team, and leading a team, that can help to address that burden. And given our recent interest in, and increased awareness of, systematic manipulations like those that paper mills do, I think it's not hard to predict that the number of retractions sector-wide is going to see some sort of spike, maybe this year, certainly by next year. So that's my answer to your question. I think the answer is yes.
I think ethical issues are increasing, but the reporting of them is also increasing. I think it goes hand in hand. Chhavi, what's your perspective on this? Do you see these issues increasing? And you have particular expertise in AI, which of course is definitely going up. Yes, I want to say yes, but I want to say that with a grain of salt, because I don't have the quantitative and qualitative data that Chris has to back the claims that he made, which hold.
I do believe that there has been an increase in the number of cases, and I want to say it could partly be because of the exposure now, because we are communicating about misconduct so much more and people are a lot more vocal about it. There are so many platforms on which people can raise concerns. So there is a heightened awareness to highlight some of the misconduct, which could be as simple as someone issuing a correction and as severe as something leading to a retraction, which is astounding when Chris mentions one in 2,000 papers being retracted.
So I would say one of the reasons for the increased numbers could be heightened awareness. The other thing is from the tools and technology perspective. Oh, sorry, Chhavi, I'm really sorry; I didn't mean to say that one in 2,000 papers is retracted. I think that's probably not correct. Anyway, sorry to interrupt. OK, I guess we'll come back to that; sorry if I took it as that many cases. Anyhow, I stand corrected. So one of the other things, I think, is the use of emerging technology, because a lot of the technology that is being used as a tool for generating or analyzing data is not that open.
So currently the peer review system is limited in the way that such research can be vetted, and any misconduct in the use of these tools can have a really negative impact. I just posted in the chat a paper that recently came out in Science, and I haven't read it completely.
It came to my attention yesterday, but there was apparently a study in Nature in 2006, which has been heavily cited, and it put forward a paradigm-shifting model that people started trying to replicate. Billions in NIH funding went into that kind of research, and other people's research got molded in that direction. It now turns out that the data, which was the foundation of all the research happening for so many years, was fabricated.
So, you know, that just shakes the whole body of data in a whole field. The last thing I do want to mention, which we have not touched upon so far, is preprints. The pandemic has seen a huge increase in the number of preprint articles, for the obvious reason that people want to be able to share their science more openly. But a lot of the time, preprint data is not peer reviewed and vetted in the same way as it would be in the peer review processes of established journals.
But if this data gets cited, if these articles are cited, does that make them more mainstream? Does that make those hypotheses and findings more acceptable? That's another level of ethical concern that we had not seen in the past and that may have contributed to the increasing number of ethical concerns we are seeing. But I did have a question for you, Chris.
When you say that the cases are going up 15% year over year, which is a huge number, is it just the published literature? Are you even accounting for data that is being published on preprint servers? I can answer that in quite a straightforward way. These are just the cases that we see at Springer Nature: the cases that we receive in the research integrity team, from journal teams or other people sharing with us a concern that needs the advice and help of the great team that I lead.
Some of those are relatively easy to resolve; others aren't, and lead to lengthy pieces of work to figure out what to do. But yeah, they are in the majority about articles published in journals, and in the minority about content published in books or things that aren't journals.
But they almost never relate to preprints, because we don't publish preprints in my part of the company, although I do know that Research Square publishes preprints on its preprint server; they deal with their things separately from us. So, yeah, mostly I'm talking about journal cases here. OK, thank you for that piece of information. Recently, for our society journals, we have started soliciting some very timely content from preprint servers, but having said that, it still undergoes the peer review process that we have in place. Thanks so much, Chris. OK, so we touched on this next question a little bit already, but why don't we continue with it a little bit?
What are some of the new activities in terms of ethical breaches or concerns that you're seeing, for example around reproducibility, data sharing, AI? You mentioned preprints. What are some of the newer ethical breaches or concerns? Anyone that wants to answer this is fine.
There's a tumbleweed here, so I'm very happy to go first. But would either of you like to go first? No? Apparently not. OK, I'll go first. I've got a couple of things I could mention; I'm just going to pick one of them and then see whether I steal the thunder of Amy or Chhavi. So I'm just going to mention systematic manipulation.
I referred to paper mills earlier. I think we've become aware recently, as publishers, about the problems that paper mills create. I think paper mills have been operating for longer than we've been aware of them, but now we are, and we're on it. It's true that, through organizations like COPE, the Committee on Publication Ethics, we have for a bunch of years been thinking about how to manage honest but fundamental mistakes, through questionable, maybe gray-area practices, right up to the kind of misconduct and systematic manipulations that I'm talking about here with paper mills.
And so thinking about the systematic manipulations that paper mills conduct has already begun; COPE published something on this back in 2019. But action with technology, perhaps to help prevent paper mills from doing their work, is now in development via the STM Integrity Hub that I mentioned before as well. So maybe that's dodging the question; you asked what's new.
I think systematic manipulations from paper mills are new to us as publishers, and we're newly energized to address them. So that's my answer. OK, thanks. Amy, how about you? Within ASM, we expanded our open data policy a few years ago, and so we have been getting more readers bringing to our attention that authors may not have deposited the underlying data, or sometimes may not be willing to share plasmids or reagents.
So we have been seeing more of those types of concerns in recent years, and we are really working to be more proactive. When authors are now submitting manuscripts to us, they need to ensure that the data has been provided in an open data repository, and we are also adding that to the reviewers' checklist as well as the editors'.
And so we really do want all of the data that would be needed to reproduce the paper and the results to be accessible to the broader scientific community. Yeah, that's an issue that's come up a lot. How about you, Chhavi? Anything you've been seeing that's kind of new? It's sort of going to resonate with what has already been said, especially what Amy mentioned. So ASIP's journals are both basic science titles, and we're seeing a lot more data coming our way.
And that data is, you know, technology-based, like microarray data, and not even the reviewers can fully assess that kind of information. And then there are a lot of images that get stored in repositories. These are massive data sets that have to be hosted somewhere, and oftentimes the authors try to put them behind a paywall, which is not useful for our readers, because they then need additional resources to be able to access it.
So something that we have really been implementing is a request to all authors to make that data publicly available, either through a publicly available repository or by hosting it somewhere that is not behind a paywall, though not the journal site per se, and with longevity associated with it, so that the data remains available. And essentially we discourage publication of the data, or its inclusion in the manuscript, if they're unable to entertain that request.
So I think we've been doing pretty well in terms of making data available for others to be able to replicate going forward. One of the other things, since you were asking about new activities in terms of ethical concerns or breaches: I do want to highlight something that hasn't yet fully happened, or is happening as we speak. I put a link in the chat about an artificial intelligence language model, the generative pre-trained transformer, GPT-3. Recently, within a couple of hours, it did the literature search to synthesize a scientific paper, which got published, and the group doing this kind of research even checked with GPT-3.
They asked for its consent to publish the article, the way you are asked to check a box when you are submitting your article to a journal, and it immediately gave consent, right? So now it has gone into the peer review system. But are we there yet? Are we, in the editorial offices, positioned to peer review content that is not generated by humans? And what kind of peer review would that entail?
I want to say we may be looking at algorithmic accountability boards going forward, because we will have to vet the technology that went into arriving at an outcome or reporting a finding, right? That is currently missing. So I think these would be some of the emerging concerns that we may be talking about perhaps in a year or two. Chris put a link to that in the chat. In a recent article that I wrote for the Journal of Electronic Publishing, I used GPT-3 to write a couple of paragraphs, just as an experiment, and it was kind of interesting. So we do have a couple of questions from the audience that we'll get to right now, and we would like to have other questions from you, so please do put them in the Q&A.
Our first question is from Heather Spence. Thank you, Heather. This question is pretty specific. So if none of our panelists have the answers, perhaps we could just speak more broadly to this. But this is a very good question. In fact, somebody recently asked me this question as well. What are your thoughts on how to handle revisions to edited textbooks when there's a change in chapter authors between editions?
Is it OK to retain some text from the previous author when the new edition is being published with a new chapter author? If so, how much text is OK to recycle? In this case, the publisher holds a copyright for the text. Should previous edition chapter authors be named in the new edition if some of their text is being retained? Again, this is a pretty specific question, but as I mentioned, somebody had asked me a very similar question just a few months ago.
Do any of you work on textbooks? Want to tackle that one? I'm not on the publishing side for textbooks, but I am editing a book with another editor, and it's going to be a second edition of an existing one. So I would say it sort of goes the same way as the research, right? A bunch of people in the lab did the research, but then they move on to a different setting, and the new researchers will either finish up the work or do the work on the revisions to actually get it through the pipeline.
But that doesn't mean that you have to remove the credit for the people who originally submitted content or performed the research. I think the tricky part here is that ethics is not all black and white; that's how I used to see it, but it's all shades of gray. So I think it would be very hard to determine which part of the content came from which contributor.
So, depending on how much content you retain, if you can identify who particularly contributed it, then you credit that individual. But I think a general best practice would be for you to inform all the authors with a copy of what you're submitting, to let them know that they are listed as co-authors, and to give them the option to drop themselves out.
If they no longer feel like a credible contributor to that content, then they should opt to drop their name. I think that may be a good approach. I know, Chris, you had unmuted; you had something to say. I think Chhavi covered it all really well. I don't really have anything to add. Thank you.
Yeah, I think she did cover it well. And again, this was a separate case that somebody asked my opinion on a few months ago; in that particular case, one of the textbook authors had retired, really wasn't interested in contributing any more, and allowed it to happen. But I agree with you, I think that some kind of credit could be given. Some of you may know that NISO recently came out with the CRediT contributor roles taxonomy, which might be helpful for something like that.
So we have another question, from Amelia Arturo, and this one is for Amy. How do you ensure that authors' data have been deposited in a public repository upon manuscript submission? Did you need to hire additional staff to accommodate the incorporation of an open data policy within your journals? Excellent question, Amelia.
It is a great question. Thank you, Amelia. So we have a check where our authors need to certify that they have deposited their data and that they did, in fact, provide a data availability statement or paragraph. And upon submission, our editorial assistants also go in to verify that the authors did check the box, and go into the manuscript itself to make sure that the data availability statement or paragraph is listed there as well.
We also request that the editors confirm that the authors conformed to the open data policy. And prior to publication, we also make sure that the links in the data statement are all active, along with the accession numbers, DOIs, et cetera.
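As an aside for readers who want to automate the kind of pre-publication link check Amy describes, here is a minimal sketch in Python of what validating data availability links and DOIs might look like. The helper names, the HEAD-request approach, and the doi.org resolution step are illustrative assumptions, not a description of ASM's actual tooling.

# Minimal sketch: verify that data-availability links and bare DOIs resolve
# before publication. Hypothetical helpers, not ASM's actual workflow.
import urllib.request

def link_is_live(url, timeout=10):
    # Return True if the URL responds without an HTTP error.
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "link-checker/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

def check_data_availability(statement_links):
    # Check every link (or bare DOI, resolved via doi.org) listed in a data
    # availability statement and report which ones are reachable.
    results = {}
    for item in statement_links:
        url = item if item.startswith("http") else "https://doi.org/" + item
        results[item] = link_is_live(url)
    return results

# Example usage: anything that does not resolve is flagged for manual follow-up.
# The DOI below is a placeholder, not a real deposit.
report = check_data_availability(["10.5281/zenodo.123456",
                                  "https://www.ncbi.nlm.nih.gov/geo/"])
print([item for item, ok in report.items() if not ok])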
Right, does anybody else have a perspective on that question? I know it was for Amy, but you might also have a view. All right, well, let's go on. We'd like to get more questions from the audience, but let's continue with a few of our prepared questions. I think this is a good one that you could all give some perspective on.
Chris, you have been involved with COPE; maybe you could start by talking a little bit, for those in the audience who might not know, about the Committee on Publication Ethics and what COPE does. But the question is: what should a publisher do after exhausting COPE guidelines and not having a resolution? So I don't know if you want to tackle that.
And, you know, maybe just briefly talk a little bit about what COPE does and how that works. That's the easy part; the question is the hard part. Right, so COPE is actually not a coalition but the Committee on Publication Ethics. It's a roughly 25-year-old not-for-profit registered charity, and a membership organization.
It promotes research and publishing integrity by delivering guidance, processes, and standards, if you like, as well as a place to bring issues for discussion and to develop new thinking around emerging issues. It's a brilliant organization. I loved being a volunteer for COPE.
I volunteered for many years with them; I love it to bits. And I believe, I don't know whether the deadline has closed, but if you want to get involved, they were until very recently, and maybe still are, calling for self-nominations for trustees. So perhaps there's an opportunity for you if you want one. That's the easy bit.
Check out the website; it's a rich source of information. The second part of the question was: what should a publisher do after exhausting the guidelines and not having a resolution? I mean, an easy way to answer that question is: come on, give us an example. Without an example it's sort of hypothetical and impossible to say.
But if you wanted a rule of thumb: if there really isn't a way to resolve a concern with the guidance, which would include talking to all the involved parties (the authors, the people who raised the concern, maybe the journal editors) and would involve talking with the university or the place of work where the research was done, then you're going to have had to go quite a long way to actually exhaust COPE guidelines.
The rule of thumb I always fall back on is: do the least harm. It's easy to say, and often hard to work out what doing the least harm is. Sometimes not doing anything is doing the least harm, and you might not be happy with that, and it might conflict with your own organization's principles. So yeah, that's what I do.
Really exhaust the guidelines, talk with people rather than emailing; really try to work with people to find a resolution, and then, if all else fails, pick a solution that you feel is the least harmful for everybody, bearing in mind that at the end of the day most people are going to be dissatisfied. The original authors are going to be dissatisfied.
The person with the concern is going to be dissatisfied. You're probably going to be dissatisfied, but at least you've got to the end. So that's what I'd say. You sound kind of like me, Chris: cynically optimistic, as I describe myself. Anybody else want to tackle that question? I think that was a great answer, Chris.
I don't know if anybody else has experience with these kinds of issues with COPE. Well, why don't we leave it there; he covered it all, exactly. So we have another question, and this one is from Leslie McIntosh: thinking about the purpose of, and challenges with, special issues, an example being a space where a guest editor can more easily get publications through without optimal, rigorous peer review.
How are you balancing the benefits of having a topic of concentration with some manipulations of the process? So, interesting question. I mean, in my experience, special issues are not necessarily lacking peer review, but I guess that's part of the question: are special issues treated differently?
And if so, how can you balance those benefits? Go ahead, Amy. At ASM, our special issues undergo the same peer review process. If they are sponsored by an industry stakeholder, we do acknowledge that, and we do note that potential conflict of interest on the website itself.
But I don't necessarily think, within our portfolio, that they undergo any less scrutiny through the process. I could add a little bit, and I feel the pain of the person asking; Leslie, I feel your pain when you talk about the purpose and the challenges, because at my organization I'm the one responsible for seeing these thematic issues through the process.
We just dropped one because of some of the challenges, and we have four in the works currently. With that background, I agree with Amy that the content undergoes the same level of peer review, and in some cases it's even a little bit more extensive, because we just want to cover our bases and be sure that we are not being biased. So where we usually collect two or three reviews, oftentimes the invited content has three reviewers plus comments from an associate editor, just to make sure that we are not being biased toward invited content.
Having said that, since it's invited content, the possibility of outright rejecting something which may not meet the bar is somewhat lowered. So we work very closely with all the contributors to help them enhance the quality and rigor of the content they're submitting to a special issue. Having said that, in one of the thematic issues that I was seeing through, one of the guest editors, who is a high-profile leader in the field, had a contribution in the thematic issue, and we ran it through our plagiarism check.
It did not turn out to be all right, but we don't essentially attribute that to the author; it's possible that a high-profile leader is busy and one of the junior members perhaps tried to collate data for that particular review. I still remember it was painful, because we had the rest of the content all lined up for that thematic issue to go out concurrently in our print issue.
But then we worked with them rigorously, I want to say for a month and a half, to fine-tune the content. They were restating some of their own findings, because it was a review article, but we ensured that they were not just highlighting what they had done; there was an extensive mention of the other available literature, so they were placing their findings in the perspective of the field at large.
So these things do happen, and I think it's like second-guessing ourselves, but we learned to vet invited content, or content for thematic issues, with a lot more rigor than we would any other content. Chris, how about you? In your office of research integrity, do special issues present any different issues than other issues?
Yeah, so, Leslie, thanks for the question, and Chhavi and Amy, thanks for your responses. I testified before Congress last week and told a bit of a story then, which I'll repeat here. It's about identity theft and fraud. The legitimate editor-in-chief of a journal was approached by somebody who appeared, by their email address and by the way they approached the editor-in-chief, to be a legitimate researcher.
He appointed that person as the guest editor of a guest-edited issue. And then, because the editor-in-chief thought this person was who they were claiming to be, he trusted them and let them do what they wanted to do, which was to publish a lot of articles and to check the boxes for peer review, but basically to invite fake peer reviewers to do a fake peer review, or probably even do it themselves.
It was a paper mill; it wasn't a legitimate researcher at all. And it wasn't until a large number of articles had been published, and one article in particular was out of scope, that the editor-in-chief went: hold on a minute, something weird is going on here. I'm going to write to the guest editor and find out what's going on.
And so he wrote to this person at their legitimate email address, the legitimate researcher who he thought was the guest editor for the guest-edited issue. And this person replied and said: hold on a minute, this has nothing to do with me; you've been deceived here. And that's when the alarm bells rang. So I'm with the checks that you all described, but I would have one note of caution that I'd share with everybody.
And that is to be super careful about your identity validation, so that when you're appointing a guest editor to a guest-edited issue, you are really sure that they are who they say they are. And that's not an impossible thing to do: asking them to provide institutional email addresses, for example, or actually talking with them, perhaps even on video, right before they embark upon being the guest editor of the issue.
That will go a reasonable way toward addressing those concerns, but there are other things too, of course. So just a word of caution there, that people aren't always who they say they are, and they can take you for a ride. Well, that's an amazing story, because I hadn't heard that. It sounds like a case of, like, spear phishing or something. Yeah, well, thanks.
So thank you, Leslie, for that question. And we have another question from Amelia. Plagiarism checks can be so important. How often do you all at your respective institutions do plagiarism checks on manuscripts? Are all manuscripts checked by default or are they checked only upon being flagged? Excellent question again.
I can kick it off. So at ASM, for our review titles, all of the manuscripts are screened using CrossCheck by iThenticate at the initial submission step, and for our research-based titles, all of the manuscripts are run through CrossCheck at the revision stage. We had a situation a few years back that led to us really going ahead and having this applied consistently across all of our journals at the revision stage, because we actually had a duplicated manuscript come in to one of our journals, and we had sent it out for review.
And it was a reviewer who noted that it was almost identical to a recent paper by that same group. So it now serves as a system of checks and balances, and we do it uniformly across our journals. Chhavi, do you want to add to that? Yeah, we follow a similar protocol as well, and I wonder if we started doing that after the said paper that I talked about before; that incident happened very late in the game.
So now it's integrated in our peer review system. We use Aries' Editorial Manager platform, and there is a program, iThenticate, that runs through the manuscript. One of the other reasons for doing that was that we were getting some articles from certain geographies where people were copy-pasting, not really aware of plagiarism being something that is taken seriously here, or that should be taken seriously everywhere, but is definitely taken seriously in North American journals.
So the idea was to facilitate them in writing their content, so that when it goes to a reviewer, as Amy mentioned, the reviewer is not flagging it and it's not coming back or getting rejected; it's actually getting peer reviewed for the content being presented, which is novel in that particular manuscript. The process is, I believe, integrated into the system right from the get-go. So even the very first version of the submission, even before revision, goes through that, and I believe the authors receive a report; if it's over a particular percentage, they're given an opportunity to revise and resubmit before it even undergoes the peer review process. How about you, Chris? Sure. So we use plagiarism detection, the systems you've already mentioned, routinely. I don't think the challenge is the technology; I think the challenge is what to do with what the technology tells you, like with all of these things, right?
And we've also added to that technology: we've worked out how to identify the papers that are OK, that score statistically well enough to be a green flag, so that's no longer something a person has to think about. We've also identified the scores that are required for papers to be a red flag.
And so those, again, are papers a person doesn't particularly need to think about, leaving in the middle the proportion of papers where human judgment must be applied. Everyone's got limited resources, right, but this focuses the attention of our limited resources on those papers that really do need attention, and need a conversation with the author, or escalation to the editor-in-chief, or some sort of human judgment in order to help them proceed.
So I think that's pretty novel, actually, and I think it's beneficial. So that's what I'd add. And I do want to underscore what Chhavi said about author education. I know this from when I was at George Mason University: for new people coming from certain countries, just educating them on what plagiarism is, and that it is harmful, matters, because it's not understood in all cultures what it is or how it works.
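As an aside, the score-banding triage Chris describes above could be sketched roughly as in the Python below. The 20% and 50% cut-offs are invented placeholders for illustration, not Springer Nature's actual thresholds, and real systems weigh many more signals than a single similarity score.

# Illustrative triage of similarity-check scores into three bands.
# Thresholds are placeholders; only the middle band goes to a human.
from enum import Enum

class Triage(Enum):
    GREEN = "pass automatically"
    REVIEW = "needs human judgment"
    RED = "flag automatically"

def triage_similarity(score, green_below=20.0, red_at_or_above=50.0):
    # Map an overall similarity percentage to one of three bands.
    if score < green_below:
        return Triage.GREEN
    if score >= red_at_or_above:
        return Triage.RED
    return Triage.REVIEW

# Example usage: route only the middle band to an editor or ethics specialist.
for s in (8.0, 34.0, 61.0):
    print(s, triage_similarity(s).value)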
Let's see, we did have a question in the chat, and Chris did a little bit of answering it in the chat already. This is from Mike Rossner: could the panelists please comment on the current state of algorithms for detecting image manipulation? I know this is an issue that's really growing in importance as well as in its challenges.
Interestingly, you asked right at the start about new concerns, and there is one to report here. We know about western blots, right, and Mike Rossner pretty much invented looking at image manipulations way back, so western blots are familiar territory. But a new thing, at least for me, from about a week ago, was crystallographic structures.
I hadn't really considered those to be a target for manipulation, but apparently they are. And one of the interesting things about software is that it has to keep up. What I do know about the commercially available software solutions out there is that they are quite good at some things but not good at all things, and that's fine, right, as long as you know what your software is designed to be very good at; then you can use it appropriately.
I also know that the STM Association, through the work with its Integrity Hub, has asked the providers of commercial and non-commercial tools that do this to share information with the hub through a sort of Q&A process, to identify what the defining features of those tools are, to enable comparison, and then basically to help publishers choose the right tool for the right job.
So, I mean, that's a pretty good step too. Thanks. We do have a couple of other questions from the audience, and these are also related to our previous questions on plagiarism. Candice has one about what I might call self-plagiarism, or as she calls it, text recycling. What do you recommend for publishing offices creating workflows for dealing with text recycling? Some text recycling is appropriate, because researchers build on existing work; other times authors are recycling the same data and words, which is not appropriate. What should editorial offices do to distinguish between the two? Candice mentions that they receive over 5,000 manuscripts per year, which is a lot. Anybody want to tackle that one?
I can briefly talk about the scientific research part, because again, our journals are scientific, research-based publications. So yes, some text recycling is all right, because when you're writing materials and methods there are only so many ways of writing that: you're using the same tools, they're from the same or similar manufacturers, and for the processes you have to follow those steps.
So there's no really novel way of writing that content, and we do allow text recycling to a certain extent in the materials and methods section of our articles. But I would just say, no matter if you're writing something for the tenth time, there are many different ways of putting forward your thoughts and comments. So I would refrain from text recycling in any other parts: definitely not in the results, because if it's already published, it's not novel unless you're citing your source, and not in the discussion, because the field is moving forward.
So whatever you're proposing for your next publication has to change somewhat in the context of the field at large. And my last thing is, I don't know, Candice, what kind of journals you work with, but if you're getting 5,000 manuscripts a year, we are open to a cascade journal arrangement; you can send some of the publications our way. Thank you.
Go ahead, Amy. Oh, I was just going to follow up on what Chhavi had said, and a point that Chris had made earlier: we have a threshold of about 30% for our research-based titles using CrossCheck, or iThenticate, and then one of our ethics specialists will go in and look to see where the overlap is. And similar to what Chhavi was saying, if it is in the materials and methods, then we tend to send it on to be assigned to the editor.
OK, I think maybe we have time for one last question, and I'm going to combine another question from our star questioner, Amelia Arturo, with one from an anonymous attendee. These are both about responding to plagiarism. So, who takes on the task of responding to the plagiarism, of evaluating whether the software has appropriately flagged the situation, and of advising the authors what to do?
The anonymous attendee asks a similar question: how have you dealt with editors who receive these reports and don't use them, or rely just on the score? So, combining those two, and this will be our final question. I would quickly mention that it's the editorial office, the editorial assistant, whoever is managing the decision letter.
That person would be in charge of flagging it and informing the authors. In terms of whether the authors are complying just to minimize the score so it passes through the system, versus actually making some changes, we haven't really looked at that. But our peer review system is pretty robust, and oftentimes the manuscripts go through two rounds of revision, where the content gets massaged a lot.
So I would hope that we are eliminating any intentional plagiarism at that point, for sure prior to publication. Any other thoughts, Amy or Chris? Yeah, we have our ethics specialists review the check scores, and they will follow up with the authors directly, requesting that they rework the potential areas of concern.
And in the more severe cases, we do provide resources, with links to the CRI website as well as on the journals' website, just so that authors are educated about potential self-plagiarism and plagiarism concerns and how they really do impact the broader scientific community. I'll just take the latter of the two questions very, very briefly: if an editor-in-chief or a member of the journal team is not doing the things they're being asked to do, i.e., use a plagiarism check report, then that needs to be dealt with.
Right, and maybe something's going wrong in their workload, or maybe in their training, maybe in their personal life, or maybe they just aren't able to cope with it; I don't know. But it is something that a good manager would help that editor with. So yes, address it, I would say. Great, well, I want to thank our panelists.
I know I've learned a lot, and I want to thank everybody for your questions and for attending. David, would you like to say any closing words? Yes, thank you, John. I want to thank you and the panel for a fantastic discussion, and thanks to everybody who participated and those who sent in questions.
We want to thank, again, our 2022 Education Sponsors: J&J Editorial, OpenAthens, Silverchair, 67 Bricks, and Taylor & Francis F1000. Evaluation requests will be sent by email, and we encourage you to provide feedback. Please visit the SSP website for information on upcoming programs. The next Ask the Experts webinar will be on October 5th, and it will focus on AI and publishing; we hope you'll be able to join us.
Our New Directions Seminar is September 21st to 22nd in Washington, DC, and online; the deadline for early bird registration is August 19th. This discussion was recorded, and registrants will receive a link when it is posted on the SSP website. The session is now concluded. Thanks, everyone. Take care.
Thank you. Thanks for doing this. See you then. Bye-bye.