Name:
The Scholarly Kitchen: Staffing Up For Research Integrity
Description:
The Scholarly Kitchen: Staffing Up For Research Integrity
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/e57561df-a949-49d8-8779-5b03a470e28e/thumbnails/e57561df-a949-49d8-8779-5b03a470e28e.png
Duration:
T00H59M20S
Embed URL:
https://stream.cadmore.media/player/e57561df-a949-49d8-8779-5b03a470e28e
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/e57561df-a949-49d8-8779-5b03a470e28e/GMT20240417-150032_Recording_1920x1080.mp4?sv=2019-02-02&sr=c&sig=nc4m6ftU%2BzU1kCmymx8P4VKKn29VCGl7N%2BPSRh85FXE%3D&st=2025-07-02T14%3A53%3A59Z&se=2025-07-02T16%3A58%3A59Z&sp=r
Upload Date:
2024-08-07T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Welcome, everyone. We'll just wait for folks to assemble, and then we'll get underway. Welcome, we'll be getting underway in a moment.
OK, I think we're good to get started. Thank you and welcome to today's SSP webinar, The Scholarly Kitchen: Staffing Up for Research Integrity. Before we start, I want to thank our 2024 education sponsors, Access Innovations, OpenAthens, and Silverchair. We are grateful for your support. My name is Lori Carlin, CEO at Delta Think and SSP Education Committee webinar working group chair.
Before we get started, I just have a few housekeeping items to review. The attendee microphones have been muted automatically, so please use the Q&A feature in Zoom to enter questions for the moderator and panelists. You can also use the chat feature to communicate directly with each other and the organizers. Closed captions have been enabled. You can view captions by selecting the More option on your screen and choosing Show Captions.
This one-hour session will be recorded and available to registrants following today's event. Registered attendees will be sent an email when the recording is available. A quick note on SSP's code of conduct and today's meeting. We are committed to diversity, equity and providing an inclusive meeting environment that fosters open dialogue and the free expression of ideas, free of harassment, discrimination and hostile conduct.
We ask all participants, whether speaking or in chat, to consider and debate relevant viewpoints in an orderly, respectful and fair manner. We would also like to promote the SSP Generations Fund to help us reach our goal of raising $500,000 to ensure the future of our fellowship and mentoring programs and DEI initiatives. Scan the QR code for more information and to donate. It is now my pleasure to introduce our moderator today. Lisa Janicke Hinchliffe is a professor and the coordinator for research professional development in the University Library at the University of Illinois at Urbana-Champaign.
She is also well known for her writing as a chef in The Scholarly Kitchen. Lisa, over to you. Thank you, Lori. And thank you to everyone who is joining us here today. I'm very excited for today's panel. Research integrity is a broader field than the headlines might have us believe when we read all about the sleuths that are working outside of publishing companies and outside of the employment structures of our publishers.
And while the sleuths themselves are, of course, providing a very important service and helping us highlight that there is a growing area of concern, the sleuths themselves are not the ones who are responsible for addressing this problem within the publishing workflow. Our audience also knows that research integrity is a challenge. Every single one of us is facing this in our publishing workflows, our communications with authors, and our attempts to preserve and keep integrity in the scholarly record.
So what are publishers doing in order to achieve a solution to this problem, or at least as much of a solution as possible? That's the question on my mind, and on the minds of scholarly publishers. It's an area of growing concern. But what is effective? How do industry players approach this function? And what can we learn from each other by knowing about how different publishers are staffing up for this important area?
So I've gathered together today a wonderful panel of four participants. These four participants have responsibilities in their portfolios for research integrity, as well as many other things, I suspect. They're going to be sharing with you directly their name, their job, and how they got into this area of work as we get started. So I'm going to go ahead and start with Kim.
Hi, thanks, Lisa, and thanks, everybody, for the invitation. My name is Kim Eggleton. I'm the head of peer review and research integrity at IOP Publishing. We're a society-based publisher specializing in physics. We do journals, books, conference proceedings, the usual.
Well, let me start with how I got into it: a meeting in Pizza Hut. That's how the long story starts. The short story: I've been interested in publishing, not necessarily academic publishing, from a very young age. I did a publishing undergraduate degree and a master's. Like, cut me and I bleed books. I'm publishing through and through.
I do not understand physics, so for everyone at IOP, apologies. And research integrity has always been a little bit a part of most of my jobs in publishing as I've moved through various editorial roles. It became one of my key areas of focus in 2019 when I returned from maternity leave and my company was amazingly willing to take me on in a remote capacity part time.
Because your life turns upside down when you have children. And research integrity was something that we needed a kind of centralized, considered approach to at the time. So since then it's been a massive part of my role, and it's now a six-person full-time team at IOP compared to the half a person it was when I started on it in 2019. So the scale is big.
Wow, I didn't realize how much this has grown and how quickly for your organization. Amanda, let me turn it over to you next. Sure. So my name is Amanda Schulz and I'm a publishing ethics specialist at IEEE. We are a society publisher based in electrical and electronic engineering, and, like Kim said, you know, we do all of it: we have periodicals, we have the journals, we have the conference proceedings.
You know, we publish the gamut. So I have only been here for about a year, a little over a year now. I worked at AIP Publishing prior to this as an in-house associate editor working on their applied physics journals. I had this opportunity to come over here about a year ago and was really interested in doing publishing ethics. It's such a growing field and it was just, you know, a really great opportunity to come here. And, similar again to Kim's story, this used to be just a one or two person operation up until very recently, about two years ago in fact. Our team specifically has seven people on it now.
And then there are also various societies within IEEE who also do ethics within their society. So it's growing rapidly here. Wow, very quick growth across our organizations here. And you? Hey, everyone. My name is Adya Misra and I work at Sage Publishing. I'm the senior research integrity and inclusion manager.
To tell you a little bit about how I got started in this: I was trained as an experimental geneticist, so very far, very far away from publishing, but still very closely intertwined. I've always been interested in research and in publishing. Following ethical principles is such a core part of being a researcher, but, as many of us researchers do, I encountered very difficult situations around ethics, especially around academic publishing, every day.
So I've been interested in this area for a long time. I've worked as a journal editor at other organizations, I've worked in publication-ethics-specific roles, and I came to Sage towards the tail end of 2021 as their first research integrity manager. And from then to now, we've got four full-time editors working on centralizing all of our publication ethics activity across the portfolio. So our remit really sits within Sage Journals.
Hopefully everybody is already familiar with Sage, but we largely focus on the social sciences. We have some societies that partner with us, and we've got a lot of STEM journals, so we see quite a lot of different things compared to other publishers. And yeah, it's a hugely varied role at Sage and it's growing very, very quickly.
Great. And last but not least, Yael. Thanks, Lisa, and thanks, everybody. It's really great to be here. So I'm Yael Fitzpatrick. I'm the editorial ethics manager for PNAS, the Proceedings of the National Academy of Sciences, here in the US. I got into scholarly publishing many years ago after initially planning on being a scientist myself.
Life sometimes takes strange turns. I wish I could say that it had happened in a Pizza Hut. I want to figure out a way to have that be part of my story too. But I spent many years actually in scholarly communications on the design and art direction side of things. So a bit different than a lot of people in this group. But I was the art director at the journal Science for many years, and part of my role there included working on assessing image concerns in research figures.
And that was something that I just had kind of a natural knack for. And I had additional specialized training in image forensics, and it was just really satisfying to work in concert with editors who are also scientists, where I would look at something from the perspective of just the pure visual, and they would look at it from the perspective of: is what we're seeing here potentially changing the science?
And it just became a very interesting part of my work. And from there I've kind of moved on and had research integrity become, in more general terms, part of my day to day. So that's my nutshell story. Great, this is really fantastic. It's really interesting, honestly, how many of you are so new in your roles, were the first person, and now there are many more, in a very short period of time.
So I think the staffing up part of the title of this webinar is really accurate. So let's dig a little bit deeper into the actual work. And, you know, here it could be your personal work, but probably more the work of your team if you have a team. So, you know, what is the work, and where do you report organizationally? Sort of, where does this sit within the overall function? I know you spoke a little bit about this sitting in the journals, but let's start with you, and maybe you can say a little bit more about that.
Yeah, sure. So I mentioned already that I lead our research integrity team. The team sits within the broader editorial division, and that's why we largely have remit over Sage journals. I think, in a nutshell, we effectively resolve the complaints that are raised to us on the content we publish. That's the fundamental aim of the team: to resolve the concerns that are brought to us in a timely, effective manner, working with our journal editors and various stakeholders, you know, the sleuths that you mentioned; working with them effectively is a hugely important part of this role.
But also part of what we're trying to do is obviously detect some of these things earlier to be a bit more proactive, prevent them from reaching publication or reaching the, you know, the submissions desk for a journal editor. So all those sorts of things are hugely important to my team. We obviously work cross-functionally to achieve a lot of our aims, so we have to work with our peer review operations team.
We have to work with technology teams, contracts and legal teams. A really important part of our role as research integrity specialists is to work with different specialisms to really bring this to life, because I don't think we can achieve this on our own. And a hugely important role is played by our journal editors, whom we work with very closely, because they are really the stewards of our research integrity.
So we support them, we advise them, we guide them, and we create effective policies that can really help our authors understand our position on these principles, and overarching guidance so that we can really solve some of the problems that we're seeing. Great, great. I suspect we'll hear some similarities, but maybe also some differences as we go across the different organizations.
So, Amanda, you want to fill in a little bit more here. Sure so as I mentioned before, our specific team has seven people, which includes our director of publishing ethics and conduct, Luigi longobardi, which I believe many of. And so report to Luigi and Luigi and the rest of the team are situated within our publications unit at Tripoli and also, as I mentioned before. So, you know, we are is made up of a bunch of societies and some of the societies again, have their own designated teams for working on their quality of misconduct or publication quality and misconduct.
Like the conferences unit has one of those, as well as some of the larger societies like the Computer Society. Ultimately, everything comes back to us because we are for IEEE as a whole. Our team is also responsible for working with our volunteers to update our policies regarding author misconduct. We have the operations manual, which is part of our pubs unit, and it is a very, very large document.
And part of that, you know, includes how do we handle allegations of misconduct, how can you report allegations of misconduct, what is the workflow when we do have an allegation of misconduct, who do we inform? It's very, very detailed. You know, we work with engineers, so of course it's detailed to the highest extent. And, you know, our editors on each of our journals are ultimately kind of our first line of defense when we do see things beyond plagiarism happen.
You know, plagiarism is pretty easy to detect automatically. But when we start to see these bigger issues, you know, the editors are a really great help in identifying that, as well as our readers, who report items to our team. We also have an ethics hotline if you ever want to report something anonymously to us. That's a lot of infrastructure in addition to the work itself. Yeah, so that's really interesting. Yael,
how about in the context you're working in? So, I'm officially a team of one at PNAS, working within the editorial office. It's a relatively new role at PNAS. I've been in this role for three years now, and this role was newly created. Most of the work was definitely done previously, just by various different people, with the exception of image forensics, which is the one kind of new part of it that I bring to the table.
But even though I'm like officially a team of one, I'm definitely not working on my own here. I work in concert with a lot of senior staff at PNAS and then having the resources of the entire membership at the National Academy of Sciences is really invaluable for us. And so turning to a lot of members there for expertise about the subject matter and sometimes beyond the Academy as well.
So I'm officially a team of one, but depending on how you slice it, I'm a team of thousands. Right, right. I can't imagine anyone is doing this work in isolation in their organizations. And obviously, the size of one's publishing output is also going to have some scaling effect on this. So, Kim, last but not least. So we, like pretty much everyone else, sit within the publishing department.
We kind of sit alongside what we call publishing operations. We do a lot of internally managed peer review within IOP, so we work very, very closely with our peer review teams and our peer review editors, but of course colleagues in production and what we would call publishing development as well. I think the only thing I could add to what's already been said, which is as true for us as it is for everyone else, is that testing and trying out new technology, and trying to help develop new technology, is an increasing part of the role, and we'll probably come on to talk about it later.
But some of the real innovation within our company is happening within this team. So we're often the first to try out new bits of software and, you know, and try and make it work with the existing systems that we've already got. We've got a couple of fantastically motivated people who are interested in AI. So we're kind of telling them the problems that we have in research integrity, and they're like I can think of a fancy computer way to fix that, or at least to identify it earlier in the process.
So what used to be quite a reactive, responsive unit is now turning into quite a proactive, technically minded unit, which, you know, is for really good reason. And I think it speaks to what you said earlier about trying to do that preventative work. That's where most of us, I think, really want to be: stopping any of this stuff getting published, or even getting into the peer review cycle, as much as possible.
And it feels like we're finally getting the opportunity to do some of that work now. Yeah, I mean, obviously, the further it travels into the workflow, the more resources have been expended on something that ultimately needs to be punted out. Obviously worse if it gets all the way through to publication, but even in peer review you're using reviewer time, et cetera.
I'm going to go a little off script because I think there's a good question in the chat that is really relevant to this right now. So you all can decide who wants to take it. But the question is: does your team advise editors? Do you make the final call? Who makes the call when you have to resolve, like, OK, this piece looks problematic?
So are you all the ones saying, yeah, it's out? Is that the editors? Does it depend? Like, what's your role there? I don't mind. Amanda looks ready; she unmuted first. So, the giant document, the operations manual that I mentioned,
allows us to point to policy when we do have issues. So it's very easy to say the operations manual says that in this situation, you do this. We also have, because IEEE is a volunteer-led organization and our volunteers are extremely dedicated, an ad hoc committee that they form whenever there's a complaint or an issue; everything has to have one. So you have multiple people agreeing on an answer.
So it's not just a single person or a single editor saying, like, no, this can be published. Obviously sometimes things slip through the cracks, but we always have a team of people who look at it and a team of people making a decision on whether it's appropriate for publishing. And I think that probably explains why it takes time. Yeah, and our team also provides recommendations to the editors if they ever get stuck, because we are ultimately the knowledgeable ones about this operations manual.
So if they need help, we're here to provide suggestions to them of where to look or what a good corrective action may be for a person. OK, Kim, do you have anything to add to that? I guess it's different in that we don't have the kind of committee volunteer structure that I know IEEE does. And again, it's different because we have that internally managed peer review.
So a lot of the time it will ideally be a shared decision that the research integrity officer and the editor agree on. And I should say, to within an inch of their lives, our peer review editors are trained in research integrity; they cannot get away from us. So a lot of the time there is mutual agreement. And much like somebody else said, you know, the policies are really clear.
Our editorial policy is very clear. We have, as much as possible, best practice and so on. 144 pages, my God. Not all of that is for misconduct; that's just section eight. So, yeah, as much as possible it's by mutual agreement.
And we'll always take the time to have a discussion about why we believe in a particular action. But if, you know, there was a fight in the car park, then ultimately I think the research integrity team would have the final say. But it's never been put to the test. So, yeah. I mean, I think what you're saying speaks to a collaborative attempt to be cohesive, and it's good that you haven't had a moment where there's been such a divisive perspective that somebody has to have the final say, and instead you're able to work these through to the point where there's a shared understanding of what's trying to be achieved here.
And that then guides it. So yes, I didn't mean to say it was 144 pages on research integrity, but, you know, sorry, it's the librarian in me: people mention a document and I'm like, I can get this for you. It's instinctual; I can't help it. OK, we've got a few other questions coming in, but I know we're going to get to some of them as we talk, so I'll just bring them in.
So let's go back here. So, I mean, we are hearing so much about paper mills these days, right? The complete fake paper, there's no research there, et cetera. We're hearing certainly a lot about image manipulation, and we certainly have sleuths working in that area. Elisabeth Bik, for example, who gave a great keynote at the SSP conference last year.
But research integrity is not just all paper mills and image manipulation. In fact, I suspect it's mostly not. Maybe I could be wrong, but it strikes me that research integrity is a much broader thing than what we're seeing in the headlines. So I'm wondering if I can ask a few of you to say, like, OK, what's an example of something else you end up dealing with?
And Yael, I'll get you to take the first word this time. So a couple of categories of research integrity that we deal with that don't make the paper, so to speak, are authorship disputes and duplicate submissions. Authorship disputes come up a lot, sometimes pre-publication, sometimes post-publication. It can be anything from author order to someone thinking they should be listed as author number one, or anything. Something that's more common is we'll get an email from somebody after an article has been published and they'll say, hey, I should have been listed as an author here, and here's all this evidence.
Can you please investigate? One of the things I want to mention, just because I'd love for there to be more awareness broadly, is that that's something editorial offices don't have jurisdiction over. So PNAS, like most journals, does not adjudicate authorship disputes. And the easiest way for me to explain why is that it's just not in our purview, even when somebody sends, like, proof.
We don't have the jurisdiction to go and subpoena lab books, you know, to really get all of the evidence that is necessary to make these calls. So when there's an authorship dispute, we say, you know, you need to work that out on your own. Here are some resources for where to turn if you can't work it out on your own. And as a journal, we will then respond to whatever the final decision is.
But we will not make that decision. But one of the other things, kind of a different not-in-the-news thing, is duplicate submissions. And one of the reasons that I love talking about this these days is that so much of our work involves negative things. You know, we're talking about misconduct, we're talking about integrity issues. And so with duplicate submissions, it's been such a heartening example for me over the last few years of a positive thing in our industry.
So the nutshell of what a duplicate submission is: pre-publication, an author might submit the same manuscript to more than one journal. They're kind of like throwing their hat into multiple rings, and that is disallowed by most journals. And the reason it's so difficult to catch when this happens is that it's pre-publication. So I think most of us use iThenticate or other software to check for plagiarism, to see if a manuscript is already out there in the literature.
But when it's pre-publication, there isn't something that's already out there for the software to check against. So the only way that I've ever really seen duplicate submissions, sorry, not authorship disputes, flagged is if a reviewer has been asked to review the same manuscript for two different publishers because they're a subject matter expert.
And then they say wait, hey, this is weird. And they'll contact the journals and say, I think you guys might be looking at the same thing. And the thing that's so heartening about it, too, on top of that is that it seems like there's been a lot of really willing cooperation between journals now to investigate and address that. There is such a deep, steeped and necessary level of confidentiality in our work, but this is one case where it's been really great to see journals saying, yeah, we're willing to confidentially share this information with you so that we can sort it out.
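To make that detection gap concrete: the moment two cooperating journals can compare notes, duplicate submissions become a text-similarity problem rather than a lucky-reviewer problem. Below is a minimal sketch, in Python, of that idea, comparing an incoming manuscript against a shared index of manuscripts under review elsewhere using TF-IDF cosine similarity. The manuscript IDs, texts, and threshold are invented for illustration; this is not a description of how the STM Integrity Hub or any publisher on the panel actually implements screening.

```python
# Hypothetical sketch: flag possible duplicate submissions by comparing an
# incoming manuscript against manuscripts currently under review elsewhere.
# Illustration only; real cross-publisher services are not necessarily built this way.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Assumed shared index of under-review titles/abstracts (confidential in practice).
under_review = {
    "JRNL-A-1041": "Graphene-based sensor arrays for low-cost gas detection in urban air.",
    "JRNL-B-0779": "Machine learning models for predicting protein folding dynamics.",
}

def screen_submission(new_text, threshold=0.6):
    """Return (manuscript_id, similarity) pairs at or above the screening threshold."""
    ids = list(under_review)
    corpus = [under_review[i] for i in ids] + [new_text]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    n = len(ids)
    # Similarity of the new submission (last row) against every indexed manuscript.
    scores = cosine_similarity(tfidf[n], tfidf[:n]).ravel()
    return [(ids[i], round(float(s), 2)) for i, s in enumerate(scores) if s >= threshold]

if __name__ == "__main__":
    incoming = "Low-cost gas detection in urban air with graphene-based sensor arrays."
    print(screen_submission(incoming))  # matches to hand to a human for review
```

In practice the hard part is not the similarity math but the confidential sharing of that index across publishers, which is exactly the cooperation described above.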
And that's just, like, you know, that warms my heart. That's great, really interesting: the authorship piece, the differences between kinds of ethical misconduct or concerns around misconduct. You know, publishers adjudicate some of it, but there are other players here who might adjudicate other parts of it. And as Ben mentioned in the chat, the STM Integrity Hub is obviously working on this issue of the duplicate submission.
So, Amanda, is there anything that comes to mind that you deal with, you know, in addition to these kinds of things we've been discussing? So, exactly what Yael said. Yeah, sorry, stumbling. You know, we see those same problems, and again, authorship disputes, exactly what you said, seconding that. And some of the other things that we also see:
we have additional problems with reviewers; we've seen that they've reviewed for another journal and then they try to submit the work to us, claiming it as their own. We see that the AI issues are obviously, you know, growing. Another thing we see a lot of is citation stacking or manipulation from editors, reviewers and authors, so we see it from all angles. And, you know, the standard plagiarism and such are probably our most prevalent issues.
So do you feel there's anything that comes up in the social sciences context in particular? You're representing that area in a certain way, with the focus that Sage has in this area. Yeah, I think what we see are obviously all of the issues that we've talked about so far. There's this additional complexity that this field, research integrity, really sits on very much STEM-focused principles.
So I think someone said at the start that plagiarism is really easy to detect. Well, it's not in the social sciences, sadly, because people don't directly copy words; they copy ideas, and then it becomes really challenging for us, very similar to an authorship dispute, to adjudicate who came up with this concept first. And often it's somebody who's met someone else at a conference.
Somebody acted in bad faith and their collaboration just disappeared, and now they're very upset. Things like that I find quite difficult for publishers to really disentangle, investigate or resolve. Some of the other things that we also see are politicization of content and really looking at things like academic freedom, hinging on things like hidden conflicts of interest or hidden agendas.
We're starting to see this also. I mean, we've always seen this in medicine, really in health care, where people have been affiliated with commercial organizations and furthering their agendas, commercial agendas via research. But it's very much alive in the social sciences. But again, very difficult for us to investigate. We don't really have any guiding principles on how we could even investigate if an editor was using a journal as a, I don't know, a tool to further their political interests.
I think I saw something recently where certain journal editors have been asked to come to the US Supreme Court to prove that they weren't being influenced by the federal government during the COVID-19 pandemic. That's really alive and well in the social sciences, very difficult to investigate, and it takes up a lot of time and resource to really dig into what's going on. So those are some of the unique challenges that we're facing in the social sciences space.
Yeah, I think that's a really great point. I think this is true across a lot of publishing questions and comments, that we are often working from a STEM-centered strategy. And it's certainly a basis, but it's not comprehensive. And also, I think what you just brought in is something we haven't talked about as much, which is, OK, we've talked a lot about author misconduct.
Honestly, we have not talked as much about editor misconduct, which you were just raising: the ethics of the editor, in politicization or accurately representing the breadth of the field or what have you, as well as how a journal has a scope, right, these sorts of things. But we also haven't talked as much about peer reviewer misconduct, which I presume is also something that is on your agenda.
So I have to say, I'm amazed how positive you all are, given, at some level, the degree to which I'm like: problem there, problem there, problem there. So I think we're very fortunate to have you all working in this area, having a vision of what you're able to contribute. So, OK, in spite of your great positive attitudes, I'm going to turn to what the biggest challenge is that you're seeing right now and how you're trying to organize to address this emergent issue.
So, I mean, what happens if scholarly publishing can't meet this challenging moment with successful strategies? What's at stake, and what are you trying to do about whatever these biggest pressing issues are? So, Amanda, what challenges are you thinking about these days? So you just led me into the perfect segue, because I was going to say adapting to how dynamic the misconduct issues are becoming.
You know, especially, I'm going to use AI specifically as an example here. Software can be used to fabricate or manipulate data. It can be used as paraphrasing software, so that it's not your typical definition of plagiarism; you changed a few words, and it is the same but it's not the same. So, you know, we see a lot of that, and it has allowed paper mills to run more rampantly, in addition to everything else that AI is causing for us.
But because of that, though, you know, our institution, like everyone on the call seems to say about their organizations, has ramped up its unit. And, you know, that's especially true for us. Our team has grown a lot in the past couple of years, and we're continuing to scale up the way that we work. And, you know, it allows for a designated team to work on these issues
so that we can be more focused on, you know, finding all of the new, different software available, and we can pilot it and see what's working best to detect the known issues that we're having. Even more importantly, things like the STM Integrity Hub are allowing us to collaborate with other publishers and say, you know, what issues are you seeing? How can we work on this together?
What's a way that we can share ideas? And that has been really influential in the way that we work, because everyone does see a slightly different problem, you know, just listening to us right now. And it's really great that we can have the opportunity to say, hey, have you seen this software yet? Like, this is really good at checking for text, it's the best one we've seen, or, you know, whatever the situation might be.
So I think, you know, that's what keeps us so positive: everyone is working to a shared goal here. Great. So, Yael, what's keeping you up at night? Well, I think AI is definitely keeping me up at night. But something broader is the ongoing erosion of public trust in science.
It's something that's been happening for years. There have been specific reasons why people feel more comfortable saying that they don't trust the experts. And when there are issues with research integrity, I think that can easily add to it, and it can give people something to point to, to say, well, you know, this just proves why I shouldn't trust anything. There was a really great and chilling example that I saw last week.
Somebody posted about the solar eclipse and how it was really depressing that scientists could predict this thing to within a second, to the point that people were making travel plans around it, and yet sometimes these same people are not going to believe scientists when they point to things about climate change. And that was just really, really sad. So if there's anything that we can do as people working in the research integrity space to help offset that erosion of public trust in science, that's what keeps me up at night.
Yeah, I mean, it's kind of interesting that we're working for trust in science and having to deal with the fact that ultimately we have people who are scientists who are not trustworthy. So there's a part in which it's like, well, if all these people were trustworthy, we wouldn't be scaling up our teams. So there's a real tension there between wanting trust in science while admitting that not all of the actors are actually trustworthy.
And so what is the reasonable response from somebody who's facing this, you know, who doesn't understand the kinds of systems that we're talking about? I think it's very challenging. And, you know, maybe the eclipse is something different than the climate question, I don't know. I just think it's an interesting thing to talk about, the difference between the science and the actors who are doing the science.
And I would add to that, too, because that's an interesting point: as a journal we might have a human curiosity about why something happened, why research misconduct happened. But at the end of the day, that shouldn't be part of the discussion. It needs to be kind of a just-the-facts thing. You know, why somebody did something or didn't do something is kind of irrelevant.
We just want to make sure that, by and large, the final information is accurate, and how it got to that point becomes a secondary part of the story. So, Kim, I know you are very active in a number of industry groups, and probably the rest of you are as well, because in the kind of field that we are in, we're a very collaborative, engaged group. But I'm wondering if you could draw on some of your industry-wide work to say, OK, what's the bigger picture that people are really paying attention to?
I think there are a couple of groups that I'd touch on there. So COPE is obviously the kind of industry guideline that many of us adhere to, and many of our journals are COPE members. What COPE have been trying to do is engage institutions more, and I think that's really important work. And I think that is one of the things that keeps me up at night. And I know I've seen, you know, posts on The Scholarly Kitchen about this.
You know, publishers are taking a lot of heat from all sides here, and it doesn't always feel like we're particularly well supported by other players in the community, and I would include funders and institutions in that. And I think COPE are doing some really good work in trying to reach out to institutions, both educationally but also to learn from institutions: what is it that we could all be doing better here? So I think that's one.
The other piece of work is the STM Integrity Hub, which is really trying to address this challenge from a technological point of view. And that's been hugely beneficial for me, both just in terms of personal development, but also sharing intelligence and knowledge with other people facing into these challenges day in and day out. We're learning from each other all the time.
We're sharing more information than ever, and fantastically, STM is, you know, so far footing the bill for all of this. So, you know, it's really only through donations that this work is all being done. And we're in an amazing position: we're trialing a number of the different offerings that the Integrity Hub has, and I just feel really privileged to be a part of that.
And I think it's really key that we all accept that technology and humans are both part of the solution. It cannot be one or the other. To completely rely on technology is foolish, but, you know, to completely rely on humans is impossible to scale. So it really has to be a complement. And huge credit and thanks to STM for what I think is really kind of leading, progressive work in this area.
That said, there is a lot of technological choice on the market now, and we're almost flooded with it, actually. There's been a real proliferation of companies offering solutions, most of them startups, some of them from people in publishing, which is brilliant, to see that innovation happening. But applying those technological solutions at scale is now a challenge.
So how do we integrate those solutions with our current software and ways of working in a way that we're not introducing more inefficiency? Like, we have to be able to do this at pace and at scale, and that is a challenge. And I think we all have to get wise to the fact that our profit margins have to reduce in order to address this. You can't do research integrity on the cheap.
As much as we'd all love to, you can't. You need to invest in the people and you need to invest in the technology. And those are discussions that I find myself having reasonably often with, I should say, a very supportive board of directors. But I'm the person no one ever wants to hear from: I'm either telling them there's a crisis on the horizon or I'm asking for more money. You know, it's sort of interesting, as we're sitting here talking, there's a real similarity to the challenges libraries have within their institutions, right?
We don't bring in any tuition revenue. We cost the institution money, yet we're critical to the quality of the institution. And so it strikes me that research integrity work, you know, the staffing and the technology, et cetera, has a similarity there. You're critical to the quality of the work, which means to the ability to generate revenue, but you don't bring in any revenue yourself.
And so you're always making this argument about the value you bring to the process without being able to say that you directly bring revenue to the process. If anything, and I'm sorry to interrupt you, if anything, we reduce revenue, because we're the people who are rejecting submissions; we're making the numbers go the wrong way. I suppose in an APC model, for sure. Sorry. Yeah, in an open access model, yeah, absolutely.
Yeah, so, you know, as a librarian for my entire career, and as the president of the Association of College and Research Libraries who kicked off the Value of Academic Libraries initiative, which was a whole initiative around libraries being able to articulate this value proposition without being able to say, look, we also bring in revenue.
No, in fact, we actually cost you a lot of money. So I'm very, very sympathetic to this, and I think it's a really challenging situation, because people are always like, well, of course we need you, but how much of you do we need? So, yeah, this is really interesting. So, I mean, it's clear there's a lot of work to do, and I want to turn our discussion back for a moment to the actual research integrity professional.
Let's say we've got people in this webinar, or listening later, who are like, I think I'd like to do this work. This sounds interesting to me. It's a growing area, so presumably there are jobs, you know, that sort of thing. So what do you think the knowledge and skills needed to work in this area are? I mean, if you're in a position to be hiring, what are you looking for?
So, Adya, let me start with you. Sure, yeah. I think I'll start with knowledge, and I think nobody will dispute this, but a really robust understanding of peer review and publishing operations is a really helpful set of knowledge and skills to have in this role. Because if you don't understand the processes up until the point the issue reached you, it's going to be really challenging to resolve it.
In terms of skills, I really like to think of them as skills and behaviors. I think we're talking about two kinds of research integrity professionals: the one that's reactive, handling the things that come up post-publication, but also the proactive side, where we are starting to do more around technology, working with other departments in the publishing industry.
So I think, from my perspective, somebody who is successful in this area has to be very detail oriented and workflow oriented, and I think following a process is hugely important now, more than ever, because people are challenging the decisions we make as publishers. Especially when something is published, people get very upset; if you talk about retracting their work, they will challenge it.
So your process needs to be airtight. We adhere to COPE guidelines, but, you know, COPE guidelines and COPE flowcharts don't apply to every single situation. So really having transparency around that process, and adhering to that process, is really important. I would also say that for the proactive research integrity individual, I really like to see creativity and experimental behaviors. Again, that's very much related to workflows.
I'm an experimentalist by training, so I know that you need to follow a process; a scientific process is important. You can't just go off and try any tool on three submissions and say, hey, this works really well. So I think those are very key skills and behaviors that help people be successful in this area, from my perspective. That's really great, and I really love the notion that you have to have this sort of creativity, as well as this ability to bring an experimental approach or mindset to it.
Kim, you've mentioned technology a number of times, so I'm going to ask you to scope, maybe, what you think the technology skills people need to have are. I mean, do they need to be able to program these tools, or evaluate them? Like, what's the tech skill set for this professional? Programming does not hurt.
And we actually find that a lot of the information we get from the sleuthing community comes from people who've built their own computer programs, you know, in the evening, outside of their day job, and it's fantastically useful information. So I would agree: the ability to code and the ability to work with big data sets are really important. You know, statistical skills are also incredibly helpful. But AI is probably the real big one, at least a comfort with AI.
But as I just said, speaking to that creativity piece, it's the ability to think how technology could solve a problem. You don't necessarily need to be able to do the coding or make the app, or whatever it is, but you need to have the creativity and that way of thinking to visualize how technology could be a part of the puzzle, how technology could help. And I suppose that's part of evaluating the marketplace now: people will be very happy to take your money, but is this going to actually be helpful?
Yeah, and understanding how those technologies are built, how they are working and what the data is behind them is really important. We've tested a number of technologies, which I won't name for obvious reasons, where we've had a really high false positive rate and we've had to drill into what exactly it is that's causing that. And you need to be able to do that fine tuning in order to say this tool is right and this tool isn't.
And so that real comfort with data and analysis is becoming more critical than ever. Yeah, yeah. I mean, obviously there's an interpretive ability that has to come out, like, you get a report from a tool. But yeah, I remember once I was looking at one of these tools, and I was like, I don't understand the label on that. You're asserting an interpretation of this data point, not just describing the data point.
Are you sure you want to be asserting an interpretation from your technology, or do you want to be providing a data point that the team should be interpreting? And I think that's interesting; I think, for good reasons, they want to be able to say, no, I can tell you what this means. I'm much more skeptical about the technology's ability to tell me what it means, even though I think it often does a great job of pulling out data for a human to consider.
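As a worked illustration of the point about drilling into false positives: before trusting a screening tool, a team might score it against a set of manuscripts that humans have already judged. The sketch below is hypothetical; the field names and the tiny validation set are invented, and no specific vendor tool is implied.

```python
# Hypothetical sketch: score a research-integrity screening tool against a
# hand-labelled validation set before deploying it. Data and names are invented.

def evaluate_tool(records):
    """records: dicts with 'flagged' (tool output) and 'problematic' (human judgement).
    Returns basic confusion-matrix rates."""
    tp = sum(r["flagged"] and r["problematic"] for r in records)
    fp = sum(r["flagged"] and not r["problematic"] for r in records)
    fn = sum(not r["flagged"] and r["problematic"] for r in records)
    tn = sum(not r["flagged"] and not r["problematic"] for r in records)
    return {
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }

if __name__ == "__main__":
    validation = [
        {"flagged": True, "problematic": True},
        {"flagged": True, "problematic": False},   # the costly false positive
        {"flagged": False, "problematic": False},
        {"flagged": False, "problematic": True},   # a miss worth drilling into
    ]
    print(evaluate_tool(validation))
```

Even a tool with an impressive headline detection rate can be unusable at scale if its false positive rate means every alert consumes investigator time, which is the pace-and-scale problem the panel keeps returning to.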
So it's interesting; as you said earlier, it's a technology and a human solution, which strikes me as apt, in part because it's a technology and a human problem as well. So we have some great questions in the chat, or in the Q&A, I'm sorry, so I want to turn to those now, and I'll just attempt to put them in some sort of order. And what I'll probably do with each is maybe ask one or two people to respond.
So if there's one that, as you hear me ask it, you want to take, just maybe put your hand up and I'll be watching for that. So there are a lot of questions here about the training and development function that I don't think we've talked about as much. Like, how do you onboard editors to collaborate with you? How do you train your peer reviewers?
What kind of training and development program do you have in place for your community? We talked a lot about policy development, tech evaluation, all these things. I don't think we've talked as much about the training and development role you might have. Do you have that role? How do you execute it?
I'll share how we do it at IOP, but I'm sure it's relatively unique because of the internal peer review function that we have. So when somebody joins our peer review team, they typically go through a three-month training process. They will be working on live manuscripts, but all those manuscripts will always be checked by a more senior colleague. And research integrity is an absolutely critical part of that.
It's not like one module, it's a bit of every module; every task that they go through, there's a research integrity element to it. So that's our internal staff. Our external training is floopy, for want of a better word; it's really difficult to pin down. For our reviewers, we have a peer review excellence program which they can complete online, but of course it's not mandatory.
And we run a webinar series for our editorial board members, and we provide documentation, but it's not mandatory, and how much of it goes in is questionable. You know, there will be some incredibly diligent members of the editorial boards and there will be some who just don't believe that this stuff happens. And I know that for a fact because I've had those conversations. So, for us, we have a massive advantage in the way we do peer review.
I can't imagine how you train a cohort of 300 journals' worth of editorial boards. I'd love to learn from the other panelists, to be honest. I can take that one; this is Amanda. So we actually just had this meeting last week. We have over 200 publications, and we have something that's called a panel of editors meeting that happens once a year, and it's for all of the editors in chief.
And, you know, there's a bunch of information that gets passed on during this. It's like a three-day meeting, I believe. And one of the things that's presented is an update from the publishing ethics team and the things that we're doing. Part of it's a training session, especially for the newer editors, and then those editors report it back to their respective editorial boards.
But, you know, that's something that IEEE does. And then we have a lot of the same things that Kim also mentioned with the reviewer training, but it's not mandatory or anything, very similar. So one large group setting, and they pass it on from there. Every new editor also gets onboarded by their respective journal manager, too. So a staff journal manager is working with the editor in chief for each of the journals.
Yeah. So there's an interesting question that could be asked about the effectiveness of such training, the adequacy of such training. You know, there are technology solutions, there are training solutions; when do we need which? I certainly see all those tweets that come out where people are like, you know, this was rejected because the editor said it had x percent plagiarism, but it's not plagiarism, it's citations and quotes.
So why is it being rejected? Clearly there's just something that's gone wrong in the process. And, you know, you kind of see it as a training problem more than anything, but then how you actually get a training solution in place for volunteers is a very challenging situation. So there's another question here that I think is really interesting, which is: we've been talking a lot, I think, about the signals that tell us that something is a problem.
Are you also beginning to develop signals that say, actually, this one looks pretty good? Like, it's a sorting rather than a detection function; a sorting function is kind of how I would say it. So we see this in the sense of trust markers, or, I think the piece in The Scholarly Kitchen yesterday, which I'll link in a second, was sort of talking about the peacock showing its feathers, sort of saying, no, you can trust me.
Are you starting to work in that way? So Yael is right on this one, so go for it. So I'm going to play a little bit of devil's advocate here. I will start by saying that I think everybody who works in this space develops kind of a Spidey sense, and you kind of trust your gut. But the flip side of that, the devil's advocate part, is that I think it's extremely important to never take something for granted.
There are just too many cases where it seems so clear what has happened, and who did right and who did wrong, and then there's this plot twist and everything that you thought you knew was fact is completely blown out of the water. So keep a mindset of: you sometimes just really don't know what is actually going on. So I'm sorry.
I know that we all have positive attitudes, but that's a little bit of a maybe, sometimes not. Sure, so, I mean, there's definitely the, you know, trust-but-verify kind of thing happening. But I guess I'm still wondering, you know, there are these sort of trust marker things, or things that people are trying to investigate.
And maybe this is even in things more like paper mills. Like, we've seen people report on tools that are in the marketplace now that will tell you that this author who's on this manuscript has never published in this area before, in spite of having a publication record, or, alternatively, that this author publishes in this area all the time; in fact, here's how many times they've already published in your journal or with your publication. So what I'm sort of asking is, is this coming in, or is it really mostly focused on the detection of the problematic?
Yeah, Kim? I would say it's mostly still the problematic stuff, but increasingly there are some really interesting innovations in this area. So the ORCID Trust Markers project is potentially really useful; if we can get institutions engaged with it, that would be great. And things like data availability statements and the actual posting of raw data are not foolproof by any means.
And I absolutely hear you that you can't take anything as a given, but it's an indicator. The presence of a preprint that significantly predates the submission, with a matching author list, and so on and so on; little things like that are all helpful. It's not a magic wand by any means, and I don't want to give paper mills any clues, but there are, I think, little digital footprints that good research should typically be leaving, especially when we talk about open science.
And the presence of those footprints is really reassuring, really reassuring. Like I said, we just need institutions to get into supporting that and, if not mandating, at least encouraging their researchers. It makes me realize I should do my own disclosure here: as chair of the ORCID board, I think a lot about these trust markers. So let me make sure I disclose my, at least sort of, vested interest in this approach.
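As an aside, here is what aggregating those footprints might look like: a minimal sketch that collects a few of the positive signals Kim mentions (a data availability statement, verified ORCID iDs, a preprint that predates the submission with a matching author list) into a triage note. The field names, the 30-day window, and the idea of a simple marker list are assumptions for illustration, not any publisher's actual scoring, and, per Yael's caution, no single marker is proof on its own.

```python
# Hypothetical sketch: collect positive "trust marker" signals on a submission.
# Signals echo the panel discussion; field names and thresholds are invented.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Submission:
    submitted: date
    has_data_availability_statement: bool
    authors_with_verified_orcid: int
    author_count: int
    preprint_posted: Optional[date]   # None if no linked preprint
    preprint_authors_match: bool

def trust_markers(s: Submission) -> list:
    """Return reassuring footprints to surface alongside any concerns."""
    markers = []
    if s.has_data_availability_statement:
        markers.append("data availability statement present")
    if s.author_count and s.authors_with_verified_orcid == s.author_count:
        markers.append("all authors have verified ORCID iDs")
    if (s.preprint_posted and s.preprint_authors_match
            and (s.submitted - s.preprint_posted).days > 30):
        markers.append("preprint predates submission with matching author list")
    return markers

if __name__ == "__main__":
    sub = Submission(date(2024, 4, 1), True, 3, 3, date(2023, 12, 15), True)
    print(trust_markers(sub))
```

Surfacing these alongside any red flags supports the sorting, rather than purely detection, function raised in the question.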
But it was in the Q&A, I want to be clear. So, we are so close to the top of the hour here. We could talk about this for so long; time just flies. So I want to acknowledge this great team of people that we've had on our panel here today, their amazing insights, their willingness to share their experiences so honestly and forthrightly. If you would, please join me in a round of applause for our panelists, as well as the SSP staff and volunteers who made today's webinar possible, and The Scholarly Kitchen for its support of this webinar.
I know I personally have learned a lot today. I've enjoyed the conversation and I hope you have too. Let me turn it back over to Lori for a final word. Thanks so much, Lisa. Thank you, everyone, again for participating in today's SSP webinar. Thank you to our speakers again for sharing their time and expertise.
Really fascinating; I agree with Lisa. Please complete the evaluation by scanning the QR code. We encourage you to provide feedback and to help us determine topics for future webinars. Registration is now open for the SSP annual meeting, scheduled for May 29th through 31st, and early registration ends on Friday, April 19th.
So get your registrations in by then. Thank you again to our sponsors, Access Innovations, OpenAthens, and Silverchair. Today's webinar was recorded, and all registrants will be sent a link to the recording when it's posted on the SSP website. And that concludes our session for today. Thank you, everyone.