Name:
Working Together to Preserve the Integrity of the Scholarly Record in a Transparent and Trustworthy Way
Description:
Working Together to Preserve the Integrity of the Scholarly Record in a Transparent and Trustworthy Way
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/717442f5-bffd-4b58-a401-421ad189e421/videoscrubberimages/Scrubber_3324.jpg
Duration:
T01H04M36S
Embed URL:
https://stream.cadmore.media/player/717442f5-bffd-4b58-a401-421ad189e421
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/717442f5-bffd-4b58-a401-421ad189e421/session_5d__working_together_to_preserve_the_integrity_of_th.mp4?sv=2019-02-02&sr=c&sig=JOhvezJTGPnbYbdcQKW4uBSZVjfoPGoujD2W3wi35Ns%3D&st=2024-11-22T18%3A08%3A38Z&se=2024-11-22T20%3A13%3A38Z&sp=r
Upload Date:
2024-02-23T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
And also authorship. And the second one was mainly focusing on peer review. And the last one was on COI and a number of other issues that our members are interested in. And we had a workshop on how to be listed in Scopus; many of the Asian countries and journals are looking for that in particular.
Their goal is Scopus. I don't know whether that's a good thing, but that's the reality. And we had a two-day workshop for that, and last year we had the second workshop, online, about peer review. And this year we have already had one, as everybody says, on ChatGPT: about authorship, what's going to happen, and what we're going to do with publication.
And this is Science Editing, published twice a year, and the newsletter, published four times a year. And this is what I've described, and the timetable for the year 2023. Other than that, we have publication workshops with the government and other organizations, and we also consult and give lectures on publication ethics to the government, governmental bodies and other organizations.
And in order to have the qualification for the certification exam that we run, one must take these training credits for three years and then qualify for the examination. With that, thank you very much.
Thank you. Great to hear about the training that's happening with the science editors in Korea. I'm now going to move on to a section about open metadata and the research nexus and how that can help to tackle these challenges around research integrity. Lovely. So, as I'm sure many of you know, Crossref is a membership organization for those in the scholarly communications community to register identifiers and metadata about research objects, which we then make freely available through our open APIs.
On the screen I'm displaying an example that was registered by one of our members, the Korean Vaccine Society, along with some of the underlying metadata about that research object.
We now have over 18,000 members in 151 different countries, and our member base is very diverse. Many of our members no longer self-identify as publishers. We have universities, societies, government agencies and NGOs, libraries, museums, some banks, even a couple of botanic gardens. And in fact, our largest group of members, 34%, now consider themselves universities first and foremost.
Basically, if your content is likely to be cited in the research ecosystem and considered part of the evidence trail, then you should join Crossref. Our members have so far registered over 146 million records with us. These records include the sorts of things you would probably think of, journal articles, book chapters, conference papers, but also some things that you might not realize, such as grants, peer reviews and preprints.
And it's very easy to focus down on these individual research objects and the metadata about them. But actually it's in the relationships between these objects and how they act on each other, that the really key context comes out to help us preserve research integrity. And we tend to talk about the research nexus, and I'll explain a little bit more about that. There's an image on the screen, which aims to explain the research nexus, and I'll describe what it shows for those who can't see what's on the screen.
There are six differently colored circles on the screen. The center circle describes the individual research objects, so journals, books, data, preprints, peer reviews, and also the actors involved: funders, institutions, authors, researchers. Around this inner circle are five other colored circles which show the relationships between these objects and actors and also how they act upon each other:
Funding, creating, posting, responding, and using. So rather than just focusing on the metadata about the individual outputs or the individual actors, the research nexus captures the relationships between these elements. And it's this extra context that's key to preserving the integrity of the scholarly record. Having this metadata and extra context freely and openly available makes it easier and faster for the community to make decisions about the trustworthiness of organizations or their published outputs.
Being able to see the authors involved and their affiliations, who cites what, who funds what, who peer reviews what, and what post-publication activity there's been: this all provides vital context for folks to evaluate what they're looking at. Conversely, it can make it harder for parties to pass off information as trustworthy, if either this context isn't there or this context starts to make things look slightly suspicious.
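To make the open-APIs point concrete, here is a minimal sketch of pulling that relationship context for a single registered record from Crossref's public REST API. The endpoint and field names follow the public API as commonly documented; the DOI shown is only a placeholder, the requests dependency is assumed, and the code is illustrative rather than a definitive integration.

```python
# Minimal sketch: fetch one registered record from the public Crossref REST API
# and pull out the relationship-bearing metadata that gives "research nexus"
# context. The DOI below is a placeholder; replace it with a real one.
import requests


def nexus_context(doi: str) -> dict:
    """Fetch a registered record and extract context useful for trust decisions."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]

    return {
        "title": work.get("title", []),
        # Authors and their affiliations, where deposited by the member.
        "authors": [
            {
                "name": f"{a.get('given', '')} {a.get('family', '')}".strip(),
                "affiliations": [aff.get("name") for aff in a.get("affiliation", [])],
            }
            for a in work.get("author", [])
        ],
        # Who funds what: funder names and award/grant numbers.
        "funders": [
            {"name": f.get("name"), "awards": f.get("award", [])}
            for f in work.get("funder", [])
        ],
        # Who cites what: outbound references and inbound citation count.
        "reference_count": len(work.get("reference", [])),
        "cited_by_count": work.get("is-referenced-by-count"),
        # Crossmark updates this record applies to other records
        # (present when this record is itself a correction or retraction notice).
        "updates": work.get("update-to", []),
        # Other registered relationships (e.g. preprint <-> published version).
        "relations": work.get("relation", {}),
    }


if __name__ == "__main__":
    print(nexus_context("10.5555/example-doi"))  # placeholder DOI
```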
Now everyone has their part to play in making the research nexus vision a reality. We heard on Wednesday from Elizabeth that it takes a village, and that's really true here. You know, institutions need to work with their researchers to explain why these things are so important and why these metadata elements are so important. Authors need to ensure that they're passing them on to the publishers they work with.
Publishers need to work with their platforms and suppliers to make sure that they are collecting that metadata and passing it to Crossref. And we need to make sure that we're collecting it in as easy a way as possible and also distributing it to the community in a way that's as easy as possible to consume. So here's where Crossref sees our role in this. We're focused on enriching metadata to provide more and better trust signals while keeping barriers to membership and participation as low as possible to enable an inclusive scholarly record.
And you'll see that we mention keeping barriers to membership and participation low, and that's very important. Full participation is key. If organizations are blocked from participating in the nexus due to cost, lack of information, or because they're maybe just starting out, that reduces the completeness of the record. We aren't here to police things, but we are here to ensure that the community has a full, clear and open picture of the landscape.
And there's a lot that we're doing to enable global participation, which I haven't got time to go into today. But if anybody is particularly interested, do just come and grab me at the end of the session. So I thought I'd spend a little bit of time talking about funders. As we know, they are key to this and sadly we don't have funder participation today.
But Crossref has been working with funders for quite a few years now. Funders can actually join Crossref as a member and register identifiers for their grants, so publishers can then include metadata about the specific grants that have funded specific work in their funded outputs. We spend a lot of time talking to funders. We've got a funder advisory group.
We've also been working with the Open Research Funders Group, Altum, Europe PMC, the OSTP and ORCID's funder interest group. And this is what we're hearing from funders: yes, they care about open research, but they do also care about legitimate and traceable research, and they do want to be involved in these conversations. And they are looking for publishers to work with them on this.
So if you do want to reach out to funders, again, do come and have a chat, because we might be able to broker some conversations. And funders are particularly interested to see if there are retractions or corrections on published outputs that they have funded. One funder we spoke to relies informally on a community contact to notify them of updates, corrections and retractions to content that they've funded.
And that's manual, and there absolutely has to be a better way to do that. And spoiler alert: there is. It's open data. So what can those in the audience do to help? Well, those who are Crossref members can start by making sure you're including this extra context in the metadata that you register with us.
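As a hedged illustration of "it's open data": the sketch below shows one way a funder (or anyone) could replace that manual community contact by polling Crossref's open REST API, matching recently registered retraction or correction notices against the DOIs of works that carry the funder's identifier. The funder ID is a placeholder, the filter and field names follow the public API as commonly documented, and real use would need pagination and proper error handling.

```python
# Illustrative sketch: find out whether any works associated with a funder
# have recently had retraction notices registered, using only open metadata.
import requests

API = "https://api.crossref.org"
FUNDER_ID = "100000000"  # placeholder Open Funder Registry ID


def funded_dois(funder_id: str, rows: int = 100) -> set:
    """DOIs of works that carry this funder in their registered metadata."""
    r = requests.get(f"{API}/funders/{funder_id}/works", params={"rows": rows}, timeout=30)
    r.raise_for_status()
    return {item["DOI"].lower() for item in r.json()["message"]["items"]}


def recent_update_notices(update_type: str = "retraction", rows: int = 100) -> list:
    """Recently deposited update notices of a given type (e.g. retraction)."""
    r = requests.get(
        f"{API}/works",
        params={
            "filter": f"update-type:{update_type}",
            "rows": rows,
            "sort": "deposited",
            "order": "desc",
        },
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["message"]["items"]


if __name__ == "__main__":
    ours = funded_dois(FUNDER_ID)
    for notice in recent_update_notices():
        # Each notice lists the DOI(s) it updates in its "update-to" metadata.
        for upd in notice.get("update-to", []):
            if upd.get("DOI", "").lower() in ours:
                print(f"{upd['DOI']} has a {upd.get('type')} notice: {notice['DOI']}")
```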
We mentioned corrections and retractions, but also references and grants, particularly now that there are identifiers for individual grants: include that in the metadata. And without further ado, I'm going to pass on to Web of Science and Nandita. Thank you, Matt. Can we just maximize the slide, please?
My eyes are not the best, but I can do without the notes. Super. So, can you hear me? I am Nandita Quaderi, and I'm the Editor-in-Chief and VP of Web of Science. Could we just have it so I can see the slides? Sorry, I don't need the notes, but if I can see what I've written on the slides that would be really helpful.
Perfect. So I want to start by saying a few words about perverse incentives and unintended consequences. Any kind of evaluation that uses just data, well, metrics rather, in a very formulaic fashion and relies on single-point measures is not good practice. And this is encapsulated in Goodhart's law, which is shown on the right of the slide here, and which says that when a measure becomes a target, it no longer is a good measure.
And we've got a sort of hypothetical situation here where you've got a nail factory. If they are rewarded for making many nails, they'll make tiny, tiny nails that are useless. If they are rewarded for making heavy nails, they'll make enormous nails that are useless. And this happens in all walks of life, including where we work in the research ecosystem. What happens is that it creates these perverse incentives and can result in unintended consequences, in that the scores themselves become what people are aiming for rather than what the scores are meant to measure.
So the misuse of bibliometrics and the overreliance on bibliometrics within research assessment is driving fraudulent behavior. Metrics, if they're used in a responsible way, are really helpful in research assessment because they can add objectivity and they can counter human biases and subjectivity in peer review. However, we need to remember that metrics, bibliometrics, are very, very powerful normative tools that can drive changes in behavior.
And so they should never be used by themselves as a replacement for peer review. Now, what's happened in research assessment is that there is this drive to have as many publications as possible and to be cited as much as possible.
And this is what's driving a lot of the fraudulent behavior that we're going to be talking about on the panel today. And as Amanda said, and what I often say, unfortunately we've gone from a situation where fraudulent behavior was a kind of cottage industry, individuals or small groups of cohorts defrauding us, to a much more industrialized, wide-scale problem. And what's happening is that research integrity is being compromised and the scholarly record is being polluted.
So as there's more and more pollution of the scholarly record, it's more and more important that we can find trustworthy sources of data which are then used to create trustworthy metrics. At the Web of Science, we have a rigorous selection process to make sure that we filter the journals, books and conference proceedings that want to be included in the Web of Science. And to give you a sense of context, only around 15%, that's one-five, of journals that are submitted for evaluation actually pass evaluation and enter the Web of Science Core Collection.
And the Core Collection has six different indices in it: one for books, one for proceedings, and four for journals. Of the four journal indices, we've got the three historic ones that are subject specific, for the sciences, social sciences, and arts and humanities, and they're shown at the top of the diagram here. And then we have our multidisciplinary index, which covers all disciplines.
So when I say only 15% of journals enter, I'm talking about ESCI, because ESCI is selective, but it's not competitive. We're not looking, for example, for high citation activity. If you've already got 1,000 genetics journals and journal 1,001 comes in and passes our 24 quality criteria, it's allowed in. However, to get into one of our subject-specific indices, often you'll hear them called the flagship indices,
it is competitive. So you have to pass the 24 quality criteria, where we're basically looking to see whether a journal does what it says on the tin: if it says it's doing something, is it actually doing it? We then apply an additional four impact criteria to look for the journals that have the most scholarly impact in their field. And as a measure of scholarly impact, we mostly use citations.
I should step back. What I didn't mention is that once a journal has entered the web of science, it can't assume it's in for life. We periodically reevaluate journals to make sure that they are still meeting our quality criteria. And if you want to know more about evaluation, our evaluation process, all our criteria and policies are on our website, the URL is on the slide.
And we also have regular open house sessions where one of the Web of Science editors goes through our processes, and that's then followed by a Q&A. So indexed journals that, as I mentioned before, no longer meet our quality criteria are delisted. And as there's more and more fraudulent behavior, we are spending more and more time re-evaluating indexed journals at the expense of evaluating new journals.
And so we've always been very reliant on, and very grateful for, community feedback telling us where to devote our attention when it comes to re-evaluating journals, because otherwise, you can imagine, we're looking for needles in a haystack. We've got over 22,000 journals in our collection; we can't re-evaluate every single journal, you know, frequently. However, that's quite reactive.
And what we've heard already in quite a few sessions is as fraudulent behavior increases, we are having to become more and more proactive in our approach to finding bad content, and that's what's happened with us as well. We have recently had the delivery of an internally developed AI tool that helps the editors know where to focus their attention by looking at some hallmarks of journals that are at risk of going rogue.
So once the tool identifies these journals for us, the editors then apply their normal manual process of applying our 24 quality criteria. And you may have seen that there was quite a lot of noise earlier this year when we delisted over 50 journals in one go. Following that, there was a sense I heard from a few people: oh, but why are you penalizing publishers for being transparent?
Why are you penalizing publishers for retractions? I just want to state very clearly: we do not penalize publishers for being transparent, and we certainly do not penalize publishers for retractions. Retractions are a very healthy way of keeping the scholarly record clean. All the journals that we delisted were delisted based on evidence we saw from what was on the journal website, the content they were publishing.
We weren't given any inside information from the publishers about which journals they themselves may have been investigating, nor were we able to peek behind the curtains to see their peer review processes. We delisted journals where the content they were publishing clearly had not been subject to rigorous review, because it was very, very out of scope, it had nothing to do with the scope of the journal, or in some cases it was just nonsense.
So another learning we had from the delisting of those 50 journals is that there was a degree of confusion, and almost frustration, that we didn't list out which journals had been delisted for failing our criteria. And we listened to that. We have something called the Master Journal List, which isn't behind a subscription barrier, and that should be the authoritative source for finding out what is currently covered in the Web of Science Core Collection.
It's updated once a month, and from last month's update, the May update, we are now listing which journals have been added to the Web of Science and which journals have been delisted from the Web of Science. The delistings can happen either because a journal failed our editorial criteria or because, for example, it failed one of our production criteria. Sometimes publishers just stop sending us content, and sometimes a journal will look as though it's been delisted just because it's changed its journal title.
So it will be covered still, but with a new title and not the old title. So one of the things that's really key for us is communication and collaboration with the rest of the community. So back in 2019, we sort of tipped Retraction Watch off about an investigation that we had conducted in house of one of the first examples of one of these sort of authorship sales scams.
It was a Russian network, but the authors and the articles for sale were from throughout the world. And Anna Abalkina has really taken this up, and she regularly publishes instances of articles that are on this website, so people are aware of that. In return, a couple of weeks ago she got in contact with us to say that we were in fact indexing a journal that had been hijacked, which allowed us in turn to act quickly and make sure that we weren't taking any more content in from this journal, putting it on hold and investigating.
So on the Master Journal List as well, we show when a journal is under investigation. Sometimes that investigation shows the journal is fine and we take the tag off. More often than not, that investigation leads to the journal being delisted. So for my final point today, I'd like to mention an upcoming change in the JCR release that's coming later this month.
At the moment, only the most scholarly impactful journals in the sciences and the social sciences are eligible for a Journal Impact Factor, a JIF. From this month's JCR release, all journals in the Web of Science are eligible for a JIF, and this means 9,000 additional journals will get a JIF, and 3,000 publishers will get a journal with a JIF for the first time.
And this will increase our coverage of journals from the Global South with a JIF by at least 5%, and there'll be an extra 8% of journals that are fully open access that get a JIF. So why are we doing this? The JIF was introduced in 1975, when we didn't have the problems with research integrity and fraudulent behavior that we see now, so having a clear indicator between trustworthy and untrustworthy wasn't as necessary.
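As background for readers less familiar with the metric being discussed (this is general knowledge, not taken from the speaker's slides), the standard two-year Journal Impact Factor for a year Y is usually defined as:

```latex
\[
\mathrm{JIF}_{Y} \;=\;
\frac{\text{citations received in year } Y \text{ by items the journal published in years } Y-1 \text{ and } Y-2}
     {\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
\]
```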
What people wanted was an indicator of high scholarly content, of high scholarly influence rather. The world has changed. What's important now is to have indicators of trust, and giving a JIF to all the journals in the Core Collection that have passed our quality criteria provides that clear indication: these are journals you can trust.
They don't have to have high scholarly impact. They don't have to be cited a lot. But what they do have to do is have processes in place that allows us to rely on what they are publishing. And that is where I would like to end. Thank you.
Good morning, everybody. So I'd like to start with this slide, with this anthology of headlines that speak to many of the issues and the challenges that we are talking about today: paper mills, retractions, sham science, and then a nice topping of LLMs that in some cases supercharged some of these existing discussions but also brought to the fore a whole new array of questions and concerns.
This slide also shows how many of these topics and challenges have been picked up by both specialists and mainstream media. And I think that really speaks to the fact that integrity is really at the core of what society expects, and I would argue rightfully so, from academia and from scholarly publishing. I think that that criticality of trust and integrity is also reflected in the theme of the SSP conference this year transformation, trust, integrity, and also the tagline advancing trusted research really speaks to that centrality of integrity and trust.
Before I talk about the Integrity Hub, just a few words about STM and STM Solutions. STM is the international association for academic and professional publishers. We have over 140 members spread over more than 20 countries, representing all scholarly disciplines, and STM has always been active in the space of standards and technology development. That engagement was really taken to the next level about two years ago when STM Solutions was founded.
STM Solutions is the operational arm that is tasked with developing and running shared services and common infrastructure for our members and the wider scholarly communications community, and it's what I have the pleasure to lead on a day-to-day basis. The Integrity Hub, I think, is a perfect example of that charter for STM Solutions. It's a broadly scoped program to equip the scholarly communication community with data, intelligence and technology to protect research integrity.
And there were two things that I'd want to pull out of that mission statement. First is the word equip, which really signals that the program is meant to be an enabler; we're really doing this for and by the community. And the second thing I wanted to pull out is this combination of data, intelligence and technology. This is not just about technology. Technology is a very important component of the work that we do within the Integrity Hub, but it's not exclusively a technology play.
Collaboration is very much at the heart of everything we do. Going back to the analogy from Elizabeth Bik in the opening plenary, it takes a village, right? Also in the session on research integrity yesterday, I think the importance of collaboration really shone through multiple times. And we see, and we hear from all of the stakeholders and members that we speak with, that research integrity is really critical for all of those stakeholder groups in scholarly communications.
It's also an area where it's quite apparent how you can be stronger together, how sharing information and sharing intelligence is a good thing: being able to look at the problem space holistically from different perspectives and come up with solutions in a better and more efficient way. By bundling forces within the Hub, we're actively collaborating with a large number of individuals and organizations, many of which are also in the audience or at the conference:
COPE, ORCID, Crossref, PubPeer, the Problematic Paper Screener, Clear Skies, just to mention a few. There's always a risk of leaving parties out, so I do apologize; it's not intentional. All in all, we have over 75 individuals from across 25 different organizations actively collaborating in the Integrity Hub in some shape or form. The way we've organized that work is through a combination of a governance board, task forces and working groups.
So we have one working group on paper mills, one on image alteration and duplication, and one about simultaneous submissions. And of course there are touch points between these, right? They don't work in isolation, but we focus on a use-case-by-use-case basis. This is a slide that shows all of the logos of members participating in the Hub.
Again, just to underline the traction and how this work really resonated with our members and the community at large from the start. So the Integrity Hub as a program really rests on three pillars: knowledge and intelligence, policies and frameworks, and enabling infrastructure. Let me say a few words about each of these pillars.
So, knowledge and intelligence is really about that exchange of information and knowledge on an ongoing basis. That happens in the working groups and the task forces that I already mentioned briefly; they meet at a regular pace, and this is where a lot of this constructive dialogue is happening. We also organize events.
For example, we've had two quite successful research integrity masterclasses, most recently at the spring conference in Washington and, before that, at the December event in London. We'll organize another similar event in December this year in London, so save the date. And then we also make available educational and training material, for example a series of videos about image manipulation.
On policies and frameworks, we collaborate with COPE on the development of editorial policies, because of course, once a potential research integrity concern or breach is flagged, it's also important that editors and research integrity staff know how to act on it. And we've put in place a legal framework that gives publishers all the assurances they need to be able to have a common capability to look for patterns in submitted manuscripts across different journals and across different publishers, while at the same time respecting the confidentiality, the security and the privacy of that information.
On the enabling infrastructure pillar. In part, that's the infrastructure that has the common functionalities that are required to address all of the use cases that we spoke about. So one platform that has a lot of the data management, the data flows, the work orchestration, et cetera. And then on top of that, individual tools that address specific use cases.
We've recently launched the MVP, the minimum viable product, for a paper mill detection tool, and we're also quite far along with starting a pilot to detect simultaneous submissions. I'll say a few more words about that in my next slides. This is a high-level blueprint of the technical architecture of the Hub. So, zooming in a little bit on that technical infrastructure, it's really an enabling infrastructure that connects, on the one hand, data, manuscripts, with a series of tools that analyze that content for specific signs of manipulation or fabrication.
It's built as a very modular system by design, which makes it adaptive and also extensible, so that we can easily develop or plug in additional tools. Some of those are developed in-house by STM Solutions or in collaboration with our members, but there are also integrations with external tools that are on the market. I also wanted to call out that the Hub is really specifically meant to be a tool for decision support.
It delivers signals to editors and research integrity staff that call for additional investigation, but the Hub is never meant to be a system that makes these yes/no decisions autonomously. A bit more detail on the paper mill MVP, which we're quite excited about. This is a tool that operates on top of that enabling infrastructure and collects a number of signals delivered by tools that are already being used today, but then combines them and moves them upstream.
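The talk doesn't go into the Hub's internal code, so the following is a purely illustrative sketch of the general pattern just described: pluggable detector tools each contribute a signal about a submitted manuscript, and the results are aggregated into a report for a human editor rather than into an automatic accept/reject decision. All names, fields and thresholds here are invented for illustration and are not STM's actual implementation.

```python
# Purely illustrative sketch of the "modular signals, human decision" pattern.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Manuscript:
    title: str
    abstract: str
    references: List[str]


@dataclass
class Signal:
    tool: str    # which detector produced the signal
    score: float # 0.0 (no concern) .. 1.0 (strong concern)
    note: str    # human-readable explanation for the editor


# A "tool" is just a callable that inspects a manuscript and returns a signal,
# which is what makes the design modular and easy to extend.
Detector = Callable[[Manuscript], Signal]


def tortured_phrase_check(ms: Manuscript) -> Signal:
    # Toy stand-in for detectors such as the Problematic Paper Screener.
    phrases = ["counterfeit consciousness", "bosom peril"]
    hits = [p for p in phrases if p in ms.abstract.lower()]
    return Signal("tortured-phrases", min(1.0, 0.5 * len(hits)),
                  f"{len(hits)} suspicious phrase(s) found")


def reference_sanity_check(ms: Manuscript) -> Signal:
    # Toy heuristic: unusually short reference lists can warrant a closer look.
    score = 0.6 if len(ms.references) < 5 else 0.0
    return Signal("reference-sanity", score, f"{len(ms.references)} references")


def screen(ms: Manuscript, detectors: List[Detector]) -> None:
    """Run all plugged-in detectors and print a report for a human to review.

    Deliberately no automatic accept/reject: the output is decision support only.
    """
    signals = [d(ms) for d in detectors]
    for s in sorted(signals, key=lambda s: s.score, reverse=True):
        flag = "REVIEW" if s.score >= 0.5 else "ok"
        print(f"[{flag}] {s.tool}: score={s.score:.2f} ({s.note})")


if __name__ == "__main__":
    ms = Manuscript("An example", "a study of counterfeit consciousness", ["ref1", "ref2"])
    screen(ms, [tortured_phrase_check, reference_sanity_check])
```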
In the session today, Luigi spoke about moving from reactive to proactive. I call it here moving from corrective measures, which are already in use today to look at published material, to preventive measures: bringing them upstream so that they prevent the type of material that we're talking about from entering the scholarly record in the first place.
This work was also picked up by some major media; there was a very nice article in the Financial Times, so it was quite rewarding to see that the work we're doing is also being seen in that context. The next steps: we've just launched the MVP, and we're very busy this week and next week onboarding new publishers and users to start using the tool and getting some real-life feedback and real-life validation.
We've assembled a bit of a test corpus so that we can start to measure baseline precision and recall for the system. And of course, we're always on the lookout for additional tools that we can instrument and connect to increase the accuracy, precision and recall of the Hub as a whole. We're also about to start a pilot to look for simultaneous submissions.
We see that as a problem in and of itself: it brings a lot of additional load on the peer review system as a whole. But we also recognize that simultaneous submissions are often an indicator of paper mill activity. We have about 10 publishers that we're having active conversations with to join the pilot, and I'm quite excited about it because, at least to my knowledge, this is the first time that we will have the technology and the frameworks in place to be able to detect the same manuscript being submitted to different journals, with different publishers and different editorial systems, at the same time.
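The speaker doesn't say how the duplicate matching works, and the legal framework mentioned earlier suggests that raw manuscripts are not simply shared between publishers. Purely as an assumed illustration of how such a check could work in principle, and not the Integrity Hub's actual method, the sketch below compares locally computed MinHash-style fingerprints of submissions, so that only compact fingerprints, not full text, would need to be exchanged.

```python
# Illustrative only: flag likely simultaneous submissions across publishers
# by comparing locally computed MinHash-style fingerprints instead of full text.
import hashlib
from typing import List, Set


def shingles(text: str, k: int = 5) -> Set[str]:
    """Overlapping k-word shingles of a normalized manuscript text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}


def minhash(text: str, num_hashes: int = 64) -> List[int]:
    """Compact fingerprint: minimum hash of the shingle set under salted hashes."""
    sh = shingles(text)
    signature = []
    for seed in range(num_hashes):
        signature.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in sh
        ))
    return signature


def similarity(sig_a: List[int], sig_b: List[int]) -> float:
    """Fraction of matching positions approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)


if __name__ == "__main__":
    # Each publisher would compute fingerprints locally and share only these.
    text_a = "Deep learning methods for detecting fabricated western blot images in biomedical papers"
    text_b = text_a + " today"
    if similarity(minhash(text_a), minhash(text_b)) > 0.7:  # threshold chosen for illustration
        print("Possible simultaneous submission - flag for editorial review")
```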
Speaking of editorial systems, to really get to scale, integration with editorial systems is of course very important. We currently have a technical integration with one, Editorial Manager, and we are continuing with those integrations and also with other editorial systems. And that brings me to the end of my presentation. I'll hand it over to Pat; questions and discussion afterwards, of course. Thank you. Good morning.
I'm Pat Franzen, Director of Publications and Platforms at SPIE. I'm also a member of COPE Council, and for today's presentation I'll be wearing my COPE hat. Hopefully everyone in the room is familiar with COPE: we are the Committee on Publication Ethics. We provide advice and guidance on best practices for dealing with ethical issues. We are committed to educating and supporting publication ethics and good publishing behaviors across the entire ecosystem.
So that includes publishers, universities, research institutes, funders, authors, reviewers. The picture on the right will give you an idea of what COPE looks like. COPE Council is a volunteer group: we have about 40 members, with wide international reach, including publishers, editors and active researchers, so we have a really good, broad perspective on the issues affecting our industry.
We are also a membership organization. We have individual members, we have journal members, we have publisher members, and we're launching a university member program as we speak. There are more than 14,000 people that are actually active with COPE at the moment. So, just quickly, a little bit of background on us. Our purpose, again, is to educate and advance knowledge and methods of safeguarding the integrity of the scholarly record.
Our vision is to create a future in which ethical practice and scholarship is a cultural norm. And we basically act on three core principles to both promote that purpose and that vision. We support practical resources to educate and support our members and the scholarly publishing community. We are a resource, an objective resource. We provide leadership in the thinking on publication ethics cases.
Some of you might have submitted cases to COPE. We handle these on a pretty regular basis, where publishers will come across kind of a thorny issue that they might not have encountered before and that there's not good literature on. We'll debate it as a group and provide recommendations on steps forward. And we act as a voice: we try to be neutral, we try to be objective,
kind of remove ourselves from a specific situation and provide that level of impartial feedback that publishers and institutions need. So, just quickly, and we'll gloss over this slide, but these are basically the policies and core practices that are required to reach the highest standards of publication ethics. We have a very, very thorough vetting process for member publications, individual members and publisher members, similar to Web of Science.
You know, I think the typical member application takes me a couple of hours, and we have a high, high rejection rate at the moment. So we have a really rigorous set of standards that we apply when looking at our members. And we also do, you know, occasional reviews of members to make sure that they are continuing to meet the core principles behind the organization.
This was touched on a little bit yesterday by Mike Streeter, but we've got some recent resources that I'd like to call your attention to. I think it was on Tuesday or Wednesday of this week that we released guidelines on special sections and guest editors. I think everyone in the room is aware of the vulnerability at the moment with special sections; we identified that, we've had a very busy year, to say the least, and we went through and created guidelines that we hope the community adopts as a way to actually safeguard against fraud within these areas.
The image in the middle is the paper we worked on with STM on paper mills, so we have guidelines on paper mills and best practices to avoid that type of abuse. And we have a statement on ChatGPT and its appropriateness in the scholarly publishing area. So if you're not familiar with those resources, come to our website and check them out.
So worth the read. So, what are we up to now? Historically, COPE has been, I would say, publisher focused. We're focused on journal publishers, but the academic research ecosystem is obviously very broad, and we recently started a pilot to expand membership to include universities and research institutes. This launched in May of 2022.
We originally started with 11 partners; you can see their logos on the right-hand side there, and the idea was to actually work through a process and come up with tools that impact the universities as effectively as they're impacting publishers. What we're looking for in terms of university members and research institute members are organizations that conduct and publish original research and do so through standard publication outlets: journals, society publishers, commercial publishers.
We're encouraging increased training in research ethics and publication ethics on campus and providing tools to these universities and institutes to do that effectively. And we're starting the application process a little bit later this year. We are being methodical in terms of our approach; we don't want to accept 100 new members immediately, so the goal is to probably add about 10 to 12 later this year.
And then we'll grow that program over time. So as part of this, we're developing new resources, resources that are specifically geared towards universities and research institutes. You know, within the COPE corpus we've got lots of different resources, but again, historically they've been focused on publishers, so there's been a bit of a gap in our knowledge base there.
So we're developing new e-learning modules that are designed specifically for research integrity officers at these universities and institutions. We're creating tools to help establish training programs on publication ethics. We also offer confidential advice when problems come up, similar to what we do for publishers, so we can act as that independent voice, that independent review board, in ethics cases. And the university and research institute members will have access to all the various training materials that we've published over the years with publishers in mind.
These include case materials, forums, guidelines and various webinars. The goal here is to actually link research integrity and publication ethics. And apologies if the font is a little small on this chart, but we see research integrity as the activities that are happening in the labs, so at universities, at research institutes, and in the case of corporations, corporate R&D labs. And that's kind of the starting point: if you've got issues, fraud and malfeasance in the labs, it's going to trickle over and carry into the publications.
So the goal with this is to bring these two groups together, these two areas, research integrity and publication ethics, and establish common ground between the two. And basically this is the vision that we're working towards at COPE. I will also point back to Elizabeth's talk, where she says it takes a village. It absolutely does. You know, our goal here is to create a culture of publication integrity together.
We're trying to bridge the gaps that have existed for a while in scholarly communications. Get people on the same page, get people collaborating and communicating more effectively, better information sharing. We're trying to strengthen the network of support and education and debate in publication ethics. I think this conference has done a really good job at, you know, highlighting a lot of ethical issues and bringing diverse and different perspectives.
And I think, you know, one of my takeaways leaving here is, yeah, there's a lot of work to be done and it's got to be done in a collective fashion. So again, you know, to the second bullet point, we want to collectively solve these issues and provide a future that's a little bit cleaner than it is right now. We want to improve communication and collaboration across all stakeholders in publishing, not just universities and publishers but publishing vendors too.
You know, if you're at Atypon or Silverchair, right, you're hosting all the content; you need to be aware of these issues as well. And we want to foster a culture of shared responsibility, not finger pointing: shared responsibility, where when something comes up, there is a way to disseminate that information so other groups aren't negatively impacted down the line. And I think, you know, the big question, and we'll probably get to this as part of our panel discussion, you know, we've been talking about this for the last three days, but how do we get there?
It's great to have all the aspirations, but eventually you have to execute on them. And I think that's the goal for the next little while: how do we take those steps forward? How do we put those plans in place to, you know, get our industry back on solid footing and prevent some of these issues that have cropped up? And thank you very much.
Thank you. Thank you all. We do want to open it up to questions from the audience. But before we do, I'm going to use my moderator's prerogative to ask my own question, if that's OK. So we've been talking a lot about it takes a village and we've had a lovely chat about how we're all working together as friends to solve this problem.
But we're talking about this because there's a problem, and there is a villain in our village. Who or what does the panel think that villain actually is? Hylke, I'll start with you. Thank you. So, to stir the pot a bit, I was tempted to say the impact factor, and I know you'd love to respond to that, so I'll give you the opportunity to respond.
But on a slightly more serious note, and also thinking back to the discussion yesterday, I think it was Luigi who said that there really aren't any winners in this scenario, especially thinking about paper mills and fabricated research. It's bad for research institutes, it's bad for researchers, bad for publishers, bad for funders, bad for society, et cetera. I also think it's important to keep in mind that of course we're speaking about the exceptions here, right?
The vast majority of researchers are bona fide, following the ethics and the standards of their community, doing everything with the best of intentions. And I think that is sort of the baseline, and that has also led to a system of trust. I think scholarly communication is largely a trust-based system. So thinking back to the village, right, where people can trust each other: at some point the village grows and it becomes a city,
and, you know, maybe we start to need a police force or civic institutions. In that context, I think the only true villains I would see here are those entities, paper mills, et cetera, that find ways to structurally exploit that system of trust at an industrial scale, which I think is the right way to describe it. Even, you know, the individual researcher who is under so much pressure that they see basically no other way out than to pay a couple of thousand for an authorship or a paper: I'm not condoning that behavior.
Absolutely not. But I'm starting to really see them as the victims of the story. I think it's those actors who saw an opportunity to really exploit that trust-based system at scale that I would point to as the villain. And the impact factor, of course.
Obviously the impact factor, why not? So, thank you. I'm actually going to agree with a lot of what Hylke says, but I'm going to phrase it slightly differently. I think what's important is for us to understand and differentiate between all the different ways that the scholarly record is being polluted. And the first one is one of unintended consequences.
Sort of going back to my first slide, if you remember: it's not the impact factor so much as the misuse of the impact factor and the H-index and all these other very reductive single-point metrics. I don't think the research assessment community intended for this to happen, but it's happened. Another reason things are going south is that a variety of stakeholders aren't really doing enough, or what they're doing is ineffectual.
Another reason is that a variety of stakeholders are just being ostriches and burying their heads in the sand. And so these are all factors; I don't think we can call them villains so much. As Hylke says, the true villains are the researchers and the facilitating entities, such as paper mills, that are deliberately setting out to defraud people.
So I think that's it: there are villains, and there are people who, through their actions, intended, unintended, ineffectual, or lacking, are facilitating the problems. Joey, do you want to add anything? I was just, uh, I think I'll talk a bit from personal experience and from a different angle.
I said where I was from, and back then I was a postdoctoral fellow. There are a lot of training courses, like the IRB and IACUC courses, the biological safety course and the radiation safety course, but I don't recall that I had a research integrity course. I'm sure that they do have it. From the list, I see four people, including myself, from the NIH.
So if anyone is from the NIH, please tell me that we have it now. Having said that, I think young scientists and juniors are hungry for papers, and you mentioned the misuse of the H-index and impact factors; sometimes we don't have a choice. To me, the best way of approaching this is to take time to publicize and educate these young scientists about what is right and wrong, to make sure that they know.
And what we're saying has to be delivered far more widely than it is now. I think that's the first thing. Yeah, and I would add, I think probably the biggest problem is that our incentives are out of whack. You know, the goal of research is to advance knowledge, disseminate knowledge, improve the human condition.
And we've created a system that's so wildly incentivized that it's ripe for abuse. So, again, it's a collective effort to change that whole culture. I'm not sure it's possible to do it, I doubt it is, but we can take steps in that direction. The other thing, I think, as an industry, and we talk a lot about this at COPE, is we're incredibly siloed. Publishers don't necessarily talk to other publishers about their problems.
Right? We kind of keep what's in our house internal and hope it goes away. But we don't share these experiences and these stories as effectively as we should, and I think if we did, a lot of the issues that we're facing right now might not have materialized. The problem is we allowed it to happen. I guess the question for us as an industry is: how long are we going to continue to let that happen?
At some point, we've got to take steps. I would completely agree with that. And I just want to add that one of the problems is the stigma: the stigma associated with retractions, the stigma associated with universities or other research institutes having someone on their faculty who's been implicated, and that's stopping progress. You know, one of the reasons that retractions can take so long is that a publisher will go to an institute and say, you know, we've had this allegation or we've seen this, can you investigate?
And they're, you know, faced with a wall. We need to have a society where we are willing to admit that there may be people in our midst who are intentionally or otherwise doing harm, and to look into it and not be stigmatized for it, but be rewarded for trying to make things better and trying to clean things up. Yeah, just as a quick follow-up. Again, one of the things we talk about within COPE, and I think it applies to Web of Science too:
it's not necessarily that these things happen to you, but how you handle them. Are you willing to admit there was a problem? Are you willing to take responsibility? Are you willing to update workflows and processes to prevent it from happening again? So I think it's, again, the reaction, not the fact that it happened, but how you handle yourselves as an organization once it's happened.
Great stuff. Thank you, panel. So questions from the audience, please. I think we had some online. Is that right? Thank you. We have a question online.
What are some of the approaches to maintaining trustworthiness and preserving the integrity of the scholarly record when the boundaries between peer reviewed, preprint and AI-generated content are blurring? That is a big question, and I'm struggling with where the commas are in that. Is it differentiating between peer reviewed and published, or AI and not AI?
So I wonder if we should take those as two different things. Maybe start with preprints and fully published content. I have to say a couple of words about that. So on the Web of Science platform, we have different areas: we have things like journals that are fully peer reviewed, and that's the Web of Science Core Collection.
And we also have a preprints index. So, you know, we always have this tension between the need for speed and the need for rigorous review. And I think the best way to sort of serve our community is to make sure that we have both types. Peer review is slow. Preprints are fast, but we make sure that those things are kept separate so people can decide with their particular use case in mind where they want to find their information and how much they need to trust that information.
And yeah, I think that's that bit. AI obviously crosses over into both, and that's a whole other can of worms. If I can follow on, I would agree with that. I don't have the data, but I'm sure, you know, the issues that we're talking about also happen with preprints; we should not be naive about that. I would hypothesize, though, that it's mostly prevalent with published research, the version of record.
Thinking back to the incentive systems, right, this is where people are still mostly assessed, so these perverse incentives would point more in that direction than preprints. But equally, we should not be naive, and I'm sure there are many preprints that might suffer from these issues. I would also think that some of the technology and the solutions that we're currently applying to the peer review process could very well be applied to preprints as well.
So I think that's an interesting avenue to explore. And I think there is indeed a second question in there, right, which could play into both published works, the version of record, as well as preprints: we see AI, of course, being applied to fabricate and manipulate research, but equally AI being used to detect it. So it's interesting how that's going to pan out. I'm not sure if Oleg is in the audience; Oleg always likes to do a shout-out for reproducibility, so I'll do that in his stead.
I do think it's also an area where we could focus more collectively, and I know work is being done, but we could give it a bit of additional impetus: really focusing on the reproducibility of research and letting the tide rise, because, you know, if the general level of reproducibility is higher, it's going to be easier to spot fabricated works that clearly are not reproducible.
And that, I guess, brings us into open research. At the moment, there's too much emphasis on the final published article. If we had transparency throughout the process, that would help as well with reproducibility and integrity as a whole. And then to make that link, maybe, you know, with linkages between the various stages of the research, from preprint to author manuscript onwards, you could follow the chain of development.
I think that could be a good indicator. Also, referring back to your presentation on all the context that Crossref can provide, I think that could be helpful too, you know, to establish that chain of development. Yeah, I think that's a good point actually: these aren't separate objects or items, they all relate to each other, and being able to follow that evolution of the knowledge and the relationships between them is key.
And also transparency: transparency about the level of review that items have undergone. I think the BMJ have a preprint server, but they actually do fairly rigorous selection on it; I think they have a fairly high rejection rate even for their preprints. So, just transparency about what review process something has been through up to that point.
Yeah, and I think the other big thing is that the container probably doesn't matter; it's all scientific content at the end of the day. So whether it's a journal article or a book chapter or a video, you know, I think you have to approach it in the exact same way. You hope that it's scientifically correct, but I don't think you can take anything at face value these days.
So, putting my publisher hat on, we publish a wide variety of different formats, and, you know, peer-reviewed journals and books get the most scrutiny by far. But, you know, we go through our videos, we listen to them, we look at the transcripts, we look at the slides, we look at conference proceedings. We don't do too much in preprints, but for anything that you're putting out there to the public,
I think you should take some responsibility for ensuring that it's actually valid, correct and appropriate to share. That's great. Hi, Lori Carlin, Delta Think. Something that's been on my mind this week as we've been talking about research integrity, and Pat and I actually had a little conversation about this yesterday as well, is the larger question of what this does to the reputation of scholarly publishing.
Our work is not just viewed within the research community; there is a public out there that is going to hear about this issue more and more and associate it with fake news. So, coming back to that trust and transparency, how do publishers address that issue to make sure that the public still relies on scholarly publishing, relies on research, and believes that there is truth being published?
And it seems to me that that needs to be part of the conversation as well. So I think the first step is owning it: admitting there's a problem and taking steps to fix it. Right, it takes a while to rebuild trust, but if we try to hide behind what's going on right now, it's only going to get worse. So, again, go back to the idea of transparency.
How do we fix this? Transparency, being transparent and being honest, I think that's the first step in what is inevitably going to be a pretty long process for all of us. If I could add: yeah, I agree. I think owning it is exactly the right way to frame it, and also explaining what that means.
Right, what are the practical actions that are being taken? What is the process to announce if something has gone wrong? And really explaining how these things work, what a retraction means and what it does not mean. I think, you know, you cannot explain enough how scholarly communication works in and of itself; there's the whole process of self-correction, which works on longer cycles.
So: owning it, explaining it, not assuming that people understand how this works, being proactive about taking steps to fix it, and talking about that. I would agree with absolutely everything that Pat and Hylke have said. And perhaps we should be more cognizant of the fact as well that the audience for scholarly communication has changed dramatically.
And yet we still write as though we're writing from scholar to scholar, and perhaps a little bit more context for the broader lay audience would be really helpful. You know, there may be a sense that because you've read it in a trusted source it can be taken as settled; we perhaps need to explain that all sources are not equal, and that even within the scholarly corpus, it's not written in stone.
It's the provisional state of knowledge now, and things are going to change as we learn more. And I think that's something that often gets mistaken: just because something was true yesterday doesn't mean it's going to be true in 10 years' time. That's not necessarily fraud; it's just the way the scientific process works. Yeah, just to quickly add to your point about the universities later this year: I think that's quite important, not only because of the universities themselves, but also because whenever I see my juniors and students, I look back at myself and think that it's quite important for youngsters and junior scientists to get this from the beginning.
So if someone asks me what to do, I'd probably say go back to the basics, provide the systems, and let them know, so that they do not make a mistake that someone has already made. I think that's important. So, we've got minus one minute. Let's go for one more.
Um, I just wanted to ask: in this age of altmetrics and the spreading of research beyond just the scientific communities, I work for a society that publishes, among other things, transgender research. And often we find in our altmetrics that what is being picked up and spread has nothing to do with the science of it, but more with the politics of it.
Has that been part of your discussions, your considerations on these matters? Because it can create a discussion that we don't anticipate, that we're not sure we want to be a part of, or maybe have to be a part of. But what are your thoughts, if any, on such matters? Just a short answer. I think the issue is quite important.
And oftentimes, even myself, when I go back home, we normally blame the mega publishing systems and publishers and so on, and also how we are playing the system. But I think you're right that the system is not always the problem; sometimes, or quite often, it's politics and politicians, all the way back to the top. They have to be willing to see this problem, and I don't think the system or publishers can solve the problem by themselves.
We have to work together, top to bottom. For me, the question underlines a bit of the discussion that we just had, right, where I think there is this dual responsibility for all of us: on the one hand, to make sure that we do everything we can to protect the integrity of the scholarly record and that what we publish is as good as it can be, while at the same time also explaining that,
you know, not every article is always 100% correct; that's how research works. So also explain to politicians and the general audience how that works, and not to take just a single soundbite from an individual paper and base policy on top of that. And I think the big thing is we control what we publish, right?
So if it's correct, if it's gone through a rigorous review process, you know, I think that's our responsibility as publishers. What people do with it we can't really control. You know, in the case of SPIE, we publish across a wide variety of areas, everything from medical devices to missiles, so we're incredibly popular in some areas and not that well liked in others. We can't control that.
Our duty is to serve our community, publish in the areas where these people are doing research, and disseminate scientifically factual, objective research that hopefully is used for good. Sometimes it's not. Fantastic, well, thank you all. That's actually an excellent question to take us into a lunchtime discussion, I think.
Thank you all for your attention. And if you could just join me in thanking the panel for their presentations. And discussion.