Name:
Addressing Research Integrity with Identity Verification
Description:
Addressing Research Integrity with Identity Verification
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/bcd0c8aa-81ff-408a-aeb2-5d8c008e4e62/videoscrubberimages/Scrubber_1.jpg
Duration:
T00H54M32S
Embed URL:
https://stream.cadmore.media/player/bcd0c8aa-81ff-408a-aeb2-5d8c008e4e62
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/bcd0c8aa-81ff-408a-aeb2-5d8c008e4e62/SSP2025 5-30 1100 - Session 4C.mp4?sv=2019-02-02&sr=c&sig=EN%2FI1in70Ci%2Bv8HkOHseDsgPjzQmufPu4JRZY62UBzc%3D&st=2025-12-05T20%3A58%3A07Z&se=2025-12-05T23%3A03%3A07Z&sp=r
Upload Date:
2025-08-15T00:00:00.0000000
Transcript:
Language: EN.
Hello and welcome to this panel that we're going to be having on addressing research integrity with identity verification. You may have noted that the program itself stated that Hylke Koers, the Chief Information Officer at STM Solutions, would be moderating this panel. Unfortunately, Hylke has been prevented from traveling to the event.
And so I'm standing in his place. So I am not Hylke, obviously, though you may have thought that Hylke was a female name; it is a man's name. My name is Caroline Sutton. I'm the CEO at STM, and I am joined today by three excellent panelists. Joining me is Ralph, who is Senior Director of Strategic Partnerships at the American Chemical Society (ACS).
We're also joined by Tim Lloyd, founder and CEO of LibLynx. You will have seen Tim, of course, active with SSP, on stage during the lunch yesterday, and perhaps last night at the dinner. And finally, Teresa Fucito, who is Director of Content Operations at AIP Publishing. We look forward to talking to you about researcher identity work this morning, over the next hour, and we will have some time, about 15 minutes at the end, for your questions and comments about what we've been working on.
So let me see if I can get these to advance. OK, I'll do this instead. Let me just hit Next then. There we go. Am I handing over to you now, Ralph, for this bit, or is this me? No, I'm still going. OK, apologies about that, everybody.
Just a reminder here as well from SSP that we have the core values of community, adaptability, integrity, and inclusivity. And sorry, something's happened here in terms of the other slide I had. Yes, and I want to remind everyone of the code of conduct. I'm sure you've seen this at other presentations during the event, but this is really, really important, and we will be abiding by it during this session as well.
With those basics covered, I'd like to hand over to my colleagues to talk about fraud to friction, and a real-world example here. Thank you. Thank you. To start us off, I just wanted to say I think that we are all really familiar with the current landscape: research integrity is under enormous pressure, not just from AI-generated content, but from persistent, highly organized paper mill operators.
It's fair to say that the Hindawi incident was a wake-up call, but unfortunately it wasn't an isolated incident. Every publisher is vulnerable. But why? Because the pressure to publish faster has left cracks in the system, especially around identity. But speed alone is not the only driver. We also need to consider the broader academic incentive structure of publish-or-perish culture, which can unintentionally reward volume over rigor, making both individuals and institutions susceptible to manipulation.
Identity exploitation is a symptom of a larger systemic pressure, and our response must address both the operational gaps and the structural drivers. So I want to share with you today a real-world problem. It's Doctor Wren and the paper mill. This is fictionalized, because I wanted to protect sensitive information, but it is all very relevant. So a paper entitled "Novel Biomarkers in Fill-in-the-Blank" is submitted to a reputable journal.
Clean formatting, clear methodology, recommended reviewers. Everything looks great. The editor assigns an author-suggested reviewer and receives a glowing review within days. The paper is published within three weeks. Wonderful, except six months later a whistleblower reports irregularities. The authors don't exist.
The emails are fake. The affiliations are unverifiable. Worse, the author list has been altered not once, but twice. Our investigation reveals the authorship was sold, the paper was ghostwritten, and the reviewer is part of a manipulated network. One of the listed authors had no idea their name was even used. This wasn't a fluke. It's a business model.
Next slide. Yep, yep. This kind of fraud thrives on gaps in accountability. That's where identity verification matters. Approximately seven years ago, AIP Publishing started requiring ORCID for contributing authors, and we're currently working toward expanding that to include all corresponding authors.
And since many of our authors are reviewers, we've even discussed the possibility of requiring ORCID for reviewers as well. And we know it's not just about a single solution. AIP Publishing also adopted the use of the CRediT taxonomy to increase transparency around author roles, but transparency only works if it's tied to real, verified individuals. Researcher identity verification is not foolproof, but it's one of several critical tools that, when linked together, can strengthen research integrity.
There's still a big question around where identity verification belongs in the workflow, especially for reviewers. Should it be at invitation, at registration, or earlier? This is where the editorial process meets the technology, and I know that Ralph and others on the panel may have thoughts on that. Yeah, and the scenario that Teresa just described is probably familiar to other publishers in the room.
It certainly is to ACS. I mean, we're battling submissions with fake authors and submissions with fake reviewers; submissions with that kind of identity manipulation are becoming much more commonplace. And if you go to the next slide. Yep, back one. Sorry, that's me. I think that as an industry, we are starting to get some awareness that this is a problem.
I found this article in Learned Publishing that came out just earlier this month, in fact. It's an open access article that Wiley published; you can look it up. The authors were starting to get questions about a paper that was published that they had nothing to do with. Someone had fraudulently impersonated them and was publishing research work in their name.
And this is an article that looks at that scenario and, as an industry, kind of puts the question out there: how big is this problem? And is this something that publishers really ought to pay attention to? So that's what we're here to talk to you about today. We have some recommendations for us to consider. I have to note these were Norwegian researchers, and I live in Norway.
And so that has been hitting all of the news, so we're all aware of these things now. Yes, so we're wondering if you also are experiencing this in your day-to-day work. So we've set up a Menti: if you go to menti.com you'll see the code here, 83603910. Or, of course, you can also scan the QR code there. We'll give you a few moments to log in, and then we will shift over to taking a look at what we start to see emerging.
To get rid of this. All right. I've got to just get it. What are we seeing here, Ralph? Some have experienced it themselves, but an almost equal number are experiencing something similar.
One person is shocked. I like the folks that are saying, yeah, this is new, but it doesn't surprise me. So definitely leaning toward yes, seeing something similar or exactly this. Yeah, yeah, good. Great, we had a second question here as well, didn't we? Go ahead.
Oh, is there? I think, yes: do you have other scenarios to share? Anonymized, of course. We can give a couple of minutes if you'd like to also throw in some thoughts there. Do I need to advance this for you? Why don't we... we'll skip this one for now.
Yeah, get back to our slides. All right. Actually, should I carry on? Yes? OK. All right. Great, OK. So I'm going to take you through the results of a working group that we've had within STM that Tim and I have been involved with for, gosh, coming up on two years or so.
It's been quite some time. You can see the members of the working group here; it is a very good representation across the industry, from both publishers as well as solution providers. And we've been studying this problem, as I say, for quite some time. We have produced two reports, one that came out in October of 2024, and another one that came out earlier this year, in March of 2025.
The first report really lays the background and kind of makes the case that we as an industry need to be focused on this problem. And the second report articulates some recommendations that we have, and we're going to be sharing some of those recommendations with you here today. But the problem that we're facing here, look, we all know that research fraud is on the rise.
But I want you to think about what we as an industry have done to date to combat research fraud. By and large, we're focusing on the content itself that is coming into a submission system. We have ways of detecting duplicate submissions. We have ways of detecting paper mill submissions. We are looking at images to see if they've been manipulated. We're looking at citation networks. We're looking at things that have to do with the content itself.
But when we in this group took a survey of our own research integrity offices to ask what kind of research fraud we are experiencing, I want you to note that many of the things we are experiencing today are related to some form of identity manipulation: suggesting fake reviewers is a very common one; fake guest editor applications; claiming fake coauthors, like I mentioned on that paper that just came out.
So these are things that have to do with identity manipulation of some type. As an industry, we have done very, very little to really focus on identity verification, and we're kind of leaving the door wide open, frankly, for exploitation like we are starting to see today. So what can we do? What do we mean by this? Well, let me give you an example.
On my very own website. This is our publishing center that we put out, powered by ChronosHub, a few months ago. And if you have not submitted before, you come and you can simply register for an account, and we just ask for your email address. Well, there is nothing preventing me from saying I am TimLloyd123@gmail.com, and I can go ahead and submit an article and impersonate Tim, and we would never know the better of it.
That's the kind of thing that we are saying we need to do better on. Because in a typical submission workflow, when the person controlling the submission also has the ability to impersonate reviewers or impersonate co-authors, and can actually be the same human being behind all of them, that creates a system that's ripe for exploitation like we are starting to see today.
So what can we do to solve this? Well, we looked to other industries, like the financial services industry and the legal services industry, and they have a very robust, rigorous know-your-customer process in place. And what we're saying is that we as an industry need to start adopting more of what they are doing, to really verify the identity of the researchers that, frankly, we are all doing business with.
So what does that look like? We think the different elements of this framework would be: do an assessment of how much trust you want to have for a particular situation within your submission workflow; decide what kind of verification steps you can employ in order to verify the identity of the researcher; evaluate at different points in the workflow; and make decisions as you go.
Take action as you go. This is the framework that we're putting out there, suggesting that we all be thinking about. And we have articulated some objectives and principles as well. Look, we know we have to be inclusive. We know that we do not want to disenfranchise early career researchers, for example, that may not have a long pedigree to prove who they are.
And so that's a very important thing. Proportionality: this can't be incredibly onerous; it has to be proportionate to whatever role the person might be playing within your ecosystem. It obviously has to adhere to privacy principles, it has to be feasible, and then there's accountability. I want to highlight here that this is kind of like publisher accountability. There were a couple of questions.
I've heard some questions in some other sessions here asking, do publishers really feel accountable to this identity verification process? And I think, by virtue of the fact that we have been working together for so long and that you're all here, there is an awareness of accountability here. And that's awesome, because I think that we all have to really work on this together.
So when I talk about assessment, you have to think about real-world scenarios where different levels of trust are employed. When you take money out of a cash machine, there's a certain level of trust. When you go and verify your identity for air travel, there's a higher level of trust. We certainly hope that when someone has the ability to launch weapons of mass destruction, there is a higher level of trust and a higher level of verification in place for something like that.
So if we bring this to our industry, you might choose to have one level of assessment for an author submitting to you. You might have a heightened level of assessment for a reviewer reviewing an article. You might choose to require an even stronger level of trust for a guest editor who is actually making decisions on what should be published. And this is all up to each publisher, or each journal, to decide.
What is right for them. Now, when I talk about trust, we think there are two dimensions to trust here. One is: how certain are you that you know the identity of the person interacting with your system? That's the evidence of that person's individual identity. And the other is: what evidence do they have that they are a bona fide researcher interacting with your system?
And it's the combination of those two dimensions that builds these levels of trust. So if you really don't know who you're interacting with and they have no kind of pedigree to offer, you're in a zero-trust situation. If you don't have confidence in the individual but they show historical pedigree, maybe you have a little bit of trust, low trust. And you can see how these things build to that high-trust level, where you have both dimensions in place.
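The two-dimensional trust model described here can be sketched as a small function. To be clear, this is an editor's illustrative sketch, not the working group's formal framework: the level names and the mapping for a single satisfied dimension are assumptions.

```python
# Illustrative sketch of the two-dimensional trust model: evidence of
# individual identity combined with evidence of a research track record.
# Level names and single-dimension handling are assumptions.

def trust_level(identity_verified: bool, has_pedigree: bool) -> str:
    """Combine the two trust dimensions into a coarse trust level."""
    if identity_verified and has_pedigree:
        return "high"   # both dimensions in place
    if identity_verified or has_pedigree:
        return "low"    # only one dimension, e.g. pedigree without identity
    return "zero"       # unknown person, no pedigree
```

A publisher could then compare the computed level against the level of assessment chosen for a given role (author, reviewer, guest editor).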
And that's really what we think we ought to be shooting for. So let me bring this home a little bit more. If you log in with what I'm calling an opaque email address, like a Gmail address, that's the scenario that I showed you; that's a zero-trust scenario. You don't know who they are, and you can't verify what they've done in terms of their academic bona fides.
I'm going to say that if you only log in with an ORCID iD, you're kind of in that low-trust scenario as well. Really, anybody can create an ORCID account, right? I can go create one today. So you really don't know who you're interacting with. Pin that thought for a second, because I have something more to say about this in a moment. If you log in with an institutional identity, OK, now that institution, presumably before they gave you an account, has looked at documents.
They verified you as a person; they know who you are. And that institution can convey that to the publisher. That's a more high-trust scenario, because now I know who you are, and I can know what you've done. Now, logging in with an ORCID account that has trust markers is also a high-trust scenario. These trust markers are things that publishers can push into an ORCID account, so I can't just do that on my own.
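As an aside, pushing a trust marker such as a verified affiliation into an ORCID record is done through the ORCID member API. The following is a hedged sketch of how a publisher-side integration might build such a request; the payload is abbreviated, the token is a placeholder, and actually posting it requires a member API credential with the appropriate update scope — consult the ORCID v3.0 API documentation for the full record shape.

```python
# Sketch: build a request to add an employment affiliation (a "trust
# marker") to an ORCID record via the ORCID member API v3.0.
# The payload below is abbreviated; the Bearer token is a placeholder.

API_BASE = "https://api.orcid.org/v3.0"

def build_employment_request(orcid_id: str, org_name: str, city: str, country: str):
    """Return (url, headers, payload) for POSTing a verified employment
    affiliation. Sending it (e.g. with requests.post) needs a member
    API access token with an activities-update scope."""
    url = f"{API_BASE}/{orcid_id}/employment"
    headers = {
        "Content-Type": "application/vnd.orcid+json",
        "Authorization": "Bearer <member-api-token>",  # placeholder, not a real token
    }
    payload = {
        "organization": {
            "name": org_name,
            "address": {"city": city, "country": country},
        }
    }
    return url, headers, payload
```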
And so that's what's building the pedigree that we can inspect, and that's a more high-trust scenario. And frankly, document verification services: I mean, there are services that will scan your passport to verify who you are as a human being. It may be more difficult to verify your academic bona fides that way, but that gets you some level of trust.
And then finally, a direct contact. I mean, if you can't do any of the above, having some kind of direct contact between the editorial office and perhaps a guest editor, for example, might be a way to get to that higher level of trust. So I want you to focus on the fact that we have this range of methods that largely we already have in place today. We're not inventing anything here.
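The range of methods just described maps roughly onto trust levels. Here is a minimal sketch of that mapping; the specific scores are illustrative assumptions (the reports leave calibration to each publisher), not values from the working group.

```python
# Illustrative mapping from verification method to trust level, following
# the walkthrough above. Scores are assumptions, not recommendations.

METHOD_TRUST = {
    "opaque_email": "zero",             # e.g. a Gmail address alone
    "orcid_plain": "low",               # anyone can create an ORCID iD
    "institutional_sso": "high",        # federated authentication
    "orcid_trust_markers": "high",      # publisher/institution-asserted markers
    "document_verification": "medium",  # passport scan: identity, not pedigree
    "direct_contact": "high",           # editorial office verifies personally
}

def assess(methods):
    """Return the strongest trust level achieved by any supplied method."""
    order = ["zero", "low", "medium", "high"]
    levels = [METHOD_TRUST.get(m, "zero") for m in methods]
    return max(levels, key=order.index)
```

The point, as above, is that all of these methods already exist; the sketch just makes the comparison explicit.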
We just have to use some of the tools that we have in our industry today in order to solve this problem for us. So I'll hand it over to Tim, and he can show this more in a real world scenario. Okey dokey. Thanks, Ralph. There are seats in the front two rows, so if any of you at the back would like to sit down, I'm happy to pause for a second or we're good.
OK, great. Thank you. So let's walk through a simple example of what this could look like in practice. This is the made-up Journal of Identity Prototyping. And to be very clear, this is a basic prototype to illustrate the workflow. It's not a designed product in any sense.
So don't take this and give it to your UX people and say, copy it, please. So we're about to enter an editorial workflow, and we're asked to sign in or register. Standard stuff. The first thing that's different is that our login page offers two options in addition to the usual email address.
The first is to sign in with your ORCID account. And the second is to sign in with your organizational single sign-on. As Ralph noted, both of these methods can offer stronger trust in user identity. So let's assume the user takes the familiar option of entering an individual email address. In this case, it's a Gmail address. We know this doesn't prove anything other than that you control the Gmail account associated with this address.
So you enter the correct password and you sign in. And many traditional, maybe most traditional, editorial login workflows would be done at this point. But we want a higher level of trust. We want verification that you are the individual you're purporting to be. So we offer you a menu of options. In this case, the options, from top to bottom, start with verifying with your institution.
That's federated authentication, or you may be familiar with it as Shibboleth. That's where you select your institution, you're redirected to that institution's identity solution, and you enter your email address or other credentials and validate there before returning. The second option is that you could verify that you have an institutional email address. That will generate an email that you can click on to prove that you control that email address, and that it has a domain associated with an institution.
You could verify with ORCID by signing into a personal account. You could verify with a government document, some sort of government-issued identity card like a passport or driver's license. As Ralph noted, there's a variety of services that do this; I used one recently to validate a bank account, and they're pretty simple and painless. So this offers a route for those who don't have an institutional relationship to leverage. Or, finally, verify another way.
So this could be a conversation with someone in the editorial department who can then personally verify your identity. So let's use the first option, federated authentication. We see a screen asking us to confirm our institutional affiliation; this is so the workflow knows where to send us. We select the University of Somewhere. We're forwarded to the university's login page.
We enter our credentials, and that organization verifies our identity. Assuming we're successfully verified, we return back to the editorial system with sufficient trust, "You are verified" in green, to be able to continue with our submission workflow. So verification can be fast. In this scenario, we started with our Gmail, then we selected our institution and had our credentials verified with them before returning to the submission.
That should take less than a minute. Verification can take longer: if you're using physical documents, they need to be scanned, submitted, and reviewed before verification. Scanning might be just clicking a picture with your phone. This example uses an online passport verification service, where you take a picture of your passport page and submit it to a third-party verification service, presumably several minutes' work once you've got the documents in hand.
But this isn't a process you'd be expected to go through every time. Once someone has verified this way, as a publisher you could decide how frequently to require reverification. And verification can be blindingly fast: in this scenario, we're logging into an ORCID account that contains trust markers from institutions confirming your affiliation.
So you simply enter your ORCID credentials and you're verified; this takes a few seconds. So hopefully that prototype workflow gives you a good sense of what this looks like. Let's review some of the recommendations from our STM working group. OK, our core recommendation: introduce user verification on editorial platforms. Hopefully that one would be pretty obvious by now.
It's basically what we just showed you. More specifically, personal opaque email addresses, like Gmail and Yahoo, shouldn't be acceptable for verification purposes, because they can't verify anything about the user's identity or expertise, simply that the user controls that email. Now, it's not a problem to use an unverifiable email address for correspondence purposes on an ongoing basis; it just needs to be coupled with an additional step for identity verification.
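In code, the core of that recommendation is a one-line classification at registration time. This is a minimal sketch; the domain list is illustrative and a real system would use a maintained free-mail registry.

```python
# Minimal sketch: an opaque webmail address can still be used for
# correspondence, but it should not count as identity verification on
# its own. The domain list here is an illustrative subset.

OPAQUE_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

def verification_status(email: str) -> str:
    """Classify an address for verification purposes only."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in OPAQUE_DOMAINS:
        return "needs_additional_verification"
    return "eligible_for_domain_check"  # e.g. confirm an institutional domain
```

This mirrors the preprint-server behavior Ralph mentions later: flag the Gmail address, then route the user to a stronger verification step.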
So do the work up front. They could use something else for correspondence. Decide how often you want to re-verify. There we go. Our next recommendation is for organizations in our industry, all of us, to contribute trust markers to ORCID records. The example here shows the difference between an ORCID record listing a published paper,
that's the one on the left, versus one where the publisher has verified the publication. Big difference. Another example would be an ORCID record listing a research affiliation, versus one where the research organization has verified that affiliation. Our third recommendation is simply to work together as a community to improve the framework.
So the thinking to date largely reflects the work of a pretty small group of people, including myself and Ralph, albeit with the experience of the organizations behind us. To make this embedded and sustainable, we need more collaboration. So two particular areas for collaboration are listed here. One is recording and sharing aggregated and anonymized data. So one example is user journey metrics, which will help us understand where the friction lies in the process and where people get stuck so we can engineer better workflows.
A second example is data on the correlation between verification and outcomes, so we can prove whether or not verification is effective at reducing incidents of fraudulent research, because at the end of the day, that's what this is for. Another area for collaboration is using our collective insights to create an improvement feedback loop. For example, what are appropriate trust thresholds? Should federated authentication get a higher score than government documents?
Are there other trust measures to consider? It's important to minimize the impact on legitimate researchers, especially those that lack access to the technology and infrastructure that we're used to in North America and Western Europe. Practical experience will help us better understand how to improve those use cases. And lastly, we can reduce community costs by sharing infrastructure and research efforts.
So our working groups are an example of this already. We can do more as our community starts to tackle these recommendations. We also recognize there are plenty of challenges to operationalize these recommendations, which is why we're engaging with you in this way. One of the most obvious is that not everyone has access to verification methods, hence the need for flexible approaches that can include the sorts of conversations that editors presumably, hopefully, already have to validate contributors.
Another is the risk that the additional friction caused by verification frustrates legitimate researchers. And while this friction may add mere seconds for some users, for others it will take longer. We recognize this, and our proposals rely on the belief that the benefits of reducing research fraud outweigh the costs to both publishers and the researchers directly impacted by these recommendations.
This, in turn, relies on us recognizing the costs of doing nothing, which include reputational damage and the considerable cost of dealing with rising levels of fraud during and after the publication process. Finally, some ideas for future work in this area. A verification pilot is already under consideration, which will turn our clickable prototype, the one I showed you just now, into one or more working applications, so we can test real-world scenarios for the various verification methods.
A calibration pilot takes this one step further by testing how the outputs can be integrated into editorial workflows. For example, not all publishers and not all publications face the same level of risk, and therefore don't require the same level of verification. Should a higher level of verification be required for cancer research versus 17th-century musicology? If guest editor roles are higher risk than researcher roles, should they require a higher level of trust?
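The calibration idea just described amounts to a policy lookup: required trust level as a function of role and a publication's assessed risk. The table below is purely illustrative; the values are assumptions for a sketch, not working-group recommendations.

```python
# Illustrative calibration table for the pilot idea above: required
# trust level as a function of role and journal risk. Values are
# examples only; each publisher would set its own.

REQUIRED_TRUST = {
    # (role, journal_risk) -> required trust level
    ("author", "low"): "low",
    ("author", "high"): "medium",
    ("reviewer", "low"): "medium",
    ("reviewer", "high"): "high",
    ("guest_editor", "low"): "high",
    ("guest_editor", "high"): "high",
}

def required_trust(role: str, journal_risk: str) -> str:
    """Look up the required trust level, failing closed for unknown cases."""
    return REQUIRED_TRUST.get((role, journal_risk), "high")
```

Failing closed (defaulting to "high") for unknown role/risk combinations is one possible design choice; a pilot would test whether that is proportionate.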
User testing is obviously needed to ensure that the user experience is simple and intuitive, and offers no more friction than necessary to address the risk assessed for a particular publication. And that's down to you to assess. A related issue is exploring additional verification methods. So we're most familiar with the ones we use regularly. But we recognize that there will be communities outside North America and Western Europe that rely on different workflows and technologies for identity verification.
We need to identify those and incorporate them into our planning. There are lots of stakeholders involved in making this work who live outside of editorial workflows: for example, the research community themselves, the institutions and publishers that can contribute trust markers, and the technology and vendor community. We need to engage with these different stakeholders to ensure their expertise and needs are reflected in solutions.
I mentioned shared infrastructure earlier. There's a piece of work to explore what that might look like, recognizing the importance of anonymizing data and avoiding anything that could be construed as collusion. Sharing data would also enable us to perform the cost-benefit analysis needed to ensure a sustainable investment in this area.
It seems intuitive that it's a good investment, but we should collect the data needed to prove whether that's the case. And now let's have some questions. Thank you. Thank you, Tim. I do have some questions. But before getting into those, I just want to pick up on a word that you stated, Ralph, that I think is so important.
And that is accountability. I want to emphasize why it is so important that we as publishers are holding that accountability for the quality and the correctness of the record. Most of my time is spent in advocacy and policy-related activities, and one of the things that I'm always pointing out is that the difference between types of content, and why you need publishers, is that there is accountability in the system.
When we're thinking about the version of record and the work that publishers do: if something is not correct, we will hold ourselves accountable for correcting it and doing the work to do so. But equally, we're taking accountability so that it doesn't become incorrect in the first place. And I think that is missing from a number of the other, alternative systems that get talked about.
So I just want you to understand how important that work is. For me, when I'm out talking about the work that you do, it's why we are still so important as publishers in that role. Let me turn a little bit now. We've seen the framework, we've seen the recommendations. So I just have to get to that question: is this actually going to solve the problem?
Teresa? Well, will it solve the problem? I don't know. Or how are you thinking about the implementation of this, in terms of attempting to solve that problem? Well, for us, we're looking at taking a layered approach, starting with low-friction steps like ORCID, now rolling it out to the contributing authors on articles, and gradually building to stronger safeguards.
But it's also about balancing that rigor practically, and phasing it in across the author, reviewer, and editor journey to reduce the friction points. That's how we're approaching it at this point: phasing it in one process at a time. I think that makes some sense, just trying to start somewhere and then add on, and thinking about it as a process. And we see that with a number of other things that publishers are needing to implement at this time.
So how do we weigh, then, the pros and cons, both long- and short-term, in this? Yeah, Tim or Ralph, would you like to pick up on that? So I agree with starting; we need to start somewhere, and there are steps that we can take that are not that onerous. What really crystallized this for me: when I did some research with this working group, I went to visit some of the preprint servers in our industry.
And I noticed the preprint servers are actually doing more than we are as publishers with our submission systems. When you go to submit a preprint, for many of the preprint servers, if you only use a Gmail address, it flags you and says, well, wait a minute, please give me your institutional address. And if you don't have that, you get a message that says, OK, you can submit, but it's going to go through increased scrutiny, because we don't like just having a Gmail address.
I have yet to see a publisher site that does something like that, and that's backwards. I mean, we ought to have a higher bar than a preprint server does. So when you talk about pros and cons, just that bare minimum of having some type of identity verification for authors coming in has got to help. Yeah, yeah.
And if I were still a publisher myself and for many years, I had a very small publishing house, I would be thinking about you. I'm concerned culturally about introducing things that individuals, authors in particular, are going to move away from because of if there's this extra step in the submission process or some other hurdle I'm going to have to go through. And of course, we do have some examples from previously from credit and other types of things that we've introduced.
How are you thinking about those types of trade offs, and how do we culturally make some changes in our communities that we're working with to understand this. I think I'll take a crack at that first. Some of the lessons that we learned from the implementation of credit, that it's not just about the technology. It is about it's a cultural shift. And you need to explain the why what's in it for them.
Why is this helping you? What's the benefit? We want to gain editorial and editor buy-in, develop our communication around all of this, and have some consistent enforcement around it, and the same is true, I believe, for identity verification. The earlier in the process it's introduced, the better, but it has to come with the need and the support for it.
And for us, we used CRediT along with ORCID; they're meant to work together, because ORCID tells you who did the work, and then CRediT is what work they did. So it just adds another layer of transparency to the research and to the content, and supports research integrity. Sure, yeah. Explaining the why is a really good point.
And there are different dimensions to explaining the why. It's a little difficult, maybe, to explain the why to a researcher who's confronting the workflow, but you can explain the why to your editors, to your editorial boards. I know that some of the publishers in our working group have started that dialogue, where some members of the team here have actually gone to their editors' conferences to explain why this is coming and to be prepared for it.
And, to hold ourselves accountable, here is the reason why. So that really resonates with me. Tim, did you have something? Yeah, oh, sorry, that was loud. Please, come loudly and boldly with your comments. I'm not eating the mic on this one. So one way to think about this problem is that it's really a series of problems, some of which we can solve in the short term.
Some will require more investigation and can only be solved longer term. So it's not a unitary thing we implement and we're done. For example, some fraudulent identities are being submitted on behalf of researchers who are ostensibly affiliated with institutions that are able to verify identities using well-understood methods like federated authentication.
That's low-hanging fruit; we can bar those out. If you start moving up the tree of complexity, then we've got researchers affiliated with institutions where the institution can't support these methods, or where there are barriers in place that make it difficult. For example, in India, many researchers have multiple affiliations and prefer to use a personal email address rather than communicate through multiple email addresses. And then at the top of the tree,
some of the really complex stuff: we'll find independent researchers who don't have institutional affiliations, who come from communities where they may lack easy access to any means of verification. So as we climb this tree, we can solve problems one by one. And what we're doing is increasing the effort, the cost, required by bad actors to game the system with fraudulent identities.
So will we ever get rid of it completely? Almost certainly not. But reducing it by a significant percentage would in turn reduce the amount of bad research downstream that we will expensively have to deal with. I think a good analogy here is the introduction of credit scores, which has taken quite a while now; the first credit reporting agencies were in the 1800s.
FICO was set up in the 1950s, and the first universal credit score was introduced in 1989 and started being used in 1995. I'm not saying that this will take anywhere near that long, but it's a journey, it's a process. And the key thing is that we start engaging with it as a community and recognize, as with credit scores, that it's not one and done. It's a process.
Sure. And whilst I can't direct everyone to start adopting this, clearly the more publishers that begin to move in this direction, the easier it's going to be for all of us, because you will be encountering similar types of solutions at each publishing house and each journal you look to submit to. Yeah, and let's face it, there is a potential first-mover disadvantage here.
We are introducing some friction into the system, and we as publishers really don't like that. We want to garner as many submissions as possible, and if you are potentially turning somebody away just because they perceive it to be a little bit too difficult, that's a problem. And we recognize that. That's why, while we are not agreeing to do this together, we think that to hold ourselves accountable we all ought to be looking to take the appropriate steps, each up to our own determination.
We're putting this out there saying, this is a problem, and here are the ways you can choose to solve it, and we hope that we all make that choice. Teresa, please. I just wanted to add: think about what other industries do. You have to authenticate yourself for your banking information, to log into your cable network, to purchase things online.
So requiring a valid way to authenticate an author and verify their identity, I realize, will introduce friction, but it is also something that's very commonly required. So I grapple with that; I don't really understand the resistance to it, because I pick my phone up and it checks my biometrics, and I have to log in for everything. So why is this different?
Yeah, and as you were all speaking, I was thinking that we talk about this as a cultural shift. We need to bring our editors along with us, our editorial boards. And you're starting to lay out some of those talking points and tell that story: we have all of these checks when we're logging into our bank, because what we are doing there is a very serious activity. The scholarly record, the scientific record, is a very serious thing too.
And so we should treat it with the seriousness we give other things in our life. Yeah, before I open to the floor, are there any further comments that any of you would like to make? Tim, please. I just wanted to ask everyone to think about it from the point of view of the bad actors. One of the reasons I suspect so much of this has happened in the last five to 10 years is that it's so easy to automate.
You can automate the creation of a paper. You can automate the creation of an identity. You can use AI to make extremely believable people to correspond with. So you can create workflows that may not have any humans involved at all. And by simply requiring some verification steps, you shine a light into dark alleyways. Bad actors like anonymity and pseudonyms.
If you force someone to use a real identity, it's very hard for them to keep committing research fraud on a regular basis. So the scalability of research fraud really starts becoming problematic once we start asking for identities. While there are many areas where it may be challenging for us to solve problems, simply starting to ask for identity verification regularly makes it really painful for these volumetric, scalable enterprises.
And as Ralph said, we've been letting them run free for a very long time, and the technology is helping them at the moment. Yeah, thank you. We'd like to open up for questions now. Oh, I'm so happy that someone immediately jumped up to the microphone; that's always a good sign, an active group. Please. Hi, thank you so much for the session today.
I wanted to ask about immediate action steps. I think what you presented today is a framework for verification at the beginning of the process. But think about editorial offices right now, today, that are receiving thousands of submissions with non-institutional emails. So first, would you advocate it as a best practice, even in the absence of one of these other systems, that if you get submissions with non-institutional email addresses, you do verification on the authors?
I also wanted to suggest something that we've done, another verification step I didn't see in your presentation, which is to ask the authors to provide a letter of institutional verification from someone senior at their institution whom they've worked for. So that's something else that can be done outside of a phone call. Another thing would be looking for the same email address in other published works.
So, going down those steps: before journals get this system, would you advocate that everyone start incorporating this into their submission-check workflows, basically for all authors that don't have an institutional email address? Yeah, my opinion is that starting with the front door, that author-facing front door, is probably the best place to start.
And your idea about the letter: those are the kinds of manual processes I meant, absolutely, the kind you might employ. So the idea of doing some kind of manual check specifically on Gmail and Yahoo email addresses, like I said the preprint servers apparently do: if you have the ability to take on that additional work, sure.
That would be a great first step. I think we should also be looking to the platform providers out there to start offering federated authentication. Many of us have used Shibboleth on our delivery systems, but we've never used it on our submission and peer review systems. We need to get that technology more available to all of us in that area. And contributing trust markers, like Tim said.
Let's build up ORCID. Let's build up those trust markers in ORCID so that we can inspect ORCID records and see if they have them. Those are what I see as some of the immediate steps we could be taking. Thank you. And the other bit that I really appreciate in this report is that it tries to recognize that not everything requires the same level of scrutiny, so we can also have those conversations with our in-house teams: how do our content and our communities line up, and how would we structure those checks?
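As an illustration of what "inspecting ORCID records for trust markers" might look like on the publisher side, here is a minimal scoring sketch. The record layout below is a simplified stand-in, not the real ORCID API schema; the field names (`email_verified`, `source`, `signed_in_via_institution`) are assumptions made for the example, loosely modelled on the kinds of signals discussed in the session.

```python
def trust_score(record: dict) -> int:
    """Count simple trust markers on a simplified ORCID-like record.

    Each marker is a signal that someone other than the researcher has
    asserted something about them, or that the registry has verified it,
    which is what makes it harder for a fabricated identity to score well.
    """
    score = 0
    if record.get("email_verified"):
        score += 1  # the registry has confirmed control of an email address
    # Affiliations added by the organisation itself, not self-asserted
    if any(a.get("source") == "organization" for a in record.get("employments", [])):
        score += 1
    # Works deposited by a publisher or Crossref rather than typed in by hand
    if any(w.get("source") in {"publisher", "crossref"} for w in record.get("works", [])):
        score += 1
    if record.get("signed_in_via_institution"):
        score += 1  # record holder has authenticated through their institution
    return score
```

A freshly minted record with no third-party assertions scores zero, while an established researcher's record accumulates markers over time; a submission system could use a threshold on such a score to decide how much extra scrutiny to apply.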
Teresa, I think you had another comment on the last question before we go on. Well, it's similar: there are the manual steps, and they can be labor-intensive. Another one is sending an email out to all of the contributing authors, saying, you've been named as a contributing author on paper X, and seeing if there are bounce-backs. But those are manual steps.
And I recognize that can be a bit labor-intensive. So if we can do things in a more systematic way, with automation, I think that would really help reduce the friction on your team. Thank you. Next question, please. Hi, Amanda. Oh, sorry, go ahead.
Tim, go ahead. I just wanted to reinforce the value of recommendation number 2, because it's very easy to jump straight to, oh my God, I don't want to put more friction in front of our editorial teams and our researchers, and how are we going to cope with that? Recommendation number 2 does not change, or need to change, your login user experience, and it's a long-term investment in our community.
So for every publisher here, it's really valuable for your researchers to have markers in ORCID. It not only helps you, it also helps the rest of the community. And it's a fairly painless thing to do. There's a bit of technical work, but it's not rocket science, and it doesn't have all the challenges that come with thinking through how you're going to impact researchers.
So please, if you're going to take some things away, one of them should be: go back to your technical teams and say, can we start exploring ORCID trust markers? Because there's a real win there for everyone. Sorry, please go. Thank you. Please, Amanda. Amanda, from IEEE. So we have, and I'm sure other publishers in the room have, a problem with reviewers.
And I know guest editors were mentioned for identity checks there. Do you have any plans or recommendations that you can maybe spout off now for doing this with reviewers, knowing, of course, that we already fight the battle of reviewer fatigue? This is particularly a case where, if everyone were doing it, it wouldn't be as painful. Go ahead, Ralph.
Yeah, so the reviewer scenario is similar to the author scenario I described. You could imagine, in the case where someone suggests a reviewer who only has a Gmail address: all right, flag, wait a second, do I really know who that reviewer is? You could imagine that when a reviewer comes to log into your system with just a Gmail address, wait a second.
So, I mean, I just think it's similar. And you could also, like Tim was saying, look at the ORCID account associated with that reviewer. Is this somebody who has published in the space they are being proposed to review in, or not? So I think it's all those kinds of heuristics that we need to be looking at. Thank you.
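The reviewer heuristics Ralph describes, a free-mail-only address plus no publication record in the manuscript's subject area, could be combined into a single vetting check along these lines. This is a hypothetical sketch; the inputs, the domain list, and the exact-match subject comparison are simplified assumptions for illustration.

```python
FREEMAIL = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}


def vet_reviewer(email: str,
                 published_subjects: list[str],
                 manuscript_subject: str) -> list[str]:
    """Return a list of flags for a proposed reviewer, combining the
    heuristics discussed in the session. An empty list means no flags
    were raised; any flag routes the invitation for manual review.
    """
    flags = []
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in FREEMAIL:
        flags.append("free-mail address: verify identity before inviting")
    # published_subjects would come from the reviewer's ORCID works;
    # a real system would use fuzzier subject matching than equality.
    if manuscript_subject not in published_subjects:
        flags.append("no published works in the manuscript's subject area")
    return flags
```

As with the author-side checks, the point is not to reject candidates outright but to make sure a flagged invitation gets a human look before it goes out.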
And I recognize that this costs time and effort, and time is money, at a time when I know there are a lot of money-wise pressures on all of our systems and all of our associations. But equally, I feel this is a moment when trust in science and the scientific record has never been so under threat, so absolutely everything we can do to ensure we have as clean and proper a record as possible is just so important as part of the argument for this.
Yeah, Tim. Sorry, no, please. Think about how much money we are going to be spending as an industry this year, next year, and in the coming years on research integrity solutions aimed at combating content fraud, because we let the fraudulent content in at the beginning. So when you're thinking about the cost of this, think also about the future cost.
We're having to tackle a lot of issues because we did not check the front door. And if we start checking the front door, I suspect a lot of that content fraud will be simpler to deal with; it will go down in volume. So there is a cost saving from doing this up front as well. It costs us a lot more otherwise. Like a lot of problems in society, if you tackle them up front, you don't have to expensively tackle them elsewhere.
I think that's a really good point, because I can imagine there are some folks in the room who recognize this as an issue but are thinking, how do I go back to leadership and sell this idea that is going to cost more time, money, and staffing, to do new additional checks or add work into our workflows? Is there anything more, Ralph, you could add in terms of how you present that upwards in your organization to make the case?
I agree, and I think the message back to leadership is something like: OK, we need to do a better job at our front door with identity verification; Gmail addresses don't cut it anymore. And there is the nuance that Tim mentioned: it is OK to use a Gmail address for correspondence. There can be very legitimate reasons why some researchers need or want to use something like a Gmail address for correspondence purposes.
That's fine. It's just not enough for identity purposes. If you want to use a Gmail address, you've got to go through an extra step. That's the nuance I think we have to get across to our leadership: we aren't changing things fundamentally here. We're not saying no more Gmail or Yahoo addresses. We are saying that if you want to use one, you have to pass another identity verification step in order to get into the system.
That's what we're saying. Yeah, and one of the things that we at STM will continue to do, as mentioned, is take a look at additional verification systems. For instance, China is obviously a major market in terms of submissions, and it's also where we've seen some of the issues related to this.
And I think that's an area, too, where we can continue to do work to support all of you, feeding these types of things back to lighten your workload. You won't have to go out and find all of these verification methods yourselves; we will be doing work to identify them going forward as well. Yeah, any further questions from the audience?
I know that we're starting to move towards the end of our session, but are there any last thoughts from my panel before we wrap up? And again, if there's any last question, please; we do still have about seven minutes or so. I guess I would just say: talk to your platform provider about this. Platform providers respond to their customers' requests, and we publishers are their customers.
So help us heighten their awareness and urgency to start building some of these verification methods into their systems so that we can leverage them. That's a good point, a very good point. Yeah, so during this session, of course, we were presenting this report, and it is available for feedback. So we're also interested: are you seeing things that we have missed, or do you have other thoughts?
We have the QR code if you'd like to access the report, and again, we're open to receiving feedback during our community consultation process right now. I also encourage you to share this with colleagues, not least those on your research integrity teams, again pushing on the fact that we need our access, identity, and technology policies to keep up with what we're experiencing today.
It looks like a long report when you first click on it, but it's actually in a big font, so don't be scared; it's quite an easy read, I would say. So, ladies and gentlemen and guests, thank you so much for joining us. I want to thank all of my panelists and all of you for a good discussion and for your attention this afternoon.
I think we all agree SSP is a great event. It's been good to see all of you, and I wish everyone a good rest of the day. Please join me in thanking our panelists. Thank you.