Name:
AI in Scholarly Publishing—Risk Versus Potential
Description:
AI in Scholarly Publishing—Risk Versus Potential
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/50cca24c-cb80-4c90-bdfe-32a99a1a6c68/thumbnails/50cca24c-cb80-4c90-bdfe-32a99a1a6c68.png
Duration:
T01H00M47S
Embed URL:
https://stream.cadmore.media/player/50cca24c-cb80-4c90-bdfe-32a99a1a6c68
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/50cca24c-cb80-4c90-bdfe-32a99a1a6c68/AI in Scholarly Publishing GMT20230315-150128_Recording_gall.mp4?sv=2019-02-02&sr=c&sig=t8NiT%2FiYDTAtD6fjVbP39fEXrUbkfOfoREH%2BCMoUvL4%3D&st=2024-11-19T19%3A27%3A32Z&se=2024-11-19T21%3A32%3A32Z&sp=r
Upload Date:
2024-04-10T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Hello, and thank you for joining us for today's discussion, "Artificial Intelligence in Scholarly Publishing: Risk Versus Potential." This is the second event in SSP's 2023 webinar series. I'm Jason Pointe, lead of SSP's Education Committee Webinars Working Group and also the moderator for today's discussion.
Before we get started, I would like to thank SSP's 2023 education program sponsors: Morressier, Silverchair, 67 Bricks, Taylor & Francis, and F1000. SSP is grateful for their support. I also have a few housekeeping items to review. Attendee microphones have been muted automatically. Please use the Q&A feature in Zoom to enter questions for the moderator and panelists. You can also use the chat feature to communicate directly with panelists and other participants.
This one-hour session is being recorded and will be made available to registrants following today's event. A quick note now on SSP's code of conduct. In today's meeting, we are committed to diversity, equity, and providing a safe, inclusive, and productive meeting environment that fosters open dialogue and the free expression of ideas, free of harassment, discrimination, and hostile conduct.
We ask all participants, whether speaking or in chat, to consider and debate relevant viewpoints in an orderly, respectful, and fair manner. Please scan the QR code shown or visit the SSP website to read the full code of conduct. At the conclusion of today's discussion, you will receive a post-event evaluation via email. We encourage you to provide us with feedback to help shape the future of SSP programming.
And now it's my pleasure to introduce the panelists for today's discussion. With me today are Dr. Sonja Krane, senior associate publisher at the American Chemical Society; Dr. Juan Castro, co-founder and CEO of Writefull; and Dr. Marie Soulière, senior publishing manager at Frontiers and an elected council member of the Committee on Publication Ethics (COPE).
Now, Melanie, if I can. Thank you. I want to start off today by just setting the premise for our discussion and why we're here today. Artificial intelligence, of course, is nothing new to pretty much anyone who would be on this call.
It's been present in some form or other throughout our lifetimes, although a lot of the time it's been science fiction, or it's been something more focused, at least publicly, on competing against chess champions. But over the last decade or so, it has been a useful tool in our realm of scholarly publishing. What really prompted this discussion, and the other discussions you see in the news these days, was the launch of OpenAI's ChatGPT platform to the public in December.
And you can see here the tremendous and immediate growth that followed. It's been the quickest app so far to reach 100 million users; it took only two months, as opposed to TikTok at nine. ChatGPT is very easy to use, and it's become very present in our daily discussions. It's an example of generative AI, as opposed to analytical AI. I know we'll be discussing both today, but just to set the premise of this discussion, I asked ChatGPT to develop the topics of discussion for today.
And you can see it gave me these four questions. I also had it answer its own questions. I'm not going through those here today, but I am going to post this material in chat so that everybody can look at the document and see how ChatGPT answered its own questions. The reason I'm not going to simply present its answers is that even ChatGPT says, when asked about the use of generative AI in decision making, that AI systems are not capable of replicating the nuanced and subjective judgments that human editors bring to the editorial decision-making process.
And that's true. And that's why today we have three experts who all have experience in developing AI applications for use in scholarly publishing, and even in the judgment of how and when those should be used. So without further ado, I'm going to hand over to Dr. Sonja Krane from ACS and have her give an introduction of herself and her work with AI.
Great, thanks very much. Can you hear me and see the slide? OK, great. Fantastic. Well, thanks, Jason, for the invitation to be part of this panel. I'll just go ahead and jump right into my slide, so we can move toward the interactive discussion part of today's webinar.
So this figure on my slide depicts the life cycle of an article through the scholarly publishing process, from submission to publication and beyond, and also the ways in which AI can be applied at different stages. ACS Publications has been working on AI tools to support various aspects of this life cycle. For example, all of our 800-plus editors at nearly 80 journals have access to a reviewer recommendation tool and a transfer targeting tool.
Reviewer selection is arguably one of an editor's most important and most challenging tasks. Our AI reviewer tool gives recommendations based on reviewing and publishing history, and feedback from users indicates an overall high level of satisfaction with this tool's recommendations. And the AI transfer tool helps editors select an appropriate destination journal for a manuscript being declined at the submitted journal, which in turn helps authors find a home for their work with one of our journals.
And it helps ACS Publications retain content and the investment that may have gone into the peer review of these submissions. But I want to point out that in our work, we aim to align with the recommendations of the COPE guidance document on AI in decision making, which, as a very high-level summary, recommends accountability and transparency. And I believe Marie will be touching on this document shortly.
The rest of the bubbles around the outside of the slide represent additional opportunities for applying AI in the publication life cycle. I've iteratively updated this slide over time, as the AI team involved in this work has generated new ideas and as progress has been made on various projects. For example, the process of recommending related content to readers of our journals was recently improved, as demonstrated by positive user feedback, by incorporating AI into that process.
And another recent addition to this slide is a new section representing the exciting work that colleagues of mine in production are doing in the post-acceptance space. Now, I recognize that updates are probably needed again, and on a really regular basis, to reflect the possibilities introduced by the recent explosive and potentially disruptive growth of authoring as well as graphics tools leveraging AI.
So I'm really looking forward to the discussion today. I hope it's very interactive, and I think now I will turn it over to Juan. Thank you, Sonja. Sure, thank you for inviting me to this session. My name is Juan, and I'm one of the founders at Writefull.
I'm also the CEO, but my background is really as a researcher. I just wanted to talk to you briefly about AI and about Writefull. So we've been providing language and metadata services using AI since 2016. We offer a set of APIs to speed up the processing of manuscripts throughout the publishing pipeline, and we also have author-facing tools to aid the writing process.
So AI is not new; it's being used across the board at different stages of the publishing pipeline. Here are just some services and APIs that we offer. Not all of them are applied sequentially; they are really just plug and play, and they are applied at different stages, switched on and off by publishers. So before submission, through Writefull, publishers sometimes offer authors a portal to check their manuscripts.
And this is for language correctness and also for structure, to make sure that manuscripts are in good shape and are not missing key parts; for example, authors should have an affiliation. Right after submission, manuscripts are assessed with language quality and structural checks. And this allows, for example, automatically pushing back manuscripts that might be missing information or where the language is just too poor, or sending sound manuscripts straight into the peer review process.
Also, depending on the language quality assessment, some publishers are interested in getting manuscripts automatically edited. Here we're not changing content; it's really just grammar and maybe spelling, so that peer reviewers get manuscripts that are easier to read. And after that, once a manuscript has been accepted in principle and has to go through a copyediting process, some publishers are using Writefull to assess the language quality.
And depending on the language quality, they know more or less how long it's going to take to proofread or copyedit, and therefore how much money should be spent on this process. This is also sometimes combined with the idea of auto-editing beforehand, so that you increase the quality of the manuscripts, and then when they are sent for copyediting they are in better shape, so they should take less time and be cheaper to check.
And the final two steps are really about quality control and quality assurance: have the copy editors done a good job? Is this ready for publication? And if not, can we again auto-edit small corrections, maybe grammar, so that it's good to go?
So those are, in a nutshell, the AI systems we provide for publishers. And then for authors, besides language improvements, and this is not just proofreading, it's really about rephrasing, about making texts more academic-sounding, we offer tools for the generation of abstracts and titles. This takes the content into account and is, of course, very targeted at academic writing. There are also things that help authors, especially those who are starting out and maybe are not native speakers writing formally. We offer tools to make text more academic-sounding or just to paraphrase it, which oftentimes also corrects any errors. So that is basically, in a nutshell, the type of services we're offering. I'm really looking forward to continuing this session. Thanks, and now, Marie.
All right. Hi, everyone. My name is Marie Soulière. I'm an elected COPE council member; the Committee on Publication Ethics, I've been with them for three years. In my day job, I am a senior publishing manager at the publisher Frontiers, where I work on different aspects of publication ethics as well as research integrity.
And so what I wanted to show you today is some guidance that we published a couple of years ago from COPE, which I was involved in writing. Some of the key points in it were about artificial intelligence and decision making: specifically, what Jason was mentioning at the beginning, the need for human oversight of a lot of the decisions made by AI when we use it in different publishing processes.
So this whole document is available online on the COPE website, and we also hosted a seminar in 2021 with a couple of interesting speakers: Nishchay Shah, the CTO of Cactus Labs, who is actually here on this webinar as well, and Ibo van de Poel, a professor of ethics and technology. And we discussed the implications of the use of AI in publishing in terms of accountability as well.
And the use of the technology. So this is an important aspect that I'm sure we're going to get into today. And most importantly, I guess, this year the topic of AI has shifted more towards AI writing, even creating papers from scratch, and potentially fake papers as well. So COPE recently posted a position statement that AI is not allowed to be an author, or to be directly acknowledged as an author, on manuscripts that are being published.
The position being that the AI cannot take responsibility, cannot be accountable, for either the writing or the work and the research behind the work that has been done. And so this is going to be a very interesting topic to discuss. I think I'm handing back to Jason now. Great, thank you, Marie. So for our discussion, again, I shared in the chat a link to a PDF of the discussion questions that ChatGPT generated, as well as its answers.
And what I would like to do is just briefly share the discussion questions. The specific question that I asked ChatGPT was to let me know what four main questions it would have for an expert panel on the use of AI in scholarly publishing, and it generated the first four questions that you see here.
What do you see as the most significant potential benefits of using AI, and how can these benefits be maximized? What are the most significant risks and challenges, and how can these risks be mitigated? How can AI be used to improve quality and accessibility while maintaining ethical standards and avoiding bias? And what are the implications of using AI for intellectual property, copyright, and ownership of research, and how can these be addressed?
All excellent questions, pretty much the ones that this panel and I had bandied about ourselves when we met on Monday. And a fifth question that I asked, and that you can find the answer to in the document, was who ChatGPT thought should set the boundaries for the use of specifically generative AI in scientific publishing. I'm not going to leave this question screen up for the entire discussion, because this is meant to be a discussion.
So let's go ahead and ask the panelists: what do you see as the most significant potential benefits of using AI in scholarly publishing? You've mentioned specific applications that you've been part of developing. Please speak to what you think are the strongest applications, where it can really help things. Who'd like to start?
Sure, I can kick things off. I think that improved efficiency, so faster processing, and consistency are some of the biggest benefits that I've seen in the applications we've been working on. And I don't want to jump too far ahead, but that comes with the caveat of being aware of errors that could be introduced.
But I'll leave it at that for now. OK, great. Yeah, I would say from my experience as well, we use more automation, I would say, for efficiency. And this is something we touch on in the document: the difference between AI and automation, which are not quite the same. This is something we can also debate, in terms of pushing AI towards real artificial intelligence.
What we've seen as a really important use of it in publishing is detecting fraud that is performed by humans at a level that is very difficult to catch. So if you manipulate images, or if you plagiarize: a human cannot remember all the articles they've read in their life, but an AI can match them. The iThenticate software is a good example, which is actually more of an automation than an AI, but once it can detect paraphrased plagiarism, then it becomes AI, and it becomes a very sophisticated way to detect fraud.
So for me, this is one of the greatest benefits of the use of AI for publishing. Definitely, because I don't think one individual, or even many individuals, can have that comprehensive, encyclopedic background of knowledge. I used to handle pathology journals, and image manipulation in pathology is a huge matter, because all the Gram stain slides look pretty similar.
So, Juan, what about you? Right, well, we've had some questions. I do want to address one question that's come up in two different ways so far, from Muhammad and from an anonymous attendee: SSP does not currently have a comprehensive list of all of the live AI tools that are used in scholarly publishing. That might be something we could consider trying to put together.
I think it certainly would be a useful resource, but no, we do not have a list to share right now. But Juan, you've developed a suite of tools at Writefull, and I would assume those are focused on some of the core ways you perceive AI could benefit publishing. Are there other areas that you at Writefull are considering getting into?
What do you think would be the adjacent or next steps? Yeah, I think something that could also help is assisting the peer review process. Of course, we are far away from replacing humans in peer review, but at least we could have tools that identify things like ethical concerns, or the ability to detect that what you're saying in one section is not what you're explaining in the conclusions.
These kinds of things could be put in front of reviewers, the same way that similar systems are being used by doctors to diagnose illnesses; we could use similar systems for peer reviewing, so that at least you can help alleviate this massive slowdown that the peer review process has right now. Yeah, that's an excellent point. In our field, anesthesiology, there's certainly a concern of having used up the pool of peer reviewers.
There are only a finite number of academic anesthesiologists in the world, and a growing number of journals utilizing them. But to that point, and I think this is an excellent point to discuss, because it broaches probably the other major omnipresent topic in our world these days: the shift to open publication, the open sharing of research, and especially the advent of preprints and their place in the ecosystem versus highly selective, peer-reviewed traditional journals.
Juan, you mentioned the use of AI in peer reviewing. And to a certain extent, AI could be used, as you say, to check and make sure that a reference in one part of the article is consistent with the data that was presented, and so forth. But can AI be used to discern something like novelty? You know, it's one thing to determine, yes, this is a paper. I mean, for a lot of open access publications out there, it's just about getting published, and so the threshold is just: is the paper written to the standards of a scientific or scholarly article? Versus, for a highly selective peer-reviewed journal: is it novel, and does it contribute to the knowledge base in the field? Is AI at a point right now, or do you think it will be soon, where there are tools that could make that jump? That would potentially completely change the game, because then it could make everybody just go to preprints and have AI do the peer reviewing.
What are your thoughts on this? I think it could definitely help. At the very least, I'm sure that current models can pinpoint parts of the manuscript that might be referring to the novelty. But I think in the end, as you said, someone who's very specialized in the field should be the one assessing the novelty. And I think the problem that we have with all these models, even GPT-4, is that a model is as shallow as its data.
So you might have insights in your field that are essential to knowing if something's novel or not. I'll build on what Juan said. I agree. I think that we are at the point where AI can do that, assess the novelty, but it is really limited, or we need to be really cautious, because of the bias that may be inherent in the data set that was used to make that evaluation.
So that human element is still really important. I would add two things to this. One of them is I mean, I definitely think the technology is there, but I think one of the issues is about training the data and how it can access the data. I'm not going to take this to a debate about closed access versus open access, but obviously there's a lot of papers out there that are not fully accessible so that cannot be trained on.
So that obviously limits a lot whether the AI can detect novelty or not. I mean, preprints are great for this, open access as well, but a lot of the articles that have been published in the past are not available and not accessible for the AI. But definitely, you know, at Frontiers, one of the tools we developed was an algorithm that can detect whether a submitted paper matches the scope of the journal or of a particular special issue, and things like that.
So definitely the technology is there; it can even detect, within a very limited scope, whether something fits or not, based on text and different things. So there are a lot of tools being built around that. I think the limitation is on access to full articles. So part of what I take from that is that you all tend to think AI could potentially be used to determine novelty in the decision-making process.
But the limitation is that it's still going to require a human to input the factors that determine what is novel. Any other thoughts on the benefits? Marie, you mentioned access, and this isn't a discussion about open versus not open, but certainly, just for transparency:
at our journal, through our publisher, all of our content is made open on a rolling 12-month embargo, and we frequently open up other papers upon publication, the papers that we think are important. So our past data is there to be accessed and input into decision making for the future, and I do think that a lot of content is. But what do you think are the steps that need to be taken, and who needs to take them?
Is it the publishers? Is it the researchers? Is it the academic institutions or libraries, to help improve this data set and the ability for AI to access it? Any thoughts on that? There are a lot of discussions going on in the background with different companies and different publishers about this, because one big thing you mentioned yourself earlier was the issue of image manipulation.
And one big one in publishing is image duplication, people reusing images, which raises questions as well, issues of copyright and things like that. And so one of the big tools that people are looking to have is an AI that could detect image replication, image duplication, and people reusing images that have already been published. But there is no big database of images out there. And so these are discussions that are happening with people who could have these databases, people like Crossref, or different universities that are trying to pool as many images as they can.
And some companies have done that as well with all the open access images they could find, to try to match some of this. So efforts are happening, and I think from everywhere, because everybody sees the need for these things to happen. So either it will come from a society, or a consortium that everybody will be free to use, or a company is going to come up with a solution faster and we're all going to buy it.
That's basically what I think is going to happen. So, Juan, do you have any additional thoughts on that? I see some thank-yous from the attendees. We've got some really great questions and comments coming in here, and I want to jump to this one. It's short, but I think really pertinent.
What are your thoughts on using AI to suss out editorial bias? Can it be done right now? How would you do it? And how would you make sure that the AI wasn't itself biased in trying to determine editorial bias? Any thoughts on that? I think that's an interesting one, and something that we've faced before.
And I think this goes by, one, trying to walk away from this black box that AI has, right, where you put something in and you get an answer, but you don't know how you actually got to that answer. With some publishers, we're seeing that by training models that give fixed outputs, and these could be, you know, just language parameters, we can assess manuscripts in a way that you can explain how we got to that conclusion.
And on the other hand, we see that the outputs are more regular, and to some extent more accurate, than those produced by humans. Because the problem with humans is that we don't always follow the same rules, and different humans have slightly different criteria. For example, we did some work with ACS, and we saw a very high alignment between the outputs that Writefull would create and those of ACS, and the differences in accuracy were just that different people would produce slightly different results.
So in that sense, the use of AI could regularize, you know, standardize the outputs, but at the same time you have to walk away from "something in, something out, but in between I don't know what's going on." Great. And one of our attendees, Todd, I think just took some of his comments from the chat and put them over in the Q&A. Todd made a very good point that right now, especially with generative AI, we have to be careful about the accuracy of the information being presented.
And in preparing the questions for today's discussion with ChatGPT, I actually asked it to produce a bio for me. The first paragraph was highly accurate, and I could discern from what it was saying where it had gotten the information, mostly my public LinkedIn profile and the society website. And then there were two or three additional paragraphs of a professional and collegiate research record.
That mystifies me, because it's certainly not accurate; I didn't go to Stanford. But it also bothered me that it didn't include citations, so I didn't really know where or why it was coming up with this reasoning. So that's certainly something that could be problematic. We've been talking about some of the risks, like bias. What can we discuss about
ethical concerns, ethical standards? How do we impart those into the use of AI? Is it done by limiting the use of AI, or, again, by better training of the AI? And who should set those standards? Marie, I'm going to point to you, because you've been part of this as a COPE council member, so you're already one of the people in these discussions.
One of the things we focused on a lot in the previous document is the concept of trust. And this is linked to accountability, and who has the accountability: is it the AI? Does the AI have any interests? If it does, it has conflicts of interest; and if it has no interest, then it cannot say, yes, I want to publish this paper. It can't be an author.
So all of this links together, whether or not we think the AI can have an interest. But from the human perspective, it's all about whether we have trust in the AI or not. And that's what we've been discussing from the beginning: if nobody trusts a decision made by the AI, you're not going to be able to implement it in any of your processes, because nobody is going to go along with it. I can't remember which publisher tried this, but they had automatic rejections for papers with a high level of plagiarism as detected by iThenticate.
And this software is flawed. Normally humans look at it and determine, is the 24% really 24%? But they had automatic rejections, and people took to Twitter and made a big storm out of it, saying their paper was not plagiarized and shouldn't have been rejected. So that was one of the first attempts to use AI to make that kind of decision about acceptance or rejection of articles.
And clearly, we were not at a stage where we could trust that kind of decision. So I think there's a lot in that realm to think about. And I think the people who get to decide end up being the people who have these decisions made about their articles; it has to be the community overall. And if you want to build trust in the community, you have to be very, very transparent as to where you use the AI, how the AI is built, all the biases of the AI, as Juan was mentioning.
So the more transparency you provide about the AI and how its results were obtained, the more people will slowly build trust while using it, while having guidelines on how to use it, and the more accurate it will become, learning from human experience. And then, as we build that trust, I think people are going to be more open to using it and letting it start to make decisions that we would trust humans to make.
So, Juan. I was just going to say that in doing this, I also think it is crucial that, as publishers add more steps of automation through the pipeline, there is an auditing of those steps and continuous monitoring of whether bias is being introduced at any step that is causing some manuscripts, for whatever reason, to be rejected.
I think that is also key in creating the future training data for, maybe, a system that could do it end to end. Sonja, do you want to add to that? No, I agree with both of those things. And maybe drawing them together, I see this also as sort of an educational opportunity. Maybe this goes back a little bit to the question about editors and bias, but the outputs of an AI tool could be used to educate editors.
And I think it was Juan who mentioned the consistent output. If editors do have some bias that's being pulled into their evaluation, or, for example, the reviewer selection process, then learning from and critically looking at what a tool spits out could be a way of improving their processes as well.
You know, it just makes me reflect on how behind my own journal is: at our annual meeting coming up next month, we're still focused on a workshop on peer review training for people who are early in their careers and interested in becoming part of the editorial board. But are we already five or ten years behind? We're still focused on training the people.
And now we have to worry about training this automated software to do the same thing we're still trying to train people to do: the nuanced judgments, and how to recognize their own implicit biases. You know, to quote a line from Master and Commander, as the captain says, what marvelous modern times we live in. So there are a couple of questions right now about AI for content generation.
And they hit a number of different aspects that touch on intellectual property, ownership, authorship, ethics, and accountability. So I'll combine these. One of the questions is: what if the publishers themselves are active users of generative AI to produce content? Would they even need authors anymore?
And should AI be-- and Marie, I know that as part of the COPE panel, you should have something to say about this --should scientific publishers consider AI as a co-author or just as a cited tool? So let's start the discussion on that specific question. And, Marie, I'll focus on you. Yeah, I mean, the position statement; I think we published it a bit after others.
I think Nature went first, and a few other major publishers published very similar statements. All of these positions are based on the classic ICMJE guidelines for authorship, which include four main criteria. And we agree that these are the real criteria for authorship that are commonly used right now. And if we decide that this is what we follow, then ChatGPT cannot be an author, because it cannot give final approval of the version of the paper to be published.
It cannot give consent for submission, and it cannot agree to be accountable for all aspects of the work. So those criteria cannot be met. In terms of being an author, I think we can agree that it can make a substantial contribution, even in the analysis of data, and sometimes even in the acquisition of data; you can have robots doing some of that. It can draft the work really well, and it can even critically review the work.
But there are some criteria for authorship that are not met. That being said, a lot of people have published critical views on this. It's mainly questions about whether the ICMJE criteria still apply in the current world of publishing, especially given what we really, commonly do about authorship for humans. Do we really apply all four criteria for all authors?
We know the answer is no. Is this valid? Are we at a point where we need to move towards making sure we remove ghost authorship, guest authorship, fake authorship, authorships for sale, all of these things we consider fraud issues? Or are we in a position where we want to move towards fewer criteria for authorship? And then there would be more authors that qualify, because they don't have to meet all of these criteria.
And then in that world, maybe ChatGPT could be an author. So if ChatGPT could be an author in the future; I mean, just talking about how to address it right now: how are authors handling this? Sonja, you're someone who's responsible for a large portfolio of journals at ACS. Are you seeing this right now? Are you seeing authors citing or acknowledging it?
If they've used it in the writing of their articles, how are they doing that? And how would you optimally like to see that done? So we are seeing a little bit of that. I wouldn't say that it's common yet, but primarily authors are acknowledging the contribution of different AI tools in the acknowledgments section of their submitted manuscript and describing what contribution was made.
We definitely follow the COPE guideline that ChatGPT or any similar entity can't be an author on the article, but there's certainly no prohibition on authors using it to help with the construction of their manuscripts. Was that... I think you had a two-part question; I may not have answered the second part.
I think the second part was just, optimally, how would you want it to be acknowledged? But I think you did address that. There are a number of questions from an anonymous attendee here. One of them that piqued my interest is that AI and automation have been used for copyediting for some time now.
But there are still quality issues that arise. What's the future of that? What's the timeline for improvement, where we can really expect AI to provide very high-quality copyediting? And then, sort of parallel to that, translation services. I know for a lot of publications these days it's a concern, as part of accessibility: for so long we've traditionally published our journals in the English language, but that doesn't serve a large portion of the world.
But translation is costly, because it's one thing to get it translated into the other language; it's another thing to get it translated with the science still being accurate in the other language. So, a two-part question there. What's the timeline for AI providing really high-quality copyediting? And is it realistic to think that in the near term we're also going to be able to use it for translation?
So maybe I can give my opinion on this. I think that the main thing we've seen with copyediting is that the quality really depends on the data that you have available. So if you have a generic data set and that's what you train your models on, regardless of what architecture you use, what you're going to get is oftentimes something that is very good at proofreading.
So it will make the actual manuscript good in terms of grammar. If you want to go beyond that and assess the phrasing, making sure that you're using the right words for the field, what we've seen is that unless you have data specific to that field, it's going to be very difficult. And this is the same problem that you have in translation.
Oftentimes, you won't have a parallel corpus where you have, for two languages, something specific to a particular domain, so that the model learns the specificities of, for example, chemistry versus physics. Those are the main things we found. Oftentimes you will get models that, even if they are very accurate in general, when you apply them to, say, computer science, will start changing certain words that are just more commonly used in the general data set, but not within computer science.
Marie, anything you want to add to that? I think I want to share an anecdote about translations, and in general about AI learning from different data sets; a bit of a combination of everything. Sonja also mentioned in her introduction a tool they developed to match potential reviewers to papers, and reviewer recommenders; I think a lot of people are trying to create tools for that.
And one of the tools I tested years ago was regularly recommending people from the same country to review papers from authors of similar countries. For example, if you had Italian authors, you got Italian reviewers. And it seemed to be a very clear pattern: specifically Scotland to Scotland, and Hong Kong to Hong Kong. Not the US to US, which is broad enough that it could be a coincidence.
And what we discovered is that the subtleties of language from Italians writing scientific research in English were reflected in how the AI was trained; it would pick up these very distinctive turns of phrase and would then match Italian reviewers to Italian papers. So there are a lot of subtleties in the way we write that are not picked up yet, or that bias some of the models.
So what Juan was saying is very accurate as well. We've seen you need specific dictionaries for different fields and specific training for a field of research; otherwise, the improvements in copyediting and translation are very difficult to make. So, the original question was how quickly we can see improvements. Again, I think for a lot of things with AI, it depends on having the right data set, how much of it you have, how quickly you can train the model, and then iterating on the model with humans giving feedback.
So there's one of the questions asked earlier that I want to go back to, because it focuses on developing the data set. The question is: as more AI tools get developed, what sort of policies are publishers considering around the use of their content in developing AI solutions? And Sonja, you're going to get tagged with this one, because you're the one who actually represents a publisher here. In case anybody on this call doesn't know, ACS publishes its own content
but also represents other organizations as a publisher. So in developing your AI tools, is ACS considering policies to open up its content to also help others develop their tools? So that's a great and complex question. Certainly we are not trying to develop tools that are just for our own use.
That said, it's complicated machinery, I think, with a lot of decisions that go into opening up the content to make it broadly available. But certainly there are avenues for interested parties to work with us to access our content. I think it's a developing situation, so more to come. Sorry, no, I mean, that would be an interesting question for a follow-up.
If we had representatives from Wiley, Springer, Elsevier, and whatnot, to talk about how open they are to sharing their back content to help train tools that they don't own. It's definitely a question that begs collaboration versus competition moving forward. Yeah, there is a big push from STM, the association for scientific, technical and medical publishers, which has a lot of publishers coming together, with COPE as well, to try to see what are legally possible ways to share knowledge.
And particularly, it started with trying to go against fraud and fake papers, which are now more and more created by AI. And AI can be used to track fake papers, but the better the AI gets at tracking fake papers, the better the fake papers become as well. So it's a bit of a race there. STM is trying to put together the STM Integrity Hub and get as much knowledge shared there as possible: what can be shared between publishers about fake papers, and what kind of data can be pooled there, so that we can train some of these algorithms on a larger pool of data for paper mills and other fraudulent research and publications.
OK. I would encourage all the attendees, if you're not already in the Q&A, to look there, because an anonymous attendee has posted two links that I think refer back to the question about finding peer reviewers. One is the Prophy Science referee finder, and the other is a tool from Clarivate.
So, with the generation of content by AI, we've touched on this a little bit, but how much of a problem or a threat does anyone on the panel think it is right now, the issue of potential copyright infringement?
Marie, you've mentioned images a couple of times. That's certainly coming with GPT-4 and its supposedly even greater ability to create images ad hoc, because I can conceivably see a lot of areas where an image could be created for something that never actually existed. And certainly in the realm of anesthesia; unfortunately, anesthesia has seemed to collect some of the highest-profile
fraudulent authors in scholarly publishing over the last two decades. The most recent was a Japanese author who just made up data and published a lot of research letters, which don't tend to get as much scrutiny as a full original research article. Someone could certainly do that with images, potentially even now. But copyright infringement is already a problem these days.
It has been a problem in publishing in general, but with the advent of AI-generated content, how do we police it? Do we have the same AI that generated the content go back and review it to tell us whether it infringed on copyright? Any thoughts on that? Who polices that? I was thinking about that the other day.
I said, if you have an AI that polices the first AI, isn't there a conflict of interest between the AIs, especially if they come from the same company? We can go very far down the rabbit hole there. It's very tricky with copyright. I think there's been a lot of debate for a while with DALL-E: how much creativity do you need for it to be real, new, creative content created by the AI, especially since we know it's been trained on past data?
We assume humans create out of thin air, but our creativity is also based on our experiences and things that we've seen in the past. So it's a very tricky time. For music, there are all of these lawsuits about whether a song is too similar to something heard in the past. These are very gray areas, and I think it's going to be the same for publishing.
As you say, there are a lot of completely fake images being created. Those are easier, because even if they're loosely based on something, there's fraud in the fact that they are not based on real data. So I think we have a slight advantage there in scholarly publishing, because we also want content to be based and grounded in actual data produced by research.
So it makes it easier to decide whether it's correct or not. One of our attendees, Nishchay Shah, posted something I want to bring up. One of my own inherent biases in a lot of SSP discussions, because I come from 30 years in biomedical publishing, is that I tend to think that we are the whole world of scholarly publishing, but I know that we also encompass the humanities and social sciences and physical sciences and so forth.
And he points out, related to what Juan said earlier about the type of language and style differing a lot, that within academia, the humanities have a completely different structure of communication than scientific communication. Scientific communication tends to be very pragmatic and specific; it's supposed to be factual, whereas humanities language can be more descriptive.
And so the same AI tool may have to be trained in a completely different way. We've got two minutes left. I think this has been a really good discussion; we could go on all day, I feel. But do we have any quick parting thoughts from our panelists that they'd like to share? Juan?
Go ahead, Juan. So one interesting thing about allowing the usage of things like ChatGPT to write parts of the manuscript is that, if you allow that and someone later with a similar prompt gets a similar paragraph, maybe an introductory paragraph, who's plagiarizing whom? And there's also the idea that if you have AIs writing your conclusions, writing your abstracts, that skill is going to get lost.
And it's not just that you produce your conclusions; it's that by writing your conclusions, you can get to new ideas. I feel that this might be lost too. So there's also the idea that if you have all these AI tools creating these sections for you, you might as well just focus on the methodology, and then we change the format to just methodology; everything else could be automatically generated.
So it's also an interesting, almost philosophical question of where we stop. Yeah. I guess I just want to add to that: as we're moving toward more autonomous applications... I think we're all fairly clear on the sort of assistive AI in decision making that's been discussed over the past couple of years. Things are fairly clear there, but this autonomous content creation is a big challenge.
I've appreciated the discussion today. It's honestly just opened up more questions for me. Definitely. Marie? As we discussed, where do we stop? And I'm honestly worried, given the amount of fraud that we've seen in recent years and how sophisticated it's become, that there's now more and more AI that can help us with so many things, but can also help create so many fake papers. This is a topic we're also going to discuss with COPE in our own discussion next week. But we also want to consider how useful these tools are for people who genuinely want to publish proper research and maybe don't have the means; as we discussed, some of the tools for improving language and things like that are pretty expensive.
So if they can use AI to write their article because they are not native speakers, and it will allow them to publish their research when otherwise they couldn't, that's also really helpful for society. So there are real pros and cons to consider. Absolutely. Well, I thank the three of you; I have really enjoyed this conversation, and again, I think we could keep going for quite a while. I've learned a lot.
I agree that this has opened up more questions than answers, and that it is somewhat frightening. But thank you, everybody, for attending today's webinar, and thank you again to the panel. Of course, thank you to our education sponsors: Morressier, Silverchair, 67 Bricks, Taylor & Francis, and F1000. Attendees, you will receive a post-event evaluation via email, and we encourage you to provide us with feedback to help us pick topics for future events.
I have a feeling we may be revisiting AI at some point soon. Please also check the SSP website for information on upcoming SSP events, including the 45th Annual Meeting in Portland, Oregon. Today's discussion was recorded, and all registrants will receive a link to the recording when it's posted on the SSP website. This concludes our discussion today.
Thank you, everybody.