Name:
Crowdsourcing A GenAI Prompt on The Future of Scholarly Publishing: “Tell Me How To Implement AI Solutions While Staying True To My Values.”
Description:
Crowdsourcing A GenAI Prompt on The Future of Scholarly Publishing: “Tell Me How To Implement AI Solutions While Staying True To My Values.”
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/cf7adbeb-6445-44f8-8344-c8961cce6cb5/videoscrubberimages/Scrubber_1.jpg
Duration:
T01H04M13S
Embed URL:
https://stream.cadmore.media/player/cf7adbeb-6445-44f8-8344-c8961cce6cb5
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/cf7adbeb-6445-44f8-8344-c8961cce6cb5/session_4a___crowdsourcing_a_genai_prompt_on_the_future_of_s.mp4?sv=2019-02-02&sr=c&sig=SLn2c7E4YtgqqZzH5FM8oGxnBh9EmkvyOWfBSUdQXgc%3D&st=2025-04-29T19%3A21%3A57Z&se=2025-04-29T21%3A26%3A57Z&sp=r
Upload Date:
2024-12-03T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
Good morning and welcome to this educational session, where we will be exploring the practical implementation of generative AI processes and what it means to remain true to our publishing values. Thank you to the Copyright Clearance Center for organizing and for their support in preparing us today. I appreciate this may be the morning after the night before, so you may be a little bit tired after an intensive meeting program.
The good news is that there are no PowerPoint presentations. We will have a video presentation from one of our guest speakers who couldn't physically be here. We want to facilitate an interaction with all of you and have an open discussion. The focus of this session is the prompt, which, as I'm sure you're aware, is the core element of every GenAI tool.
I'm sure many in this room will already have experimented with the likes of ChatGPT, Gemini, Bard, et cetera, posing a question and then evaluating the answer. What is becoming most apparent is that the more articulated and detailed the prompt is, the better and more relevant the answer is going to be. However, we know that the answers are not unique, so you'll also be aware that the same prompt can result in different answers each time.
And of course that can be a frustration. So this morning we will hear from three perspectives: a research library specialist; a science writer and technology expert, who joins us by video presentation; and a scholarly society publisher. I would like to warmly welcome and introduce our expert speakers. To my far right is Tiffany Lazo, a scientific library specialist at Regeneron Pharmaceuticals. Prior to transitioning to the library team, Tiffany spent 16 years as a scientist working on indications in oncology, muscle, and metabolism. Tiffany obtained her master's degree in biology from Rutgers University. She provides insight, both as a researcher and as an information specialist, on the implications of AI and how they are shaping scientific publications.
Next, by video, we have Avi Staiman, founder and CEO of Academic Language Experts, an author services company dedicated to helping level the research playing field for scholars. He's also the co-founder of SciWriter, the first copilot that helps researchers supercharge their writing with responsible AI. Avi, as many of you will be aware, is a Chef at The Scholarly Kitchen.
He's a co-host of the New Books Network and a reviewer for Wiley's Learned Publishing journal. And finally, I'd like to introduce Simone Taylor, who is Publisher in Chief of Publishing Operations at the American Psychiatric Association. With a background in scientific research, Simone started her publishing career at Elsevier and, via the National Physical Laboratory, Wiley, and AIP Publishing, has developed considerable international experience in leading adaptive and transformative change. Simone has a passion for helping authors maintain an achievable record of their work in the scientific literature, and is a keen advocate for improving global accessibility to published work. She has served on cross-industry groups for implementing data citation principles and...
I should have got to this; we were missing a slide. There we go, that's what I should have had. She is also an expert in standardizing data policy and works to deliver more equitable outcomes in compensation and career advancement for marginalized groups in the workforce. So welcome this morning. Just to give the instructions for those of you who might have arrived late, we want to have an interactive session this morning.
We're using menti.com. So if you go to menti.com and enter the code 43841967, that will allow you to interact with us. I've messaged each of you individually if you've pre-registered. I'll just leave that up for a moment. OK, and I remind everybody of the SSP code of conduct, which you're aware of.
So if you're in the software already: what type of organization are you from? Let's give this a moment just to understand who is in the audience this morning. OK, we're all there. So we've got a majority of society publishers and a number of service providers, and three each for commercial publishers, university presses, and other. OK, thank you. The next question is: what are some of your concerns about implementing AI in your publishing processes?
Are they ethical concerns, quality control, cost, job displacement, or other? So quality control is leading the way, followed by ethical concerns. These are, I think, very common concerns across scholarly publishing with regard to generative AI.
With that, I'd like to hand over to Tiffany to give us the scientific library specialist perspective. Thank you, Martin. As Martin said, I was a scientist for 16 years before I transitioned to being a scientific librarian. And now, as a scientific librarian, I am deeply immersed in the evolving integration of AI into research practices.
So AI represents a transformative force within the realm of scientific inquiry and technology, and its capacity to analyze data sets, predict outcomes and automate complex processes not only accelerates the pace of research, but it also opens up new avenues for innovation that were previously impossible. However, it is important to approach the integration of AI into our research methodologies with a critical eye, ensuring that ethical considerations and the potential for unintended consequences are rigorously evaluated.
But regardless of personal sentiments, it is undeniable that AI is becoming increasingly integrated into every facet of our professional endeavors. And here is where I look to advice from the great Socrates: "The secret of change is to focus all of your energy, not on fighting the old, but on building the new." So while AI undeniably holds the potential to revolutionize our field, it is the responsibility of researchers and publishers to keenly watch its development and apply it in a manner that maximizes its benefits while minimizing its risks.
I just want to note here that the critical thinking of the human mind can never be replaced; through this entire process, you should continue to be diligent. There will always be a risk of hallucinations. Just to clarify, in case anybody is unaware, a hallucination is where the large language models that power the AI chatbots create false information. You should always investigate your sources properly, and even an easy validation, a quick Google search, gives you a higher chance of identifying misinformation.
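To make that validation step concrete, here is a minimal sketch of one automated citation check, assuming Python with the requests package and the public Crossref REST API; the doi_exists helper and the sample DOI are hypothetical illustrations, not something shown in the session.

```python
# Minimal sketch: check whether a cited DOI resolves to a real Crossref
# record. A 404 response is a strong hint the citation may be hallucinated.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical example: run every AI-supplied reference through the check.
for doi in ["10.1000/possibly-hallucinated-doi"]:
    print(doi, "->", "found" if doi_exists(doi) else "NOT FOUND: verify manually")
```

A passing check only proves the DOI exists, not that the paper supports the claim; the human reading step still applies.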
When I think about the beginning of my days as a researcher, it feels like another world, one where countless hours were devoted to meticulously combing through journals in the library, identifying relevant papers, and then analyzing them with a highlighter in hand.
There existed a prevailing notion that one must engage in manual processes and thoroughly comprehend them prior to embracing more streamlined methods. However, this concept has gradually become obsolete. We are swiftly approaching an era where future scientists may never experience the traditional methodologies; many of them already no longer exist. This ever-changing landscape highlights the importance, for those of us who have been in this career for many years, of remaining knowledgeable about technological advancements.
And this is critical to ensuring that our long-time expertise can continue to contribute toward the final objective, which is always driving science forward. AI technologies should serve as tools enhancing, rather than replacing, our critical thinking skills. These additions are most beneficial when they help researchers streamline their workflow and utilize their resources more efficiently, allowing scientists to dedicate time to critical thought and analysis.
The first topic I want to highlight is methods of discovery. In the process of reading and data analysis, many of us researchers and publishers dedicate numerous hours to reviewing scientific literature. AI applications are now capable of digesting complex documents and providing summaries in straightforward language, which saves a significant amount of time. Instead of going through each paper with a fine-tooth comb, you can skim the summaries and pick the ones most closely related to the area of research you are focusing on.
Then you can proceed to do your due diligence and investigate those papers alongside their summaries. We are also using tools that summarize citations, stating how many times the paper is supported, mentioned, or, most importantly, contrasted. This simple summary of citations can be critical to the quality of the research you are building. On the reverse side, you should be using AI to review your own publications.
You can enter, for example, a discussion you have written into an AI platform. Does the summary highlight the key scientific conclusions you want to convey? If your main ideas are not being pulled into the summary, modifications should be made, because this is the way many people in the industry will be digesting your publication.
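As one illustration of this self-review loop, here is a short sketch using the OpenAI Python client as an example backend; the model name is a placeholder, and the discussion text is whatever you paste in.

```python
# Sketch of the self-review loop: ask a model to summarize your own
# discussion section, then check whether your key conclusions survived.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def summarize_discussion(discussion_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system",
             "content": "Summarize this manuscript discussion section in "
                        "plain language, in at most five sentences."},
            {"role": "user", "content": discussion_text},
        ],
    )
    return response.choices[0].message.content

# If your main ideas are missing from the printed summary, revise the text.
print(summarize_discussion("<paste your discussion section here>"))
```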
There are already too many generative AI platforms influencing the pharma industry to count. Some examples are analyzing the structure of small molecules and protein behavior to predict their potential as drugs for specific diseases, predicting the outcomes of clinical trials, optimizing dosing, and improving manufacturing processes. Several previously unknown drug-drug interactions have also been identified, which emphasizes the potential of AI in enhancing drug safety and reducing the risk of adverse events.
The next thing I'd like to highlight is ethics, which, based on the Mentimeter responses, is a very big concern. And that's fair, because this is extremely important. But I'd like to highlight that compliance is a behavior that should be applied to every action. Based on our current understanding, you should never copy and paste anything from any resource, AI or not, unless you have the correct rights and are citing it properly. Everything you publish, whether it's data analysis or the vocabulary of a scientific manuscript, must be verified.
But here we should allow AI to help with this process. Remember, AI is searching a sequence of words and formulating a summary based on the relationships and patterns of those words. It can't critically think; that will always be our job. It always comes back to doing your research on AI and understanding how it can be useful in the process. And lastly, from a broader perspective, this transformation is not confined to the realms of discovery and research alone, but extends its reach into the very essence of how we conceive, create, and disseminate scientific knowledge.
It is crucial not merely to adapt to this new standard, but to actively engage with it. I urge each and every one of you to embrace this wave of change. This journey begins with a simple step, a query typed into a search engine, that can unlock a treasure trove of insights into AI and its implications for scientific research and publication. By asking questions, seeking understanding, and contributing to the discourse, we can collectively enhance the way scientific knowledge is shared and accessed.
Embracing AI can streamline the publication process, enhance the discoverability of our work, and ultimately contribute to the advancement of science in ways previously unimaginable. Let us therefore set ourselves up for success, not as passive observers of change, but as active participants in shaping the future of scientific communication.
Thank you, Tiffany. We'll now have a video presentation from Avi Staiman. Again, Avi sends his apologies that he wasn't able to attend in person. So, this works? There we go. All right, we don't have any sound. We need sound. OK, so maybe, in the interest of sorting the technical issue out, if we can get out of this.
Maybe, Simone, I can hand over to you, gracefully, as I'm getting good at this. Thank you. Good morning, everyone, and thanks for joining us on the last day of the meeting. As Martin mentioned, I'm Simone Taylor, and I now work at the American Psychiatric Association.
Following the first talk on the researcher perspective, I'd just like to give a brief overview of publishing and artificial intelligence. Artificial intelligence is really not new and has been used in publishing for quite some time.
A couple of decades ago, we would talk about neural networks, machine learning, pattern recognition, and adaptive and predictive control. Now our language has evolved to be a bit more specific, and we talk about facial recognition and voice recognition, for example. In the publishing world, we continue to use artificial intelligence to help us improve processes and develop new products.
If you take Cactus Communications as an example, and I promise nobody from Cactus has paid me to talk about this, Cactus has created a suite of services (Paperpal, Mind the Graph, Researcher.Life) that all aim to use artificial intelligence to improve the way authors write or the way we access information. The advent of ChatGPT, though, has changed our understanding of what AI can do so dramatically that we now use the term to refer to anything from writing plain-language abstracts to Sky's use of Scarlett Johansson's voice, which are all uses of AI but are slightly different things.
From our perspective, we really need to try to understand what opportunities these provide and how we can embrace them in our own work. The challenge is trying to do this while being mindful and respectful of the intellectual property of our authors, but also being responsible about how the outputs from these systems might serve the community and the effects they might have on patient care, for instance, in clinical applications.
These are challenges that we must grapple with as we embrace the technology. I think it is a technology that we need to embrace to move forward. It helps improve the way we work, but there is a risk to using it irresponsibly. And these are the things I hope we can tease out as we get into the discussion phase.
Thank you. Can we try and play this video now? Sorry, guys, I hope you can bear with us for a few more minutes while we sort the sound out. OK, let's try again. Fingers crossed. There we go. Hi, my name is Avi Staiman and I'm the CEO of Academic Language Experts and SciWriter.
I'm really sorry and disappointed that I can't be with you in person, but excited to share some ideas with you via Zoom. Now, I understand we have about 420 seconds, if I've done my math correctly, so let's get straight to it. How do we build a responsible prompt for a large language model, for ChatGPT, for example? I think the first principle here is to understand what large language models are.
The way I like to think about them is not as a personalized Wikipedia, as some tend to, looking for information and getting frustrated or disappointed when the information is incorrect or the citations are made up, but rather as Wordle on steroids. If any of you have ever played the New York Times bestselling game Wordle: essentially, you're trying to guess a pattern of letters and words based on the information you have.
And that should inform the way we prompt, because essentially ChatGPT is a very, very powerful, high-functioning word predictor, guessing what the next word in the sentence is going to be. Therefore, the more context we give it, the more understanding we give it of what came before and what should come next, the better it will perform. So keep that in mind when you're prompting.
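That "word predictor" framing can be seen directly: a language model simply scores candidate next tokens. A small sketch using the Hugging Face transformers library with GPT-2, a small open model chosen only so the example runs locally (not a tool mentioned in the session):

```python
# Show the most likely next tokens for a prompt: the "Wordle on steroids"
# view of a language model. More context in the prompt shifts these scores.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The peer review process ensures that published research is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary token

# Print the five highest-scoring candidate next tokens.
for token_id in torch.topk(logits[0, -1], k=5).indices:
    print(repr(tokenizer.decode(int(token_id))))
```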
I like to think of ChatGPT like Amelia Bedelia, if you've ever read the books: Amelia Bedelia makes every mistake she could possibly make before getting to the right answer. And I think we need to think about large language models in a similar way. They will make any mistake, and there are ways to overcome this. So how do we actually build prompts that work well, that are constructive, and, most importantly, that are responsible and reflect our values?
I've come up with a prompting playbook, and these are some of the tactics I would recommend using to build a prompt. They can be used for all sorts of different kinds of prompts; experiment, play around, and see what you come up with. So let's go through the prompting playbook. First and foremost, you need to give your persona, let's call it, or your character a role.
OK, who are they? Are they an academic researcher? Are you an editor of a journal? Are you on the production side? Tell the chatbot or large language model who it should be, who you are. The second thing is your goal. What are you trying to accomplish? Are you trying to write a text, or are you trying to process data?
Are you trying to get information? As we said, maybe not recommended. There are different things you can do with large language models, and it's important to be super clear about what your goal is. Number three is level: who is the intended audience, and what is their academic level? Are you trying to speak in a very highfalutin, professional, fancy way to a very specific group of niche scholars who understand the terminology?
Or are you trying to do science communication and spread the word far and wide, maybe to an audience that's less familiar with the specific subject area? The difference between writing a Twitter post on a new publication and a professional summary for pharma may look very different, one from the other. And it's important that we give that information and instruction to the large language model.
Number 4 is few-shot prompting. Now, I imagine there are very few of you who have come across this term. It sounds like a complicated, fancy term; it actually simply means give it examples. This is something I rarely see people doing, but it dramatically increases the success of the prompt when you give an example. So let's say, for example, at Academic Language Experts we do a lot of translation and editing.
So when we are working with the large language models and with these tools, we will say: here's an example of a good translation, here's the French and the English; understand what we mean when we say a good translation. Or if we need a lay summary for a new article, instead of just saying produce a summary from scratch, we'll give it an example of a well-written summary, and that helps to focus it.
It helps to concentrate it, it helps to give it context, which is the key to a good prompt. Number 5 is personalization. How do we make it ours? What are the things that are important to us that maybe someone else wouldn't need? Let's say we wanted to create an abstract: it might be important that the abstract be broken up into sections.
I want to see the introduction and the methods within the abstract. Or maybe I just want one narrative abstract that covers the entire thing. It's important for us to define how we want to personalize and customize the use of GPT for our own needs. Number 6 is constraints, or guardrails. And this really comes back to the issue of being responsible.
You can give GPT rules: say, don't make up any references or citations that I didn't give you, or don't give me a summary at the end of what you did, just give me a simple answer to my question or write out the specific task. And by giving it constraints or guardrails, if we do see it going off in the wrong direction, then we can say: OK, make sure to do this, but not that.
Sometimes what you instruct it not to do can be as important, if not more important, than what you instruct it to do. That was number six. Next, don't forget: iterate, iterate, iterate. If you're thinking about it in binary terms, GPT worked for me or didn't work for me: I've met a lot of people who have said, I tried it, it just didn't work for me, Avi. Well, think about a student. If you give them a problem to solve and they can't figure it out on the first attempt, does that mean they're not capable? No, it means that maybe we need to give more context. And these tools aren't perfect; they're far from it.
But the more context, the more back and forth, the more dialogue, the better; in an ironic way, we need to treat it like a student. And number 7, which is more for advanced users, is thinking about step-by-step instructions. Sometimes one of the biggest problems we have is trying to get it to accomplish too much in one shot. For example, I'm working on a tool called SciWriter where we are helping researchers to write articles responsibly, with AI as a copilot, as a tool, as an assistant.
What we learned was that each section of the article is its own beast that needs to be considered carefully and iterated on its own. If we just asked it to write an entire article, it would come up with gibberish. The more we break it down, asking it to write an outline, then the first paragraph, then the next section, the higher the quality of the outputs.
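A minimal sketch of that step-by-step decomposition, again using the OpenAI client purely as an example backend; the ask helper, model name, and topic are assumptions for illustration:

```python
# Step-by-step prompting: ask for an outline first, then draft one section
# at a time, feeding the outline back in as shared context.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

topic = "responsible use of generative AI in scholarly publishing"

# Step 1: a small, checkable unit of work: the outline alone.
outline = ask(f"Write a five-point outline for a short article on {topic}.")

# Step 2: draft each section separately, with the outline as context.
sections = [
    ask(f"Using this outline:\n{outline}\n\nDraft one short paragraph for the point: {point}")
    for point in outline.splitlines() if point.strip()
]

print("\n\n".join(sections))
```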
So think about whether you want a one-off prompt or a step-by-step prompt. Let me give you a prompt example, which I think brings this together and hopefully helps you understand how you actually turn those principles into practice. I took an example of a journal editor proposing a new journal to their editorial team at their publisher. You see here I put in brackets which of the best practices we just saw goes into each part of this prompt.
So, "you are a journal editor": I just gave it a role. "Proposing a new journal to your editorial team at your publisher": there I'm giving it a level; I'm telling it we're addressing the editorial team at the publisher. "I want you to write an initial one-pager": that's the goal, what we want it to do, on the importance of women's health in the Global South and why this deserves to be its own journal.
Now I want to customize it; I want it to really be focused on the specific area: "the one-pager should focus on research and commercial aspects of the potential journal." So I've given it more guidelines on what I want it to do. "Don't include any specific literature on the topic or data; that we will fill in later." Those are the constraints, the guardrails, to make sure it doesn't do things I don't want it to do.
So the better the prompt, the more accurate and responsible the output is going to be. And I also gave it a model; this is the few-shot prompting, giving an example of a one-pager that it can build around. Just a word to the wise: it won't be able to read a hyperlink, so you actually need to paste the text in.
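One way to assemble that example prompt programmatically, with each playbook element labeled; this is plain string-building, not a specific tool's API, and the example one-pager text is a placeholder you would paste in:

```python
# Avi's journal-editor prompt, built from the playbook elements.
ROLE = ("You are a journal editor proposing a new journal to the "
        "editorial team at your publisher.")  # role + level/audience
GOAL = ("Write an initial one-pager on the importance of women's health "
        "in the Global South and why it deserves its own journal.")  # goal
PERSONALIZATION = ("Focus on the research and commercial aspects of the "
                   "potential journal.")  # personalization
CONSTRAINTS = ("Do not include any specific literature or data; we will "
               "fill those in later. Do not invent references.")  # guardrails
# Few-shot example: paste the full text of a model one-pager here.
# Remember, the model cannot follow a hyperlink.
EXAMPLE = "<full text of a well-written one-pager>"

prompt = "\n\n".join([
    ROLE, GOAL, PERSONALIZATION, CONSTRAINTS,
    "Here is an example of the format I want:\n" + EXAMPLE,  # few-shot
])

print(prompt)  # paste into any chat tool, then iterate on the output
```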
Now, once we get our initial output, again, don't expect it to be perfect from the outset. You may want to go back and revise your initial prompt; if you're going to be using this prompt over and over again, it's important to revise it to make it as good as possible. Or, if it's a one-off task, you may just go into dialogue and ask for a revised version.
There was a screenshot that I thought did a good job of encapsulating this: I like to think of ChatGPT and large language models as an intern on their first day of work. They have an infinite potential IQ, a potential knowledge set, but right now they're dumb as a doorknob, they need our instruction, and anywhere things could go wrong, they will go wrong.
And that's kind of like my Amelia Bedelia analogy. So we need to be very particular, specific, and exact in our instructions. And if we don't get it right the first time, that's fine; there's always the ability to iterate, to ask further questions, and to move on from there. Thank you so much. I think I've used up my 420 seconds.
I do invite you to connect; I try to post quite often on LinkedIn around issues around AI, research, and publishing, and you're welcome to connect with me there. Also, AI Tool Tuesdays is a free course that I give for publishers and for researchers on more than 24 different AI tools built especially for research. And I think that's where the future is going.
There are specific tools built to handle and tackle some of the big challenges in research. Thank you so much for everyone's time and attention today, and I look forward to seeing you in person next year at SSP. Great job, Avi. And again, he really wanted to be here; he presented in person at a conference two weeks ago, the European Medical Writers Association conference.
I can tell you that this was one of the presentations everybody seized on; you can imagine medical writers really wanting to know and understand how to write a prompt. I think Avi presents a beautiful step-by-step process for how to write a prompt and why you need to be skilled in doing that. So, thank you, Simone, we've had your presentation; we're going to have plenty of time for a conversation between Tiffany, Simone, and myself.
But we thought we'd continue to drive discussion here with Mentimeter. So: do you think AI can maintain the quality of work while increasing efficiency? What do you think? Just one person at the moment saying no.
That might be what you would expect: that we might be looking to generative AI tools as a means to increase efficiency. But can I ask you, Simone, to begin with, what do you think? I think most definitely yes, because it's a tool. For instance, I know somebody who works extensively writing code for Excel spreadsheets, and since the advent of ChatGPT, that time has more than halved, because you can go into an artificial intelligence tool and say, how do I write code to do X, and it comes up with the code. Of course, you have to go back and check it, as Avi has said. But that's an example of where AI can actually improve the speed at which we do things in the publishing space as we try to analyze content.
For instance, we already know that we can teach machine learning tools to tag content and to categorize things to make all of that easier. We don't have to do that manually. So I do think having artificial intelligence that's properly trained can definitely improve efficiency. And Tiffany, from a pharmaceutical company perspective, what are your thoughts?
I think yes as well; it's already increased the efficiency of things. I gave some examples before about partnerships that pharmaceutical companies have made where they're screening 10,000 small molecules a day, finding a molecule that can hit a target to treat a disease, and then going and investigating it. That kind of high throughput, we can't really do ourselves; we can't do 10,000 a day. So, again, you have to check everything, as long as you know that you are responsible and you're the watcher of it. What did Avi call it? ChatGPT as an intern: it's like a little assistant. So for me, it's yes as well. And for you as attendees, what are your perspectives? What are your thoughts on the ability of AI to increase your efficiency?
I might turn to the two people who said no: would you mind sharing your perspective as to why? At the risk of exposing myself with my boss at the front table: no, it does increase efficiency. I resonated exactly with the intern analogy.
I think it's very helpful with some time-consuming admin tasks, creating a zeroth draft. If you have to do something like writing a nomination letter or a quick memo, looking at the blank page may stump you; but if you can give a good couple of prompts, it gives you a really good zeroth draft to go in on. Obviously, you then come in and put it in your own words to make it meaningful. But I do think it increases efficiency, allowing you to do the multiple tasks that need to get done when there is increasingly less time to do them.
Any other thoughts? Coming from the perspective of someone who doesn't engage with ChatGPT, at least not professionally: my concern, just as an observer, would be the education it takes to understand these platforms, and not just how to use a prompt, but what data or what writings should you be giving them in order for the system to process all that and come up with things. For instance, I'm going to be updating colleagues about this meeting, and it occurred to me: if there were a way to work through ChatGPT on that, what would I have to give ChatGPT in order to come up with a good summary, memo, or description of the meeting? Not just in general, but the specific sessions I attended and their themes. I'm doing that manually, and it was just a one-off example that came into my head. But my point is, I still think a lot of us don't even know how to interact with these systems. It's fun to ask ChatGPT a question about something more generic, something known, with information mined from the web, from public resources.
But if we're talking about using actual resources, data that we have or data that we collect, I still don't know how you begin doing that. And I work in metrics, so it should be intuitive, but it just doesn't seem that way to me. That's just my thought. Great points, in relation to, firstly, the underlying corpus of knowledge from which the large language model is learning, and then the credibility of that, which is incredibly important: the authentication and the credibility of that data source.
And of course, the problem with OpenAI is that it's not open; it's opaque in terms of both its business model and the sources of content from which it's drawing and scraping information. That has to be a concern for those of us who work within scholarly publishing. We're highly reliant on knowing the accreditation, affiliation, and authentication of both the author and the content we're drawing on, because that credibility lends itself to our business, our brands, and our publications.
So maybe from a pharmaceutical perspective: pharma, in terms of publications, is very compliant with regulation and concerns around privacy and data privacy, particularly around patients. What would your perspective be, Tiffany? OK, I just want to say one thing before I get to that. On being unaware of the process and understanding it: this applies to everybody, I think, even people who do a lot of research on it. The one thing I would say is that there is a lot of free content out there,
if you're looking for it, and you can really get better ideas of how to use these tools and not feel as overwhelmed. Because I think when you hear about what these things can do, it's super overwhelming. And when you hear somebody like Avi talk, you can tell he knows a lot of the background and a lot of the information. Even just this week, I updated my ChatGPT app and saw that there's a new tab
that says Research. You click that tab and it has all different things: Consensus, a data prompt, all these different things you can do. And I was like, whoa. The other thing I will say is that there are a lot of AI platforms that we as a pharmaceutical company can purchase access to, but with signed agreements,
so that the company can't take our data into whatever AI platform they're using. And you could also always blind your data, in a sense, if you're super worried about it. These kinds of situations are still being paved out. For instance, we have an AI protocol at our company; a lot of companies don't have that.
And that AI protocol is also evolving, because the law is not settled on these things either, and that's evolving too. So there's not really a set answer right now. And it does get a little bit concerning, because yes, something could go wrong: you don't want to lose credit, or have something come up and lose your job.
These kinds of things are very concerning to people, and valid, very valid. But this is why I say, especially if you work for a company: whatever you're trying to do, make sure you're talking to the people who know what they're doing. Talk to the lawyers, talk to the library specialists, talk to everybody you need to, to make sure that whatever you're doing, you're doing it in a way that you won't get hurt by it.
And Simone, from your perspective, working within a clinical environment, the concerns around data privacy and data sources again must be apparent. Do you have within your organization, as Tiffany described, protocols and guidelines for the use of generative AI? We're beginning to develop those. As Tiffany was saying, it's a very fluid situation, because these things keep coming.
New questions emerge every day, literally. So what we're trying to do: we do have policies around publishing, as I think many other publishers do. For instance, we allow people to use generative AI in their writing, but they need to declare it when they submit their publications. But we do not allow images from generative AI. So at least those policies are in place.
But there are others that also need to be put in place, around where you source the data and what data you use in generative AI, and how you protect the rights of the original data. And there are questions that emerge around the use of that data in a product that will generate something new: what would those outcomes be, who has responsibility for that outcome, and how does that affect the community?
Particularly in patient care, we have to be careful that whatever use case we put the content to doesn't harm people who might want to use the information to treat patients. Great points. So we can move on to another question to facilitate the discussion: what are the key areas in the publishing process where you think AI can be most beneficial?
So we're going to build a word cloud from this. We started with production; we're getting into peer review, quality checks, and research integrity, with peer review at the center at the moment. So maybe again from the publisher's point of view: how do you think generative AI should or could be applied to peer review? I think I'd be very concerned using generative AI in peer review.
It comes back to how much you trust the outputs, the assessments, the summary that the generative AI might deliver. I think it could be useful as a first pass, but then you have to rely on the expert human to go through that output and tell you what's true. So if you're going to do that, there should be clear guidelines in place and clear responsibilities on both sides.
So for now, I would be skeptical about using generative AI in peer review. I think there are other areas in publishing where we could probably use it to a higher level of efficiency. And Tiffany, maybe considering the slightly different type of peer review within pharma, for regulatory documentation or clinical study reports: would this be, at this point in time, an opportunity to consider generative AI as a way to peer review those kinds of documents?
I follow a similar belief to Simone here. I think it should be used in addition. It's the same concept: AI can't critically think, so if you're not watching it, you can't trust it the way you trust a peer review; a peer review is a peer review. So it would be used in addition. Times are changing, so who knows what the future holds, but for right now, it's in addition. And what do you all think? Who would like to comment? Robert, as a Chef, I'm looking at you.
All right, I guess I'll lie in my own basket here. I think the key is that when you talk about AI here, you're talking about generative AI. I actually think tools that can increase workflow efficiency, perhaps in the production process, are probably the most easily applicable. Now, I'm a little like Simone: I can't quite see where generative AI is going.
Certainly in mathematics, you can't just put a bunch of complex math equations into a large language model and expect something correct to come out; it doesn't work like that. So I'm a little bit more skeptical of that. On the other hand, things like automated proof correction in our world could be a very valuable tool. And, to be honest, anything has to come with attribution, otherwise we're not doing it. Thanks, Robert. Anybody else like to comment? Thank you for that. We'll move on to the next question, which lends itself to efficiency within workflow.
So how do you envision AI tools interacting with your existing staff and workflows?
So, as to Robert's point, there's proofing, internal efficiencies, reference checks, identifying out-of-scope desk rejects; and, of course, automating tedious processes that don't need human oversight. Just looking at those responses, is there anything there that surprises you,
or anything you would like to comment on? I'll put it back to Tiffany: in terms of workflow processes, and with your experience within Regeneron of upskilling staff on generative AI products, what are some of the areas, specifically the processes, that you're looking at to increase efficiency and effectiveness?
You mean in the discovery process, or are you saying for hiring employees and such? The discovery process; the content side would be interesting. OK. So for us, we are using platforms that will map the relationships from very specific searches, and we are pulling information from everything that we pay for access to, but also preprints and patents; there's a lot in there. These platforms give us the relationship of whatever you're looking for: is it upregulated, is it downregulated, is it unchanged? And they will turn that into these pretty images very quickly.
You can do a search in 30 seconds that would literally have taken you three hours before. That's the kind of thing we're trying to focus on, because there's not enough time in the day to do all the things we're trying to do. And that's where we're at with AI; it's changing as we go, and new things are coming out every day. We're investigating them.
Is this useful? Is it better? What is it giving us above other things that are already available? And that's where I find there are already a lot of people who have moved very quickly forward with AI and what it can do, while some people are just learning it. So that's where I encourage people: get on the train, because it's going; just get on at any point and go. And I think that's a great point as well: if you're not familiar with any of the tools, or not using them within your workflow, within a contained, controlled space, you've just got to try them to begin with, to understand the ins and outs, and obviously have guidance from those who are already up and running.
But it is almost impossible to keep up with the pace of change. You'll be familiar with the Gartner hype cycle; I think we're still seeing that peak of expectations tipping over into the trough of disillusionment, and hoping to find some common ground in the near future. And I think that will come through these kinds of discussions. So again, Simone, from your perspective, are there tools that you're already using within your organization?
Our organization has been somewhat slow to this, but we are embarking on a process of trying to automate a lot of what we do that has previously been done manually. And part of that is not so much generative AI, but things like machine learning tools that can help you scan through your products and do some quick content classification. That sort of thing would take hours to do manually, but we can automate it.
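For contrast with generative AI, here is a minimal sketch of the kind of classic machine-learning content classification described here, using scikit-learn; the training abstracts and subject tags are invented placeholders, not data from any publisher.

```python
# "Classic" machine learning for bulk content classification: a TF-IDF +
# logistic regression tagger. Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: (abstract text, subject tag). A real system
# would train on thousands of previously tagged records.
texts = [
    "Randomized trial of a new antidepressant in adolescents.",
    "Deep learning model for protein structure prediction.",
    "Survey of open access publishing costs across societies.",
    "Cognitive behavioral therapy outcomes in anxiety disorders.",
]
labels = ["psychiatry", "computational-biology", "publishing", "psychiatry"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Tag new content in bulk: the task that would take hours manually.
print(classifier.predict(["Ketamine dosing for treatment-resistant depression."]))
```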
We're also trying to improve our production processes by implementing tools that might check things quickly for us. But I agree with Tiffany that this is here to stay, and what we need to do is find out how best to embrace it. I think there is some confusion between traditional machine learning tools and generative AI; everyone now uses the term collectively, and that causes some confusion.
But whatever it is, it's here, and we need to grapple with it and make sure we put the guardrails in place to allow us to get the best out of it. And that's a good point: the semantics are important, because we tend to use AI as the umbrella term under which generative AI now sits, while machine learning has been part of our processes for decades. It just hasn't had the AI hype sticker on it, which unfortunately it does now, and that makes it even more confusing. I'm wondering whether anybody here would like to share an insight in terms of how your organization is considering or actually using generative AI tools currently. Would that be to say that
not many of you, if any at all, are using any tools at the moment? So let's look at the next question, just to get ourselves through the next five minutes and finish up. How important is it for your organization to have control over the AI tools you implement, for example, to understand or modify the tool's decision-making process?
So this is probably not surprising, and admittedly a leading question: you would want to have control over the process rather than delegating that to OpenAI or whomever.
But how do organizations then set those controls internally? We've talked a little about setting guidelines, governance, and process, and about some of the work that I've been doing. Before November 2022, I had no understanding or expertise in AI whatsoever, so, like most of you in this room, we've had to upskill ourselves by talking to experts who know what they're talking about.
One big part of the work that I've been doing is not considering which tools to use, because that's not my expertise, but how to manage a new set of tools that can be helpful, and how to set the governance principles within organizations. In talking to some pharma companies, for example, I've been a little surprised that they have yet to establish rules, guidelines, and governance around the responsible use of AI within the organization, and that in pharma and biotech, which are driven by really strict compliance.
You would obviously expect that there. But I think across the broader scholarly publishing ecosystem, you'd want to see it as well. So again, would anybody like to share what your organization is doing to set governance principles and guidelines? Even if you're not already using these tools, they are obviously out there within the environment.
No? So then the next question would be: what measures should be put in place to ensure that the use of AI aligns with your organization's ethical standards?
So: establishing review boards; having clear policies and training opportunities; policies that are approved by all stakeholders; and copyright compliance, which has been a strong subject of discussion, including how copyright could or may apply to text, data, and images generated from large language models.
And, obviously, it's implicit; I'm really pleased to see "all our outputs undergo human expert review." Through all of this runs the expertise of human intervention, and this has been true for any digital transformation, any digital technology, over the course of hundreds of years.
Human beings have always been at the center of the design, implementation, and management of these tools, and it should be no different for generative AI. Having that human-centered approach is really, really important. So maybe, Tiffany, may I ask your perspective on how important the human is? Just as a final thought: one of the fears, I think, that some people have with AI is the skipping of steps.
Some people think that's what AI can do, and that's where we need to make sure we don't do that, right? Because especially when you're talking about the pharma industry, you're talking about patients. And while you can use these tools to save time and to save money, you can't skip steps. That's not what AI is for.
So, like I said, I'm a big advocate for AI; I think people should really embrace it. And I also think people should never believe that human critical analysis is going to be pushed aside for it, because if you were just letting AI go, all you would do is clean up messes. You would just be cleaning up every mess.
So we don't want to do that. But people get scared: look at this, it gave me a fake citation. OK, so what? The DOI is fake; throw it in the garbage. What else are we getting from this? I got a lot more than I would have if I had just done a Google search. So that's the kind of thing: just use it for what it can do, for what can make you better.
And also just make sure that you're not being excluded from it; you're a part of AI as well. And Simone, you had a session yesterday on value propositions, values, and ethics. What's your perspective? I agree with Tiffany that it's here to stay, but the human component is vital to all of this. I see AI more as an aid than a replacement.
So it will rely on the high quality of the content we can use to train it, and its effectiveness will definitely rely on the people who then evaluate the output and use it to maximum advantage. I think it will need to be continually evaluated to make sure we keep making the most of the learnings we get from it. Thank you so much.
So we're at time, and I'm going to leave you with some homework: this last question is a bit of homework, which you can do at home, on the plane, or on the train, and we'll share all the responses through Mentimeter. Just make sure in the Whova app that you register to let us know you've been at this session, and then we'll make sure we share the Mentimeter output.
With the great contributions that you've given here, we'll also share this little piece of homework, anonymized, of course. Any final thoughts or questions for our panel? OK, I can tell you're all tired and ready to go home. So thank you all for your time and attention. Can we please thank our fantastic speakers,
and thank the Copyright Clearance Center for organizing this. Thank you very much. And thank you to Avi; again, if you've not connected with Avi Staiman before, please do. Great guy, and very, very practical in terms of generative AI. Thank you, and safe travels.