Name:
Navigating the AI Frontier: Developing Robust Governance Policies for Publishers, Authors, and Reviewers
Description:
Navigating the AI Frontier: Developing Robust Governance Policies for Publishers, Authors, and Reviewers
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/99911458-377d-4051-987e-e850eccf8c94/videoscrubberimages/Scrubber_1.jpg
Duration:
T01H05M34S
Embed URL:
https://stream.cadmore.media/player/99911458-377d-4051-987e-e850eccf8c94
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/99911458-377d-4051-987e-e850eccf8c94/SSP2025 5-29 1045 - Session 1B.mp4?sv=2019-02-02&sr=c&sig=v3Hlp%2F%2BejF9678BNNFR0C4VaXWuhoCdwMlU%2F4jaJhKU%3D&st=2025-12-05T21%3A37%3A33Z&se=2025-12-05T23%3A42%3A33Z&sp=r
Upload Date:
2025-08-14T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Good morning, everyone. Thank you for joining us in the wonderful world of AI. So I'm going to kick off the session here. I'm Christina Rude, the director of electronic production and product development at the American College of Physicians. Just a few things: we abide by the code of conduct, and we support SSP's four pillars of community, adaptability, integrity, and inclusivity.
So, the format today: I'll do my introductory remarks, then I'll introduce our esteemed panelists, then the panelists will present their case studies, and the case studies will be followed by discussion questions. And then we'll leave enough time for you to ask us all the hard AI questions at the end, which we'll be happy to answer.
Please use the mic at the end of our session for Q&A, and please state your name and your affiliation. Given the overwhelming and fast-moving changes in the world of AI, we put together a document of resource links that covers AI terminology. Mind you, everything changes every day, every week, every hour, so we did the best that we could. The document covers AI governance and ethics frameworks, as well as informational articles on content and search strategies in the era of AI.
So please go ahead and take a moment to scan the QR code, and feel free to look up some of the information as we go through our presentation. That way you'll be informed about the alphabet soup of terminology and anything else we're covering today. Before we get started, thank you to those who took the unscientific, four-question survey that we presented.
We did an anonymous survey so you could be comfortable choosing your answers without giving away who you are. The survey is still open; we'll be closing it today after the session, and we'll be sharing the final results this afternoon, posting them on various channels: the SSP app, LinkedIn, and our various networks.
As of May 23rd, the survey had 100 anonymous respondents, so thank you again. We wanted to gather insights on the AI experience through four key questions. The first question: have you used AI as part of your job function? Yes, 92%; no, 8%. So that's quite a bit, and you probably use Copilot, ChatGPT, Gemini, and a whole slew of the various AI tools out there.
Now, this is where it got a little interesting. Question two: do you plan on using more or less AI as part of your job function in the coming year? The logical answer one would expect: 87% will use it more, especially as it evolves and gets better. Less: 8%, so I'm kind of curious about that. Not at all: 5%; you're just like, screw you.
I'm not going to use you. And then the third question: like I said, there's a whole alphabet soup of terminology out there, which is why we created the resource sheet. So we asked: are you familiar with terms like LLM (large language model), GEO (generative AI overviews instead of a regular SERP), NLP (natural language processing), agentic AI, and closed AI?
53% answered yes, so good for you, you get stickers. No: 4%. Somewhat: 43%. And that's expected, because a lot of terms are always coming up and changing day to day. And finally, and this will be one of the key topics we'll be talking about today: does your organization have any AI governance policies?
Given that things blew up in 2023 with the launch of generative AI, everybody's scrambling: what's legal, what's not legal, is it ethical, is it going to steal and eat all our stuff? So yes, 63% of your organizations have some sort of governance policies, and those change all the time. No: 10%. To those 10%: you'd better start talking to your in-house counsel, because you definitely need something.
And under development: 27%. Good for you, and it's always good to be under development, because it is always changing. And then, before we go, a bonus question for those of you who use AI: do you say please and thank you to your AI? Raise your hands. So you're nice to it.
What about the opposite: do you take your AI to the brink and try to break it? Anybody here? All right, good job; I fall into the latter category. These insights from the survey were very helpful in guiding our content for this presentation.
So join us as we explore the significant, often mind-boggling impact that AI and generative AI have on the world of scholarly content. We'll examine how these technologies are reshaping the very fabric of how research is written, investigated, peer reviewed, published, found, and consumed. You think that's a lot? It is, because the entire world has changed. We'll discuss the opportunities and challenges these advancements pose for traditional publisher websites, focusing on how they can potentially improve searchability and user experience, because that's vital.
We'll also consider the vital role of the researchers and technology partners who are integral to these digital publishing ecosystems. Basically, everybody's in it, so we all have to help each other out. And we'll be moving beyond theoretical discussions: we have with us, like I said, three distinguished leaders from scholarly publishing who are actively navigating this evolving landscape.
And it's not easy; it can get pretty trippy. They will provide a view into their real-world experience, no hallucinations, they're human, I guarantee you, offering candid insights into their AI journeys. This includes the hurdles they've overcome, the solutions they've implemented and, critically, the development and impact of their governance policies, such as those related to security, licensing, and ethics.
Each of our panelists will present short use cases; then we'll go into a range of critical questions. We'll explore how our panelists' organizations have shifted their perspectives on AI, the strategies they employ to manage expectations around AI's capabilities, the evolution of AI policies, the crucial aspects of ethical practices, and how they address instances of undisclosed AI use, because that's a pretty big thing.
Finally, we'll look towards the future of how AI overlaps with content delivery, production, and discovery, and how we evolve with AI. First, I'd like to introduce Paul Guinness. Paul serves as the Director of Digital Experience at the American Institute of Physics, where he drives the strategic evolution of the digital experience platform. Paul will discuss establishing AI governance and deploying AI platforms and workflows at AIP.
Next, we have Todd Ware. Todd is Vice President of Publishing at the American College of Physicians. Well, somebody's alarm is going off. Todd has been working in book and journal publishing for over 30 years; working in product engineering, production support, and publishing management roles has given him a broad spectrum of knowledge and understanding of the industry.
Todd will discuss impacts of LLMs on content discovery, policies, and experimentation in closed systems. Finally, we have Jay Patel, the Head of Sales for the Americas at Cactus Communications and an SDG Publishers Compact Fellow. He specializes in working with publishers and professional societies on the implementation of AI solutions.
Jay will focus on the use of Paperpal, R Discovery, and the AI Solutions Playground, which are some of the AI solutions available for publishing partners that enable them to embrace AI. So now I hand off the discussion to Paul with his case studies; then Todd will present, and then Jay, and then we'll open it up to panelist questions. Thank you.
All right. So as Christina said, I'm Paul, and I'm with the American Institute of Physics. The Institute's been around since 1931, and it's a federation of 10 member societies. This means we have connections to, and a lot of personal data of, a lot of people, around 120,000.
Early last year, the head of the IT department at AIP reached out to me and said, I know you're doing all this exciting technology for our website and our online presence; what do you know about AI? And we had a discussion based on what I knew about the technology and some background documents, some of which you'll find via the QR code that we shared earlier.
One of the things that came out of that discussion was an understanding that our staff did not know what AI is. They did not know what they could do with it. And more importantly, for the company, they did not know where we would get in trouble if they took certain directions. So this led to a three-month study between myself and the IT department over what the guidelines should be.
How should we govern user behavior? Now, hands up in this audience: who is remote or working in a hybrid environment? Exactly. Which means that having policy documents, and having discussions amongst your staff on issues such as AI, can be difficult unless you put procedures in place. So the main procedure we started to put in place is that when you come to the internal website for staff, which contains all our personnel documents, how to apply to get a contract, and things like this...
This is the first thing you'll see. It says Home AIP at the top, and the next two big boxes on that page are our AI guidelines and training resources. This was to try and get through to the staff that we are taking this seriously, and we want you to take it seriously. And if you have questions, come and talk to us.
So what's the purpose, and what's the scope? Well, the scope and purpose was to come up with ethical guidelines that would govern not just what our staff would do, but what we would expect from our contractors and any other third party that's handling our information or has a connection to AIP, and how we expect them to work with us. And the other part is to make sure that we're compliant with laws and regulations.
Everybody's favorite boogeyman, the GDPR: you can run into some serious difficulty with that if you do not have the right guidelines in place. And we decided that the best way to handle this was to commit to what we call responsible AI use, with complete transparency to anybody who's using it and anybody whose data is in it, and with maximum privacy protections. As a brief example, on our website we're using a customer data platform called BlueConic.
We are using some AI tools with that platform, but unless we have consent from the individual when they visit our website, via the little cookie banner you've all seen (accept cookies for personalization or marketing, whatever), we do not use any AI on the data we collect about them. So we're trying to be very, very upfront and transparent about how we're handling it.
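As a minimal, illustrative sketch of that consent gate (the names here are hypothetical, not BlueConic's actual API; Python is used for all sketches in this transcript):

    from dataclasses import dataclass, field

    @dataclass
    class Visitor:
        consents: dict = field(default_factory=dict)  # set from the cookie banner
        history: list = field(default_factory=list)   # pages read on the site

    def ai_recommend(history: list) -> str:
        # Stand-in for the real AI-driven recommendation call.
        return history[-1] if history else "default-homepage"

    def personalize(visitor: Visitor) -> str:
        # No AI touches the visitor's data unless they opted in to
        # personalization/marketing cookies.
        if not visitor.consents.get("personalization", False):
            return "default-homepage"
        return ai_recommend(visitor.history)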
The other thing we had to do was institute boundaries inside the organization. I think we've all heard horror stories in this room about people who have said, oh, I've got all my personnel documents; I wonder what happens if I stick them into ChatGPT and come up with a list of new job titles. Where people fall down is thinking they can do that with the public version of ChatGPT, which means you've just given all your private personal information to a large language model outside of your control.
If you're going to do something like that, do what we have done: you buy an enterprise license, you restrict it to the data that you provide, and ChatGPT can't do anything with that data outside of the controls that you've put on it. So you've got to define boundaries. Now, at the same time you're defining boundaries, you've also got to define a culture of innovation. How are you going to persuade people to use these tools?
Where are you going to find the pain points in your existing processes, and see if you can get AI to help alleviate them? And you do that by what I call the 85/10/5: 85% of your staff aren't going to care, 10% are going to care quite a bit, and 5% are going to care quite a lot. So you identify that 5% and you get them into meetings. We now have monthly meetings where we bring everybody together and show people what we're working on.
You ask them questions, you try and do it in such a way that they're comfortable handling the information, and you have a clear understanding of how they're handling it. As for the inherent risks of confidentiality, data privacy, accuracy, and security, I think we've got them covered with the governance document we've created. It doesn't have to be a large document; ours is only something like five pages long.
And it's written in plain English. You also define what you're not going to allow people to do with it; I've briefly gone through this long list. The other aspect I will say we've noticed is you have to remind people, if they're using your company information, to restrict themselves to company tools. It's very, very easy for someone to say, oh, I'm just going to use this tool or that tool to do stuff.
You say, no, you have to go through procedures: you have to put in a request to get that tool, and then you're going to tell us how you're going to use it. And we have very strong responses if you break those regulations: violations are immediately sent to our HR department, which we call the Talent and Culture department, and there will be disciplinary consequences if we think there's been a significant failure to follow the guidelines we've set down.
And the other thing is to commit to ongoing training. If you just publish it and say, we're done. It's not going to work. You have to go and make sure that people are following it. So here's some very brief guidelines. And I don't want to take up too much of the rest of the speaker's time. But you start small. Begin by selecting one or two tools.
The most obvious one that you may not think of is Otter.ai or Copilot for transcribing meetings. It's incredibly useful, and also incredibly inaccurate on occasion, so you still have to take notes. But start small and build yourself up, and empower your team. Try and make sure there are plenty of training opportunities they can take, so they can familiarize themselves with the tools before you start rolling them out across the enterprise.
Take Otter.ai: we did roll it out to everybody very quickly, and we quickly realized it was a mistake. We rolled it back so that only a few individuals have access to Otter.ai, because it was being used in ways that were not appropriate for the type of work that we do. And continue to assess and evolve; as Christina says, it's a changing world, and what is standard procedure now could be obsolete or even wrong in six months' time.
So continually assess and look; check your documentation and see if there are changes that you need to make. At that point, I'm going to pass it over to Todd. All right, thank you, Paul. Todd Ware from the American College of Physicians. I'm going to pick up after what Paul has gone over. Back in 2023, I guess it was, when LLMs and GPTs started coming out, we were all like, whoa, what's this?
There were a lot of presentations with The Terminator on them and things like that, and we were like, whoa, this is not what we want to be dealing with here. Maybe. But we got into a lot of experimentation and started looking at what AI, what an LLM, is capable of doing. And that's where I'm going to go with this. And we had policies that we put in place.
The policies were: you may not use any of our content in an LLM; you may not feed an LLM. And that kind of locked everything down. Then, over time, things have evolved, and we're definitely evolving with it, and that evolution is what I'm going to talk about. We're all navigating rapid changes in scholarly publishing right now.
AI, of course, is one of the big ones, along with government policy and digital transformation, and we have to stay ahead and be informed. In the evolving world of scholarly publishing, AI is now there. It's already subsuming our content as much as it can, and we can't really do a lot about that, because a lot of the things out there are allowing that to happen.
Government policy is saying how our content can be created, how it can be used, when it can be made available, when it has to be made available; all of those things are starting to feed into this combined issue that we're seeing. And one of the other things that we're looking at is data about our users.
How do we take that data and look at the segments of our users? We now have segments of our users; we know who they are, to a certain extent, and we can profile them. Then what do we do with our content that isn't segmented? Looking at that brings in AI, because it would take a lot of resources, a lot of effort, to start segmenting our content.
Take an article that is not specific to one of our segments. The physicians we have are not all internal medicine physicians, which is what we mostly work with; a lot of them are cardiologists and other types of physicians. So we started looking at how we would segment that content, and that's where we started looking at AI and saying: we can now do this.
If we do it the right way, we can start moving towards that. So with all of these things happening at once, we started looking at the technologies to use data and AI to engage online customers, to create personalized segments and content segments, and then apply that to user journeys, and at how AI will be, or is, used to accomplish that.
One of the other things we started seeing is the impact of AI-based retrieval-augmented generation, or RAG, search summarization. And one of the things we're realizing is this: a user goes to a RAG search and puts in a prompt to ask a question. That user is not coming to our site any longer. The AI is coming to our site, getting that information, pulling it back, and turning it into whatever it turns it into.
And we started tracking that. This is a very simple way of looking at it, and it's just rising every month; it's getting more and more as we look forward. So one of the things we need to look at is how we start taking our content and creating our own derivative content that is monetizable. Possibly; we hope to monetize some of it. But take a general article that gets submitted and turn that into something that is specific to segments, or into outputs that can then be used in personalized segments.
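As a rough sketch of that kind of tracking, tallying known AI-crawler hits per month from a standard combined-format access log (the bot names are real published crawler user agents; the log format and the parsing are assumptions, not ACP's actual pipeline):

    import re
    from collections import Counter

    AI_BOTS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot", "Bytespider")

    def monthly_ai_hits(log_lines):
        hits = Counter()
        for line in log_lines:
            # Combined-log timestamps look like [29/May/2025:10:45:00 ...]
            m = re.search(r"\[\d{2}/(\w{3})/(\d{4})", line)
            if m and any(bot in line for bot in AI_BOTS):
                hits[f"{m.group(2)}-{m.group(1)}"] += 1  # key by year-month
        return hits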
And of course, I went to ChatGPT and I said: this whole impact of AI, RAG search, and summarization, what's your recommendation to us as a publisher? And of course, it always has a great answer: "The key is to proactively explore ways to participate in RAG-based systems securely, ideally through partnerships, licensing, or deploying your own AI layer, before you're left out of the evolving information ecosystem."
That's a pretty incredible answer coming from ChatGPT, but we have to look at that and ask: is it right? Is what it's saying right? And how do we license our content to the LLMs? How do we look at that as another thing we're confronted with, and how do we go forward with it? That's also something we're working on: somebody comes to us and says, we want to use your content and throw it into an LLM, and either they create a walled-garden LLM where that content is secure, or they just don't know.
And they want to throw it into an LLM. They're coming to us, and we're seeing a lot of these contracts coming in, with people saying we want to use it in AI, in an LLM. That's one of the things keeping us very busy right now: we're getting all these contracts, and we have to literally put into the contract how they can use it, how we don't want them to use it, what they can do with it, and that whatever they set up has to be secure.
And that's also impacting us as we're going forward. We did a lot of experimentation with LLMs. Originally, when we looked at what an LLM could do for us, we were looking at creating derivative content: summarization, physician summaries, segmented summaries. All of these requests went out to ChatGPT directly, and I wasn't feeding the articles to it; I was asking it to go to our articles and then give us a reply. And what it gave us drew not only on our direct content, this is our Annals of Internal Medicine journal, but also on other sources, of course, all the sources in the world that it has subsumed. And it was giving us prompt replies that were not accurate, even after I told it: do not use any source other than our content. It was still giving us these inaccurate replies.
So we looked at it and said, we can't use this; we're not going to be able to use it in this way, so we have to look at a different way to do it. That way was secured walled-garden models: if we could get a secured walled-garden model set up, put our content into it, and start creating derivative content from that, we got really, really accurate results.
And that totally changed what we were getting in the prompt results and in the experiments we were doing; we were looking at how the results were affected by using a secured walled garden. And with that, we could use a secure walled-garden solution that we set up. By policy at ACP, I'm not allowed to say who we're working with on that, but there's a gentleman in the third row over here.
If he raises his hand, I'm sure you could talk to him. So thank you. We set up a walled garden, and with that, we could start creating derivatives. Now, of course, everybody's going to ask: what does that walled garden do for us? One of the things is one source of content: we feed it only the content we want.
Selected content only, with managed output through that solution. It has the full capabilities of the LLMs that we're actually directing it to, and it gives us the ability to select which one we want to use. So we have four or five options: we can say, go out to this GPT, or we can say we want multiple GPTs to do a comparison output and then get the prompt replies from each.
And like I said, the accuracy went way up, and minimized hallucination was part of that, which was very helpful. Secure integration: we don't have to worry about our data, because it's secure in a walled-garden solution. There's no AI training with the content, so the AI doesn't keep the content; no copying or saving of the content to the LLM; and only temporary caching.
So with that, it gets a package: it's got the prompt, it's got the content, it replies, and then that cache is deleted. So we don't have to worry about our content going out to the LLM and being there in perpetuity. Then we started looking at what this could give us in terms of derivatives, and we came up with a whole list of things we could do with our content.
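As a minimal sketch of that per-call package, assuming a generic OpenAI-style chat API (the actual walled-garden vendor and its interface are not named in the session, and a zero-retention enterprise agreement is assumed rather than shown):

    from openai import OpenAI

    client = OpenAI()  # assumes zero-retention enterprise terms are in place

    def derive(article_text: str, instruction: str, model: str = "gpt-4o") -> str:
        # One self-contained package per call: the instruction plus the
        # selected content only. Under the assumed agreement, nothing is
        # trained on and nothing persists after the reply comes back.
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content":
                 "Use ONLY the supplied article. Do not draw on any other source."},
                {"role": "user", "content":
                 f"{instruction}\n\nARTICLE:\n{article_text}"},
            ],
        )
        return response.choices[0].message.content

    # e.g. derive(text, "Write a 200-word patient summary in plain language.")

The same call, with different instructions, covers the derivative types listed next.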
Medical content: we do article summaries, and we could do 200 words, 400 words, any length we wanted, which was very helpful. Video scripts: it's amazing. You can feed it a prompt and say, create a three-minute video script with these two characters, in this place, talking about the article.
It'll give you a script seconds later; you've got a script. Physician article summaries specific to physicians, patient summaries specific to patients, things like Annals for Educators, which is one of the things we produce, article key terms, which are pretty simple, and other things it allowed us to do. These are just some examples of the replies we could get; I'm not going to leave them up here very long.
And then, of course, everybody says: whoa, OK, so you're creating all this great, wonderful derivative content, but you can't publish any of it, because it's not copyrightable, because the AI created it. So like I said, we're doing a lot of experiments, seeing what it can do, and we're working with our legal team. And I would suggest to anybody that does anything like this: you have to be in constant contact with your legal team to ask, is anything we just created even publishable, and can we copyright it?
I'll leave that to you and your legal teams to decide, within your policies. So, like I said, in the beginning we set up policies and were locked down. Everything has evolved now to: how does that policy really work for us, and what do we do with this? Of course, the Copyright Office guidance has been changing.
It's been evolving a little bit, saying things like: you can use AI, but you have to do so much to the output that it's substantially human-created, and things like that. Like I said, take that offline with your legal team and deal with that discussion. Then we'll go forward with continued experimenting, seeing how we can use this. One of the biggest things is that you have to have humans involved after the output.
We have our editorial team: if we're outputting something, they look at it and edit it, a lot, treating it more as a framework for what they could create. OK, it created this script, so I can use the script as a framework, edit it heavily, and then determine what we will or won't do with it. And, like I said, determining that involves the legal team.
And of course, like I said, with an AI policy there are a lot of guardrails that we have to look at, and everybody in this room has to look at these guardrails and ask: can we do something with this? A lot of experimentation is great, but what do we do with it, and how do you interact with your teams back at your publishers or wherever? Under our AI policy, there must be human review.
As far as we're concerned, accuracy, of course, is the most important thing. Bias mitigation: AI definitely introduces bias, and you have to be aware of that. And of course ethics, and all these other things too. So now I'll hand it off to Jay.
Hey, everyone. Can you hear me all right? Great. So I kind of just wanted to start with a little bit of detail on our team, because I know we think AI is set-it-and-forget-it: you throw some stuff in, it does the magic, and there are no humans involved.
But there's a lot of things that happen in the background with a lot of people that have to make this magic happen. So the way I think about AI is kind of like the Wizard behind the curtain. It doesn't do everything on its own. There's always people involved. There's somebody pulling the lever somewhere along the way to make it provide the output that you're getting.
There's content involved in getting the output you're getting. There's a lot of training involved that requires a lot of humans looking at a lot of things to tag it, organize it, and understand it; otherwise the AI doesn't work. So, at Cactus... I forgot to introduce myself, but I don't know if I need to: I'm Jay Patel, and I'm the Head of Sales.
I'm from New Jersey, if you can't tell from my accent, and I work for Cactus Communications; I've been with them for about four years, and I focus on AI solutions of all sorts. We have a very large team: over 300 people in our tech department, and more than 50 folks are assigned to our AI division.
That includes working on machine learning, natural language processing, and generative AI. We've built over 50 tools. A lot of them are internal to us, but we've also provided tools and knowledge models to publishers over the past, I don't know how many, years. We process a lot of words, well over 20 billion, and we serve over 100 million requests easily.
And we have about five million users globally across a lot of our systems. So we get a lot of user feedback, and we probably do 2,000, maybe 3,000, user interviews a year to understand what our users are doing with our solutions: what do they want to do, what's frustrating them, what can make their life easier. And then we use a lot of that feedback to make the systems better, not just for our users but also for our publishers.
So I'm just going to go through a couple of solutions and a couple of ways that we're using AI. When ChatGPT hit the scene, two and a half years ago now, I think the big question was: how are journals going to be able to use it? Paul and Todd have shared how they're utilizing it. So the big question was how you actually use it to engage audiences, and we started thinking about how to use generative AI to create extenders to articles.
How do we use it to communicate the content to the audiences that you want? Because a lot of the time, journal content is just researchers talking to researchers; it's not really researchers talking to policymakers or the public or to practitioners in the field. And it's also not very easy to go from English to other languages, and a big part of accessibility and understanding is being able to deliver your research to a researcher or a user in a language of their own, not just in English.
So one of the things that we're working on right now with Anesthesiology is using AI to translate their editor-in-chief podcast into Korean. Where we started here, and where I think policy really comes in, is that they came to us and said: we used to translate our podcast into Korean, but we haven't been able to do it because of resource issues. Can you help us?
And we said, sure, we can use AI for that. So the first question was: how good is AI? We took some of the things they already had translated into Korean, then used AI to translate the same material, and they evaluated the two next to each other. And they said the AI sounds just as good: it pronounces things perfectly fine, it gets the context.
And so now we've rolled out a Korean translation, and we're looking at other languages also. But that's just one use case for journals, and it really helps extend the audience for your content: think beyond just English. How do you get this into the hands of people who don't speak English natively or as their first language? There's a huge audience out there that does not speak English as a first language.
And so if you are not actively translating that content either into text or into audio, you are missing a huge audience that could be utilizing your content, that could be paying for your content. And of course, if you're talking to researchers that are English as a second language or an additional language, you could be missing out on citations as well, because they probably don't know what's actually in the resource because they don't understand English very clearly.
So that's just one use case. With the AUA, we actually did patient summaries. This was last year: we generated patient summaries using AI, and then we had human experts evaluate the output of every summary before it got published. And with the AUA we also created audio summaries of their journal insights. So we took their journal insights...
We fed them into an AI model, it generated audio, and we compiled it together to make a podcast. So that's just one other use case for generative AI that can really help communicate research better to a wider audience. One of the tools that I work on is called Paperpal. There are a couple of different versions of Paperpal, but I just wanted to show you how authors utilize it.
Typically, authors come in and get their manuscript evaluated for language, grammar, and all sorts of technical checks. Within this system, we're not actually using generative AI or large language models at all. This is classical machine learning, where we look at the manuscript and evaluate its fingerprints: what are they talking about?
Does this match the fingerprint of a similar article or paper that we have edited in the past? How closely does it match, as far as the grammar and the language that you might want to see published in your journal? How well does it match the technical checks that you want to run for your journal? Is the abstract 250 words or less?
Is the paper 5,000 words or less? These are technical aspects of the journal, and we use machine learning to do a lot of that checking. Based on that, the author gets a report. If they have green, everything is good to go and they can submit. If they get amber or red, there are edits they need to do; the author can pay for a report, make the edits, and then submit.
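A toy sketch of such deterministic technical checks (the word limits match the ones mentioned above, but the traffic-light logic is illustrative; Paperpal's real rules are journal-specific):

    def technical_checks(abstract: str, body: str) -> dict:
        results = {
            "abstract_within_250_words": len(abstract.split()) <= 250,
            "body_within_5000_words": len(body.split()) <= 5000,
        }
        failures = sum(1 for ok in results.values() if not ok)
        # Green if everything passes, amber for one failure, red otherwise.
        results["status"] = ("green", "amber", "red")[min(failures, 2)]
        return results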
And the whole goal here is to help the author prepare a much better manuscript for submission, so they're not getting rejected for language or grammar or for missing something they should have done. This really is aimed at helping authors be more successful in their submissions, and at helping publishers do less of the manual checking they might be doing right now. But the point here is that AI is not just generative AI.
There are so many different flavors of AI, and you don't always have to use LLMs or gen AI to solve a problem. There are perfectly good other solutions out there that will do the job, and probably do it better than a large language model will; you just need to explore your options. And I would just say, I'm a skeptic by nature.
I know you can probably ask some people here, but I don't always believe what I read or what I see. So it's always important to be skeptical about what you're getting and what someone is telling you. Again, LLMs and AI are not always the solution. One of the other products... are we good on time? I'll wrap it up; I'm going to go into my Jersey mode.
So one of the other products I work on is called R Discovery, and it's utilized by researchers globally; we have over 5 million downloads now, and it's really used to help them discover new research. We also roll out a lot of AI features into it: we provide translation, and we provide the ability to get audio versions of papers. We also do a lot of user-level tracking, so we understand exactly what our users are interested in: when they're logging in, what they're reading, why they're reading it.
And we also get a lot of customer feedback. The thing with this app is that we are now starting to provide it as a white-label service to publishers as well; if you're interested in that, you can always ask me. I can probably talk for days about this app, I love it, and it's also just a great way to engage global audiences. And lastly, there's our AI Solutions Playground.
One of the questions I always get is: how can I use large language models, or any sort of AI, in a safe, secure environment? So we built this AI Solutions Playground for publishers, where they can come in and try about 10 different solutions we've built: multilingual audio, AI summaries, alt-text generation, a rejection analyzer. There are various flavors of AI in here, from machine learning to natural language processing to large language models and gen AI.
So this is available if you want to try it out, just hit me up and I can certainly set you up with access. So you can play around with it and use AI in a more secure environment. Thank you. Thank you. All right. Thank you.
Thank you, speakers. Let's see, a couple of questions here. This first one is for Paul and Todd. Given how quickly everything has developed, how has your organization evolved its stance on AI over the past year, from that glazed, overwhelmed apprehension to adoption? How much convincing did it take, and how many meetings, how many months, before you had some sort of draft governance policy?
Well, it took about six months for us to actually have a draft, because we needed to understand what the question was and what it was we were trying to solve. Once we figured out what we were trying to solve, which was to protect our information but at the same time be innovative, it was easy to decide what it was we were trying to do. The other important aspect was to make sure it was written in plain English.
And that's one of the key lessons that we learned, I'd say in the second month: a conversation just between me, who understands some of the editorial stuff but not a lot of the HR stuff, and the IT director wouldn't work. It had to be a conversation that included consulting and asking questions of every single department, because this is going to hit every single department.
And I didn't really talk about it much, and I'm not going to talk about it right now because I think it's the next question. But our goal is to embed AI in every process we have, whether internal or external, where it makes sense. And that's the goal of the policy. I think initially we tried to lock everything down. We told all the staff, do not use this.
We went that direction, making sure that they weren't going to just start taking stuff and throwing it into it. And we've evolved since then, over the last two years or so, to the point now where, like I said, we're doing a lot of experimenting, and we are going to be looking at our policy again and making sure that it evolves with what we're doing. But for editorial, for submissions and everything else...
We're still pretty much locked down. If a submission is using AI, we need to know about it. On the production side, we're not using it other than experimenting. And across the American College of Physicians, we're allowing people to experiment, but making sure they're not dumping content away from us into an LLM. Thank you. This next question is for Jay.
So, Jay, you help publishers with solutions, and you often see two schools: "AI is evil, I don't want to touch it, I don't want to see it," and then the school of "hey, it's awesome, it'll do everything perfectly for me." So how do you temper those expectations? What are some of the key strategies, when you start having meetings, for ensuring that stakeholders understand that AI, while powerful, isn't infallible and requires thoughtful integration to maximize efficiency rather than generate more complexity or workload?
Because I know when we meet, like with our production staff and editors, it's like: oh my God, another 20 hours of work that we have to do. So how do you placate that? How do you set those expectations? Yeah, I mean, for me, like I said, I'm a bit of a skeptic. So I think where I really start is to better understand what the workflow is first, not to just say, hey, this is going to solve your problem for you.
So we look at the workflow, and we look at the things that you can easily automate with AI to begin with. Then we ask what you could potentially automate in the future, once you're comfortable enough with AI in your workflow. That's really where we start. The other thing, and I didn't really get a chance to say this: when I talk about using AI in the workflow, or AI for marketing, or anything like that, people, and Todd and I have had these conversations, of course, get a fear of what's going to happen with their data.
Are you going to use it to retrain your stuff, or is it going to get leaked out to the general public? The fact is that we have very, very strict policies about how data from our customers gets used, and that includes our user data and publisher data. So we are very careful to always silo it and secure it in their own environments, so it doesn't get used for things the publisher didn't tell us to use it for.
So it's really important, as both Paul and Todd pointed out, to have really, really good policies in place. If you don't have them, get them in place soon, both for internal and external use. Be very transparent about what you're going to use AI for, and think long and hard about your workflow: what are the things that you believe you can automate easily, and what are the things that are going to be more complex to automate?
And don't make a decision because you have a fear of missing out, or because you have people you report to pushing down on you, going: well, society X just did this, why can't we do it? How soon will we be able to do it? So always be sure, and I say this to our clients, that whenever you're making a decision, it makes business sense.
Make sure you're able to make money from it in the long run. Just don't do it because everyone else is doing it. So I think it's really, really important to take your time when you're implementing AI and really understand what you want to get out of it. Just don't do it because everyone else is doing it, or because people are pushing you to do it, because you're probably going to end up wasting a lot of money and a lot of time, and you're probably going to have to find another solution down the line again.
Thank you. Yeah, just because the rest of the world is insane and doing things irrationally doesn't mean we have to make rash decisions; approach it with focus. So, one last question to our panelists before we open up to the audience, this one about the future of content discovery. With traditional search engine optimization principles no longer enough to drive content discovery, as Todd's chart showed, AI traffic is just going up and up, sound effects included.
Are you seeing your organizations adapt their measurement strategies in this evolving landscape? And the same for Jay: are you having publishers come to you saying, hey, we need to think of new strategies to get our content out there? Or are people still stuck, wanting to try these new things but still measuring the old way?
Real quick: we're reporting it up, and then there are conversations. Strategy is something that we're starting to talk about, but it's definitely: what are we going to do when our content is being looked at by AI for a user, and not by the user? And we're hoping there'll be things set up where citations appear in all of those replies, as opposed to not being there, which we do see in a lot of cases.
For us, we've been following a philosophy for at least three and a half, maybe four years now, where we want a direct connection to our audiences. So we're not passively waiting for someone to search Google and find us. We're making a determined effort to try and build a relationship with the individuals we think need us, and to strengthen that relationship by reaching out via newsletters and emails, basically building a direct connection.
Don't rely on the passivity of the Google search engine anymore. And like everybody else, we've noticed our old website was getting slammed with AI bots. Our new website has a lot of controls in place to stop that from happening, so they can't scrape our content. But that's the biggest philosophy change that we've had based on the changing environment.
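As an illustrative sketch of such crawler controls (the bot names are real published AI-crawler user agents; the enforcement function is hypothetical, since the session doesn't describe AIP's actual stack):

    AI_BOTS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot", "Bytespider")

    # robots.txt asks compliant AI crawlers to stay away entirely.
    ROBOTS_TXT = "\n\n".join(f"User-agent: {bot}\nDisallow: /" for bot in AI_BOTS)

    def allow_request(user_agent: str) -> bool:
        # robots.txt is only advisory, so also refuse matching requests
        # at the edge before they reach the content.
        return not any(bot in user_agent for bot in AI_BOTS)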
All right, well, thank you, panelists. So we're going to open up to any questions. Can I say something about this question? Yeah, just really quick. I've been reading and thinking a lot about this, and SEO, if you haven't heard, is officially dead. If somebody tells you otherwise, they're wrong. So what publishers need to do is stop spending time on SEO, because it's not going to work.
They need to really start thinking about how to influence the chatbots, and to do it in an ethical fashion. In order to do that, publishers need to move away from this obsession with the article, and really think about, first of all, what bits of the article are monetizable and how to monetize them using AI technology. The second thing to think about is how to create extenders that not only engage the audience, but are specifically written for AI chatbots, in a fashion that influences them.
To cite your stuff or attribute your stuff, rather than second- or third-hand versions of your stuff written by somebody else in a blog or a review or somewhere else. If you're not thinking about this, you will not only be hurting yourself and your publications and your sites, but you will also be losing out on revenue. So it's really, really important that you start thinking about these things.
And it's something that I'm thinking about on a daily basis; I'm pestering my marketing folks about ways we can help publishers influence chatbots. So, yeah, that's all I wanted to say. Yeah, and even if you read the Search Engine Roundtable newsletter, you hear Google saying SEO is dead, but at the same time, they're always trying to figure out their competition with themselves.
They still have advertising with SEO, and Google Ads, but then there's the AI Overview. So they're constantly trying to figure out their own rendition of content discovery, because sometimes they don't even know what it is. So we'd like to open up to the floor now; please state your name and your organization. Just turn the switch on.
How's that? Annette Flanagin, JAMA Network. Thank you; your experiences and your policies are really insightful and very helpful. For those of you who are actually publishing derivative content, or generating content that you know is being published:
Are you, or will you, disclose to human readers that it was generated by AI? So, one of the things I did not say is that the platform I've just built has AI built into it; it's baked in. When an author has put an article into the system, they can request that the AI tools inside it generate the summary and the headline.
And then when that goes to the copy editor, there's a big green piece of text above it saying: this is generated by AI. At every step during the process, until it's been read and checked by a human, there's a big green label that says this is AI generated. When it gets to the published state, we don't actually flag it as AI generated at that point, because it's gone through three sets of eyes, and they probably tweak the stuff anyway.
But because we deal with a lot of scientific content, we couldn't trust the system to simply do it automatically. Thank you. Like I said, we're doing a lot of experimentation to see what it can do. Taking that to a whole different level, creating derivatives that would be published: that's not something we're looking at doing right now, but anything we did with any of that would go through a whole legal review to determine if it's even usable.
And then, if we edit it, like an enormous amount of editing, is the resourcing to do that worth the time? In some cases, if we do edit it to the point where a legal review says, yes, you can publish that, because it's no longer anything like the AI creation, that's a whole conversation that we have to have, and we continue to have those conversations. But like I said, we're still in an experimental phase. Thanks. Hello, I'm Andrew Harmon with the Endocrine Society. There are a lot of solutions and ideas I'm hearing about here, but from the society point of view, or at least as I've been observing it for many years, I'm only getting a smidgen of perspective about what societies often consider themselves to be mission-based on, which is building, enhancing, expanding, or nurturing communities.
And I'm just wondering how that factors into your product development, or product adaptation, for your journals and things like that. Because inasmuch as we pump out articles, one right after the other, and inasmuch as we now hear about creating summaries, audio summaries, translations, there's still a lot of volume-based emphasis here. I'm wondering where the discussion is about communities, affinities, subspecialties, not just patient outreach, but the expertise communities that we in the society world tend to focus our attention on.
Where does that come into play in all this strategy, planning, and adapting of products and solutions? Like I was talking about before: when we get an article, the article will have a title and keywords, and that doesn't say anything about the subspecialties that are represented within the medicine we cover. One of the great things with AI is you can pull out:
what are the subspecialties that could specifically use this article? That article can then go out to them as a segment: as a cardiologist, this article may not sound like it's for you, but it actually has relevance to you and can be used. Does that answer your question? Yeah, I was thinking much more about actual examples of that.
That would be an example. If you look at physician subspecialties, and then patient summaries, all of those things can be derived from that specific article, and you would not get that article to any of those audiences without doing that. And I think that's something that we can experiment with and see what's possible there.
I'm also going to give a shout-out to the American Geophysical Union, which is taking a very interesting approach, using AI to try and build relationships between authors. When you go to the website and log in as a member of AGU, and only members of AGU can get this, it will look at the papers you've published and ask: do you recognize, or have you met, these researchers who have published in a similar field to you?
So it's using AI to try and build community. They're also using the same tool for their annual meeting, where it used to take 150 people four days to process and assess all the abstracts and split them into sessions. The last time they did it, it took 75 people half a day, because they used exactly the same tool. Yeah, that's great. Also, to add to Todd's comments:
So we not only are we a national membership organization, we also have state chapters. So as our data strategy evolves and gets more mature, we'll be able to use some of this AI capability to reach out and work with our state chapters to get them the content that they need for their practices, for their communities that have the highest needs for certain treatments or wellness issues.
Any more questions? Or does anyone want to add one thing to that, or any final thoughts from our panelists? I mean, you can also have your marketing teams use it in exactly the same way, to market directly to your segments, and I think that's powerful. To add to that, yeah: the whole idea of community building is really important. As we start dealing with the impacts of AI on discovery and traffic to your sites, it's really important that publishers are looking at building communities around their content.
Unfortunately, and I've seen this over many years of working with publishers, many publishers don't have a really good understanding of who their audiences actually are, because they don't really collect a lot of information on them. With AI, you can actually get to the position where you understand who your audience is without cookie-ing them or collecting information on them, just by using what they're reading: looking at their reading history and saying, OK, somebody might come to ACP, but maybe they're not internists; maybe they're really interested in cardiovascular disease or arrhythmias, or they're interested in COPD or something.
Well, you wouldn't know that unless they had a profile set up and actually selected that stuff. But if you can just track what they're reading, then you can use AI to say: hey, you're reading this article; by the way, you might want to read X, Y, and Z, because other people reading similar articles are also reading them. So I think that's really important to keep in mind: AI can help you build these communities by better understanding who's visiting your websites, and by better understanding your members as well, so you can do better member outreach and author outreach by understanding their behavior.
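A toy sketch of that reading-history idea, using TF-IDF similarity from scikit-learn (real recommender systems are richer, but the principle, inferring interest from what someone reads rather than from a profile, is the same):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def recommend(read_texts, candidate_texts, top_n=3):
        vec = TfidfVectorizer(stop_words="english")
        matrix = vec.fit_transform(read_texts + candidate_texts)
        # Average the reader's articles into a single interest vector.
        profile = np.asarray(matrix[: len(read_texts)].mean(axis=0))
        scores = cosine_similarity(profile, matrix[len(read_texts):])[0]
        ranked = sorted(enumerate(scores), key=lambda pair: -pair[1])
        return [candidate_texts[i] for i, _ in ranked[:top_n]]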
And that's something that we do a lot because we serve a community of millions of researchers and they come from all over the world. So we have folks in Asia, we have folks in Africa, South America, Middle East, and each of them, we have to customize our platforms and personalize our platforms to their needs and to their capabilities. So I think AI holds a lot of promise in building communities, but it starts with better understanding who your audience is.
And if you don't know who your audience is, you're not going to be able to build a successful community. All right, I think we need to wrap up. To be short and brief: people inside your organization are going to start playing with AI, and it's best to try and have a coordinated strategy on how you're going to handle that, because if you keep everybody siloed, they're simply going to work in parallel, doing exactly the same things, or they're going to go off on a completely different tangent.
And the stuff that we do on the platform internally can also be useful externally. We want to turn all our employee documentation into a ChatGPT-style Q&A that staff can ask questions of; you can do exactly the same thing for Q&A on a website. All right, well, I want to thank our panelists for their time and expertise today. Oh, one more question.
Oh, sorry. The "SEO is dead" statement was pretty thought-provoking, and some of the comments made me wonder if we also need to think about serving AI as someone that's actually searching for the content, with the patient summaries and the other summaries. Is that maybe an opportunity in the SEO space? Thinking, OK, Google searches are down, but people are typing "hey, I want to know about cancer research" into GPT.
Well, then I need to make that findable via RAG; there was a figure that showed the increase in RAG search traffic. So I'm curious if you think there's an opportunity for SEO to be optimized, to reallocate resources, instead of just thinking it's dead. Yeah, so in terms of content: scholarly publishing doesn't evolve so quickly, but, as yesterday's keynote speaker said, be excited, create exciting content.
More search queries will be in a natural-language question format, so we have to start thinking about the type of content: how will it directly answer a question that is asked, as opposed to just spitting out a whole bunch of facts and figures? That's where the derivative content comes in handy: when the resource links show up in the AI overview, the person can go to the article page, which will have that derivative content plus the fuller study that's full of the facts and figures.
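One established way to make that question-shaped derivative content machine-readable is schema.org FAQPage markup; a minimal sketch with placeholder text (the question and answer here are illustrative, not from any real article):

    import json

    faq_jsonld = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [{
            "@type": "Question",
            "name": "What did this study find?",  # placeholder question
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Plain-language answer derived from the article.",
            },
        }],
    }
    # Embedded in the article page as <script type="application/ld+json">.
    print(json.dumps(faq_jsonld, indent=2))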
Yeah, we do need to consider AI as an audience and think about how you're going to develop content going into the future to influence the AI overlords. It's really important that you think about content being more than just the article: it's what comes after the article, and not just thinking about human audiences but also about an AI audience, because that's really how people are going to find your content, through AI search engines or AI-enabled search engines.
People are still going to come to your journal because they like it, because they trust it, but they're also going to get more of their information from these AI overviews or these AI search engines. All right, we're running out of time here. So, Todd, make it quick. No, no.
OK well thank you, everybody, for your time. So I hope you found this useful.