Name:
AI as Discovery Layer: On-Platform Tools That Transform How Users Find Content
Description:
AI as Discovery Layer: On-Platform Tools That Transform How Users Find Content
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/89b60527-985a-4c77-a351-676c23ce2a4c/thumbnails/89b60527-985a-4c77-a351-676c23ce2a4c.jpg
Duration:
T01H00M25S
Embed URL:
https://stream.cadmore.media/player/89b60527-985a-4c77-a351-676c23ce2a4c
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/89b60527-985a-4c77-a351-676c23ce2a4c/PSWS26 April webinar.mp4?sv=2019-02-02&sr=c&sig=EIX8prSSA8kh5KO8lw%2BSEsTWh0uWg43gc7Iab6O08oI%3D&st=2026-04-17T08%3A05%3A33Z&se=2026-04-17T10%3A10%3A33Z&sp=r
Upload Date:
2026-04-14T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
STEPHANIE LOVEGROVE HANSEN: Hello.
HANSEN: Welcome. Welcome to the webinar today. My name is Stephanie Lovegrove Hansen. I'm the VP of Marketing at Silverchair. So thank you so much for joining us. This is the second in the 2026 Platform Strategies Webinar series, and we're really looking forward to today's discussion. We have an excellent panel. But before we do, I'm just going to cover a few logistics. So this year's webinar series looks at the way the definition of the end user has changed in recent years, and how we can design platforms and strategies that effectively serve both the human and the machine users.
HANSEN: This free webinar series features three virtual events, this being the second, so you may view the recording of the first event and sign up for the third one, which will take place in May, and cover MCP applications on our website. The event is being recorded, as the voice told you, and a copy of the recording will be made freely available on the website and via email afterwards. Finally, at the end of the event, you'll see a survey.
HANSEN: We'd really appreciate if you take a moment to give us feedback so we can continue planning events around the topics that you want to hear more about. So with that, I'm going to hand it over to today's moderator, Emilie Delquié.
EMILIE DELQUIÉ: Thank you, Steph. Hi, I'm Emilie Delquié. I'm the Chief Product and Customer Success Officer for Silverchair. So this topic is very close and dear to my heart and has kept us all very busy for the last 18-plus months. I'm joined by three wonderful panelists, whom I'm thrilled to have today, and I'll pass over to you all in a minute. But first, just to get everybody thinking about the topic of what we're calling on-platform discovery.
EMILIE DELQUIÉ: So discovery tools that you would build, that you would think about offering to your users while they're on the platform. So let's start to get us all thinking and also understanding where everybody is coming from. Let's start with a quick question that Steph is going to launch. And then while you're all voting, let me just tee up a little bit more about the context.
EMILIE DELQUIÉ: So discovery has always been central to the value publishers provide, but AI is reshaping how researchers navigate and engage with scholarly content. This session is going to focus on AI-powered discovery tools integrated directly into publishing platforms. So think about recommendation engines, research assistants, intelligent navigation systems that really are meant to help users surface relevant content more effectively.
EMILIE DELQUIÉ: So with the panelists, we're going to examine how these tools are changing user behavior, what makes on-platform AI discovery successful, and how publishers can balance enhanced discoverability with maintaining the trusted, authoritative environment that defines scholarly publishing. Our three panelists are going to share insights from implementing AI discovery features themselves and discuss the evolving relationship between traditional search and browse experiences, the implications for usage, and AI-assisted content exploration.
EMILIE DELQUIÉ: So before we go into introducing-- before we introduce our panelists, let's see how you're all thinking about this now. Ooh, excellent. A good broad range in the group. So it looks like a lot of evaluation going on, no commitment yet. And it looks like also a good-- just under a third also watching and waiting.
EMILIE DELQUIÉ: And kudos to the 15% who are in it and actually seeing results, with their users already engaging with it. So excellent. Glad you're here. This is really good to see. And I think we're going to have a wonderful conversation. So I'll pass it over to you ladies to introduce yourselves. So please introduce yourself.
EMILIE DELQUIÉ: And can you also tell us why on-platform AI discovery is important for you, specifically? Cheryl, we'll kick it off with you.
CHERYL FIRESTONE: Hi. Good morning, everyone. I'm Cheryl Firestone. I'm the Senior Manager of Publishing at the American Academy of Pediatrics. I think the most important thing for us is to make sure that our content is delivered through an AI that can be trusted. That's why an on-platform AI tool, using Dynamic Discovery from Silverchair, is really important.
CHERYL FIRESTONE: Our users really trust the information that comes from the Academy, and they need to know that it's not hallucinating or getting information pulled in that's outside of our policies. So the on-platform tool has really been the perfect way for us to implement AI.
EMILIE DELQUIÉ: Excellent. Thank you, Cheryl. All right. Tanya up next.
TANYA LAPLANTE: Hi, I'm Tanya Laplante. I'm Director of Product Platforms at Oxford University Press. And we have launched an AI research assistant for our Law Pro product and an AI discovery assistant across all of our books and journals content. And it was really critical for us to solve existing user problems with AI. And search has never been an excellent, delightful experience.
TANYA LAPLANTE: So we knew that we could use AI to improve that experience. And the other thing that we were really focused on is that, with the impact of AI on referrals and usage, once we do have users on the platform, we really want to ensure that we have opportunities for them to engage and discover content in an effective way. And AI is certainly more effective than existing keyword searches. So that's where we are.
EMILIE DELQUIÉ: Thank you, Tanya. And really happy to have a different perspective on the panel, as well. Tasha.
TASHA MELLINS-COHEN: Hello. Tanya set me up perfectly because she mentioned AI usage. I am Tasha Mellins-Cohen. For my sins, I am the Executive Director at Counter Metrics. We have spent the last eight months really getting into the weeds of how publishers can measure and report usage through AI tools, whether those are, well, primarily on-platform agents. But anything that you are setting up as a research assistant or similar, how can you report that usage back to your institutional subscribers?
TASHA MELLINS-COHEN: Because Counter has always said you can't report bot usage, and obviously, that presents an enormous problem in a world where usage is increasingly driven by bots. So slightly different perspective, but hopefully, a useful one.
EMILIE DELQUIÉ: Excellent. Thank you, Tasha. I think everybody will be delighted to hear that we will not be sharing slides during this conversation. This panel is really just purely a conversation between the four of us. That said, don't hesitate to put your questions in the chat, and we'll reserve some time at the end to ask them. So really, just again, to frame the conversation, we are indeed talking about features that are on your existing site to help end users, while they are on your site, discover additional content within your site.
EMILIE DELQUIÉ: I'm using the words "on your site" obnoxiously many times, but that's because there are so many user journeys at the moment. I really want to make sure that the framing is clear for today, because indeed, there are just so many options right now to get to publishers' content. And I think the discussion today is really centered around the user experience on the site: how to enhance the discoverability of the content that already exists, and make sure that the user has an opportunity to see other highly relevant pieces of content that will help them in their research.
EMILIE DELQUIÉ: So we're going to tackle it from a couple of different angles, but let's just start very broadly to set the stage. I'll go. So I'll go in reverse order on my screen. But for the three of you, how has your understanding of discovery changed in the past two or three years? Are you solving the same problem you were solving before AI? And how has the problem shifted itself? I'll start with Tanya.
EMILIE DELQUIÉ: I will shuffle the order throughout, but start with Tanya.
TANYA LAPLANTE: Yeah, so I think discovery has definitely changed in terms of how we're thinking of it. And I think the biggest change for us is that before, users were trying to find content and sources. They're still trying to find those, but more directly, they're trying to find answers to questions that they have. So in order to do that, they need to be able to enter their question into a search bar, rather than just keywords, and then find the answers themselves.
TANYA LAPLANTE: And that has been a really great step forward. However, we have to be able to answer those questions in a really trusted way, and in a way that doesn't ruin our reputation or lead researchers astray. And that's why, when we were building the native solutions, we were really focused on RAG-based solutions, ensuring that all of the answers and results came directly from the content that we publish, so that we know that what we're serving to our users is trustworthy, peer-reviewed content.
TANYA LAPLANTE: So discovery has definitely changed. And I think looking at that's really what led us to bring forward these AI solutions, because we want to be able to support those natural language questions and search journeys.
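The closed-corpus RAG pattern Tanya describes can be illustrated with a minimal sketch. This is not OUP's implementation: the keyword-overlap retriever, the function names, and the sample corpus are all illustrative assumptions (a production system would use embedding search and an LLM to synthesize the answer), but the key property is the same — responses are composed only from retrieved publisher content, and the assistant declines when nothing in the corpus matches rather than falling back on open-web model knowledge:

```python
import re

# Toy closed-corpus retrieval: answers may only be drawn from the publisher's
# own documents, with citations back to the source.
CORPUS = {
    "doc1": "Measles presents with fever, cough, and a characteristic rash.",
    "doc2": "Peer review assesses methodology before publication.",
}

def tokenize(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query; drop zero-score docs."""
    q = tokenize(query)
    scored = sorted(
        ((len(q & tokenize(text)), doc_id) for doc_id, text in corpus.items()),
        reverse=True,
    )
    return [doc_id for score, doc_id in scored[:k] if score > 0]

def grounded_answer(query: str, corpus: dict[str, str]) -> str:
    """Compose a response strictly from retrieved passages, with citations."""
    hits = retrieve(query, corpus)
    if not hits:
        # Refuse rather than hallucinate: nothing outside the corpus is used.
        return "No supporting content found in this collection."
    return "\n".join(f"[{doc_id}] {corpus[doc_id]}" for doc_id in hits)

print(grounded_answer("What rash comes with measles fever?", CORPUS))
```

The refusal branch is the design choice that distinguishes this from a general chatbot: an off-topic query returns an explicit "not found" rather than a confident open-web answer.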
EMILIE DELQUIÉ: Thank you. Cheryl, same question.
CHERYL FIRESTONE: Yeah, so I completely agree with you, Tanya, that the way that people are searching is very different. We started to see a lot of traffic coming in from ChatGPT. We surveyed our users, and we know that they're using a lot of the AI based tools, and that they want our information to come through the AI based tools. Obviously, our goal is always to get people to come to our site. So there's a little bit of a conflict there with how do we get people to come to the site.
CHERYL FIRESTONE: And so by building the on-platform tool-- we're in a pilot phase right now. We haven't actually rolled ours out yet, but we are also using a RAG-based model for some content around pediatric infectious disease. It's our Red Book, which has been around since 1938. It's now in its 33rd edition. It's the most trusted pediatric infectious disease resource out there.
CHERYL FIRESTONE: And we know that people really want that information. In our recent survey, we actually asked: if we had an AI-based tool on our site, would you be likely to use it? And 90% of our users said yes, we would use an AI-based tool on the site, as long as it was only pulling information from the Red Book. So we're really encouraged by that, and we know that there's a need out there for people to be able to ask the questions that they've really gotten used to asking through tools like ChatGPT.
CHERYL FIRESTONE: And we know that if we have something comparable, it will hopefully drive more traffic to our site, and that people will be able to engage with our content the same way they're engaging with other content through other AI tools.
EMILIE DELQUIÉ: Thank you. And Cheryl and Tanya will actually spend a bit more time around the concept of trust later in the conversation, as well. Tasha, from your perspective, what are you hearing from researchers, from users? How are you seeing those behaviors shift?
TASHA MELLINS-COHEN: So this is perfect timing. I was at the UKSG conference a couple of weeks ago, and a lot of my conversations were with librarians who were really worried about information literacy issues. What they're seeing is a pattern of behavior where students, but also faculty and librarians themselves, are taking responses generated by AI tools and trusting that they are correct, without necessarily then going and checking the source materials. That creates quite an issue when people are using these off-site tools, so for example, ChatGPT, or Claude, or Gemini, or any of the others.
TASHA MELLINS-COHEN: You may or may not know what the sources of material are that these tools are using. Whereas something like Pro-- it's not a Silverchair platform, but something like the ProQuest research assistant, the libraries are reasonably confident that the answers being generated from that source are going to be at least based on scholarly content. The challenge that they're seeing and the thing that they're trying to solve, quite apart from the question of usage, is how do we as an information community-- so I am looping in publishers here as well as libraries.
TASHA MELLINS-COHEN: How do we make sure that new researchers, that students, know how to assess the quality of the responses that are being generated by these AI tools, and know how to then go and find the source materials, and check that the source materials actually say what the AI tool says they're saying? Which made more sense in my head than I think it did coming out of my mouth. But it's a huge, huge question and source of debate in the library community at the moment: that trust, and security, and just basic research skills are being eroded by this reliance on an external brain.
EMILIE DELQUIÉ: Yeah. Well, thank you for bringing this perspective to the discussion, because it's absolutely critical. And I'll do a quick plug for the Kudos research that's going on now, which I know a lot of publishers are involved in, and we are as well: "Taming the Crocodile," which is reaching out to thousands of researchers, asking them how they're interacting with these AI search tools, and how much they trust the results. And seeing a preview of the initial results, some of the information out there is a little scary.
EMILIE DELQUIÉ: To your point, Tasha, there is definitely some education that needs to happen. So stay tuned on that research. I know it will come out. The results will come out in the spring, May, June. But it's going to capture quite a few potentially sobering realities. Let's go back to-- let's stay with our topic of AI assisted discovery.
EMILIE DELQUIÉ: How are you seeing researchers' behavior evolve? So both just in terms of how-- compared to traditional search, are you finding in terms of both the technology, but-- not in terms of technology, sorry, but in terms of how the users are actually looking for content? Are you starting to see some evolving behaviors there? Yeah, Tanya going off mute. Go ahead.
TANYA LAPLANTE: Yeah. I think what we're seeing, so we have the discovery assistant, which essentially searches across all of our books and journals content. It supports natural language queries, but just returns results. So it's not that chat experience. And what we're seeing in that is that the users are already becoming quite used to going to something like ChatGPT and being able to ask the questions, and they want the answers.
TANYA LAPLANTE: So we just want to provide better search results via that tool. That's the function of that tool. But we're finding that a lot of people expect it to do more and to have that "give me an answer" experience. And so that's been interesting, because it's using AI to improve the relevancy of search results, but it's not a proper chatbot. And then we have what you would consider a chatbot in the AI research assistant, which only works across our Law Pro practitioner product.
TANYA LAPLANTE: And that is being used in the way that ChatGPT is used. So I think it's interesting of having to set users' expectations that every time they see the AI label, they're expecting that chat experience, whereas it can still be powered by AI and an improved experience enhanced by AI. But it's not necessarily that chat experience that we're seeing.
EMILIE DELQUIÉ: That makes sense. Cheryl, is it still somewhat early days for you all?
CHERYL FIRESTONE: It's early days. So the Red Book AI assistant has been through quite a few rounds of testing, and these are all tested by our pediatricians, so they know what answers they're looking for from the Red Book. It's been really positive, in the way that people have been surprised at how the AI tool delivers the answers in a way that is very satisfying to them, I would say, so far. We've had really positive feedback, and they like the way that the very restrictive RAG model only takes the data from the Red Book.
CHERYL FIRESTONE: It doesn't bring in any external unvetted material. It only knows what the red book knows. And that's really, really important to the way that they want to receive the information and consume the information, because they don't have to go back and check those sources as exhaustively as they would if they were getting all the AI noise from other sources, like ChatGPT, for example, where it's going to pull in from a wide range of different websites and different phases of knowledge.
CHERYL FIRESTONE: So our very narrow, restrictive approach right now is, I think, going to be very popular and satisfying. Our biggest roadblock after that is, how do we expand this on the platform to include more information? And how do we keep it from bringing in too much of that noise? So that's really going to be the focus as we move forward with AI tools: how we ensure that it's only bringing in what we want it to bring in.
EMILIE DELQUIÉ: That makes sense. Let's zoom in on a more specific sliver of this topic, focusing on the on-platform versus off-platform tension. As we said earlier, there are a lot of ways somebody can get to your content right now. So when you think about making your content accessible through external AI tools, so ChatGPT, Perplexity, and many others, how do you think about investing in on-platform discovery versus making your content available through a third party?
EMILIE DELQUIÉ: Can you walk us through the tension there? And are they complementary strategies?
TANYA LAPLANTE: Yeah. I think for us, it's quite complementary. We want to be in as many spaces as we can be in. And AI, in some ways, keeps people from going to versions of record, because they're getting the answers they need where they are. But at the same time, there are so many AI tools out there at the moment that there are so many more paths into your content, if you are able to surface your content in those spaces and become discoverable.
TANYA LAPLANTE: So we have opened up the content, the free layer of our content, to be explicit, to ChatGPT, and Claude, and Perplexity. And we were hoping that would increase referrals to our site, and then views on our site, and we have seen that it has, which has been great. And then it complements what we're doing on the platform, because what we found is that, even though referrals in general are going down and usage is going down, the time spent on-site has gone up.
TANYA LAPLANTE: So average session duration has gone up about 30%. So what that suggests to us is that the users who are continuing to come to us are those really engaged researchers who might be getting an answer from ChatGPT, but are then saying to themselves, I need to go and look at the versions of record and the peer reviewed content, and make sure that what I'm seeing is accurate. And once they're on our site, we want to keep them engaged, and we want to give them as many more paths into our content as possible, which is why we're doing the native AI solutions.
TANYA LAPLANTE: Because if they want to continue that search journey, we want to provide a better search experience for them.
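The "average session duration up about 30%" figure Tanya cites is a simple aggregate over analytics data. A hedged sketch of the arithmetic, with made-up numbers chosen only to reproduce a 30% lift (the session tuples and function names are illustrative, not any real analytics schema):

```python
from datetime import datetime

def avg_session_seconds(sessions: list[tuple[datetime, datetime]]) -> float:
    """Mean session duration in seconds over (start, end) timestamp pairs."""
    if not sessions:
        return 0.0
    return sum((end - start).total_seconds() for start, end in sessions) / len(sessions)

def pct_change(baseline: float, current: float) -> float:
    """Percentage change relative to a baseline value."""
    return (current - baseline) / baseline * 100.0

# Illustrative numbers only: a 300-second average rising to 390 seconds
# is the ~30% lift described in the discussion.
before = [(datetime(2025, 4, 1, 9, 0), datetime(2025, 4, 1, 9, 5))]      # 300 s
after = [(datetime(2026, 4, 1, 9, 0), datetime(2026, 4, 1, 9, 6, 30))]   # 390 s

print(pct_change(avg_session_seconds(before), avg_session_seconds(after)))  # → 30.0
```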
EMILIE DELQUIÉ: That makes sense. Tasha, what are you hearing from your community? Or do you have a follow up question?
TASHA MELLINS-COHEN: Not so much about what I'm hearing from the community, but I have a provocation for this group. Why is this so different from working with EBSCO, ProQuest, Web of Science, Scopus, ResearchGate, Google Scholar, all the other places that we've been syndicating our content for, certainly, as long as I've been in the industry? So 25 years ago, we were talking about syndication. This is just a new type of syndication.
TASHA MELLINS-COHEN: And I think if you're not talking to these groups now, they're going to have your content anyway. A lot of your metadata is openly available, even if they can't get the full text. If you make agreements that benefit you today, you're not going to be in a position in five years' time, or five months' time, where you're going cap in hand and saying to these groups, can you tell us how much usage we're getting for our content on your platform.
TASHA MELLINS-COHEN: So I was a society publisher for 20 years, and I begged EBSCO, and ProQuest, and Elsevier, and Clarivate to tell me what the usage was of my material on their discovery services, and I got zilch. I knew I had to be there, but I had no visibility over what the return on that investment was. We're in a position with AI today that we can require that information to make those ROI justifications.
TASHA MELLINS-COHEN: Do it now before you run out of time. Sorry. That's my rant for the day.
EMILIE DELQUIÉ: No, that's great. Absolutely. It's a very important perspective here. And we were asking very similar questions in a different context 15 and 20 years ago. The names of the players are different, but the concepts are absolutely the same. You're right. And before we move to the next question, which will be about trust, because that's one of the big trade-offs.
EMILIE DELQUIÉ: Cheryl, on that topic, how do you weigh the options? And same question: do you see these as complementary strategies? How are you thinking about it?
CHERYL FIRESTONE: Well, my brain tells me that they're conflicting, but I'm opening up to them being complementary, because I do know that you have to be present in all of the-- you have to meet people where they are. And if they're doing their research through ChatGPT, or Claude, or Perplexity, or any other AI tool that's out there, if we're not there, they're not going to find us, because that's the way that they're doing their research now.
CHERYL FIRESTONE: Obviously, 10 years ago, it was all about, they've got to come to us to get the information, but they don't need to do that anymore. They're able to do some research in different ways, and I think we need to continue to move with the times and be open to using the tools that we know our users are preferring to use. So I do think that it's going to end up being complementary, and that once people do come to our platform and see the way that our AI tools work, that maybe they'll start coming to the platform more.
CHERYL FIRESTONE: I do think it's going to be an evolution to see how workflows continue to change over time. Because a lot of the AI tools that people are using are fairly new. There are a lot of little AI companies popping up all over the place, some names that I had never heard of, which I found in some research on what people are using. And I was like, I've never heard of any of these. You know the big ones, obviously, but it's a very popular industry right now.
CHERYL FIRESTONE: It's the dot com of today. And I do think that it will eventually fall into place where people are using both.
EMILIE DELQUIÉ: And so much also depends on the use case. An undergraduate student looking for research on PD from the Red Book is going to need something entirely different from the pediatrician who's trying to advise a student, or a patient. So no, absolutely. Let's talk about trust. So Tanya and Cheryl, you've both touched on that. Can we spend another couple of minutes on this aspect? Because I think it's really important.
EMILIE DELQUIÉ: And it ties in, in fact, with Tasha's point from earlier, as well. So how do you think about the publisher's role as a trusted, authoritative environment when AI can surface the same content, sounding very confident, but not necessarily giving the actual truth? How do you think about this? We'll start with Tasha.
TASHA MELLINS-COHEN: So I think this is going to touch a little bit on Stuart's question in the chat about AI generative engine optimization. Part of what we need to do as publishers is really think about ensuring that we have the best possible visibility for our content, to make sure that trusted sources, and not disinformation, are what's getting into these AI tools that are out there.
TASHA MELLINS-COHEN: So anything that you guys have been doing to optimize for accessibility in line with the European Accessibility Act-- and I know there's a new one in the US, which currently escapes me-- that also helps to optimize your content for machine readability. So that's one part of it. The other part of it is we need to as a community start thinking a bit more about trust markers within the content that we produce.
TASHA MELLINS-COHEN: We all know that there is just a flood of less rigorous research-- can I put it like that? --that is being submitted to journals on a daily basis, that does sometimes get through the peer review process. Because no matter how good your peer review process is, if you're getting 1,000 articles, and 500 of them are rubbish, some of those will get through.
TASHA MELLINS-COHEN: We need to find ways to better evaluate and demonstrate trust within the research that we are publishing, and then get that good quality stuff into the AI tools so that they don't need to rely on somebody's rant on Reddit or Facebook.
TANYA LAPLANTE: Yeah, I couldn't agree-- I couldn't agree more, Tasha. And I think it goes hand in hand with being visible in the spaces. AI, in some ways, is a threat to what we do, but also a real opportunity, because what we do is going to become more and more valued, and premium, and seen as the gold standard. And so making ourselves visible is critical. But then, because there's going to be such a spotlight on us, because we're claiming to be premium, and we're claiming to be trusted and peer reviewed and all of that, we need to be investing in the trust indicators.
TANYA LAPLANTE: And in the tools that help us confirm that what we're publishing is trustworthy. So then we can, with confidence, put those trust indicators on our content and make them visible in these environments. Because it really is a public service that we are visible in these spaces. And I think that's only going to become more and more true as this grows.
EMILIE DELQUIÉ: Yeah. So, Cheryl, I'm going to rephrase the question even a little bit more specifically for your audience, because we're talking about the Red Book, the authoritative source when it comes to pediatrics. And as a mom, I know that I want my pediatrician to get the right information when I go to their office. How does the AAP-- how do you all think about this question of trust and being able to surface the right information, while making sure that this is the source of information?
CHERYL FIRESTONE: Yeah, your use case is exactly hitting the nail on the head. If you're taking your child to your pediatrician, and they're using ChatGPT as a diagnosis tool, I'm frightened. [LAUGHS] I want them to come to the AAP and get the trusted, evidence-based information that we provide through the Red Book, and know that they're getting the right diagnosis, the right treatment for their child, that they're getting answers that are based on 80-plus years of clinical evidence.
CHERYL FIRESTONE: And while we can't control that people go to ChatGPT and ask those questions, what we can control is that we have an AI tool that really does narrow down the scope of the content and provide that information directly to our pediatricians. And this tool is something that is available to all of our AAP members, so 67,000 pediatricians will have access to it as part of their membership. So if you're taking your child in, I would ask them, are you using AI tools in your diagnosis?
CHERYL FIRESTONE: It's going to become a common question, and they probably are. We know that our users are using a lot of AI tools. So really having something that narrows down the answers to our heavily trusted content is going to be crucial to our users. And we know that people want this, and that they are likely to use it. So we expect that it's going to be very, very well used, and that it's really going to, I think, change the way that people use our site because it will be on site, and they won't need to go to ChatGPT to ask, does my child have measles, or how do I diagnose a child who comes in with these symptoms.
CHERYL FIRESTONE: So it's really more about making sure that our material is in front of people, and that they know that they can come directly to us to get this information, and they don't have to go to those other sources that may not be as reliable.
EMILIE DELQUIÉ: Thank you. I love this answer.
CHERYL FIRESTONE: Thank you.
EMILIE DELQUIÉ: Tasha, yes.
TASHA MELLINS-COHEN: I was just saying, I think, yes, there's a huge challenge, but I also think we have a huge opportunity. There's been a whole development of the internet where everyone can exchange all of their little conspiracy theories, and it's very easy for people to do that. We now have a new technology at our disposal where we can-- to use a political phrase that I hate, but can't think of a better alternative for-- we can flood the zone with legitimate content.
TASHA MELLINS-COHEN: We can get the real information out there and combat things like vaccine disinformation in a way that simply hasn't been possible for the last couple of decades. In a world where relatively few people trust the experts, a lot of people do trust ChatGPT and its ilk. And if we can get them using our content and delivering that to the lay public, that's not going to work for pediatricians.
TASHA MELLINS-COHEN: 100% agree, it's not going to work for an orthopedic surgeon, or a lawyer, or whoever. But if you can get the lay public reading the good quality stuff and not the, well, my mate's mom's best friend's mice got ill because one of them got vaccinated, we might be able to start rectifying some of the catastrophes of the last couple of decades. There you go.
TASHA MELLINS-COHEN: That's your little bit of hope for the day.
EMILIE DELQUIÉ: Yep, and with education of the users, as you were saying earlier. Let's switch gears a little bit, and I'll talk about usage data, because there is a whole lot of unknown in the space, and there's a whole lot of work that Counter and your group, Tasha, are undertaking. So this one is going directly to you, Tasha. Counter has been tracking-- of course, I knew I would have a cat. That was guaranteed.
EMILIE DELQUIÉ: Hi, cat. So Counter has been tracking AI impact on usage patterns for some time now. What are the signals that concern you most? And knowing you, now you're going to look for the opportunities. What are publishers measuring that they probably shouldn't be? And equally, what are they not measuring that they should be?
TASHA MELLINS-COHEN: It goes back to one of the questions in the Q&A, which is, are we seeing drops in traffic to publishers across the board? All right. So far, every publisher that I have spoken to this year and every library that I have spoken to this year, has said that when they're looking at raw data, they're seeing very unusual spikes in activity.
TASHA MELLINS-COHEN: 15,000 searches in two minutes-- massive spikes. But as soon as they look at their Counter reporting, which removes all of that bot activity, they're seeing decreases across the board. This is libraries on six continents, and it's publishers around the world, so this is widespread. If you are seeing drops in your usage, it's not just you. Please don't panic. We started working at Counter on AI best practice last June.
TASHA MELLINS-COHEN: We had a really extensive consultation that was out for a few months, which included feedback from lots of libraries, lots of publishers, but also lots of people who were in person at the NISO Plus event. Silverchair has been influential in that working group. So if you're working with Silverchair, you have been represented. Your voice was there. And wish me luck, because on Thursday, I am taking the final best practice to Counter's executive committee for approval for publication.
TASHA MELLINS-COHEN: What we are specifying in there is Counter-compliant ways for publishers to report AI usage alongside the traditional human and text and data mining usage. So you will start to be able to capture all of that usage. For Search, we have an AI equivalent, which is the AI responses generated.
TASHA MELLINS-COHEN: So every time your tool generates a response to a user prompt, that can be counted, which historically wasn't possible under the traditional code of practice. But you'll also be able to count the usage of the chunks of content within your corpus, and report that back to your library users, so they will no longer see this drop in usage. They'll be able to add the AI usage, the human usage, and the text and data mining usage together to get a comprehensive picture of the value of their subscriptions to your content.
TASHA MELLINS-COHEN: If they say yes on Thursday, I will be publishing on Monday, because frankly, Friday is a terrifying idea. So watch this space. Well, watch countermetrics.org, because that's where it will go out. And I'm sure Silverchair will be circulating that when it comes.
EMILIE DELQUIÉ: Absolutely. And on behalf of the whole community, thank you for taking this on. We definitely are looking forward to this guidance-- it's hard work. And we're really, really glad that you all are moving quickly and giving us all something very tangible to discuss very soon. Thank you, and thank you for the preview today.
TASHA MELLINS-COHEN: You're welcome. And just to be clear, that phase one was focused on what's happening on your platforms. We think, to a large extent, it will apply under Model Context Protocol as well, which, obviously, you're talking about next month. But we are doing a phase two of this project to look at those third parties, like Scite, and Perplexity, and Elicit, and all of the others.
TASHA MELLINS-COHEN: And we've already started talking to them, so they are involved in those conversations.
EMILIE DELQUIÉ: Excellent. And before we know it, there will be a phase five. This is moving really, really fast.
TASHA MELLINS-COHEN: Phase five might end up being a future Counter Release 5.2. So eventually, we know we need to bake this into the code of practice proper. But because it's so fluid, we're considering it at this point as a best practice, not as a formal part of the code itself, which gives us just that little bit of flexibility to respond to changing technology, basically. When we first started this, Model Context Protocol didn't exist.
TASHA MELLINS-COHEN: And when we published Release 5.1, ChatGPT had only been on the market for six months. So really, this is moving incredibly quickly.
EMILIE DELQUIÉ: It really is, so thank you. All right, so being mindful of the time, and I see a very active chat. So let me ask you a couple more questions. But I'm jumping now to implementation realities. So just a very nitty gritty how-to and some of the lessons learned along the way. And Tanya, I'll start with you. What have you learned from implementing AI tools across your content that you didn't anticipate?
EMILIE DELQUIÉ: And where did some of your assumptions about user behavior turn out to be right or wrong?
TANYA LAPLANTE: OK, great. Great questions. So implementation. I think winning hearts and minds within your business is really critical-- explaining what you're doing, reassuring them about how it's being built, that the answers will come only from your own content-- and then doing that same kind of campaigning with your customers and your partners in that space, as well. I think something that took us a little bit by surprise was that there is still some resistance, and that will change, I think.
TANYA LAPLANTE: But there is still some resistance at a lot of institutions, where anything labeled AI, even if it's not a chat experience and it's just returning search results, comes under a great deal of critique. And they have to get approvals, just as we have to get approvals to launch these types of products. They have to bring anything labeled AI through many rounds of approvals, and reviews, and security, and privacy, and all of that-- so really be prepared.
TANYA LAPLANTE: I think the work around documentation and FAQs, and doing seminars, and training our sales team was as much work as the technological lift of rolling out these tools. So that's what I would say-- don't underestimate the amount of paperwork, documentation, and PR work that you have to do if you're releasing any of these tools. And then, assumptions that have either proved correct or incorrect.
TANYA LAPLANTE: What we found really interesting is we, of course, have traditional search. And alongside that, we now have AI-enhanced search. And traditional search has always been used very rarely-- only about 2% of all visitors use search. They don't come to the platform to search. They come to get the content and then take the content or digest the content. And our concern was that AI would be the same.
TANYA LAPLANTE: And what we're seeing is a very slow uptake of the AI discovery assistant. But the uptake is trending upwards, while traditional search is trending downwards. So more people are discovering the AI discovery assistant and using it. And the other thing that we're seeing is on click-through rates. The click-through rate from traditional search is about 9%. If you look at the same click-through rate on the chat results-- or sorry, the AI results-- we're seeing, on a standard prompt answer, about a 14% click-through.
TANYA LAPLANTE: But then, when the user continues to refine the results through further questioning, we're seeing a 40% click-through rate on those results, which suggests that they are getting to far more relevant results in that AI environment than they are in traditional search. So that's been really interesting and not surprising. So now the question for us is, how do we drive people to the discovery assistant, knowing that it is a better experience?
TANYA LAPLANTE: And we're having conversations about that internally and what that looks like.
EMILIE DELQUIÉ: Thank you for these stats, Tanya. Your team tracking all this is wonderful.
TANYA LAPLANTE: They're a great team.
EMILIE DELQUIÉ: Cheryl, let's go to you again. Super mindful of the time. And let's go to you with a slightly different question. For smaller organizations that have limited staff and resources, how do you prioritize-- how do you all prioritize AI investment? What's the minimum viable approach for on-platform discovery? And basically, what does good enough look like?
CHERYL FIRESTONE: So we've been working with Silverchair since 2024. I can't believe that we've been actually talking about this tool for almost two years. And it is, hopefully, going live this summer, but we're really excited. I would say that for us, we've gone through all the stages of grief over the AI tool itself, because is it going to detract from our site? Is it going to minimize usage?
CHERYL FIRESTONE: Is it going to increase any usage? Are people going to know about it? Are they going to be excited about it? So we've really had to figure out where we are going to make the most impact. That was the impetus for how we settled on our Red Book project. It's a very heavily used piece of content. It gets well over a million views through the life of an edition.
CHERYL FIRESTONE: And I think that we simply wanted to be in the space where it is most needed. The timing is really right for this type of tool with this type of content, because of all the challenges that we've been facing-- vaccine hesitancy, measles outbreaks, infectious diseases coming back that shouldn't be coming back because kids aren't getting vaccinated. So we wanted to make the tool as impactful as possible.
CHERYL FIRESTONE: So when we decided to implement it, we went with one of our strongest pieces of content. And I think a lot of the work that's being done today is really laying the groundwork for future tools. It's making the initial investment in the infrastructure that we know we can expand out to more of our platform. So for us, we really think that this is going to prove to be a good return on our investment. It's not a small dollar amount for our organization to invest in, but we really think that we're going to be in the right place at the right time with the right content.
CHERYL FIRESTONE: And I think that that's the most important thing to consider when you're thinking about building an on-platform tool. How are people going to use it? How do you want them to use it? What's the impact that it's going to have on their experience? We're hoping to save clinicians time and effort and get them the right information from the right source.
CHERYL FIRESTONE: So for us, it was a no-brainer. Even seeing the price tag didn't scare us as much as it might have at other times, I would say. But we really do feel like this is the right thing to do at the right time.
EMILIE DELQUIÉ: Excellent. Thank you. And then let me bring it to the last question before we switch to the open Q&A. So now we're looking ahead, right? So thank you for all the thoughts so far. But looking ahead two, three, four years out-- and Tasha, I'll start with you-- what would need to be true for AI-powered discovery to genuinely expand readership and engagement, rather than just redistribute it?
EMILIE DELQUIÉ: So I would say about a minute and a half each, a minute each, so that we have time for the Q&A.
TASHA MELLINS-COHEN: I can't even think three months out, never mind three years. I think to expand readership, we need to really, really think about machine readability and optimizing our content for that type of user. We can't continue to think in terms of the PDF as the version of record, or even the HTML necessarily on our platforms as being the version of record. We really need to think machine readable first.
TASHA MELLINS-COHEN: If you have content out there that doesn't have good metadata, if you have content out there that doesn't have alt text for your images, you are wasting your most precious resource. Don't know if it was a minute and a half.
EMILIE DELQUIÉ: All right. Thank you. Great tip. Tanya?
TANYA LAPLANTE: I think it comes back to trust, and those trust indicators, and doing everything that we possibly can to ensure the trustworthiness and the origin of the content that we're publishing, so that we can continue to claim the mantle of trust and premium content. And it plays off of what Tasha said-- because it's not possible if you don't have that machine-readable content-- but it also really comes back to being in as many places as researchers are.
TANYA LAPLANTE: So really letting go of that protectionism-- taking the precautions that you need to, but thinking about what you're willing to put out there. Again, it goes back to the fact that we are doing a public service, counteracting the slop that is going to increase exponentially, by putting our content out there. So I think it really does come back to trust, and visibility in as many places as we can be visible.
EMILIE DELQUIÉ: Thank you, Tanya. We already have a couple of questions in the chat, but this is your chance to give another minute to-- you're doing well, Cheryl-- to answer, to ask any other questions you have. Cheryl, over to you.
CHERYL FIRESTONE: So I think for us, it's going to be a matter of making sure that the most current and relevant content is being delivered. We have a massive archive. As you can imagine, we've been in business for almost 100 years at the Academy, so there are a lot of publications out there. We've been publishing for many, many years. And so if I am a doctor and I need to find the most current policy on hyperbilirubinemia-- which is, sorry, jaundice-- I need to know that the content that's being surfaced through the AI tools is actually the most current piece of content.
CHERYL FIRESTONE: So I think helping people make sure that they have the right content is going to be a real challenge for us, because we have so much archival content. Even though the metadata has the date on it, people don't always read that. We know people don't read anymore, and it's a real struggle when you're a publisher to hear the words "people don't read anymore." So we're going to hope that the richness of the metadata is able to surface the most current content. That's something we're going to have to really start to tune into.
TASHA MELLINS-COHEN: To tag on to that-- and this is not something Counter can do, thankfully-- I think we as a community need to really think about standards around attribution and provenance, as well as discovery. Because you might have metadata on the full text piece of content. But if an AI tool is looking at a 300-word chunk of that content, it's entirely possible it's become dissociated from the metadata.
TASHA MELLINS-COHEN: So we need to really think about those pieces of the puzzle, as well.
EMILIE DELQUIÉ: Absolutely. Thank you. All right, so I'm catching up with the chat. I see some of the questions have already been answered by our panelists. Awesome. Thank you, thank you. A couple more have come in. So this one is for everybody: do you think the quality of an LLM's training material has an impact on how effective AI-aided discovery is?
EMILIE DELQUIÉ: Might it be different by discipline?
TASHA MELLINS-COHEN: I think on this one, the old garbage in, garbage out rule really does hold. If you put crap in, you're going to get crap out. And sorry if that is a word that Americans don't like. I should have thought about that first. Sorry.
EMILIE DELQUIÉ: Good. Really good. Tanya, you have a breadth of content. Oxford as a university press has so much content and across so many different disciplines. How are you thinking about this?
TANYA LAPLANTE: I think we're seeing it in the AI tools that are being rolled out, in terms of the disciplines that have the highest demand, from that clinician-practitioner space and the tools that are being rolled out there. What's tricky is that training is still where we draw a line, and I think we should continue to draw that line, because training is not attributed. It is not cited.
TANYA LAPLANTE: It still runs into the same question. We don't want our content to be mixed in with other training data if we can help it, because it might appear alongside data that's incorrect. So I think it really goes back to how we select, how we choose the people or the companies where we make our content visible, and then what we make visible, and what we give them access to. That's what's really critical.
EMILIE DELQUIÉ: This one is interesting. So this one is both for Tanya and Cheryl. Tasha, you're absolutely welcome to jump in, as well. But considering the importance of meeting users where they are, and "people don't read anymore"-- I'm quoting because there are quotes-- how are you both thinking about parsing content, slash, lay summary content to feed the beast in new ways?
TANYA LAPLANTE: Yeah, that goes into-- we've been talking about that so much internally, in terms of lay summaries and summarization by AI. And we think there are two pieces to that. One is it makes your content more discoverable in these various AI search solutions. However, with a lot of that AI-generated content, it's really difficult to have humans in the loop, especially with the vast archive that we have in the backlist.
TANYA LAPLANTE: And that's where it becomes-- it's something that we are approaching very carefully, because we cannot put just anything on the site. Even if we have disclaimers all over the place that say AI-generated, if we put AI-generated content on the site that is a summary of someone's work and it's inaccurate or misrepresents that work in some way, that is a disservice to our authors and our researchers who use the content.
TANYA LAPLANTE: So it's something that is still very much under discussion, and there's a lot of internal debate. And it's, how do we do it in a way that optimizes discoverability while minimizing risk? That's always what we're looking to do.
CHERYL FIRESTONE: Yeah, I would say that this is where those conversations about, do we allow some of those external tools access to our content, that's where those conversations are really going to come into play. Because if the AI tools that people are using don't have access to our content, then our content isn't going to be discoverable in those tools, and we're going to disappear from being part of their research.
CHERYL FIRESTONE: So I think that that's where we need to start thinking about whether or not it's appropriate to allow our content to be part of those AI research tools. And those are hard conversations. We as publishers are very protective of our intellectual property. And opening it up to AI is really scary. And so there's a lot of conversation going on about what's the right thing to do.
CHERYL FIRESTONE: So we know what's happening out in the world. Luckily, we have a good sense of how people are using these tools, and I think that's really going to help us to make some good decisions.
EMILIE DELQUIÉ: Oh, boy. Questions are coming in. I don't know which one to pick. I apologize. I apologize if I'm not picking yours. You all have just typed many in the last minute. The one that had been sitting, and this one is for Tanya, can you quickly-- can you clarify the 40% click-through that you cited for users' better answers?
TANYA LAPLANTE: Yeah, sure. Of course. So how the discovery assistant works is a user enters a query, and it returns the top 10 search results. When they get that first return of 10 search results, the user is clicking through to a result 14% of the time. If they then ask a followup question to refine the results-- so they might say, OK, those results are in the right subject area, but please return only articles-- then it returns only articles.
TANYA LAPLANTE: Then, within that chat, a user is clicking through to a result 40% of the time during that chat session. So what it suggests is that as the user further refines the results via the chat function, those results become far more relevant to what they're looking for-- i.e., they're finding what they're looking for. And that 40% is in comparison to when a user conducts a traditional search.
TANYA LAPLANTE: They are only clicking through 9% of the time. So one, it suggests that users are having a better experience in getting more relevant content. And two, it's driving our usage more than traditional search results are.
EMILIE DELQUIÉ: Thank you so much.
TANYA LAPLANTE: Yeah.
EMILIE DELQUIÉ: All right. I think I'll take one more question, which was actually one of my prepared questions that I skipped, so I'm really glad it came up, because I think it's a core tension for a lot of publishers out here. So for scholarly publishers where advertising is a tangible revenue stream, and if the site, slash, user traffic trend away from publisher sites continues to play out, any thoughts on what will replace that revenue stream?
EMILIE DELQUIÉ: Does anyone think there is truly a premium offering for publishers that will take hold, emerging out of this AI evolution? You got this? Do I need to repeat? OK. So there is a real tension here. There are some-- several, many-- societies who are relying on advertising as a real source of revenue and traffic on their site.
EMILIE DELQUIÉ: We think of on-platform AI discovery as a way to keep those users on the site a little bit longer. But indeed, how are you thinking about it?
TANYA LAPLANTE: If anyone has the answer, that would be great. I think, again, this is one where we're just kind of watching it unfold and thinking about how we can possibly-- one of the things we're doing is exploring partnerships with AI discovery solutions. And I don't know that licensing will be commensurate with advertising loss. But then MCP also presents new opportunities, both with institutions and through corporate and more commercial offerings, as well.
TANYA LAPLANTE: So I don't know the answer, frankly. I think we're just starting to see the impact of that. And we also don't know where that advertising revenue is going to go. Where are they going to advertise? Because advertisers are going to need solutions, as well. And where are they going to choose to be? I imagine they would want to be in areas where the content is valid and vetted, and not in areas where the content is slop.
TANYA LAPLANTE: But that's to be seen, so I don't have any answers.
EMILIE DELQUIÉ: And Cheryl, do you have a well--
CHERYL FIRESTONE: Full transparency. Full transparency. The person who asked this question is my manager at the AAP. So I feel like maybe he's specifically asking me how we're going to do this. I'm going to say what Tanya said, which is we don't have an answer yet. But we're hoping that some of these tools will maybe generate some more subscriptions.
CHERYL FIRESTONE: I know that's probably not the best answer. I think we're going to have to wait and see how it plays out.
EMILIE DELQUIÉ: And for a note of optimism, Tasha.
TASHA MELLINS-COHEN: Nothing. I don't work with-- I haven't worked with any advertising in about 15 years, so I've got nothing for you on that one.
EMILIE DELQUIÉ: Oh, good. We're absolutely at time. It is noon Eastern. Please join me in thanking Tasha, Tanya and Cheryl. Wonderful panel. Great conversation today. Really appreciate your time. To everybody who joined us, thank you as well. As Steph mentioned at the top, this is part two of a three-part series.
EMILIE DELQUIÉ: Next month, we will tackle the topic of MCP, including-- I spotted the question in the chat about usage-- we will be discussing all of this. This is a journey. It's a very fast-moving journey, and we're so grateful that there is a lot of optimism and a lot of experimentation happening. I think we're all learning together here. But yeah, truly grateful for your time today.
EMILIE DELQUIÉ: And join us next time for more on AI discovery in the MCP version. All right. Thanks, everyone. Have a wonderful day.
TANYA LAPLANTE: Thank you.
EMILIE DELQUIÉ: Thank you.