Name:
Ask About Generative AI
Description:
Ask About Generative AI
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/fe84966b-6432-4ed3-a52a-80f30d825278/thumbnails/fe84966b-6432-4ed3-a52a-80f30d825278.jpg
Duration:
T01H00M44S
Embed URL:
https://stream.cadmore.media/player/fe84966b-6432-4ed3-a52a-80f30d825278
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/fe84966b-6432-4ed3-a52a-80f30d825278/GMT20240612-150027_Recording_1920x1200.mp4?sv=2019-02-02&sr=c&sig=lrE6pv0LPpCyj%2FW9fdxDAh2il8KTGESMrCGCVzhd2D8%3D&st=2025-07-15T03%3A24%3A48Z&se=2025-07-15T05%3A29%3A48Z&sp=r
Upload Date:
2024-07-22T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
OK Welcome, everybody. We're going to get started in a minute. I'll join in.
OK, welcome, and thank you for joining today's Ask the Experts panel. We are pleased that you can join our discussion with experts in generative AI. I am Susan Patton, SSP's program director, and I'm stepping in for our Ask the Experts lead, David Myers. Before we start, I want to thank our 2024 education sponsors, Access Innovations, OpenAthens, and Silverchair. We are grateful for your support.
A few housekeeping items. Attendee microphones have been muted automatically. Please use the Q&A feature in Zoom to enter any questions. You can also use the chat feature to communicate directly with other participants and organizers. Please don't be shy about participating. We will answer questions after brief presentations. Closed captions have been enabled.
You can view captions by selecting the More option on your screen and choosing Show Captions. This one-hour session will be recorded and available after today's session. Registered attendees will be sent an email when the recording is available. A quick note on SSP's code of conduct: at today's meeting, we are committed to diversity, equity, and providing an inclusive meeting environment, fostering open dialogue free of harassment, discrimination, and hostile conduct.
We ask all participants, whether speaking or in chat, to consider and debate relevant viewpoints in an orderly, respectful, and fair manner. Our original moderator, Zsolt Silberer, is unavailable today, so Will Schweitzer, CEO of Silverchair, will be helping out. As CEO of Silverchair, Will Schweitzer oversees business development, customer success, delivery, finance, product, people operations, and technology teams, making sure products, services, and growth strategies meet the needs of its growing community.
He has deep knowledge of scholarly publishing, having worked in the industry for over 18 years in product and publisher roles for leading commercial and society houses. So now, over to you, Will. Thank you very much, Susan. And as was noticeably absent from that bio, I am not a generative AI expert. But we're learning a lot as we go.
And I know that's probably true for my other panelists. So I'm going to give a quick primer on generative AI, just some background context that may be helpful before we all jump in. But before I do that, I wanted to introduce my fellow experts, or "experts" in quotation marks. There's Kate Eisenberg, who is the senior director for medical at EBSCO, with their clinical decision products, and Hong Xu, who is the director of the intelligent services group and head of R&D at Wiley.
And after I give the primer, we'll go to Hong and then Kate, and then we'll answer any questions, or try to answer any questions, that you all have. So I'm going to share my screen. So, just general background context: obviously this is a huge area, it's fast moving, and there's a lot for all of us to learn and experiment with, but there are some foundational concepts that generative AI is building on.
This is research that's been going on since the 1950s and 1960s, and whether we know it or not, there's machine learning in a lot of products and services that are in the scholarly publishing space already. Some of that machine learning drives things like content search or discovery; it drives recommendations on platforms; it's used in our workflows. You could think about tools like LaTeX, or some copyediting and composition tools, having very basic forms of machine learning that we're now building on with artificial intelligence and generative AI.
So it's kind of a step-change function, and the evolution is happening at a quicker pace than it has in the past, as we move from this stage of really advanced math or algorithms into technologies that are beginning to mimic human intelligence. So what we're seeing in this new wave are things called large language models.
And you've probably heard a lot about them in the news, or even used chatbots based on those large language models. Those include things like OpenAI's GPT-4, which is a large language model, as are Google Gemini and Claude, and these LLMs are sitting behind a lot of the generative AI features or services that you're encountering out in the world. There have been a lot of companies in our space, Silverchair and Wiley and EBSCO included, that are using these LLMs to provide product features, workflow services, and all types of things that we'll give you an overview of in just a minute.
But these things are building on the past. And I think it's important to remember that a lot of these things are still experimental, but a lot of experts, ourselves included, see a lot of promise in these technologies and in how they can be applied to publishing products and workflow solutions. So they're not going to reinvent scholarly publishing overnight, but there's a lot of utility in really basic things that can create a lot of value for end users, or make our jobs as publishers a lot easier, when we think about really simple jobs to be done.
So these technologies, generative AI in particular, are creating new ways of interacting with content or doing our publishing jobs. We see these in the form of chatbots or assistants. Some of the well-known ones in our space include Scite's assistant and Consensus, which is a content discovery app. And then we're seeing providers in our space rolling out or enabling pilot or beta features on top of existing products.
That includes companies like Clarivate or Elsevier, or Digital Science with Dimensions. And all of these technologies really come down, at their core, to a chatbot or assistant for us as users to interact with. Those interactions often take the form of prompts or queries that you're asking the LLMs, and how you construct the prompts drives the utility you can get out of these technologies. And there's a more advanced concept that we may talk about today called retrieval-augmented generation.
So if you think about GPT-4 being the base, RAGs allow you to layer over specific context, say a given publisher's corpus, that allows you to enrich or refine the output coming out of the LLMs, grounding the chatbot's answers in, say, the publisher's reference material to give end users a better answer. And all the publishers that I've talked with that are within our broader community are experimenting and developing use cases.
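To make that RAG idea concrete, here is a minimal sketch of the pattern in Python: retrieve the passages most relevant to a query from a publisher's corpus, then ask the model to answer only from those passages. The naive keyword scoring and the llm() stub are illustrative stand-ins, not any vendor's actual implementation.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The corpus, scoring, and llm() stub are illustrative.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus passages by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(terms & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def llm(prompt: str) -> str:
    """Stand-in for a call to a hosted LLM via an API client."""
    return f"[model response grounded in prompt of {len(prompt)} chars]"

def answer(query: str, corpus: list[str]) -> str:
    passages = retrieve(query, corpus)
    context = "\n".join(f"- {p}" for p in passages)
    # Grounding: the model is told to answer only from the retrieved text.
    prompt = ("Answer using ONLY the reference passages below. "
              "If they are insufficient, say so.\n"
              f"Passages:\n{context}\n\nQuestion: {query}")
    return llm(prompt)

corpus = [
    "RAG layers a publisher's corpus over a base model to ground answers.",
    "Prompt construction drives the utility of LLM-based tools.",
]
print(answer("How does RAG ground answers?", corpus))
```

A production system would swap the keyword overlap for embedding or hybrid search, and the stub for a real model call.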
Some of those use cases are around enhanced search, or research co-pilots, or meta-analysis tools, or making really simple jobs to be done, like content conversion or editorial assistant tasks, a lot easier or more efficient in our space. So one of the questions Zsolt had for me is what do I think is next, and I just want to talk for a second about where we are in the hype cycle of these technologies.
There is a lot of noise and a lot of excitement, and a lot of confusion and concern about what these technologies can do. And if you're just following the headlines, you may feel like we're going into a trough of disillusionment right now. But there are, I think, a few things that I would encourage everyone listening or watching today to be mindful of. One is that a lot of publishers or societies in our space are finding ways to publish or license their content to providers, or are making use of these LLMs within their products or services.
Those are a couple of different ways to go about strategically harnessing these technologies. And at the same time, we're beginning to understand what these technologies may mean for a business. So Google for a while was running an experiment called the Search Generative Experience, where if you were to go to Google and conduct a search, at the very top you would see AI-enabled answers.
And for some publishers, particularly those outside of the scholarly space, those Google answers using generative AI led to a decreasing amount of traffic from the Google search results page to the publisher's site. For magazine publishers like The Atlantic, or more specialized publications like AutoTrader, more in the consumer space, the drop in traffic was 40% to 60%. Now, every publisher out there has a varying mix of traffic sources.
But for some publishers, Google itself can be 50% of traffic. So I think we have to be really mindful about how generative AI and these tools are deployed in our ecosystem, and what it means for us. Fortunately, I think a lot of students, researchers, and practitioners have more than a simple question to ask of our content corpuses, so hopefully the effect that we see in our space won't be as great as what The Atlantic saw.
In this last bit, I just want to plug a Scholarly Kitchen post by one of my colleagues, Stuart Leach at Silverchair: as we experiment with these technologies and start getting into publisher use cases and how to apply them, we're understanding the limitations of generative AI and LLMs. And if you think about scholarly publishing having evolved for hundreds of years, while largely taking the same shape and form since World War II, there are a lot of considerations we have to make about how these technologies are used, and how we can train them or tune them or apply them in our space, because there are a lot of ethical and product considerations.
If you think about what Kate is doing with these technologies within the medical community, there are a lot of things that we need to make sure are done right, and are disclaimed or disclosed in the right way. So this is kind of what's next: in the short term, we'll see a lot of experiments, and all of us will have a richer, deeper understanding of these technologies and their implications.
And I think we'll see a lot of publishers and other companies striking strategic partnerships to figure out the best path forward. So with that, I'm going to stop and hand it over to Hong, who's going to talk a little bit about how generative AI can be used in various aspects of a publisher's workflow. Hong, do you want to take it from here? Thanks, thanks, Will.
OK, let me share my screen first. Can you see my screen? Yes? OK, great. Hello, everyone.
As Will introduced, I'm Hong Xu, based in Oxford, UK. I'm leading the intelligent services group in partner solutions, and also leading R&D. We basically leverage big data, cloud, and other advanced digital technologies to develop cloud-based intelligent services that support many different initiatives and goals.
One is that we try to automate the publishing journey, from authoring to submission to production, publishing, and discovery. On the publishing side, we also have Literatum from Atypon, the flagship platform, where we also apply AI to enhance and support discovery and dissemination. So today I want to give you a quick overview of how generative AI can help across the publishing journey.
Let's start with authoring. I think we all know this; many of us already use generative AI, you know, ChatGPT, Perplexity, et cetera. Almost every day, at least, I'm using these, and many researchers already use them for authoring. So I've listed all these different applications, many of which published researchers have already used, or which editors and reviewers use.
I've split them into two parts: the applications which I've underlined are already in wide use, and are what generative AI is good at; the rest have good potential, huge potential, but don't perform that well for now, though I'm sure they're going to become better and better. So, for authoring:
You can see drafting assistance: researchers can use a drafting assistant to, for example, automatically generate a title, or check the overlap between the abstract and the full text. There's language and style improvement, which is probably one of the most popular generative AI applications right now. Literature review helps you quickly find all the relevant publications. And then there's data analysis.
If you use Copilot, you can see how powerful this is when embedded in an Excel sheet, among other things. Paper peer-review preparation also has huge potential, but isn't yet as good as the other applications; it can quickly identify where the weaknesses are. Things I haven't mentioned yet include reference management and journal suggestion, which suggests journals during the submission phase.
And at Wiley, we released the paper mill detection service three months ago at the London Book Fair, in which we applied generative AI to try to automatically detect any integrity issues in a submission. Then there's submission preparation, formatting, and a submission portal assistant, so that in the future authors don't have to send an email to ask the editor or author services anything.
They can just ask a natural-language question and immediately get an answer back. There's also the quality check and the integrity check, which is one of the most important things, and reviewer report generation: we can automatically apply generative AI to generate a summary of the reviewer comments, and then ask the generative AI to make an initial recommendation based on those comments first.
And then it's for a human to review. All of this is still human-centric: it does not replace humans, it just takes over some of the tasks humans had to do before, to increase productivity.
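As a sketch of the reviewer-report workflow just described: summarize the reviewer comments, draft an initial recommendation, and hand both to a human editor. The llm() stub and prompt wording are illustrative assumptions, not Wiley's implementation.

```python
# Hedged sketch of reviewer-report generation with a human in the loop.

def llm(prompt: str) -> str:
    """Stand-in for a hosted LLM call."""
    return "[draft summary / recommendation for editor review]"

def draft_editor_report(reviews: list[str]) -> dict:
    joined = "\n---\n".join(reviews)
    summary = llm(f"Summarize the key points of these peer reviews:\n{joined}")
    recommendation = llm(
        "Based ONLY on this summary of reviewer comments, draft an INITIAL "
        "recommendation (accept / minor revision / major revision / reject) "
        f"with a one-paragraph rationale:\n{summary}")
    # Human-centric by design: the output is a draft, not a decision.
    return {"summary": summary,
            "draft_recommendation": recommendation,
            "requires_human_signoff": True}

print(draft_editor_report(["Reviewer 1: sound methods, minor typos.",
                           "Reviewer 2: statistics section is unclear."]))
```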
So let's move to production. Production also has huge potential, and it's an area generative AI is very good at: automated typesetting, proofreading and editing, figure and table optimization, compliance checking, and translation and localization. With large language models, translation in many, many cases has already outperformed humans, though in some domain-specific areas translation applications still need more knowledge and more model training. On the publishing side, as I mentioned, we have Literatum from Atypon, the online publishing system, where we can automatically enhance accessibility, meaning we automatically generate alt text, and also generate missing metadata and do content enrichment.
This is not only about generation; for example, if you give it audio or video, we can also generate transcripts and captions, or create other new formats of the content. Content classification is one of the most popular applications, very useful for managing content. And then there's content metadata enrichment and SEO optimization.
We automatically generate meta descriptions, et cetera. Then there's user profiling: based on users' behavior and interactions with different content, we can predict the users' interests and intentions. Analytics and insights are also very important: with generative AI, people don't need to write any SQL or click to create different filters. We can just ask a natural-language question, for example: give me the hottest topics in the last three months in this domain, or even show me who the most influential authors are, and why the publication volume increased or decreased over the last three months.
Lastly, there's discovery and dissemination. Besides language and writing improvement, discovery is one of the areas changed most dramatically by generative AI. So for example, natural-language question answering, or talking to content, which, as we just mentioned, is all based on the RAG framework: we apply generative AI to identify the relevant information first, and then generate the final answer based on that relevant information, not from the large language model directly, to reduce hallucination, et cetera.
And this enables conversational search and discovery: we no longer just use traditional keyword-based search, get back a list of hundreds of results, and leave users to click on each of them to find the answer by themselves. Now the system can understand the question and return the exact answer directly.
And community engagement and personalized recommendation are also very important, like today's news feeds, et cetera. There's automated content promotion: an email campaign can write personalized emails for targeted outreach, and we can identify who the right audience is for a campaign based on our understanding of their interests, expertise, et cetera.
So basically, this gives you an overall overview of how generative AI can help across the publishing journey. Thank you. Great, thank you, Hong. And I think from here, Kate was going to give a demonstration of a generative AI product her team has been working on.
Kate, do you want to take it from here? Absolutely. Thank you, Will. So I'll give you all a little bit more about my background. I'm a family physician. I still practice one day a week, and the rest of my time I spend working for EBSCO's clinical decisions division, the health care arm of EBSCO Information Services.
I also have a background in epidemiology and clinical informatics, so I've spent a lot of time working at the intersection of health care, technology, and data analytics. And our team has had a really fast-moving journey to develop a generative AI based application, built on a retrieval-augmented generation model, or RAG model, that I can show you a demonstration of today, so you can see what that model looks like in practice. So I'm going to share my screen.
So the product that I work with is called DynaMedex. What this is, is a curated, evidence-based database designed for point-of-care use by physicians, pharmacists, and other health professionals. And what you're seeing right now is our core product, so nothing AI about this view.
So you come to this landing page and enter a search term; let's say I wanted to know more about treating AUT. I enter my search term into this box and come to a set of topics in a traditional search view. With our retrieval-augmented generation tool, which we call Dyna AI, we've taken our body of content and layered this retrieval-augmented generation framework on top of it, combined with clinical prompt engineering.
So we're taking that prompt engineering to direct the large language model in what to do, but injecting this layer of what I call clinical intelligence, that health-care-specific knowledge, to help the information coming out the other end be geared toward, and safe for, health care professionals. So I'm just going to type in a question. As was mentioned before, this allows you, rather than just putting in a couple of search terms, to ask more complex questions, or more of a natural-language question.
And we're finding that our users' search behavior right now tends toward typing in a single condition or one or two concepts, and this framework allows us to let people put in a lot more detail. So, one thing I might look for: I'm a family physician, I see kids, I see adults, so let's say prevention of pediatric injuries.
And if anybody has clinical or medical questions that they are interested in running through this, we can do that as part of the demo; just put them in the chat. So I'll tell you what we're seeing here. At the bottom part of the screen is our traditional product experience: what our search service returns, the relevant topics and then subsections of the topics.
We've had to do a lot of policy and procedure work to support our approach to a generative AI based tool. Because of the high stakes of health care, we really wanted to build this product on a very principled foundation: our users have such trust in the information they're getting from us, and we wanted that same trust to flow over into the AI-based tool. To talk you through how the retrieval-augmented generation framework works in practice here: all of the content that our generative AI service touches is in this light blue.
So that transparency layer is very important and is one of our policies and principles, so it's always clear to the user that they are using a generative AI based tool as opposed to our traditional product experience. This is really geared toward that discoverability space that Hong was mentioning, getting our users to the answer to their information needs faster. And there are actually two passes through the large language model as part of the product.
So first, it takes the search term that I put in and translates that into a natural-language query. That's using GPT-3.5, and all of this is on a Microsoft Azure platform, which helps with safety and security. So it translates the search term into this natural-language query, and we're seeing that it does pretty well at knowing what I meant and translating that question. Then there's another pass through the large language model, currently GPT-4, though we're actually transitioning this week to GPT-4o.
Speaking to the rapid pace of evolution: as the technology improves, we see that improve our product, and sometimes we need to pivot and make a change like that. We just made the decision that the performance of this other model was enough better to warrant replacing the model that's fueling our beta product here. So this natural-language question goes through GPT-4 and through our search service, which is the retrieval piece, to see which topics are the most relevant, and then which subsections of those topics are the most relevant. It takes those most relevant subsections and formulates them into the response that you see here.
And so what you're seeing is that you get an answer to your question, rather than needing to search through each topic and each subsection of each topic. What I like about this example is that it's pulling together four different topics, and I probably wasn't going to go searching in four different topics if I had this question in our traditional experience. But now that we can pull out subsections and combine them together across topics, we're able to see it all at once, in one synthesized response here.
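Pulling the pieces together, a simplified sketch of the two-pass design Kate walks through might look like this. The model names match the talk, but the call_model() stub and the search_service shape are illustrative assumptions, not EBSCO's code.

```python
# Two-pass RAG flow: rewrite the query, retrieve subsections, synthesize.

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a hosted LLM call (e.g. on Azure)."""
    return f"[{model} output]"

def answer_clinical_query(search_terms: str, search_service) -> str:
    # Pass 1: turn terse search terms into a full question (GPT-3.5 in the talk).
    question = call_model("gpt-3.5",
                          f"Rewrite as a clinical question: {search_terms}")
    # Retrieval: rank topics, then subsections within them (the RAG step).
    subsections = search_service(question)
    # Pass 2: synthesize an answer from those subsections only (GPT-4 / GPT-4o).
    context = "\n".join(subsections)
    return call_model("gpt-4o",
                      f"Answer from these subsections only:\n{context}\n\nQ: {question}")

print(answer_clinical_query(
    "prevention pediatric injuries",
    lambda q: ["Topic A / Injury prevention...", "Topic B / Counseling..."]))
```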
So we were cautious about applying this, you know, unprecedented new technology to our very carefully curated body of evidence, and at the same time, we started to see the power here in terms of information retrieval, and in terms of getting our users to the answer to their question more quickly and more effectively. There has been an enormous amount of work done to validate this from a clinical perspective and make sure it is safe and reliable for our users.
And I want to echo something that Hong said: this is not replacing anybody right now. In fact, it's taking a lot of people to get it right, because there's no automated, quantitative way to tell me that this is accurate, you know, as a physician. So we have a lot of folks testing the system, and a lot of folks filling out our feedback form, which allows them to comment on accuracy, level of information detail, any equity concerns, all those different elements that are important to us.
The other really key aspect of the retrieval-augmented generation approach is that we're only answering questions based on our own body of content. Through our prompt engineering, we have instructed the model not to answer from its training data and not to answer from the general internet, so that we and our users know that the response is only coming from our evidence-based, curated data, and we can have that confidence and trust that the response reflects all the effort that's gone into curating our database.
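The corpus-only instruction can be as simple as a system prompt along these lines. EBSCO's actual prompt wording is not public, so this is an illustrative reconstruction of the pattern, not their prompt.

```python
# Illustrative corpus-only system prompt (not EBSCO's actual wording).
SYSTEM_PROMPT = """You are a clinical information assistant.
Answer ONLY from the reference passages supplied in the user message.
Do NOT use your training data or general internet knowledge.
If the passages do not contain the answer, reply:
"The curated content does not address this question."
Cite the passage identifier for every claim you make."""
```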
So we view that corpus-only approach as a huge advantage, and we're also seeing that it effectively eliminates the potential for hallucinations, or pushes that potential so far down as to be negligible. That's been very exciting for us. So we started by dipping a toe in the water, and then, as we saw how powerful this tool was, we kept building it out, and now we're planning a more extended beta launch for the summer.
One of the questions that Zsolt asked me to address was how we see uptake of this in our industry, in health care, and the answer is that it's quickly evolving. We were all very cautious at first, and now we're seeing a growing amount of acceptance, and even an expectation that there will be AI tools in the clinician's workflow, starting largely with documentation assistance, like summarizing notes, but moving into other applications as well.
So I can pause there, and then we can go to questions; and if anyone wants to hear more about this, I'm happy to talk about it. Thank you very much, Kate. So we'd love to take questions from the audience; you can do that using the Zoom chat or the Zoom Q&A feature. And while we're waiting for the first questions to come in: Kate, if you're comfortable sharing, how much time did it take you from dipping your toe into the technology to the kind of demo we saw today?
Was that months, or? Yeah, it's been a whirlwind. I actually just made a timeline for this. We actually, literally, did start with our policy and principles, because our organization was really concerned about safe use of generative AI; that goes back to April of 2023. After we stood up our policy and principles, we put together our very alpha version of this product, which did not have anything like the nice user interface you're seeing, in September of 2023.
And then we got the beta product out, beta meaning to the point where we had potential customers or innovation partners in there testing it; that was just in February of 2024, so we've only been doing that a few months and have continued to increase the volume of testing. And, you know, we are not a technology company; we are building this curated database. So it has been a whirlwind.
There's been a huge learning curve about building a really effective cross-functional team, because it is so critical to have the clinical folks and the product folks really embedded with the technology folks for our use case. And, you know, it's only accelerated from there. But I'm personally spending a lot of time on our reporting and analysis and on evaluating the tool, and that has also had to come along very quickly: accurate according to whom?
How do we rate that? How do we track that over time? So part of what's been wild about this is designing and developing all the benchmarks as we go. And so the business turns to me sometimes and says, hey, Kate, what's the benchmark for launch? And I'm like, there's not one answer to that; it's complicated. So that's been a really interesting journey. And, I mean, developing your policy and parameters first seems like a really smart place to start.
But were there outside resources or experts or blogs or reference material you turned to when you first started dipping your toe in, that would be really helpful for anybody who may be learning or getting started in this space? Yeah, I mean, in terms of the policy and principles, I think we were fortunate that everyone was having to wrestle with that at the same time.
So it was relatively easy to say, hey, what are all the big technology companies doing? What are they saying in terms of these themes? And having that to fall back on has been very helpful. Early on, you know, there was a lot of excitement, but there weren't a lot of resources to turn to about how to do this. And I feel like in my space, looking at the medical literature, there's just starting to be a critical mass of people working on these same challenges, you know, maybe writing a blog post or publishing something about it.
But that is just starting to come out more, and especially since every use case can be a little bit different, I'm finding that there's a lot out there that may not apply to the benchmarking that I need, or the safety and guardrails, you know, that I need. So our team is getting more and more involved with national organizations that are trying to define responsible health care AI. And what we're finding is that a lot of folks are confronting these same challenges, and it's been very validating to me, because it's ambiguous because it's ambiguous, and it's hard because it's hard.
And lots of folks are confronting those same types of challenges at once. So what we've really found useful is turning to a community of people working on the same types of challenges. And then we've stood up our own external advisory council to help us think about this too. So it's kind of a phone-a-friend, all-of-the-above approach. Excellent. Yeah.
It is always reassuring to know you're not alone. Hong, turning to you: your presentation was great in helping us think about how generative technologies can be applied throughout the publisher's workflow. I think the industry has talked a lot about how generative AI can help with manuscript submission, and particularly with research integrity concerns. But is anybody using AI at scale yet in different parts of the workflow?
Yes, indeed. As I mentioned in the slides before, there are many generative AI applications already being used at scale, and most of them are in authoring and discovery. For authoring, many vendors already provide tools, but discovery is the main one: you can see Elsevier with Scopus AI doing this, and I think I showed several other vendors on the slide, such as Elicit, in this space, et cetera.
They all use generative AI to improve discovery. From the publishing perspective: just as Elsevier has Scopus AI, at Wiley we are applying generative AI in the paper mill detection service, and Springer Nature uses generative AI for automated book design, generating different prompts to immediately produce new content, new books, et cetera. And besides these user-facing applications,
there are also many applications where generative AI is involved indirectly. For example, many companies and vendors have already applied generative AI to create more training data to improve their solutions, and they use generative AI to improve operational productivity. Many companies, including Wiley, already use Microsoft Copilot to start increasing productivity, which helps a lot.
So those are the different types of applications. That's really helpful to know. And, you know, in terms of my organization, we started using generative tools internally before we started talking to our clients and to the broader community, and we found it really helpful for everything from writing performance evaluations and job descriptions to even polishing up minutes or agendas.
And it was a really fun place to start, because it helped you understand how these technologies work and what the limitations are. But this is a question for both of you: Wiley and EBSCO are really big companies, and there are probably smaller publishers, societies, or librarians joining today that aren't a technology shop or don't have that scale.
So if you were to suggest one practical application, one place to immediately apply these technologies to learn or to find some results, what would you suggest? Kate, do you have any initial thoughts? Yeah, my suggestion is to just get started, and it almost doesn't matter what the application is: anything where processing, summarizing, or synthesizing content could be helpful to your workforce.
And I think an internal application like that is probably more accessible. Once you get your hands on the tools and start working with them, the ideas just start flowing, and the folks who start working with them start to see other applications within that space. You don't have to have a team of developers for something like this to be extremely helpful.
Even just, like you were saying, meeting summarization, and maybe that gets applied somewhere else, to a kind of office-style workflow. Then you start understanding and getting that experience of how it might apply to your own workflow. So in some ways it's specific to your organization, and in other ways the challenges are common, and there's starting to be a body of knowledge out there.
But I have certainly, as someone with a very academic background, had to retrain my way of thinking: it doesn't have to be perfect to get your hands on it, to see the benefit, and then to start iterating from there. I think that's what we've done, and we're just now starting to move beyond this user-facing application to our own workflow optimization tools, now that our teams have more understanding of the benefits and limitations of these types of tools.
Yeah, I totally agree with Kate. You need to learn by doing, so quickly get your hands dirty, try something, and understand it. But from a business perspective, I think it's also important to identify the problem you want generative AI to solve first, and to start simple; you don't have to start with a very complicated problem.
This should also align with your organization's strategy and business goals, because we need to measure return on investment. And although you can use a free version of ChatGPT or other tools, that's not secure, and data privacy and security are also very important, especially for business applications.
I remember that before we used all of these tools widely, such as Copilot, we provided training to help colleagues be aware of the strengths and the weaknesses, et cetera. That is very important. And also, as Kate said, we should quickly develop POCs, proofs of concept, build something, then release something to get feedback and to see the value and the adoption.
So basically it's agile, using an agile methodology. The last thing I want to emphasize: although we have these services, and more and more generative AI applications are used by researchers, colleagues, and publishers, it is still human-centric. It does not replace people; it just helps and facilitates people in making informed decisions. But from the user perspective, don't fully rely on generative AI.
Otherwise, the first and most important thing is that it's not 100% accurate. Secondly, if you fully rely on it, you may start losing the critical thinking and creative thinking that make humans unique. Yeah, I can build on that, actually: it's not 100% accurate, and it's not going to be.
So we have a very active conversation on our team, because especially for a medical application, it needs to be accurate. So we have that gap: it's never going to be perfect, just by virtue of how large language models function. So how do we help our users maintain their trust and understand the limitations of the system we're working in, but still get that value out of this new technology, right?
Yeah, I think we're all going to be talking about trust and integrity, and what that means for our brand propositions, for many years to come. Yes, so we have a question from the audience: for content discovery, now that generative search results are practically competing with normal search results, what are platform providers doing in terms of adopting best practices to help published content
stay relevant? So I guess that would be comparing the generative response to the original content artifact: the journal article, book chapter, or reference entry. In our use case, currency is so critical, so there's always going to be an element here where published content has to make its way to our system in as close to real time as possible.
And what we're also seeing is a demand from our user base to have immediate and direct inline citations, allowing folks to go right back to that source material. Right now you can link to the sources that were used to summarize the content, but we know that our people want even more granular citations, to be able to go and see where that content came from. So it's almost analogous to "we're not replacing people, we're augmenting them": we're not really replacing the need to go look at the source material when it's a more complex question, or when a little bit more detail is needed; it's more that kind of intermediate step to get you where you're going in your discovery pathway.
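One common way to support the granular inline citations described here is to carry source metadata with every retrieved chunk, so the generated answer can point back to the exact passage. The data shapes below are illustrative, not EBSCO's schema.

```python
# Sketch: retrieved chunks keep source metadata for inline citations.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source_id: str   # e.g. a topic + subsection identifier
    url: str

def numbered_references(chunks: list[Chunk]) -> str:
    """Number chunks so the model (and the UI) can cite [1], [2], ..."""
    return "\n".join(f"[{i + 1}] {c.text} (source: {c.url})"
                     for i, c in enumerate(chunks))

chunks = [Chunk("Helmets reduce head injury risk.",
                "peds-injury/prevention",
                "https://example.org/topic/peds-injury#prevention")]
print(numbered_references(chunks))
```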
Yeah, please. Thanks, I'll just add two more points. For me, I don't think generative search results are really competing with normal search results; it depends on the application and on the scenario or use case. For different use cases, normal search is still valid alongside generative search; for example, experienced researchers may still use normal search results. And even normal search isn't necessarily just the user entering traditional keywords in the search box and clicking search: what comes back as most relevant is already driven by the search machinery, at least on the literature side. So for different use cases and different purposes, I think both remain valid, even for Google: normal search and also generative search. And from the technology perspective, RAG retrieval could be based on semantic embedding vectors, or on traditional keyword search, or a combination of the two, and the research so far shows that hybrid RAG still outperforms purely vector-based or keyword-based retrieval. So from the technology perspective, these still need to be combined together to find the most relevant content.
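A standard way to implement the hybrid retrieval Hong mentions is reciprocal rank fusion, which merges a keyword ranking and a vector ranking so documents that score well in either list rise to the top. The two input rankings below are illustrative stand-ins for real BM25 and embedding searches.

```python
# Hybrid retrieval sketch: reciprocal rank fusion (RRF) of two rankings.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists; docs high in any list score well."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc_a", "doc_c", "doc_b"]   # e.g. from BM25
vector_hits  = ["doc_b", "doc_a", "doc_d"]   # e.g. from embedding search
print(rrf([keyword_hits, vector_hits]))      # fused ranking for the RAG step
```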
Right. I mean, I come from a generation where my library experience started with a card catalog and then went into really rudimentary search, where you had a cheat sheet of Boolean operators and special characters that you needed to enter to find relevant information.
And Kate, a point you made when you were doing your demo was that these technologies helped surface content from four different subsections or areas of your content corpus that, as an end user, you may not have gone and found yourself, right? It would have been a really complicated search that would have been a headache to piece together, or it would have taken more time than you were initially going to spend.
And, you know, my view of these technologies and how we interact with our publishers is that it's really rare that the simple answer generative AI spits out is going to be good enough. It may be, if some of us have really simple test-prep products for high schoolers or something; I hope nobody from Chegg is on this call, because obviously their business was in a world of hurt when ChatGPT first came out.
But these technologies, and how we design features and prompts and our UIs around them, have an ability to drive users deeper into our content and to find better, more accurate information for questions they may not have known how to ask. So, like the example from Kate's demo of using GPT-3.5 to take the user prompt and turn it into a natural-language question, these technologies can also help improve the user's questions and deliver better information back.
And that's one of the most exciting things about what all of us do in this space: we publish really high-value content and try to deliver it to end users, and these technologies can help us do that in new ways. I love that. And we're finding that we want to prompt our users and educate our users on how to leverage that type of experience even better.
So if they give us a little bit more detail, a longer question, a little bit more complexity, we can not just reach farther into our body of content for more specific responses, but also give more personalized responses based on that scenario. But there is a need for the user to learn how to interact with the system differently to get more out of it. Yes, and for those of you who haven't begun experimenting: as soon as I figured out that I could write a prompt that says, you are the CEO of a technology company about to give a company-wide address for a given holiday, please write remarks,
you get a far better response out. I'm not saying I've done it, but it's those really simple things, the additional context you can provide about your role, your company, and the particular problem you're trying to solve, that can help you get a lot more out of these LLMs and chatbots. Yeah, context-based learning: you provide more context.
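As a small illustration of that point, compare a bare prompt with one framed by role, audience, and tone; the wording here is just an example, not a prescribed template.

```python
# Same request, with and without role/context framing.
bare = "Write holiday remarks."
framed = (
    "You are the CEO of a technology company about to give a company-wide "
    "address for a given holiday. The audience is 300 employees; the tone "
    "should be warm and brief. Please write the remarks."
)
# The framed prompt typically yields a far more usable draft, because the
# model is given the role, the audience, and the tone it needs.
```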
So we have another question from the audience, and Kate, I think this one is for you: can you use DynaMedex for drug information queries, and if so, how does it compare to other drug information databases in terms of retrieving detailed drug information? Micromedex and Lexicomp were called out. And then: do you see it as a competitor to some of the other medical databases, like UpToDate? Which will take me to a follow-on question once you're done.
Yes, our whole product is a competitor to UpToDate, so that is for sure true. Micromedex is our partner; part of DynaMedex is from Micromedex, so their drug information is fully incorporated into the DynaMedex product. The beta version of our Dyna AI product does not have the Micromedex content incorporated yet, but we are actively working on that because of the need to answer drug information questions.
That's been an interesting project to take on, because it means not only getting that initial text-processing piece right, but now merging two content sets. So you can imagine that's added a lot of other moving parts, and with drug information specifically, you can't get drug dosing wrong, and you can't get allergy information wrong. So we are actively tackling that right now, and one of our next milestones is to advance our beta offering to include that Micromedex content fully.
But there have been a lot of moving parts there, and certainly this has been one of those cases of taking on the bigger challenge up front; that has been our approach, because there's such a need for drug information, but it's a work in progress right now. So, to pull up a little bit and generalize that question, whether it's
specific to medical publishers or not: do you see these technologies, generative AI and RAGs, supplanting other players in the value chain? Traditionally, it's been publishers that hold a lot of the value in the primary publication of medical research, and obviously this could shift that. And Hong, I think the same question might have a different perspective, broader than just the medical information space.
But Kate, do you want to start? Yeah, interesting. Because, just as I don't think we're supplanting people at any point, though it might shift some of their activities, my experience, which I imagine could generalize to other contexts, is that we're not really replacing the traditional search experience.
We're not necessarily cutting out any of the other pieces: the need for transparency, and for immediate availability of, and access to, that source information. It may be that the way you're interacting with it is a little bit different. And some of this does depend on which players in the value chain get out ahead in providing this type of functionality.
Our approach has been to do a lot of thinking about how we can license our content; there's clearly value there. But from our perspective, baking this into our core experience allows us to control what that experience is, rather than leaving that potential for someone to skip over what we're doing completely. Not that that seemed like a strong possibility, but to me it's more about how the information is accessed than about cutting out any of those elements of how important the published content is, and how important it is for a user to have direct access to it.
But Hong, any thoughts on these technologies supplanting publishers, or the current big stakeholders in the space? Yeah. Taking Wiley as an example, we are not only focused on publishing, but also on learning. So now, besides publishing, we are also trying to apply generative AI to enhance learning, to create more interactive content, and to identify more interactive ways for the learner,
I mean, to help people quickly digest and understand this content, et cetera, and I think this is very, very important. But I want to mention a little bit more beyond the technology. For me, the technology is there, and large language models are becoming more and more powerful, so the technology entry barrier has become lower and lower.
So many people and many vendors can easily start developing applications based on this. For me, in the future, AI is not about individual solutions, applications, or APIs; it's a platform. It either has to be integrated into other platforms to differentiate them from others, or it becomes a big platform itself, like today's trend toward the super app.
On a mobile phone we have over 100 applications, but in the future, maybe only two or three: everything you need, you can just get from one single app to achieve your goals, et cetera. But another thing I want to emphasize, besides the technology: I think business policy and governance are at least equally important, or even more important, than the technology itself.
You can see that today companies invest billions and billions of dollars in the technology itself. But how much is being invested in governance? It's now picking up, and more is being invested, but it still lags far behind the capabilities of the technology itself. And it's very important, because if you cannot manage these capabilities and solutions
well, it's dangerous. It's dangerous. And in the scholarly publishing industry, if we want to use AI, do we have the right policies and business standards to support this? Should we accept AI as an author? Should we redefine the meaning of contribution, and to what extent can we leverage AI to help us write a paper or review one?
And also, what about copyright? Copyright today still protects the human, but is that still valid in the era of digital, intelligent work? So I think there are a lot of things that, at least for me, are maybe even more important than the technology itself. And we're nearly at time.
But along with governance and understanding business strategy and risks, I just want to plug the importance of product managers who understand your audience and their needs; that's a vital role organizations should have these days, right? And, Hong, you're right: there are chances that upstarts may leverage these technologies to come into our space.
There's a possible future where publishers, as Axel Springer has done with OpenAI, may decide that their best forward business strategy is licensing content into these LLMs and still getting value in exchange for that. And that means they're just delivering their content to their audience in a different way; it's now through an intermediary. And whether that amounts to disruption or business change, who knows.
It's all about dollars at some level at the end of the day. But understanding how your users' needs and use cases are changing as they adopt these technologies is going to be so important. And you need, I think, someone who has that awareness of the end user: what they want to do, where they want to find your content, what that means for your strategy and your brand, and how you protect all these things. It's all the more critical.
So if you don't have a product manager in your company, you should probably talk to someone about getting one. Sorry, I started my career in product back in the day, so a big, big plug for that. And a lot of publishers are on a product journey. So I think we are at time. Susan, I think you wanted to come back with a closing script, but I just want to say: Kate, Hong, thank you.
It's been a lot of fun. Thank you. Thank you. Thank you, Will. All right. Thanks, everyone. Let me see if I can share my screen.
Yeah, thank you, everyone. I just want to once again thank our sponsors, Access Innovations, OpenAthens, and Silverchair. We are grateful for your support. We encourage you to provide your feedback on today's webinar: scan the QR code or click the link in the chat to tell us what you thought.
Please visit the website for information on upcoming programs. One upcoming event is an introduction to copyright seminar, which will be held on July 9th. Again, thanks so much to our speakers and our organizers today. And just a reminder that this discussion was recorded, and all registrants will receive a link to the recording when it's posted.
And so that's it; this session is concluded. Thank you all. Bye, everyone. Thank you. Bye. Thank you. Thank you. OK, bye-bye.