Name:
Answers to the AI Questions You Don't Want to Ask
Description:
Answers to the AI Questions You Don't Want to Ask
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/9759f6ab-76bb-4e93-93a8-96334f5d67be/thumbnails/9759f6ab-76bb-4e93-93a8-96334f5d67be.png
Duration:
T01H01M32S
Embed URL:
https://stream.cadmore.media/player/9759f6ab-76bb-4e93-93a8-96334f5d67be
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/9759f6ab-76bb-4e93-93a8-96334f5d67be/AI Lab Reports- June 2024.mov?sv=2019-02-02&sr=c&sig=0e2DxHRfU%2BvDzcAEZwOGtvhyjR%2FRsmq0LrlZiHOUu8M%3D&st=2024-11-21T11%3A15%3A09Z&se=2024-11-21T13%3A20%3A09Z&sp=r
Upload Date:
2024-07-02T00:00:00.0000000
Transcript:
Language: EN.
Betsy Donohue: Okay, hello, and welcome. My name is Betsy Donohue, SVP of Business Development at Silverchair. First and foremost, I wanted to thank you all for joining us for today's event. This event is the third and final of the 2024 Silverchair AI Lab Reports webinar series, and we're really looking forward to today's discussion. But before we start I wanted to go over just a few logistics. As I mentioned, this event is the last in our annual spring webinar series, but if you're interested, we welcome you to view the recordings from the previous webinar events on silverchair.com. We are also delighted to be bringing back our in-person Platform Strategies event on September 25th of this year in Washington, DC.
Betsy Donohue: And we'll share more info on that at the end of today's webinar. This year's webinar series really aims to bring us down to earth and ground us in practical discussion around AI in scholarly publishing, and I'd really like to put out a call to action to you all joining us today: we would like you to guide our discussion, and we need your questions, so please add those to the chat and/or the Q&A features on Zoom. Don't hold back; let the questions fly. A little bit of housekeeping: this event is being recorded,
Betsy Donohue: and a copy of the recording will be made freely available via our website and to our attendees. And finally, at the end of the event, you will see a survey requesting your rating. This really does help us with future event planning, and we appreciate your feedback in advance. So with that, I am delighted to hand things over to my co-host, Lori Carlin, of Delta Think. Over to you, Lori. Lori Carlin: Thanks so much, Betsy, and I, too, am delighted to be here and happy to be co-hosting today.
Lori Carlin: So we're going to start off with a poll today, really to level-set and determine where everybody is in their journey. We'd like to understand that so we can speak to it on the fly, even though we prepared for this webinar. So tell us: what is your biggest concern when it comes to AI? Is it inaccuracy; traffic to, or trackable usage of, the version of record; bias, privacy, and other ethical concerns; loss of revenue; research integrity; loss of brand awareness and reputation; or something else that you can add into the chat? We'd like you to take the poll. It's single choice, so pick your biggest concern, and while you're doing that I will give a little introduction to our topic today. We all know AI is a tool, and the topic is ubiquitous. For those of you who attended conferences this spring, I'm sure all of the programs were replete with sessions on AI. This is the hot topic, in various forms and flavors.
Lori Carlin: And there's been a deluge of information. It's hard to know where to get started; if you have started, where to direct your interest, where to get your hands on additional information, who to talk with, who to network with. So in this session we want to cover some of those basics and best practices around how to get started. We'll talk a bit about prompt engineering; leveraging AI to engage technical topics like metadata, XML, or code; staying compliant by creating and adhering to policies; the legal and privacy implications; and how to generally stay informed in this rapidly changing environment.
Lori Carlin: Plus, as Betsy said, we want to hear your own questions. We left plenty of time for that Q&A, so please be ready to share those. You can do that anonymously if you prefer. And then let's see, how are we doing on the poll? Stephanie? Yeah. Great. So we can see definitely inaccuracy, bias and privacy, research integrity, but everything's got some concern along the way. We will certainly want to touch on these areas.
Lori Carlin: And okay, Betsy, back over to you. Betsy Donohue: Yeah. So this is a perfect time to ask our speakers to introduce themselves, and while they're introducing themselves, to add a little bit of perspective on our own personal experience with this topic and answer the question: what has your engagement with AI looked like so far? I've already introduced myself, but I can take that question. For me personally, within Silverchair, our organization has really energized and challenged all Silverchairians to roll our sleeves up, get experienced, and start using AI in all aspects of our life.
Betsy Donohue: So I've had the benefit of fantastic guidance here, anywhere from Otter AI on Zoom calls like this, to help pull straight transcripts and summarize, to more specific Silverchair tools: we have SilverChat, which is our own kind of walled-off ChatGPT, where I can take things like ideas from a call and ask for refinement or prioritization. So, pretty cool stuff like that. I would say I am a layman, but learning a lot. And I'll shout out to Silverchair's AI Playground, where you can go in and experience different types of RAG model pulls from data, see the cost, and see how quickly things can happen.
Betsy Donohue: And it is a really cool experience to dive in and understand what's behind a RAG model. So those are the three things I've had experience with so far. Over to you, Lori. Lori Carlin: Thanks, Betsy. So yes, I mostly introduced myself before: I'm from Delta Think, where I'm Chief Commercial Officer, and as a consultant I have an opportunity to work with lots of different organizations of different types within our industry, different shapes and sizes, different risk-tolerance levels, if you will. So really, the landscape of AI that I've been involved in is very, very broad. For myself, as a tool, we use it within the organization to help organize, to help search, to help even write content, beginning information. Do I rely on it a hundred percent? No, but it's a great starting-point tool for your marketing materials or summarization of information.
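[Editor's note: for readers curious what sits behind a RAG pull like the one Betsy describes, here is a minimal, self-contained Python sketch of the pattern: retrieve the passages most relevant to a question, then assemble them into a grounded prompt. The documents, function names, and word-overlap scoring are invented for illustration; a production system would use embedding search and a real LLM call.]

```python
# Toy sketch of the retrieval-augmented generation (RAG) pattern:
# 1) retrieve the documents most relevant to a question,
# 2) assemble them into a prompt that grounds the model's answer.

def retrieve(question, documents, top_k=2):
    """Rank documents by naive word overlap with the question (illustrative only)."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, passages):
    """Combine retrieved passages and the question into one grounded prompt."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the sources below, and cite them.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}"
    )

# Invented example corpus and question.
docs = [
    "RAG models ground LLM answers in retrieved source passages.",
    "Prompt length affects the cost of each model call.",
    "Orthopedic surgery content requires expert review.",
]
question = "How do RAG models ground answers?"
prompt = build_prompt(question, retrieve(question, docs))
```

The prompt, not the model, is what carries the publisher's source material here; that is why costs and source visibility show up so directly in a playground like the one described.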
Lori Carlin: On the client side, we've been working with clients on everything from just understanding what this means for us, and should we have a policy, should we have an author policy regarding AI, for example, all the way up to: how can we incorporate this into our products and services? How can we build products around an AI solution? What kind of business models are we talking about? Should we be licensing our content for AI use and distribution? Really, what is the best use case for AI in our organization, with our content, with our information? So it really does span the gamut, and I think we're all wrestling with that: what is the right use of AI for each of us?
Lori Carlin: Brian? I believe you are up next. Brian Moore: Yeah. Hi, Brian Moore, I'm with the American Academy of Orthopaedic Surgeons. I head up our Learning Division, which is inclusive of our examinations and our resident curriculum. We've really tried to push staff to use generative AI in particular to focus on their productivity: look for ways to use it to draft emails, to synthesize data, to help us with first drafts of various policies, many of the ways that Lori mentions. In fact, we used generative AI to draft a policy on how to use generative AI, so a bit meta there. But some of the other cool projects we've been doing have really been around product development and creating first drafts to then help optimize our SMEs' time.
Brian Moore: So we've got busy volunteers that we work with, and them staring at a blank piece of paper always takes longer than if they're editing something, so we've tried to help them along those lines. I think the best project, with the most opportunity but also fraught with some hair-pulling, is us trying to figure out how to draft self-assessment questions. It's a heavy lift to have a human generate a good self-assessment question.
Brian Moore: So, looking at how we can again have generative AI create that first draft and then work with our volunteer structure to make iterative improvements, that's been a fun challenge: looking at how we can take what is out there and apply it, through various fine-tuning and RAG models, to really focus on orthopedics. Lori Carlin: Great. Dave. David Myers: Hi, I'm Dave Myers, and of course, like most of you, I use AI personally, just to experiment. But by way of introducing myself, my engagement with AI takes on really two forms. On the services side, I'm CEO of DMedia Associates, a bespoke consultancy I started about 18 years ago after leaving Wolters Kluwer, and we help companies from super-large multinationals to AI startups manage, negotiate, and license data, especially around licensing data to train their algorithms and integrate into their own product offerings. I've also created a course to help facilitate publishers and content owners coming up with their own policies for outbound data licensing, that is, licensing the content to others, rather than their internal employees using AI or creating products internally.
David Myers: On the product side, I also serve as CEO of Data Licensing Alliance, a marketplace for licensing data specifically for AI, and I started that company close to four years ago, luckily, or unluckily, right before Covid started, because I saw a huge need to make licensing data for AI much more efficient. It is very complex and time consuming, and most people don't know how or who to talk to, much less how to negotiate it. So for me, that's generally my engagement around AI.
Lori Carlin: And Sven, I think you're next. Sven Molter: Yeah, thanks so much. Like everyone before me, I engage with AI in many similar ways. My name's Sven Molter, I'm the VP of Product here at Silverchair, and as Betsy was mentioning, we have a number of things in house where we've been playing with AI. On a personal level, I'm an explorer, so anytime I hear of a new tool that I have not played with, I immediately try to get in there, start poking around, and see what I can do, and it's always super exciting to see all the different tools that are available. On the Silverchair side, as Betsy mentioned, we have a lot of things happening here, which is truly exciting, from the Playground to our internal SilverChat.
Sven Molter: And the Playground is super exciting because we're working on a generative RAG model, and we're going along with our clients, right? We're talking internally and also with our clients to see how we can partner together to help our clients achieve their goals. So it's, as everyone knows, a very exciting time and an exciting thing to be a part of. Lori Carlin: Great, thanks, Sven. That segues really well into our first question, which was, or is: we assume among the folks joining today there are some who have not started to engage with AI, and some who are ready to dive in or just starting to experiment with it.
Lori Carlin: And how would you recommend that they get started in this experimentation? Sven Molter: That is a great question, and we don't have a lot of slides here today, but we do for this intro part. So, jumping in: if you have not, the first thing to do is set your expectations, right? AI tools are currently most effective when used to enhance your productivity. It's something that you're using as a tool; it's not the decision maker. You're still the decision maker. It's not a replacement for your judgment, and many situations, including complex legal and ethical decisions,
Sven Molter: require a human in the loop, and that human is you. So once you've set your expectations, you immediately need to understand your organization's policy, right? There are a number of areas that an organizational policy could speak to, including privacy and the security of data: how is that data going to be used? Is it going to be used to train the model, and if so, what data is included there, and are there any privacy concerns? Transparency, which goes along with bias and fairness: what is the model trained on? Can you trust the source material? Is the source material referenced at all?
Sven Molter: And then, of course, one of the big ones is intellectual property. If you as a user are dropping sensitive information with company secrets into an LLM without privacy agreements or a policy, it's akin to giving those secrets away. So this means, of course, that you need to be aware of your organization's policy before you even jump in. And then, once you understand the policies and have set your expectations, you're ready to go explore, and there are so many different tools that you could use. This is not an endorsement, just an example of a few. Lately I've been using one called Perplexity, and one of the reasons I love it is that it includes the source right there: you do a query, you do a prompt, it generates information and then includes the source, so you can go right to the data behind what you're looking at.
Sven Molter: And of course, one of the bigger things in working with generative AI and LLMs is making a good prompt, right? It's not just a search; it's a conversation between you and the AI, and a good prompt is essential for effective communication with AI systems. So you need a clear objective. You need specific details; avoid ambiguity where possible. Structure and format are important: you can dictate the length you want and describe the format you want something output in. Context and background are very important too; there may be relevant information that you need to provide to help the AI understand the situation.
Sven Molter: And then I think I would just jump down to iterative refinement, right? Being as specific as you can in your prompt is great, but you will find that the true value comes as you see the results and you iterate. It can be something as simple as the format not coming out the way you expected, so you give it a little more instruction, or you're just building on the conversation that you're having with that AI. So those are all the ways that I would jump in.
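[Editor's note: the prompt elements Sven lists, a clear objective, specific details, a desired output format, and context, can be sketched as a simple template. This is an illustrative pattern only; the field names are invented, not any tool's required schema.]

```python
# Minimal sketch of assembling a structured prompt from the four
# elements discussed: objective, details, output format, and context.

def make_prompt(objective, details, output_format, context):
    """Join the four prompt elements into one clearly labeled prompt string."""
    return "\n".join([
        f"Objective: {objective}",
        f"Details: {details}",
        f"Output format: {output_format}",
        f"Context: {context}",
    ])

# Invented example values for illustration.
prompt = make_prompt(
    objective="Summarize the attached peer-review policy",
    details="Focus on author responsibilities; keep it under 150 words",
    output_format="A bulleted list",
    context="The audience is journal editors new to AI tools",
)
```

Iterative refinement then amounts to adjusting one of these fields (say, the output format) and re-sending, rather than rewriting the whole prompt from scratch.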
Sven Molter: And I believe, Betsy. Betsy Donohue: Over to me. Thanks, Sven, that was a great summary, awesome. And to segue from exactly what Sven was covering on his slides: there are so many AI tools that have been released, often for specific use cases. This first question or discussion point is: what are some of the ones that are really most valuable, and specifically, which might be the most accessible to a beginner to AI? Brian, this one goes to you.
Brian Moore: Yeah, happy to. So I think that, given the pace of change over the last 6, 8, 18 months, I would stick to the Big 3: I'd stick to Claude, ChatGPT, or Gemini. What has been troubling for me is to see the number of tools that have popped up recently that are not clear in how they're using your data. Given the fact that those three have enterprise-level integrations, you can be fairly confident that they've got the systems and the framework in place to protect your data. Should you choose to not have it used for training data, you have the option to select that, and they're not going to use it; they're not going to double back on something like that. There's a reason that Apple decided to work with OpenAI: because they have those kinds of policies in place to run a forthright business.
Brian Moore: I had done an internal training seminar with some folks here and said, go out and download ChatGPT and get started, dip your toe in the water, see what it's like. And I started getting these emails back saying, oh, well, it's asking for a credit card; I thought you said it was free to get started. If you go to Google right now and type in ChatGPT, the first 3 or 4 hits are not OpenAI products. They're actually skins on top of OpenAI: they're taking your data, sending it over to OpenAI via API, and then returning some result back to you. And you dig into it:
Brian Moore: They are allowed to use your data by virtue of your using their app. There are nefarious third parties there. And so, as we're still trying to figure out what the environment looks like, I think the take-home message is: look for those companies that are doing enterprise-level integrations, and trust them more than other folks. The other piece: something like Grammarly, which has been around since 2009, is now adding generative AI on top of its platform. So I think you can be a little more assured that they're not just trying to strike quickly, grab some money, and then sell out and move on.
Brian Moore: As of two weeks ago I was also a big fan of Perplexity, but I think there's a word of caution there, because I've seen some questions around the data and how they're creating it. I think the model is awesome insofar as it cites everything it provides back to you; I just question how that data is arising. So, the point being: stick to the Big 3, and I think that's probably the safest bet until the dust settles and we see where things are going.
Lori Carlin: Great, thanks, Brian. I want to build on that a bit now, because there is a lot of hesitancy around using AI, and a lot of that is related to IP protection and compliance specifically. So how can a user really know what policies they need to adhere to? How can they be sure their content is protected from training LLMs and becoming part of future models? I'm going to turn to Dave to address this very issue.
David Myers: Well, thank you, Lori. It's very complex, and there's a lot of chance for error in this new world that we find ourselves in. Both Sven and Brian laid out some really interesting products and ways of thinking, and the way I look at this is: okay, you need to start experimenting. But when you start experimenting, I advise working from the viewpoint that whatever you do with generative AI will be exposed to others. So take a conservative view.
David Myers: But you've got to read the privacy policy. You have to read the terms of use for every product that you're going to be using, and think about the use cases. So if you want to, as an example, use generative AI to create a cartoon, what's the danger in that? It's something you're creating for fun; in my opinion, you really should not be too concerned about that. But if you use generative AI with some third-party application and you're inputting sensitive corporate data into it, well, that's a much bigger problem. And as Brian mentioned, different off-the-shelf versions, especially the enterprise-level ones, now have the capability of turning off the use of your data for training, so make sure that if you're using them, you have that ability, and do so, especially through the APIs and the third-party tools. But you have to balance a fast approach with what your corporate or organization's policies are. So think about it from that perspective: what does our organization believe, versus just experimenting a lot?
David Myers: I would also say: make sure you pay close attention to privacy and GDPR concerns, because you can unwittingly violate those governmental and regulatory requirements if you start exposing data that has personally identifiable information, as an example. And remember that your policies will change, and will likely change often, so you have to go back again and again and review them. Don't assume anything, and make sure that you remain flexible and open to changes. That's how I see it as a broad overview; I'll turn it over to anybody else for comments.
Brian Moore: Yeah, a question to the rest of the group, actually: we've been trying to figure out what's the right pace to revise our policy. Betsy Donohue: Hmm. Brian Moore: We originally said every 6 months we're going to look at that; we've since changed it to every 3 months, and I'm wondering if we need to go to, like, a monthly landscape assessment of what's going on. So I'm curious if anyone else has tackled this. Sven Molter: We've not spoken to the frequency just yet, but we're currently discussing the next iteration of our policy now, and I think it's a great point: something that's moving this fast needs to be considered frequently.
Lori Carlin: Yeah, I will say, Brian, I've worked with clients who were developing policies, and I had that same discussion with them: at 3 months you need to be looking at this, because it's going to change rapidly. So no, I don't have a better answer than that, but I think you're right on the mark there. David Myers: And Brian, from my perspective, as I mentioned in my introduction, I view it two different ways. There's the policy for outbound licensing, what you do with your own content out there; those can be a little less frequent. The ones internally, much more frequent, I would say.
Brian Moore: Yeah, I think things are just moving so fast. You just have to keep up with staff, because they're coming up with new ideas and applications where we're like, oh, we didn't even think about having to give some guardrails to that. So it's about having that bi-directional communication internally. But also it's not a one-and-done thing, obviously, with policy writing. David Myers: Absolutely. Betsy Donohue: This is a great opportunity to shift gears to the next discussion question.
Betsy Donohue: Because there's a nuance there that shouldn't be missed, right? We're talking about internally at our own organizations versus externally, with customers and constituents. And the next question is about how we stay aligned there: externally, what are our customers and constituents expecting us to deliver to them on AI functionality, and how do we understand their needs while managing their expectations? This one goes to Lori.
Lori Carlin: Yeah, thanks, Betsy. So, you know, some things never change: research is always important in anything that you are planning to build or move out into the community, to really understand that voice of the customer. What does the customer need and want? And I don't want to get into analysis paralysis either, because this is so fast moving that you do need to be really agile in the way you approach it. But, as I said before about risk tolerance, understanding where your customers are matters: you can build some beautiful solution and have a premier product, and if your community is not ready for it, it's going to fail at worst, or just not really do well, or take a long time to get acceptance and adoption. So you really want to understand where your community is in their journey when it comes to AI. And you also want to understand what's going on in the landscape. We just talked about, is it one month, is it 3 months, is it 6 months? Different areas have different timelines for that as well. So it's a mixture of looking out regularly into the landscape and assessing what is going on. What's the latest news? What is the reaction to that news? What do others in your space seem to be doing, and others outside your space? I'm a really big proponent of looking outside of scholarly communications, for example, because there's a whole world out there of other folks doing things, especially in this space, that we can all learn from. If we only focus on our industry, we get very insular, and we're just kind of building off of each other. So there's keeping track of what's going on in the landscape. But then there's also your customers, your users, your community. What are they interested in? What are their challenges?
Because sometimes they don't know what the solution is, but they have challenges, and they either know their challenges, or you can talk through their workflows and pick out where those challenges, those gaps, are. And you start to build products and services and solutions around that for them, ones that they're willing to accept.
Lori Carlin: And again, it's not a once-and-done; it's a frequent research engagement in this particular area, at least for right now. So maybe it involves advisory groups, folks that you can meet with on a monthly basis to run ideas by and to get feedback from. But you want to engage that audience. That build-it-and-they-will-come mentality is old, and really most of us do not subscribe to it any longer. But in this particular case you want to have that balance between what is the newest, what is really going to be relevant for us, and what is our community willing to accept? And then there's a third piece in there, which is how to educate them in order to get their buy-in as well, since this is so new. So I think there's a multi-pronged approach here, but it requires engaging the community.
David Myers: Lori, can I just chime in real quick? I love what you said. The one word that comes to mind for me is trust, because no matter what you do, if your constituencies, or your employees, or whoever is using it, don't trust you, don't trust your output, everything else goes to pot. So you have to establish a culture and an authenticity about what you're doing, and that'll resonate either positively or negatively, depending on what you do.
Lori Carlin: And why you're doing it, you know: how is this solution going to benefit them, the community? I would add that too. Yes, absolutely, Dave. Brian Moore: I would just double down on the point that David just made. Honestly, I think we're going to see a lot of bad actors utilizing AI to create fakes in various ways, and at the end of the day, amongst the glut of content, your credibility as an organization is going to be sort of the guiding light through all the fog. So be mindful about how you defend that reputation, defend that credibility, above all else. It doesn't take much to lose that trust, but you are going to be what folks look to to help guide them through all of this over the next couple of years.
Lori Carlin: And I wouldn't underestimate the value of your credibility to organizations that want to partner with you, either, because that trust is really, really important to them as well. David Myers: Right. It's really going to be the differentiator, the way I see it. Lori Carlin: Agree. Betsy Donohue: Well, we have a question in the Q&A that's, I think, worth touching on.
Betsy Donohue: There are some themes in this question that sound like it should go to David first, but feel free to jump around here. Will you be getting into content development, and specifically at the proposal stage for new books and ebooks, for example? Any tools, tips, tricks, pitfalls? And then, finally, are there any ethical or ownership issues in using these tools for product development? So this feels like David and Sven to me. David, do you want to start?
David Myers: Yeah, I'm trying to understand the question a little bit, but it's really about content development and what you should consider when you're doing it. So from a product and ethical perspective, I have to tie it back to our prior comments: it's about what you're doing when you're creating content, first of all. The way I look at everything, it all starts with strategy. Does it fit with your organization's strategy and what you want to deliver that will enhance your organization, whatever its goals, in content development?
David Myers: Especially with AI, you have to maintain trust. So you have to communicate how that product is being created and what tools are being used to create it. You don't have to go into detail, depending on the product itself, but people have to understand that the product you're creating is something that can be trusted and will add value. So, at a high level, that's the way I see it. Sven, any thoughts?
Sven Molter: Yeah. You know, when I think about using these tools for product development, similar to what I was mentioning earlier, I just think of them as an add-on tool. I think there are definitely ethical and ownership issues, especially without a prior agreement. As an example, with some of the work that we're doing with the Silverchair Playground, we have specific agreements in place with providers not to use the data to train their model. And then I think there are other ethical concerns, especially when getting results, if we're not understanding what the source of the data is. Is it biased?
Sven Molter: Is it transparent enough for us? And then I think there's also a question of, if you're using the LLM to generate ideas, who actually owns those ideas once they have been generated? A lot of these questions don't necessarily have answers, and we will see them played out in the courts as big organizations like the New York Times sue. Those are my thoughts. Brian Moore: Yeah, if I could jump in: I am not a lawyer, let me first say that, but I've had a lot of conversations with our General Counsel around this very topic, because we've been using it as a thought starter for product development. And what we've come away with is: if it goes straight out of an LLM to market, that is not copyrightable. Therefore you don't own it.
Brian Moore: And the gray area with the Copyright Office right now is how much human interaction needs to happen before it goes from being LLM-generated -- non-human generated, I think, is the phrase they use -- to being a human output. And so where we've drawn a line with the development team is: you can use it for a 1st draft, you can use it to develop outlines, you can use it to Brian Moore: help frame what this product idea is going to look like. But when it comes to actual content generation, the expectation is that it goes through our full editorial process, that it's worked on closely with our surgeon volunteers, and that nothing goes straight from an LLM out our doors without significant editing and significant input from Brian Moore: human editors, developers, etc. So until there is a firm decision by the Copyright Office on what that level of engagement is, we've tried to ensure that nothing is coming verbatim out of an LLM and going to market.
David Myers: Yeah, Brian, if I may just chime in here. I think it will be a moving line in the sand for the foreseeable future. As a summary of what we were just talking about, it's really about what the sources are and what the human involvement is. They say that every idea has already been created somewhere in the world, right? So there's nothing new; it's just about how it gets repurposed. David Myers: And so, yeah, understanding the source, identifying the source, and having human involvement will serve you well.
David Myers: Yeah. Betsy Donohue: Now, we have a couple more questions that have come in in the Q&A, but we have a related question that our group here, when we were preparing for this webinar, was wondering if people were going to ask. It relates to what was just chatted there and that human interaction. And that is Betsy Donohue: the simple question Betsy Donohue: that we know many people have been wondering: is my job going to be taken Betsy Donohue: by AI? I think this is a nice complement to the question that was just answered. And again, there's a couple of different ways to look at this, right? The optimistic version is: what can my role be in the age of AI? So who wants to take that one?
Brian Moore: I'm dying to take that one. Betsy Donohue: Oh, yeah, Brian. Brian Moore: Yeah. So I think the adage that has been making the rounds is, AI won't take your job; people using AI will take your job. There is a study -- the BCG study that gets referenced often -- where the consultancy Boston Consulting Group gave it to a bunch of their folks, a bunch of their consultants. And you see this huge, like 2-sigma shift in the capabilities of low performers.
Brian Moore: So all that to say, I think there is the Brian Moore: positive side of AI integration that says, well, maybe it can remove the mundane parts of my job, the parts that I don't like to do, so that I can focus on the things that I love to do. As coders, can it do that 1st draft, and then I can use it to debug and to think about more complex problems? As editors, can it make that 1st draft so I can focus on, have we lost the meaning? Have we Brian Moore: retained the right tone for the organization? So I think there's a huge opportunity to take the Brian Moore: grunt work -- for lack of a better word -- off of our plates and focus on the fulfilling part of what it is that you do.
Brian Moore: The response that I've had to folks when they're coming to me to present something, a problem, whatever, has more recently been: Well, have you run this through AI? Have you run this through ChatGPT? Hey, I'm about to give this presentation to my boss on blank; can you take a 1st pass on it and let me know where you see problems? Brian Moore: And it has the potential to level up everybody. I think if somebody came to you and you said, Well, have you Googled it? you're basically calling them an idiot to their face, right? But if you say, Have you GPTed it? it's like, Oh, I hadn't considered that. As I said, it has this capacity to upskill everyone, up-level everyone. And I think there's a great Brian Moore: positive upside to that. As opposed to, Oh no, I need to start looking for a new job, it's, how does my job become more fulfilling by virtue of a tool that is available to everyone, as opposed to it being something that's sheltered at an enterprise level? Brian Moore: Like, it's awesome.
Lori Carlin: I would add to that by saying, you know, I don't think I've ever run into an organization that has a glut of staff Lori Carlin: and staff time, right? So think about all the efficiencies that this will allow for, Lori Carlin: giving you more time to get to the things that do need that human touch, that human thought process, that strategic development.
Lori Carlin: If you are able to allow AI to help you with some of these other areas. Lori Carlin: And I think that comes back to the way Betsy was framing the question: what can I do Lori Carlin: to, you know, improve my knowledge, improve my use, to really understand the value of these AI tools so that I can do my job better, Lori Carlin: and now have more time for things that AI may not be able to solve, or where I need to add value to the AI
David Myers: Right. Lori Carlin: out of the box, yeah. David Myers: And I would just add -- you said exactly what I was just about to add -- which is, ultimately it comes down to: can you add value to whatever it is you do, and does your use of AI add value or detract? As long as you're adding value to your organization or to whatever you're working on, there'll be a place for you somewhere. Betsy Donohue: Yeah. Lori Carlin: Absolutely. Lori Carlin: So, there's a somewhat related question in the Q&A.
Lori Carlin: And I hate to say it, Brian, but I might turn to you on this one to start off with; Lori Carlin: we'll see. So this really is: where do you find the monetary investment needed Lori Carlin: for AI? Lori Carlin: How does an organization Lori Carlin: start to parse out money from everything else that they need to do and direct it towards evolving AI? What are the trade-offs? What are the considerations? What's the business case? What comes to mind for me is: where can we save? Where can we let management know, if we do this, we can shift money from here to there, and this will help with that.
Brian Moore: 2 thoughts. One, Brian Moore: we've been able to make the case to our leadership that Brian Moore: the monthly subscription for ChatGPT Team -- $25 a month, I think it is -- which allows you to withhold all your data and gives you all of those protections, helps make teams more efficient and therefore justifies the cost. We've been lucky to have a pretty robust project management team who has tracked what the development time is for each of the phases of product development. And then we've been able to Brian Moore: put that against what we're seeing now that we've utilized GPT in the development process, and we've seen a gain that Brian Moore: by far offsets the relatively low cost of the software. I mean, it's effectively about the same cost as Photoshop for an enterprise license. So it's not an outrageous fee that we're talking about.
Brian Moore: From the flip side of, well, how do you get started? I think people love the word pilot, because it sort of gives you this wide-ranging freedom to say, oh, well, we're going to do a pilot, it's going to cost us about 10 grand, Brian Moore: and you're not as beholden to the outcomes to some degree. But with the exam development process in particular, for us -- not to speak for anyone else -- that's a 22-month process. From the time we say, all right, we're doing a standard-setting meeting, to when we have a completed exam, that's nearly a 2-year process. And the bulk of that is in the actual question writing.
Brian Moore: And so we said, if we can condense this 8-month question-writing process to, let's say, a month, and then we can use all the volunteers' time to validate those questions, that takes a huge lift off of internal allocations and shifts it towards working with outside partners. So I don't know if that's directly applicable to everyone else, but that's sort of the approach that we've taken to justify Brian Moore: testing the waters and experimenting, as David had said.
David Myers: Brian, if I can add, I would also say that from a content owner's perspective, you have to make sure that you allocate the appropriate amount of funds to create your content or data in a way that makes it interoperable with others when you're licensing it out. The FAIR principles are one way to look at it. David Myers: And you do have to allocate funds, but if you do, it will, David Myers: certainly in my experience, return ample dividends.
Betsy Donohue: David, can you define those, just in case our attendees aren't familiar? David Myers: Yeah. Betsy Donohue: Findable... David Myers: Yeah, on the spot here, right: FAIR. Betsy Donohue: Accessible. David Myers: Yes, FAIR. Lori Carlin: Interoperable. David Myers: Interoperable. Betsy Donohue: Reusable.
Lori Carlin: Yeah, yeah. Betsy Donohue: Sorry, didn't mean to put you on the spot. David Myers: It's all right. David Myers: I should know it by heart, but Betsy Donohue: That's all right. David Myers: ChatGPT will tell me it. Betsy Donohue: Yeah, oh, for sure. Betsy Donohue: Great! Do we feel like we completed that one, everybody? Betsy Donohue: I think so.
Betsy Donohue: Although the one little thing -- it's very anecdotal -- Betsy Donohue: it brings me back. Betsy Donohue: With the great migration from print to online, we were asking the same question: Betsy Donohue: where are we going to get the money for this? Lori Carlin: We're always asking the same. Betsy Donohue: We're always asking the same question, yeah. Okay, so another open question. Betsy Donohue: It's about AI-generated images. Betsy Donohue: And to date, they've not really been recognized for copyright protection.
Betsy Donohue: Does this group have any thoughts on how that impacts Betsy Donohue: potential branding Betsy Donohue: or content uses, Betsy Donohue: or the degree of modification that would cross the threshold of protection? David Myers: I could start real quick. I think we kind of touched on it in a prior dialogue, but it's really about the percentage of David Myers: involvement that a human had in creating those images and text, right? So it's kind of a sliding scale. But the more a human is involved in it
David Myers: with their efforts -- and also assuming that the tool that they're using has not illegally scraped the content itself, and there's a whole David Myers: dialogue around that -- David Myers: the better off you are. Brian Moore: We have primarily used image generation as sort of a conversation starter. So as we're working with graphic designers and illustrators: hey, this is sort of what I'm thinking. And it's shortened that back-and-forth process on the front end of any given development cycle. But we've not used it for any public-facing Brian Moore: images. I know a lot of folks are using it as filler for their PowerPoint presentations.
Brian Moore: But beyond that, nothing that's monetized or customer-facing. Lori Carlin: So, taking a look at our questions, Lori Carlin: we have one about protecting against unsanctioned scrapes Lori Carlin: of data and information. Lori Carlin: How can that be done? Is there Lori Carlin: a way that organizations can protect their content?
David Myers: Let me start off, if I may, real quick. One of the things that I advise all my clients, and I advise all of you on this webinar to do, is update your terms of use on your website and stop allowing people to scrape your website David Myers: from a terms-of-use perspective. And from a technological perspective, it's a -- hopefully -- pretty easy installation for your technical folks
David Myers: to install software that doesn't allow scraping of your website. So let's start with that one. Lori Carlin: Sven or Brian, anything to add to that? Brian Moore: Yeah, I mean, to some degree, I don't know how you know whether or not you've been scraped. So it may be a genie-out-of-the-bottle situation. David Myers: You have been. Brian Moore: But we've, to the best of our ability, kept things behind the firewall. So if it's content that we do not want publicly accessible, to some degree we have Brian Moore: some comfort. Because when we went to expose our video library to Google, so that when somebody does a search we show up in the search results -- they weren't there before; the minute we took it outside of the firewall, everything started populating. So I don't know that that's assurance that everything is safe, but that's Brian Moore: sort of been our experience.
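[Editor's note: as a minimal sketch of the "technological perspective" David describes, one common first step is publishing a robots.txt that asks AI crawlers not to index the site. The user-agent names below are examples of publicly documented AI crawlers; robots.txt is advisory only, so determined scrapers will ignore it, and server-side user-agent filtering or rate limiting is still needed.]

```python
# Sketch: generate a robots.txt that disallows known AI crawler user agents.
# Names below are examples of publicly documented AI crawlers; this list
# is illustrative, not exhaustive, and changes over time.
AI_CRAWLERS = ["GPTBot", "ChatGPT-User", "CCBot", "Google-Extended"]

def build_robots_txt(crawlers):
    """Return robots.txt text with a site-wide Disallow for each crawler."""
    records = []
    for bot in crawlers:
        records.append(f"User-agent: {bot}\nDisallow: /")
    # Blank line between records, trailing newline at end of file.
    return "\n\n".join(records) + "\n"

if __name__ == "__main__":
    print(build_robots_txt(AI_CRAWLERS))
```

Serving this file at the site root (`/robots.txt`) signals the policy; pairing it with updated terms of use, as David suggests, gives both a technical and a legal footing.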
Lori Carlin: Okay. Betsy Donohue: A little shift for a submitted question, a really good one, and this Betsy Donohue: is more focused on understanding Betsy Donohue: authors as a stakeholder Betsy Donohue: amid the breakneck speed of AI development. The question is about Betsy Donohue: sentiment: Betsy Donohue: do we have any recent data or information on scholars' and researchers' sentiment about their work being part of the training data for LLMs?
Betsy Donohue: And then the Betsy Donohue: following question brings up the point that it's pretty clear what literary authors think about this, Betsy Donohue: but there are different perspectives, especially among new generations of researchers who are very OA-oriented and would like their work to have high impact. Betsy Donohue: So Betsy Donohue: this is a really interesting question. Anyone want to take Betsy Donohue: the 1st answer on that? Sven Molter: I can start and then let everyone fill in. So in a previous life I worked at the Public Library of Science, and they were completely thrilled when they found out that they were one of the largest training data sets Sven Molter: for ChatGPT.
Sven Molter: In their mind, that was a fulfillment of the mission, which is disseminating science as broadly as possible without barriers. So I think, for those authors that are partial to open access, it just represents a way of getting their data out there Sven Molter: in ways that it couldn't before. I don't know that all authors or publishers would agree with that, but certainly on the open access Public Library of Science side, they definitely did; they saw it as a benefit.
Betsy Donohue: So this may hit some tension points at certain types of publishers, between the organization and serving their author community. David Myers: Yeah, I mean, I would say, from the legal perspective, the author should still retain copyright, or whoever owns the copyright based on their contractual relationships should have those rights. Now, clearly, between a publisher and their author, that gets allocated one way or the other.
David Myers: And I don't think that AI necessarily changes that paradigm. Lori Carlin: Alright, I think we're-- Betsy Donohue: About 6 Betsy Donohue: minutes out. So we've got Betsy Donohue: a couple more questions to ask. Lori, you pick: what do you want to do next? Lori Carlin: Let's see. Lori Carlin: Yeah, I think setting rules in your organization.
Lori Carlin: That might be a good place Betsy Donohue: Hmm. Lori Carlin: to start. Sven, how is Silverchair setting rules Lori Carlin: within the organization, Lori Carlin: if you want to start off? Sven Molter: Sure, great question. So I think it comes from a couple of different angles, right, between the product management and the tech teams. We've kind of run with a lean tech team leading the way and exploring, and so many times it's a conversation between the 2 parties, and then of course you have the legal side as well. And so everyone kind of has their thoughts and their opinions, and it's all one big conversation, give and take, back and forth.
Lori Carlin: Yeah. Lori Carlin: Dave, Lori Carlin: how do you think organizations should set the rules? David Myers: Wow! David Myers: That's a great one. So again, my focus is really on the outbound side. It's really about trust, David Myers: privacy, and data protection; the way I look at it, those are the 3 pillars that a publisher needs to adhere to to really David Myers: remain relevant in this day and age.
Lori Carlin: Yeah. And then, Brian, you've talked about giving people access to tools to utilize, but with some framework around it -- anything more? Brian Moore: Yeah. So the organization, by virtue of our registry program, has some pretty strong policies already in place around HIPAA protections. Brian Moore: So that sort of mentality extends to this: not putting out personal information, not putting out any personally identifiable information, not putting out any sort of financial information for the organization. We came out of the gate early with an interim policy that said, here are all the don'ts, and over time we've expanded what you can do within the policy. But Brian Moore: the basics are data protection -- Brian Moore: HIPAA, obviously, because then we're in trouble.
Lori Carlin: And I think it points to a bit of "your mileage may vary": it depends on your organization, your community, what the standard rules are. Lori Carlin: And Dave mentioned GDPR before, and there are other privacy rules besides HIPAA, so keep all that in mind. But it's a moving target, and it's likely worth having a core group who's constantly looking at it, engaging within the organization, and updating. Brian Moore: To that point, we found that we had a lot of folks who were eager to get started, but they were just concerned as to what the guardrails were, and once we had those established, you started seeing little pockets cropping up with these really great novel ideas. But at 1st it was, well, I don't want to get in trouble; I don't want to jeopardize my job.
Brian Moore: And so that was, quite frankly, really freeing for folks. Lori Carlin: And I've heard that from a number of other organizations, that people wanted to get involved but were worried about what their limits might be. So that's a really important piece of information. Betsy Donohue: Well, looking at the time, I think we are Betsy Donohue: probably at time for a final question. It's a little cheesy, but I think it can be fun. So I'm going to ask everyone on this panel Betsy Donohue: to give an answer in 3 words or less.
Betsy Donohue: What's the biggest gift that AI is going to bring to our industry? And what's the biggest concern Betsy Donohue: that you have when it comes to AI? Betsy Donohue: So, Betsy Donohue: Dave, let's go with you first. David Myers: And so you said in 3 words? Betsy Donohue: 3 or less. David Myers: God! David Myers: Yes: David Myers: GenAI will evolve publishing. 3 words.
Betsy Donohue: That's the gift. And what's the concern? David Myers: Like I said before: trust, David Myers: privacy, David Myers: and relevance. Betsy Donohue: Nice. Betsy Donohue: How about you, Lori? That was good. How about you? Lori Carlin: I would say, for the gift, there are lots of gifts of AI -- efficiency comes to mind 1st and foremost, the speed at which we can Lori Carlin: move ahead. And the concern is a little bit of this Wild West: what do you rely on? What do you trust? What's real? What's fake?
Betsy Donohue: Yeah, good ones. Sven? Sven Molter: Yeah, echoing both of those. I think, for the gift, it's increased understanding. Truth, potentially. Sven Molter: And then the flip side of that is lies masquerading as the truth. Sven Molter: And I think that's 1 of the biggest concerns that I have. You see a thing, but how do you know? You just have to trust. And that's especially true when you see things where people are maybe not honoring agreements -- scraping or training models on data that's not available or should not be available. And then you really have to wonder and question: what are the motivations here?
Betsy Donohue: Awesome, thanks, Sven. And Brian? Brian Moore: So Brian Moore: I actually love change. So I see this as an opportunity. Betsy Donohue: Yes. Brian Moore: And for folks who are creative and have that entrepreneurial mindset to really flourish. Brian Moore: So that's more than 3 words, but maybe: Brian Moore: disruption, flourish, creativity. Lori Carlin: Thanks, Brian. Betsy, I'm going to turn it around on you. What's yours?
Betsy Donohue: For me, I think connection, because I've got teenage kids, and this is the 1st time something huge has happened in the world where we're in the same spot, right? Betsy Donohue: And then, I think, for concern, it's Betsy Donohue: nefarious behavior; Betsy Donohue: that's similarly the dark side for me. Betsy Donohue: So I think we're at time. This is a great time to thank everybody for joining today; this was an awesome discussion. As I mentioned at the beginning, we're excited to invite you all to join us at our Platform Strategies event in September in DC. And as an attendee of this webinar, you're going to get a special discount code for that, so keep your eyes peeled.
Betsy Donohue: We've really enjoyed engaging with you on these topics today. We'll send around an email with all the recordings and Betsy Donohue: some links to additional Betsy Donohue: reading -- yeah, we've got a reading list for y'all. So thanks again, have a great day, and take care, everybody. Brian Moore: Thank you. Lori Carlin: Thanks.