Name:
Platform Strategies 2023: AI In Scholarly Publishing Panel Discussion
Description:
Platform Strategies 2023: AI In Scholarly Publishing Panel Discussion
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/796a1f95-6766-41b1-9af7-b7c6e526234e/thumbnails/796a1f95-6766-41b1-9af7-b7c6e526234e.png
Duration:
T00H48M21S
Embed URL:
https://stream.cadmore.media/player/796a1f95-6766-41b1-9af7-b7c6e526234e
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/796a1f95-6766-41b1-9af7-b7c6e526234e/Silverchair_Platform_2023_Part_2-AI panel.mov?sv=2019-02-02&sr=c&sig=xSNSHvcXze6R46fTXHXDbXJLb5QyvE7N2zocaFoKMOo%3D&st=2024-12-21T12%3A47%3A43Z&se=2024-12-21T14%3A52%3A43Z&sp=r
Upload Date:
2023-10-09T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
BETSY DONOHUE: --everybody's attention. We're going to start off with some introductions for the panelists across the stage. So I'm going to ask each of the panelists to introduce themselves, and then answer a specific question, which is: what is the biggest focus for your organization for AI? So we'll start right here.
JESSICA MILES: Thanks, Betsy. I'm Jessica Miles. I'm here from Holtzbrinck Publishing Group. For those not familiar with Holtzbrinck, we have a portfolio of media and tech companies, including-- I'm getting a little feedback. Hopefully we'll manage that, but including Digital Science, as well as Springer Nature, of which we are the majority shareholder, and several holdings in the media and tech space across trade publishing as well.
BETSY DONOHUE: OK.
CHRIS BROEKHOFF: Is this working? Yes. So Chris Broekhoff. And I just want to say, it's great to be here while humans are still relevant. And I don't think there's any virtual assistants out there, but that could be-- so President of MEI Global. We're a consultancy that represents publishers and helps them license their content in the information industry. And our biggest focus on AI is really in helping publishers figure out how to license their content for use in AI applications.
HOLDEN THORP: Hey, I'm Holden Thorp. I'm the Editor-in-Chief of Science and the Science family of journals. Before I had this job, I was the University Administrator at WashU and the University of North Carolina. And I would say, our biggest focus on AI has been, for decades, publishing high-quality research about artificial intelligence. And we've been watching this for years. So yes, there's this huge frenzy in the public now, but the technology is progressing the way you would expect.
HOLDEN THORP: And just about everything that we do at AAAS and that I do is informed by the idea that higher education is basically incapable of changing. And so-- and I've sat on the side that won't change. There's a very tough piece in The Chronicle today about this. We thought lectures were going to go away when active learning came along. We thought Coursera was going to change higher education. We thought the internet was going to get rid of journals.
HOLDEN THORP: We thought OA was going to change the way we do stuff. And it's changed-- all these things have changed the ancillary industries, but the universities haven't changed anything, and they're not going to. And so that's a guiding principle for us. And so whenever I see breathless talks like the one I just saw, I'm like, yeah, OK, that's all correct, but the institutions aren't going to change that much as a result of all this.
HOLDEN THORP: And that has yet to be disproven.
WILL SCHWEITZER: And then hi, everybody. [LAUGHTER]
AUDIENCE: [INAUDIBLE]
WILL SCHWEITZER: It'll be a long time for academia to change. I'm Will Schweitzer. I'm Silverchair's CEO. And you obviously just heard from Stuart about what we're focused on, but I would characterize it as we're trying to learn as much as we can as fast as we can about how to apply these tools internally, and then how we can be a partner for a lot of you in the room.
BETSY DONOHUE: Excellent. Thanks, everybody. I neglected to point out-- I should have started with the poll for this session. If you go on to the app and click the session, and there's kind of a gray navigation bar on the far-right. There's what I thought looked like Tic Tacs, but they're actually bar graphs. If you click on that and answer for us that simple question, again, what is your organization's biggest focus around AI?
BETSY DONOHUE: And I will share the results as they come in. So it looks like the highest percentage is new products and business models. A close second and third are application to internal workflows, and then ethical and research integrity implications.
BETSY DONOHUE: So those are the top three. So we'll keep those in mind as we dive into our set of questions. So a couple of prepared questions for the panel. Starting with, what is the most exciting application of AI that you've heard of, whether theoretical or existing? Who wants to start with that one? Volunteers? OK, well, I'm going to go with the person closest to me. Are you ready to kick it off--
JESSICA MILES: So Will? [LAUGHTER] I'm just kidding. Well, I think-- I think Stuart touched on this a bit, but from where we sit, looking at research and science workflows, it's the total potential of artificial intelligence-- we're talking about generative AI a lot now, but I feel like in maybe three to five years' time, who knows what we'll think of when we say AI-- but just the potential to totally transform how research is done, with academia as perhaps a reluctant participant.
JESSICA MILES: We talked about hypothesis-driven science, and we've already seen a lot of universities investing in lab automation, and so growing that capability. And then taking the data-- right now, it's, by and large, a human looking at data, but especially as the data gets more and more voluminous, that process of taking the data and then putting it into an output.
JESSICA MILES: Obviously we all know the output of choice right now in scholarly communications is the manuscript, but as we see these technologies start to inform every part of the process, I would hope that we're able to see greater emphasis on different research outputs alongside that, and then how that information is then delivered and consumed not only by readers, but by institutions, by government, by industry, and then, of course, by the technologies that continue to funnel this process.
JESSICA MILES: I think we're looking at something very transformative. And I'm sure Holden and I will have a different take on how fast that goes and whether academia goes along with it, but that's what really excites me, I think, about this space.
CHRIS BROEKHOFF: Yeah. So I think the most exciting long-term possibility, as Stuart alluded to in his talk, is this AGI, superhuman artificial general intelligence that can transform society and solve all our problems, or murder us all in the process. But stepping back from that and where we're really focused as a company, we look at the industry in three buckets. There's OpenAI and companies like that that are developing the large language models.
CHRIS BROEKHOFF: And really, a lot of the focus right now is on litigation. So companies-- publishers, authors suing OpenAI for using their content in training sets, claiming breach of copyright. And so we're really watching those cases and seeing what opportunities for licensing arise out of that. And then the second bucket is-- and Stuart alluded to this as well, the established aggregator and information industry vendors that are looking to integrate generative AI into their products and services.
CHRIS BROEKHOFF: And that's, I think, where most of our focus is. So figuring out how do publishers bring their content to these companies, license them in a judicious and strategic way so that their content is protected, they understand how it's being used, they're generating value from it. And I think the products that are going to arise out of that are sort of what I'm most excited about.
CHRIS BROEKHOFF: So how do you overlay generative AI on top of these existing aggregated research products, et cetera, so that there's now a layer or an interface between the researcher and the underlying corpus of content that's doing the analytics, that's doing the summarization, that's creating an output that sits between the content and the human interacting with the content? So I think there's a lot of exciting things happening there.
CHRIS BROEKHOFF: It raises a lot of issues, again, from the perspective of representing publishers and licensing. So how do the business models evolve in a way that publishers are still getting fair value for their content? So in the old model, royalties are based on a human reading an article; now, when you put that generative AI layer in between, how do you make sure that the business model is attributing value to the underlying content when it's maybe not being accessed directly?
CHRIS BROEKHOFF: And then there's lots of best practices associated with that, making sure that there's transparency between the generative output that is delivered and the content that underlies it, making sure that issues of hallucination and bad data are dealt with, and that publishers are indemnified against anything that arises out of that. So that's the second group. It's the established vendors.
CHRIS BROEKHOFF: And that's really where we're focused. The third group we look at is the raft of new startups that are looking to build tools on top of the large language models. So these are a lot of companies that aren't the established players-- new players. And with those, it's pretty early days-- a lot of blue-sky discussions around where that could go, so that's really exciting, too.
CHRIS BROEKHOFF: But in terms of actual products being developed, it's that middle group that we're really focused on.
BETSY DONOHUE: Great.
HOLDEN THORP: Yeah, so I think the most exciting thing is, as has been said already today, the way that it will transform how research is done. I mean, we selected Breakthrough of the Year two years ago for AI protein structure prediction. Demis got the Lasker Prize just recently, so that means there's a chance he'll get the Nobel Prize for that. And I do think that's a major breakthrough, for example.
HOLDEN THORP: But I think graduate students and postdocs using AI in the laboratory to help them design and accelerate experiments and write code-- obviously that's going to accelerate the rate at which research gets done. It's going to make peer review and curating the papers that come out even more challenging because they're going to come faster, and they could easily have all these errors that AI makes, and they're unchecked.
HOLDEN THORP: And so that is going to make our job as journals more challenging, not less, because we're going to have more papers to review, and there could be parts of them that haven't had as much human adjudication as they deserve, and so we're going to have to figure that out. But again, I still don't see how you're going to get tenure at a world-class university without a portfolio of outstanding peer-reviewed research papers because I don't think you could get all of academia to decide in a coordinated way that they're going to change that.
HOLDEN THORP: Nothing has ever changed that in the 80 years that we have had a major research enterprise. So there will be, I'm sure, other products of research, but I still don't think the full professors at Stanford in the Biology Department are all going to raise their hand to promote somebody who has some alternative product on their CV other than a slew of high-profile publications.
HOLDEN THORP: And if Stanford won't make that change, then everybody who copies them, which is the other 3,000 colleges and universities in America, aren't going to change that either. So I think the most exciting things are about the way that research is done. I don't-- the only thing it's going to do about the work that we do in publishing is make it harder. Yeah.
BETSY DONOHUE: On that note, Will?
WILL SCHWEITZER: Well-- I mean, I don't know if any of you have played with ChatPDF or if you've played around with ChatGPT-4, but I think the most exciting thing is something Stuart alluded to. We're just at that critical moment of removing friction from workflows. And I think it will help us become more efficient, allow us to deploy our talents in new ways. So some examples.
WILL SCHWEITZER: At Silverchair, it takes six months for a developer to understand our code base and become efficient enough to start developing new features. Or if you think about an editorial assistant checking in a manuscript, we're at a point now where there are a lot of manual checks. Was there an IRB? Was there a conflict of interest statement submitted? Were the figures the right size or resolution, or did they do something horrible in PowerPoint?
WILL SCHWEITZER: These tools can help us with those things now. And I think in the next six months, all of us who are in leadership or management positions in the organization are going to be saying, what are the better things we can do with the talented people in our organizations? And I think we need to start thinking about that now because there's so much opportunity in all of these tools.
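As a rough sketch of the kind of automated submission check described here (purely illustrative; the function names, patterns, and thresholds are hypothetical assumptions, not a description of Silverchair's or any publisher's actual system), a pre-screen in Python might look like this:

    import re

    # Hypothetical sketch: pre-screen a manuscript for items a human editorial
    # assistant would otherwise check by hand. All patterns and thresholds are
    # illustrative assumptions, not any publisher's real rules.
    REQUIRED_STATEMENTS = {
        "conflict of interest": re.compile(r"conflicts? of interest|competing interests", re.I),
        "IRB / ethics approval": re.compile(r"\bIRB\b|institutional review board|ethics approval", re.I),
    }
    MIN_FIGURE_DPI = 300  # assumed minimum figure resolution

    def check_manuscript(text, figure_dpis):
        """Return warnings for a human editor to verify; nothing is auto-rejected."""
        warnings = []
        for label, pattern in REQUIRED_STATEMENTS.items():
            if not pattern.search(text):
                warnings.append(f"Possible missing statement: {label}")
        for i, dpi in enumerate(figure_dpis, start=1):
            if dpi < MIN_FIGURE_DPI:
                warnings.append(f"Figure {i} is {dpi} DPI, below the assumed {MIN_FIGURE_DPI} DPI minimum")
        return warnings

    print(check_manuscript("Methods ... approved by the institutional review board.", [72, 600]))

An LLM layer could sit on top of deterministic checks like these to catch phrasing the patterns miss, with a human still adjudicating every flag.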
BETSY DONOHUE: Nice. So next question for the panel. Looking internally, what are the potential knock-on effects of internal workflow applications of AI to our businesses? So looking for knock-on effects. Would you like to start?
JESSICA MILES: Sure. I think for us, one trend that I'm really heartened by is that there is much more discussion and focus internally on responsible use of AI and safe AI. And I think Digital Science especially, and a lot of our other businesses as well-- you know, Springer Nature, et cetera-- have used AI before 2022, quite extensively in some cases. One of our premier solutions, Dimensions-- it's called Dimensions AI.
JESSICA MILES: That's in the name. It's built on semantic search of scientific literature, patent databases, et cetera. So we were definitely an organization focused on AI and technology. But it seems that with these really swift changes in the landscape, looking internally, we've really asked ourselves, how do we deploy this responsibly?
JESSICA MILES: One thing Stuart showed was this arms race between Amazon, OpenAI, Microsoft to really bring these more experimental, forward-looking technologies to market very quickly. And I think what has happened for us internally, and I've seen much of the market respond in kind, is that we're really taking a much slower, more deliberate, methodical approach. We've talked about how these things hallucinate and can spread misinformation, which is obviously problematic in a lot of contexts, but so much so in scholarly communication.
JESSICA MILES: And so I think having that external focus on making sure that what we're delivering to our consumers is reliable, is safe, is built on transparent technology has really triggered a sort of internal reflection around those same principles in terms of not only how we are constructing externally-facing products, but then what is the appropriate role of AI in our own day-to-day processes, and what types of safeguards do we need across our different organizations to make sure that those are maintained?
JESSICA MILES: And then how do we learn from each other. It's almost like being back in school where we have this really robust culture of trying and failing and learning and experimentation, which has been just really hugely rewarding over the last few months and something I really expect to continue.
CHRIS BROEKHOFF: Yeah. So for us, we're a consultancy. We're much smaller than your organization, of course, but I think we take the same approach. I mean, we're looking at what are the areas of our business that we could make more efficient through the use of AI? And I would say, first and foremost, we're taking a cautious approach because I think a lot of the tools are not quite ready for prime time in terms of handing over business processes completely to generative AI or AI-driven tools.
CHRIS BROEKHOFF: Some of the areas that I'd love to see just in terms of some of our own pain points are anything that can cut down my email inbox would be nice. So a generative AI overlay that kind of tells me what I need to focus on, creates responses, et cetera, but of course, there's lots of potential issues with that. And then on a more serious note, a lot of what we do is legal work, contract review, redlining, et cetera. I mean, that's the core of our business, is negotiating agreements for publishers.
CHRIS BROEKHOFF: And so I think we'll be taking a close look at some of the generative AI legal tools. I think everybody has heard the horror story of the attorney who used ChatGPT to file a legal brief and it didn't go so well for him. It made up a lot of precedent. It basically hallucinated the whole thing, and then when he asked it if these were real cases, ChatGPT said, yes, of course they are.
CHRIS BROEKHOFF: And of course, they weren't. So I think with things that are core to your reputation as a company and core to your business processes, I think it's important to maybe be more cautious and deliberative in terms of turning over key aspects of your business to some of these new tools, but it's something we're definitely looking at.
HOLDEN THORP: Yeah, I think it's going to speed up a lot of how we process manuscripts and get them posted and everything. And it's going to, for us-- so Science has published papers that are very carefully crafted. We put a lot of effort into getting them short and the figures beautiful. And we're one of the few places that has the privilege and opportunity to do that.
HOLDEN THORP: For our larger journals like Science Advances, those are not as meticulously crafted, so there, AI is going to be more important. And for publishers with a lot more volume than we have, obviously that's going to help them a lot. So I think for us, it will automate some of the tasks, which will free up our people to do even more to make our premium product even better, but I think in general, we're going to see that copyediting and checking the references and getting everything prepared to go online and all that stuff is going to speed up significantly and be easier to do.
HOLDEN THORP: And that's a great thing because we're going to have to put more effort into making sure that AI isn't making all these mistakes that it's capable of making, and dealing with the fact that we're going to have so many more papers now. And so the fact that this is going to automate a lot of the stuff that goes on in the background is a good thing.
BETSY DONOHUE: Nice. So Will.
WILL SCHWEITZER: I mean, I thought of knock-on effects in a slightly different way, which is, I run a technology company, I'm not a technologist. And Stuart talked about how we are starting to use some of these LLM and AI-based tools internally, and I thought, all of our folks will be excited to use these things. And I think if you were to ask any member of our executive team, the adoption rate or the curiosity rate, the number of Silverchairians who are using these tools is probably somewhere around 20 or 30%.
WILL SCHWEITZER: And the knock-on effect is we're all going to have to become better change leaders and more adept at encouraging people to try these tools. And we can do it out of fear. Like a developer who knows how to use GitHub Copilot is going to be a lot more effective and deliver a lot more value than you. It's one approach. Not one that I really want us to take.
WILL SCHWEITZER: But on the other hand, these tools can actually help you get more enjoyment out of your work. They can help you get rid of some of the mundane details. They can help automate basic quality control things so you can do more exciting stuff. And we have to help our folks along the way. And change management is one of those things that every manager thinks they know how to do until they start getting feedback, probably a day into a change event.
WILL SCHWEITZER: And we're all going to have to sharpen our skills.
BETSY DONOHUE: Nice. And the next question is about balance. So how do we approach balancing the ethical implications of AI? For example, copyright fraud on one side, and then feeding the best research information into the AI models as Stuart pointed out in his presentation. So how-- what's the key to that balance? What do you think?
JESSICA MILES: So I'll leave the copyrights and licensing piece with you, perhaps, and I'll speak to my perspective on the ethics of feeding the models, but then also balancing more of the ethical considerations. And when we spoke a little bit prior to the panel, I mentioned the fact that-- obviously I talked a lot about the work that we do at Digital Science, but I feel like I'm in quite a privileged position in terms of being able to work with media and technology companies in academic publishing, in trade books, in educational books, in the educational space.
JESSICA MILES: And therefore, there are all these different lenses in terms of thinking about the rights of authors across different disciplines versus what readers should be entitled to in terms of, to Stuart's point, really having these resources trained on the best possible information. And I think the approach that a lot of us are taking is thinking about the resources that we have, the content that we have, and how we can develop products and services to add value and to really provide products that are leveraging that content set.
JESSICA MILES: And the reason-- one other reason I think we're so well-placed to do this, having that content, having that experience working with different communities from academia to government to industry, being in a position now to shape the different use cases within those communities. And then also having the ability to really develop tools that are quite specialized based on those use cases.
JESSICA MILES: So something like a ChatGPT or Bard or Claude is using a vast amount of resources-- it's trained on quite a lot of data. But once you're able to develop models that are quite specialized in terms of being trained on academic literature or perhaps even books or whatever the more specialized content is, you're getting something that is quite focused in its approach. And so even though it has many, many fewer parameters, you have that grounding in the content.
JESSICA MILES: And also from the perspective of thinking about how resource-intensive these tools are, not only with respect to cost, but the environmental considerations-- every time one of us puts a query into ChatGPT, think about what the end result of that is-- there are also other ethical considerations around developing smaller but more specialized models. So like I said, there are all these different lenses to the question of what is ethical and what is right, and I think it's amazing that we have panels like this with people representing all types of perspectives to think about, OK, what are the different considerations?
BETSY DONOHUE: Nice. Chris, how about you, the balance?
CHRIS BROEKHOFF: Yeah. So not surprisingly, I think the way to maintain this balance is through judicious, strategic, and careful licensing. So publishers should take part in these models. I think there's no avoiding that. I think this is a train that's left the station and we need to figure out how to ride it and not get run over by it. But when looking at it, there's a couple nuances to it.
CHRIS BROEKHOFF: So one is, is your content being used as context or as a training set? And there are kind of different considerations depending on which one it is. So when your content is used as context, essentially the product is pointing the LLM or the model at your content and analyzing it. It's not necessarily part of the training set that created the model, but it's applying the model to the content.
CHRIS BROEKHOFF: And that's one of the more straightforward ways of using these tools. And when your content is used as context, there are certain best practices that we should be advocating for. I talked before about transparency. So being able to understand this generative AI output that you're looking at. How does it relate to the underlying content that was used as context?
CHRIS BROEKHOFF: So what is the-- could there be like error bars for how accurate the information is or how confident the model is in the information? Providing other contexts so that you can compare. Providing that transparent link to the underlying content. So if there's an answer that relied on your content, is it being surfaced? And then, of course, there's the whole business model discussion that we touched on earlier.
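A minimal sketch of the "content as context" pattern described here, with a transparent link back to the underlying articles (the retrieval ranking and the model call are placeholders, and all names and fields are hypothetical, not any vendor's API):

    from dataclasses import dataclass

    @dataclass
    class Article:
        doi: str
        title: str
        text: str

    def retrieve(query, corpus, k=3):
        # Placeholder ranking by keyword overlap; a real system would use a
        # search index or embeddings.
        words = query.lower().split()
        scored = sorted(corpus, key=lambda a: -sum(w in a.text.lower() for w in words))
        return scored[:k]

    def answer_with_sources(query, corpus):
        sources = retrieve(query, corpus)
        context = "\n\n".join(a.text for a in sources)  # licensed content supplied at query time, not used for training
        summary = f"[LLM answer grounded in {len(context.split())} words of licensed context]"  # placeholder model call
        return {
            "answer": summary,
            "sources": [{"doi": a.doi, "title": a.title} for a in sources],  # surfaced to the reader
        }

The point is the distinction being drawn: the model reads the content at query time rather than absorbing it into its weights, so usage can, at least in principle, be metered, attributed, and compensated.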
CHRIS BROEKHOFF: So that's one area. Then in terms of training sets, people have talked about the difference between a broad or general training set or a general model like ChatGPT, and then narrow models. And a lot of the established industry vendors that are trying to build products on top of the large language models-- what they're doing with training sets is really to train narrow models using a general model, but train it to perform a specific function or in a specific topical area based on a much smaller training set.
CHRIS BROEKHOFF: So publishers should think about allowing their content to be used in those narrow training sets. I think when you're doing that, you should look at, first and foremost, we should be working with reputable companies. So any time you license your content, you want to make sure that the party on the other side is going to do what they say with it. So understanding, what is the purpose of the narrow model?
CHRIS BROEKHOFF: What is the other content in the training set? I heard an example recently that if you were creating a chatbot for bank customer service, you wouldn't want to train it on bank heist movies, for example. So making sure that that narrow training set is relevant, that you're in good company when you're in that training set. And then there are considerations to think about as well. One of them is, in most cases, once you allow your content to be part of a training set, there's no taking it back.
CHRIS BROEKHOFF: So in licensing, a lot of times, one of the key things we look for is the ability to pull content back. So if you have to retract an article or if you create a new edition of a work that replaces a previous edition, you want to be able to pull back that prior data. And with training sets, that's oftentimes not possible. So once it's trained the model, it's baked in, it's out there, you can't pull it back, so that's another consideration.
CHRIS BROEKHOFF: And then when you're looking at the general models-- so the-- as I talked about before, that first group, the OpenAIs, Google, Amazon, all the big tech companies that are getting in the game, the stakes are much higher and the issues I think are greater. With the narrow models, again, it's a walled garden, so you want to make sure that your content can't leak back into the general models unless you want it to. With the general models, again, the stakes are higher.
CHRIS BROEKHOFF: I think Stuart made a compelling case that these products are being developed, so you might as well give them the best information. I think the counterpoint to that is you really lose control of your content, potentially, when you allow it to be part of a training set in one of these general models. You don't know-- a key tenet of licensing from our perspective is to always understand exactly how your content is going to be used, exactly what value is being created from it, and how you are being compensated for that.
CHRIS BROEKHOFF: So with these general models, it's very wide open and tough to pin down how your content will be used because the models could be used in so many different ways. And then the other thing to think about there is-- I think everybody's aware of the controversy around these large language models, beyond just the copyright issues, but are we on the path to an unaligned AGI that's going to do terrible things?
CHRIS BROEKHOFF: And key people in the field are calling for a pause in training and a pause in development so that the work on alignment and the work on AI safety can catch up with these products, and the fact that people really don't understand how they work. So you could argue that maybe the ethical thing to do is to act as the brakes-- to not license your content or not allow your content to be part of these training sets until safety research and until the alignment problem writ large catches up with the product development.
CHRIS BROEKHOFF: So not necessarily advocating that, but it's just another way to look at the ethics here.
BETSY DONOHUE: Thank you. How about you, Holden?
HOLDEN THORP: Yeah, so I'm mostly-- we're mostly focused on the ethics of how these tools are used in research, and then what happens when human beings submit papers and sign author forms to send the papers to us. And what we're trying to achieve-- So we have taken a very restrictive stance that you can't submit us any text that was written by ChatGPT. And if you do and somebody calls you out for it, that's research misconduct.
HOLDEN THORP: And the editorial I wrote, just to tell you how irrational this whole craze is, the editorial-- So when I write an editorial, even one of my totally unfiltered things about politics or whatever, it hardly ever gets any citations. It gets a lot of traffic on social media and stuff. But my ChatGPT editorial, which I published at the end of January, has 440 citations.
HOLDEN THORP: I got an extra bump on my h-index. [LAUGHTER] And my next highest-cited editorial has 10 citations or something like that. So that just tells you how much people are losing their minds about all of this. And so-- but what we want, and I think-- so part of our logic is, remember when Photoshop came along and people started putting their gels in Photoshop?
HOLDEN THORP: We had 10 years or so where we had pretty loose rules about what was permitted and not permitted. And that created this bolus of papers from 1999 to 2009 or something like that that have a lot of problematic image manipulations in them. The President of Stanford just lost his job because of this. And now we have much more stringent guidelines about what you can do with Photoshop on your gels, but everyone in this room is dealing with the fact that we've got this bolus of problematic papers that we're all constantly-- I mean, it's just eaten up so much of our time to deal with that.
HOLDEN THORP: And we don't want to create that same bolus over ChatGPT because we already have too much work to do to adjudicate these problems that Photoshop gave us. And we're just now getting to the point where Proofig and a lot of these programs are at the point where you could start using them to detect this reliably. So that's 25 years since Photoshop started.
HOLDEN THORP: So we're going to have an awfully hard time dealing with all of this text that was generated by ChatGPT that's getting into the literature because now if you want to attack somebody's paper, you could just go figure out that they did that and create another one of these kerfuffles that we're all pulling our hair out over. So what we want is to get to a point where people use ChatGPT as a tool, but they-- or all these other tools-- all these other AI things.
HOLDEN THORP: But it's still a human being that communicates research. I teach a writing class on the history of science for first-year students at GW. And this is my first time doing it since this whole craze started. And so what I decided-- and I'm only two assignments into this, but so far the students are responding to it OK-- because, I mean, they're first-year students. I give them prompts that ChatGPT could get an A-minus on, no problem.
BETSY DONOHUE: Mm-hmm.
HOLDEN THORP: All right. And I practice that. But the first-year students, those are the kinds of prompts they have to do. So the way I've gotten around-- what I've decided so far is they get the prompt a week ahead of time, they can ChatGPT themselves till the cows come home. They get to produce one piece of paper that they have to turn in with their assignment that they've written on by hand and it has no complete sentences on it.
HOLDEN THORP: And then they have to bring that piece of paper to class and write the essay in class. And I thought, when I proposed this, that a bunch of students would drop because they're like, no, I'm just going to take a class where I can get ChatGPT to write all my essays for me. But so far, nobody's dropped. They haven't complained about this. When you teach first-year students, you find some that need to go to the Writing Center and some that don't need to take Freshman Composition.
HOLDEN THORP: And that popped right out of the first set of essays that I got. And so-- I think the reason they responded OK to it is that I've alleviated them of the burden of figuring out when they can use it and when they can't. They can use it all they want to prepare to write their essay, but they can't use it to write their essay, and there's no ethical thing that they're caught up in. And we need the same thing for research.
BETSY DONOHUE: Nice. Thank you.
WILL SCHWEITZER: A really great example. I mean, we have a lot of product managers in the room, and I think your job just became a lot more valuable. We're going to get things wrong. These are massive gray areas, very complicated problems, and that's before we think about what Chris said: you put your content in a training model and it's gone forever, you're not getting it back. But these technologies are complex. It's hard for us to understand them.
WILL SCHWEITZER: And I think the most unethical thing we could do as publishers or platforms or technologists is develop a feature, develop a beta product, and set it and forget it. And a product manager that understands their mission, the organization's purpose and values, that is engaged with stakeholders, that is thinking carefully about how this feature or how this product is being used is the only way we're going to navigate this over the next couple of years.
WILL SCHWEITZER: And anything else-- anything different than that I think is malfeasance, to be honest.
BETSY DONOHUE: Great. I know we are up against time. We were going to do a Q&A session. Steph, is there time for one or two questions? OK. So now it's your turn to ask a couple of questions. Any volunteers? We have--
JESSICA MILES: I think there's someone in the back--
BETSY DONOHUE: --up here. Got a hand here.
WILL SCHWEITZER: Paul. Turn around. Turn around. Just-- hand up behind you. Sorry.
BETSY DONOHUE: It's the race to the hot mic. Who can-- there we go.
AUDIENCE: Is it me first?
BETSY DONOHUE: Yes.
AUDIENCE: So it's Robert Harrington at the American Mathematical Society. And as a society, we're trying to grapple with AI across research, education, as well as publishing and the community at large. But just focusing on the publishing piece, we are in a situation where peer reviewers are hard to find. And in mathematics, of course, we're looking at 60- to 70-page proofs, or longer, sometimes.
AUDIENCE: On the other hand, I think what I'm hearing, both from your panel and also from Stuart earlier, is that actually, peer review becomes more important than ever. So is this the future for the idea of human curation? And how do you all think that we are going to be able to enhance and extend peer review?
BETSY DONOHUE: You want to take that one?
HOLDEN THORP: Yeah, with great difficulty. [LAUGHTER] Because, as you point out, I mean, even for a journal like Science, we struggle to find reviewers, and the number of people we have to invite to get good reviews continues to go up. So, I mean, I think that-- I'm sure that reviewers will start using AI tools to start their reviews and do various analyses, and hopefully that'll make it easier.
HOLDEN THORP: But as I said, this is all going to increase the volume, which is going to continue to put strain on our reviewers. And so I think-- I guess I don't have a good answer for how we're going to solve that, but it's a problem that we've been dealing with, and somehow, we can-- the industry continues to publish more and more papers. So you have this paradox where it's harder to find reviewers, but the volume is still going up.
HOLDEN THORP: So I guess maybe there's some limit out there that we can't hit, but even though it strains the system and people get cranky about all this free service that the reviewers do, more and more papers just keep coming. So my guess is that we'll figure it out somehow. Because the pressure to get these papers out, both in terms of the revenue that it drives to the publishers and the ambitions of the authors, that's a hard thing to curtail.
BETSY DONOHUE: Thank you. So one more question before the break.
AUDIENCE: Hi. Kent Anderson with The Geyser. A couple of questions, real quick ones, I'll try. So one of the things that I've seen in software development houses is that they have prohibitions against their coders using code derived from these tools because they can't guarantee its provenance, they can't copyright it, they can't put contractual language around it saying we guarantee this and that and the other thing.
AUDIENCE: So do your houses have similar prohibitions if you do software?
WILL SCHWEITZER: So within Silverchair, we are deploying these tools within our stack and in a controlled environment. And some of it is, we have to talk within Silverchair about what the appropriate bounds are. And just for example, of Silverchair's technology, more than half of what we use in the platform is open source. So that is a boundary that we're going to have to consider really carefully. But the biggest thing for us would be exposing our clients' data or content outside of our stack and into these solutions.
WILL SCHWEITZER: So I don't have a great answer for you, Kent.
AUDIENCE: Anyone else develop software in-house or know about their practices?
JESSICA MILES: I would say our-- similar to what Will said, it's about deploying the appropriate safeguards, and certainly, given that we contract regularly with government entities, as you said, the provenance of the code is specifically very important. And so we make sure that we're obviously in compliance with those agreements.
AUDIENCE: Second question is that, what are you doing to increase skepticism among your folks? Because the history of this is it's been fooling people since 1965. When the very first thing came out, people thought it was surely a person on the other side of the table that I was talking to when it was just a very rudimentary, by today's standards, computer program. And you fast forward to now and it's still a parlor game, it's still a burlesque.
AUDIENCE: And we're still putting in structured questions and getting roughly structured answers. And the whole thing is-- I mean, I could fool everybody here with card tricks and give you the illusion of control, but it's 52 things you've seen a million times, but it can still fool you. So what are you doing to help improve skepticism and people not being fooled by these systems?
BETSY DONOHUE: Who wants to take that one? [LAUGHTER]
HOLDEN THORP: Well, that-- if we knew the answer to that, I guess that would radically change the way we do everything. I mean, I think that as far as fraudulent research getting through, is that what you're asking about?
AUDIENCE: No, just [INAUDIBLE]. The example that comes to mind-- so [INAUDIBLE].
HOLDEN THORP: I can hear you. I'll repeat it.
AUDIENCE: Yeah. So when I programmed in BASIC, I could write a Yahtzee game and people thought they were playing against somebody else. We did something where we had a video conference with people who took really good notes. And somebody used one of these assistant programs to get the notes out of it. They sent the notes around, and they looked so structured and plausible that everybody's like, yep, that's [INAUDIBLE] perfectly.
AUDIENCE: And then I went back because I [INAUDIBLE] ChatGPT [INAUDIBLE]. And no, so actually, this is wrong, that's wrong, this is out of order, this is not what we wanted. This is a nuance that it didn't capture, and everybody else is going, oh my gosh, you're right. So it's this not turning over our responsibilities to these things [INAUDIBLE], but actually accepting that they probably are going to be wrong.
AUDIENCE: And maybe to get back to take our own notes, so to speak.
HOLDEN THORP: Yeah. So, I mean, as far as we're concerned, we've taken the position that if you don't sign off on this and you don't generate the text yourself, then that's research misconduct because we can't trust these tools. Now tons of people are still going to do it. And so that's what bugs me about all this, because the biggest challenge that we have-- that I have as an editor is going back and cleaning up bad papers.
HOLDEN THORP: And we're probably just going to get more of them. So the only thing we can do, in my opinion, is-- I've written a lot about this, is try to reduce the stigma associated with bad papers going through because whenever there's a bad paper, if somebody doesn't want the stigma associated with having published it, they get a lawyer, the university has a lawyer, we have a lawyer-- sorry, you're a lawyer, right?
CHRIS BROEKHOFF: No.
HOLDEN THORP: Oh, OK. [LAUGHTER] Yeah. OK. And once you have all those lawyers involved, then you have a completely intractable situation, and everybody's pointing the finger at everybody-- and Ivan Oransky wrote this really good piece in The Chronicle about all the authors who are suing either institutions or the journals because they had to retract the paper, and that is just absolutely ridiculous.
HOLDEN THORP: So in my opinion, the most important thing we could do as an industry, and try to get people on board with, is to accept the fact that science is designed to be self-correcting, but we don't act like it. We act like it's perfect, and when something goes wrong, it's this great scandal. And Karl Popper and Thomas Kuhn and a whole bunch of other people wrote volumes on the philosophy of science saying it wasn't supposed to work that way.
WILL SCHWEITZER: I think one last closing thought. This is a product design challenge. We know there is potential for LLMs to, say, help publishers, help authors create lay summaries. And those are going to have to be reviewed by an expert. But I think to encourage that skepticism, we should disclose. We should say, this was generated by AI, click here to read the full paper. These search results were augmented by AI.
WILL SCHWEITZER: I mean, not that a lot of technologists even understand the underlying weights of their Solr/Lucene engine, but I think we have to disclose when we're using these tools to our end users, just like an author should be disclosing to the journal they submit to, and that's where we start.
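As a small illustration of that kind of disclosure (hypothetical field names, not any platform's actual schema), an AI-generated lay summary might travel with labels like these:

    # Hypothetical sketch: every AI-generated element carries an explicit label
    # and a link back to the full, human-authored paper.
    def lay_summary_payload(doi, summary_text, expert_reviewed=False):
        return {
            "doi": doi,
            "summary": summary_text,
            "generated_by": "AI (large language model)",  # disclosed to the end user
            "expert_reviewed": expert_reviewed,           # flipped only after human review
            "full_text_url": f"https://doi.org/{doi}",    # "click here to read the full paper"
        }

    print(lay_summary_payload("10.0000/example.doi", "Plain-language summary text..."))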
BETSY DONOHUE: So I think we're definitely over time now, so let's wrap it up. Thank you for your questions. Let's have a round of applause for everyone on the panel. [APPLAUSE]