Name:
Launching AI Products: From Idea to World-scale
Description:
Launching AI Products: From Idea to World-scale
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/065e4bc0-7ca5-4451-97e8-98b5d76ba6ac/videoscrubberimages/Scrubber_1.jpg
Duration:
T00H58M56S
Embed URL:
https://stream.cadmore.media/player/065e4bc0-7ca5-4451-97e8-98b5d76ba6ac
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/065e4bc0-7ca5-4451-97e8-98b5d76ba6ac/session_2f__launching_ai_products__from_idea_to_world-scale .mp4?sv=2019-02-02&sr=c&sig=GnbtZvlJMFFyowgi2Lxz8UuOx7QACZurwSnXBf9A8jo%3D&st=2025-05-24T16%3A30%3A24Z&se=2025-05-24T18%3A35%3A24Z&sp=r
Upload Date:
2024-12-03T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
Max Gabriel, co-founder of Actionable Intelligence. We work with publishers to build intelligence out of their customer data; that's our focus. Sorry about my water. Hi, Ann Michael, chief transformation officer at AIP Publishing, and someone who was consulting for probably about 16 years prior to that, so just a broad perspective.
We're going to use this mic, which is now on, so we don't have to get quite so awkwardly close. Julia McDonnell: I work for Oxford University Press, where I am director of journals product, which means lots of different things, but I'm thinking about all of these great existential questions that are facing the publishing industry at the moment.
And I'm Paul G. I'm the vice president of product management and development for JAMA Network and the AM Hub, and like everyone else right now, I'm trying to work out where AI fits into our work and our lives. I guess I will take that, since this mic is not hot. I'm Dustin Smith from Hum. We do data and AI as our core business, and that's really focused on deeply understanding the nexus of people and content.
We were very early to LLMs, before we could even talk about it, but just like everybody else, we're still puzzling through how you actually develop and launch AI products. It's actually much easier to do as a startup; it's really hard to do within publisher orgs, and that's part of what we're going to puzzle through today. The notion is we're going to share notes and hopefully be a little bit smarter collectively by the time we get out of here.
One of the things that I did want to go through before we hop into the actual panel discussion is: why are AI products different? There should be a slide here. There isn't; I can see it right here, but you can't. Five factors that we boiled it down to. And this came out of a conversation with Paul, who was going to talk to his board.
Why are AI products different? Why did he need to actually tell the board some of those things? One is talent: ultimately, hiring the sort of people who can launch AI products, things like data scientists and data engineers, plus the emergent roles of AI engineers and AI product managers. These are brand new sets of talent, which typically don't live within publisher walls.
So there's potentially recruiting, there's potentially working with outside orgs. Then there's resources: partly those people are expensive, but you may also have to collect data sources, and GPUs and API calls to large language models and other AI models are very expensive. So ultimately it's harder to get people in the building, and more expensive to run all of these products and projects.
There's the pace of change. Everybody knows this is happening at an incredibly rapid clip, so by the time you put an oar in the water, the river has changed underneath you. As Julia knows intimately, you can be talking about a project for six months and the landscape has rapidly changed around you. I've rewritten a proposal for Julia four different times because things have changed, both within the org
and within the technology space. There's also the shifting sands underneath you. With traditional software, you had very interpretable computer code: instructions in, reliable outputs. Here, the models underneath you are constantly developing, and that's something you have to manage around effectively.
These are a little bit more like ecological ecosystems than deterministic computers. The last one is chasms. And boy, do I wish the slide was up here, because for chasms there's a nice graphic. I really hope they come back; we've really been left alone here. But one of the things that we'll refer to, hopefully visually as well, is an AI maturity framework: six steps.
So you start with getting oriented: figuring out what direction you want to go, what problem space, where you actually want to apply AI. Then you have ideation and idea selection: you're basically pooling ideas and figuring out what you want to pursue. Then on to experimentation, then prototypes and testing, then launching an alpha or beta product.
And then all the way through to the scaled, world-scale stuff. The graphic is really neat. You can't see it, but you can imagine a perfect marketing graphic where each bar is the same and there's a guy with pink pants climbing a ladder. Yes, there's a guy with pink pants climbing a ladder, and it looks very easy to climb. But in reality there are huge chasms between steps: between choosing an idea and launching experiments, or between having experimental results and moving on to the prototype-and-testing phase where you're pulling in user feedback. And all of these chasms get deeper and deeper as you go towards the scaled production product.
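Since the graphic kept disappearing, here is the six-stage ladder as described on stage, sketched as a tiny Python structure for reference; the stage names are paraphrased from the speakers' wording, not the official labels on the slide.

```python
# The six-stage AI maturity ladder as described in the session (stage names
# paraphrased from the speakers' wording, not the official slide labels).
AI_MATURITY_STAGES = [
    (1, "Getting oriented"),            # pick a direction and a problem space
    (2, "Ideation & idea selection"),   # pool ideas, choose what to pursue
    (3, "Experimentation"),             # run cheap experiments on chosen ideas
    (4, "Prototypes & testing"),        # build prototypes, gather user feedback
    (5, "Alpha/beta launch"),           # limited release to trusted users
    (6, "Scaled production product"),   # the world-scale, fully productized end
]

# The panel's point: the ladder drawing hides chasms between stages, and the
# chasms get deeper the further right you go.
for number, name in AI_MATURITY_STAGES:
    print(f"Stage {number}: {name}")
```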
This is a difficult process, and part of what we're going to puzzle through is how you actually get this stuff done. How do you get to stage 6? How do you get all the way to the right? Because it's not the pink pants and the ladder. So, Ann, how do you get started? How do you go from getting oriented to ideation and idea selection?
Not canonically; just tell us a little bit about your experience. Thanks, Dustin. I was just very distracted by this very nice-looking graphic that's right on the computer next to me. So, yeah, I was like: is it up? No? OK. So you'll be disappointed once again. At AIP Publishing, what we started to do about a year ago, a little more than a year maybe, is we started to say: well, we don't really know how we want to get started.
So maybe the first thing we should do is just get a bunch of our content together. What we didn't know, when we didn't have this fine graphic back then, is that our getting-oriented phase involved first just reading and looking at what was going on in the industry. But we went right into the idea of experimentation: we just need a place where we can interact with our content in an LLM, understand what it does and how it works, and let other people look at it.
Let's just ask a question. So the way we got started was we just said: all right, let's jump in the water and see what happens. And that was really effective for us. We had a small group within the organization that was playing around, with the support of an outside party that built us a little sandbox. And the funny thing that happened is the group started to grow and grow and grow.
So this little team chat of six people is now, I think, 28 people, several of whom are in the room. And we started to play and say: well, what if we ask it this, and this happens? From that we said: let's start generating a bunch of ideas, things we think we could do with this. And we generated something like 36 ideas.
We narrowed it down to 15; all right, not that narrow. But then from that we started to ask: what's good for us to do? What are other people already doing, where there's so much happening that maybe we don't want to start playing over there? And what needs more scale than we have?
So we narrowed it down to more like five things, and we started to play with those. Without knowing it, we were kind of going through these steps here, and I would say right now we're in the prototyping-and-testing phase. So that's phase four of his beautiful six-step graphic. And we're moving on, and there's a lot we can talk about, so we can wait until we get to those points about where the cash, the money, came from.
Yeah, where did it come from? The first bit of it was not a lot of money, and that was fairly easy to come up with. We went to our board, and within our product innovation and some of the other data asks that we were making, we incorporated funding for AI-related experimentation. And our board was fabulous; I'm really impressed with them, because we were able to say to them: we want to learn.
I can't tell you (I think, hopefully, if I let this mic stand go, it won't fall down), I can't tell you exactly where we're going, but we're starting to play around, we're looking at this and this, and we need some money, and we'll come back to you. And we gave them a little more than that: we gave them areas of inquiry, and why we thought they were priority areas of inquiry.
But we said our real big thing is we want to learn, and we know that eventually a lot of these things we're looking at may ultimately be fulfilled in another fashion, by a partner or a larger company or whatnot. But we want to be better customers. And they gave us the money, and now we manage our way carefully through it so that we have money to experiment. That's delightful.
So, Paul: boards, money, infrastructure. How do you get off the starting blocks? Well, I looked up for the graphic. The graphic shows something like a traditional process. We have a thing called the Dev Lab, and there's a project that we did with machine learning and NLP over three years, and it followed a stage-gate process that everyone would be familiar with.
In many ways, it seems like that's what we're talking about here. But some of the things that get us stuck at a society, with very separate teams, is that even when the product leadership or the journal leadership or whoever is sponsoring the product wants to hit go, there are so many questions about AI. How do you get the contracts right?
What are you cutting off that you shouldn't be if you go down a certain path versus another? Those questions, I'm assuming, everyone's been circling around over the last year. There's pressure to move, but there's usually something that pauses you. I think we're starting to get close to the point of an upswing and momentum, but it's taken a lot of conversations with IT, with finance: not the places where the money is or the budgets get approved, but the other areas that are checks and balances.
And that creates a complication just in the getting-started phase. Julia, do you want to...? Yeah, I agree completely with what Paul was just saying. And I think it's also that reality that AI is inherently risky. It's new, it's different; nobody understands what it's going to mean in the way that we do with more traditional products, tools, and technologies at this point.
And it is always easier to see the ways that something could go wrong, particularly if your job is in one of those checks-and-balances areas of a business; that's what you're there to do, to think about the consequences and the risks. But that really undersells the opportunity, and the opportunity cost of not exploring the benefits that could exist with AI. So I agree completely.
Again referencing the graphic, the mythical graphic that you may one day see: I would say we keep sitting between numbers 2 and 3, that ideation into experimentation, and trying to get over that line. Oh, so now you can see what we all mean. That man's an American hero. And it's just that real challenge of going around the business and really educating people on what you're going to be able to do and what you're not going to be able to do.
So I am very envious of Ann's space, just being able to have that room to say: we're going to experiment. I can't tell you exactly what we're going to learn yet, because we're going to learn it through experimentation. It's research; that's literally what research is. And that's such a great way to be able to explore the opportunities that come with some of these technologies.
So yeah, I think it's a challenge. The money is not always the biggest challenge. It is really that education, bringing people with you on the journey and helping people see what you are at least going to be able to learn, if not the outcomes you're actually looking for. Hopefully knowing that learning is one of those key outcomes can help people come with you.
And one of the things that we've seen be successful is structuring shorter-term projects and contracts, which are much more geared towards testing and prototypes. Ann has money allocated, but there's also team time: being less stringent about putting in a lot of team resources, and then ultimately being open to doing a lot of small pivots.
What's very difficult, from our perspective, is writing proposals: we can't tell the future. You have to do the experiments in order to figure out what you actually want to turn into a prototype and what you ultimately want to test with a small cohort of people, or the market. Max has been so silent, so I want to go a little sideways here.
Max is a data guy; he used to be at Iris, part of Informa. And one of the things I'd like you to talk a little bit about is how data relates to AI: ultimately, the importance of good data, and how that plays into developing and launching AI products. Yeah, look, let me start with this: I was with a company called Informa before starting Actionable Intelligence.
They're one of the largest B2B media companies, owning events and publishing businesses. The AI question came to us at that time from the board, from the investors, asking: what are you doing about it, in the context of generative AI? So we formed a task force, a central team, to come together and ask what we wanted to do. And when we got the word out, we found there were at least a dozen projects at varying stages in the company.
We thought we had to get things started, to get oriented, but inside the organization there was already a lot of ground-up innovation happening: engineers and data scientists figuring things out on their own. That was actually a big learning for the organization. How do we balance the risk and the opportunity? What do we want to do at the center to put up guardrails so that people are thinking about it in a more thoughtful way: architecture, which language model to use? To your point, it's quicksand.
Things are shifting so quickly that things are getting obsolete in a few months, right? So we asked: how do you strike the balance? We didn't want to kill the innovation happening at the edge, but without the right guardrails, particularly around copyright, licensing, and boundaries (which data you send, what data you have access to, all of that), we had to figure that out very quickly.
And it was definitely different from other product innovation that you normally do, because there were many things that were shared in how we wanted to do it. So that was a pleasant surprise: how do we set up the task force and stay connected as a community, but still continue to drive the innovation at the edge? And what we uncovered through that was that there were a lot of promises out there, saying AI is going to do this, or generative AI can cut short a lot of things.
It all came back to the quality of the data you had managed to organize within the organization, in many ways, and that's what we were doing centrally: building a common data analytics platform. That was a great foundation for us to push some of these innovations way ahead in some areas, as well as to put the brakes on a few areas, saying: not so fast, we're not ready for it.
So the notion, at least for Informa, was that you need clean, well-organized data for the AI to think on. Yeah, absolutely. In many use cases that is absolutely true. There are use cases where obviously you can work with imprecise data, but if you have a choice, work on your data foundation; that's going to pay back. Is there an example you can talk about?
Not a specific one, but one piece of advice: there were teams that were trying to do new things we hadn't done before, and there were too many unknown variables in that. My recommendation would be to try to extend what you're already doing, to add enhancements to existing products, which is what we did. What would have taken us weeks, even in an automated fashion, we were able to shortcut by skipping a few steps using generative AI.
So on Paul's question about how AI products are different, my advice, coming out of some hard learning, is: if you have a choice, don't build an AI product. Use what's out there. There are thousands of companies doing this; figure out a way to leverage an existing tool for your use case first. There's a lot to be learned that way, rather than starting a product of your own.
Max, it's so lovely to have you here at a panel on developing AI products. Just don't do that. Just don't do that. We're done. That was a good panel; thanks, everybody, for coming. Does anybody want to talk about, I mean, you talked about people making promises out there.
Talk about how you find people to work with, and who's making what sort of promises. Go ahead. Yeah, I'll definitely take this, because I feel like I'm sitting between several of the partners that we rely on here, and it's really helpful. One of the things that comes up right away is that, in our case, we had a bit of money set aside, but who's going to
do this? Who's going to help us, and who's going to help us learn? Because we're not just going to teach ourselves at our own pace. So we engaged with a company, not represented here, to help us out in the UK. And then shortly thereafter we started working with Dustin and Hum, and we're using some products there to help shortcut some of our development process. And I tap Dustin's brain on a continual basis.
Right in front of me is Silverchair; they have an AI Lab now as part of Silverchair, and that's also something we're using. And Max, over here to my right, and his co-founder, who I believe is in here somewhere: we use their organization too. So we have a few different things going on at the same time. And one of the things that I think is really critical about this is understanding what you can do, understanding what you don't yet know how to do, and, of what you don't yet know how to do, which of it you actually want to learn and build competency in, and where you may want to continue to rely on partners.
So I think it takes a village to come up with an AI strategy and program that's going to work. And I agree with Max: we're not going to go develop a frontier model. That would be kind of silly; we don't have billions and billions of dollars. But using our partners and bouncing ideas off them, where are the places where we may have some secret sauce to put into this, something valuable to our customers that can make their lives easier and create the value that, in turn, creates sustainability?
So I think partners are key. Paul, you don't just... Just do it. Just get in there, man. All right. Well, one of the complications: I think in the beginning phases you have to learn as quickly as you can, however you can, with whatever partners you can.
But in some of our traditional sourcing decisions, you make a decision about what your IP is and what you own. What's interesting is that if you are fine-tuning an LLM, you're not just putting XML into a database, where you can extract that XML again later and the code is separate from the content. The content becomes the technology, becomes the IP; where does that layer end? And when you fine-tune that LLM, is that editorial work or technology work?
Is it the traditional thing that's happening when you take a Word doc, put it into eXtyles, tag it up, and export a transformed object that is now more useful than it was on the other side? How is that different when you fine-tune an LLM, and is that your IP, or is that something you want to put in a vendor's hands? That question I don't think we have to answer in the first year, but as we figure out what's valuable and what works, we have to answer who owns it and why.
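To make the fine-tuning side of that IP question concrete, here is a minimal sketch of kicking off a fine-tune job, assuming the OpenAI Python client; the training file and base model name are hypothetical placeholders, not anything the panelists' organizations actually run. The artifacts it produces, a curated training file and privately tuned weights, are exactly where the "editorial work or technology work" question bites.

```python
# Minimal sketch: fine-tuning a hosted LLM on editorially prepared content.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# OPENAI_API_KEY; the file name and base model below are illustrative only.
from openai import OpenAI

client = OpenAI()

# Upload training data: prompt/completion pairs distilled from tagged articles
# (a hypothetical JSONL file). Curating this file is arguably editorial work.
training_file = client.files.create(
    file=open("tagged_articles.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the job: the provider trains a private variant of its base model on
# your content. Whether those tuned weights are "your IP" is the open question.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # example base model; an assumption
)
print(job.id, job.status)
```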
And is it mission-critical to your business, or is it not? Is there a stage-specific answer to that? I'll let... Julia? Well, I'm going to answer a slightly different question. No, I agree with what you're saying, and I think it also goes to the point that this is where launching AI products isn't actually any different. In publishing, it is very normal to work with a wide range of vendors to combine relative forms of expertise.
You are not going to be the expert in every single thing, particularly every single form of technology that is so crucial to a modern publishing environment. So it's that same process: you need to go out, find trusted partners, and bring them in to take advantage, mutually, of each other's expertise. And, stepping way ahead to the scaled production product:
we work with Hum on our data engine, the CDP side of things; lots of that is underpinned by this technology. That was a very iterative learning journey, with a huge amount of information exchange between OUP and Hum, so that we could learn from each other's relative expertise. So AI is different, per the slide you didn't see earlier, and AI products are different, but there's a lot of commonality with what you already know how to do in terms of finding the right vendors.
I do agree completely with what you were saying there, Paul: there are some really key questions that are existentially different around IP when it comes to AI, just because of the nature of how AI works. Sorry, that reminds me: I gave a presentation ages ago where I kept tripping over myself saying "AI." So if I do that, apologies.
But I think we need to not second-guess ourselves too much sometimes, because it is still a partnership approach. No publishing organization is on the scale of the Googles, et cetera, these people who are developing, as you were saying, the frontier models. We're not going to be in that space. We are going to be in a space where we're working with other vendors and other people to come together, hopefully find our secret sauce, and add to the journey.
And something I didn't say earlier that I just want to throw in, to that point about ideation and what you're going to be doing: one of the things that's been really critical for us as we've been thinking about this is our mission; we are a mission-driven organization. We've been hearing in all of the sessions, every session you go to at modern publishing conferences, about all the challenges facing publishing: integrity, AI, what do we do?
How do we handle these really complicated questions? I keep coming back to: what are we ultimately here to do? Who do we serve? What are we trying to achieve? Because you have to start with that. You can't just start with "it would be cool if we did X, Y, Z." Well, yeah, maybe. But is that going to help? Is that going to enhance the publishing industry?
Is that going to enhance how we serve the academic research community, or not? So again, it's not actually that different: you've always got to be thinking about what you're ultimately trying to do, and then go find the right people to help you deliver on that, rather than getting caught up in "let's see what we could do with AI; it's cool and shiny, so we should do it."
Anyway, did you want to say something on that? Julia, let me add a footnote to the IP point. You're right, it's not very different, but you'd be surprised, when you look at data ownership, how often it's overlooked. We just assume by default that it's all taken care of, and only when you try to unwind it do you realize: well, I thought the third party owned it, or we owned it.
There's a lot of that confusion starting to bubble up. So I think it's right not to assume it by default, and to actually have the conversation about ownership, because the lines are getting blurry. If I can throw out a dilemma that we were tossing around when I was at Informa, I'll put it to the table as well as to the audience. There were two camps. One of them said: let's do more use cases for colleagues in the company.
That does two things: they embrace it, they understand how it works, and it inspires them to think about use cases for the customer. The other camp said: no, no, let's focus on the customer; this will become a distraction. Before I share what we decided, I would love to know the sentiment of the table, or of the audience.
Jump in. I mean, it depends. Is this on? Yeah. What do you do first? Well, you don't do either first. OK: it depends on the scale of your organization. I love it.
Yeah, it's a good place to start. It depends on the scale of your organization. If you're in a large organization, there's probably a lot of dedicated resource at an organizational level thinking about how AI serves the organization itself: where are the efficiencies it can drive in how your organization operates? That can run completely in parallel to the really specific customer-focused product developments that we're largely experimenting with.
So it's a bit of both, and it depends. But if you're a lot smaller, you probably are going to have to pick one of the two, and that's going to come down to your priorities as an organization and where you see the greatest opportunity. One of the tensions we see is that if you're getting close to customers, you can book revenue against it; when you think of ROI, that's something you can actually sell up to the board level.
When you're doing internal efficiencies, arguably the way most AI has been applied so far, with things like coding or workflow solutions (think of peer review and those sorts of internal systems), well, you could save some time, but the business case gets a little squishier, and dollars speak louder. So on the one hand you may be optimizing for where you could make a deeper impact, but it may actually be harder to sell within an organization.
So I would have answered your question: yes, both. One of the things that I think is sometimes hard to remember is that most organizations have more than one person, and as soon as you have more than one person, you have more than one avenue. Even though I was talking about this chat and our experimentation, that was only one aspect.
We had a phenomenal director in production (she's here) who was experimenting with partners that had products and was looking at that for AI. We have a head of application development whose team is experimenting with coding and other things, with other partners. So the reality is that some people were already saying: here's something that impacts our customer, here's something that impacts me, here's something that's important.
And they were starting to move on those things as we were also looking at other product development, feature development, and exploration paths. When there's interest, it's amazing how the time shows up. We're all busy; we all have too much to do. But one of the great advantages of AI is that it's piqued a lot of interest, and people find the time to participate.
Now, that's only good for a certain amount of time; we don't want to kill people. But it gets you through that ideation and experimentation stuff, to the point where there's a high likelihood that applying resources is going to yield value. Max, I know you're curious, but Paul, do you have...? Yeah, sorry. That ramp-up of interest: I think there's something about getting some tangible experiments started that helps people see what it can do.
That's the stress of the getting-started period, or maybe of getting into a couple of the later stages. Especially because what this does is transform how we do our work, and at a time when, speaking of journals, we've just experienced so many changes to our workflows, like the loss of print over the last few years (for us, that's recent), that type of transformation is coming on at the same time as this new type of tool.
It's different from just experimenting with a different product off in the Dev Lab. You need the interest of everyone who knows how the work gets done, the SMEs, to participate and have the time to do it, in order to do it right. That has taken a little bit of time, and I'm optimistic that the snowball effect will occur.
But that ramp-up piece is one of the good frictions, because the more people collaborate who know how the work gets done, the better it's going to be. How do you scope that sort of experiment so that you're proving out whatever hypothesis, but also bringing the org along as well? Come on, you have it solved, Paul. I know it.
We're trying to get the snowball going. Well, my take on it, and this may sound obvious to many of you, but one of the insights I learned about AI is that it's a general-purpose technology, like electricity, not a specialist technology. Once we bumped into that, we asked: how do you enable as many colleagues as possible to understand the possibilities of it, so they can think about different ideas?
So that's how we approached it. And we kept coaching people not to look for benefits in the immediate term, because this is going to stay with us for a while. Don't try for quick wins; don't try for quick ROI. The more colleagues understand how to use this technology in different scenarios, the better for the company.
So part of that is a baseline exposure to the technology, not just a specific prototype or product. I'm curious: I've heard the CEO of a major publishing org, and this was in the November-December time frame, saying we have about two years before publishers start to lose their place in the ecosystem, as things like Perplexity and Bing and some of the big Silicon Valley giants effectively engulf it. Their market position becomes so strong, with new modes of interacting with AI models: your content gets fed in, but you become a little box on a diagram.
Do you buy that? We will try. Thank you. Sorry, these microphones are quite weak. Let me just say that again. As of last fall, the CEO of a major publisher made the claim that publishers effectively have two years to make significant moves in AI.
If they don't, the likes of Perplexity and Bing and the Silicon Valley giants will change consumer behavior: they're delivering things like answers, not the sort of search-engine results Google has. It's a short timescale, things are moving very quickly, and it's not the sort of situation where you can just be a fast follower. What I was asking the panel is: do you buy that?
Well, I'll share and pass it on. I can't speak to the two-year timeline, but you can safely assume there will be new business models emerging, and it's better to be proactive about it. There are licensing agreements being done with publishers, primarily B2C at the moment, with these big companies, with OpenAI actively soliciting licensing deals and things like that.
So we can't predict what it's going to be, but I think there will be new business models for how content gets discovered and delivered, for sure. Anybody else want to take a swing? OK. You definitely think there's a threat; I mean, everyone's talking about it here.
I think one of the key things we always have to do as publishers is understand, like we're saying here, why we do what we do: what our mission is, going back to basics and figuring out what we are. Speaking from a society perspective, we've brought researchers together for the last couple hundred years so that other researchers or practitioners can read that research as quickly as possible, consume it, and conduct new studies.
Sometimes it's for collegial get-togethers and conferences. But there's a layer where the primary literature is the container of information, and the value is this interpretive thing that happens at the society level, around the journal. We've spent our time as publishers trying to figure out how to chunk up journal articles and make them more contextual. And I think there's a positive side to this question: yes, there's always a threat, but what can we do to leverage AI as a collaborator that allows us to create more secondary ways to experience the content, and the breadth of value that the full society brings, whether through interactive multimedia experiences alongside interpretive AI scenarios?
I think there's a space there that we haven't discovered yet, which affords us something to dream about, maybe for five minutes before those two years go by. But we have a window of time to figure out the product question of what we should be, instead of playing defense, which I think is the bigger threat. So is that cast against the background of the broader threat? Yes, but it's more saying: focus on the opportunity to figure out how you're going to proceed in the short term. All right, Paul, we're doing this.
This is great. I'm just building on that; I agree completely. And I think it's also something where I don't think it's necessarily a problem if we get to a future where the majority of the literature is being consumed by AI, which is then a tool that researchers are using to support them in their journey. When Darwin was coming up with the theory of evolution, he had no idea that Mendel had already figured out genetics.
He didn't get the benefit of that content because he didn't know it existed. There's a huge amount of literature published globally, lots of it not in English, that most researchers have no access to. So I think there's a huge opportunity for AI to do the things AI is brilliant at, in terms of holding a lot of information and looking for patterns across it.
So exactly like you're saying, it's an opportunity. Now, of course there are challenges: we've got to make sure that the content is valued, that we are generating revenues from the use of it in these new outputs. But I wouldn't get too hung up on the concept that publishing is about producing a print journal product. It's not print; it hasn't been print for a long time.
It's that journey of knowledge, exactly like you were saying. What is that going to look like in the future, how do we continue to serve it, and how do we appropriately monetize that process and the value that publishers add to that research activity? So it's also a yes from me. I think it's coming; who knows if it's two years or not, but it's going to change everything.
But I kind of think that's a good thing. Love it. So, just adding on to what Julia said: my first reaction to that is that none of us, none of our organizations, has a right to exist. We don't have an inalienable right that says we have to be here. Sorry, is this better?
Is that OK? OK, thank you. So we don't have an innate... whoa... we don't have a right to exist. We have to have a reason. We have to serve a purpose. We have to create value; we have to be valuable to someone or something.
And what Paul and Julia are saying, and I think Max would agree as well, and I know Dustin agrees, is that it's in finding that value. In this environment, whether it's two years, five years, two months, it is what it is. If we don't take the opportunity to not only learn about ourselves and what we can do, but to figure out more precisely who we are and how we fit into an environment, to add value to our core audience of researchers and to fulfill our societal missions, for mission-driven organizations, in new and better and more impactful ways,
we don't deserve to be here. We just don't. Harsh, but true. I'm going to open it up for questions in a few minutes, so formulate those and we'll open it up. But I'm curious, and you were angling at it a little bit: what if the bigger threat is a more generalist threat?
It's a Google or a Perplexity: somebody with a general-purpose technology, a general-purpose product. What are the highest-impact places that scholarly publishers can apply AI over the next few years that will be a distinctive advantage for them, things that, frankly, the bigger companies won't be chasing?
Was that for me? It's for whomever. Yeah, one of my colleagues said this; again, a very simple statement, but it sounded very profound to me, and I used it as a filter. He said AI is superhuman in a few things and subhuman in many things. If you can look at things through that lens, you use it for its superhuman capabilities and avoid it where it's subhuman, if that makes sense.
Because sometimes we don't make that distinction; you can end up using it in the wrong context, and you won't get the results. The big platform players are going to solve a lot of problems around discoverability and delivery and all of that, and I think they will only go so far. When it comes to contextualizing things, I agree with you, there is a role for publishers,
as long as you're creating value. In terms of where it's already making an impact: the marketing space, not just in publishing but marketing generally, in terms of automating things and generating content. It's widespread already. To take an example of the things that machines are better at than humans: there was one thing we worked on where there was a filtering spreadsheet with a few factors,
and that was one of them. Keeping an entire research paper in an LLM's mind at the same time is something a human can't really do. So if there are things you can articulate that machines are actually very good at and humans can't do, those are nice threads to pull on for new capabilities you could potentially unlock.
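As an illustration of that "whole paper in the model's mind at once" capability, here is a minimal sketch assuming the OpenAI Python client and a long-context model; the file, model name, and prompt are hypothetical, not the filtering-spreadsheet workflow just described.

```python
# Minimal sketch: asking a long-context LLM to reason over an entire paper at
# once, something human working memory can't do. Assumes the OpenAI Python
# client and an API key in OPENAI_API_KEY; file, model, and prompt are
# illustrative only.
from openai import OpenAI

client = OpenAI()

# Load the full text of one research paper (hypothetical file); tens of
# thousands of tokens fit in current long-context models.
with open("paper_full_text.txt") as f:
    paper = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # any long-context model would do; an assumption
    messages=[
        {"role": "system",
         "content": "You assist scholarly editors. Answer only from the paper provided."},
        {"role": "user",
         "content": "Considering the ENTIRE paper below at once, list any claims "
                    "in the discussion that are not supported by the methods or "
                    "results sections.\n\n" + paper},
    ],
)
print(response.choices[0].message.content)
```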
One thing that's always interesting to me is that I hear people talk about AI as a competition, this human-machine competition; the keynote this morning, "The Rise of the Machines," talked about this a lot. Really, it's: how do you make it a collaboration and, as you're saying, play to the strengths of both parties? So when I think about where we play as a society, two things come to mind.
First: where is there information or knowledge about our researchers, our customers, their needs, and their science, beyond simply the last 2% of their job, which is producing a paper? Where are the places where we can develop insights and productivity gains that help them, and where we can be trusted, where we can create a safe environment that is not like an open frontier model?
So we have some thoughts there. The second side is, as you may know, AIP Publishing has recently started something with IOP Publishing and APS, the American Physical Society, called Purpose-Led Publishing. It's physics-focused. And one of the primary areas I've been thinking about a lot lately in relation to AI, with my partners in those organizations and in ours, is:
where can we together add value that is more precisely targeted at the needs of our research community? The big players are adding a lot of big, general value. Good. But where can we take that to a place that is much more relevant, useful, and impactful for the researcher in our fields? So we have some ideas and some things we're working on there.
If you have ideas or recommendations, we'll take those too. Otherwise, questions? I would add something to that. One of the things we've been thinking about a lot is, unsurprisingly, peer review and integrity. Who isn't? And I think it's one of those areas where it's very controversial to be thinking about what the role of AI should be.
There are very strong opinions on that. But I also think we really have to challenge ourselves on what peer review is right now, what it should be, and what the gap is between those, because nobody would argue peer review is perfect. It is highly, highly flawed in a lot of different ways. And it goes exactly to that point of collaboration and augmentation as the ways to use AI to complement the weaknesses of human peer review.
We have to be asking ourselves those questions, and obviously integrity is fundamental to that process. So we're doing a lot of thinking on those questions right now, and have been for a while. Can I add on to what Julia said? Yeah, of course. So one of the things that I think is really critical when you look at those big thorny issues, like "peer review" in just two words versus all of the aspects of it, is that it's bundled.
People may have an allergic reaction to the idea of AI in peer review. But the reality is that peer review is many different activities bundled together, and some of those activities a human shouldn't have to do. So the first step could be: let's look at this and ask what we can take out of peer review, freeing up the time and brain of the peer reviewer, that is not controversial.
This is something we humans do all the time: we say a word and think everybody means the same thing by it, and it encompasses so much that it feels impossible to do anything about it. So how do we break it into tiny pieces and take little bites? Sorry, this would be more dynamic if we weren't passing the mic back and forth. Building on that, because I agree completely:
it's also that thing where a lot of people assume things happen in peer review that don't actually happen. Particularly in an integrity context, there are all the classic examples: it looked like you had error bars, but actually you just put T's on top of your chart. So how can we use all the tools available to us, and it's not just going to be technology but a combination of technology and humans, to really try to get to a better class of peer review and a better class of research? Because that's what all of us are ultimately here to do.
And like you say, we do not have an automatic right to exist; we have got to add value and show that value. So yeah, absolutely, I agree completely: "peer review" means a lot of things in two words, but there are lots of really good questions to answer. I agree. Scholarly publishing is under-resourced, and we're scrappy. We've always been scrappy.
We use whatever tools we can, in many different ways. And when new tools come to light, one of the things that happens is people feel threatened, especially when it threatens tasks they might have complained about just the day before. That's happening with AI.
I gave kind of a funny talk at STM Innovations about PDFs going away. "They'll never go away." Well, they're going to go; already AI is proving, after 20 years of complaining about the format, that it's going to go away. And the act of typing a manuscript: if you're a researcher, I know there's a lot of threat talk around that,
like, what happens if a researcher doesn't actually type their manuscript? But it's kind of a non-value-added step. The asking of questions, the probing, the research: that's the value-added step, and researchers should be doing more of that and spending less time crafting a manuscript that's a container for the data. I think this starts to give us back a workflow that's just a better version of what we do today, with these different AI machines as a new collaborator or colleague.
Perfect. Questions? Do we have any from the internet? We only have one roving mic; unfortunately, we are under-resourced as an industry. Please kiss the microphone. I will hug the microphone. Thank you, guys; super helpful. Thinking about your experiences as you experiment, and about working with your end users: how do you work with end users at the various stages on that scale when they themselves are really new to the technology?
They are also experimenting with it and don't necessarily have a best practice for it. When you give them a new tool, they're essentially just poking around, not necessarily using it the way they will eventually use it once you're done with the product. How are you navigating that? You have the mic.
I have the mic; this is the power here. One of the notions, and this is part of what we write into proposals, is that typically in the experimentation phase you're trying to get internal expert users. If you're launching AI products, there are often various sorts of validation that you need to do. As you move towards prototypes and testing, it's typically smaller, trusted user groups, who may be external but are potentially good early adopters.
Then as you're getting into the alpha and beta product, you're doing a mix of private alpha and beta and potentially expanding out to the public, but it's concentric circles of people who are interested, who want to contribute to the effort. You're finding energy in the organization and in the community to be able to do that. You absolutely cannot go through this process effectively without cultivating internal and external users.
And I think we've seen some fairly high-profile failures from not doing that among the first wave of people who hit the market with AI products. Next question. This is an old Silverchair question. So you touched on the fact that AI products don't follow the same development cycles as typical products.
So then how do you measure success? What are the metrics? This is a question that's near and dear to my heart, and we've been talking about it internally for a while. I think you structure an experiment as you would structure any experiment: you have a hypothesis, and your success is whether you have proven or disproven that hypothesis.
There are certain dollar amounts that go with experiments like that; it's not a $1 million investment to do something like that. So for a period of time you judge success by what you learn, how it informs your next steps, and how good you have been at structuring and proving or disproving your hypotheses. But after a while, you have to start to judge success by other factors.
We're in that, what is it, stage 3 to 4; we're just getting into 4 now, kind of starting to exit 3. In stage 4, I think that's where we're going to have to start to judge our success by reception: is the target market interested in what we're doing? How are they showing that interest?
We talk about this in Purpose-Led Publishing, too: how are we going to judge success? We want to judge success by researcher engagement, and we have ways we want to measure researcher engagement. But eventually, for some of the items, it'll be money; it'll be revenue, for sustainability. Not because we're out here to have a 50% margin; we're out here to continue to provide additional resources to further our mission.
Thank you. From the virtual chat: are there best practices to stop AI from being a distraction from our main business? That's a loaded question. That's definitely a Paul question. You're welcome.
I don't think so. I mean, it's going to be a distraction from our main business, and something this large probably should be. I'm not sure I agree it's a distraction. It's easy to see it as a distraction right now, in its infancy, so to an extent, yes, it is a distraction from the core business. But everybody is underestimating how quickly AI is going to keep changing, and it's going to hit the inertia of an incredibly change-resistant academic research environment.
And it will be fascinating to see how that plays out. But if we act like AI is a distraction now, we are going to have a lot of problems in a few years, when it has become the norm in a lot of parts of people's lives. Very good. I was just going to say: if anyone says they have the best practices, don't trust them. OK, another one from the virtual chat: from a tech and data-integrity standpoint, how ready were your teams, and what advice would you give to a messier organization?
That's the best one, I think. Yeah, thanks. So you assume we're not messy? Thank you very much; that's very kind. We started with open access content in our sandbox because we didn't want questions like that to stop us from getting started.
So what I would suggest is: if you have content that is open access, or even abstracts or whatnot, start experimenting with that, and then, as you figure out what you're doing, start to think about how you'd like to experiment as you work up to that messiness. So now we are looking at things like that. That was one of the reasons, not to sound like a shill, why we were really interested in working with Hum on embeddings as a service and some of these other things: because we want to construct a safe environment.
But when we started, we weren't ready for that. We didn't know how to do that, and we would have just stopped. So find what you have that's already out there, start playing with it, and then start to figure out these questions and get your ducks in a row. Yeah, and particularly in the experimentation phase: how stripped down can you get and still prove your hypothesis? Then, as you get towards the scaled production product, that's when you have to have your data in order, which includes content data, and content in XML is not clean and ready. Ultimately a lot of things have to fall into place, but you can start in the earlier phases with pretty rough-and-ready data.
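As a concrete version of that "start with open content and rough-and-ready data" advice, here is a minimal sketch of an open-access embedding sandbox; it assumes the sentence-transformers library, and the model and data file are illustrative, not what AIP Publishing or Hum actually run.

```python
# Minimal sketch: an open-content embedding sandbox. Turn open-access abstracts
# into vectors you can search semantically, without touching restricted content.
# Assumes sentence-transformers (pip install sentence-transformers); the model
# and data file are illustrative only.
import json
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model that runs locally

# Hypothetical file: one JSON object per line with "doi" and "abstract" fields.
with open("oa_abstracts.jsonl") as f:
    records = [json.loads(line) for line in f]

# Embed every abstract once; rough-and-ready data is fine at this stage.
corpus_embeddings = model.encode([r["abstract"] for r in records],
                                 convert_to_tensor=True)

# Semantic search over the sandbox corpus.
query = model.encode("measurement of qubit decoherence", convert_to_tensor=True)
for hit in util.semantic_search(query, corpus_embeddings, top_k=3)[0]:
    print(records[hit["corpus_id"]]["doi"], round(hit["score"], 3))
```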
Hi, this is a question about integrity, and it's quite specific; if you can answer it specifically, that would be great. This is Richard Nguyen. When you're building these models, and when you license your content to third parties for them to build their models, are you excluding retracted manuscripts? And what policies do you have downstream to deal with retracted manuscripts that have been built into the LLMs?
Is anybody licensing their content? Not yet. And those are some of the questions that would hold us back from crafting those deals. At least, I think we haven't yet; that's how quickly things are evolving. We're working on the right models so that we can bring back the sort of licensing that we've enjoyed with other types of products.
But there are a lot of considerations to be weeded through that we may not have had to think about before. Yeah, and just building on the point of what happens to something that was fine, that hadn't been retracted, at the point you licensed it: I think it goes to the distinction between licensing for training versus, where I think we will get to, licensing for products.
I think we are going to see a lot more AI products that need to keep coming back to you, because you are publishing the latest work and they don't have it, and they need the latest work to provide a service that keeps users up to date on a topic, for example. And that's where there's potentially a lot more scope to have, as part of those agreements, rules around what happens if something is retracted.
But there's a reality with training data: if it was in the training data, there's not much you can do at that point if it's later retracted. Exactly like Paul says, these are all questions being explored as part of thinking about this. There are plenty of dirty secrets in the existing training data, so a retracted paper is probably way better than a lot of the detritus. But that is the end of the session.
What a great way to end it. Very cheery. Do we have one more quick one? Anybody? No? Bueller? We have some more from the online crew. Pick one. Make it cheery, if you would.
OK, this one's cute: should every tool be a company, and do we have a place to support tools that are just tools? Yeah, probably not, to the first one. Digital Science, I don't know,
and other people are acquiring these sorts of tools. But depending on how narrow a tool is, there's massive tool fatigue, and at a certain level you need scale for some of these products. And in the future, AI will be so pervasive that we won't even be thinking about, quote unquote, "AI tools"; that will be an anachronism, like "e-commerce."
But anyway, thanks, everybody, for coming. Lovely to have you. And thanks to all the panelists.