Name:
Teaming Up to Transform: Scaling Responsible AI through Strategic Partnerships
Description:
Teaming Up to Transform: Scaling Responsible AI through Strategic Partnerships
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/4ac8ad17-49ff-48af-a848-614d008cf066/videoscrubberimages/Scrubber_1.jpg
Duration:
T01H02M22S
Embed URL:
https://stream.cadmore.media/player/4ac8ad17-49ff-48af-a848-614d008cf066
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/4ac8ad17-49ff-48af-a848-614d008cf066/SSP2025 5-30 1330 - Session 5E.mp4?sv=2019-02-02&sr=c&sig=NkBN0h8fLNVADgA41ouM6Hxw3jr2Oz3TDjVsW5UWtMI%3D&st=2025-12-05T20%3A59%3A59Z&se=2025-12-05T23%3A04%3A59Z&sp=r
Upload Date:
2025-08-15T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
OK. Hi everyone. I think it's just after 1:30, so we're going to get started. Thanks for joining us. I know we're at the tail end of the program here, but thanks for sticking around; hopefully they save the best for last. I'm Gina DeRose, senior communications manager at Wiley, and we're here to discuss Teaming Up to Transform: Scaling Responsible AI through Strategic Partnerships.
A few of the things we're going to discuss: exactly how AI is reshaping our industry, when and how to partner, how to manage uncertainty, and how to ensure trust. These are some very high-level topics we hope to cover over the next hour. But before we get into that, I'd love for each of our panelists to introduce yourselves, your organization, and the perspective you're representing in the discussion today.
So, Matt, would you like to go first? Sure. Hi, I'm Matt Giampoala. I'm with the American Geophysical Union. We serve all Earth and space sciences researchers and those who benefit from that science. I work with their publishing arm, and, full disclosure, we're partnered with Wiley. I guess I'm playing the role of the wide-eyed publisher who's trying to figure out how to navigate the new world of AI and how I can partner fruitfully to leverage this technology.
I'm Ryan, CTO at potato. Potato is a startup: seven people who just raised some venture financing, building an AI-powered scientist. So we're consumers of content, and I guess the role I'm playing is wide-eyed tech kid. Hi, I'm Lisa McFerrin. My background is in bioinformatics and cancer research, through my own scientific lens and publications.
I was always frustrated with the state of technology when it came to actually being able to ask and answer the questions at hand. So I joined Amazon Web Services (AWS) years ago to help create better enabling technologies to address those needs at a global and industry scale. So that will be my lens today: where does AWS fit, what are we doing in the AI space, and how do we work with the various partners in this field to bring that to everybody?
Good afternoon, everyone. My name is Josh Jarrett. I'm also at Wiley. I run a team called AI Growth, a new team we created a little over a year ago to interface and interact with the broader AI ecosystem and explore opportunities to grow both impact and revenue for Wiley and our partners.
That includes working on licensing, but also strategic partnership development. And in full disclosure, we've partnered with all three of the great folks up here. Amazing, thank you. So let's start with what's probably a broad question, but one that should yield some fruitful discussion: how are you seeing AI reshape the industry? And what specifically are you doing to grapple with it within your own organization?
Whoever wants to jump on that one. Sure. So we're building an AI scientist: anything a human scientist is capable of doing, literature reviews, coming up with hypotheses, planning experiments, writing the code for lab robotics, writing computational pipelines, all the stuff you might want to do as a scientist, we're trying to do. Now, this is obviously very hard.
Some may say the technology doesn't exist to do it yet. We're trying anyway. But the consequence of that is it's going to suck for a while, and then it's going to work its way up. And when it reaches the point where it's as good as a human scientist, then suddenly, because the cost of training a human scientist is very high,
they have to go to school for many years, they often do a postdoc, you've got to train them up on specific techniques, and there are only so many scientists. With software, you just flip a switch and you have 10,000, 100,000, a million new scientists out there in the world, twice as many as exist today.
And so the consequence of that is dramatically more science done. Research content continues to be an important part of that, but there's a lot more value creation on the far side, a lot more money in the ecosystem. So how do you capture some of that? I was talking this morning to Todd Carpenter of NISO and The Scholarly Kitchen, and he was talking about how we've spent the entire history of humanity writing for other humans to read.
And for the first time ever, most of what we write will actually be read by AIs. So what does that mean when AI is consuming more content than the humans are, and that becomes the mediated way in which the scholarly record is interpreted and knowledge is transferred? How do we build the right relationships with AI in that world so that we maintain the scholarly record and the academic advancement ecosystem, while continuing to pursue some of the big aspirations of AI, science, and other things like that?
Coming from a tech company, we've looked at some of these numbers, and it's pretty astounding: AI is being adopted at a faster rate than any other technology that has existed, faster than the internet and faster than personal computers. Already, within the two-and-a-half-year time span of generative AI, its adoption rate is 40% across the broader industries. Within health and life sciences, which I help support, all major pharma companies are using generative AI.
In some sense, 25% are operating at scale with generative AI in their pipelines, 70% of hospital and health systems are using it, and 66% of physicians are using generative AI in their workflows. The degree to which it is infiltrating the general landscape is astounding. So it's not a question of whether it's going to be here; it is here. The question is in what capacity.
We're seeing groups use it for a lot of evidence-based research: what literature is out there, what data is out there, how they can start coupling these diverse, multimodal data sets to better understand and gain insights at a much faster pace, and how to automate a lot of manual tasks. But also how to boost the scientists and physicians, reducing the burnout from manual tasks and from the specialist work that would otherwise be required to gain that level of information, so that they have it at their fingertips and at the ready.
We work a lot in that space. Sorry, I know we said we wouldn't all answer every question, but from my perspective, technology, including AI, is changing everything about how people do research, consume research, and share research, and it's obviously going to keep speeding up. From the publisher side, I think of it in two ways. On the input side, as we do our peer review, production, all the things we do as publishers, there are a lot of tools that can help us. I can comprehend that and see the upside, and feel like I understand what could happen there. The output side, how people consume and interact with validated research, is a little bit more foggy to me, and a little bit scary.
It's like we're approaching a cliff or something, and we have to figure out how we get to whatever happens next. I think hopefully this panel is about how partnerships might help us envision what comes next. Yeah, thank you. That's a great transition. So no one can go it alone.
The tech people need the content people; the content people need the authors. There's a whole pipeline that has to work together here. So how and when do you think about partnership? How do you identify the need? When is the right time? Where do you even start? So I can start.
I'd say that, exactly as Matt said, the future we're heading toward, we know it looks very different than the world we're in now. So it's a big gap. We also know it's uncertain, so there's a lot of variability, and that creates a lot of uncertainty. We need to get to a new place, which means we probably don't have all the capabilities in house to do that.
So I think of partnerships as being able to deliver at least three different things. One is capabilities: certain things we don't have or we're not good at. For example, we partnered with AGU on work we've done with the European Space Agency to build AI tools for researchers. We don't have all of that expertise; AGU has great content, and they also have subject matter experts who are part of testing that system.
Beta testing: is this thing actually going to work, and work for real scientists? That's a great example of capability and expertise partnering together. The second one is around scale, right? How do we get reach and scale? AWS has a lot more reach than Wiley does into those life sciences companies. Those companies may be subscribing to our content, but it's through their corporate librarian or somebody in procurement,
not the person running the research lab. So getting us reach into new opportunities and new places where this AI is consuming research, that's scale, the second one. The third one is learning. In a world where information is so valuable and knowledge is king, how do we learn as fast as we can about what's actually happening? The stuff that Ryan and potato are doing, he already said it:
they're trying to do something that's actually not possible yet, right? An AI scientist. So how do we be part of that? How do we understand the role the literature has in empowering that? We don't build science applications and tools, and we don't have products that sit in the lab. But by partnering together, we're able to see how that research is being used at the bench and understand more about how that future could evolve.
So capabilities, scale, and learning would be my three. Yeah, so partnerships, obviously, are there to help you do the things that you can't do on your own, and we need them more than ever. We're very happy that Wiley has ventured out into this world of licensing. I think the most important thing to me is that we get into real partnership frameworks where we're saying what the licensee gets,
what we are providing, and what the rules are about how things are shared. We're in a better sort of collaborative co-creation space at that point, rather than having to worry about who's coming up and skimming the information and doing whatever they want with it. So to my mind, to be actively in a partnership now is better than waiting to see if somebody's going to come to you and say, oh, we've done this, we've preserved your subscriptions, everything is great,
and I haven't changed anything. No, we should be in these partnership discussions now. I want to emphasize a point that I made earlier, which is that the cost of doing science is going to go down by orders of magnitude. And as a result, there's a lot more science going to be happening once AI is involved.
We're hoping to do thousands of times more science overall, mainly powered by the AI, and so we'll be the largest consumer of content out there. We want that access to content, and we don't have it today; we get it through partnerships. It's valuable. It's necessary to do good science, to encourage reproducibility, to maybe do more rigorous research prior to running an experiment, to make the wet-lab component of the research cheaper.
So we're going to be consuming more literature than anyone in the long run, hopefully. And as a result, we've got to figure out the right way to work with the publishers such that, with probably $1 trillion of new economic value created, you guys get some of that. Like, what's the way for you to get that? I don't know.
But I think on this panel today, we'll talk a little bit about the way to capture that value, and there are different points of leverage there. As the biggest buyer of research, we'd have a lot of leverage; as an Amazon, you've got a lot of leverage. So how do the publishers work with you, or how do you as publishers make sure that you capture your fair share of the value, given the super valuable role that you play in the ecosystem?
The goal was not to go down the line, but I have things to say too. So 90% of what AWS develops is based on customer demand; pretty much everything we do is in partnership with a customer and based on their requirements. For example, Genomics England wanted to couple the genomic data they have in house with the medical data from their patient population and understand drivers of genetic disease, in this case intellectual disability.
They are scouring the public resources with millions of literature searches to give that evidence-based context, and they're now informing over 20 different associations that they're pursuing in clinical trials. That's all using the things they had at their disposal and the technology we can help provide to get them as far as they can go. But there's the blissful desire to have more.
And that's where working with publishers comes in, to get to that next level. What can we do beyond abstract searches? What can we do with full-text searches? How can we get to the contextual relationships among these things? How can we get to that extra level of insight and application that these end users are looking to pursue? That is something they can't do on their own, and we can't do on our own, but all of us can do it together, to get to that end goal where everybody benefits.
And Lisa, I think you just spoke to some examples of partnerships and how they work their way through the system, but are there any other concrete examples the group would like to share that might prove illustrative? Or we can move on. Oh, go ahead, Ryan. Yeah, one of the risks, and we talked about this at lunch, is that somebody becomes an aggregator in the system.
The aggregator has the attention of the consumers of the data, or they're able to redirect it, and so they get a lot of leverage against the people whose content is really valuable. One way you fight that today is you have your own direct sales teams: you go out to science labs, you sell to them, you own the relationship with the customer. And the risk of an aggregator coming in is that you get disintermediated.
We're going to want to talk to all of these applications, and as a result, we'll have a lot of leverage; a company doing AI will have a lot of leverage in selecting and curating what the sources of information are. And so the question for you guys, thinking as this larger ecosystem, is how do you avoid becoming just a data set?
You want to be more than just a data set. One way to do that, and we've been talking about this with a number of publishers, is you continuing to own the relationship with the customer: it's your subscription with them that allows me as a technology company to access the publisher content. And if I'm forced into that situation, which today is the most likely scenario,
then great, you get to keep maintaining that relationship with the customer and capture a lot of the value. If in the long run that ends up not being true, where I own the relationship with the customer and I'm doing the science, then there's a lot more opportunity for a tech org. And I say me, but I'm a nice guy, I'm not going to do any of this. A tech org could squeeze you out and make it so you don't have as much leverage in capturing the value.
And so I think managing that is an important part of figuring out what a partnership is supposed to look like. Oh, sure. Sorry. I think we went into the risks of partnering a little bit, but I wanted to step back to the why. So why do you want to partner? You want this corpus of material to train your AI system on and to improve its outputs.
And we've had conversations previously where we talked about how, if you wanted to be a bad actor, you might try to just buy up all the information you can yourself, hoard it, and wall it off from others. But one thing that might prevent you from doing that is that validated content, new content that's coming out every second, is going to be important for whatever your product is, so you have an incentive to actually partner with us in a way that allows us to sustain the thing we do that's important as publishers:
for us to be able to continue to publish validated content, in whatever form. If we can be on that ground, then we can have a fruitful partnership. Yeah, you just don't want to scrape the internet. There are all sorts of things on the internet that, it turns out after some reflection, are not true. And I think there's just the complement of skill sets here too.
So we partner with organizations that are generating foundation models and work with a broad selection of those, whether they're things Amazon is developing, third-party ones from Anthropic or Meta, or Hugging Face, which offers open-source capabilities so you can develop your own models. We can offer that as a technology layer that can enable you and your own products and capabilities.
That's where working in that area comes into play. But then there's the partnership layer of scale and market access: where do you integrate within the broader workflow, and what is that end customer experience? How can you go from selection to customer experience to traffic and create that flywheel that drives your user interactions, and the economies of scale that benefit from it?
Great. Well, let's talk about trust. Trust is so important for every company, and it's especially important for what we do in these times. So when you're looking at AI applications or building AI partnerships, how do you ensure that they're aligned with the values of the scholarly enterprise?
Easy question, really. No, I mean, in every partnership it's really important to understand motivations and incentives on both sides, and the gets and gives of that partnership. We were saying at lunch, you almost need to know your partner's business model as well as your own to understand what's going to make this a successful partnership, but also where your incentives might not be aligned. And to successfully partner, you have to put those tensions in the room.
You can't pretend and just talk about what you have in common. I think you have to name where our interests are not aligned and say, how are we going to manage this, and have that as a conversation. So: understanding those incentives, having the relationship where you can talk about it, and, at the end of the day, capturing that in contractual agreements to make sure your interests are protected, even as you work toward a shared goal.
That's what I'd recommend. I guess I would widen the circle of trust there too. It's not just trust between the publisher and whatever group you're licensing to; we as publishers have to maintain trust with our research communities and our authors and readers, and continue our missions.
They've got to trust that we're still focused on our mission, and I think that's a big challenge. One of the things we've experienced recently is that authors really have no idea what copyright means, what copyright transfer agreements mean, or, if they've published under a Creative Commons license, what the implications of that are. So we have to have this conversation,
to educate, but also to rebuild that trust as to how we're going to behave as publishers to continue to serve our authors. Just to add on: trust is dependent upon transparency and an understanding of ownership. You're talking at the partner level, but a lot of what we're talking about here is the underlying data.
What do you have, what do you own, and what's being done with it? The way we operate is that your data is your data. We aren't touching it. It is all in your accounts. It is owned by you. And when you work with your customers, that is the agreement that you and your customers have. So we make sure it's clear that it isn't coming into a central repository
where we're mining things. There's a lot of fear among customers, especially in the health and life sciences domain. We work with patient information, we have HIPAA compliance, we work in Europe with GDPR, we work in China. So how do they trust where that data lives? Who's accessing it? Is it following the proper data sovereignty laws? All of those things underlie it.
We have a shared responsibility model, security of the cloud and in the cloud, that we work through with organizations to make sure it is transparent what is happening, where things are going, and how they're being used. And we have a responsible AI model covering fairness, security, and privacy, so that any models being developed are developed in your account and you know exactly what is being done. I think that's key when we're looking at the AI domain.
Something else that's interesting in terms of partnership is that when we look at models and where AI operates, a lot of these models are trained and then frozen as moments in time. We're now in a world of agentic AI and dynamically changing data. That's where I think it's key to think about partnerships, because there needs to be ownership of that content as the future evolves: organizations that are fit for purpose in what they're doing, publishers creating and curating this dynamic content and helping to surface it, and an agent-based system that then allows those users, again, to trust where it's coming from and how it's being used.
Sorry, can I add one more thing on these questions of trust, particularly as it relates to licensing? Matt brought this up. A lot of where this kind of partnering relationship is happening is in licensing to new partners who want to do new things with your content with AI.
And I think having certain principles that you're trying to uphold through that relationship is really important. So we talk a lot about transparency: how is the user going to know that this content is in there? Data provenance matters. If I'm making a health care choice for my kid, I want to know if that AI was trained on the internet and all its truths,
on preprint servers, or on the version of record, so I can make judgments. So transparency matters. Citation matters; that's the currency we operate under. How am I making sure the author is recognized, that I can get back to the original work, and that the scientific record is continued? Then clear grants of rights:
what can you do and what can't you do? Matt talked about that, the importance of licensing. It actually creates a framework that says you don't have carte blanche; you can do these things, and we have an escalation path. Licensing, by giving people some rights, creates constraints on all other rights.
So, ironically, you get more control over your assets by licensing some rights to them. And compensation: what's the fair compensation that should flow through the system? Mechanically, that's how we thought about creating aligned incentives in licensing. Yeah, I mean, the goal when you're doing science is to surface the truth.
And as much as I'd be interested in creating a model on seed oils and injecting bleach, I'm curious what would happen, we want our models to be accurate. A big part of that is curated content, content that has been thoroughly vetted, which is what you provide. I think one of the questions for you guys is around where the threshold is.
How much of the work do you do yourselves? I think Lisa would say you should do some of it yourself so that you understand it deeply, and that AWS can help you do that. On the other hand, I'm like, just trust me, I'm a tech startup. So I think there's a wide range of things you could do. But it comes down to getting access to information and sharing that information.
The ethical question in my mind is how you ensure the highest level of accuracy, rigor, and reproducibility in all of this. So we've talked a lot about the importance of partnership, what successful partnership looks like, where to start. I would characterize this, perhaps incorrectly, as an early phase of the AI era.
Are there any lessons learned at this point, or parts of partnerships that haven't gone as well, that you would think about differently the next time? I'm going to have to think about the full question, but we are not in the early phase of the AI era. Amazon has been operating in machine learning for over 20 years.
When you look at what you're selecting within your cart and it says, customers may also like: we've had machine learning models for decades, and generative AI is a new iteration on top of that. The transformer models that are driving what is happening in the field right now were established in 2017. It's really the scale at which you can operate in building these things, and the amount of data that has been accumulated, that creates these emerging capabilities.
So we are now hitting the inflection point of adoption, with people moving from POCs and interest into integrating it within their use cases. As far as lessons learned in partnership, I think it's important when you're thinking about AI and who to partner with. This is where I'm just going to be stream of consciousness: I think the important thing is thinking strategically.
Get in and play. Get your hands dirty. See what things are capable of. Understand the difference between the hype and the hope. That is definitely important. But if you're looking at a partnership and you're only looking at things from a very tactical perspective, what is this one component that this may accelerate,
you're not thinking big enough, because the organizations we're working with are integrating generative AI into every component of what they do. So where can the type of content you have help support them, and where can you intersect? Part of why I'm here is that evidence-based research comes up in pretty much every conversation I have, in every part of the business I speak to. So it's really important to get it right and think about where you can intersect with those organizations.
That's where you fit within a bigger picture. But there's also where AI fits within your organization; people are looking at it in financial terms, operational terms, as well as scientific and enablement terms. So I would say think big and think strategically, because this isn't going away. I want to push back a little bit here. We are in the earliest possible phases of AI.
You're right, I mean, I ran a data science company for five and a half years; this technology is not immature. But the possibilities are new. You can do new things. You're looking at things that are human-level capable, but you can do them orders of magnitude cheaper than humans do them today. So you can do a lot more.
Like I said at the beginning, there's going to be thousands of times more science done once AI does science, because the marginal cost of a scientist is very high and training a scientist is very expensive. Once the AI is trained and has figured it out, adding one additional unit of them is just a little bit of compute. So there's a lot of strategic thinking to do, like you said.
But there are also these science nerd meets computer nerd meets Balvenie DoubleWood single malt scotch conversations, where you sit down and figure out what's possible out there and where you could go. It's going to look very different. I have no monopoly on how it's going to look different. But the idea that things are going to be orders of magnitude faster, or more, or cheaper:
it really creates a lot of weird opportunities that probably no one has worked out yet, and I encourage you to think down that road as well. I will go ahead and concede that we're at an inflection point. Everything that's led to this point has been going on for quite some time, but we're absolutely at an inflection point, especially in where adoption is happening, the number of use cases, and how pervasive it's going to be in the field.
Yes, it's scotch, not vodka. Well, if we were talking geologic time... So I think the question was lessons learned. Was that the question? In the early phases of licensing material, I think there have been some lessons learned. I mean, ten years ago, I remember Watson was on Jeopardy and IBM was licensing some of the medical content from Elsevier.
So those things have been happening. Interestingly, at that time, I think people saw it as a curiosity, but now it's really real to us. Last year, Wiley announced some AI deals, and a bunch of the society partners were like, wait, what? What just happened? I think Wiley learned that the society partners are going to have a lot of interest in that.
And we learned that we should be taking proactive interest in that. So we've been, and I'm sure all of your partners have been, doing this: saying, hey, what's our understanding of the contract we signed over 10 years ago that says you can license our materials? What does that mean for AI? We're realizing that if you're renewing a contract, obviously you're going to put some more wording in there.
But even in the meantime, we're having to have these conversations about: where are you going? Where do we want you to go? Where do we want to participate or not? And then our authors also were suddenly like, wait, you're doing what? And the response is, well, if you published under a Creative Commons license, don't you expect that?
That's the idea: people are going to leverage what you just published with technology and build on it. But we've had some initial surprises, and then we have to recalibrate and say, this is the world we're in now, so we have to think about these things. So those are lessons learned; we should have had those conversations earlier, but we're having them now. Fully agree with that.
I think communicating to all your stakeholders in this time of uncertainty is critically important: why are you doing this, and what are you doing? Take very little for granted. And it's tough, because you're trying to move at pace, you don't have all the answers yet, and you don't want to say something that's wrong or going to get you in trouble.
So I think that communication, that dialogue within the partnership, is a lesson we definitely learned. I'd also say that we're all up here talking about partnerships and the value we've gotten out of them, but they're also really hard. They take extra effort. They take figuring out how to make the other person successful, learning how to say you're sorry,
and trying to figure out how to move forward. So I would say: experiment, but pick a few really strong partnerships that you want to invest in, because with a partnership done halfway, done without the endorsement of the organization, you're setting yourself up for a lot of work. So how do you pick the few places where you can go deep? That would be another piece of advice. I accept your apology.
So, Josh, you mentioned uncertainty. Sorry, sorry to interrupt. It did take us six months to get our partnership with potato through our contracting process. That's why Ryan's accepting my apology, because that's hard. You're like, well, wait a second:
who are you partnering with? How much insurance does potato have? That had to be a conversation. Does potato have enough? They have more now because of us, $5 million per instance. There are hundreds of little things that can get in the way of the big strategic ideas too. Yeah, so, Josh, you mentioned uncertainty.
Obviously there's a huge amount of uncertainty in the space right now. Partnerships, it seems, and contracting are one way to create certainty, or at least legal frameworks between two or more parties that set some boundaries around how AI will be applied. Are there other ways to think about uncertainty, or to manage through it at this stage of development?
Yeah, I think building in a review cycle, whatever the time period is, quarterly perhaps, is really important. It's easy to come together at the start with these great visions, and then everybody goes back to their work and you start to diverge again. You have to come back and say, let's check in. How are we doing against our aspirations, our metrics? Share not just the hard things that are contracted, but the soft things that we're learning.
It's really the soft sharing and the soft learning that's at least as important as the hard stuff. When we're all running fast, all busy, all worried about the future, all trying to do a bunch of things, it's easy not to nurture those partnerships. So I'd say build in the nurture cycle. We spend most of our time emailing each other like, can we catch up?
Wait, when can you meet? Oh, maybe next week, right? If we just had a monthly meeting, we probably would talk five times more. So anyway, building in the human connectivity so you don't lose it would be another piece of advice. Two things come to mind, playing off the last question a little bit. Culture matters:
who are you working with, and are you culturally aligned? Do you have the same incentives? One way Amazon helps create that vision, which we use in partnerships to help set the stage, is the press release and frequently asked questions. We write that document internally for everything we develop, but for things we work on externally, we create that vision statement as a living document with the organizations we work with.
This is our vision, this is where we want to go, this is how we get stakeholder alignment. The press release is one page; the frequently asked questions become pages. That way, everybody has a common source of truth for what is happening, how we are building, what we are thinking about, and where we go from here.
It removes the confusion of multiple conversations, where this person, when you talk to them, says one thing, and another person over there says something else. How can you have a common framework that you're building from, with regular check-ins to make sure things are operating as expected? OK, well, I know we want to leave plenty of time for audience Q&A. So at this stage, I'd just love for each panelist to share one or two takeaways for the audience, or anything you feel we didn't cover that you really wanted to make sure we discussed.
Who wants to start? Yeah, I said it at the beginning: we should be proactively having these partnership conversations. An addendum to that is we've got to be educating each other on what's going on. Maybe we can carry it on in the Q&A later,
but I still have questions about how Wiley is thinking about AI for search versus AI for generative outputs, the differences between them, and how that blurry line is evolving. So: the importance of always checking in with your partners and updating the definitions of what we mean by technology terms. And again, checking in with your partners.
I would say that the biggest risks are around failure of imagination, and that you guys are probably at the peak of your leverage right now. If you don't change what you're doing, everyone else will figure out ways to maintain their position in the ecosystem and continue to try to squeeze everyone else as best they can.
Josh does a good job of squeezing me. It's good. But you are probably at or near a high point of your leverage, and you should take advantage of it. What I found really interesting, just talking with the panelists in prep for this and today, is really understanding each other.
Reaching out and being proactive, getting a better understanding of where each of us is coming from, what our understandings are, what our own business models are, and how we all fit together: I think we've all had some lessons learned just in our own conversations that everybody can benefit from. So be proactive in reaching out. I'm around for the rest of the evening; please come talk with me.
Understand where each of us is coming from. What is it you're thinking about doing? What is it you want to do? What are the fears? What are the big questions? Be proactive in addressing them, and understand what it is that you want to do in this field and can do. I agree with everything the panelists have just said.
I want to ask everybody in this room to do something that is incredibly unnatural, which is to run toward danger. There are risks, and the risks are there whether we like them or not. There is uncertainty. What is our business model? What happens if?
Are people going to stop subscribing to our journals? Are people going to stop reading articles? What if the article is no longer the artifact of research, and research just gets published on a preprint server and carries on? There are all these unknowns, and those are risks, and it feels like danger. The human response is fight or flight, to run away from danger.
And I think the single biggest lesson Wiley has had in the last two years, as we've really engaged in the AI era, is that leaning in has helped us feel like we are actually more in control and has given us more confidence. While waiting and seeing feels like the safe play, leaning in, taking action, experimenting, and learning from partners has actually been the safer play than not acting. So run toward danger, despite what your hypothalamus, your lizard brain, is telling you to do.
Excellent. Thanks, everyone. I'm sorry, we're a little bit microphone-challenged today, so we'll just share that one; I've put the microphone very gingerly in the stand over there. If anyone has questions, please queue up. Well, first of all, really good session, and you are a real model of good collaboration
and how to do this as individuals. What I'm curious about is that in most cases, when these kinds of partnerships are being explored and formed, et cetera, we're talking about individuals taking the initiative to contact other individuals. Can you speak to where you may just hit an organizational obstacle, where, holy cow, I'm trying to do the right thing and I can't?
Don't start with legal. Yeah, I have been caught by that. You said six months; that can be good. Find what it is you want to gather, find that cultural fit, find those people, set that vision, and get the stakeholder alignment, then go to legal so that there's clear understanding and you're all coming at it together with: no, this is what we want to do,
this is what the outcome is, and this is why we are doing this, and get legal to follow. That is probably the best way to go. If you start with legal, with how do we manage our different contracts and finances, then there are going to be 20,000 reasons not to do it. I do need to commend the Wiley folks. We're a two-year-old startup. We were two people at the time, we're more now, but we hadn't raised very much.
We had raised like $1 million; we were two people when we started the conversations. Now we're marginally larger, but still likely to go out of business, statistically. That's a lot of reputational risk internally that you have to eat a little bit too, if you want to make that leap. So I just want to commend the Wiley folks on doing that.
I'd say, two: sell the program before you sell the partnership. What I mean by that is sell the need, establish the guardrails, and then bring the solution. If you show up and say, I want a partnership, I want to do this, whoa, you set off a bunch of antibodies. But if you say, here's this challenge that we have, we're trying to figure this out, you're describing the square hole before you bring the square peg. So with the AI program we talked about, before we started any licensing, we established with our board a set of guardrails: we will only license within these parameters, this amount of archival content, under these terms; anything outside of these guardrails we will come back and consult with you on; but within these guardrails, we will run at the pace of the market.
So we were able to create the shape of the hole and then put the peg in. If we had shown up with a peg, everybody would have guaranteed that it was not a fit for the available holes. I've stretched that analogy too far. But yeah, we've had a similar experience on the society side: establishing guardrails for where we can operate, and what principles we're not going to bend on, has been very important,
and doing that at the board level. We also have a council at AGU, which is really large, like 120 scientists, and it's not the best place to have the conversation about how to be nimble and act within a certain set of guardrails. You still have to interact with those constituencies, and you can learn from them in other conversations, but it's not the same type of conversation.
So navigating the governance is challenging, but having a board that is ready to act can get you over those blocks. OK, in the interest of time, I'll try to read my question quickly. I'm David Sampson from the New England Journal of Medicine. Lisa, you said, how can we get to the next level of insight for customers?
And Ryan, you asked how publishers become more than a data set. My question, and I've already asked Josh this in the past, and it can be a yes or no answer without going into detail, is: should publishers be doing more to enrich their content for AI, for different personas, for different use cases? Yeah, so we were having a bit of a discussion before. In the past, you generated the text and the literature for users to read; now it's not just an end user, it's also a machine that takes extracts of it and produces synthesized results for an end user.
So what does that mean in terms of the end user experience? How do you make sure you're surfacing the right information in the right context, not misinterpreting it? There are all sorts of layers where, in these early days of AI, things can go wrong. So how things are generated and how the AI operates matters a lot, but much of it comes back to what is in the data and how to properly represent it.
To be honest, most organizations do not have good AI-ready data. There are LLMs out there that can help look at databases and models, do metadata capture, understand what the fields are, and do data harmonization. That takes a lot of work, and then you are relying on these models to do it effectively and accurately. So I'd say there's work in understanding what that end customer experience is.
The biggest questions I get: what are the genes? What are the molecules? What are the compounds? What are the diseases? What are the relationships? What is the graph of contexts between them? And then how can you understand the context of all that, given the question? AI is getting decent at doing that.
Every model we go through is improving in that process, but so is how the data is captured and annotated, and having full lineage of it. As you think through publications, you have a methods section; there's going to be a new iteration of this field where methods aren't just captured post hoc and written down in highly summarized form. They will have full traceability of each of those components.
And reproducibility, that's the end goal: reproducibility internally and externally, for that shareable component. So how can we get to that future of a digitized approach with full traceability and full detailed understanding, and also improve insight abstraction as you look across journals and across publications? Something we didn't talk about, but now that I'm talking I'm going to keep going, is that the context of science isn't always clear-cut; there isn't always one clearly right answer.
So how can I help users understand what the areas of conflict are, when that conflict arises, and how to better interpret it? Humans are doing that very individually right now, but there's going to be a future where AI plays a role in it, and in how to properly represent that information to support strong critical thinking and reasoning skills.
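As a concrete illustration of the entity-and-relationship framing described above, here is a minimal sketch, assuming the open-source networkx package; the entities, relations, and DOIs are invented for the example and do not represent anything AWS or the panelists have built:

```python
import networkx as nx  # pip install networkx

# Hypothetical literature-derived knowledge graph: typed entities
# (gene, compound, disease) connected by relations, with each edge
# carrying provenance back to the publication that asserted it.
g = nx.MultiDiGraph()

g.add_node("BRCA1", kind="gene")
g.add_node("olaparib", kind="compound")
g.add_node("breast cancer", kind="disease")

# Invented DOIs stand in for real citations.
g.add_edge("BRCA1", "breast cancer", relation="associated_with",
           source="doi:10.0000/example.1", version_of_record=True)
g.add_edge("olaparib", "BRCA1", relation="targets",
           source="doi:10.0000/example.2", version_of_record=True)

# Query: which compounds reach a disease through a gene, and which
# publications support each hop of the inference?
for compound, gene, d1 in g.edges(data=True):
    if d1["relation"] != "targets":
        continue
    for _, disease, d2 in g.out_edges(gene, data=True):
        if d2["relation"] == "associated_with":
            print(f"{compound} -> {gene} -> {disease}")
            print(f"  evidence: {d1['source']}, {d2['source']}")
```

Keeping the source and version-of-record flag on every edge is what would let an end user trace a synthesized answer back to the validated literature, which is the trust point raised throughout the panel.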
I was talking with a large publisher, I actually can't remember if it was you or someone else, Josh, you all blend together, yeah, exactly, and they asked if they should build better embeddings against their content. The problem with embeddings is that you want to use them for something specific,
but what you have access to is generality, right? Even if you were to make really high-quality embeddings, in that space you've got William Shakespeare over here and your content over here. What you want is to say: within this narrow domain, this kind of chemistry, how do I differentiate and cluster things so that they're nearby each other?
That specificity is so tied to answering a particular question that building generic embeddings early actually buys you very little, if anything. So there's the question of how you get to a specific tool versus how you provide a layer, and I don't think there's going to be a meta layer that people want to buy as much as those very specific tools.
The best thing you guys can do is the hard enrichment, by hand, that the AI will eventually need: linking content together, tying together metadata, attaching additional data sources, and being very clear that this is a real attachment, not one that's AI-generated. That's a lot more valuable than trying to create generative AI tools on top of things.
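To make the embedding point concrete, here is a minimal sketch, assuming the open-source sentence-transformers package; the model choice and sample texts are illustrative only. A general-purpose embedding model separates Shakespeare from chemistry easily, but the two chemistry passages, whose difference is exactly what a specialist tool cares about, score nearly identically:

```python
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

# A general-purpose embedding model (an example choice, not a recommendation).
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = {
    "sonnet": "Shall I compare thee to a summer's day? Thou art more lovely and more temperate.",
    "chem_a": "Palladium-catalyzed cross-coupling of aryl halides with arylboronic acids under mild conditions.",
    "chem_b": "Nickel-catalyzed cross-coupling of aryl chlorides with Grignard reagents at room temperature.",
}
emb = {k: model.encode(v, normalize_embeddings=True) for k, v in docs.items()}

# Cross-domain: generic embeddings keep Shakespeare far from chemistry.
print("chem_a vs sonnet:", float(util.cos_sim(emb["chem_a"], emb["sonnet"])))

# Within the narrow domain, the distinction a specialist tool needs is
# compressed into a sliver of the space: the scores come out nearly alike.
print("chem_a vs chem_b:", float(util.cos_sim(emb["chem_a"], emb["chem_b"])))
```

That is why a narrow, purpose-built tool (or a domain-adapted model) tends to beat a generic embedding layer for these use cases.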
OK, I think I've asked this question of Josh before. We have a backfile that is digitized print, just a scan. It's not machine-readable, or, I mean, I'm sure there are some machines that can read it now. And we've had that question of, well, should we be doing something now to go back and make it more machine-readable or tag it better? The answer we have been getting is, well, when we need to do it, we'll know.
And the other thing is, on some of the projects we might be doing, we may be able to get information coming back our way that we could reuse later. So maybe if we're thoughtful about our licenses, we can get some returns on the investment the licensee has put into the work. Did I say that right? Yeah, if you had done that project five years ago, it would be terrible compared to the technology available today,
and you would just have to redo it. So there's some weird timing there. Generally, when you can't build or buy, or don't want to, then you partner. Right now, with LLMs and AI evolving so fast, like you were saying, that world is going to change exponentially, so magically the cost of building is going to come down and new possibilities are going to emerge.
So let's say you are halfway through the partnership, and then you realize that possibly you could build it yourself. How do you see this panning out with regards to partnership dynamics? I understand that there's access to data and distribution; those are some variables. But outside of that, how do you see partnership dynamics emerging in the future as the cost of building goes down?
This sounds ridiculous, but in an ideal world, not only does my company become the largest consumer of published literature, it also becomes the largest producer of published literature. That certainly changes the dynamics, and it changes the way that businesses and companies will interact with that content.
Like I said, you're at a period of maximum leverage right now, and you should find ways to make sure that leverage carries into the future. So, we operate across the build-and-buy spectrum, and the goal is to meet customers where they're at. The majority of our customers want to buy, because they want something that's already available, well vetted, and industry-adopted, something they don't have to maintain and don't need internal knowledge around.
So it's great, and as quick as it is to develop your own agent: how do you ensure there are no hallucinations? How do you implement your own guardrails if it is working off private data sets or in certain GDPR domains? How are you owning and overseeing that from a legal perspective? There are lots of overhead components to think about in how these get implemented, rolled out, and scaled, and in having the service support behind them.
So it depends on who your user base is and what the scale is. If it's for you, a project, and a group of individuals, of course, build your own, and then you have full visibility into the entire stack of what is being done. So there is a balance around ownership and who is developing it. What is their versioning cadence?
How fast do they respond to user requests for updates? And where is this running? Is it a multi-tenant environment in their own account? Is it something that you own, running in your account? There are many considerations at play, which is why people fall at different points on the spectrum, but they want to get running quickly, and with AI now you can get in and start testing and building POCs quickly.
A lot of organizations, when it comes to scale, unless they are quite large, are still looking at the buy side of the spectrum for trustable sources of information. And I would add that you don't want a partnership where the only thing keeping you from moving into that space is the cost of the build, right? If you think of a Venn diagram, you want Goldilocks: not too hot, not too cold.
If your Venn diagram circles are too far apart, or just barely overlapping, you don't have the cultural alignment, you don't have the incentive alignment Lisa was talking about. If you're too close, if you're like, geez, we're pretty overlapping, then you should just do a work-for-hire contract and say, hey, how about you guys build this for us?
So when it relates to potato: sure, we could start building an AI scientist, but we're a 218-year-old company. We don't move at the pace of potato. We don't have distribution into lab scientists. There are four or five things that we are benefiting from. So it's just enough overlap, but not so much that we're actually stepping on each other's toes.
Excellent. Well, I don't think... oh, OK, are you sure? One of the things that I see occasionally, and I don't think anybody in this room would do it, of course, is getting an intern to work on their AI program. Take potato: one of the reasons potato works, I think, is because I'm the youngest person at the company.
I'm 40 and I've been programming for 20 years, and that's about as junior as I think we should go. We've got people with 25, 30 years of experience, a very compact, talented team, very hard to assemble outside of maybe a tech startup. And everyone's doing IC work, so there's no manager, director, whatever.
I write code 80% of my day, right? That kind of environment is very hard to establish, so doing it yourself could work in some circumstances, but you also may need to go outside to achieve that sort of thing. Maybe one other thing too, since you're talking about being the youngest, is experience. When we talk about the pitfalls of building these tools: you might be doing it for the first time, or hiring in new people who maybe haven't done it five times over and learned from their mistakes.
And so that's where partnering can help, with people who have learned from their mistakes and can get moving faster and better. Wonderful. Well, thanks, everyone. Thanks for your questions, thanks for your time, and thanks to our panelists for this discussion today. Thank you.