Name:
Adapting Your Workforce
Description:
Adapting Your Workforce
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/9ea61be3-4a0a-4518-a65a-ce8a000872e5/thumbnails/9ea61be3-4a0a-4518-a65a-ce8a000872e5.png
Duration:
T01H05M05S
Embed URL:
https://stream.cadmore.media/player/9ea61be3-4a0a-4518-a65a-ce8a000872e5
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/9ea61be3-4a0a-4518-a65a-ce8a000872e5/GMT20241002-140353_Recording.cutfile.20241002160130941_1280x.mp4?sv=2019-02-02&sr=c&sig=cmzfogp6fp0Hgx%2FwqKfV6QKrfJS66%2FEi1pzQHWdKdQo%3D&st=2025-04-17T13%3A36%3A53Z&se=2025-04-17T15%3A41%3A53Z&sp=r
Upload Date:
2025-04-17T13:41:53.0570006Z
Transcript:
Language: EN.
Segment: 0.
OK, we're going to head into our next session on workforce adaptation. And feel free to correct me if this is wrong, but I think that we're doing a roving mic and this will be very Q&A style. So if you have questions throughout, please just raise your hand and Letty and I will run to you with a microphone. So I will hand it off.
Hi, everyone. I'll wait. I guess there's people still coming in, but yes, we're going to try to make this interactive and hopefully, we can have more of a conversation. I didn't want to have talking heads kind of just yapping away for 45 minutes. So hopefully, we can dig into the topic.
And as we said, this is about workforce adaptation. So again, another AI topic; I know you're sick of AI, so hopefully we can make this a little bit more interesting. But this is really not from the perspective of AI in the organization, or AI as you're using it for products, but more with respect to how it's affecting your workforce, whether you're adapting your organizations, and what you're doing with the changes that are occurring with AI.
But first, I wanted to introduce a couple of the people on the panel here. So we have David Sampson, who's the vice president and chief publishing officer at NEJM Group. David is a member of the executive team at NEJM Group, a part of the Massachusetts Medical Society. David works collaboratively across the organization on business and editorial strategy and policies for the New England Journal of Medicine and the group's portfolio of other publications.
Before joining NEJM Group, he worked at the American Society of Clinical Oncology, Elsevier, and Wolters Kluwer in various publishing, sales, and marketing roles. We also have Ezra, who is the senior director of strategy at Wiley. Ezra leads strategic AI-powered growth in academic and trade publishing. He works across cross-functional strategy, tech, product, and editorial teams, developing AI-based tools to assist authors, editors, and creators in developing quality novel and derivative content.
And his team is laser focused on collaborating with end users to design AI tools that augment human creativity. Prior to joining Wiley, he led efforts at Kaplan North America, notably supporting Kaplan's leadership to establish, operationalize, and expand its higher ed academic and technology services partnership business. And we also have Jeff Strachan, chief information and digital officer at IEEE.
Jeff is currently the chief digital officer at IEEE, and in this role he is accountable for driving digital transformation across the organization. He was previously the global CIO and head of digital for a midsize insurance company, leading the organization to utilize modern technology solutions to deliver more efficient insurance products and services.
His career includes various technology roles across TransUnion, CNA, GM, and Pfizer. So thank you to the three of you for joining. This topic obviously covers AI, but I wanted to be more specific in terms of AI as generative AI. We spoke yesterday in terms of AI and you had some definitions, but I want to provide a little bit of introduction in terms of what we're talking about.
So when you think about generative AI, it has a number of components that make it very useful as a tool. And that's the thing we have to remember when we think about AI as a technology piece: it's really just a tool. And as a tool, we can do different things with it. So there's a text generation component, text generation being that you can take text and create prominent topics and research from specific information.
You can have image creation, where you can create new images based on text prompts. There's video generation, so you could use what's called stable diffusion and create videos from other types of videos. There's language translation, so you can use NLU models to create translations across various types of languages.
There's code development, so you could actually use AI to develop code and use it for that kind of generation. And then there's data generation, so you could generate new types of synthetic data from existing data sets. So there are a number of different ways you could look at this, as sketched below.
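As a concrete illustration of two of the capabilities just listed, text generation and language translation, here is a minimal sketch using the open-source Hugging Face transformers library. The library and the models named here are common public defaults, not tools mentioned by the panel, and the prompts are invented.

```python
# Minimal sketch of two generative AI capabilities from the list above:
# text generation and language translation. Models are illustrative
# public defaults, not any panelist organization's tooling.
from transformers import pipeline

# Text generation: draft prose from a prompt.
generator = pipeline("text-generation", model="gpt2")
draft = generator("Peer review in the age of AI", max_new_tokens=40)
print(draft[0]["generated_text"])

# Language translation: English to French with a pretrained model.
translator = pipeline("translation_en_to_fr", model="t5-small")
result = translator("The journal accepts original research articles.")
print(result[0]["translation_text"])
```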
But more importantly, from an organizational standpoint, AI has sort of three major factors that influence us. So there's what I would call knowledge management and information retrieval; that we see as the Copilot, Gemini, in-office kind of capabilities, where you could use that tool and generate information for the organization, whether from data or from other sources. There are workflow enhancements, or workflow capabilities; that would be the one we're familiar with, which is editorial checking.
There's engineering, there's medicine, and other areas that this could be applied to. And there's knowledge creation, OK, so you could actually create new forms of information around this; that is, you create new products and new capabilities, such as, let's say, the PubMed work that you heard about yesterday, or other AI tools that have been developed out there; EBSCO has a gen AI product, and there are other similar ones.
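To make the first of those factors, knowledge management and information retrieval, more concrete: Copilot-style assistants typically retrieve the most relevant internal documents for a question before a model answers it. Here is a minimal sketch of that retrieval step using the open-source sentence-transformers library; the documents and query are invented, and this is an illustration, not any panelist's system.

```python
# Minimal sketch of the retrieval step behind a Copilot-style
# knowledge-management assistant: embed internal documents, embed the
# user's question, and surface the closest match for the model to
# summarize. Documents and query are invented for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Editorial policy: all AI-assisted submissions must be disclosed.",
    "Travel expenses are reimbursed within 30 days of filing.",
    "Production schedule: issues close on the first Friday of the month.",
]
doc_vecs = model.encode(docs, convert_to_tensor=True)

query = "Do authors have to tell us when they used AI?"
query_vec = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vec, doc_vecs)[0]  # cosine similarities
best = int(scores.argmax())
print(f"Best match (score {float(scores[best]):.2f}): {docs[best]}")
```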
So when you think about these, I want you to frame it as: AI doesn't just have one general capability. It has all these different capabilities across the board, and this is how it can affect the organization. So that was, I guess, the preamble, but I wanted the group to talk a little bit about their AI journeys.
First, I'll ask each of them: what was your AI journey to this point? Sure, first of all, thank you all for having me here, and thank you to everyone who worked to put this panel and this seminar together. So I think my journey probably feels like a lot of people in this room's journey, in that it started with a lot of uncertainty. That uncertainty then progressed to maybe some questions about what was possible, and then it was followed by experimentation, and then seeing what resulted, and then using that information to try to figure out where we could go next.
And so my journey sort of maps to my organization's journey. One of the things that we were interested in very early on, as you mentioned: I work very closely on the learning side of our business, adjacent to traditional research and scholarly publishing, more specifically in the areas of our learning and advanced content, higher ed academic content, and professional and trade publishing.
And so one of the things that we're focused on is the ways in which we can utilize AI, in all of the ways in which you can create new and derivative content, to expand our publishing footprint and increase our publishing productivity: publish more, publish faster, of course always maintaining quality. And so we started with just thinking about what the opportunities are to use AI, putting it in the hands of our authors and creatives and editors to create new and novel content.
One of the things that we learned is Gen AI does not do a very good job out of the box with creating new content. We've heard the term humans in the loop a lot. I think we're going to continue to still hear that term. And so, you know, that was one of the first things that we learned is that humans have to be in the loop. And humans have to be in the lead. And so we've expanded a number of different pilots and a number of different projects in which we have been utilizing Gen AI tools and overlaying those tools with additional capabilities and seeing where we find traction, not just with novel but also derivative content.
And so we're continuing that exploration and our experimentation in those areas, and we're learning along the way. Well, thank you. Jeff? Thank you also for having me here. I've just recently gotten into the publishing world, starting at IEEE at the beginning of the year; I come more from a traditional Fortune 500 background.
So I'm bringing that perspective into IEEE. I come at it from the technology perspective, where everybody thinks AI is a strategy, and it's not. Like you said, it's a tool, and the question is how we best use that tool in the organization to be more productive. I think that, like you were mentioning, AI-assisted technology is really where we're going right now.
Everybody has this idea; you know, with all of our engineers at IEEE, you sit in a group session and it'll be, well, let's just use AI to solve this. Well, that's not how it works. We have to understand what we're trying to solve and what the different pieces of it are. So from my perspective, I need IT people, and technology people that understand the business, so we can understand better how to insert technology into the workflow and into the stream, whether that be generating content or enhancing workflow.
Or, one of the big things now is recognizing when original content uses AI to actually produce results. Those are all new challenges that we can also use AI to help us adapt to. David? Thanks. Yeah, thanks. Great to be here and to see some familiar faces and new faces. I think it's been about three years since I've attended an SSP event.
My AI, my gen AI journey; I'll rewind. This will tell you how long I've been in this publishing business. Back in, I think, 1990, my boss at Raven Press in New York City called me to her office and asked me, you know, how is the internet going to change publishing? And, you know, I sat in the chair squirming because I had no idea what the internet was, and I couldn't go to a computer and look up what the internet was.
And so I fudged an answer, and clearly, you know, it was a horrible answer. But fast forward: almost two years ago now, all of a sudden, gen AI just landed in our laps at MMS and NEJM Group. And what I mean by that is a very large tech company approached NEJM, the New England Journal of Medicine, and said, hey, we would like to do something with your organization
related to gen AI and LLMs. And the reason we wanted to do something with you is because, you know, you are the leading medical journal in the world. You have a great reputation. Physicians trust you. You have a great brand. So we were all very excited at the organization. It was the shiny new object.
And fast forward to today: we ended up not doing anything with this company. But it did spark something in the organization, especially among our physician editors, and physicians in general tend to be very skeptical about anything new. And so, you know, it had to pass some sniff tests with them, and it's still trying to pass some of those sniff tests with our editors.
So I'll just quickly go through, in terms of your introduction: in terms of internal use, knowledge management, and information retrieval, I would say that that's very nascent. There are very few individuals who are really using AI for those applications, those use cases, on a daily basis. But what I am seeing increasingly among our colleagues, and even myself and some of our physician editors, is a shift away from defaulting to Google and doing a search, and instead going to Perplexity, going to ChatGPT.
And what I'm hearing is: God, this is better than doing a keyword search on Google. I'm actually getting, you know, some decent answers on Perplexity and ChatGPT. So I think that is something that we all have to observe, this shift away from traditional keyword search to more of a conversational type of search. Workflow tools, I think we're a bit different than... David, we're getting ahead of ourselves.
We're getting there. So yeah, let's pull this back, because I wanted to get a question up on the screen as well. Susan, can we get that question up on the screen? Thanks. I think we're going to progress through these different aspects, but the first thing I wanted the group to have a look at, and you can answer if you have access online as well, our online friends, is whether or not your organization is doing anything with respect to training on or the use of AI in your actual organization.
So have a look at that and answer yes or no to that specific point. Looks like we're getting an even answer. So we'll get back to that. But in any case, thank you, all three of you, in terms of the AI journey. I think the first piece, and you sort of set it up for me very nicely, thank you,
was this notion of, well, we all have some form of AI already available. So there are Copilots in Office 365, if you're using that; there's Google Gemini, which is released across organizational footprints; there are other tools, and each and every organization has different ways of either blocking these or making them available. I guess the first question to the group is, just thinking about this general availability of tools for information retrieval in our many organizations: how do we train and teach people to use these tools?
Not just search tools. Is there anything going on in your organizations to improve the way people are using these tools, and to make these tools available so people are able to retrieve information effectively? Sure, so yeah, we're certainly embracing all of these tools and embracing all the ways in which we can embed them into our ways of working.
But to your point about the degree to which we're making these tools available and offering trainings: we certainly started, probably like many organizations, with pilot exercises where, you know, you take a small group of folks, unleash Copilot, get some results, and see what that looks like. So we had those sorts of pilots. There are also a number of trainings that we offer through a centralized resource, a SharePoint, where colleagues can go, find trainings, and get in depth at the level that they're interested in.
We also actually have sort of dedicated investment funds for internal pilots. So say a group is interested in developing a tool or developing a solution; there's funding for those sorts of things. I would say the thing that's been most interesting to me, besides some of the formal exercises to get people on board, is really encouraging entrepreneurialism throughout the organization.
So one of the things that we developed was actually a sort of assistive authoring tool. It's a set of tools that allow editors and authors and creatives to go in and select from the types of tools that are most useful to them, maybe things that help with copy editing, things that help with summarization. And it's in a secure environment where they're allowed to ingest data and content from Wiley's corpus and then just play with it and experiment, right, and then share out those ideas, as sketched below.
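To give a feel for what one tool in an assistive-authoring toolkit like that might do, here is a toy summarization step built on the open-source Hugging Face transformers library. The model is a public default and the manuscript excerpt is invented; this is an illustrative sketch, not Wiley's actual implementation.

```python
# Toy sketch of one tool in an assistive-authoring toolkit: a
# summarization step over an excerpt of manuscript text. The model is
# a public default and the input is invented for illustration.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

manuscript_excerpt = (
    "The study enrolled 1,200 participants across 14 sites over three "
    "years. Researchers compared standard care with the new protocol "
    "and tracked outcomes at six and twelve months, finding a modest "
    "but consistent improvement in the treatment group."
)
summary = summarizer(manuscript_excerpt, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```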
But I think that the philosophy of failing fast, trying a lot of different things, and letting a thousand flowers bloom has really been, I would say, inspirational at our organization, and I think it has allowed people to have an opportunity to test, try, experiment, and then figure out what they need to learn. So I think those are some of the things that we're doing to try to encourage folks, because there's a lot of fear in this, right?
There's a lot of fear, there's a lot of skepticism, and there's a lot of unknown. And so I think just kind of getting your feet wet is really helpful for most folks. Is it good? It's on? OK, thanks.
That's really helpful. And I sort of wonder, I mean, I've heard from a number of organizations here that they've blocked use of ChatGPT or other tools. Is there a reason you decided not to block it? I mean, why are you not scared? Is it because it was in a secure environment, or what? What gave you the comfort that this was not going to blow up in your face?
Yeah, so I think it depends on the nature and the use, and on creating specific policies, guidelines, recommendations, and best practices around the usage. So in terms of being able to experiment with, as I mentioned, Wiley's data or Wiley's content, right, we've created security around that with the types of tools that we've created and the security protocols in place, guided by our tech team and our information security team.
That enables our colleagues to work confidently within those environments with our content. And so there's a level of risk mitigation there that allows our colleagues to actually work and experiment. And I think that as it relates to using other tools like Copilot, again, that sits within a sort of enterprise-wide construct that, again, has some of those security protocols in place.
But for us, we really see this as an opportunity to augment what our editorial teams and our colleagues are already doing, right? It augments human creativity, it augments human expression. People are already experimenting with those tools. So I think just bringing that to the fore and creating transparency around it makes for a more productive conversation, a more productive exercise.
That's great. Thanks. Jeff, yours is an engineering organization, and supposedly also involved in all things internet. So from your perspective, how has IEEE used Copilot and these other sorts of information retrieval capabilities, or what have you?
So we started, like many of you: when I started there, it was all turned off. It was all blocked. You couldn't get to any of the tools. And you really had a combination of marketing, legal, and the security and IT organizations saying, no, no, no, this is going to cause us trouble. But I think over the last six to nine months, there's been more of an enlightenment that if you're not looking at it, you're falling behind.
So we've created more of a capability. We didn't want to centralize this work, because, as you mentioned, we need to be able to drive innovation as a cultural thing within the organization, but we want to do it in a controlled way. So we have a governance team of IT, legal, and marketing that really looks at the business objectives and helps guide the different groups on the best way to do this.
What are the best tools? We do have some limited usage of Copilot and ChatGPT, not really for creating content, but more as an assistant that helps you research better, as you had mentioned. I truly believe that within the next five years there will not be a Google search; I think ChatGPT or something similar will own the search space.
And we need to enable our people in the organization to do that. So we've started opening those things up, putting controls around them, making sure they're managed, but recognizing that if you're not doing it, you're not taking advantage of what's out there. I think the other thing that we really try to do is store learnings from these things so that other people can use them. You will find, especially in an organization like ours where we have half a million members all trying to come up with ideas and throwing stuff at the wall to see if it'll stick,
that you end up spending energy on the same thing over and over. So we've really tried to centralize a repository of: what are you trying to achieve? What did you do? What worked? What didn't work? Fail fast and move on to the next thing, so we don't continue to invest. So we've kind of created this enablement capability that drives innovation.
So we have something from the Zoom room. OK, great. Thank you. Yeah, if I can relay their comment. Megan says: although the data is still emerging, we know that AI's carbon footprint is alarming. In my opinion, there is a disturbing lack of conversation in our field about applying AI tools responsibly.
How can we take the lead, or will we wait to be regulated from the outside? That's an excellent question. So I'll switch to that, and then follow up with you, David. I mean, are any of you looking right now, in your organizations, at the use of AI from an environmental or footprint perspective? So from my perspective, we really are involved in policy making, and we are a part of those discussions.
And then from a technology perspective, it's something that I'm very aware of. The issue we have is that it's going to happen, and it's more about what boundaries we can put around it and what framework we can apply. And I don't think there's any one organization, other than the government, that's going to be able to put some kind of framework in place. But as many of us as can be involved in that discussion, that is really the way to drive the right behavior.
Are the other two of you doing anything in your organizations, or is it still too nascent for involvement? I would have to admit that I hadn't thought of it until the question was asked. That's a good point. Yeah, I mean, I'll say, much as Jeff was saying: one of the things you mentioned was the governance frameworks around usage.
And so one of the things that we're focused on is usage: how we grant access to those tools and encourage entrepreneurialism, while understanding that there is a cost internally in how you do it. And so it's about keeping the right governance frameworks around that and making sure that we're being efficient in our usage. No, it's an excellent question.
And I think all of us have to internalize that. We also have friends online who do a lot of AI, so we'll get to you, our friends, later. But David, you were starting to talk about the information retrieval part of what's going on, the use of Copilot and ChatGPT. What are you doing in your organization to make better use of that?
So we're a very small organization relative to Wiley or other commercial publishers and large societies. You know, we have maybe 350 staff; I know that's large compared to some of the other society representatives here. And I would say that a lot of AI activity is self-directed in the organization right now. You know, you're either enthusiastic about the opportunities and the possibilities, or you're too busy just doing your day job and don't have time.
We did form an AI task force, which is composed of representatives from marketing, technology, legal, HR, publishing, and editorial, and we have regular meetings every week to talk about what should be some of the policies for using AI in the organization. And I would say that we're evolving from being very conservative and concerned about the risks of using AI, for example, uploading articles into a free version of ChatGPT and what that might mean, to starting to use licensed versions where we know that our content isn't going to be ingested, although it already has been, into these LLMs. We're lucky in that we have physician editors, one in particular, who is, I would say, an AI expert.
And then we also, going back to two years ago, launched a new journal, NEJM AI. And so we have access to some of the leading subject matter experts in AI who can advise us on how to use AI. I'll mention one use case that we're doing related to... unless you want me to hold off on content? Yeah, we'll get to the content.
I'll hold off on that. All right. You keep getting ahead of me, but that's OK. No, it's very helpful. I mean, the other aspect that I talked about was the workflow aspect of these tools, and how they can improve various parts of the workflow, the editing and other pieces.
The scary part of all this, that all of us have heard, is: do we get rid of our editors? Do we reduce our staff and replace all of this with AI? So I guess the question I'm really asking our panelists here is: are there considerations going on within your organizations to improve your bottom line, let's call it that, or reduce staff, because you have access to workflow tools or these AI assistant tools that improve editorial speed and other pieces?
What kind of considerations are going on at Wiley? So our perspective really hasn't been around replacing staff, replacing editors, or replacing and removing the role of authors in the process. We really see AI as an opportunity to support and augment what they do. In my perspective, this isn't a question about how we remove folks, but rather how we give them the tools that allow them to refocus their energy, their time, and their priorities toward the activities of the highest value.
And so one of the things that we're actually doing, as I mentioned, is looking at ways to deliver into the hands of our authors and editors and creators tools that support them. We have all this content, and we're thinking about how we can refashion and reformat content into other derivatives that allow us to reach other markets, expand access, expand revenue in all these ways, and increase our title output.
And so the tools that we're looking to create are about growing the pie, not about reducing the pie at all. And so we see that as the opportunity, and that's really more the question: how can we accelerate the work that we do? How can we expand the work that our authors and our editors do through the use of all these tools? Thanks. Jeff, I would say IEEE is not the size of Wiley.
You're more a large society, obviously. Are you looking at changing the way... I mean, I know it's not necessarily in your remit as information officer, but are you looking at the use of these tools to improve your cost base and your staff, or how are you looking at these workflows? It's very similar to Wiley's: it's about increasing products and services, not cutting costs, although we are cutting costs in the sense of getting rid of administrative tasks, to allow the organization to spend its energy on the more valuable tasks of product creation and service creation.
I think it's an interesting concept because, you know, especially coming from the IT side, historically it's been IT and business, and we each talk about each other as, oh, those are the IT guys, and, oh, those are the business guys. This is the first real thing that is initiating these discussions about there having to be dual roles. You can't just be an IT person to train a large language model.
You can't just be a business person. You have to know both sides. And this is the first time where I think we're going to start seeing that there's not going to be an IT organization and a business organization; there's going to be a business organization that knows about technology to do its job.
And this is changing everything that you look at. And it's opening more opportunities for people, not restricting them. Instead of just being an editor, you can be an editor and someone who's training a large language model, whether within your organization or outside it. You know, there's a lot of discussion internally that we have: will these large language models take away people coming to our content, or will they incent people to come to our content?
And that's what a lot of the discussion is about, and how we operate in those models. We have another Zoom room question. Oh, cool. Thanks. All right, this is from Emily: as a science organization and publisher, when we start to encourage industry-wide use of AI tools, we need to think about energy and water resources.
Additionally, we need to think about land usage. As an organization located in the DMV: Virginia has just been identified as the data center capital of the world. Do we, from the inside, need to pave the path in terms of coming up with guidelines and metrics for when data no longer needs to be stored? How are we managing these massive amounts of data to account for our broader community impact? Well, that's a hard question, isn't it?
I guess it's related to that resource question that we had previously, and I'll follow up with this question in a second as well, in terms of the three of you. I just want to finish up with David in terms of, you know, from a medical perspective, a medical organization: how are you looking at workflow and the use of these tools for your editors?
And are you fearful that there could be some significant impact if the information that's produced is wrong, or if there's something introduced in there that other physicians might see? So what is your group doing to help improve uses of it, or to say no, no use of it? So I would say that we're different from a lot of organizations, a lot of the publishers here, in that we are a very low volume publisher.
We probably publish fewer than 400 original research articles a year across our four journals. Most of them get published in NEJM, so we don't have a problem with integrity and paper mills. Our editors do some very quick scanning of abstracts and a lot of desk rejects. And the tools that we have looked at have yielded a lot of false positives; we don't think that they're good enough to spot a paper that's been written entirely by AI.
We've had, in all of 2023, maybe a few dozen submissions where the authors have acknowledged the use of ChatGPT or other AI tools. And, you know, even if someone did write a paper entirely with ChatGPT, if the research was good, we accepted the paper. By the time our copy editors and manuscript editors get their hands on it, it would be completely rewritten in the NEJM style, and you wouldn't know that
it was written or aided by any AI tools. Yeah, that sort of hands-on work that you do has a very different effect than at maybe other organizations that have editorial at a very specific, small, granular level. Yes, please. I wanted to probe a little bit more on the workforce piece, because I think with any industrial revolution there is an impact and a reduction in workforce.
So mechanization, mass production, the information age: you will see certain skill sets fade, no longer needed, as you're replacing them with AI. And so, since this is about adapting the workforce, what are you doing to ensure that you're augmenting, that you're helping staff? But there are staff who are very hyper-focused on a task that AI will now do for them. How are you upskilling them and finding them new roles in your organization?
I'm happy to start that. So, what's been fun about this process for me is that, as we've been developing tools, and just as context, as I mentioned a little bit earlier, a lot of the tools that we're developing are around how we expand our publishing footprint and how we increase our publishing productivity. And so a lot of that is about working closely with our editorial teams and thinking about how we are able to utilize content that already exists and refashion it, reformat it, expand it, increase access, et cetera.
And so the tools that we're actually developing, we've been developing them collaboratively with all those editors on our editorial team and our editorial staff. And so they're actually providing input into how those tools can help them do their job. And then subsequently we're training them and they're getting, you know, sort of training from either Microsoft or training from any of our other partners on how they can best utilize the AI tools to do their jobs better.
And so for us, really, the focus is on how anyone at Wiley can find and expand their own professional capabilities and their own skill sets through the use of AI. So the way we see it is that the world isn't going to be a case of those who do AI and those who don't; rather, the victory will really go to those who use AI to do their jobs better.
And so the way we're thinking about it is that everyone at Wiley has an opportunity to use AI. And by building those tools with our editorial teams, we're building the tools that make sense to them, the tools that they can actually use. And if we do our jobs right, we'll actually build the tools that expand the pie and make their roles sustainable. Yeah.
I think along the same lines: I think you can take people who maybe aren't as deeply skilled in certain areas and make them productive in those areas by leveraging AI. So I think it can open the job market up to a wider set of people, and generalize some of the capabilities more around culture and work style and engagement rather than specific skills, because
gen AI and other AI and machine learning tools are going to be able to provide, at people's fingertips, some of the maybe more specialized things that you need to know. So I think it makes for a bigger workforce. Now, with that said, you're right: the days of somebody doing grammar correction, if that's all you're doing, that's not going to be here. But being able to do something that you couldn't do before, because a product or a solution has been built, allows you to look at other opportunities.
And I think that's the way we look at it. I would just say, and maybe this is a little controversial, that I'm not going to project whether there will be net job gains or net job losses because of gen AI. It is forcing us to think about, you know, our marketing organization and skill set: do we have the right skills in the organization? What do we need to do to either upskill colleagues or bring new talent into the organization?
Customer service is another area. Finance, editorial. You know, there are so many opportunities that are operations and workflow related, where AI can improve things, make them more efficient, reduce costs, and hopefully enable staff to do more fulfilling and rewarding types of tasks. But let's be sober about this: it may mean that there are some colleagues who, for whatever reason, aren't the right fit for the future.
As an organization, we will do everything we can to make sure that they either find a place or leave the organization in a respectful manner. Go ahead, was there another question? Do you have an online person? There are two comments online. Oh, lovely.
OK, they're somewhat related, so I'm going to read them both. OK, from Matthew: I wondered about the panel's thoughts on AI use by bad actors, not just in creating articles, but potentially also nonsense data, methods, code (things we use to indicate trust or research integrity). The following comment is from the Wiley group watch: adding to Matthew's question above, I would also like to hear how we can prevent the damage done by nonsense data.
In the keynote yesterday, reputation was brought up as the main mitigating factor preventing bad-faith AI usage. But by the time reputational damage has been done, how much of the nonsense data has been integrated into a model? OK, well, I'm going to adapt that a little bit, since this session is related to workforce; I'm going to adapt it in terms of our workforce.
Are you doing anything in your organizations to train people, or give them the skills or tools, to recognize bad actors using AI? So I'm sort of flipping it and saying: what's going on in your organizations to account for this in your editorial processes, or is it too new and you haven't done anything yet? Yeah, so I don't think it's too new.
And I think one of the key pillars of Wiley's efforts in AI is deeply embedded around integrity. And so much of what we're doing is trying to think about how to create the systems and the tools and put those in place, in some cases leveraging AI, to safeguard against bad actors in AI. Things around disclosure, things around tracking and transparency, things around AI contribution to the work or the content that's developed.
And so in many cases, I think it's also about maintaining some of the existing processes that were more human-centric, around plagiarism detection or any sort of copyright violations; all these pre-existing, I would say, human-centric processes and tools that we had in place, coupling those with AI-centered tools that enable us to detect bad actors and then have processes in place to address them.
So I don't think it's too early. It's something that we're deeply focused on, and I think it's something we're going to continue to have to focus on in the future. So with that, I mean, we talk about this all the time. It's something that is front of mind for everybody in the organization. My belief is that you're only going to be able to apply technology to find these things.
They're going to get better and better. So the detection technology, whether that be AI or some other form of detection, is the only hope we have. It's the same thing we look at in the security space: security attacks are getting more and more advanced, and the only way we can keep up with them is by applying AI and other kinds of technologies that can do that.
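As a toy illustration of one naive signal that such detection tools build on: machine-generated text often scores as more predictable (lower perplexity) under a language model than human prose. Here is a minimal sketch using the open-source GPT-2 model from Hugging Face; it is crude, prone to exactly the false positives David described, and not any panelist organization's production detector.

```python
# Toy sketch of one naive AI-text detection signal: perplexity under a
# small language model. Lower perplexity (more predictable text) is a
# weak hint of machine generation. This heuristic is crude and yields
# false positives; it is illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return float(torch.exp(loss))

print(perplexity("The results demonstrate a statistically significant effect."))
```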
So I think it is a combination of being aware and then putting it into the DNA of your company that you have to do it. I'll just quickly say that, you know, at our journals, and NEJM in particular, we have statistical editors that have been with the journal for a very long time. Now, you know, AI is a different animal in terms of the amount of data
and the type of data that would get submitted with an article. And so a journal like NEJM AI is going to require a very special type of editor who can look at the data, interrogate the data, and make sure that the data has integrity. Are there going to be times when something slips through? I guarantee it. You know, there's only so much that tools and humans can do.
And I think AI is just going to get better. And if you have bad actors who really want to pull one over on journals, it's going to happen, and we should just expect it to happen. Thanks. I'm going to change track a little bit. The other aspect that I spoke about with regard to generative AI and its application, and a lot of us here are publishers involved in publishing,
is this other aspect of licensing content to LLMs, the availability of this information in a wider scope. What are you doing in your organizations to equip an understanding of how to license this? Are there groups that are thinking about the legal, the copyright, all the IP implications of the content that you would be working with LLMs on?
And so I'm not asking what the process is, but is there work going on internally to think about this, and how are you upskilling folks to deal with that in the future? I'll start with David first. Yes, so we would just like to do a deal. And so we have appointed an individual in our organization, working with an external consultant, to reach out to LLM companies, large and small, and insurance companies in the health space, to have conversations around content licensing.
And we've learned a lot in the past six to eight months by having these conversations. One, you need to start with just counting the number of tokens in your content corpus. And what we learned is we really don't have that much compared to what some of these big LLM companies are interested in. We've also learned that some of our content is more valuable than other types of content.
Original research, for some of these big LLMs, isn't as valuable as, say, case reports or review articles. That was interesting to learn. There have also been concerns raised among colleagues that if you license content to these companies, are you just going to be cannibalizing traffic to your own website, because they're going to be able to serve answers that are as good as, or perhaps even better than, your own website? But you're not going to learn anything unless you reach out to these companies.
And it's hard. You know, I think we can get conversations, we can open doors, because of our brand. We're the New England Journal of Medicine; these companies want to talk to us. I think it would be harder for some other types of journals and brands, and some are part of larger organizations. So we'll get to Wiley.
But I guess for IEEE: are you considering this kind of path, and what are you doing? Yeah, we've had a lot of conversations with the different organizations, but the problem is they all value content differently than we do. They value content in tokens, in word count. All of our content isn't equal: our standards
things are maybe more important than historical data that we have from 20 years ago. But that's not the way that these LLMs do it; they really are about the amount of content. And I think over time, working with us, they will start to learn what is more important content and what is not, at least to us. But I think that's one of the struggles we've had: the value we think we should get for our content versus what these companies are willing to pay doesn't really line up right now.
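For a sense of what David's first step, counting the tokens in a content corpus, looks like in practice, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer. The encoding name and the plain-text file layout are illustrative assumptions, and counts will differ from one vendor's tokenizer to another's.

```python
# Sketch of the corpus token count that licensing conversations start
# with. Uses OpenAI's open-source `tiktoken` tokenizer; the encoding
# and the "corpus" directory of plain-text exports are assumptions
# made for illustration.
from pathlib import Path
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by GPT-4-era models

total = 0
for path in Path("corpus").glob("**/*.txt"):  # hypothetical plain-text export
    total += len(enc.encode(path.read_text(encoding="utf-8")))

print(f"Corpus size: {total:,} tokens")
```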
And I'll just echo that: Wiley similarly has had lots of discussions, and it's public knowledge that we've struck some agreements with some of our licensing partners in the licensing of content for the LLM space. We see participating in the shaping of these LLMs as critically important, for us to be a part of the conversation and be at the table, as David mentioned, to learn what is important to some of these partners, and then also to be a part of shaping what these agreements look like, what usage of this content looks like, and how rights are distributed across all the stakeholders within it.
I think, as David and Jeff both mentioned, all content, all tokens, aren't considered equal. And so I think for us it's about thinking about how to go about these licensing agreements smartly, and ethically of course, and then getting to a point where you can be strategic about the types of agreements that you're creating and where your content is best utilized or has the highest value.
And thinking about what sorts of industries or which types of applications, broadly or specifically, you'd like to direct your content at. Can I just say one thing? Yeah, absolutely. You know, don't ignore this one: don't ignore the long tail of licensing opportunities.
What I mean by that is, we're seeing a big uptick in the number of researchers at academic institutions who are doing AI research, and they would like to properly license either all of our content or slices of our content to do that research. Our licensing lead cannot negotiate all of those deals individually. So think about ways that you can license that content efficiently without having to do one-by-one agreements.
Yeah, let's have the questions. OK, let's go. So my question is around the sort of user experience of AI. I'm very annoyed now when I use Google and I get the AI summary at the top, because I tend to read it and not scroll. And since I don't know if it's Gemini that doesn't know how many countries that start with a K exist on the continent of Africa, I was like, I should probably scroll.
And it's taking me back to the conversation we had in the 2000s, with libraries being so frustrated at the Google search bar, right? And we can't get our users to use these advanced search tools we're licensing, because of the Google search bar. So I'd really like, I mean, it's probably too late in the session, but I'd really like to have a frank conversation about the fact that the researcher interaction is going to be the same.
Like, why do any of us think researchers are going to be clicking through to our platforms when they are increasingly using and expecting this kind of synthesis to happen with an overlay that we don't control, that doesn't push anybody back to our content? I'd love to have a conversation about, if we can't monetize the behavior that's coming to our platform, let's talk about the consequences of that
in 10 years. I'd really be interested in thoughts from everyone on this. Can I say something about that? Yeah, go for it. So I've thought a lot about that in the context of our strategy and our websites. And, you know, this is my opinion: I think 95% of journal websites stink in terms of the user experience.
The search is horrible. You know, why would a user come to your website regularly and have a horrible search experience, customer experience, user experience? And I think it's incumbent upon publishers to stop blaming big tech companies for eating our lunch and siphoning and cannibalizing traffic, and to think about how you can deliver a better user experience on your websites.
And it's not always about just delivering your content. There was a recent Outsell report talking about a new Dyna AI product in the health space. And the takeaway for me was, they said, this is an opportunity for publishers and content providers to partner with solutions providers to help, in this case, clinicians get multiple jobs done in a single session, without having to exit one program to access content or vice versa.
So I think the pressure is on us to figure out how to deliver a better user experience. I would just quickly say: it's going to happen. The reality is, it's there. So it's thinking more about models that can still be productive, and some of the conversations we've had about sharing in the monetization of driving traffic to our environment.
But the reality is, this is going to happen. And we all used to send our content out in printed form or on a CD or in some other fashion, so I promise you, that's already uploaded. It's already out there. So they're just going to get better at answering the questions. And what it will come down to is: is there value that we can still provide beyond that summarization, beyond what's going to happen?
And I think that's what we have to look at, because there is no stopping the ChatGPT evolution; it's just, how do we monetize that ourselves? That's what I'm going to interject on in a second. I'll get to the next question. But I also wanted to clarify that, I mean, yes, it's going to happen. Right, and Heather even said it: we try to find information from publishers and it is barely findable on websites.
So we all know that they suck, sorry. But the reality remains: what are you doing in your organizations, and you don't all have to answer this, but what are you doing in your organizations to set up for this? That is, are you training people to understand that we need to do this? And how are we going to make our content available to make this work?
So what it ends up being for us at Wiley is that it places an emphasis on things like getting closer to the customer and understanding what customers need, and creating the mechanisms and the capabilities within the organization that are designed to actually get feedback from customers, whether through focus groups, surveys, or deep in-depth discussions, and get that market insight that helps you identify the unique value proposition that we can provide that, say, a generalized LLM response on Google cannot provide.
Right? What's the value in the flow of work that you can provide, specific to that industry, specific to that set of professionals? And how can we control the user experience at the point of need to create value above and beyond, as Sarah was mentioning, the Google AI summary at the top of the page?
No, it's perfect. And if you remember, we had a whole market research presentation yesterday, so that may be of use to folks. And I would also say that, if you think about this, Dyna AI is a good example: it's taking clinical decision content and making it available through an AI engine and LLMs, and making it specific for the use case, so the physician can find an answer to a question that they're asking of the AI engine.
That's a very specific use case, and I think that's what David's talking about in terms of adapting the information to what's going on. But you have to have the research around that. But I know there's a question here. Thanks, Simon Holt from Elsevier. So I want to return to the theme of workforce that we were talking about earlier.
And I feel that one of the defining trends that we've seen in our workforce over the past 10 years has been the rise of conversations about DEI, especially equity, and I'm really proud in particular of the work that SSP has done to advance this. Over the past couple of years, I've seen more and more AI being used in hiring processes, in talent management and performance, et cetera, and I'm really quite concerned at the impact AI is having in terms of, basically, I suppose, eliminating affirmative action, especially in demographic areas like sexuality, disability, neurodiversity, and social class, where it's difficult or impossible to have metrics around these things.
Right? Our industry is great because it's about ideas, and our industry is great because it's about bringing together diverse perspectives that make great decisions. So I'd be really interested in the panel's opinion as to how AI can be a contributor to ensuring equity and affirmative action, as opposed to taking away from it and undermining efforts to advance equity within our industry.
So I'll start, because I co-lead a council at the Massachusetts Medical Society, and right now we are not using any AI applications in the screening or the hiring process. As I mentioned before, we don't have a lot of employees. So this is a case where I think it still requires humans in the loop, you know, senior leadership and managers, to look at the data on your workforce and make sure that you are exceeding benchmarks around diversity.
You know, for me, I can't think of an AI application or a tool that's going to help me or the senior leadership team figure out a way to help those from underrepresented groups rise in the organization. What I see in our organization is we have great representation; it's not so great at the senior leadership and at the management level.
And so I think that takes humans talking to each other and figuring out how you mentor those individuals, so that you do have greater representation at the management and senior leadership level in the future. Yeah, I would say, and some of this is opinion and some of this is what we're doing, but we talk about it all the time, and continuing to talk about it and bringing awareness to it is going to have to happen. Today, a lot of the tools out there, HR tools, whatever you're using, let you turn AI off and on.
I will tell you, in the next few years, you're not going to know whether AI is running in a solution that you have or not. And if you're relying on these tools to make decisions for you, there's always going to be a bias somewhere. So you have to keep people in the loop, and you have to talk about it, and you have to think about it. The other side of it is, as much as possible, we try to be involved in the conversations around ethical AI, because we can't change the way OpenAI does their modeling.
But we can talk about what is ethical and try to push that into the industry, and I think that's the behavior we have to continue. I'm going to ask a hard question of you, because you're at Wiley. I mean, Wiley is a super large organization, right? Multinational. And HR is not something that is going to sit in one part of your organization directly, right?
HR is going to sit at a very broad level, and they're going to have lots of tools in front of them that are available. I mean, from your sense, do you have an impact in terms of the tools that HR picks at Wiley? How can you have an impact, in a larger organization, whether it's Wiley or an Elsevier, in making sure that the tools that are available are ethical from an AI-use standpoint?
Sure, so I'll just answer it directly in terms of where I sit: no, it's not my remit to determine the HR, people tools, and performance technology in use at Wiley. But Wiley does have a focus on diversity, equity, and inclusion, and has a function that supports it, has input, and provides guidance and direction on people issues and talent organization at Wiley.
And so I think having representation in the decision making around the types of tools that we select, and around the way that we implement those tools directly at Wiley, is important. As you mentioned, we don't determine how OpenAI builds its tools, or any tool, really. But we do determine how we customize it and how we implement it when it comes off the shelf at Wiley.
So I think it's about having representation at the point of decision making, and having representation throughout the implementation and execution of any sort of people-centric or people-focused policy, whether it's hiring, recruitment, retention, or people development. And I think it's important to remember how these tools work.
They work based on data, and a minority population is going to have less data, and so there's going to be a bias that gets built into the tool. Now, we all have to accept that, and we have to put things in place to deal with it. You can't fix the fact that there's less data about some groups of people and more data about other groups of people, because by definition that's what a minority group is.
So we all have to just recognize it and make sure that we put processes and awareness into the organization to deal with it. Well, I'm getting the chop at the neck: I'm sorry, we have to end it. This has been a great conversation. No, no, it's OK. So, thank you.
I'm sorry I couldn't get to other questions, and I thank my panelists for presenting interesting perspectives on where AI is in the workforce. If you want to have other conversations, we're happy to have those later on; get into the chat and we'll answer those as well. Thank you again.