Name:
The Other AI: The Role of Actual Intelligence In the Future of Scholarly Publishing
Description:
The Other AI: The Role of Actual Intelligence In the Future of Scholarly Publishing
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/6c655119-b638-4689-9cf7-b48774b2f440/videoscrubberimages/Scrubber_1.jpg
Duration:
T01H00M25S
Embed URL:
https://stream.cadmore.media/player/6c655119-b638-4689-9cf7-b48774b2f440
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/6c655119-b638-4689-9cf7-b48774b2f440/session_1a___the_other_ai__the_role_of_actual_intelligence_i.mp4?sv=2019-02-02&sr=c&sig=vtD5MZqAYli4lWf3zmOuawAWuu6ELbH0%2Blx%2BKk9dJuE%3D&st=2025-04-29T21%3A55%3A58Z&se=2025-04-30T00%3A00%3A58Z&sp=r
Upload Date:
2024-12-03T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
All right. I got the thumbs up. So we're going to go ahead and get started. Well, first of all, welcome, everyone. My fellow panelists and I are very excited to be among the first block of speakers today. And we have a lot of engagement built into this. So keep the coffee flowing because we're going to be engaging with you as we go.
Since this is the first concurrent session block, I've also been asked to make sure that everyone has reviewed the code of conduct and that we align ourselves toward that as we dive into the discussions today. So first, I want to introduce our panel. We have Abby Arun, CEO of TNK Technologies, if I go by the order of the slides; Penelope Lewis, Chief Publishing Officer at AIP Publishing; Nikesh Gosalia, President, Global Academic and Publisher Relations at Cactus Communications.
And last but not least, Amita Snow, Director, Accessibility and Diversity, Equity, and Inclusion Strategy, Publications and Standards, ASC. That's it, I got through it. And I'm Stephanie Lovegrove Hansen, the VP of Marketing at Silverchair, and I'm excited to be moderating the session today. So second, we want to get an idea of what brings everyone to the session so we can make sure you're getting out of this what you wanted to.
We had a poll in the chat, but it's not really working at the moment. So instead, raise your hand if you just don't want to talk about AI, and that's why you're here. OK. Yeah. I mean, that's fewer than I expected. Well, that brings me to my third point, which is that we've been lured here under false pretenses, so I have to apologize, because we promised in the description not to talk about AI.
But as we were planning for this session, we realized that by talking about everything that's not AI, we are, of course, talking about AI. So we are going to dive into that today, and we thank you for your forgiveness. So to kick us off, I'm just going to briefly set the stage before we dive into our discussion. As we all know, the last two years have seen a rapid change in the role we see AI applications and capabilities playing in our lives and in our industry, and AI has now proven capable of things that we thought were exclusively human: things like art and poetry, and the voices it creates are extremely realistic. So where does that leave us? Is there more that is unique to being human than the ability to identify all the bicycles?
Because I hope not, because it turns out I'm terrible at that. So this has been a topic of much discussion in a variety of media outlets, and a recent survey of the most in-demand skills from LinkedIn still had key human skills as the most in need: things like leadership, things like communication. And in a values-driven industry like scholarly publishing, we're left to wonder not only what can't we do, but also what shouldn't we do.
And this was obviously touched on a lot in the wonderful keynote yesterday. So our industry is grounded in exactly the characteristics that AI most struggles with: things like trust, quality, relationships. Most recently in the news, we've seen these values come into conflict with AI capabilities as research papers have gotten all the way through peer review and editorial to publication with clearly AI-written passages, and occurrences like this threaten the entire underlying foundation of the work that we all do.
And I really like this quote from the CEO of Medium, where he points out that identifying issues of quality, whether that's spotty AI writing or just bad writing, which are often synonymous, is something very easily done by humans but very inconsistently by AI tools. We lost the last line break there. What are the other areas like this in scholarly publishing, and what are the spaces where we need real-life humans?
And that's what we're going to dive into today. So to kick us off and give an idea of the perspectives that our panelists are bringing, we're going to start with a question: what comes to mind, or what is the primary focus, when it comes to the idea of human intelligence in our industry? We'll start with you. Sure, can you guys hear me? Yeah, OK.
So there are many values, but the one that I think is critical is problem solving. If you think about us as businesses, we all say we do this and we do that: we either provide content to our customers, we provide services, or we provide technology. But ultimately, we are problem solvers. And that's an inherent human quality that I don't think AI will ever be able to replicate in the manner humans can.
All right. Well, I want to start by first thanking Stephanie for organizing us and moderating this session, and also the panelists. We've had some really great discussions on the way to this day and this panel. And I'd like to thank all of you for attending as well. So when I think about human intelligence, my background is in research.
Before publishing, I was a physical chemist, and my time in the research lab, while it was fairly brief, has really influenced my time in publishing as well. And I bring that perspective to all the decisions I make in my career as a publisher. And I'm sure this is true for many of you in the audience as well.
It's not a very unique career path. At the same time, in my role at AIP Publishing, I have the privilege of working with about 40 external academic editors-in-chief and another 250 or so deputy and associate editors. They really help us to shape the direction of our journals and validate and curate what we ultimately end up publishing within our journals. And that network also includes 100,000 or so unique reviewers that we rely on annually and another 170,000 or so authors.
That is an intensely human network of curiosity, intelligence, and insight that I think AI cannot replicate. So we heard a lot in the initial opening keynote panel, which I thought was really fascinating, about how AI can augment those types of interactions. But as far as replacing them, I think we'll never be able to replace that kind of human intelligence and insight that they bring.
Morning, everyone. So yeah, I would also like to thank Stephanie for some fantastic pre-session conversations that we had. We had a great opportunity to get to know each other really well over the course of the last couple of months, just like you mentioned, Penelope.
In terms of my background, I have been with an organization called Cactus Communications for over 16 years. For the first eight years, I was involved in building teams. That involved in-house editorial teams, but also freelance subject matter experts. And essentially, we provided human-led services for a very long time. From my perspective, the first thing that comes to my mind in terms of human intelligence is the person; it's the human.
There's no doubt that AI can master a lot of complex processes at work, but humans still want to be led by other humans, even if that humanity comes with flaws and messiness. Though we often think of work as in sync, perfectly coordinated, and ideally organized, in reality work is full of messy dynamics, insecurities, personal agendas, ambitions, and hopes.
And as a result, we all just show up as humans at work. So I think that is where the greatest limitation of AI is. Clearly, all leaders need to embrace AI, and you can't be left behind. But at the same time, we cannot really forget human-centric leadership in the future of work. So in my opinion, just to summarize, I think the future of leadership is AI-enabled and not AI-dominated.
Good morning, everyone. Again, like everyone: can you hear me OK? All right. I also want to thank Stephanie and my fellow panelists, and thank you all for being here. When I first saw the question, I was thinking, wow, human intelligence: definitely prioritize it over AI.
I look at AI from a DEI standpoint. And when I'm thinking about it, I realize that a lot of the conversations that are happening don't necessarily include people that look like me as it relates to the adoption, implementation, and development of these technologies. I think about the fact that this is an additional burden for many people of color.
I think about the fact that a lot of times leadership determines the tools that are going to be used by their teams. And I think that there should be a conversation, an open discussion, with the actual people that are going to be using these tools, because I think a lot of the people that are doing the work have really good ideas, but they're not being asked to be a part of the process. So I would ask that some of you consider having broader conversations within your organizations.
Wonderful. Yes. OK, making sure this is working. We do have a couple of polls in the app I encourage you to participate in. In one of them we asked, how frequently is AI being discussed or used on a daily basis in your organization? I'll share the results with everyone now. And the good news is there are very few people who said not at all.
For the most part, we're all engaging with it. Very few of us spend all day on it, but most people are spending between an hour and two hours using AI or talking about AI in our daily lives, which I feel reflects a lot of our own experiences. So I'm going to go back to Amita here and say, a lot of us are thinking about how this is going to influence not only our organizations, but us personally, in our jobs and our day-to-day: how are we going to use AI, and how will our jobs change and evolve?
How will our workforces change? So if someone's thinking about how they can find out how AI will impact their job, what would you recommend? I would recommend that they talk to their managers, find out what their managers are thinking, and work together to come up with some solutions. I also think that they should do research on their own and with colleagues, and reach out to other organizations and volunteer groups.
I learned a lot from volunteering. I volunteer a lot with Women in AI. It's a fantastic global group. I'll just do a quick PSA for them: it's at womeninai.co. A lot of volunteer opportunities there. And do some research. Don't wait for the technology to come to you. Get out and find out what's happening.
Anything to add? I would just add that, in addition to being proactive in taking control of your own career, I think it's also incumbent on the mentors and managers themselves to continuously inform and have those conversations with the folks that report to them, too. It is something that everybody is hearing about all the time. This whole conference.
Practically every session is around AI, and there's so much discussion about how it has the potential to disrupt so many different careers, different levels of organizations, and so on. So I think this is going to end up hitting almost every role in every function. So I think it's really important that managers keep those lines of communication open as well.
Yeah, I also think there's something to identifying what the things about your job are that will not change. And they say you can play up your strengths more than you can counter your weaknesses. So instead of thinking about what's going to be taken away, what are the unique strengths that you bring? And then how can you lean into that? Anything up here? OK, all right.
And thinking about that, and about the human values that we have and that we bring to our jobs and our lives: what do you think is the one human value that will never be replaced by AI? How much time do I have? So when I started thinking about this session, and by the way, I cannot be the only one who hasn't thanked Stephanie, so I'll go back and thank you for the human intelligence that you brought to getting us prepared for this session. So I ran a survey: I spoke with my friends, my family, and my colleagues, and asked the same question, what is that one human value that will remain in the world of AI? And I got some very obvious responses, and I'm not going to tackle those. I'll focus on one that I thought was quite unique.
And for that, I'll take you to 16th-century Europe. Think about the Renaissance. The Renaissance was a time when everything changed. All the conventional beliefs that humanity had held up until that point changed. We thought the sun revolved around the Earth until Galileo came around and said, no, that's not true, and there is a different approach to it. That capability of humans is called divergence.
Divergence is the ability to think differently, to challenge conventional perspectives, to look beyond what the data tells you, to look beyond what conventional norms tell you. And I think that, as a human capability, is something that AI will never be able to replicate, because AI works on data. If you take an AI model and train it on the data that was available in the 16th century before Galileo, it would have proved in its own way that, yep, the sun revolves around the Earth, because that was the only data available to it.
So the human ability to diverge, the human ability to challenge a conventional belief, is something that I think will remain prevalent in the world of AI. That's really interesting. So I think the thing that won't be replaced, that is really critical and something to always keep in mind as we're having these conversations, is curiosity.
This is something that comes up over and over again around scientific research and advancing it. Anybody who has small children, which I do, knows that humans are innately and intrinsically curious. My daughter, who's 7, asked this wonderful question the other day. We were in the car, and she just said, Mom, is light two-dimensional?
And I thought, my God, because I come from the world of physical science, I thought, she's going to be a brilliant scientist, thinking about these wonderful questions. And then, of course, I went on this long explanation, trying to recall my training in light and electromagnetism and all of these things. And then the next question she asked was, Mom, do spiders pass gas?
And that's something that we really can't replicate. AI will never be asking those questions. We might be able to train it to ask those types of questions, but I think that innate curiosity is something that humans bring, and curiosity and divergence go hand in hand. Yes, that leads to divergence. Yes, yeah. Thank you. Thank you for sharing that.
I also have a six-and-a-half-year-old, and we combined this work conference with some personal time as well. So I've had all of that happening over the last two and a half days, and I'm still kind of wondering what the answers to some of those questions are. But thank God we've got that curiosity going. For me, I think it's emotions. And I know there's a lot of conversation about building that into AI, perhaps in some form of avatar that it could take.
And you can talk to it. But, and maybe it's my personal opinion, I thrive on talking to people. I thrive on energy. I thrive on passion and enthusiasm. And I just feel the fact that we are all here, it's a sellout; everybody wanted to meet each other, though we've probably only met a couple of months back. And I think it's just that personal connection and that personal touch that we need, and I hope that's never replaced.
I don't have a kid story or anything like that; my kids are grown. For me, I would choose the word empathy. I think that is not anything that can be replicated by AI. I think it's certainly something we can all, well, I'll speak for myself, do a little bit better. Some days I'm better at it than others, because we all need to extend grace to others.
And we're only human; some days are better than others. But yeah, I would choose empathy, for sure. For sure, I like that. I feel like that's something that's been much more present in discussions with the rise of remote work, too. It's something you have to be a lot more intentional about in your interactions.
So this question we'll kick off with Nikesh. What are some of the ways that a human-centered approach will enhance the impact of research and publishing? We've been talking about this very broadly, but specifically to our industry, what are some of the things that you expect will change? Yeah, so thank you for that. I think academia thrives on critical inquiry, and all stakeholders would be positively impacted by a human-centered approach.
So let's start with researchers. A researcher who adopts a human-centered approach understands the diverse needs and perspectives of their audiences. They take efforts to tailor their communication to resonate with the different stakeholders that they talk to, whether that's policymakers or community-building efforts. They can also enhance the accessibility and relevance of their work.
And this can eventually help foster meaningful dialogue and promote broader societal impact. There's also the element of interdisciplinary collaboration. I think that's something that has grown over the years, and it's not something which can be replaced. And if we talk a little bit about, say, peer reviewers, I think a more human-centered approach there will ensure that there is more communication and interaction happening between the author and the reviewer, and the reviewer and the editor as well, which enables alignment and common discussions.
And last but not least, I think ethical considerations need to be kept in mind as well. So I think those are some of the things that come to mind. Do any others want to jump in on that? Yeah, I can. So I think research is fundamentally about people. The question is, what is the human-centered approach? Research is about people.
Research is conducted by people for the benefit of humanity in general, so it will always remain a human-centered domain. There is no other possibility, because researchers bring their own perspective when they choose a topic for research and conduct it. Researchers thrive on diverse viewpoints. The whole body of research that has led us to this point was also done by humans, for the benefit of humanity.
And if it ever goes wrong, you are the ones to blame, because you published that research. Research integrity actually requires a very human-centric approach as well. When we think about research integrity, we are all saying AI is committing all of these frauds, that you have got a system that creates fraudulent images. But AI is not doing it; humans are doing it.
So research integrity also requires a human-centered approach, looking at what humans are doing; we sometimes forget about what humans are doing. It's us who need to be at the center of this whole research ecosystem, not the AI. One of the things that comes to mind for me in terms of a human-centered approach, and I agree with what Abby and Nikesh have already shared, is to get a little bit specific about how AI can and will, and already is, augmenting some of the processes within scholarly publishing.
So take peer review, for example. We're hearing a lot about reviewers using AI to potentially help wordsmith their peer review reports, and what does this mean, and so on. And I think that time is probably coming in terms of how we can continue to enhance things like writing skills and polish peer review reports. But in that world, it becomes even more critical to have that human-centered approach and have the accountability going back to the actual human beings who are providing those insights and those criticisms and those evaluations to help improve those research papers.
So experiments around things like collaborative or open peer review, where the reviewers are identified, or where there's an identification amongst the collaborative peer reviewers themselves: I think those are going to become even more important. Those human-centered interactions are going to become even more critical as AI becomes more and more prevalent.
I would just like to add, as we're talking about a human-centered approach, that we should also be focused on creating ethical guidelines for people to follow. You talked about accountability; that goes along with responsible AI. So with all of these great tools out here, and some of my colleagues know this, I hear about AI and I cringe a little bit, because it's like, yeah, OK, but what are you considering?
Where are the people in this process? Who is it going to affect in a negative way? And I'll just tell you a really, really quick story about how AI affected me personally. A few years ago, we used Fire Sticks for some of our TVs at home. My husband grew up in the South; he has a cadence in his speech. So we got this Fire Stick, and I would tell it, give me the weather, whatever, no problem. But when he would talk to it, it could not understand him. And these are technologies that are supposed to be for everybody. Everybody should be able to use them. I could use it; he couldn't. It was becoming frustrating for him. Over time, it has gotten better. But that was an instance where I felt like, wow, where did this data come from? Who is being represented when they're training these technologies? And so, again, we just have to be broader in our outreach. We have to be mindful when we're using these technologies and think about who might be harmed in the long run. I know that it's easy, it's fast, but that's not always the best way to do things. Sometimes you just have to take a step back and think about it a little bit more. I mean, there's nothing wrong with a human actually doing a job and not using technology. So I just want to add that when you're thinking about all these human-centered approaches, they need guidelines.
And there's a lot of work that goes into that. The first part, can you just give a gist? Sure. You want me to summarize my point really quickly? They couldn't hear it in the chat. Yeah, sure. Go ahead.
No, I was kidding. OK. Yeah, so the human-centered approach that we have been talking about is primarily around how research is for humans and how research is around humans. And Amita talked about having guardrails around a human-centered approach: making sure that there is no bias and that ethical considerations are addressed in the world of AI.
Yeah, that's essentially what we have been talking about. Thank you. Yeah, when I think about how the human-centered approach changes publishing: I work at a technology company, so there's a lot of AI and whatnot involved. But at some point, the technology becomes ubiquitous. And what really makes a difference for people, I've found, is this sense of community.
And it's such a buzzword and everything, but even at meetings like this, where we come together and we share insights and we learn from one another: community, I think, is one of those things where a rising tide lifts all boats. The more we're able to learn from one another, the more it helps everyone across the entire ecosystem. Can I just add, I think Amita makes a really excellent point, and I think that's precisely why we need a human-centered approach to AI: because AI is not necessarily going to recognize its own biases that are being fed into the system.
So I completely agree that we need to make sure that as we're developing the technology, we're also developing smart guidelines and guardrails for using that technology so that it's applicable to everybody. And yesterday, we had a conversation around this, about the trust and the bias that are inherent in the world of AI. And one of the things that we talked about was that you cannot have a pure AI strategy.
AI will always have a human in the loop. You need to determine, depending on what your sensitivity is, what your risk appetite is, and what use cases you have, where the human fits in the loop. Is the human the initiator of the task? Is the human somebody who verifies everything that comes out of AI? But in that approach, you cannot look at AI in isolation. It is still structured around the humans.
It's a tool that we use to make our lives easier, to make our lives more efficient. But I don't think the human-centered approach is going anywhere, especially because of the bias and lack of trust in AI. Yeah, just on that point: some colleagues and I were discussing yesterday that probably the "artificial" in artificial intelligence should be replaced by "assistive" intelligence.
And maybe that's the more apt word for it. Yeah, and Charvi mentioned augmented intelligence in her discussion this morning as well, which I love. So we touched a little bit on mentorship earlier in the discussion. But as leaders in your organizations, putting some of the burden on the leaders to help bring people along: how should you be mentoring colleagues to best prepare them for the future?
We'll start with Penelope. Yeah, so first, and I think this is probably a very obvious point, but the first step is to make sure that your organization and you yourselves are enhancing your AI literacy. We don't all have to be experts in the exact technology and become AI experts, but I think understanding how to use the tools, what tools are out there, and what the developments are is really important.
At AIP Publishing, we actually have a separate Teams thread that is dedicated just to AI exploration, and anybody in the organization, no matter what level, what function, et cetera, can join that thread if they're interested in having discussions around AI. And it's actually one of our most active chats within the organization. It's really fun, actually, because people will just share news articles or industry reports around AI and start building on that as well.
And of course, there are other ways to do that too, that are broader than just within one organization. There are also a lot of training courses, dozens of them. So there are so many opportunities, I think, to enhance that AI literacy. And because there are so many courses, it's possible to find the ones that could actually be very applicable to your day-to-day work as well.
So I'll give a shout-out, I see Todd over there, I'll give a shout-out to a NISO course, actually, that several of my colleagues have been taking around AI and prompt design. And this is something that, again, can affect many different functions, many different roles within the organization, not just the, quote unquote, traditional technology roles.
And then finally, I think the most important thing is to enhance those skills that we're talking about here on this panel. So what are the things that actually set us apart from AI and from technology? Things like communication and presentation skills, mentoring, continuing to network, and so on. How can we enhance our emotional intelligence or our curiosity and growth mindset in a world where there are more and more AI and algorithms out there?
I think those are going to become even more important. And the folks that really have those skills and that expertise are going to be the ones who advance in the end. So I agree with everything Penelope has said. But I also think that we cannot assume that our teams do not know or do not have a lot of skills. They may be doing so many things that they're just not sharing: for one, because they may be fearful of telling their supervisor, and for another, because nobody asked.
So I think that reverse mentorship is a thing. I think that we should be talking, and doing more talking, and even more talking, so that we can all learn from each other. But yeah, of course, the responsibility is within the org, but it's also within the individual to do some outreach and find out what is really happening with their teams, and also to ensure that there is a culture where people feel comfortable enough to even talk about these things.
Because, I mean, we know that there are places where people just come and do the work; they don't want to talk too much. But we have to be open to sharing and actually listening. And I completely agree. I'll just add another perspective to it, which is that the technology is evolving such that sometimes the younger generation, which represents most of my colleagues, actually knows far more than I do. And think about it:
most of them in the coming years will be native to the use of AI. They are not going to consider it a separate technology, in the way that I don't consider email a technology. Email is not a technology for me, because I just grew up with it. So it's possible that they don't need mentorship; I do. But as a skill, if there is one thing that hasn't failed generations before us and that won't fail generations after us, it is the willingness to explore and the fluidity to adapt.
If I were to suggest anything to my colleagues to prepare themselves for the future, it would be that no matter what technology throws at you, just keep an open mind. Explore. Get your hands dirty, to your point earlier. Make sure that you don't wait for somebody else to tell you what that technology can do, because these are things that will affect every aspect of our lives.
So get your hands dirty, and then have the humility and fluidity to accept and adopt whatever the technology throws at you. In one sentence: have the ability to unlearn and then learn, and do that all over again many times over as technology evolves. Everyone summarized it. All right. So another question, which is near and dear to me as a marketing person, is around how the value of brands changes in the age of AI.
Because even now, as we see the Google AI summaries, AI is taking more and more traffic away from your website, from the version of record, et cetera. What does that do to the value of our brands? I'll give it a whirl. Yeah, so I think the brand becomes even more critical, because these different LLMs and other AI tools can be quite faceless.
And so in that arena, the role of the brand, I think, becomes even more important in terms of what sources these AI engines are using, and so on. So again, going back to trust and transparency, it's really important, I think, in that regard to make sure that the brand is strong and that we're doing the right things in terms of increasing and maintaining that trust and transparency as well.
I mean, a brand is a representation of who you are and what you do. And earlier there was a brand that was a car manufacturing company, or a brand that was a publisher, or a gas or an oil company. But if you look at how technology and AI are evolving, I think there will not be any brand that does not have a component of AI or LLMs. So I don't think a brand can avoid being impacted by it.
So every brand, every organization, no matter what we are doing, will have to consider the AI approach and the human approach and make them part of our brand story. It has to become synonymous with whoever we are. Other thoughts? Yeah, just one point to add: I think, in fact, the whole human element will probably start to become a differentiator, because there are so many cultural aspects and softer elements which can be spoken about, broadcast, and marketed to bring out the message that we are an organization which is not just all about technology or products; there are people behind it.
So yeah, I think it's the human element. So we do want to make sure we have plenty of time to engage with you all, so start thinking of your questions. But really quick, I wanted to review the final poll in the app, in which we asked what areas of scholarly publishing you found most benefit from or rely on uniquely human involvement.
And there is definitely a theme, and I bet we can all guess it. It is peer review, peer review and quality, which really comes back to the theme of this entire meeting. We do have some around mentorship, things around nuance and sort of making judgment calls. There's a lot of that. And of course, those are the bedrock of our industry in many ways. So we do have someone monitoring the app for questions from our virtual attendees, as well as anyone in the room who doesn't want to raise their hand.
But we are also happy to answer your questions live. So now's your time. There is one. Perfect. So we have a question here: do the panelists see AI as an existential threat to scholarly publishing? And as LLMs consume scholarly content, will there be a role for the human-centric world of publishing?
Very open question. There you go. Yeah, I don't think it is existential at all, because if you consider what LLMs have done, they have taken content that is available across the internet and trained themselves on that content; that is the generative aspect of LLMs. Going forward, I believe the only content that is going to be completely new, that does not exist at all, is in scholarly publishing; all the other content will be generated by the use of LLMs in some form or the other.
If I have to create a marketing paper, I would engage with an LLM, and some aspects of what I create will come from the LLM; but scientific research is going to be brand new. Nothing that would be published exists yet. So I don't think it is existential. In fact, this is the only domain that the LLMs will forever want to follow. This is the content that they would want.
This is the content they don't have. Yeah, I have to agree. I don't think it's an existential threat. Of course, I would say that, because I'm a bit biased. But it is something, I think, that we are all within this industry really grappling with, in terms of things like licensing content to LLMs and so on.
We do, as an industry, I think, have a tendency to be quite conservative, and in some cases, especially when the technology and the uses of that technology or the content are still a little bit unclear, I think it is OK to carefully consider how we work in this new world, and what our options are in terms of content licensing and other aspects too.
So are there any other questions from the floor? I believe there are conversations happening between OpenAI and a consortium of publishers about what the licensing model is going to be in the future. Yeah, actually, Penelope, you almost took my question, but I still want to ask about brand dilution and licensing, when LLMs have our content and can produce summaries or give answers.
Do you see, therefore, a change in the business model for scholarly publishing? I mean, how do you see the impact of this licensing going forward on the brand side, the business model side, and the open access side? If you can just give some color there, it would really help. That's a really tough question, and I certainly don't have the answers.
I think, again, this is something that we are grappling with, as are, I know, many other publishers and folks in our industry. What's difficult about it is a lot of the principles of scholarly publishing and why we do this work. I come to this, as I mentioned, as a researcher. And so the thing that's most critical to me is, how can we advance research and accelerate discoveries.
As quickly as possible. Openness, open science, open access: these are all principles that I certainly agree with. Having our content out there, so that some of these LLMs can learn, to benefit research and researchers that might be using these AI algorithms, I think is also an important part of that. And where it gets tricky, and I'm not going to wade too far into this because, like I said, I don't have the answers, is the business model: how do we recoup some of that investment as well, to make sure that by doing this it doesn't become an existential threat, for example.
Well, yeah, I think the business models are 100% going to change, in a similar way to what we've seen with open access, for example. But I do think it's going to be something that we collectively come to as an industry, because, I mean, if you look at open access, how many different models and flavors are there? And we have yet to settle on the one to rule them all.
But we're all experimenting kind of together, and it's that collective action that is really moving us forward. And I think it will be similar with AI. Yeah. Hello, first of all, thank you for the panel, great discussion. I have a question. Many of us in the field have been trying to do all that we can to make sure that diversity, equity, and inclusion advance.
But I'm concerned that with the introduction of AI, a lot of that work may erode. So I'm wondering if there are examples of places that have provided guidance or tips to peer reviewers that build upon the work that we've been doing in DEI, to make sure content from diverse audiences gets published, and that takes AI into account, bringing it all together.
If not, we need it. So there are some guidelines that were recently developed by C4DISC for peer reviewers and editors; I would definitely recommend those. C4DISC is the Coalition for Diversity and Inclusion in Scholarly Communications, which I'm sure you all know, but I would recommend that. I don't remember AI being a part of that.
But if you're interested in being a volunteer for a toolkit like that, please reach out to the staff at C4DISC and we'll certainly talk about it. And I do think there are definitely challenges that we are all very much aware of. But there are also some opportunities: when you think about things like researchers for whom English is a second language, AI presents a huge opportunity for them to maybe publish more broadly and get access to translation services and things like that.
So there are some opportunities as well. I know I've heard of a lot of tools you can use to review things like your calls for papers for unintentional bias, all these kinds of things that we can leverage. So it definitely has its challenges, and I think there will also be some things that help us do the work even more. Yeah.
And I think also more and more publishers, AIP included, though we're certainly not unique in this area, are starting to gather more data about not only editors and editorial boards, but also authors and reviewers, so that we can establish a baseline, to make sure that we're keeping ourselves honest and being transparent as well. It's optional, of course, because we don't want to introduce any additional bias into the process either. But that's another way we can become even more aware; if we don't have that baseline of the diversity of our authorship and our reviewership, then it's difficult to shift it.
I'll tell you one. Well, at ASC, we're working on several things, but one thing that I'm currently working on in detail is looking at some of our processes, policies, and procedures to ensure that we are not excluding anyone. We're going to be determining the best ways to move forward. No one has all the answers, but we're certainly going to be doing the best we can.
A lot of the systems that are in place support the biases that are out there; they support exclusionary practices. And just because we've been doing something for 20 years, we don't need to continue doing it for another 20 just because it's the way we've always done it. So I think that looking at some of the structures that you have in place could certainly be helpful.
Bringing in more diverse voices is certainly helpful, as is talking to folks that are already doing the work. There's no reason to reinvent the wheel; talk to people that have been doing the work, APA, ACS, ASC, feel free to do that. And that will certainly assist you in moving forward. How are you doing? Paul Callaghan from exserta is my name, and I'm sitting here listening to the conversation.
And I think it's hard to argue with the principles of open access, but those principles were written for humans, open access for humans. And I was thinking, when you asked the question about the brand, that when you introduce LLMs, you're disassociating, you're putting a barrier between your brand and the consumer. And I'm sitting here actually asking the question: should open access be for machines?
Should that be free? Well, everyone's saying, way to go, Paul. Yeah, it's an interesting question. And I think this was touched on in the plenary this morning: there is the question of what we are using to train LLMs, right? If it's being fed Reddit, garbage in, garbage out.
There are those who think that scholarly publishing is what we should be fighting to get into LLMs. But then, of course, like you said, what happens to your brand, to your business model, et cetera? I think some of the things that we've seen success with, and I'm sure many of you have discussed, are these RAG models, where a sort of cordoned-off AI application only references your content. It gives you that higher quality, but it also gives you the ability to control the access, the brand,
et cetera. So there are some things people are playing with. But yeah, it's a hard thing, because you do want to have the good-quality information in the models, but how does that work? I don't think the legislation or the policies assumed that LLMs would use the open access data for their training.
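The RAG pattern the panelist describes, a cordoned-off application that answers only from a publisher's own content, can be sketched in a few lines. This is an illustrative toy, not any panelist's product: naive word-overlap ranking stands in for a real embedding index, and the corpus, function names, and prompt format are all invented for the example.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve passages
# from the publisher's own corpus, then constrain the model to them.
# All names and sample text here are illustrative assumptions.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank corpus passages by word overlap with the query; keep top k."""
    q = tokenize(query)
    return sorted(corpus, key=lambda p: len(q & tokenize(p)), reverse=True)[:k]

def build_prompt(query, passages):
    """Instruct a (hypothetical) LLM to answer only from the retrieved
    passages; this is what keeps the publisher's vetted content in control."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using ONLY these sources:\n{context}\nQuestion: {query}"

corpus = [
    "Peer review remains the core quality control in scholarly journals.",
    "Open access models shift costs from readers to authors or funders.",
    "Conference proceedings from the annual meeting are archived online.",
]
top = retrieve("open access journal costs", corpus)
prompt = build_prompt("How does open access change journal costs?", top)
```

A production system would swap the overlap ranking for vector search over licensed full text and send the prompt to a model API, but the control point is the same: the model only sees what the retriever hands it.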
I think now we will have to be more careful about it. But I just want to go back to what the other gentleman had asked about AI and bias. We did a kind of interaction with ChatGPT, just for fun. We know that these AI models have biases. So the question that we asked ChatGPT was: tell us about your bias.
And it knows; I mean, it is not aware, but it was able to tell us X percentage of scientific research is published from this particular country, Y percentage from that particular country. And then it became more aware of its bias. The second question was: going forward, knowing that you have a bias, would you be able to account for it in the output that you produce?
And ChatGPT said: no, I cannot account for it, because that bias is inherent. What it did say was: but if you as a human ask me the right question, I will give you the information. So the answer to your question is, those biases you cannot do anything about; those biases will exist. It will depend on how we use those technologies, what kind of questions we ask, what prompts we give, whether those biases will continue for the next 20 years or not.
Otherwise, those biases will exist. Question from the virtual audience? Yeah, there's a question in the chat: does the panel believe that publishers can control the use of journal content by external LLMs, by licensing? And how does the use of content by an LLM in this way differ from a human writing a review?
So, I mean, I think there's already a lot of content that is out there. So I think it would be pretty naive to think that by having the appropriate licenses in place we'd be able to lock it all down and control it. So no, I think the answer to that question for me is no. What was the second question? I'm sorry.
How does use of content by an LLM differ from a human writing a review? How does use of the content by an LLM differ from a human writing a review? Well, I mean, I think it's just much more prevalent. Whereas with a human writing a review, that is again introducing that human-centered approach, a human in the loop making sure that it's being used, hopefully, ethically.
Back there. I just wanted to return to the previous question and Arun's response about biases remaining with ChatGPT and all these LLMs. You mentioned that if you ask the correct question, you will get the appropriate answer.
And that is a threat we have in all of these, because biases will remain; you can ask the same question with two different approaches and get two different answers. So if you want a particular answer, you will find a way around the question. So that can become a never-ending issue. Absolutely; the bias of an AI system is not very different from the bias that we all have as humans.
In fact, that is the bias that it has been trained on. So if you wish to continue with an AI system, it will have your bias in it; there is nothing that you can do about it. You cannot overcome that. Any other questions from the room? I actually have a quick question. OK, go ahead.
You're all leaders in your organizations. And some folks have talked about specific things that you're doing, the polls you're doing, the chat you've got going, Penelope, at AIP. But what are some specific things that you are doing as leaders to create and maintain a human-centered culture at your organizations? I can go first. This is a topic I feel strongly about.
We have a couple of things we've done that I really like. One is that our entire review process is a strengths-based approach. Like I said earlier, you can leverage strengths more than you can cover up weaknesses. Everyone takes the whole CliftonStrengths assessment, and the idea is that by identifying our strengths and leaning into those first, it shifts the tone of the conversation, period.
But it also gives people a lot more room to grow creatively as an individual, as opposed to just, this is what your role is. Who are you as a person, and what are your strengths? Another thing we've done, and this is just a silly technology thing, but there are plug-ins for the chat tools. We have one for Teams that will randomly pair you with a person in the organization. And as a remote organization,
this has been really powerful for us to get to know people. The only rule is you can't talk about work. So you have a 30-minute meeting every two weeks with someone, and you can't talk about work; it's just about connecting on a human level. And I cannot tell you the difference that has made. It has been very powerful. So I think, in addition to all of the disruptions that we're seeing from AI, this is also a moment where we've been disrupted in terms of remote work and hybrid work.
So at AIP, and I know many of you are in the same situation, we've basically shifted to a remote-first culture. Before COVID, it was a very heavily in-office culture. And that has opened up a number of different areas where, I would say, the combination of AI plus remote and hybrid work has the power to either increase isolation or enhance productivity and happiness and things like that.
So it's all in how you use it and how leaders approach that. At AIP, we've been really deliberate about our culture transformation, in addition to some of the business strategies that we're emphasizing, and I think that is really important; it's part of the whole human-centered approach. So while we're remote-first, we put a pretty strong emphasis, I would say, on bringing the whole organization together.
We can do that because we're a smaller organization of about 150 people, and we do it twice a year. In fact, our last community all-staff meeting was just last week over on Long Island. And you really can't overstate the impact that those human connections create when you come together in person; then, when you go back to your home offices and your hybrid or remote environments, I think that kind of continues as well.
And there have been other activities that we've tried to emphasize as well. Yeah, just to add, in our organization as well, we've started with having very open communication channels, so people can talk about and express concerns. For instance, we also moved to remote-first, and last year was probably one extreme where we did not get enough opportunities to meet, but we got feedback from all our employees.
We had discussions, and this year we are organizing more deliberate meet-ups, more opportunities for them to meet everyone; being able to make those changes as well, listening to people and incorporating that. I think the other thing is the executive leadership team spending more time with everyone, and that's not considered an additional task but a responsibility, an accountability. It could start with integrating the new joiners, or having one-on-ones with them, understanding if they're concerned about something.
But that is done more proactively, rather than it being, why don't you talk twice a month. So yeah, I think some of those things happen. Sorry to cut you off. No, that's fine, I was done, almost. So thank you for mentioning leadership, because I know that culture change happens when there's leadership support.
Definitely. I also want to give a shout-out to ASC, because today is our 10th annual diversity day, and I'm happy to be here, but I really wish I was with the committee celebrating today. So as soon as I leave here, I'm going to log in and see what's going on. The other thing I want to mention is that for our pubs team, we have a DEI approach to everything that we do.
It's part of our strategic plan, and there is intentional work to ensure that everyone on our team is paying attention; we have a list of things that we're working on, and everything has to be assigned a focus. So it's embedded in our work for sure. The other thing I want to mention is this.
We have a staff Diversity and Inclusion Council, which runs our diversity day, but we also have a publications Diversity, Equity, and Inclusion Committee. We have a book club meeting twice a month. Our last book was, I think, called Race at Work, about inclusivity practices; the ASC staff are in here, they could help me with the title, but it's by The Winters Group.
Great book. You don't need to read the entire book; it's parceled out in chapters, so you can take a chapter and get to work, but I highly recommend it. The Scholarly Kitchen has a list of books that you can actually read; I'm going to do some PR for The Scholarly Kitchen too. But yeah, the inclusivity work, the changing of culture, definitely starts at the top, and also start creating those grassroots groups that can actually push this work forward.
And also, you can do this work and not have a budget. Money doesn't hurt, but you can do a lot without a budget. I'll just add one quick thing, because we are nearing the end of the session. You asked, as leaders, what do we do about humans? We talk a lot about AI and the opportunities that AI can present to any business model. I think we also need to talk about humans, and we need to determine what value a human has, because there's a lot of apprehension, there's a lot of fear, and we don't talk enough about the human value in the world of AI.
If we can talk about that and let our people know, this is the value that we see in you, even in the world of AI, there would be a lot less apprehension and a lot less fear. So as leaders, that's what we should do. Fantastic. I have a quick wrap-up, I promise. But one question I'll leave you all to consider over the next couple of days while we're all together: The Scholarly Kitchen has been running a series of interviews with leaders.
And almost unanimously, when asked what they love most about their jobs, the answer is the people, in every single one. So I encourage you to think about how you can leverage your relationships to improve our industry and your jobs. When we were planning this session, we were sharing a lot of articles back and forth, so we went ahead and published a reading list.
If you want to explore this topic more, it's at silverchair.com/other AI, and I will also add it in the chat. Otherwise, thank you so much for joining us, and thank you so much to our panelists. This has been a wonderful discussion.