Name:
Artificial Intelligence (AI) in Scholarly Publishing: Looking Ahead to 2029
Description:
Artificial Intelligence (AI) in Scholarly Publishing: Looking Ahead to 2029
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/6e11a339-63f4-42ff-86ba-11ba0c3da8da/thumbnails/6e11a339-63f4-42ff-86ba-11ba0c3da8da.png
Duration:
T00H59M13S
Embed URL:
https://stream.cadmore.media/player/6e11a339-63f4-42ff-86ba-11ba0c3da8da
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/6e11a339-63f4-42ff-86ba-11ba0c3da8da/AM_2022_1A.mp4?sv=2019-02-02&sr=c&sig=3PAD7vEl%2FYuc%2BTDnzLwZF0cxLi8cp8iEh7MtXTFbLDE%3D&st=2024-11-20T04%3A34%3A40Z&se=2024-11-20T06%3A39%3A40Z&sp=r
Upload Date:
2024-02-02T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
Hello, everyone. We're going to go ahead and get started. If you're in the back of the room, I ask that you come to the tables up in the front. Thank you. We're going to have some interactive conversations. And so we would like for you to join a table that is somewhat filled.
Thank you. Come on down. How are you? All right. Let's go ahead and get started. Got a few more folks coming in.
There we go. Good morning, everybody. My name is Damita Snow. I'm the senior manager of publishing technologies and publications diversity, equity, inclusion and accessibility. My name is Jeff De Cagna.
I'm an executive advisor for Foresight First LLC in Reston, Virginia, and I'm excited today. This is the first time I've been in Chicago in two and a half years, which is fantastic. I got to go to a place that I love to eat at yesterday, and I'm going to another one tonight. And it's the first time that I've ever spoken at SSP, any SSP event for that matter. So I'm excited to see all of you. But I'm in a particularly good mood this morning because last night the New York Rangers won game one of the Eastern Conference Finals at home at the world's most famous arena, Madison Square Garden.
And so we are now seven wins away from the Stanley Cup. We're very excited. So let's go, Rangers. Go ahead. All right. Thank you for joining our session this morning. Before we begin, I'd like to remind everyone of SSP's core values: inclusivity, integrity, adaptability and community, as well as our code of conduct.
And if you wish to read the code of conduct in its entirety, you can scan the QR code that I believe is in the program as well as in the Whova app. All right. Before we start the session discussion, I just want to mention my three opening points. We're excited about AI because AI will allow us to publish more content and disseminate research further.
It will shorten our submission-to-publication production timelines by automating processes and hopefully lowering costs. It will also help us do better at recommending content to our customers. We will make more informed marketing decisions as well as learn more about our members and customers based on better data collection methods. It sounds great, but AI will impact our staff, authors, libraries, the entire scholarly publishing ecosystem in ways that we cannot imagine or become aware of until after some of these technologies are in place.
Jeff, so sorry, I'm going to turn myself off here. So I see a bunch of people have come in just as we're getting started. If you're in the back row, I want to encourage you maybe to move forward a little bit to join one of the existing groups, because in a few minutes we are going to have a conversation at a table.
So you may want to just merge the groups a little bit so we can have the conversation that Damita and I have prepared, and then we'll talk about that as a full group after you've had a chance to explore it. So what we're doing now, to get started with this conversation, is to give you our thoughts on some things for you to consider as we move into the scenario discussion that's going to happen at your tables.
And I'm going to give you some points to think about as you engage in the scenario conversation. So the work that I do is not really anything to do with scholarly publishing. The work that I do is with association boards and chief staff executives and other contributors in associations, helping them learn with and prepare for the future. The work I do is on foresight, on building board performance through stewardship, governing and foresight.
So in that work, what I do is try to understand what are the forces of turbulence that are going to affect associations. And not surprisingly, they're very similar to the forces of turbulence that are going to affect our society as a whole. So in the summer of 2019, long before any of us had ever heard the term COVID-19, I was warning association CEOs and other decision makers that we were about to embark on a very turbulent decade when the 2020s began, and I spent most of the second half of 2019 talking with boards and other decision makers in associations to say, get ready, because the turbulence is coming.
And then bam. Sorry, that was a little louder than I thought it would be. It arrived. The turbulence arrived and we weren't ready for it within the first 90 days of this decade. And we're still reeling from that. We're still reeling from that. And we're going to be reeling from that for quite a long time.
But even before COVID-19 became an issue, one of those forces of turbulence that was going to shape this decade, reshape this decade, was our adoption of artificial intelligence and automation technology. We knew that going in; this was always going to be the decade in which we made critical choices at every level of our society about how those technologies were going to shape what we do and what we will do going forward.
And again, in every aspect of really human endeavor. So we knew this was going to be an issue. And what the pandemic has done has accelerated our adoption of those technologies. And we know that from reporting from companies and other entities and saying, yes, we're using more AI because we have fewer people and we need more technology and we're not having people necessarily coming into the same physical space into the office anymore.
So we're using these technologies to fill the gaps or to even replace the human contribution in our organizations. So this was always going to be a huge issue. And anyone who knows AI as a science fiction idea going back to the 1950s knows that we've had fits and starts about what AI can do and what it will do, and so on and so forth.
Going back to when John McCarthy coined the term in 1955, right up until today, we've had AI winters. We're now having an AI spring. We've had lots of things. Even as recently as a year or two ago, we thought there might be another AI winter coming on, right? But what's different, in my view, what's happening now, is that we are seeing the full force of the economic agenda behind artificial intelligence coming into play.
And we're not just talking about the economic agenda being advanced by the companies that have control over artificial intelligence like Apple and Amazon and Facebook and companies in China. The relatively small number of companies that control so much of the development of artificial intelligence, the talent for that development and the financial resources for investing in that. This is bigger than that, right?
This is a national economic agenda for countries like China and for the United States, where right now there is a competition as to who will lead on artificial intelligence as this decade continues. And other countries, like France and Canada and other European nations, are trying to be a part of how they're going to advance the artificial intelligence economic agenda for their countries, and therefore for the companies within them.
So there is a powerful level of investment at multiple levels in the use of artificial intelligence, which raises so many questions for us as we move into this decade. And so when we look at the complex ethical and human implications of all this artificial intelligence that's being used and that will be developed and further applied, we really have to have a much deeper focus and much deeper attention and intention for how we're going to address those issues going forward.
We've already gone very far down the pathway. There's already harm being done by artificial intelligence. So how are we going to address that? Right, and I remember being part of a webinar. I know it's an unusual thing that I was part of a webinar during the worst of the pandemic; that's really surprising, I'm sure, for all of you. A specialist in responsible AI and AI for good based in the UK did a webinar, and one of the things that she said that really stuck with me is this: I'm an AI ethicist, but the reality is we are all AI ethicists, because every one of us is going to be affected by the use of AI in some fashion, in a personal way, in a human way, in a professional way, thinking as a citizen of the world, a citizen of a country.
Understanding that the ethical implications of this are going to affect every single one of us, we all have to be AI ethicists in one way or another. And that's really a central idea in the scenario that we're going to be discussing this morning. OK, so those are just some things to put some context around the discussion that we're hoping you have, and that we will have later as a full group.
So now I want to talk about considering our scenario. And I think most of the tables here have copies of the scenario. Excuse me, if you're sitting in the back row, in an effort to get you to move forward we didn't put copies down, but we can provide you with some. We do have extra copies, and if you want to move to a table now where there are seats, you may want to do that to get a copy more quickly.
So just before we get into this, I want to give you a few key points to think about, and then we'll have you get into your scenario conversation at your table. OK, so the first thing I want to say about this is that what we're doing today is what I would refer to as scenario learning. You're participating as individual delegates,
as opposed to being the board of an organization or a senior team or something like that. You're coming from different places. This is really, truly a scenario learning situation where we're trying to learn together about the implications of this particular scenario. So think about it in that way. This is truly a learning opportunity for you. And in some ways, and this is really central to what I do in my work, a chance to practice.
We never, ever really get chances to practice having conversations about issues like this and then figuring out how we would handle them. So this is a practice opportunity for each one of you, and a capacity building opportunity as you pursue your work and think about how this plays into what you do in whatever kind of organization you work in. Now, for people like me, who do scenario writing and have done training, preparation and development experiences with scenarios, I always like to start with the definition of a scenario, right?
And the scenario writers will tell you that a scenario is a plausible alternative context for learning with the future rather than about it. Right? so this is not a prediction. The scenario that you're looking at right now and that you'll be discussing is not a prediction for what the world will look like on June 2nd, 2029. It's not even a forecast of what the world will look like.
It's simply a plausible scenario of what it could look like in 2029. And that's important to understand. We're not saying that this is exactly what it's going to look like. Everything that's in there could be different. It's a preview of a world that doesn't yet exist. And that's important to understand because as we get into this, you're going to want to talk about this, not in a way of, well, how would we solve this problem, but rather how would we prepare ourselves for a world that looks like this that is now 84 months away from where we are today.
And that's important to understand also in the context of the following: the future that this scenario presents to you, this June 2029 scenario, is an unknown future to us, just as June 2nd, 2022, was an unknown future 84 months before today. And further back, and further out, every day was once an unknown future. And that's what we're talking about today, an unknown future that we are trying to make known in some fashion.
But we're not saying this is the only plausible future. It's just one of many plausible futures that could exist. There are several actors within the story who are experiencing a situation that they're trying to grapple with in some fashion. And you're going to assist them, in a sense, by grappling with it at your tables this morning. So you're reading a story.
It's a plausible story, and it's a story that we want to encourage you to dig into as you get into your conversations at the table. Today's story is about Terry. And what Terry is experiencing in their work. As someone who works for a publishing company or for a journal publisher in the UK and what they're going through and how they're addressing issues of AI and sort of adapting to what's happening around them after so many years of working in this space.
So today's story is about Terry. You'll see at the bottom of your scenario that there are three questions that we would ask you to consider. We use these essentially as the frame for your conversation. What's your personal reaction to the scenario? Very important for us in this practice opportunity, and very important for us just as human beings, when we confront a scenario of the future, to have the opportunity to talk about how it makes us feel, right?
If we do not talk about the emotional content, the affective impact of a scenario at first, then it gets in the way of everything else we want to talk about. So we've got to be able to share freely. And I invite you to share as much of it as you feel comfortable sharing. Right we're not I'm not suggesting that this should become a group therapy session at your table.
Right, but if you want to go there, you know, I'm cool with that, if you feel strongly about it. It is an opportunity for you to say, you know what, I read this and it makes me afraid, or I read this and it makes me angry, or I read this and it makes me very concerned, I have real trepidation, I've got serious concerns about this. Whatever you're feeling,
own that and share it to the extent that you feel comfortable doing so. We're inviting you as part of this process to think of yourselves, the people at your table, as the SSP board of directors. Right? So how does the board at your table make sense of this scenario? What are you taking away from it, and how do you talk about that as a decision making group?
Even in this sort of scenario learning based experience? And then, what actions, if any, would you see at your group level, your particular version of the SSP board of directors, what actions would you see this group taking to address the issues and questions that are raised by the scenario? And we'll be really interested to hear your thoughts on each one of these questions.
And this will be not only the frame that you'll use at your tables for the conversation, but also the frame we will use for our follow up conversation after we conclude the small group conversations. A few key points to keep in mind, and then we're going to just open it up for any questions anyone might have before we get started, and then we'll move you into your discussions. We really want to encourage you to contribute to the conversation.
For some people, you come to a meeting like this, you know, and I certainly have seen it, and I'm sure Damita has as well: people come in and they hear that there's interaction going on, and they're right back out the door. We don't take it personally. And we understand that some people prefer listening mode. But in this case, given the way we framed this up, given the nature of the issues here and the importance of them, we really want to encourage you.
And I will personally say I want to challenge you to contribute to this conversation, right? Don't stay in listener mode. Be a participant in this conversation. People at your table want to hear what you think; they want to know. People will say, oh, that would never happen. Right, that'll never happen.
That's completely unlikely, right? No matter how many times I say suspend disbelief, people will still say it. Right, and what I'm saying to you this morning is: suspend disbelief. Even if you don't think that this scenario is likely to happen, even if you have questions about whether you regard it as fully plausible, I'm asking you, for the purposes of this learning experience, to suspend your disbelief and fully participate in the conversation, thinking
and believing that this scenario is fully plausible, and that you are sitting in 2029 having to think about how to address the issues presented in this scenario. Think through your assumptions. No scenario provides complete information. It's impossible to do so, because no one can know what the future is, right? It's still unknowable.
We're putting together a scenario of what it could be, but it's not fully knowable. So I can't write a scenario, no one can write a scenario for you, that contains every piece of information you might like to have. So when you are having this conversation, you may ask yourself, well, you know, there's something here that I don't understand, and you'll fill that gap, that lack of information, with your own assumption.
Make sure your assumption is appropriate to the context of 2029 rather than making it a 2022 assumption. Right, so think through your assumptions and test them as you go through the conversation. Resist the temptation to solve problems, because as I've said, this is a preview of a future that does not exist yet, so there is no value in trying to solve the problems that the scenario raises. You might get into a conversation of, well, if Terry did that to Carmen, then... Don't do that.
Don't try to solve the interpersonal conflict. Don't try to solve the issues. Just talk about the implications of this world rather than trying to solve the problems, because these problems don't exist yet, right? This world does not exist. So let's talk about it in the context of how you would operate in this world rather than trying to solve problems that we don't need to solve yet.
And then finally, focus on critical connections, issues and questions. We've given you these three questions to frame up your discussion, but these are certainly not the only questions that exist out there. We want to encourage you to bring forward: what are the other questions that are coming up for you? What are the other issues that are coming up for you? What connections may there be to things that you've heard in other sessions since you've been at this conference?
Right, or things that you're working on in your own organizations? Any connections, issues, questions, bring those into the conversation so we can hear from you as we get into our main conversation a little bit later. OK, so that's a lot of information. Any questions about what you're being asked to do? You're going to have 30 minutes to have this conversation at your tables.
If there's a need for a little more time, we may be able to accommodate that. But we basically are starting out with about a 30 minute time frame for you to have this conversation at your tables. Any questions that Damita and I can answer for you about what we're doing, how we're doing it or anything at all, before we get into our table discussion.
Sir, are you expecting...? The question is, are we expecting any products. And the answer is, really, what we're hoping is that someone will be willing to share something. We have a relatively large group, so we may not be able to hear from every table. But if you are prepared to share some thoughts, and it may not be just one person, right, because we're going to be going by question rather than by table, just maybe have something to share.
When it comes time for us to have that conversation, we'd like to hear from as many people as time permits. Beyond that, whatever comes out of your group; if you're really ambitious and you want to come up with something, fantastic, but I don't want to take you down that pathway unnecessarily. I just really want you to experience the conversation and learn from it, and have it build your capacity and give you some new things to think about.
So I probably could have said no and been done with that. But why use one word when 1,000 is better, you know? Right, any other questions? I tell all my clients that I can out-silence any group, because I like to make sure that people have a chance. There's a question back here. You have a question? I just started a new publishing business as a newcomer.
Do I have to worry about AI use by the big publishers? How should we prepare for that?
OK, so I think I got that question. And you're saying, you're asking as a small business person, how should you prepare for larger publishers using AI? And what I'm going to say for now is to hold that question. It may be something that comes up later as we get into our post-small-group conversation. Right, but for now, what I want to encourage you to do is stay within the structure of the scenario conversation, and then we'll see how that conversation emerges later in our process.
So I'll just ask you to hold on to that for right now until we get through this first phase of things, and we'll come back to it in the full group conversation. OK, so last request for any questions or comments or anything anyone needs before you get going. OK, so it is now about 10:54. So in 30 minutes, or just before we get to that 30 minute mark, we'll check in with you to see if you need more time.
Again, we may be able to extend a little bit to have a little more time before we wrap up, but plan for right now that right around, excuse me, 11:24 we will come back to you to see where you are, and we'll move into our main full group conversation. OK, so fire away; it's up to you right now. All right, everyone, let's go ahead and get started.
Please wrap up your conversations and we can get to answering some of these questions that are on the screen. We're going to come back together now as a whole group. Can you hear me? And Damita's going to take us through the three questions on the screen.
All right, everybody. All right, everyone. All right. So let's go to question number one. What is your personal reaction to this scenario? We have a big group, so feel free to yell one word or phrase. And I don't want to pick people. So please go ahead.
Someone go ahead. It is a future scenario, but it's also a current scenario. There are a lot of tools and AI out there now. OK, can you repeat that for the others? Yeah, there are a lot of tools. The person was saying that it's happening. It's a future scenario, but it's happening now as well.
So just shout out your reactions, a word or a phrase, quickly, how the scenario made you feel. Shout them out. Go ahead. Encouraging, OK. What was that? Scary. Scary, yeah. Angry, huh? Say that again.
Inevitable. Inhumane. Inhumane, that's a good one. Wait a minute. Who said inhumane? Why do you say that? Can you elaborate on that? If a system took my promotion, I'd be pretty upset. Welcome to the wonderful world of Amazon workers and rideshare drivers.
You have another comment? Since I made a positive comment, I... OK, it also has... Yeah, yeah. Someone else had their hand up back there, and, like, the whole thing.
Bias. Bias. I was waiting on that. I was waiting on that word. There's a hand back there. Nirvana fallacy. Say it again. The nirvana fallacy. Can you say more? Yeah, yeah.
It's sort of the principle that solutions need to be perfect, when really it just needs to be better. And it sounds like this product is actually better in many ways than what we already have. OK, so jumping on that, maybe as an example... Is it OK if I ask a question? Go ahead. By show of hands, how many of you were scared, angry or something of that kind reading the scenario, just by show of hands?
OK so a pretty significant number. How many of you were excited and encouraged, enthusiastic after reading this scenario? You can be both. You can be. Of course, you can. We're celebrating diversity. You can be anything you want to be.
You can be scared and excited at the same time. No problem. It's cool. All right, let's move to question 2: as the SSP board, how do you make sense of this scenario? Anyone? Just put your hand up. Yeah, he'll bring the mic to you. Yeah, well, I think SSP would acknowledge that a journal article is a fluid thing rather than the fixed thing that we've historically thought of it as, and that addressing the issues of this would be a community problem.
There are lots of different organizations that have to be brought together to address the issues that this raises. And I think SSP has a role in doing that. OK, anyone else? The SSP board. You all are the board. You don't have any thoughts? OK, there's one back there.
I would say, have sessions dedicated to discussions on AI and ethics and where we draw a line. So yeah, I think before we get to actions, how did you make sense of it? Like, what was the substance of your discussion around the scenario? What were the issues that you were focused on primarily in your conversation?
Great so we were focused on, I guess, some of the issues that emerged for us. Before we talk about the actions the board should take, some of the things that we talked about the most were questions around transparency of authorship. And also the individual agency that Terry has in influencing what the tool does.
I think those were a couple of the big things. I know there were a couple of other things we hit on, but I'll just highlight those two areas. OK and there's another hand in the back. We looked at it as a tool, but it's like the Swiss army knife of tools because it's trying to do a lot of things and we're not sure what it's doing really, really well.
And we noted, the entire table did, that they were trying to use AI and take the human out of it, when AI should help humans make decisions, not make the decisions for them. And that is the unintended consequence that we saw happening.
The problems that are created when you assume decisions can be made for you by a machine. Oh, and you can see how that theme carries through to the end with, you know, we're going to have the AI make a decision, and the AI is so good at making decisions that we're going to have Terry now report to the AI.
Jeff, there's somebody behind you. So there was a hand over behind you. Oh, let's go here. We were concerned about the risk of the homogenization of science: if the AI is using a database of past research, and more and more of that database is filled with AI-produced or AI-enhanced research, then there's a risk of it skewing future publication. You were heading there.
We also talked about how AI is often introduced to increase efficiency without really considering the human impacts. In this scenario, it talks about how Terry's job has become very intense and hard to manage since they adopted this tool, and now they're becoming further isolated. And then also how, although the tool is supposed to be, or is effectively, I guess, increasing participation from authors whose first language isn't English, the publishing ecosystem is still very English centered and English dominated, because they're still publishing in English.
So that's an interesting point. But let me also just introduce two ideas. Remember, the acronym HEAT stands for Helping Every Author Thrive. So they're trying to advance, on some level, the human aspect of things by trying to really focus on helping the authors. And there was a second point I was going to make that went right out of my head.
It'll come back to me. All right. Sort of managing the complexity of that. Was there another hand right there? Yeah, I think another thing that we talked about was transparency. Like, how is it doing all of these things, and is it making it very clear how it's making the judgments, how it's saying this paper is being flagged, and making sure that we are paying attention to what the AI is looking for?
Is it only looking for bad English, and that's why it's kicking it out? Is it looking at image fraud? Like, what's actually going on there? So more details would help the board make a choice. I remembered the second point, which is, I'll come over here, which has already been stated by another group, which is that Terry is exercising an awful lot of agency, a lot of influence.
And frankly, he's coming into it from a place of bigotry. If you read that scenario carefully. And so that's also, you know, the human element is having a really detrimental impact on what's happening because this main person is coming at it from a place that's not a good place. All right. Let's move on to number three. Oh, you have one more.
I was just going to say that we talked about accountability and the AI needing to be owned by someone and accountable to someone. And then the way that people use it also needing to be accountable. So in that scenario where Terry is not really accountable for the way that they're choosing to use this AI and leverage it and thinking about, sorry, I'm thinking about like what the goals of the AI are and making sure that it is working towards those goals.
So if the goal is efficiency, if the goal is more diversity, someone needs to be overseeing it and making sure that it is actually moving towards those goals and not being counterproductive. OK, all right. Let's move on to number three. What actions should the board take? What actions would you all take? I think it would be necessary for the board to carry out a lot of analysis and consult with all the various heads, and also find some way to monitor the progress and the effects of AI on the publishing industry.
So perhaps issue a set of guidelines, you know, come up with a set of guidelines that supports that, that gives the ability for analysis, I guess. OK, there's another hand. OK, go ahead. Go ahead. First thing, I think this is a promotion for your services to the publishers.
But education in the AI environment is really critical, and you need to identify those kinds of consultants that know the good and the bad of the AI. The vendors will come in and say, trust me, it does what I say it does, and you have to learn what the hard questions are and where the shortcomings are. And you have to know what kind of impact those will have on your organization.
I think this whole discussion dovetails with what Jennifer was talking about in the previous session when she was talking about reliability. What's this going to do to you as an organization? Is it going to maintain your integrity? Is it going to actually do the kinds of things that you need? The AI vendor is going to come back and say, well, it's a learning process and the AI learns as it goes along.
Well, that may be, but is it still? You know, it's not only how it's going to affect the organization, but how it's going to affect you personally. Because here we've got a person, Terry, who essentially is being forced to collaborate with an AI.
They're trying to use it in a way that is sinister, really, in a sense. But there's still a sort of forced collaboration going on there. So what does that mean for each of us? Because there's more human machine collaboration going forward in our organizations. Organizations are made up of people, at least they have been historically.
Now we're talking about, are they going to be made up more of AIs rather than people? And what's the balance in all of that? So there's some stuff there as well. I mean, one thing we thought here was, to what extent is this the SSP board of directors' problem in the first place? It sounds at some level like it's a COPE issue, from an ethics point of view.
So how would the SSP board of directors best exert their influence? And absolutely what they said over there about pushing for transparency, to say that if you're a vendor with AI, you need to open it up, so that for the decisions being made on a day to day basis by this sort of, essentially, ScholarOne on steroids, people would be able to look under the hood.
Yeah, I mean, it's a critical issue. Where is the governance going to come from in all of this? There's a good sentiment in the scenario to say we want to help every author thrive. But is there sufficient governance in there, given what the scenario describes? Our table talked a lot about collaboration and accountability. And one of the things we mentioned at the very end of our five minutes was a model for certification or badging that our organization could create, to assess the rigor that a vendor is putting into their transparency and their ethics.
Maybe modeling it after something like the LEED certification, where buildings can meet specific sustainability benchmarks and then convey that to others who would buy that service. We sort of ran into two kinds of issues in terms of actionable items, anyway, saying like, hey, what are we going to do?
What can we do? And the first notion, of course, is put together some sort of group panel with different interests, backgrounds, whatever, to decide this, and then you end up in a circle, because who's going to make the panel? Right, and historically, we've not been great at that. And so does the AI make the panel, too?
Because we don't know the ecosystem that the rest of the industry, let alone the world, is in. Have we literally and metaphorically taken our hands off the wheel? Do we trust AI in seven years in a way that we don't now, like how we adopted smartphones or how we adopted the internet? That's tough. And if you end up in a situation where multiple different publishers have tools like this, some are using it judiciously, some aren't,
that's a lot to work through, in this scenario and in the world, because there are going to be outside influences, including, as you're saying, national directives to achieve things, and who's the author when you get it before the AI even sees it. It's the whole thing. If you really want to get metacognitive about it, we could have the AI compose a panel of AIs.
To figure things out. I got a couple more I got to get to first. A couple of hands up here. You were one, and then over here. So I'll go here and then here. So I know we've talked about transparency in a few of these questions, and we felt that the SSP board would have a role in calling for more transparency.
One thing I haven't heard other people mention is that there still is author choice in where to publish, and we should be calling for different metrics to help them make that decision. We talked about having something like a bias score, both for the human and the AI element. What share of content is written by AI in a particular journal? Even having public information on what APCs have been paid for each article.
And we also felt that there should be some sort of audit trail to show when a Terry puts his finger on the scale, and some more visibility into the logic behind the algorithm that the AI is using to make decisions, just so that as an author, you can decide whether or not to publish there or to publish in a fully human-led journal. How do we get this all out of the black box?
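To make that audit trail idea concrete, here is a minimal sketch, in Python, of what logging the AI's calls alongside human overrides could look like. Everything in it is hypothetical: the HEAT-style model version string, the field names and the log format are illustrative assumptions for discussion, not any real product's API.

    import json
    from dataclasses import dataclass, asdict, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        manuscript_id: str
        model_version: str       # which build of the tool made the call
        decision: str            # e.g. "flagged", "desk_reject", "send_to_review"
        top_factors: list        # the features the tool reports as decisive
        overridden_by: str = ""  # empty unless a human changed the outcome
        override_reason: str = ""
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def log_decision(record, path="decisions.jsonl"):
        # Append-only, one JSON line per event, so the trail cannot be quietly rewritten.
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    # The AI flags a paper; later an editor overrides it, and both events are kept.
    log_decision(DecisionRecord("MS-1042", "heat-2029.3", "flagged",
                                ["image_similarity", "language_score"]))
    log_decision(DecisionRecord("MS-1042", "heat-2029.3", "desk_reject",
                                ["image_similarity"],
                                overridden_by="editor:terry",
                                override_reason="manual escalation"))

An append-only record along these lines is what would let an author, or an auditor, see exactly where a Terry put a finger on the scale.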
And we haven't really... no one has explicitly said the word deepfake yet, right? But there's that possibility too. By the way, that photo of Terry comes from thispersondoesnotexist.com, so it's not really real. Um, something simple that we discussed at our table briefly was that SSP could possibly promote more attendance at conferences, or spaces in the mentorship programs, things like that, for engineering and IT teams at publishers, trying to diversify conversations within that group and create a collaborative space for them to work with other publishers and other contacts like that.
OK so was there anyone I know? You've got a hand up. You've got a hand up. Is there anyone else who hasn't spoken yet? No offense to our colleagues here, but I just want to make sure we get other voices into this. Anyone who hasn't shared something yet that you'd like to share before I move to 1 or two other contributors.
Anyone else? OK, thanks. A couple of fundamental comments, getting back to how you set this up in the first place. First of all, this is only seven years hence. That's a blink of an eye, right? So that's an interesting observation, and I think it's a plausible scenario. I think that was an important thing that you stated in the beginning.
So if it's a plausible scenario, one of the implications involves adding the algorithm to CRediT and assigning an ORCID to the algorithm, et cetera. Because the algorithm is actually contributing to this article, that needs to be transparent and that needs to be
formally acknowledged. Yeah, I mean, courts are now ruling that AI can be classified as an inventor, right? So you've got that issue as part of this as well, AI as inventor. So you have this issue now: is this a plausible scenario for 2029?
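As a sketch of what formally acknowledging an algorithm might look like in article metadata, here is a hypothetical contributor record written as a Python structure, loosely inspired by the CRediT taxonomy. The field names, the software identifier and the idea of an ORCID-style iD for software are assumptions drawn from this discussion, not an existing standard.

    # Hypothetical contributor list for one article; "type" separates people from software.
    article_contributors = [
        {
            "name": "A. Author",                               # placeholder human author
            "orcid": "https://orcid.org/0000-0000-0000-0000",  # placeholder iD
            "credit_roles": ["Conceptualization", "Writing - original draft"],
            "type": "person",
        },
        {
            "name": "HEAT editorial assistant",  # the algorithm, named as a contributor
            "software_id": "heat-2029.3",        # exact version, for reproducibility
            "credit_roles": ["Writing - review & editing", "Formal analysis"],
            "type": "software",
            "disclosed_to_reviewers": True,      # the transparency the room is asking for
        },
    ]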
And that's for you to determine for yourselves if you feel that way. But I think a thing to keep in mind is that even though it's only 84 months from now, the acceleration of the technology continues to some extent, although there's certainly backward steps as well. But the question will be. How much will conversations like this one around AI, ethics, AI governance and so on actually become a form of productive friction to slow things down?
So that we might not get to this scenario in 84 months; it might take 120 months to get to something closer to this, but a better scenario than the one that we're presenting today. Um, to sort of follow that, I think the SSP board may want to ask itself to what extent we are here to promote competition within this space among vendors, to ensure that, for example, HEAT isn't the only vendor or the only AI that's able to do this,
so that there are others, there are other options. There is competition, not a race to the bottom. If it's the only one, it's not accountable, and it can do whatever it likes. So there have to be others that are better actors. So you heard it here first: the business opportunity is to come up with a system called COLD, a competitor product.
OK, so we're coming almost to the end of our time, and we really appreciate all of your comments. Just want to, as we wrap things up... oh, you advanced it, great. You know, what's the key idea you're taking away from today's session? Damita and I have some ideas to share with you, but we'd love to hear yours. Similar to the way we did the first question, shout them out. What's the key idea that you're taking away from this session that you think will be important as you continue your work, maybe bringing it back to your organization or continuing to explore in some fashion?
And we want you to share a key takeaway that you're walking away with. Yeah, say it one more time. Transparency. Transparency, getting it outside the black box. How do we make sure that whatever is being used, however it's using AI, algorithms, machine learning, let's make sure that there is transparency in every aspect of that use, so that it's understood, it's interpretable if not explainable.
Standards. Standards, yeah, you need those. Potential. There's a lot of potential, either for good or not. And in this scenario, we're not at an end stage. We're still on a journey, and things can go in any number of directions.
Right? Yeah. I mean, it's such an important point that, you know, increasingly AI may come about through AI itself, but right now most of it we're creating. So how are we going to create it, and what are we going to do in the course of this journey over these next 84 months and beyond, to make sure that a scenario like this one is something that can be avoided, or at least the better parts of it can be improved, rather than operating within a space that feels kind of uncomfortable in the way that it presents things? Other things you're taking away?
Yeah, symbiosis between human and AI, the two working together, rather than the machine intelligence becoming our overlords, when we are responsible for its creation. Other takeaways? Yeah, again, another one. Yeah, sort of obvious to me.
Now, the assumption is that it's really unclear who is responsible for shepherding this, who is responsible for leadership or governance. It's almost like a hot potato, and that is, for me, the takeaway: how do we figure that out?
Sure, yeah. Do you want to say something about that? Well, I was just going to say, that is very scary. But I think on an individual level, we're responsible in our organizations, we're responsible with our teams. So it is, like your analogy, a hot potato. It's true.
It's scary. We don't know what the future is going to bring. But I think that we all have an individual responsibility to ensure that we do the right thing. Yeah, I couldn't agree more with that. It goes back to what we were talking about earlier, that everyone needs to be an AI ethicist. Now, the good news is there is a lot of good work being done on this around the world, and in the US particularly.
In North America there is the Montreal AI Ethics Institute; at Stanford, they're doing tremendous work; there are entities doing things at NYU. There's a lot of great facilities. Certainly non-governmental agencies are working on this, the World Economic Forum and others. There are institutes in Europe, so there's a lot of work being done to look at these issues.
And those resources were intentionally not presented explicitly in this scenario. But there is good news, because there are really smart people really focused on trying to address a lot of these questions, and there are plenty of resources in that regard. Other takeaways? Other things that you're taking away? We're getting close to time.
We're getting close to time. Do you want to go to yours? No, no, I think we can take a couple more if there's anyone. Yeah, go ahead. You want to start a rumble in here? OK,
let's keep it collegial. No, I mean, yeah, that is a direction it seems like it's heading in. There was another one back there. Yeah, we'll talk about it. I think there's a lot of value that I'm seeing.
I also wonder what we're giving up for efficiency, so that we can afford open access and make our research portable. So it's really, for me, a balance of the benefit of what all of this can do for us, but also what we're giving up if we do it. Is this an opportunity to do that? That's something we talked about.
Probably, yeah. Yeah, so, yes. I'm sorry, go ahead. Do you want to add one final point? Just one thing that has not been said yet: trust and communication. A need for communication among humans working together, and between machine and human.
Yeah, did you want to just say it again? Here, I'll give you the mic. Yeah, they can't hear you in the back. I just mentioned that there should be components of trust and communication, be it interactions between humans and humans or human and machine, so that the processes that are being performed by human or machine are trusted by other stakeholders.
Yeah, really. Sorry, one more. Yeah, go ahead. I think there's a weak link here, in that a lot of organizations now are operating remotely, out of homes and so forth, and communication levels are down. You don't have that interpersonal effect in these kinds of processes, and you're going to have to step it up if you're going to get into these things, to ask the questions and get the answers.
Yeah, I mean, I think... sorry, did you want to say something? I was just going to say, that's interesting. For me personally, I feel like I communicate more because I work remotely. But that's an interesting comment. Yeah, but I think one of the issues is that concern that since we are now remote, and are likely to be remote at least part of the time, even most of the time going forward, how do you really build trusted relationships with people that you may have never met in person?
And we're experiencing some of that now. Right, but it's a fairly tame thing. Like, probably like all of you, I use Grammarly, right? It's helped me write things better, but it's a fairly tame kind of application. When it gets more intense, that's when the stakes maybe shift a little bit.
So, I know we really appreciate the conversation you've had around this scenario. Just really three quick things I want to share with all of you in terms of takeaways for me, and this echoes themes we've heard throughout our conversation. We have a responsibility to prevent AI's worst harms toward human beings before they occur.
We haven't done a great job of that in many ways, but we have to redouble our efforts to do that. I talk a lot with organizations about how you can increase the surface area of attraction to your organization. But now I'm going to be talking more about what is the surface area of harm that exists now, and are we growing that surface area of harm or are we reducing it? That is very important.
I believe, and this may not be a popular view, that we should be slowing the adoption of AI until we have more effective regulatory frameworks, because right now we're operating without them. And there need to be regulatory frameworks around this, certainly in the United States and North America. And then I think one of the things I hope you're taking away from this experience is that in your role as organizational decision makers, you have a responsibility to ask better questions, to ask different questions, to inform yourselves, to be able to operate in a space where someone like Terry doesn't begin to do things inside your organization that could be very detrimental to it.
Operate with the same level of positive intention that Carmen does, but do not allow someone like Terry to become a rogue actor in the organization, much less allow the AI to take on so much responsibility without there being a full understanding of how that's happening. All right. My three takeaways for today: first, who decides on the technology that you will use? Ensure that there is diverse representation on that team, diverse in terms of background, levels within the organization, teams across the organization, gender, race, ethnicity, thought, age.
I'm sure you know where I'm going. Value those ideas that differ from the majority. We all know that diversity brings innovation and creativity, which is what we want to see within our organizations. ASCE has a publications strategic plan, just like all of you. We also have a publications diversity, equity, inclusion and accessibility committee, and we focus on a wide range of topics, including diversifying our editorial boards, advocating for open science, and ensuring that our journal submissions adhere to our inclusive language guidelines.
And we also have an author name change policy. While we're working to ensure that our digital platform is accessible to anyone that wants content from our online library, we are also looking at our vendor selection processes, reviewing the needs of incoming authors and editors, and making our platform friendlier to non-native English speakers. We seek to influence change within the industry, not just within ASCE.
And I will also add something that came up during this discussion: the technology is only as good as the data sets provided to it. What are you training your AI systems on? You should use diverse data sets and build in checkpoints to assess where bias can be introduced into your systems and processes.
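As one illustration of such a checkpoint, here is a minimal Python sketch that compares acceptance rates across author groups and flags any gap above a chosen threshold. The grouping attribute, the field names and the 10% threshold are illustrative assumptions, not recommendations; a real audit would look at many more dimensions.

    from collections import defaultdict

    def acceptance_rates(decisions, group_key="author_language"):
        # decisions: list of dicts, each with the grouping attribute and a boolean "accepted"
        totals, accepted = defaultdict(int), defaultdict(int)
        for d in decisions:
            totals[d[group_key]] += 1
            accepted[d[group_key]] += int(d["accepted"])
        return {g: accepted[g] / totals[g] for g in totals}

    def bias_check(decisions, max_gap=0.10):
        # Flag when the best- and worst-served groups diverge by more than max_gap.
        rates = acceptance_rates(decisions)
        gap = max(rates.values()) - min(rates.values())
        return {"rates": rates, "gap": gap, "flagged": gap > max_gap}

    sample = [
        {"author_language": "en", "accepted": True},
        {"author_language": "en", "accepted": True},
        {"author_language": "es", "accepted": False},
        {"author_language": "es", "accepted": True},
    ]
    print(bias_check(sample))  # the 0.5 gap here would be flagged for human review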
Be strategic, intentional and broad when creating AI use cases. Have an anti-racism mindset. What I mean is that you should review your existing policies, procedures and internal systems to ensure that you aren't adding AI to an already weak foundation when it comes to your DEIA publishing initiatives. On that note, we are at the end of our session. Feel free to contact us if you want to have further discussion related to the session today.
Thank you again for joining our session. Please enjoy the rest of the meeting. And just before everyone goes, a few things. If you want to capture pictures of these notes that Damita has taken, feel free to do so. If you'd like the slides, just let us know.
I'd be happy to send the slides. And I just really want to thank Damita for collaborating with me on this session. She brings so much expertise in scholarly publishing and has made incredible contributions to this. I just really want to acknowledge what she has done as part of this session. Thank you, Jeff.
Thank you all so very much. Thank you.