Name:
Annual Meeting Highlights Virtual Event 2025
Description:
Annual Meeting Highlights Virtual Event 2025
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/dfaaaf99-450d-42ab-8e6c-1abe33c937f1/thumbnails/dfaaaf99-450d-42ab-8e6c-1abe33c937f1.png
Duration:
PT03H43M57S
Embed URL:
https://stream.cadmore.media/player/dfaaaf99-450d-42ab-8e6c-1abe33c937f1
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/dfaaaf99-450d-42ab-8e6c-1abe33c937f1/GMT20250617-144749_Recording_3840x2280.mp4?sv=2019-02-02&sr=c&sig=VIxuvbktMk6%2FmMbJh3bZexH4gvEWLS6KV9s1r0DBYO4%3D&st=2025-12-05T20%3A55%3A49Z&se=2025-12-05T23%3A00%3A49Z&sp=r
Upload Date:
2025-06-18T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
All right. I see, folks, I see more folks.
Welcome, come on in. Have a seat. We're excited to have you. We're going to begin in just a second. Hello and welcome. You should be able to see my screen.
And I think we're at the top of the hour. So, Susan, with your permission, I'll get this thing cranking. OK. All right, everybody. Welcome. I want to say hello and welcome to the very first of what we hope are many, many more annual meeting virtual highlights events. We're breaking ground today with a new format. We're very excited about this.
It's been 12 months in the making, and we're excited to see what the results are here today. I'm David Turner, a content consultant and the head of partner relationships at Data Conversion Laboratory, or DCL. I'm also part of the annual meeting program committee, and I'm going to be serving as emcee today. This event is actually going to be a combination of live discussion and pre-recorded content.
But before I get too much further into the agenda and all that kind of good stuff, let me just jump in and say a quick thank you. We had a lot of great meeting sponsors this year who helped with our annual meeting. And in particular for today, we want to say thank you to the Copyright Clearance Center, which sponsored all of the video summary compilations that were put together for today.
So we're really, really thankful that they did that for us. Just a few other thank-yous: I want to say thank you to all the annual meeting speakers and presenters. Some of you are on today, some of you are not, but the work that you put into the event and into your presentations really helps make all of that go, and really helps make this go. So thank you.
I also want to say thank you to everybody who serves on the annual meeting program committee. It's a big group, a tireless group, and a well-attended group. They are always very, very faithful, and we appreciate all that they do. I want to say a special thanks to Greg Fagan and the others who are part of the SSP highlights subcommittee. This smaller group was meeting weekly trying to put this thing together, and we're very excited about hopefully getting cranking for next year.
I'll also say a quick thank you here: we've got some of our chefs with us today who are going to be taking part, so thank you for taking time out of your busy schedules. And then last but certainly not least, a big thank you to our SSP staff. Wow, the things that they do to make our organization run and make this event happen: pictures, slides, QR codes, a little bit of everything.
Thank you for all the work that you do. We really do appreciate it. All right. With the thank-yous out of the way, let's jump in and hit the agenda quickly. We have a really full schedule today. We're starting off here with this welcome, just trying to get everybody acclimated. We're going to get started at or about 11:10 or 11:15.
And these are all US Eastern time zone times. We're going to get in, and we're going to have a quick recap of our keynote, and we've got our keynote speaker with us today. I think he's here. We're going to have recaps of our plenary discussions, including the Oxford-style debate, and then we're going to follow those up with a time for discussion.
So if you attended the annual meeting and didn't get to participate in the discussion, or you still had questions, or if you're just attending highlights and you've got questions, we're going to give you a chance to jump in. After that, we're going to get into our concurrent sessions and hit the highlights of those. We're actually going to do this in a couple of different parts.
We're going to start with, well, first of all, some live presenters. Then we're going to take a break, and we're going to come back and do some video presentations, just quick recaps of the event, and then we'll follow those up with discussions. At about 1:45 Eastern time, we're going to jump in and share some video content of the posters,
the videos done by the poster presenters. Then we're going to let the chefs in here around 2:20, and they're going to lead us through a quick recap of what was happening with the SSP previews sessions. And then we'll close this whole thing up at or around 2:40 with any additional questions, comments, things like that. All right. So that's our agenda today. A few housekeeping items.
OK, we understand it's a new format, and we do ask for your patience with this. Unlike a lot of webinars, we're going to go ahead and encourage you to keep your camera on. We want to encourage conversation. We want it to feel as much like an in-person type event as we can. But while you keep your camera on, we do ask that you also keep your mic muted.
If you do have a question, you can either raise your hand or use the Q&A box. You can also use the chat feature. We've got people that are kind of monitoring all of those things. Feel free to also use the chat just to send messages to individual people. I will warn you, if you send something directly to me, it may be a while before I see it.
I've got enough things on my screen that I don't always see the chat. But yes, feel free to put things in the general chat, Q&A, et cetera, and Greg, Marianne, and Audrey will all recognize it, and they'll jump in and interrupt me and say, hey, I've got a question. We do have closed captioning enabled. If you don't see the CC icon on your toolbar, you can view it by going to the More option on your screen and then choosing Show Captions in the dropdown menu.
Let's see, what else here. I do want you to feel free to come and go. We understand that, you know, it's four hours in the middle of the day, or late in the day or very early in the day if you're in other parts of the world. If you can't stay the whole time, just feel free to come and go.
But we want you to be as much a part of it as you can. And then finally, if you have any trouble, try to use the chat to send a message to either Susan Patton or to Greg. Did we make Greg a co-host in this? I think you are. So if you can send a direct message to Greg if you have an issue, then we'll try to get you taken care of from there.
OK, all right. So the session is going to be recorded and available to everybody following the event. You're going to get an email that says when the recording is available. And speaking of recorded videos, here is a link to all of the videos that we're going to have from the annual meeting sessions, and the session from here will be added to that.
And then once you've been inspired by all of these videos, the next thing we hope is that you'll go into your calendar and save the date for next year's in-person annual meeting, which is going to be May 27th through the 29th in beautiful Chula Vista, California. I wonder what the weather will be like. Just kidding. We always know what the weather's like in San Diego. So, all right.
One last housekeeping item before we start getting in: just a quick note about our code of conduct. You can actually see the code of conduct by using your phone and that QR code. We are committed to diversity here and to providing an inclusive meeting environment. We do foster open dialogue and a free expression of ideas, which means free of harassment,
free of discrimination, free of hostile conduct toward the speaker. We also ask that all participants, whether speaking or in the chat, will consider and debate relevant viewpoints in an orderly manner. David, you went on mute. Are you muted?
Oh, I think someone muted me. Thank you for letting me know. Or I clicked something. In any case, let's get rolling. We're actually going to start now with a few icebreaker questions. So if you will, take your cell phone, or join at menti.com. I'm going to hold this up for just a second and give you a chance to capture this, and then I'm going to stop sharing and hand it off to Susan, who's going to show the results.
So just one second here. I'll give you another 5 seconds or so. I'm taking it off my screen. All right. And stop. All right. So, Susan, you've got it from here. Yep, I do.
Just give me one second. Let's see. Can you see that?
Yep, all right. Where are you joining us from today? I love the word cloud. You know, I probably should have done this myself. Just pretend that you see Texas on there.
All right. Any more? Any more? Thank you to whoever put Texas on there.
All right, let's hit the next one. All right, let's see who's here that attended the annual meeting and who did not. Right now, apparently we're having a little trouble with the app.
You could try menti.com, and then there's the code there, 807 0893. All right, all the yeses are gaining.
All right. We don't quite have everybody, but that's OK. David, someone is having a little bit of trouble getting to the questions. I had a little trouble too, so it might just take a little bit longer. I just switched to my phone and refreshed. Gotcha, that's all right.
We can go ahead and just keep going here. Oh, yeah, we're giving the Menti code again. OK, Susan, if people have continued to add, can you go back and see an updated slide? OK, cool.
Janet, another there. And another. All right, let's see the next one here. Interesting. Anyone surprised by this distribution?
All right.
Yes let's move on to the next one there. Keep this thing moving a little bit. My favorite question of the day. I appreciate everyone on my side with this. This is why you're my people.
That's about the result we were expecting. Yep, David, that's funny. But we promote diversity of thought here. There will be no canceling people who don't use the Oxford comma.
I know we missed one slide. I'm going to go. Yeah, I think we missed one slide. I'm going to go back one. OK nope. Maybe it's not here. I don't know where it went. I'm sorry.
That's all right. We can keep this going. We can keep this thing moving. All right. All right. Well, with that, I'll take back over again. OK and bup bup bup bup bup.
I'm going to share that screen there. All right, let's get into our first session here for the day. We're really pleased to kick off this whole event with a recap of, well, first of all, our keynote speech. David, I heard nothing but rave reviews about this speech, since I also did not get to attend the annual meeting this year.
But we're excited to have David here, who's going to give us a five-minute overview of the keynote. He's actually helping to raise this laptop as well. Can you set it on the side? Then we're going to have Melanie jump in, and she's going to give us a five-minute summary of the plenary, that was the moderated discussion, and we'll talk through that.
And then we're going to bring on Esther and David together to highlight the Oxford-style debate, which I know has become recently popular, so I'm very anxious to hear how that turned out, and who won. And then what we'll do at the end of that is bring all four back together to basically let you ask questions and have a discussion.
So with that, David, I'm going to stop sharing so we can just focus on your face. And if Susan can spotlight you, that would be awesome. We'll just ask you to take five or 10 minutes or something here and tell us a little bit about yourself, and then give us the highlights of what we missed. Or for those who did attend, give a recap of what they should have learned when they listened to you in Baltimore.
Great. Thanks, David, and thanks, everyone, for including me as part of this, both the main event and this webinar. For those of you who I did not get a chance to meet: hello, I'm Dr. David Shiffman. I'm a Washington, DC-based marine conservation scientist and environmental consultant. I work broadly across the science-policy interface, both doing original academic research and translating that research into forms that the public and policymakers can understand, with the explicit goal of changing policy based on the best available scientific data to protect endangered species.
And that involves understanding science, understanding policy, and understanding how advocacy works, as well as how to translate each of those topics to different audiences. So my plenary spoke a lot about lessons learned from a career doing this, of trying to encourage scientists to better understand how our research is used by end users and how to better communicate with different stakeholder groups.
I do that through a variety of means, including public talks, social media, where I'm one of the most followed scientists in the world, and writing about science for the public through blogs, popular press articles, books, and things like that. And along the way, I have adjusted my approaches based on new data. I have learned a lot from expert colleagues and friends, and it has certainly been an interesting time being an environmental scientist in the Washington, DC area.
So we're having to adapt even more lately. The important point is that academic research and scholarly publishing is always going to be a vital part of this effort, but that's not the end; that's step one. We need to do the research, absolutely, and the role that you all play in that is vital. The importance can't be overstated. But that's not it, because most people, not only most people in our target audiences, but most people, don't read scientific journals.
So a key point is that we need, once the work is published, once the data is gathered according to best practices and published through outlets like yours, to also translate that data in ways that make it understandable, interesting, relevant, and useful to a variety of end users, including government agencies, environmental nonprofit groups, and stakeholder groups who interact directly with sharks, with the ocean, with endangered species.
So throughout, I shared some of my own research and some of my own writings and practices, and offered people tips on how to learn more, and I'm always happy to take people's questions about that. That was about 3.5 minutes, but I think that was about everything I wanted to say. Well, that's great. We're going to save you; we're going to come back to you with a bunch of questions, except that I get to cheat and ask one question here, which is: are you pretty excited about the Jaws anniversary that's happening right now?
Oh, Jaws is a complicated one for my world, because a lot of people's fear of sharks (and in some cases a really crippling fear; there are lots of people who won't go in the ocean at all) can be traced back to Jaws. And sharks are a highly persecuted species and some of the most threatened animals in the world, and that's in no small part because of public fear. But also, Jaws is a great movie.
It launched this whole genre of shark movies that keep people talking about sharks, and in the generation about half a generation ahead of me, many trace their desire to become a marine biologist to Jaws. So I think I'm quoted in 10 or 12 popular press articles about the Jaws anniversary, and I wrote one myself. So it's certainly a busy and complicated time for my people.
I love it, I love it. Oh, well, we'll come back and talk more about that in a second, I guess. Now we're going to give Melanie her opportunity. And hey, Melanie, you heard him. He didn't use all of his time, so you can just carry on as long as you want here. Right, like a whole extra two minutes. You get a whole extra, I mean, yeah.
But you're the boss; you get to take however much time you want anyway. So, you should be seeing my screen again, and I'm going to advance over to your first slide here. Take it away. Great. Well, thank you so much, David. And you can go on to the next one, actually. Sorry, no, this is the right one.
No, you're right, you're right. Go back. All right. So I did moderate the plenary session Thursday morning. It is available to go back and watch in the on-demand library, so I highly encourage you to do it. It was a great session. Just to sort of set the stage: you know, it feels like the last several months have just been filled with chaos and uncertainty and frustration.
And those feelings are not just in our industry; it's across many industries. But rapid change and abrupt, constantly evolving policy decisions make it difficult to run a business day to day, much less really plan for the future. And it can kind of feel like the ground is shifting beneath us. And it's also kind of hard right now not to feel like the fabric of what we're doing is under attack.
So I had four wonderful panelists join me: Gabriel from Elsevier, Elizabeth Long from Johns Hopkins University Libraries, Niko Pfund, previously from Oxford University Press (he's in transition, moving to Yale University Press), and Sarah Tegen from the American Chemical Society Publications. And really, we had a nice discussion and examined whether the values that underpin our work continue to benefit society at large, and how scholarly publishing is already adapting and leading to continue to deliver in this shifting landscape.
So some of the key themes and takeaways were: one, trust, integrity, and quality remain foundational to scholarly communications. The industry must clearly communicate that research is iterative, self-correcting, and subject to change. And I absolutely love this quote by Sarah that's on the screen. There's a growing disconnect between internal values and public perception,
and we need to work to bridge the gap between those things. Also, we talked about fragmented stakeholders and our shared responsibility. Our ecosystem includes publishers, libraries, scholars, funders, institutions, and governments, but our shared values are not always aligned, and greater collaboration is needed to preserve the scholarly record, ensure data stewardship, and support research integrity.
We also had a nice conversation about how public trust in science has dropped significantly since the 1950s, and Gabriel quantified this for us with this quote here. But mistrust can be fueled by economic distress, cultural division, and a lack of communication to the public and policymakers about the value of research. And the panel really emphasized reframing science as a dynamic, people-driven endeavor with real-world impact to engage public interest.
Misaligned incentive structures can certainly stifle innovation and exclude diverse voices, and suggestions were made to rethink peer review and broaden participation in research publishing. Of course, naturally, we talked a little bit about AI, and really, it's sort of both an opportunity and a threat: it can accelerate discovery but also amplify misinformation.
Publishers are investing in AI-driven tools while grappling with concerns over copyright infringement, data misuse, and quality erosion. I think a really important point: trusted publishers must differentiate themselves by emphasizing transparency and curation in the AI age. It's going to get even muddier in terms of what we should believe. When it comes to communication and advocacy, there was a great discussion about effective storytelling.
It's not just about the facts; storytelling is really critical to persuasion, and scholarly publishers must translate complex research into human stories and train researchers in science communication. So that's a great follow-up to David's keynote, because he's done a tremendous job in this area and is doing the great work of teaching others how to do this in many fields.
So I think that was a real theme for the whole conference: you know, we need to be better at communication beyond our own bubble. And we can do that by leveraging trusted messengers like librarians and peer networks to build bridges with the public and policymakers. And then, I really think this sort of hit home: meet people where they are, by using the methods and platforms that they trust.
And it's really encouraging. You know, I mean, social media is not necessarily where we're all the most comfortable, but that is where people are getting more of their information these days. So perhaps that's an area where we can expand as scholarly publishers to extend our communication reach. And then finally, collaboration is such a theme for this session, but also for the conference overall.
And despite disruption, this moment presents a chance to unite our ecosystem. Again, we don't always align on everything, but now is sort of the time to foster new partnerships, for example, publishers and funders coming together, publishers and librarians and researchers rebuilding trust through empathy, transparency, and engagement, and addressing the structural challenges that we acknowledge are there, but doing it collaboratively in a way that makes us stronger together.
Outstanding, outstanding. I suppose I should also ask your feelings on Jaws and the Jaws anniversary. You know, I've never been a fan of Jaws, but it doesn't make me scared to get in the ocean, I guess. But it was really interesting hearing David's take on the media's depiction of sharks and how that's sort of working against the science that he's trying to do.
And it certainly gives you a new perspective on something like Shark Week, which he talked a lot about in his talk. You know, just because it's on TV or on the internet, we need to be cautious, consider the source, and understand that it does not necessarily represent the science. Yeah, yeah. So, I know. Yeah, Jaws didn't keep me out of the water either.
But Jaws 3 may have kept me out of the theater for a while after it came out. So anyway. Yeah, for me, the better... you can't even say the word today, I know. If you went to the STM meeting and you saw the STM trends graphic, we talked about, they talked about, one of the sort of metaphors for the period that we're going through right now being the Sharknado.
And I feel like this is the year of the shark, between that reference and David's talk as well. So, I mean, I think the sharks are well represented this year. I love it, I love it. Well, thank you, Melanie. We appreciate it. Let's get over and talk about the closing plenary, the Oxford-style debate. We have Esther and the other David with us.
We can't have too many Davids on screen at one time, you know. Hey, David, Esther is having some trouble getting in, but I know we have David, so until we can get her in, why don't we just ask David what his take was. All right. We might get a one-sided view of how this debate went, but I'm all right with that. So can you maybe spotlight David? Oh, yeah.
Yeah. Hello and welcome. Hello. Oh, we have Esther now, then. Oh, great. Fantastic. Hello. Just in time. Hi, everyone. Sorry, I had some technical issues. Thank you.
All right, well, we'll let David start, and then, Esther, we'll let you jump in here. And I'm ready to see the fight. Ready to see the fight. Yeah, I think it already happened. But aren't you ready to go again? Yeah, I mean, I feel better after the experience than before. I was a little nervous before, but I feel much more confident in the argument and the stance that I took, given the best that he could throw at it, I guess.
Well, for those of us who were not there, give us the highlights of the setup and what happened. Sure, yeah. But before we jump into that, it might just be worth footnoting that the speaker for the affirmative in our debate could not join us today. So, just so everyone's aware, the resolution of the Oxford debate at this year's SSP annual meeting was that the use of content for AI model training purposes is fair use.
And Adam Eisgrau from the Chamber of Progress argued in the affirmative, that AI training is fair use, and he had a number of arguments for that. And then David argued in the negative, that AI training is not fair use in the context in which we're referring to it. So I just want to make it clear that, unfortunately, our affirmative speaker is not here today.
But I can certainly provide you some of the highlights of his argument. I'll defer to David to kick us off, since he is here and he can share with you some of the highlights of his own argument. Yeah, so from my perspective, when I was sitting there listening to Adam, his main argument was that it's highly transformative: when you tokenize something to train the model, that is transforming it in some meaningful way.
And then my argument was basically three prongs. One is that there's no fair use case that's ever been upheld by an appellate court, no binding authority from any court system, that is similar to the generative AI case, because all the previous ones did not require exploiting the expressiveness of the works, right?
If you use Google Search, whatever, Google doesn't care. If the site is a bunch of gibberish, it'll work just the same. But large language models require expressive works in order to work. So that was the first prong. The second one was that all these major models require pirating the material they train on: the songs, the books, the articles. You know, for all the scholarly publishers on the call here, they take all your stuff without paying you or your authors or anybody else.
Right? And they train. Well, they claim they have to do this, but all of them do it: Anthropic, xAI, OpenAI, Google, all of them. And then the third prong is that every argument you can make in favor of giving an AI company fair use would apply at least as well to a human. So suppose that the models are transformative: they take in the works, and all of a sudden they can create a poem about dogs in the style of Shakespeare.
Super cool. Humans can do that too, right? So anything AI can do, humans can also do. We can't do it at the scale or the speed, but there's nothing they do that's transformative that isn't at least as transformative when done by humans, including how we can have insights. You know, we came up with evolution, identified quantum mechanics, things like this.
Models haven't done any of that. So if anything, we're more transformative. So if we're going to give generative AI companies like Meta a free pass to use any copyrighted works, we should also allow every individual human the same. Right? And I think that would be bonkers, because it would destroy the economy. But that's essentially what they're asking for: either an exception for them, or to expand fair use beyond recognition.
Very compelling. And what was the other side? Yeah, yeah. So, the other side of the argument. And again, just acknowledging, I'm going to share a few points from Adam's argument, and so as not to misrepresent him, I'll sort of state those clearly from his written document.
But as David was mentioning, he was kind of level-setting on a few areas. One, he wanted to level-set that it's the inputs that matter in this issue, as opposed to the outputs. He was arguing that no one owns facts, and copyright doesn't protect facts and ideas, just how they're expressed. And so, of course, as part of that, he's kind of arguing that the expressiveness, as David was mentioning, is actually at the heart here.
He was arguing that, of course, we're focusing more on the facts and ideas going in, and therefore these are not owned and protected. He was also focusing on the social sort of output, the social benefits that could be attained by allowing greater use of the materials, with innovation and progress being kind of the aim of this in his mind.
He sees that there's more harm to be done by limiting the use of content for model training versus the potential benefits. And so he is arguing, therefore, that we should actually give our large language model developers, AI developers, and agentic tool users more range in the use of this content, rather than limiting them, as David sort of mentioned.
Interesting. Any other highlights of the back-and-forth in the debate? Any key moments, things that would be good to bring up for the group? Yeah, well, let me jump to the end, and then, David, I'll be interested to see what you think about this.
So the way the debate is set up is that we take a poll of everyone prior to the debate. Then our speakers, our debaters, present their opening arguments, and then they present their rebuttals. Then we take questions and answers from the audience, and we close with a final argument summarizing each debater's points. And then we take a poll of the audience again and see where everyone fell out, based on how the arguments swayed them.
And so the winner is not decided based on the number of people voting for an argument as much as the number of people whose minds they were able to change. Our debate was sort of interesting in that we couldn't necessarily observe that more voters switched to one side or the other; instead, both debaters actually lost voters from their side.
The speaker in the affirmative lost more voters. David, I think, ended up losing maybe five voters in the end. So, just so everyone knows how the resolution won out: David ended up winning because he was what we declared the lesser loser, which means he lost fewer votes in the end. So we didn't have a gain or what have you, but David did win in that respect.
So, David, I'd be interested to know from you: why do you think some voters may have left your camp, or voted less in favor of your argument at the end than they did in the beginning? Yeah, I like to think that nobody switched; it's just that more people left the room, because he lost about half of his votes and I lost, you know, 3 to 5 or whatever.
If they didn't like what I had to say, maybe. I just, you know, I really don't know. So to me, and obviously I'm super biased here, when Adam went up there, he didn't seem as organized. His argument didn't seem as solid or sharpened. And so I think there were people after the debate who told me that when he went up there, they wanted to be persuaded by him, but he just didn't quite reach that bar.
OK. And, you know, so I imagine that's why a lot of people bailed on supporting him. I don't know if people stopped supporting me for a bad argument; I don't know why, they didn't tell me. Well, David, I know you also used a fair amount of humor in your argument. Yeah. And I think you tried to be relatable to the audience in that regard, I think.
What do you think are some of the, you know, humorous arguments from the other side, the ones in the affirmative like Adam made, that just really don't hold up under scrutiny when you hear them? Yeah, you know, I'm not really sure. The thing that really stood out to me in his argument was him saying that, well, you know, because you have all these things in your brain.
I don't even know. I can't remember a thing. I know he told a story at the beginning, in the opening, that had nothing to do with the debate. And I think he made a couple more kind of jokes in between, but I really can't recall. Yeah. I think for me particularly, it was the sort of analogy you made, David: anything AI can be permitted to do, humans should be permitted to do as well. In which case, if you play that logic out, then, you know, humans should never have to pay for any content ever, or we shouldn't buy any more books, or we shouldn't pay for any more songs or whatnot.
And I thought that was a really sort of poignant way to look at it, in terms of thinking about whether we're going to permit AI developers, who have enormous sums of money, right, to purchase these works. If we allow them to kind of get off scot-free, pretty much, then as your argument says, every individual should be able to do the same thing. And I thought that was a really persuasive moment in the debate from your side.
Yeah, what I thought was interesting is he didn't have a rebuttal to that kind of argument. I haven't heard one from anybody. Like, you know, I first published a paper about it last October, and I haven't heard any valid argument against it. I say valid, and that sounds biased, but there really hasn't been an argument against it.
Yeah, you know, let's see. At the end there was a question and he brought up Sega, which is a case where these programmers wanted to make their games compatible with Sega's console. And so they had to copy all the code from Sega, which included expressive elements, because code is copyrightable. But they only kept the functional part; they provided all of their own expression. Of course, he elided over that and just kind of said, you know, they copied it and that was OK.
Therefore, it's OK if we copy it as well. And then there's the claim, I think it was in his outline, I don't know if he mentioned it verbally, that when the models are trained they don't retain any information; they're only learning the underlying facts and ideas. And yet there are papers that show they can produce verbatim output. So they're clearly memorizing chunks of expressive works.
So I just didn't find his arguments compelling, but I don't know how it came off to the audience, because I think I'm much more in the weeds of it. And sometimes, you know, I don't take those kinds of arguments as seriously just because I know they're not true. And I don't know if I conveyed that information well to the people sitting out there.
Outstanding. Well, I think we'll find out from some of the group that's here today. Before I open it up to everybody, I do have to ask you a shark-related question, which is: what are your favorite pop culture references to sharks? Yeah, back in the 90s there was this cartoon with, like, sharks that had shark bodies but human legs. I can't remember the name of it, but I remember watching that a lot on Saturday morning cartoons.
Jabberjaw? Honestly, I don't. Oh, Street Sharks. That's it. Yes, Street Sharks. Oh, yeah. Etzer, what about you? I would say my favorite reference to sharks definitely has to be the Sharknado series. Yeah, that's popped up rather recently.
I mean, the idea of a shark. I live in Florida, so storms and superstorms, hurricanes, tornadoes, waterspouts are sort of a part of summer life. And so the idea of a hurricane or a tornado comprised of sharks sounds interesting to me. It sounds genius. Exactly. I want to be a part of that.
I love it, I love it. All right, well, with that, let's bring Melanie and the other David back on, and I'll ask my panel of friends here today, Greg, Maryann, Audrey, those of you who are watching the chat and the Q&A: what questions do we have? Oh, and please, if you've got a David question, please make it a David S. or a David A. question. And I'm just going to assume there are no questions for David T.
We do have a question for Shark David. Shark David, I like it. Shark David. I've been called worse. How do you think preprints are affecting the public and your efforts? Honestly, not much. I think preprints can be helpful for policymakers, for sure, especially for really, really time-sensitive stuff.
One downside of the scholarly publishing model is it can take a year or two or three from when you have an important result to when it is formally peer reviewed and published. And for some endangered species, that's too late to be helpful, and preprints can help get around that. But the public doesn't really read them any more than they read regular journals. Preprints have a role to play in all of this, but I think we as a community should probably do more to explain to people that preprint means it hasn't gone through any quality control screening yet, which means it might be just cuckoo banana pants nonsense.
And in my world, it often is. There are people who publish preprints with their theories on shark behavior that are not informed by data or evidence of any kind, and then they never actually submit the preprint for peer review, yet they claim it's published. So it's a bit of a mixed bag. But in terms of public understanding, I think it plays a very limited role, at least in my world.
All right. What other questions do we have? I had a question for David Atkinson. I'm sorry, I didn't read your paper, but I do see the link there, so thank you, Susan, or whoever put it in. Do we know definitively how the large LLM companies got the scientific literature?
Did they go to a place like Sci-Hub or. They did, yeah. So we know this from two lawsuits so far, one against Anthropic and one against Meta. And I'm sure, I mean, I would bet my entire salary and life savings, that Google and OpenAI did the same. So in the Meta case, for example, they used LibGen, which also pulls from Sci-Hub and other shadow libraries, over 250 TB of data.
So it would be more than all the published works in the Library of Congress, like 20 times more than all the published works in the Library of Congress. So yeah, it's definitively proven they downloaded it, or torrented it, which means they downloaded it and also redistributed it as they were downloading.
Melanie, I've got a question for you. Could you expand a little bit on your last little bit, which talked about collaboration being the opportunity here in this time of disruption, and how you see that leading to unification, et cetera? Yeah, I mean, I don't know that there's a clear path forward for bringing all of those groups together, but I know, you know, at least SSP and STM are working with some of the library associations as well.
And, you know, trying to figure out, how do we come together to do that. I think there's so many different facets to what's happening right now that figuring out exactly what areas to focus on will be the challenge. But there is an increased interest from those parties to work together in a way that I think they really haven't thought they needed to in the past, and I think that is important.
There was a point in the plenary, you know, where somebody said, well, how come there's not a funder and a researcher on the stage? And they're right. I mean, we need to do more outreach in those areas and bring more of those perspectives into these discussions. All right.
What other questions or comments? Those of you who are attending, just raise your hand if you've got a comment that you'd like to make; that'd be great. I'll kick off a couple comments. David, I was so pleased with David Schiffman's talk about making scientific literature accessible and understandable to people who aren't in our industry or the researchers themselves.
And it's a conversation I have right now with my friends who are more in academia. They're not in scholarly publishing, and they complain that, you know, our scholarly publishing world is just not approachable. So it was so great to hear you speak to that. And just on a personal note, my eldest son is studying, and he's very concerned with getting a job and everything.
And I did send him your keynote, saying, look, here's an example of the importance of doing the research but then also being able to speak about it. So thank you. Oh, great. Hey, David. David Turner. Jennifer Alberghini has a question. Well, come on, Jennifer, let's take her off mute and let's hear what she has to say.
Hi, can you hear me? Yeah. So this is for David and Etzer. So one of the things I was interested in, and maybe you could give a little background on how it came out this way, was that the question for the debate was: is AI fair use? But not so much whether AI can be used as such. So my question is: if these things are being paid for, if creators are being compensated, do you still see the use of AI and its importance for the future in general?
Or is it just a question of whether these people should be compensated for having their work used? And also, how does that affect things? Because, like you mentioned, using Shakespeare versus using something from a scientific journal, things of age and things with copyright, does it differ with the times?
And, you know, what rules apply, and what things are protected and what things are not? Does that make sense? Sure, yeah. You want to go first? Yeah, sure. First of all, thank you for the question. So I think that framing piece you mentioned is kind of spot on, which is that we wanted to make sure this was about the unpermitted use of these works, and, by virtue of it being unpermitted, it also being uncompensated to the originators and the creators of that work.
And so that was the resolution: is the end, being to train models, you know, a good enough reason to allow one to say, OK, I can use this work without permission or compensation? And the question of whether the use of these works, and the social benefits down the line, would be valid if they were permitted and compensated for is a sort of separate question, which I'm happy to opine on, of course. I do believe that is a valid use of these works, just as it would be for anyone, right, in any context. If you utilize works that have been copyrighted and you're using them with permission, and you're compensating, and you're transforming or creating derivative works beyond that, or using them for teaching and research and education, et cetera, then I believe, certainly, that these are valid and socially beneficial reasons to utilize others' works.
But again, that was kind of outside the scope of this question; this question was really about the unpermitted and uncompensated uses. But of course, I'll let David jump in there. And of course, I'm happy to opine on the second piece of what I see as your question if we want to. Yeah, yeah. So yeah, I agree.
I think that's the issue: whether it's authorized or not. So if the authors or copyright owner opts in to doing it, if they give permission to do it, that's fine. One thing people should be really aware of, though: if you license your book or whatever it is to an AI company, you'd better make sure you have limitations around it. Because is it a forever thing? Can they use it for all models in the future, or just for that model?
And what a lot of people don't realize is that, like, GPT-3 and GPT-4 are different models. It's not like they just built on GPT-3; GPT-4 is different from GPT-3. Those are different models. So if you're going to license it for training, you should probably license it so they can use it for their next model, and that's it.
Then they have to license it again. Otherwise, they just get a free ride for the rest of their lives on your work. And then I think you had a question about using books that are in the public domain, like Shakespeare. Copyright law is a little bit wonky, but generally if it was published before, like, 1929 or 1930 or whatever it is, it's public domain. So those are all OK.
Project Gutenberg is a website you can go to and get all those books. You can download them for free; nobody's going to have any issue with that. Totally fine. As far as using scientific works versus, like, creative works under the fair use factors: typically the science things are more ideas, you know, as opposed to expression.
But that's only one factor. The other thing is, these science things take a lot of effort; you put in money and time. Like, I'm writing a book now for Cambridge University Press, and it takes a lot of work to dig up this information and form it into an idea. Maybe it's not creative expression the same way a novel is, but, you know, this is my labor going into it. And it's not fair to say that they can compete with me by just taking all the information out of it
and using it to supplant me, you know, through a Google or a chatbot search. Yeah, and I don't know if you had another part of that question, or if Etzer wants to jump back in, but all right. Well, we have two more hands up, Paul Rubino and Jonathan Mallet. We'll go to you first, Paul, and then after these two questions it'll be time to move on to the live concurrent presentations. Yeah, thank you very much, David.
And I'm sorry Adam was not here, because I'd be all over anything he said. But, you know, we recently had a victory, finally, in the courts with the Thomson Reuters v. Ross Intelligence decision, right, where the court, I believe, seemed to focus on the commercial impact of this ingestion of content by AI. Do you think that's going to be the big thing? I'd like your opinion on that.
But also, you know, do you think the commercial impact of AI getting in between the content owner and its audience is going to be the big winner here in the copyright fight? Yeah, so one, I think that case probably has less weight than we would prefer, because the court specifically says it's not about generative AI, so it doesn't neatly map onto OpenAI and Anthropic and everything else. But yeah, I think factor four, which is the market effects, is going to be the biggest thing.
So the judge in the case, when they had the hearing a few weeks ago about summary judgment, which is when he might decide up front whether or not it's fair use, said he thinks it's probably highly transformative, and generally if it's highly transformative, then it's fair use. But he's also very concerned that it messes with the market. Because if it floods the market with a bunch of rip-offs and dilutes everybody else, then you remove the incentive to create and share, which is kind of a big deal for the economy.
So yeah, I think factor four, the marketplace, is going to be where people butt heads, and they'll argue about whether a licensing regime is possible and things like that. Yeah, and I'm just going to respond to something you just said, and I think everybody needs to be aware of this: people are selling their souls and letting AI ingest all their content for training for big bucks.
The problem is the derivative products later on down the road and how that content is going to be used. And I think there's not a lot of transparency when it comes to AI and its use of content. All right. Jonathan. Yeah, Jonathan had a question. Yes, it's for David Atkinson again. Thanks very much for taking part in the debate.
I thought it was really very interesting. So my question relates to why a different license is needed for AI compared with, say, a single user collecting together a bunch of articles. If they had access, if they paid for a subscription, they could do that. They could store copies, they could print copies, they could make notes, they could have photographic memories, all of that sort of thing.
They could then produce something like a review article that summarizes those, and that wouldn't be a copyright violation. And all of that could be done just by paying once to access each thing. So now, if an AI system does that at a large scale, why does that necessarily become a copyright violation? Doesn't that depend on what the AI system produces, the output it produces, in a way analogous to how it would for a human?
OK, so I think what you're saying relates to earlier, when I said you'd want to limit how they can use it to just one model. Is that what you're getting at, versus being able to use it forever after? Well, a human could. You know, if you're a scientist, for example, and you have access to content through your library, and you read it and you make notes on it, and you print it out on your printer and all this sort of thing, and then you use it to write a review article.
You might come back two years later and use the same content, which you still have access to through your subscription, to write a second review article or do more of this sort of stuff. So what is it that makes the AI systems special? I mean, obviously the scale of it is very different from just a few humans doing it. But just as a matter of principle.
Why aren't we worried about the output rather than what they're doing to train? Because we don't worry about how humans train on the content. Oh, yeah. Well, I mean, output is certainly important, but like Etzer said at the beginning, we ruled that out as part of the discussion, because I think everybody's in agreement that if there's infringing output, then that matters.
The inputs matter because, for almost everybody, you pay directly or indirectly for the stuff that you learn from, right? I can't tell my students to go to a pirate site and just download a pirated book. They have to buy the textbook, right? It would be illegal if they didn't. So I think it's kind of a fallacy to think that we mostly get our information for free, even on websites where it's hosted for free.
People put it on there because they want to build a reputation, they want to get you to click on ads, or there's a subscription, a license, something. It's not really free. So I think that's kind of the false premise there. And the reason why you would want to limit it to one model, whereas a human can use it forever, why wouldn't I want to just pay $30 one time and have access forever?
It's because you should think of these models as, like, individual brains. They should have to pay every time they want a new brain, the same way every new human has to, right? I can't just buy a book and then photocopy it forever for all my friends just because I licensed it.
Everybody has to get their own copy, and the same thing would apply to the models. Awesome. Well, thank you. I think, everybody here, let's just give a round of applause to our speakers and presenters. Thank you so much for participating in this, especially with this new format. You got to be the first trial for us. And so thank you.
Thank you so much, David. My son goes to Texas A&M, and so, you know, I was tempted to mute you during this presentation because of that. But we're going to let that slide. So thanks. In any case, it was great. Thank you, everybody, for joining. Let's move on and talk about our next part of the highlights, where we're going to highlight the concurrent sessions.
Right. And so one of the key parts of every SSP annual meeting is, you know, the large number of individual concurrent sessions. And if you're like me when I've attended in the past, you struggle to pick just one, since they run at the same time. So if you did attend in person, this is your chance to learn about some of the ones you couldn't get to. And we're going to break this into three parts.
We're going to start here by having three live presenters; each is going to give us about a five-minute recap, and then we'll open the floor for questions at about 12:15-ish Eastern time. Tim, I think you are up first. So if we can spotlight Tim, I'll stop sharing here and let you just talk to us and give us a recap of your session. Yeah. And I am going to.
Can I share my screen or not? If Susan can make it happen, then yes. I think you can share. Can you? Do you have it? Do you see the Share button? Oh, it just worked now. Wonderful. Thank you. OK, great. In that case, let's kick off.
OK, so hi, everyone. I'm Tim Lloyd from LibLynx. I'm going to run through at somewhat of a pace, because trying to compress 40 minutes into five minutes is pretty challenging. And I'm presenting on behalf of my co-presenters, who you can see listed here. So our session was titled Addressing Research Integrity with Identity Verification, because we were presenting and discussing the results of an 18-month investigation by an STM working group that Ralph Young and I were members of.
Our group published two reports. The first report was published last October, and it provides a good overview of the background to the issue, which is basically how and why research integrity is compromised by a lack of identity verification. The second report on the right was published in March and contains a recommended framework to address the issue, and our session was sharing and discussing those recommendations.
You can find both reports on the STM website, and I will cut and paste links into Zoom in a little bit. If I try and do it now, stop me. OK, so what's the problem? The problem is essentially that research fraud is on the rise. We all know that. And our detection strategies to date have almost exclusively focused on checking the content submitted.
But we did a survey of 12 scholarly publishers last year, and it showed that some of the most severe causes of research fraud were actually driven by identity manipulation. There are three particular examples I've pulled out here: suggesting fake reviewers, fake guest editor applications, and claiming fake co-authors. Yet as an industry we're doing very little to combat it, with most editorial systems simply accepting Gmail addresses for identity verification.
And this is despite the fact that we're incurring very significant costs across our industry to resolve research fraud issues after the fact, once publication has occurred. The researcher identity trust framework we're proposing is based on four stages: an initial assessment of how much trust is enough; a verification step to collect evidence of a user's identity; an evaluation stage to determine how much trust we now have based on that verification; and then a decision point on whether to give them the role they've requested, whether it's an author, a reviewer, or a guest editor.
The framework is based on a series of really important principles. Inclusivity: we want to ensure that no legitimate researchers are excluded by this process. Proportionality: no more effort is required than is necessary given the context, and that context will differ across publishers and publications. Privacy: minimize data collection and process it transparently.
Feasibility: we have to have simple, consistent, scalable self-service workflows here. And lastly, accountability, to make sure we prevent abuse, maintain trust, and create feedback loops. This will all become a bit more sensible as I run through these slides. So firstly, assessment. Not all situations involve the same risk or require the same level of trust.
So taking cash out of a machine, traveling internationally, or launching nuclear weapons are not the same, and the same is true for publications. Publishers need to decide what level of trust is appropriate for different types of content and for different roles: a 17th-century musicology journal may not have the same fraud problem as a leading-edge AI journal. You can think about verification as enabling publishers to verify both the identity of an individual (so, yes, I'm really Tim Lloyd) and their academic legitimacy (so, yes, I really work for the University of Cambridge). Different verification methods provide different levels of reassurance along those axes. Unverifiable email addresses like Gmail or Yahoo don't tell us anything about the user other than that they happen to control that email account, and the same goes for a bare ORCID account. So these offer no to low trust.
Whereas, in comparison, if you're logging in with an institutional email address, or with an ORCID account that contains trust markers to validate your claims, that can offer higher levels of trust. So again, you've got to pair authentication methods with the level of trust you want. Our core recommendation is to introduce user verification on all editorial platforms. More specifically, we recommend that manuscript submission and peer review systems stop trusting users on the basis of unverifiable email addresses like Gmail.
Now, it's OK to use unverifiable email addresses for correspondence, but that just needs to be coupled with an additional step for identity verification, which you might only need to do on an infrequent basis. Secondly, we recommend publishers and research institutions contribute trust markers to increase the verification value of ORCID records. So, for example, an author might claim that they work for an academic institution.
Well, that institution can actually validate that with a trust marker to say it's been confirmed, in which case it has a much higher value for verification than me just claiming I work for Cambridge. And lastly, we recommend that we work together as a community to improve this framework by recording and sharing anonymized data, and using insights to create a feedback loop. We invite you to study our recommendations, share them with colleagues, and provide us with feedback as part of our community consultation process.
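[Editor's note: the four-stage flow described above (assess, verify, evaluate, decide) can be sketched as a small program. This is purely an illustrative sketch: the role names, trust scores, and thresholds below are invented assumptions for the example, not values taken from the STM working group's reports.]

```python
# Illustrative sketch of a four-stage trust decision: assess how much
# trust a role needs, evaluate the trust contributed by verified
# evidence, then decide whether to grant the role. All numbers here
# are hypothetical.

# Trust value each verification method contributes (assumed values).
VERIFICATION_SCORES = {
    "free_email": 0,           # Gmail/Yahoo: proves only control of an inbox
    "bare_orcid": 1,           # ORCID iD with no trust markers
    "institutional_email": 3,  # address at an academic domain
    "orcid_with_markers": 4,   # ORCID record validated by an institution
}

# How much trust each editorial role requires (assumed values; a
# low-risk journal might set these lower than a high-fraud-risk one).
REQUIRED_TRUST = {
    "author": 3,
    "reviewer": 4,
    "guest_editor": 6,
}

def assess(role: str) -> int:
    """Stage 1: decide how much trust is enough for the requested role."""
    return REQUIRED_TRUST[role]

def evaluate(evidence: list[str]) -> int:
    """Stages 2-3: sum the trust contributed by each verified credential."""
    return sum(VERIFICATION_SCORES[e] for e in evidence)

def decide(role: str, evidence: list[str]) -> bool:
    """Stage 4: grant the role only if accumulated trust meets the bar."""
    return evaluate(evidence) >= assess(role)
```

So under these assumed scores, a reviewer applicant with only a free email address would be refused, while one with an institutional address plus even a bare ORCID record would clear the bar.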
Four seconds over, and I'll put the links in the Zoom shortly. Thank you. Thanks so much, Tim. That was great. Laura, I think you are up next. Sure. Can everybody hear me? Yep, all right. Thanks for having me. So I'll be really quick.
I'll just do a quick overview of what we were hoping to talk about in our session. So I put this session on science communication in back in November, I guess at this point. Obviously a lot has changed since then, but I think the events of the past few months have demonstrated that figuring out effective science communication is more important than ever.
So the panel was David Schiffman, who you've already heard about, a fabulous speaker; Meagan Phelan from AAAS; me (I should mention I'm the head of government affairs for Springer Nature in the US); and then Dr. Fanuel Muindi, who actually studies science communication at Northwestern. So it was a really nice way of looking at it from the individual researcher level.
You know, I talk to policymakers about science, Meagan communicates with the media, and then Dr. Muindi actually studies how we do it in the abstract. I'd mentioned we were going to talk about AI; we did not get into that at all in the session, which, you know, I think was kind of OK. That's a huge topic and it might have just taken over the whole thing. But we did focus a lot on how to communicate with the general public.
There was wide agreement that the public needs to better understand the process of science, so that retractions and mistakes aren't so jarring: there isn't, like, a definite truth; it's the search for truth, right? I think one of the speakers compared it to a "get ready with me" video on TikTok: you think about it more as, like, a "do science with me," so that people can see the process.
People can see where, you know, changes are, where mistakes are, where different scientists might approach things in a different way, that kind of thing. Let's see, there was also discussion about how the use of visuals and non-text communication is important, like graphics, videos, that kind of thing. And interest in how to expand that. The audience had a good question about how resource strapped organizations can approach a lot of this.
Dr. Muindi provided a lot of resources that I unfortunately don't have the links to right now, but I think they're in the video, and he'd be happy to provide them. Dr. Schiffman suggested that you can reach out and partner with, like, graphic design courses at colleges or somewhere else that can kind of work on it as part of their class. My recommendation would be to really focus on your audience. You know, think about whether you're
trying to communicate with the media, with the general public, or with policymakers, and that's probably where you'd want to focus your resources. And I think that's about it. I'm happy to answer questions. I'm not sure if David's still on here; he could talk more specifically about what he did. But also, if you saw his opening statement, I think you probably got a pretty good sense of our discussion as well.
Wonderful, Laura. That's great. Thank you so much. Hang on, because we're going to get to questions. Oh, sorry. Yeah, yeah. Just a second. So we're going to first of all go over to Jennifer. Jennifer, are you here somewhere?
I am here. Can you hear me? Yes, we can hear you, and we can see you. Welcome, welcome, welcome. Also, I have to say, I'm Jennifer Rogala. I am associate director of publishing at Wolters Kluwer Health. But I also just wanted to point out, to all my friends in the room, my colleagues: I'm coming to you live from Wolters Kluwer headquarters in Philly.
We're enjoying some pizza and some great conversation with you all, so I just wanted to acknowledge their support. I dragged them all in here. I am here to talk about the session Real World Results on Peer Review: Pilot Study Indicates a Solution. Our fearless leader on this was Jenny Pitman of Elsevier; she put this great panel together. At first I wasn't really sure what I was doing on this panel, but she needed an old person who wasn't afraid to bring it all home.
So I guess I was the scholarly publishing elder here. I was joined by Jeff Christie of Aries, specifically Editorial Manager, and Chris Petecof, editor in chief of the Journal of, excuse me, it was the research and neurobiology journal. And we heavily acknowledge the work of Bahar Mehmani, also of Elsevier. She is the peer review innovation lead, and she's also currently the vice president of EASE,
and the incoming president of EASE. I've been seeing a lot of exciting stuff they're doing over in Europe lately. So the presentation covered structured peer review and creating some unified core questions. And I love the way Jenny laid the whole presentation out for us. She set the stage with how Elsevier set this program up. Jeff Christie, and this is very interesting, always be nice to everyone in publishing, because he and I worked together many years ago at Dartmouth Journal Services, so it was really fun to be together on a panel.
He is the person who knows everything. Editorial manager over there at ars, and he explained the logistics of how this all worked. Chris gave the EIC perspective and how he oversaw the implementation of this actual pilot program and from all accounts, successful. But of course, like anything else in publishing, still organic and evolving. And as I mentioned, I brought it home with talking about thinking about peer review and how complicated it is.
You know, there's a lot of ways we're all talking about how we can use AI to improve peer review processes, but at the end of the day, that human warm touch is irreplaceable. Our peer reviewers are a precious resource. It's really important that the editorial office and the editorial board all understand who their reviewers are. Consider the peer review system that you want to use.
Is that Editorial Manager? Highly recommend. Is that ScholarOne? You know, they're all great, but you need to make sure that you're navigating them well and making use of every single capability. Then there's consideration of peer review models: open peer review. What does that mean? How do you define that? Is that something you want to do?
Double anonymous. Triple anonymous. Single anonymous. You know, making sure that you really understand what all the options are and what fits your community best, because it's not one size fits all. Every journal is a fingerprint. Every journal is different from the next journal, and you need to make those considerations also.
Peer review recognition. I always am a big fan of that. I'm not talking about paying a reviewer $250 to review a paper. What I'm talking about is just, you know, giving good recognition and what that might look like. And again, at the end of the day, knowing your community, your editors, your reviewers, your authors and really, you know, understanding what their tolerance is for experimentation and the resources that you have.
So with that, thank you to my co-presenters, and thanks to everyone who made it here today. Really appreciate it. Wonderful, thanks so much. And, Susan, if you could get Tim back on and Laura back on, where we can have all three of them on the screen like you do, I'll throw out the first question. Were there any questions that you remember? And this is a question for each of the three of you.
Was there a particular question or comment at the annual meeting after your sessions that you remember that would be good to bring up to this group? Did I ask that in the most confusing way? I think I did, sorry, but think back to when you finished your presentation and the people came up and said, hey, I have a question for you. What was a memorable question?
And how did the discussion go after that? Well, I have to confess that my husband unexpectedly showed up at this presentation because it was in my hometown, and so that threw me off for the whole day. So, you know, I never get stage fright about anything. But that day I was sweating a bit, and he wore a pink shirt, so that kind of threw me off my game. But I will say the most meaningful conversations always are the ones that are not asked in front of the whole audience.
It's the folks who come up at the end and they want to talk to each one of us singly. They gather our email addresses and they reach out and they're like, how do you really do that? And I think that, I think Jenny just did a really spectacular job of putting this presentation together. I did encourage everyone in the office to reach out to Jeff Christie of Aries Editorial Manager, and to make best use of him always.
But, you know, find your Jeff. We had a lot of after-presentation conversations about that, about finding your resources and making that work, and a lot of conversation with the editor in chief, Chris, who was there and generous enough to fly in just for the day to make that presentation. So, Laura, you want to go next? Yeah, you know, I was going to say so.
We had a reporter from Retraction Watch come up at the end, and Megan and I had a really interesting discussion with her around, you know, the difficulty of retractions. Right, how do you explain them to the general public? You want to encourage them to correct the record, but there's also kind of a shaming aspect, obviously. And just kind of the real difficulty around the communications around them. And, you know, obviously universities don't want a high number of retractions.
Journals don't want a high number of retractions. But again, they do serve an important part of the scientific literature. So obviously we didn't come to any great conclusions that I can share, but I just throw it out there because I think it's something that is worth continuing to discuss. Thank you. All right.
The main question we had was about how do we ensure we don't disadvantage researchers in parts of the world that lack the infrastructure we have here. It's relatively easy for those of us who operate in North America or Western Europe to talk about authentication mechanisms, things like signing into your institution, using institutional email addresses. Elsewhere, that is nowhere near as easy.
And particularly, you know, there are different technologies, there are different cultural approaches, there are different workflows, particularly in large countries like China or India, which publish differently. And so understanding, you know, how they operate and how we can make sure we design systems that don't disadvantage them is a major part of the work.
And, you know, one of the ways I try and summarize the STM recommendations is that you should think of it as a journey, not a destination. That we need to work together to figure out how to make this work for people, so that we balance our desire to stop research fraud with, you know, helping people publish their research. Awesome, all right. Marianne, Audrey,
Greg, what comments, hands raised, et cetera, do we have? I actually added one myself. This is for Laura: how might retractions be presented in a way that encourages continued growth and development of knowledge? Like, how can we do it better, in your opinion?
Yeah, so this is falling a bit outside of it, but we've had discussions. I mean, I've had discussions with our integrity team about trying to flip the narrative a little bit. I think there's something that we're just throwing around called, like, you know, a "doing the right thing" award kind of thing. When Chris Graf, who's the head of our integrity team at Springer Nature, presents
on this, he actually presents three examples of researchers at different stages of their careers who stepped forward and said, you know, I made a mistake, I need to fix this. You know, of course, you can also argue why that might not work. You don't want to incentivize retractions, but you also don't want, you know, you don't want to punish people.
Right, so again, still no definite answer, but I think it's something institutions and publishers need to keep thinking about. Thank you. Well, I have a question. Jennifer already touched on this a little bit, but how did you go about putting your panel together?
I mean, what was the thought process, the kind of people you wanted to have on the panel? Is that for each of them? Yes. Yeah, I can start. So actually mine came about because I was familiar with Megan, who's at Science; her policy pack, I thought, was a really neat idea. It was something that she put together at Science, to publish the science, send it out in newsletters, in a way to make the science they're publishing more digestible for policymakers.
And then, actually, David was a reference from one of my Scientific American colleagues, who said, you know, if you want a researcher perspective, he's somebody that's been doing a really good job on this. And then Dr. Muindi actually published in Nature, like, a perspective on science comms. So it all came together very quickly. Within like a week, I was able to have a speaker at every level that I wanted, which I thought worked out really well and provided some really good balance to the panel.
So yeah, like I said, Jenny Pitman put this together for us, but I know that she really wanted it. It was leaning a little bit Elsevier heavy, but not intentionally. And something that Tim said really brings this home for me, too. It's a journey, not a destination. And so that's what we were really looking at explaining as well.
You know, you don't just set up a peer review pilot or anything to do with your editorial office and then you're done. It's like living in your home: it's constantly getting remodeled. You're constantly doing things, big and small, to adjust and make it, you know, as welcoming and inclusive as it can be. I mean, I think that's true for DEI, ethics, open access, really, you know, anything that we're working on in an editorial office.
So I think that Jenny was just trying to put together a really big-picture view, and then at the end really wanted something people could write down and bring home to their, you know, editorial office or wherever they were going back to, some action items to get done at the end of the day. In our case, our session was moderated by Carolyn Sutton from STM, because they were sponsoring the work.
There were two of us from the panel, but we deliberately invited Teresa Petitto from APE as a completely independent person with a lot of experience in editorial, so that she could actually kick off the session by talking about her own experience of research fraud. And, you know, I didn't have time to do that, but if you're interested, go and look at the video on demand and hear her, in her own words, talking about how she's encountering research fraud driven by identity verification issues.
And then she also bookended the panel by talking practically about how she could tackle these recommendations, what it would look like to actually implement them for real. All right. Any other questions out there in the group? So we do have one from Debbie Chen for Jennifer: have you seen any good non-financial ways to recognize reviewers?
We've published a list every year. I think the most fun thing that I ever did is I once had a reviewer rodeo, because a meeting was in San Antonio, Texas, and we had the whole room wearing hot pink cowboy hats that lit up. We had the lights kind of dimmed, but we recognized these folks. We talked to them, we heard from them. And then also we had the best of the reviewers present to the rest of the group.
We had early career people there. We had every editor, even from our competitive journals, there, and people just really felt seen and heard. The room was packed, people were standing in the back. And I think it's just a matter of, I think, what we all want: just to feel seen and to feel heard, and that our hard work is appreciated. You know, we all do all of this work for SSP, for instance; we're not getting paid, but SSP and our community makes us feel really good and makes us feel really valued.
And I think it's the same thing with peer review, finding ways that are genuine to recognize people. I could go on and on, so feel free to email me. I have a question for Tim. What do you see as the responsibility of, like, the institutions that employ researchers in kind of participating in helping with identity verification?
And could there be some collaboration between publishers and institutions, research institutions, that are employing these researchers? Great question. I'll pay you later. Thank you. Yes, absolutely. And the key way, honestly, is to contribute trust markers. You know, one of the things that we as a community have done is create an identifier, ORCID, to make it simpler for authors to use editorial systems.
The huge problem with ORCID has always been that anyone can create an ORCID. I could create one and create a fake profile that I'm a professor of chemistry. Institutions can contribute trust markers. It's a very simple thing to do. There's a little bit of technology, but essentially what it will mean is that it will validate the fact that these people have actually done research at their institution, which means that we can all, as a community, start using ORCID iDs to do a lot of this verification. That creates a much lower barrier for people anywhere in the world as well, so it not only helps build trust as a community, but also lowers the technical barriers for any author anywhere to be able to do this.
So that's the key one: yes, institutions, start using trust markers. Thanks, Tim. All right. Any other questions? We have about two minutes left. Any others?
So, Laura, I think you mentioned that you did not get to AI in your group. I'm wondering how you managed to do an entire session without the topic of AI coming up. Yeah, you know, because I think science communications can just cover so many different facets. Like, you focus on the public, you focus on media. You can focus on policymakers, like science education in, you know, K through 12.
We talked about different ways to do it. Video, I mean. You know, Springer Nature is thinking a lot about how AI can help with science communication. Like, you know, what tools can we provide our authors to help them summarize their work for all these different audiences. But what we didn't get into at the session is, you know, I think there's also plenty of ways that AI can kind of hinder science communication.
If you look at the kind of images that can come out, people can't tell the difference from a real image. There's already obviously integrity issues sometimes coming up with images, and I think that's only going to continue. So I guess it was kind of nice that we didn't get into that kind of maybe dark, futuristic area, but maybe that's something that could come up in a separate SSP conference or maybe next year, right?
Like how AI intersects with science communication. And Jennifer, did AI come up in yours at all? What cool new things is AI doing as part of peer review? You know, I honestly cannot remember. So definitely go back and watch the recording. But I would say, though, for AI, I think AI is very important to the peer review process and can introduce efficiencies that preserve your precious resources, but that warm human touch cannot be replaced.
I don't think, at least in our lifetime. It's just knowing how to use that AI wisely and fairly to preserve the time and efforts of our most valuable resource, you know, peer reviewers and editors as well. Also, it's an author service to get a timely decision to an author. Like, if AI can provide a quick desk reject decision but then inform a great cascade transfer opportunity.
All for that. But again, human warm touch is so very important. So true. All right. Well, thank you so much to all three of you for your participation today. I know it is not an easy thing to recap a session in five minutes that went 40, so we do appreciate you doing that for everybody on the call.
We're now at our break time. So we're going to leave this up and running. But just please feel free, you can stay on if you like. We're going to have a little slideshow going, or you can take off. But if you do leave, stay, whatever: turn your mic off in case you inadvertently say something that you don't want broadcast or recorded for all time.
So anyway, we'll see you guys back in 15 minutes. Well, 14 at this point. Thanks, everyone, and thanks for arranging this, annual meeting program committee. It's great to have this opportunity. So thank you. Thank you. Thanks for coming.
I was going to say welcome back, but I'm having trouble getting my video restarted. Just a second here. There it is. All right. Welcome back. Little by little, we've got folks showing up here.
Just to pick up where we left off: we're in the middle of discussing the highlights of the concurrent sessions. And so for part two of this we've actually got some video recaps. Some of the presenters and organizers were kind enough to create a recap for us that we're going to share here by video. And so if you were a part of one of these sessions that is highlighted here, we want you to make sure you're around at the end so that we can do some discussion.
With that, I'll go ahead and start this up, and if I can actually hit... My name's Megan McCarthy, and I moderated the panel Prevention of Systematic Manipulation at Scale: Setting a Proactive Strategy. During the first round of concurrent sessions on Thursday, our panel brought together stakeholders from across the ecosystem to share strategies and solutions aimed at preventing research integrity threats rather than addressing them after publication, when investigations can be time consuming, slow and costly, causing untold damage to the overall research pipeline, researchers' careers, trust in the scholarly record, and patient health.
In developing our panel, we intentionally brought together a variety of perspectives to reflect the collaborative approach to research integrity at play across research and publishing. Those perspectives include views from a scholarly publisher focused on a specific discipline, a larger publisher with a portfolio spanning many disciplines, a librarian and institutional perspective, as well as a researcher and independent research integrity sleuth perspective.
Some of our key takeaways include: One, trust markers need to be thoughtfully added to the ecosystem, whether that's in the form of stakeholder identity verification or other more robust manuscript screening workflows before and after peer review. Two, technology can address the challenges of scale. No one tool, though, is a magic bullet. As journals and publishers, we need to consider how collections of signals prompt the right action by an integrity team or a journal editor.
Mike Streeter at Wiley brought up the need for standardized workflows to be designed for using those tools, that editors need proper training to interpret signals and flags, and that portfolio-wide oversight should be implemented as a means of operationalizing editorial and integrity policy and best practices. Three, both Mike and Beth Cronin from the American Physical Society stressed that publishers should play a supporting role in the submission process via tools and transparent policies to help authors, especially early career researchers, avoid integrity mistakes and honest errors.
Academic societies like the APS are trusted by their academic communities. It's not only about publishing journals, it's about developing a research culture whose values are reinforced through conferences, editorial workshops and policy development. Four, Mu Yang argued that, quote, currently there is little effort to foster open dialogue about the pressures to produce specific results.
Manipulated data and images are often the reluctant products of individuals working under duress. End quote. Graduate students and postdocs need mentorship and support from PIs and other leaders in the lab to report the data as is and not bend to pressures to produce quote unquote good data. Research integrity sleuths receive little recognition, pay or legal protection for the work they do.
They would like to see more cross-publisher collaboration to improve communication with sleuths, perhaps even a ticketing system to ensure accountability and follow-up on the integrity concerns they raise to journals and publishers. Last point: Lisa Hinchliffe argued that research integrity violations, quote, really stick to publishers and not to institutions, end quote, creating an accountability imbalance that needs addressing through stronger institutional compliance mechanisms.
She challenged everyone in the room, no matter their role, to think about the mechanisms we can influence to support a healthier ecosystem as a whole, not just within individual publishers, journals or societies. The panel also discussed United2Act against paper mills and other cross-industry initiatives working to break down silos across publishers, institutions, funders, editors and research integrity sleuths. So to close things out, I just wanted to share some of the really insightful audience questions that we received.
The first one was: what's the best way to manage research integrity threats across different languages? The second one was: how do we break down silos while also minding legal considerations around data protection and privacy? And the last one was: do you believe that your leadership values or champions the work that integrity teams do, whether at the publisher or the institution?
So thanks so much for listening to the summary, and I hope to see you at future SSP annual meetings. My name is Tony Alves, and I'm senior vice president of product management at HighWire Press. I want to first thank my speakers, John Inglis from Cold Spring Harbor Laboratory Press, and Fiona Hutton and Emily Easton of Knowledge Futures, for the excellent presentations they gave at the SSP annual meeting.
This session is grounded in two companion articles that explore how peer review is diversifying and decentralizing, both in theory and in practice. The presenters expanded on the ideas presented in this article by discussing real world examples, use cases, and forward looking perspectives. Peer review as we know it faces growing challenges. Reviewer fatigue, lack of transparency and bias in selection.
The push toward diversification and decentralization is not just about innovation, it's about equity, trust, and scholarly integrity. Our first presenter, John Inglis, explored how decoupling peer review from traditional publishing via preprint servers enables faster dissemination, greater transparency and innovation in research evaluation. Preprints are transforming the way we share and evaluate scientific research by enabling early dissemination, often within hours or days.
Preprints spark open dialogue and accelerate discovery. Unlike traditional journals, which may take months or years to publish, preprint servers facilitate broader, faster, and more global engagement. Reviewers, scholars, and the public can comment, critique, and build on research well before the formal peer review concludes. This fosters transparency, supports emerging alternative models like overlay journals and public commentary, and enhances equitable access, particularly for researchers in underrepresented regions.
Our second speaker, Fiona Hutton, advocated for a more diverse and decentralized peer review system by empowering historically excluded voices, particularly early career researchers and scholars from underrepresented regions, through community-led preprint review. Peer review is evolving beyond traditional gatekeeping to embrace more inclusive, community-driven models. Platforms like PREreview empower early career researchers and underrepresented voices to participate in open review. eLife's publish, review, curate model shifts evaluation to happen publicly, promoting transparency. Portable and post-publication peer review, such as Review Commons and ASAPbio crowd review, enable assessment that travels with the manuscript across journals.
Overlay journals further decentralize review by curating peer-reviewed preprints from multiple sources. Multilingual and regional efforts, including PCI and AfricArXiv, are helping to broaden global participation, ensuring that scholarly evaluation reflects a more diverse, equitable, and interconnected research ecosystem.
Decentralization doesn't mean disorganization. In fact, it requires robust infrastructure. Our final presenter, Emily Easton, focused on technologies that support peer review models, as described in the article entitled Mapping the Preprint Metadata Transfer Network. The future of peer review relies on infrastructure that promotes openness, credit and collaboration.
Tools like COAR Notify and MECA ensure that metadata and review information flow smoothly across platforms. These frameworks facilitate interoperability, allowing preprints, reviews, and publishing systems to communicate seamlessly, enhancing discoverability and efficiency. At the same time, initiatives like Crossref, DocMaps, and DataCite help standardize review metadata.
They embed trust markers and they support reviewer credit. As automation streamlines these workflows, it's essential to balance speed with trust, ensuring that integrity checks, ethics reviews, and accountability remain at the core of the scholarly communication endeavor. Thank you for your attention. Here are links to the articles that I mentioned.
Hi, everybody. I'm Leah Hynes, I'm the executive director of the Charleston Hub, and I'm here with Lisa Janicke Hinchliffe from the University of Illinois at Urbana-Champaign, and we are here to do a highlights session from the Charleston Trendspotting Initiative at the 2025 SSP annual meeting. Thank you for joining us. So the Charleston Trendspotting Initiative is a session that came out of a series of conversations at the Charleston Conference some years ago that stressed the importance of being proactive rather than reactive to the trends and issues that are coming up for the near future that will impact the world of libraries and scholarly communications.
And it's been an ongoing event at the Charleston Conference and the SSP meeting since 2017. Each time we have a different focus, with different activities and different topics, but our mission remains the same, and that is to discuss the potential impacts of trends on the information industry and on scholarly communications with a group of our peers. So this year, after a quick primer from Lisa on Futures Thinking 101, the first exercise we did was called Finding Solid Ground.
So we asked participants to do an individual reflection: looking ahead 20 years from now, what things do they believe will endure in scholarly communications, focusing on relationships, purposes, values and roles rather than just platforms or formats. And we wanted this discussion to really help anchor us for the rest of the day. So we had some small group discussions, and at the end, the groups shared out one enduring truth about scholarly publishing and why they chose it.
So Lisa's going to share some of the outcomes and overarching themes from that discussion. Great, thanks, Leah. It's always great to work with you on the trendspotting session. The big takeaway, I think, from the conversation on finding solid ground was that society needs truth and expertise. And perhaps we feel that especially right now.
And so what that means for scholarly communications and scholarly publishing is that we need that vetted and validated content that the industry is known for developing and disseminating. Now, there are some real challenges to that right now, and things that are hindering that ability. But as far as what the solid ground for the industry is, it is that vetted and validated content developed by and for communities of knowing.
And that really brought up this other interesting thing that people said that they felt citational practices were going to endure. Now, not a particular APA MLA, but the notion that part of producing vetted and validated content is not the individual object, but it's also the objects in relationship to each other that help us validate and give credit and also create connection among a lot of different ideas.
So I thought that was a really interesting ability to take that to a broader level, to say citational practices. Now, after this, we looked into, OK, what is threatening our ability to do this work? And we used the PESTEL framework, which is a way of categorizing trends and understanding the threats that they might present. So do you want to share a little bit about the exercise we did? Sure, so the second group exercise was called Preparing to Persevere.
We gave attendees a handout with the chart for threat analysis of some common risks. Nothing specific to our industry, but these are global general risks along that PESTEL framework that Lisa mentioned. So we asked them to choose one category to focus their discussions and identify some industry specific threats in that category, and how that might impact our enduring roles and responsibilities.
Discuss how those threats could be mitigated, and reflect on how prepared we are to respond to those threats or not, and consider how to enhance that readiness. So Lisa, again, has some of the outcomes from that discussion to share with us. Yeah, so I'm going to summarize and say pretty much everyone chose the social threats, and particularly the anti-science, anti-expertise trends that we're seeing.
One of the things that came back was the notion of community and whether community helps us address that trend and that threat. And we ended with this quote from Søren Kierkegaard: life can only be understood backwards, but it must be lived forwards. And we hope this session helped people persevere and build their resilience.
Thank you, Lisa. Thank you to everyone who attended the session. We hope to see you at a future trendspotting event. Thanks a lot. Hi, I'm Heather Staines, current SSP president and senior consultant for Delta Think. Greetings from the SSP president's early career roundtable.
Let's get to know our speakers. Hi, I'm Pavitra. I'm a managing editor at ACS Publications. I'm also a past SSP fellow, and as a result, I got to attend the SSP annual meeting last year. The annual meeting is sort of a crash course MBA in scholarly publishing. You get to learn networking, workflows, business models, all in a single welcoming room. One of the greatest dividends of the fellowship for me was the mentorship piece.
I was paired with Meredith Adinolfi, and from day one, she was more than a mentor. She was a champion for me. Having a structured and accountable mentorship track, I think, is the difference between learning the career map yourself versus having a guide who has already hiked the trail. In summary, my SSP fellowship didn't just grow my network, it shortened my learning curve and expanded my sense of what's possible.
Hi, I'm Andy Webster. I'm employed in digital production for Project MUSE, a content aggregation platform with a focus on scholarly publishing in the humanities and social sciences. At most university presses, my role would mean formatting and producing written content for the press. Instead, we ingest ready-to-publish content from other publishers. Because our customers sit on both sides of the process, I've come to understand the opportunities and challenges academic publishers face.
I'm not sure I would have been as interested in scholarly publishing if I had started in a role at a traditional university press. I'm also pursuing a master's degree in publishing at George Washington University. It's incredibly helpful that leadership at MUSE and Hopkins Press alike have been very encouraging of my career growth. Hello, my name is Leonora Colangelo, public affairs officer at Frontiers. Beyond my day job,
volunteering has always been one of the most meaningful parts of what I do. Through volunteering, starting back when I was 17, helping with archaeological excavations in Italy, I found a long-term passion for knowledge, for history, and for being part of something bigger than myself. That experience shaped everything that came after: my studies, my love for classics and anthropology, and eventually my work in scholarly publishing.
We live in a time when scholarly communication is evolving rapidly, but the good news is there's space for everyone to contribute. Volunteering has always been a kind of compass for me. It's not just about giving time, it's about showing up. It's about service, yes, but also about curiosity and connection. When you volunteer, especially across different communities or projects, you open doors not just for others, but also for yourself.
It's really something I encourage everyone to embed into their routine. Hey, y'all, I'm Rachel. I'm from rural Kentucky, so when I saw that Eastern Kentucky University had an opening for a health sciences librarian just before I graduated with my master's, it felt like a miracle. I want to support my faculty's needs while publishing my own work, and I want them to have access to all the content you work so hard to publish.
We focus on students first here. Publish or perish doesn't exist, so we write and do exactly what we care about. I see myself in my students, and I can be for them what I needed during undergrad. My students and faculty have such wonderfully unique perspectives, which deserve to be shared, and you enable us to do just that. Hi, I'm Omorodion Okori.
I am currently a doctoral student in information science at the University of Wisconsin-Milwaukee, where my research focuses on accessibility within digital libraries and knowledge repositories, an interest I developed during my years as a librarian at a university in Nigeria. While libraries are rapidly embracing digital transformation, I soon realized that not all users are equally supported in that transition.
This motivated me to pursue doctoral research focused on how artificial intelligence and user-centered design can be used to make digital resources more inclusive. My overall goal is to help bridge the gap between technological innovation and equitable access, ensuring that no one is left behind in the digital age. Thank you. Thank you so much to all of our speakers.
There we go. Sorry about that. Thank you to everybody who contributed videos. I don't think most of the people who presented are actually on with us today.
However, I'm not all-seeing and all-knowing in that regard. So, in the session on prevention of systematic manipulation at scale, I think it was Megan, Beth, Alicia, Lu, Yang. Any of those here today? If you are, just chime in and say hello. I think in the diversification one it was Tony, John, Fiona, and then Emily Easton.
In the trend-spotting one, I think Leah is here. I didn't see Lisa, though, and I don't think I saw any of the early career roundtable folks. Have there been any questions that have come up? I'll start there and just throw it out to the whole group. Have we had anything in the chat, etc.? David, this is Marianne.
Audrey put an interesting prompt from the trend-spotting session that might be fun to talk about with people. But real quick, there was a lot of chatter about how good the photos were during the break, so that was fun. Thank you, Jackie. Would you like me to read it? Yeah, go ahead. So, imagine it's 20 years from now. Our industries, tools, technologies, and services may look very different, but some things will still matter deeply in the face of shifting policies and technological change.
What are the elements of our work that you believe will endure over time? What are some of the enduring roles and responsibilities of our industries? What will society continue to need from the industry? All right. That's to all of you out there. Yeah, who wants to chime in?
I'll kick it off, because when we were talking about peer review and research integrity: when I started in publishing, I was an editorial assistant working in peer review for some journals. And, you know, there was no digital then. Peer review was a series of hard-copy papers, and we would find reviewers by calling people on the phone.
And it's just so interesting to juxtapose that with today; I don't think an editorial office could keep up with the pace and the throughput in that manner. But thinking back 30 years ago now, being able to dial an institution and know that I was calling the University of Toronto, or whatever medical facility, to get, in this case, a cardiologist, that was verification in itself.
I'm calling a known phone number, and I know the person who answers it. So, thinking about this question, it's 20 years from now. We know all the tools and technologies are changing. But what's that thread? You know, I think things go both ways, and thinking about peer review, maybe there's a piece from the old ways that could be preserved or thought about in a new way for today's modern world.
So I don't know about 20 years from now. I'd hope that the scientific, the scholarly record is still important, but I don't know; maybe TikTok science will replace scholarly publishing. Leah, you said in the video that one of the things that came out of this discussion was the need for citations, and that they would continue to be relevant.
Were there other things that people talked about in the session, maybe things that just weren't as unanimously supported, but that you could share with us? There's the unmute. Yeah, so one of the big focuses that came out of that was the continued need for the human touch, human interaction; it's not all going to be automated.
It's not all going to be replaced by AI. The human quality checks, the human focus on our work, are an enduring value that will persist regardless of technology, regardless of platforms; people will still matter. And building on that line about people still mattering: how do we make science matter to people? And in 20 years, if we are going to be at a place where we trust science again, hopefully, as Nate Smith says, a return to trust in science and scientists.
I think that will come when we make it matter to people on a personal level. Right? Instead of it being an us versus them, as in us, the general public, versus them, the scientists and the researchers, how is there collaboration between the two? How is there understanding and respect between the two? Where scientists are taking into consideration what matters to people, even in their explanation of science.
I think this is what I hope to see in 20 years: that we are more attuned to the way that we present science as being applicable to the domain in the world that we're researching. Right? And how do we make it relevant to those being impacted by whatever topic is being researched? And so I recognize... I'm sorry, did you have another point, Leah?
No, go ahead. I was going to recognize Angela, but I just wanted to say quickly that this ties back to the moderated discussion plenary, where Melanie was talking about how the scholarly publishing community needs to do a better job of communicating what it is we're doing, so that it's relevant to people outside of the industry, and so that they can see the importance of what researchers, the publishers who publish that research, and everyone who touches that research in some way to disseminate it are doing.
Go ahead, Angela. Sure, thanks, Greg. I was actually in the trend-spotting session, and I have my notes right in front of me. Related to what Audrey just said, a really interesting point came up, because there was some conversation around polarization between, you know, a sense of community when we talk about things, and individual needs.
And Lisa Hinchcliffe actually said something really interesting, which was that maybe we try to solve some issues around peer review by thinking about them as problems for individuals, say, compensation, and really we need to think about them as a community practice. The problem to solve is a community one, so thinking about it in that way was important.
But some of the most common issues that came up, the things we thought would still be around 20 years from now, are just what we're talking about. Validation was the word used, or peer review. Second to that was distribution, hosting, preservation: the need to get information to people. Then revenue; someone said revenue, because if we're going to exist in 20 years, we need to be sustaining ourselves. And yes, we need authors, and we thought we will always have consumers,
people who need information. The example given was sick people, because I think they were thinking of medical publishing, but we can think of a number of different examples. And then the fifth one that came up in many of the different breakout groups was some kind of branding, an imprimatur for publishers, that was still going to have some currency.
So those were the five common topics that came up at this very interesting session. Outstanding, outstanding. Well, before we start the next one, let me just say: if you've got additional questions, we do have time set aside at the end where you can certainly jump in. I know we spent a lot of time talking about the trend-spotting session, but I think there's a lot to be said about some of the things the early career people brought up, about the interactions between those of us who've been in scholarly publishing a while and opportunities to learn from, and/or transfer knowledge to, those who are early career.
And I think it'd also be fascinating, as Tony mentioned, to see how preprints can be catalysts for innovation in peer review. But, you know, some people have pointed at preprints as somewhat causing research integrity issues, so it might be interesting to talk about that balance. Just some things to keep in mind as we move on. With that, let's go ahead and do the next block here. Let me hit the old share button again and we'll get the next set of highlights going.
And before I actually share those, I'll first share one of my favorite pictures from the event. I thought this one of Randy was fantastic. But with that, let me jump over here and start the next set. Hi, I'm Kristen Hipke, a standards program manager at NISO. And I'm Mary Beth Barilla, director of business development and communications at NISO.
We're here today to talk to you about business processes for sustainable open access: best practices for institutions, research funders... And the video is not displaying. We want to briefly introduce you to... We can hear it but not see it. All right, just a second. Let me stop it and see if I can get it to share again.
All right. Let me stop sharing and I'll reshare. All right. It shows that I have stopped. Let's try it again.
And we did pretty well on the technical issues so far. Knocking on wood. Just a minor glitch. All right. Can you see it now? Not yet. Not yet. All right. That's because I haven't hit this button yet.
Just testing you. There we go. Got it. "...program manager at NISO." Off we go. And I'm Mary Beth Barilla, director of business development and communications at NISO. We're here today to talk to you about business processes for sustainable open access: best practices for institutions, research funders, and publishers.
To start, we want to briefly introduce you to NISO. We are the National Information Standards Organization, a nonprofit member association that develops and maintains standards for the information community. Our standards are proposed, reviewed, and approved, or not, by community members. Standards, including the one we're going to talk about today, are developed by working groups of volunteers, whom we thank. Today our session offers an update on a much-anticipated NISO recommended practice: open access business processes.
This project arose from a need for better systems to accommodate open access content. We've seen a great shift in our industry from the publisher pay-to-read, or traditional subscription, model to open access models. And as open access has grown, we have research funders and institutions expanding their requirements. Business models have become complex and diverse, and everyone has a growing list of systems they have to interact with.
This leads to some pain points for everyone involved. Existing business practices, processes, systems, and tools were designed for that pay-to-read model. However, they now need to accommodate open access content, and currently they don't. We have librarians, publishers, systems vendors, et cetera, developing individualized solutions or workarounds, and there are no agreed-upon practices.
So this leads to inefficiencies and costs for multiple stakeholders. What we need are recommended practices that help us serve a variety of emerging models to better enable financial transactions. Practices that help us track deal and policy compliance and also optimize related workflows. To address this, NISO formed a working group to develop some guidelines around business processes for open access content.
Again, this is a working group of 27 volunteers led by co-chairs Yvonne Campfens, executive director of the OA Switchboard, and Amanda Holmes, senior licensing officer at the Canadian Research Knowledge Network. You can see the names of our volunteers here, and we'd like to give a special shout-out to Howard Ratner, of course, and Chris Shillum of ORCID, who joined Yvonne in presenting this work in Baltimore at the SSP meeting.
Phase one of this project covers journals. If there is a phase II, it may address books. For now, the working group has set out to deliver several items: a glossary to standardize some of the terms used when we talk about open access; similarly, a data dictionary, which will standardize data elements for open access content; and a list of recommended tools and practices.
So our recommendations are structured around the three open access business processes that you see here. The first is OA agreements, and these are for consortia, academic institutions, and publishers. We mentioned how complex some of these agreements are, and so we developed some key terms and best practices that will help make these go more smoothly.
We also have funding applications; here the recommendations are for funders and researchers. This will help provide clear guidance to researchers on funder mandates, and also capture information that will help make tracking compliance easier. Finally, there are processes for OA publication, and this affects everyone in the ecosystem, upstream to downstream: institutions, researchers, publishers, and vendors.
These recommended practices offer guidance for researchers on institutional policies and publisher policies. They help to optimize workflows and also to capture the information and metadata needed. Now, with all of the great things that are in process, I'm sure you're wondering what's coming next. We hope to release the draft recommended practice this summer. We will announce this publicly so that everyone in the community, not just NISO members, can comment on it.
So be on the lookout; we want your feedback. Feedback will then be incorporated into the final draft, which we hope to publish in Q4 of 2025. So thanks so much, everyone, for listening to our video highlights from the SSP meeting in Baltimore. As Kristen said, we hope to hear from you once we announce that the draft is available for comment. In the meantime, if you have questions about the ODB recommended practice or about NISO, please get in touch with us.
Thank you. Thank you. I'm Stephanie Lovegrove Hansen, VP of marketing at Silverchair. And I'm Lauren Kane, CEO of BioOne. And Lauren today played the role of shark, along with a few others, in the scholarly publishing Shark Tank session we just hosted. What we did was talk a little bit about investment challenges for publishing, and we set up a framework.
Everyone broke into groups, developed pitches to solve key problems, and then delivered their pitches to the sharks. Do you want to share the one that won and why you chose it? Yeah, it was a really cool session, very interactive. We were really impressed with what everyone did in 15 minutes; not an easy thing to do. But the winner was a group of people who described an idea they called The Bridge, and The Bridge was going to bridge the gap between researcher needs in communicating science, communicating their scholarly findings, to the general public.
And I think they dealt with a really tough question from the judges about, well, why should publishers fund this for researchers? And I think they said that it was important for publishers to stay relevant in the current moment, and that with all the crises and challenges we have in the current environment, it's really important to help researchers communicate science so that we can continue to have a productive, and productively revenue-generating, industry.
So, really impressed with that idea and with all the ideas today. Yeah, they did a great job, and it was really interesting to see what came of it. Hopefully it got people thinking about business models, about audiences, about where the data they need lives, all these kinds of things. Because with all the investment and funding challenges in the industry, this is not the last we'll hear about it. Our session was called "I Measure with My Heart: Outcomes versus Outputs."
This session all started with a conversation that my wonderful friend Gabe and I had almost a year ago, with me saying: Gabe, I'm really struggling right now, because I know I'm really good at what I do, I know I understand how things work, but I don't measure with anything except my heart. And I need some guidance on how to really put things into perspective, how I can use actual metrics to prove what I know is an awesome and wonderful vibe.
So with that, I invite Gabe to go next and explain a little bit more of that conversation and what outputs versus outcomes really means. Thank you, Jennifer. Yes, that was a great conversation, and it actually spun out of an AI subcommittee with SSP. Deepening that connection raised one of my favorite topics, which is the concept of outputs versus outcomes.
Because I've long felt that in our industry, we tend to measure outputs too much. And what that means is measuring something that we deliver, something that we complete, something where we can check off a box. But I'd love to see us shifting more toward measuring outcomes. And outcomes are harder to measure, right? It's trying to measure or quantify the value that you are delivering to the business or to a customer.
And so, in a nutshell, the split is that an output is something delivered, and an outcome is the value that you're actually producing through it. And I think Letty will tell us a little bit more about how we tackled this in the actual session at SSP. Yes, I was so thrilled. My name is Letty Conrad, with live links. I was so glad to be included in this conversation with Gabe and Jennifer and what we put together.
Although Gabe wasn't able to join us in person, he was key to our planning. What we put together was a workshop-style session where we had tables, and each group had a scenario, a real-world business scenario, and they had to spot what the outputs are and what the outcomes are. So, for example, we had a handful of tables that were challenged to consider enhancing an existing peer review workflow tool.
What would the outputs be? The deliverables, those things that we can task each other with: things like releasing three new features in the peer review system by the end of the quarter, or adding inline commenting or reviewer tagging. But that's not the outcome, right? That's not the change in our customer behavior that we're really trying to achieve.
The outcome in enhancing an existing peer review workflow tool could be something like increasing the time that reviewers are spending in tagging and comments, maybe decreasing the amount of time between submission and completing a task, or improving survey results: folks saying, yes, this was a more enjoyable experience, or, I was able to complete my tasks with success. So measuring our outcomes means that we're measuring a change in our customer behavior, or our user behavior, or even our employees' behavior.
So we're not measuring the things we're delivering, but we're measuring the ways in which we're improving people's lives and improving their experience. Now, all that's to say, the three of us are still experimenting with how to put outcomes and outputs to work in our everyday life. So please give this a try in your day job and let us know how it goes, because we're still figuring out how to make it work.
I think it was a really, really good session, and fortunately we had a really nice packed room; I found that to be great energy. And I think the questions were great as well. We touched on a lot of points, but I won't go into them, because our amazing speakers will talk about them. So I'm going to hand it off to you, Catherine, to start us off: what were your highlights?
Well, thanks, Jude, and good to see everyone again. We started out by giving the basics of two major new accessibility laws: the European Accessibility Act, which is coming into effect this month, and the Americans with Disabilities Act and the updated Title II regulations. I shared a bit about ARL's work on accessibility, including our focus on advocating for born-accessible publishing, and we talked a little bit about licensing strategies as an opportunity to communicate accessibility requirements, and as a way that libraries and publishers can collaboratively move the scholarly publishing industry toward born-accessible publishing.
One thing I maybe didn't talk about enough, that I should have brought out a little more, is the opportunity for collaborative development of metadata standards for accessibility. We know some of those initiatives are happening through groups like NISO and IFLA. And then, on the licensing piece, I did want to point out that in our back-and-forth, Karen, another library panelist, talked about how, when you're working with smaller publishers or publishers in the Global South, some of these licensing strategies might not apply.
So just something to think about; we're talking about libraries of multiple scales and sizes. We also heard a little bit about resource constraints; I talked about that a bit, and we heard about that from Karen as well. So those are some of my highlights, and I think we're over to Simon. Yeah, thanks, Catherine. So I'm the head of Content Accessibility at the science publisher Elsevier.
And I'm also a visually impaired person myself, so this is really important to me personally as well as professionally. Accessibility, and content accessibility in particular, is at the center of what we do as publishers, right? We curate, enrich, and disseminate content, and accessibility allows us to do all three of these things. So, for example, when we add alt text to an image, we're changing a non-text asset into a text asset.
And that increases the discoverability, machine readability, and search capabilities of the content that we publish. We're also making this content accessible and available to way more people, because 15% of the world's working population have a disability and an accessibility need. So if we think about our mission in terms of getting science and the humanities out there, accessibility is really important to that. In terms of the work that Elsevier is doing:
because we publish 17% of the world's scientific content, our role is really to think about how we can do this at scale. So really we're working to add alt text, captions, transcripts, detailed descriptions, and video metadata, and to publish in accessible formats like EPUB, for our frontlist and our backlist books and journals. And we're also thinking about how we can better include people with disabilities in this process.
So we're running some focus groups, for example, to validate the alt text provisions that we're putting into place, because at the end of the day, if it's not fit for purpose for the people it's meant to be helping, then we're not really doing our job. So there's a lot of work to be done, and a lot of work already done. But I really feel that these laws are helping us transform the content that we publish, so that more people are able to make use of it.
I'm now going to pass over to Charles, who's going to talk a bit about the publisher and library perspective. Yeah, so in the middle of the presentation, Karen and I gave joint talks from our perspectives as leaders in scholarly communication at Indiana University libraries (that's Karen) and University of Michigan libraries (that's me). And in both cases, the library actually includes a university press.
So we talked a little bit from both perspectives. We're also both public universities, so the ADA Title II revised legislation is probably most on our minds, even though the university presses are affected by the EAA, the European Accessibility Act, as well. So we both talked about the work that we were doing to make our own materials accessible. But within libraries, a lot of focus is on collections, on discussions of vendored works.
So: how can we maintain compliance with the ADA revision as we look at the materials we get from various vendors, like Simon or like Jude? And one of the real focuses is how we handle accessibility requests efficiently. In both our cases, we have document delivery librarians doing quite a lot of remediation for students with disabilities.
Challenges-wise, there's just a lot of work out there, and we have fairly limited resources for this. A particular challenge that Karen raised was smaller and international vendors. We're very concerned about the ability of smaller English-language publishers to keep up with these requirements, and we are also particularly concerned about foreign-language vendors.
So we have a lack of capacity to do this work in the system, which we're both quite concerned about. And we talked a little bit about a project called EMMA, Educational Materials Made Accessible, currently based at the University of Virginia libraries, which brings together publishers, disability services offices, and libraries to try to find ways of remediating content that is a little bit outside of the big publishing system.
So that's what we talked about. Thank you so much. Karen is not able to be here with us today, so Catherine and Charles have very nicely summarized some of her main points. Yeah, it was a great session, and there are lots of resources out there as well. From our side, Adobe has released five steps for looking at how to prepare for the EAA.
Yeah, and it was really a great session. I think what it finally comes down to, and Simon kind of raised this during the talk itself as well: this is not just about the pain points that are going to come up on June 28; it is a must-have. I think it should always have been a must-have, but I'm glad that it is becoming a more important point.
So thank you so much for watching, and thank you so much to the great speakers. And that's our highlight. Outstanding. Thank you very much to everybody who participated in that. I guess before I start talking: do we have any questions sitting out there already, or hands raised?
OK, I don't see any hands raised. And I don't think we have any of the concurrent presenters currently in attendance. Is Jennifer still here? She was here. So we can make this more of a... it doesn't have to be questions per se.
It can be discussions, comments, you know; whatever you feel like jumping in and saying is perfectly fine, guys. It's Jennifer. Oh, there she is. She is here. Good, OK. Hang on, I'm going to a different spot.
OK, yes, I'm here. Excellent, excellent. Well, I think Letty will be here soon, but we have not seen her yet. Thank you so much for your participation. I also want to say another special thank you: I saw that Jennifer Cobb joined in here, and she was such a pivotal help in last year's virtual event experience.
So thank you. I'm so glad that you were able to participate this year, and I hope that you'll be a part of our virtual planning for next year. But anyway, back to the sessions. Those of you who went to the annual meeting in person: did any of you participate in any of these sessions, and are there any questions or thoughts?
I'd be really interested to see what, you know, maybe some of the real-world scenarios were in the "I measure with my heart" session. I'd also be really interested if anybody attended the Shark Tank one; that one sounded really fascinating to me. But I'll just throw it out there to the group. I was at the Shark Tank.
It was really great. I kind of wish that could be something SSP keeps doing. You know, I don't know if the pitches could go to, like, commercial publishers, or if that's a bad idea; I don't know who could be the sharks in that example. But the ideas that came out of it in such a short amount of time, and how they presented them, were really great.
And I agreed with the sharks that The Bridge deserved the win. I agree with that, Marianne. And some medical societies I know do have Shark Tanks where they actually have venture capitalist folks join, and it's like the real Shark Tank. So I don't know who we could get to do that.
But calling all venture capitalists: come to SSP 2026. I think that would be a cool recurring thing. And you're in charge next year, Marianne, so we're telling you this right now; you're one of the three in charge. So there you go. There's a suggestion for you.
So I was at yours, Jennifer and Letty's: "I measure with my heart." I didn't get to go to a lot of concurrent sessions, because I was kind of busy, but yeah, that one I really enjoyed, even though I can't remember exactly what the scenarios were. We worked in teams and had to come up with, you know, outputs versus outcomes, and then we had discussions about that.
And I really enjoy those types of interactive, hands-on sessions. By the way, Letty just joined. Whoop! Yeah, let's let Letty say something; she knows what she's talking about. Let's listen to her. Oh, I was just glad to join while Greg was singing praises, so I agree.
I think having a chance to get into small groups and think practically about what we're doing and how to do it better, and share ideas... I'm glad it was useful for you. Yeah, the hope was to not just speak in the abstract and leave everyone with these big ideas that they didn't necessarily know how to make use of.
But, you know, to really help folks think instead about the big picture and less about the deliverables, the boxes that we're trying to tick every day as part of our jobs. So, yeah, we're looking forward to doing it again next year. Hopefully we have a new idea about bringing in some impact metrics as well, and sort of thinking about how we can each be CEOs a little bit and bring our thoughts, our emotions, and our behavior to change things in our organizations.
Let's see, did we come away with any concrete things to measure? Well, Jennifer coined a great term, which, Jennifer, I have been using in meetings and conversations since: vibe metrics. And that's kind of what we were hoping to get to. You know, it's really easy to measure downloads; it's really easy to measure a project completion.
But it's really hard to measure whether we're really making the change that we want to, and really having the impacts that we want to. So that's where the heart part came in: trying to get to those big ideas and the meaningful change that we want to make, and how to think about iteratively moving toward those, and developing metrics that get us to a place where we can measure the vibe. We can measure how folks are feeling, what they're talking about, what they're doing, as far as really making the consumer or community change that will facilitate what we're trying to achieve here.
I don't know, Jennifer, does that sound about right to you? It totally encapsulates it. And this is why I just adore Letty and Gabe, because I know it feels good, and I know it's working, but from my heart. But, you know, I work for a corporation, so it's kind of hard to say, oh, the vibe is so legit, and, you know, like, the vibes are so immaculate.
They're like, OK, we love your immaculate vibes, but how about some, you know, quantifiable metrics here? So it's just something I keep thinking about. Ever since we started talking, it's been almost a year now that we've been having this discussion, and I think I'll be thinking about this forever, to be honest with you. And I did go to a sales lunch right after this and started talking about outputs versus outcomes.
And the folks at the table were totally wowed, and we did what we needed to do. And it was exciting because I was like, oh, it came to life thanks to Letty and Gabe. All credit to them. Oh, it's been a team effort, absolutely. And I just feel like the idea that you and Gabe hatched, you know, we just really brought it to life as far as bringing those concepts to bear on everybody's everyday life.
So I'm actually going to pull up the Joshua Seiden book link, because I think the idea that we were hoping to share is really so easily encapsulated in this book. It's a super easy read. It's a teeny tiny book. I actually read it on a plane ride earlier this year. So it's a great place to start, just as far as building that framework for yourselves and for your organizations.
And then just a little shameless plug for the annual meeting program committee: when you're having these conversations during the year, write them down and remember who you had them with, and then submit a great session based on that, because that allows you to talk about it for an entire year and get together a great group of people to help you with it. I love it, I love it.
All right. To shift gears just quickly before we jump into the posters: I have a special place in my heart for the topic of accessibility, and I really liked what I saw there in those few minutes. Any of the librarians or those from the library community, anybody? Can you talk about some of the things you're doing to advance accessibility, or if you attended that session?
I tend to hear a lot about what publishers are thinking about in this world, but this concept of librarians and the library community advancing it. I'd be. Now he froze. David, did he freeze for everybody? Yep, he's back.
He's back. The question is, do we have any librarians on the call in attendance right now? Yeah, I've got that really big video open right now for posters, so it takes up my bandwidth. We have a hidden. We have a librarian in disguise.
Oh, thank you, Letty. All right, well, with that, let's jump over and let's talk about posters. Am I still frozen? No, you're good, you're good. OK, so posters are obviously an important part, but people complain all the time that they don't get the chance to see all the posters.
So we have compiled some videos, and with that, let me get those started. The different types of. Oh sorry, I cut you off. ...organizations that are in my sales territory. You can filter by organization as far as exactly what format they're reading. You can bring in access type. Maybe I just want to know about the open usage for journals within the select organizations that are in my sales territory, for example.
Or maybe I want to take a look at the different types of access, as far as whether they're controlled or open. I can also bring in country, so I can zero in on which countries are using different formats. We obviously have the ability to customize our date ranges as well as use some preset ranges. So this is the type of prototype that we are testing right now.
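As an illustrative aside (not the presenters' actual product code), the kind of faceted filtering described in this poster, by organization, access type, country, and date range, can be sketched in a few lines. All field names and sample records below are hypothetical:

```python
# Hypothetical sketch of faceted usage filtering, not the real dashboard.
# Each record is a dict; a None filter means "don't restrict on this facet."
from datetime import date

def filter_usage(records, orgs=None, access_types=None,
                 countries=None, start=None, end=None):
    """Return the records matching every filter that was supplied."""
    out = []
    for r in records:
        if orgs and r["org"] not in orgs:
            continue
        if access_types and r["access_type"] not in access_types:
            continue
        if countries and r["country"] not in countries:
            continue
        if start and r["date"] < start:
            continue
        if end and r["date"] > end:
            continue
        out.append(r)
    return out

# Made-up usage records for demonstration only
usage = [
    {"org": "Univ A", "access_type": "open", "country": "US",
     "date": date(2025, 3, 1), "downloads": 120},
    {"org": "Univ B", "access_type": "controlled", "country": "DE",
     "date": date(2025, 4, 2), "downloads": 45},
]

# e.g. "open usage for journals in my territory": open access, US only
open_us = filter_usage(usage, access_types={"open"}, countries={"US"})
```

In a real analytics product the same facets would be pushed down into a database query rather than filtered in memory, but the shape of the interaction is the same.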
We are working to better understand where publishers want to play and what sort of interface is going to help them most efficiently interact and engage with their analytics. That was our poster; that was the whistle-stop tour. Please reach out if you would like more information. Hello, my name is Kelly Henwood, and I'm a senior consultant at TBI Communications. Today I'm presenting our poster titled Navigating the Shifting Social Media Landscape: Practical Strategies for Scholarly Publishing.
This work reflects our ongoing research and client insights into how publishers, researchers, and institutions can respond to an evolving social media ecosystem where platform dynamics, content formats, and trust levels are all in flux. Let's begin with the first key question: where is your audience now? Social media platforms are shifting rapidly. Twitter, now X, is becoming unstable and fragmented, while newer platforms like Bluesky are emerging as decentralized, values-driven alternatives.
LinkedIn remains a professional mainstay, especially valuable for B2B connections and editorial outreach. We're also seeing TikTok and Instagram rise in popularity, especially among younger researchers, due to their visual and video-first formats, and Mastodon, although niche, is becoming a hub for open science discussion. The key takeaway here: don't try to be everywhere.
Instead, understand where your audience is and invest intentionally. Next, let's talk about content formats, specifically the rise of video and visual storytelling. Short-form content is now dominant: 60-second explainers about a paper, visual abstracts, and behind-the-scenes footage are becoming common ways for researchers and organizations to communicate. We recommend experimenting with accessible tools like CapCut, Canva, Descript, and OBS.
And remember this rule of thumb: hook your viewer in the first three seconds, always use captions, and stay authentic. Right, now amplification: who is helping your message spread? Influencers, both internal and external, play a critical role. Think academics, journals, society members, as well as your own editors, authors, and staff. We encourage publishers to activate advocacy programs and provide authors with promotional toolkits.
Launch editorial ambassador initiatives. Train staff in employee advocacy. Micro-influencers, in particular, drive trusted and highly targeted engagement. That's where real connection happens. Let's shift to the topic of trust, perhaps the most important asset in scholarly communication. Trust is built through transparency. Show your process, not just the outcomes.
Use threads or carousels to provide context. Always include affiliations, DOIs, and real names. We also highlight the importance of prebunking: tackling misinformation before it spreads. Train your team, maintain a myth-busting toolkit, and pin authoritative content. These actions help safeguard both your brand and the scholarly record. Right, to future-proof your social strategy.
We offer two sets of recommendations. First, some quick wins: repurpose content across platforms, use native tools like polls and reels, and look beyond likes to find deeper metrics. Then think long term. Platforms will change, but your audience's needs remain stable. Focus on building relationships and habits, not just campaigns. In an age of algorithmic feeds and platform volatility, consistency and human connection matter more than ever.
If you're unsure where to start, focus on storytelling, not just broadcasting. That might mean sharing the journey of an author, highlighting the impact of a study, or lifting the voices of underrepresented researchers in your community. These stories cut through the noise, build loyalty, and reinforce your organization's purpose far beyond the next post or platform.
To wrap up, here are five practical next steps. Audit your current platforms and understand audience behaviors. Start experimenting with video and visual formats. Activate your internal influencers. Build a trust layer into your social presence. And finally, set flexible, platform-agnostic goals. If you'd like support with social media and strategy in scholarly publishing, scan the QR code on this poster to contact TBI Communications.
Thank you for listening, and I hope this inspires new ideas for your organization. Hi there, and welcome to the ISNI poster presentation for the SSP 47th annual meeting. I'm Elena Marie Chapman, Communications Manager at the ISNI International Agency. Today I'm excited to share how ISNI is helping to reimagine name identifiers for the scholarly publishing sector and how it's supporting the critical values of open science and research integrity.
So let's dive in. ISNI, the International Standard Name Identifier, is the certified global standard for identifying contributors to creative and scholarly works, including researchers, institutions, publishers, editors, and more. You may sometimes hear it pronounced differently, but however you pronounce it, ISNI plays an essential role behind the scenes. There are now over 16 million ISNIs assigned worldwide, including over 3.4 million for individual researchers and over 2 million for organizations.
And that number keeps growing as open research practices expand globally. At its core, ISNI provides persistent, reliable identification across diverse platforms. It reduces confusion in author identification, ensuring that researchers, their outputs, and their affiliations can be clearly and accurately linked across the research life cycle, from submission to citation and beyond. By resolving name ambiguity in search and discovery, ISNI helps to ensure that published works are accurately attributed to their creators wherever they are described.
By improving metadata quality, ISNI enhances discoverability, citation tracking, and research assessment. It also supports global collaboration, enabling enriched metadata to move more easily across borders, languages, and systems to expand market reach. Importantly, ISNI is an ISO standard, ISO 27729 to be precise, and is governed through rigorous processes that ensure long-term stability and trust. It is adopted across sectors and is not tied to any one discipline, geography, or commercial interest.
It is also designed for interoperability. It seamlessly connects with key parts of the scholarly ecosystem, including DOI registries like Crossref and DataCite, Ringgold, ROR, Scopus, ORCID, and more. It is also a sibling of other well-known ISO standards like ISBN for books, ISSN for periodicals, and ISRC for sound recordings. But instead of focusing on one domain, it stretches potentially across all the creative sectors, from libraries and book publishing to music, entertainment, research, and rights management.
ISNI is explicitly designed to act as a bridge or linking ID. It connects existing industry standards and identification schemes, consolidating them in one curated database and enabling crosswalks and correlation between the different schemas, thereby helping to make research more discoverable. ISNIs can be assigned not only to researchers and authors, but also to research institutions, publishers, funding bodies, editorial boards, and reviewers.
This makes ISNI a flexible and scalable solution for persistent identity management, supporting workflows for grants, copyright management, and sharing data across different fields. ISNI improves operational efficiency by reducing the administrative burden of name management. Researchers do not need to self-register for an ISNI. Assignments can happen independently through trusted registration agencies, and ISNIs can even be assigned posthumously, enabling persistent links to historical research and creative works over time.
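One concrete detail worth noting: an ISNI is a 16-character identifier whose final character is an ISO 7064 MOD 11-2 check character, the same checksum scheme ORCID uses. Here is a minimal validation sketch; the sample stem below is made up for illustration, not a real ISNI:

```python
# Sketch of ISNI check-character validation (ISO 7064 MOD 11-2).
# The 16th character validates the first 15 digits; 10 is written as "X".
def isni_check_char(stem15):
    """Compute the MOD 11-2 check character for a 15-digit stem."""
    total = 0
    for ch in stem15:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_isni(isni):
    """Validate a 16-character ISNI; spaces and hyphens are ignored."""
    s = isni.replace(" ", "").replace("-", "").upper()
    if len(s) != 16 or not s[:15].isdigit():
        return False
    return s[15] == isni_check_char(s[:15])

stem = "000000021825009"             # hypothetical 15-digit stem
full = stem + isni_check_char(stem)  # append the computed check character
```

Because the check character catches single-digit typos and transpositions, systems ingesting ISNIs can reject malformed identifiers before they ever reach a metadata record.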
Looking to the future, ISNI is a cornerstone for research integrity in an age increasingly shaped by AI, big data, and global networks. It provides a foundation for emerging technologies like AI-driven discovery tools, verified digital identities through blockchain, and scalable linked data applications that help connect information more meaningfully.
In short, ISNI supports transparency, accountability, and interoperability across scholarly communications, all of which are central to the principles of open science. If you'd like to join the millions of individuals, organizations, and groups already using ISNI, you can learn more via our website, isni.org, or scan the QR code on screen now. Our database is open to the public and available via our website.
You can also reach us directly at isni.org if you have any questions about implementing or integrating ISNI into your workflows, becoming an ISNI member organization, or working with us as an ISNI registration agency. Join our mailing list or follow us on social media to stay informed. So thank you for your time and for your interest in persistent identifiers.
Together, through global adoption, we can build a more open, trustworthy, and connected future for research and creative works. We look forward to speaking to many of you during the event. Until then, take care. And forgive me, I started this thing in progress, so I'm going to move it back. My name is Jessica
Notes. I'm an editor with the American Society for Microbiology, and I'm here to talk about the poster I'm presenting, along with Adriana Borgia, managing editor, and Joe Schwartz, assistant managing editor, called Open Peer Review for Journals at the American Society for Microbiology. The American Society for Microbiology, or ASM, is a professional life science organization dedicated to promoting and advancing the microbial sciences around the world.
The society publishes 16 peer-reviewed journals that span the entire range of basic, clinical, and applied microbiology. Authors and reviewers can opt in to open peer review, or OPR, at two of ASM's open access journals, mSystems and Microbiology Spectrum. mSystems is a selective systems microbiology journal that launched in 2016, while Spectrum is a broad-scope sound science journal that launched in 2021.
OPR means that if the paper is accepted, the reviews, decision letters, responses to the reviewer comments, and reviewer names will be published alongside the article. This option is presented to authors on the initial submission and revision forms as an opt-in, opt-out question that you see here. Spectrum shows a modest but consistent year-over-year increase in OPR uptake over the past three years, and mSystems has a slightly higher overall rate of OPR participation, which may be due to several factors.
First, it is a more established journal that introduced OPR in 2019, so its author community may be more familiar with the option. In addition, mSystems has a systems microbiology focus that tends to attract data-centric research, a domain in which open science practices may be more culturally normalized. The percentage of OPR articles has remained relatively stable for both journals, with mSystems showing a slightly higher proportion of OPR publications compared with Spectrum.
The acceptance rate for OPR manuscripts is marginally higher, about 4% higher in mSystems and 3% higher in Spectrum. However, these differences are not statistically significant. These results indicate that opting in to OPR does not disadvantage authors in the editorial decision process. Authors based in Europe and North America showed the highest OPR uptake, consistent with regional trends in open science engagement.
mSystems had higher OPR uptake, particularly in North America, which is likely due to its stronger focus on data-centric research. In Asia, OPR participation was driven by authors from China, with 74% of the region's OPR submissions. Overall uptake in Asia was relatively low, suggesting that OPR is not a key priority there. OPR uptake in Africa seems higher for mSystems than Spectrum, but the number of authors opting in was the same for both journals.
For both journals, OPR articles had slightly lower average citations compared to non-OPR articles. These differences are not statistically significant for mSystems and only marginally closer to significance for Spectrum. OPR articles received slightly higher Altmetric scores, though again, the differences are not statistically significant.
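For readers curious how a claim of non-significance like this might be checked, here is a hedged sketch using a standard two-proportion z-test. The counts below are made up for illustration; they are not ASM's actual data:

```python
# Illustrative two-proportion z-test with hypothetical counts, not ASM data.
from math import sqrt, erf

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Return (z, two-sided p-value) for the difference in proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# A 4-point acceptance-rate gap (52% vs 48%) over 100 manuscripts each
z, p = two_proportion_z(52, 100, 48, 100)
```

With samples this small, a 4-point gap yields a p-value well above 0.05, which is consistent with the presenters' observation that the acceptance-rate differences are not statistically significant.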
In conclusion, OPR opt-in varies by community norms and regional practices, and it does not disadvantage authors in the editorial decision process. In addition, OPR does not negatively impact article quality and may be associated with broader audience reach and social media attention. I hope you enjoyed this presentation, and have a nice day.
Hello, my name is Katie Rokakis, and I'm a senior digital publishing coordinator for Michigan Publishing Services at the University of Michigan. In my role, I manage the production process for scholarly monographs published by our distributed presses, including Lever Press. Lever Press is a fully open access press that is governed and supported by member libraries from more than 50 academic institutions.
In this poster, I share the results of a survey that is offered to readers of Lever Press open access ebooks. The survey appears as a pop-up window when readers open the book in the e-reader powered by Fulcrum on the Lever Press website. The survey asks readers about their interest in the book, how they found the book, and how they plan to use the book. From August 2020 to January 2025, there were 441 responses to the survey, which I inventoried for this poster. The results of the survey will help publishers understand how open access ebooks are being discovered, used, and reacted to. The first question asked readers, how did you find out about this book, and gave them multiple-choice options. The majority of participants responded with other, and in the free-text
field noted that they found the book via a mailing list or a recommendation from a colleague or the author of the book. The second question asked, why are you interested in this book, with a free-text response field. I read through each of the responses and categorized them according to the top five themes that were present. Most readers were interested in the book for professional and academic reasons, although close to the same number of readers were simply personally interested in the subject matter or the author.
The third question asked, what are you going to do with the book now that you have it, with multiple-choice options. Very few planned to print out the book and read a physical copy, but many reported that they would do many of the other options, including save the book, read it, share it, and perhaps assign it in their courses. The fourth question asked readers, is there anything else you would like to tell us or think we should know about how you found or are using the book, or about yourself, with a free-text response field.
This was a pretty broad question. So the types of responses varied widely. But there was an overwhelming sentiment of appreciation and support for open access ebooks. Many respondents simply wanted to express their gratitude for the e-book, while others shared more details about their demographics and their use of the e-book.
One main takeaway from these survey results is that the open access model is succeeding at helping readers overcome barriers of affordability, location, or institutional affiliation. Many readers indicated that they came from an area with limited access to scholarship. One reader noted, I am grateful for the sharing of books that otherwise would not be available to many of us in more out-of-the-way places, such as small Canadian cities and universities. Being able to access so much of the world's literature and research has been critical to my freeing myself from literal oppression related to closed minds and spirits. The ease of access also encouraged readers to share the ebooks widely and assign them in their courses, but it was clear from the responses that the books were also valued for their high-quality content, and not just for their free price point.
Readers appreciated that there was well-researched material in their subject areas that was also accessible. If you're interested in learning more about reader responses to open access ebooks, I encourage you to check out these related inventories and similar survey responses that were completed for the University of Michigan Press. You can also learn more about Lever Press, its books, and its publishing model at leverpress.org. Thank you, and happy reading. Hello, my name is Casey Pickering, director of product marketing at Copyright Clearance Center. I'm excited to share some highlights of a poster that will be featured at the poster sessions of the upcoming annual SSP conference in Baltimore. Technological advances, from widespread digitization to artificial intelligence, combined with intensifying policy mandates for openness, are fundamentally reshaping the scholarly publishing landscape and presenting society publishers with both significant challenges and strategic opportunities.
To understand how they're responding, CCC and Research Consulting surveyed 66 society publishers and validated findings through an interactive workshop with 40 industry representatives. The findings reveal some interesting insights. Publishing remains core to societies' identity: nearly 80% of respondents consider publishing central to their mission, with most having published for over 50 years.
Scale confers resilience. Societies publishing larger journal portfolios and more diverse outputs demonstrate greater financial stability, while those with more limited offerings face heightened vulnerability. AI is presenting a paradox. AI ranked as both the greatest challenge and the most significant opportunity, with societies seeking guidance on both implementation and licensing strategies.
Open access creates volume without proportionate revenue: 55% of societies expect content volume to grow faster than revenues, straining quality assurance processes. And finally, collaboration is increasingly essential. Societies see collaborative approaches as key to tackling challenges, including data integration, AI implementation, revenue decline, and the transition to open access publishing.
With these highlights in mind, let's review a sneak peek at the poster, with some of the top challenges and opportunities as identified by society publishers. These were the top five challenges. Let's talk AI. Likely no surprise: AI is viewed as both a top challenge and, as you'll see shortly, a top opportunity as well, over a five-year horizon. We asked in the survey whether societies were engaged in licensing content to third parties.
Just over half said not yet or never will, and just under half are actively licensing in their own right, through a partner, or plan to. This shows clear uncertainty, or maybe a reevaluation of strategic priorities, but also a willingness to experiment as a means to gain control over how content is being used to train AI systems and to be remunerated properly for it.
Rounding out the top five are competition from commercial publishers, OA pricing expectations, government and funding mandates for OA, and managing research integrity. It's worth noting that this survey closed at the end of January, just as we started to see sweeping reform in the US, particularly around research funding. So keep that in mind as a significant and uncertain market force here.
Next, for opportunities, there are some really clear parallels between the challenges on the previous slide and the opportunities you see here. It feels like a very pragmatic outlook from societies. For AI, there are already promising use cases underway, designed to drive time and cost out of the publishing processes through automation, especially in earlier editorial stages and peer review.
That's followed by greater society collaboration, market expansion, more OA offerings, and enhanced author services and support. So what does the future look like? With publishing services remaining core to their mission, society publishers are really uniquely positioned by the value and the research integrity they deliver through community connection, quality standards, and trusted subject expertise.
The future of society publishing demands both pragmatic adaptation to changing market conditions and collaborative partnerships that maintain independence while accessing essential capabilities. And this is just a quick look at what we'll cover in our poster session, The Future of Society Publishing: Letting Data Tell the Story.
It will be happening on Thursday the 29th of May, from 1:30 PM to 2:30 PM in the exhibitors marketplace. I hope to see you there. So we can talk more about these findings. And in addition, we will be sharing a white paper authored by our friends at research consulting that will give an intensive look at all of our survey findings and provide quotes from different publishers who attended our virtual workshop, and much more insights about what society publishers think is the future.
So please join us there. I look forward to seeing you. Thank you. Hello, and welcome to DCL's poster presentation. My name is Maryanne Callahan, and I work at Data Conversion Laboratory. This poster is titled Content Clarity Analysis: Ensuring Content Integrity, Discoverability, and Accessibility at Scale.
Content Clarity is a DCL solution that conducts a large-scale analysis of a publisher's entire catalog to uncover issues that impact discovery of your content. For publishers planning strategic improvements to how their content is developed, produced, and distributed, understanding content metrics and quality across the entire catalog is essential, and that's what Content Clarity does.
The software analyzes and reports metrics like how many different DTDs you are using and how big your collection is. But then it also identifies where your content is missing information. For example, for every image, is there a callout in text, and for every callout in text, is there an image file? The software cross-checks against third-party databases like PubMed Central and Crossref to find out.
Are the DOIs valid? Are the dates of the article correct? The lists you see on this slide are just some of the metrics and information we capture. All the issues that Content Clarity captures trace back to content discovery, interoperability, and good user experiences.
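The image/callout cross-check described here can be sketched as a simple set comparison in both directions. This is an illustrative toy, not DCL's actual software, and the file names are made up:

```python
# Toy sketch of a bidirectional image/callout cross-check, not DCL's product.
# Given the image files shipped with an article and the image references
# found in its XML, report mismatches in both directions.
def cross_check_images(asset_files, xml_callouts):
    assets, callouts = set(asset_files), set(xml_callouts)
    return {
        "missing_file": sorted(callouts - assets),   # callout but no image file
        "orphan_image": sorted(assets - callouts),   # image file but no callout
    }

report = cross_check_images(
    asset_files=["fig1.tif", "fig2.tif", "fig3.tif"],
    xml_callouts=["fig1.tif", "fig2.tif", "fig4.tif"],
)
```

The DOI check mentioned in the talk could, in principle, be layered on the same way, for example by looking each DOI up against a registry API and flagging any that fail to resolve; the heart of the analysis is always reconciling what the metadata claims against what actually exists.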
Content Clarity works by analyzing all content and the related asset files. Part of that analysis is to ensure alignment with industry standards like JATS, BITS, and NISO STS. The first step is having a publisher collect their content set: PDF files, XML, image files. This alone is often a revealing endeavor for publishers, who discover they don't even have all their content in one place.
Next, DCL conducts a secure analysis of the content, and a detailed report identifies issues that turn up and would benefit from remediation. Scholarly publishing content is vast, and the pace of production never ceases. Taking time to analyze decades of content rarely occurs until publishers move to a new platform. Replatforming decades of content always reveals broken links, inconsistencies, outdated metadata, and more.
Other challenges we've seen are that tables, equations, and math formulas contain valuable information, but it's styled as images and not searchable. Also, references are not tagged in the latest XML standard with DOIs or other persistent identifiers. Media files and paths change over the years, and finding invalid IDs and paths is tough to do at scale without this kind of analysis.
What we've consistently found is that content collections develop issues over time. It's simply the nature of scholarly publishing, with changing tech over the decades. Just as technology evolves, so does metadata, so do the XML standards we use, and so does how we even present research and information. But what remains consistent is the fact that strengthening content structure not only enhances accessibility for researchers and institutions, but also prepares publishers for the future. Today, that might mean leveraging AI-powered research tools, but in the future, we just don't know.
What we do know now is that conducting a large scale analysis of a publisher's entire catalog uncovers hidden barriers that allow us to make content more discoverable and valuable. Thank you so much for watching this short intro to my poster presentation. I know there are so many great posters at this year's SSP annual meeting, and I'm really appreciative that you took the time to explore mine.
Imagine a world where knowledge, through research, is accessible to everyone, everywhere. That's the promise of open access. Our analysis reveals a scholarly publishing ecosystem in transformation: gold OA rising, closed access declining, and innovative funding models emerging. What we're witnessing is a revolution in how research is shared and in who gets to participate in global knowledge creation.
Our analysis, which looked at decades of data from Dimensions, STM, and Web of Science, reveals fascinating patterns. Gold OA is steadily growing and stabilizing as the dominant model, while closed access is gradually declining despite occasional rebounds. These trends are consistent across both STEM and social sciences and humanities fields, pointing to fundamental shifts in how research is disseminated globally.
We're witnessing a dichotomy in the publishing landscape. While closed access still receives substantial funding, over $2.15 million from the National Natural Science Foundation of China alone, gold OA is emerging as the second most heavily funded model, particularly in Europe. When we examine contributions to sustainable development goals, we see gold OA leading in critical areas like SDG 3, good health and well-being, and SDG 7, affordable and clean energy.
This suggests open access is not just changing how we publish, but actively contributing to global knowledge in high impact areas. What does the future hold for open access. The sustainability imperative is driving innovation. Emerging models highlight two promising approaches. Bio one. Subscribe to Open Model aligns community funding with long term stability and frontiers.
Flat fee agreements offer institutions predictable costs while expanding access. These alternatives move beyond traditional APC based publishing toward more inclusive, mission driven approaches. Nature teaches us that biodiversity, not monoculture, ensures ecosystem resilience. Similarly, the future of scholarly publishing won't be dominated by a single perfect business model.
Instead, like successful evolutionary adaptations, multiple models will coexist and hybridize, each filling different niches in the knowledge ecosystem. Open access models currently demonstrate this adaptive principle, developing specialized approaches that respond to unique community needs. This diversified publishing ecosystem is essential for meeting the varied requirements of researchers, institutions, funders, and disciplines worldwide.
The most sustainable scholarly future will be one where many paths to open access can flourish. Hi, everyone, I'm Letty Conrad with LibLynx. I'm so pleased to bring you a little bit of the poster session from SSP. And we're going to stop there, because Letty is kind enough to let us stop there, since we did see the end of hers, and she's here.
So if you have questions about her poster, Letty, you don't mind jumping in and answering those questions, do you? Absolutely not, anytime. Awesome, awesome. Well, we're going to take it now, and we're going to move in and have a discussion about the SSP Previews session, which is one of my favorite parts of the annual meeting.
I also enjoy, my mind's gone blank, what do we call it? Innovations? All right, that's similar to what we have during the year, in any case. But we want to showcase. Innovation Showcase, that's it. Which takes place a couple of times a year, and I always look forward to that one.
Always fascinated by it. We do have a couple of the different chefs here with us today. So if you are a chef, turn your microphone on, and, Susan, start finding them and start highlighting them. And I'm going to let you take it away. I've got here a list of the new and noteworthy products that were featured. Let me just turn it over to you, and you can tell us about some of the things that you saw and the impact that you think these things will make.
Fabulous. Thank you. Yeah, I always enjoy the Previews, and I've noticed some people refer to it as the People's Choice, or sort of the most popular new innovation or new offering. So it was exciting to see this year what's going on. We have a couple of questions that we're going to work through.
And the first of those is: what stands out? One of the things that strikes me is actually a theme throughout the annual meeting, which is a determination to solve problems and work together on solutions. And I feel like every single presenter came with, you know, an application, a workflow, a partnership, a solution that will crack some of the toughest nuts that we're facing.
So that's what stood out to me. And just generally embracing AI as a way forward rather than a threat. That definitely stood out. But Randy, what stood out for you? Yeah, so first I just want to say, too, you know, I agree with David completely, and you, Letty, that this is one of my favorite parts of SSP, seeing all the new products.
And every time I see them, I'm like, I want that. Then I see the next one: I want that too. Like, I want everything, you know. Definitely a theme, like you said, is that we're embracing AI. I remember, you know, five to seven years ago, we were just so afraid of it. But now we're leaning into it and exploring the possibility of how it can have an impact on what we're doing.
One thing, and this may be a flip side to the sunshine that you just shared, Letty, that really caught my attention, though: I was watching the ORCID presentation, and it's a fantastic presentation, but in the back of my mind, I'm saying, here is something that we have not universally adopted. We have not said this is the standard, and I think it's something that has the potential to protect us from risk, from a lot of vulnerabilities that some of the nefarious people are taking advantage of.
And I think that if we don't take this opportunity to start having these harder conversations and say, yes, ORCID should be a standard, because we need to verify authorship, because it's important... I know it's asking authors to do one more thing, but if our end result is to say, yes, our research is valid, yes, it's verified, then I think ORCID (and this isn't a sales pitch for ORCID) is just an example of something that we should be having discussions around.
And I don't know why we're not doing that just yet. Yeah, great point. Hong, what about you? Right, I think the one thing that impressed me most is the final vote result display, and then comparing it to last year, because I participated last year. The final presentation format has improved significantly: this year it's very colorful, with visible logos and a clear display of the number of votes.
And last year it was a simple slide with the name and a basic bar chart, so it's improved dramatically. But from the content standpoint, what particularly struck me is how AI is evolving beyond the tools. You can now see it playing a much bigger role in workflow and decision making. And although integrity remains a central focus, there's clearly a growing interest in research identity solutions like ORCID and Ringgold, which reflects the broader industry shift. This is also a very important foundation for AI as well. I also really appreciated how the Previews session gave me a curated view of emerging ideas and innovation. SSP has done the heavy lifting in preselecting all these relevant, quality innovations, so that I can explore new directions without spending days filtering through everything.
So this saved me a lot of time, and it also broadened my views and gave me a lot of ideas. This is all really good. Yeah, thanks. Yeah, I agree, the polish is really there. I mean, especially, Hong, you remember from last year: when you step onto that stage, SSP has just upped the game. Really, you feel like a rock star walking up there, like you can't see anybody.
But the presentation is beautiful. The quality of the work is great. And I was struck by how many presentations were eager to get right into the weeds, right into, you know, XML, and showing us workflows. I think that just speaks to how eager we are to solve problems. But, Randy, to your point, ORCID is one standard that we've been championing now for a long time.
One of those PIDs. I'm a big fangirl for persistent identifiers. And I noticed DataSeer was our winner with Snapshot this year, working to crack one of those nuts that we're all struggling with around research integrity. But every research integrity score I've seen out there is using PIDs. It's relying on those spokes of the wheel around information standards.
And so I just want to underscore what you said, Randy, as far as the uptake of PIDs and the uptake of those standards. We cannot get to reliable research integrity scores, or researcher identity scores, without those standards and working together on those standards. So, yeah.
So I was there, and Randy, you were there as well. Hong, you didn't join us in person this year, but you did get to see all the Previews. What did you think about the winner? Who would you have voted for if you were there? You mean the winner, as in DataSeer? You're asking me who I voted for? Yeah, would you have voted for DataSeer, or who would you have voted for?
Well, you know, I know the team. I had several discussions with the founder of DataSeer before, and the product is both useful and meaningful. But actually, this year, if you asked me to vote, I'd vote for... if my pronunciation is right... is it Impelsys?
That was the one from Impelsys. Yeah, yeah. OK, OK. Thank you. Yeah, Impelsys. So I'd vote for that, because I think their vision of agentic AI orchestrating the entire peer review journey is something I see as the future of scholarly workflow.
It's more than just automation. It's about intelligent support for the editor, author, and reviewers that adapts and learns. I think the idea of an AI system acting almost like a human project manager across submission and peer review is exciting. Moving from assistant to autonomy. The integration-first approach also resonated with the real-world complexity of publishing platforms.
I think that's the future. In the future, we can imagine that editors, that everyone, will be a manager, managing virtual employees who are actually agents. So it becomes very easy for us to automatically call the most suitable tools to complete the task, et cetera. So I think that's why I'd vote for it.
Yeah, you're on the cutting edge of AI, though. We knew that Randy had a passion for this. Yeah, I have a passion. Who did you vote for, Randy? Yeah, so first I do want to say the five-minute limit was hard, and I would have loved to have seen Kriyadocs expand a little bit more, because I was in the zen moment with them, you know, meditating, and we were getting there.
So I would have liked to see a little bit more of what they wanted to demonstrate. But in terms of my vote, I actually went with DataSeer. I like their future-focused approach. You know, I'm one of the kind of people who likes to see where we are going to be five years from now. And it looks like, from what they demonstrated, they're looking at five years from now and getting us to that point.
And the way they described their tools, it was iterative. So it allows us to have bigger conversations on more substantive aspects of our industry, because we can have the AI learn as it goes along, improve our workflows, and have an impact on those kinds of tasks. Yeah, I liked the guided meditation as well. I'm always game for that, although I wasn't able to get up as early as the 7:00 AM guided meditation.
But I liked that. You know, one of the thoughts that came to my mind while we were voting this year is that I think we each come to the voting question with slightly different agendas. Maybe it's a popularity thing, maybe it's just who gave the most impressive presentation, or which one do I want right now for my business. So I think it can be tricky sometimes.
We may not always vote for the same thing. I will just show my cards and say I voted this year for LibLynx. I've been working quite a bit with them lately, and I thought Tim Lloyd delivered a TED Talk-style presentation that left the rest of us in the dust. But you know, it's really hard, as you say, Randy. Five minutes; if anybody's done it... I did it a couple of years ago.
It is so hard. I did it in my PhD as well; we were required to do a three-minute thesis. So five minutes is hard? Try three minutes. Try to get your whole six years of research into three minutes. It's a challenge. So, what about surprises? Did any presentations this year surprise you?
Concern you? Anything inspire you? I won't say concern, because I think we're all having the right conversations. But I will just circle back to the Kriyadocs presentation, because one of the things they were specifically talking about is the human interaction. And, you know, we're definitely focused on the technology and all the wonders that it can do.
But they didn't set aside the importance of one of our core values: community. Right? Making sure that we have a place and a role, and that we use these tools to improve our content and to improve our activities. So that's one element that really stood out to me and resonated. I walked away thinking more and more about that. Plus, I mean, Tim's voice was just so calming too, right?
Like, I do want to come talk to Tim. He has a way with words and a great approach, indeed. What about you, not having been there in person? Yeah, I was still inspired. Even though I was not there, I watched all the videos and was inspired. You know, I'm the AI guy, so I'm still inspired by how far AI has come in this space.
We are seeing the transition. From the researcher perspective, we're seeing the transition from simple tools, to AI-powered research assistants, and now even to what we might call AI scientists. This is huge progress. And then on the publisher side, on the other hand, that's why we try to leverage AI to empower the publication side.
And then during the submission and peer review journey, we have to do this. The idea is identical to the agentic AI I just mentioned, so that's why I voted for it. Managing workflow and coordinating multiple virtual teammates is something I wouldn't have imagined in this space even two years ago. What also stood out was the focus on data quality.
I think this is very, very important, especially now that people focus not only on content quality but also on researcher identity quality, et cetera. The teams are recognizing that without good data, even the best AI won't perform. This is a critical insight we need to keep pushing, not only for reproducibility and integrity, but for knowledge sharing across the whole of society. My own presentation didn't win last year, but it was valuable to see where the community's attention is. So I like this question: which solution will have the most positive impact on scholarly communication? I think this is the right question to ask. It's great to see the range of answers; that helps me stay aligned with the evolving needs of researchers and institutions.
So this is what inspired me. Yeah, it's interesting to hear that you wouldn't have thought we'd be so far along with agentic AI and other solutions. I think you're right. But I think it's also important to see that we've moved beyond the fear-and-loathing stage. We've moved into the what-are-we-going-to-do-about-it stage. And I think some of these other solutions underscore that as well.
I think about Typefi, for instance. As an accessibility advocate, I was really excited to see the alt-text workflows. Let's make it easier. Let's build in automation where we can, and save human brains' time for doing the things that only we can do. So bringing automation and leveraging AI to things like accessibility, I was super happy to see that.
Do any of these other logos, any of these other solutions, stand out to either of you? I would like to add one more point, sorry for the interruption. I remember also that the future is agentic AI and automation, but this still facilitates the human. It's not a replacement; it just improves human productivity.
But humans still need to make informed decisions at the end. Yeah, and humans were in the loop everywhere we saw AI. Yeah, exactly. That's just the very next thing we were going to say: here's the human check. Yeah, I was just going to mention the Kudos presentation as well, you know, understanding who's coming to your websites, making sure you're able to target those communities, those individuals, and understanding how your content is being received.
I think that adds tremendous value to your activities, if you have that data. Going back to Hong's point, having clean data is so important in everything that we do. At a time when we're watching our budgets, we need to make sure that we're targeting our messaging critically. Yeah, yeah. Charlie does an amazing job.
She's presented a ton; she's presented so many, so many times. She's done Previews a couple of times, and she always brings an angle on what we're talking about: certainly connecting with our communities, but also communicating to communities outside of scholarly communications. Kudos, Kudos, Kudos. They always do an amazing job with that.
So, yeah, definitely. You know, one thought that's come to my mind too as we've been chatting, and something maybe for our annual meeting committee for 2026 to consider, is giving us maybe a few categories to vote on. You know, kind of like the Grammys or the Oscars: best this, best that. It could be best talk, best innovation, best guided meditation. I don't know, maybe there are some categories to help us.
You know, give props to more folks, rather than choosing just one among these amazing presentations. It was a challenge, I think, for all of us. Did any provide good outcomes, as opposed to talking about outputs? Oh, great question, great question. I feel like a lot of them were focused on outcomes.
You know, maybe that's a point for all of our presenters to think about, because I think there was quite a bit of focus on discrete outputs. Right, the things we can measure: the thing delivered, or the workflow enhanced. But being able to measure that those things are helping us achieve our bigger goals, that would definitely be a good one to vote on too.
Way to throw it back to you. Yeah, well, I want to open it up here to the rest of the community. We've got a unique opportunity with Hong and Randy and Letty right here. I'd love to hear comments from the community about your perceptions during the Previews session, and any questions that you might have for this group. So feel free to raise your hand, or put it in the chat, or what have you.
Suggestions for 2026 Previews? I'll jump in first. I'll get the ball rolling. So one of the things you mentioned, Letty, was the fear-and-loathing stage of AI, and how it scared us all, and now we're leaning into it. And it's exponentially much bigger than the move to digital from print, because of all the implications it has.
But it reminds me of when Bill Kasdorf used to talk about SGML and say that to a lot of publishers, it means "Sounds Good, Maybe Later." And that's how it is with anything new like this: there's that hesitation and the fear, and then we realize this is just something we have to do, and we'd better get moving. Yeah, I think it is common for any emerging technology; people always have fear, because we don't know where it will lead, what the potential is. But through time, I think we have become clearer about this. Yeah, just like AI today. Yeah, it's the storming and the norming phase, right? Yeah, we're into the norming phase for sure. Yeah, my parents were asking for highlights from the conference when I got back, and I was giving them some highlights.
And I told them, I said, if you'd seen the Friday Previews presentation, which is kind of like a rock star Walk of Fame, you wouldn't understand half of what's going on. I mean, we just geek out. You know, I think the Previews were just a fantastic way of showcasing the amount of really hard work that's going on around innovation in our community, and just really impressive new ideas coming through every year. So every presenter should be super proud.
Yeah, I'll just add to your point, Greg. I'll share something I call the Jennifer Regala effect, because Jennifer has this way of taking... you know, we have all these things going on, right? Like, oh my God, I'm so worried, all these things are going on. And she'll say, these are the three things we need to focus on. And it's like, OK, suddenly it's not so big anymore. We're focused.
And that's what a lot of these presenters did. You know, AI is so big: here's the way we're going to attack it, here are the things we're going to address, here's the problem we're going to solve, this is how we're going to employ it. And we as the audience can say, yes, I can see that, this is how it will work for me. Or we can be inspired to say, oh, I understand that approach.
Let me see how I can adapt that to what is specific to my needs, or my company's needs, or my community's needs. I think we've coined a new term here. Well, AI is such a big catch-all term, and there are so many components of it. And, you know, I work for a service provider.
So we need to figure out what we're going to focus on to deliver the best outcomes for our customers. What is it that they need from us? But there are so many other things about AI that we don't have to worry about, because they're just not our bailiwick. I have just a comment, David.
I've always valued the Previews because it just levels the playing field. Like, we can have a really large organization like Copyright Clearance Center, and then other organizations that don't compare in size, and they're all creating really great technology or services. I just really value the Previews for that reason too.
Yeah, and I think sometimes it's interesting to see iterations. You know, in the past I've seen companies that have come in and done a presentation on a particular technology, and then four years later you see them again, and somebody needles them and says, hey, wait, didn't you already innovate? But to see where they've come over those four years, how that's changed, and what they're doing today is also pretty fascinating to me.
Yeah, DataSeer won a couple of years ago as well. I mean, Tim Vines and his team have just been an innovation engine. But to Marianne's point, I agree: Previews is kind of like an equalizer in that respect. And it is a super exciting showcase, absolutely. Yeah, well, I'm going to go ahead and roll us along. We've got 15 minutes left.
We're right here at the homestretch. I'm going to encourage our Kitchen chefs to stay close by, because we may have additional questions for you. But at this point, we'll open it up. By the way, isn't this a great picture? I thought this was another fantastic one. I mean, yeah, it's a great shot.
I use it for Q&A because it looks like he's actually asking a question. Anyway, I did think it was a great shot. Yeah, Tim Talk, I love that. I'm just going to open it up here. We've got 15 minutes left. Questions, thoughts, suggestions: this could be to the program committee, to those of us on this highlights subcommittee.
It could be to the chefs. It could be a callback. You know, let's get Jennifer back here; we can ask her questions again. Whatever. We want to give a chance to wrap this up and see what else is there. And then, do we have Melanie with us again? If Melanie's back, I want to make sure I give her a chance to say any closing thoughts as well.
But anyway, first of all, we'll just open it up to the group for any questions, comments, thoughts, things like that. David, I'll jump in real quick. I don't know if you need me. Hi. Well, I just wanted to say, first of all, this has been great. I really got the feeling that I could see some of the things that I didn't get to see at the live meeting, which was sort of the point, even for those of us who went to the live meeting.
So that was really great. And also, secondly, for anybody who wasn't part of the highlights subcommittee, you have no idea how hard these guys worked to put this on for us today. So I'm super grateful to the subcommittee, to Greg and all of you, David, for putting this together. And I'm also really grateful that we had so many people joining us live.
You know, from speaking on their talks, to joining in the chef session, to everything in between. So a huge thank you to the group, and to all of you attendees for coming to this. I think it's a really cool way to have a virtual component of the annual meeting that's a little bit more accessible than last year's. So really well done. Thank you so much.
And yeah, just kudos and shout-out to everybody. Well, thank you, Erin. I know you worked really hard on last year's event as well, and this was a very different sort of way to move forward this year. I'm sure we didn't get it perfect, but I feel like a lot of things really went well this year. We are excited about getting your feedback.
Susan has put a link in here. If you can give us your feedback, that would be very, very much appreciated. Any thoughts, anything we can do, especially for those of you who, like me, didn't get a chance to go to the annual meeting. We want to make this as useful a time as possible. But also, for those of you who did get to go: was this a good summary for you?
One other little thing, before we get to the next thought or comment: I did want to note that the next Innovation Showcase is on the calendar already. It's on Thursday, July 24, so a little over a month away. So go ahead and get yourself registered for that. Susan, did you put the link for that one in the chat as well? If you didn't, I'm sure you will.
All right. Other comments, thoughts, questions, feedback? Well, I just want to say, having worked on the highlights subcommittee with all of you wonderful people, I want to second Erin's thank you. We put in a lot of work on this, and I thank you for all the help you gave me. It was really a heartfelt effort from all of you, and I greatly appreciate it. And I really appreciate everyone who participated today, from the chefs to the presenters to the people attending, especially if you didn't get a chance to go to the annual meeting and this is your only exposure.
I'm glad you were able to join us today. And as David said, please let us know: this is the first year we've done this format. Do you like the format? Do you not like the format? If you like the format, are there features we can add that will make it better? Can we tighten it up? We're open to all of your feedback, because this is the first time we're doing this, and we want it to work.
It's for you. We want to make it as good as it possibly can be for everyone in attendance. Yeah, and thank you to Greg for leading the group. I also want to put out a special thank you, if you haven't figured it out already: these things don't happen without Susan Patton constantly keeping things in line, making things happen, and delivering things when she says she'll deliver them.
She brings a sense of calm to these events, and she does a lot of things behind the scenes. Susan, we really, really, really, really appreciate you for that. So thank you. David, what were your thoughts? I mean, you're our emcee, and you had a lot of insight into how this came together, but not having been at the live event and being a big part of this, how do you feel?
Do you feel like you got a real sense of the presentations and the conversations? I do. I feel a little... I feel like, in trying to make sure that I didn't mess up pressing a button or something like that, in some ways I may have missed some things that I might have been able to enjoy just by listening.
But yes, I feel like I did get a sense of it. My one thing in all this is that I struggled to actually get to watch all the sessions online that I wanted to watch. You know, I thought, oh yeah, once SSP is over, these things are online; come the 17th, no problem. It was more challenging to find time in the day for that than I thought.
But I look forward to continuing to do that. And by the way, I should also advance the slide here so everybody can have the link, if you weren't here at the beginning. We do have the link here for all the different annual meeting session recordings, of which I intend to go watch a lot more. I enjoyed getting to see the posters. That's something that I've enjoyed in the past, and it was good to get to see those as recordings, which I think are also in this library.
And honestly, I'd like to see more. I'd like to see some vendor presentations in the future. I think those would be really interesting, because I always enjoyed the tour around the exhibit hall as well. Yeah, I'm sure we're going to tweak this format as we go forward. And before we go any further, David, I'm sure I speak for all of us when I thank you for doing a fantastic job moderating this.
Thanks so much. It's a lot of work, and it's a lot of talking, so we're very grateful to you for agreeing to do it. Well, thank you. I was glad to get to be a part of it. It was great that David Shiffman, the keynote, was able to join, and that would be a great tradition to keep in future highlights meetings. Having a keynote speaker become involved with the community that SSP has created, I think, was just fabulous.
Yeah, for sure. All right, any other thoughts or questions? So, I think four hours is a long time, and we have five minutes to spare. Those five minutes went fast, I thought. I thought, oh no, how are people going to stay around? But it didn't lag, and I thought it was great. Well, thank you again to everybody. Again, go watch the videos. I'll also say, save the date for next year. Hopefully you can attend in person, and if not, I look forward to hopefully having the second annual SSP Highlights session virtually.
If you have any questions or comments afterwards, feel free to reach out to me, David Turner, at dclab.com, or the various other channels you can find me on, C3, all that kind of good stuff. So, all right. Thank you, everybody, and good afternoon. Take care. Thank you. Bye.
Bye-bye, all.