Name:
What Smaller Publishers Need from Tech Vendors to Level Up
Description:
What Smaller Publishers Need from Tech Vendors to Level Up
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/fd0ea17a-4b21-4158-b272-bd2feba74f84/videoscrubberimages/Scrubber_1.jpg
Duration:
T00H54M40S
Embed URL:
https://stream.cadmore.media/player/fd0ea17a-4b21-4158-b272-bd2feba74f84
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/fd0ea17a-4b21-4158-b272-bd2feba74f84/SSP2025 5-29 1600 - Session 3A.mp4?sv=2019-02-02&sr=c&sig=GHBIodEDo1pNUI3f8uyTn2vkzk5vFeSQ95kOIzcliNQ%3D&st=2025-12-05T20%3A59%3A58Z&se=2025-12-05T23%3A04%3A58Z&sp=r
Upload Date:
2025-08-15T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
OK, we're ready. I'm going to introduce my great panelists today. We're talking about what smaller publishers need from tech vendors to level up. A lot of small publishers are confronted with the challenges of getting technology, incorporating it into their programs, and keeping pace, and these panelists are going to help you get some ideas about what they've done.
I'd like to introduce my panelists right now, and I'm going to let them introduce themselves. Great, hello. And we do have seats up here if anyone wants them; there are seats in the front. I'm Brian Cody, CEO and one of the co-founders of Scholastica. We're a technology vendor, and we work with over 1,300 journals across a range of sizes.
That's everyone from societies with a single journal up to people who have larger portfolios. I'm also a developer by background, so I'll try to bring that technical aspect to the conversation. Excellent, so you'll handle the technical side; I'll handle the business side. I'm Emily Delce, chief product and customer success officer for Silverchair.
Silverchair is a hosting platform and has also recently acquired ScholarOne, so we're now working with peer review and conference hosting as well. We have the full spectrum, from peer review all the way to hosting the content, and I'm indeed less technical than my colleague here. Hey, everybody, I'm Michelle Humble. I'm the editorial director at the American Society of Civil Engineers.
We're sort of a small-to-medium-sized publisher. I oversee everything on the editorial side. We have 35 scholarly journals, and we have a books program, proceedings, and standards acquisition. Hi, everyone. My name is Lily Simmons. I'm the digital operations manager at University of Pennsylvania Press, so I'm hoping to bring a university press perspective.
We are a small publisher with a books and journals program. I'm largely on the books side, but I'll do my best to represent our journals team today. OK, so we have a great bunch of panelists here with a variety of experience, and you've seen the code of ethics and the core values. Maverick and Scholastica partnered on a survey of small publishers; we wanted to find out what people need to do more with less, to keep up and keep going in the current environment.
The survey looked at the challenges of smaller publishers and at how they want to level up their responses to industry changes, from new metadata standards to expanding research integrity checks to peer review, and at which services and software solutions would be best tailored to smaller publishers with limited bandwidth and budgetary resources.
And how can vendors best support that? So we did the survey, and these are the results we've gotten so far. The survey is still open, so you can still take it if you want to. The preliminary results show that among the things publishers consider most important right now are peer review, website hosting, and research integrity software.
That's not a big surprise, but it's amazing how much the other things aren't automated; there's a lot of automation in certain areas and not much in others. The challenges aren't a surprise either: budget constraints and human resources constraints. But there is a lot you can do that doesn't cost a lot of money or take a lot of people.
And I think this group can help you do some of that. These are the results of the survey so far on the question of to what extent your publishing organization will automate peer review in the next 24 months. We're seeing technical checks and research integrity checks, but it was also interesting that finding qualified peer reviewers was high on the list.
That surprised me a little bit. The findings are still coming in, so we need more data, and we urge you to take the survey if you can. But given these findings, I'm going to go back to our panelists and ask them to talk a little about some of these areas. Peer review and publishing workflow optimization seems to be a big one.
Then there's metadata management and interoperability, which comes back to your hosting platform, and research integrity in the age of AI. So I'm going to turn it over to them to talk about how it's done. Brian, do you want to start? Sure, I'll give some brief thoughts and then we can go from there. As I said, we work with publishers across a range of sizes.
People on either the peer review side or production, and some hosting. When I thought about this session, the core question for me was: are small publishers particularly different from big publishers? There's an argument that it's the same problems and the same process, so maybe big publishers just have bigger tools.
Is it the same tool? From our experience, the way I think about this is that you get different benefits at different economies of scale. If you think about an industrial approach versus a cottage-industry or artisanal approach, even though you might use the same tools, the benefits are radically different.
I'll share an analogy. I do woodworking. If I'm going to make 1,000 of something, spending a long time setting up correctly, building templates, really getting it right, pays off; the same if I'm going to have 10 people working with me and I need consistency. There's a certain benefit at a certain scale. If it's just me and I need four parts, I should probably just do my best and move on, because I don't recoup those benefits at that scale.
I can talk more about that as we go, but when we talk about what you need from technology, I wanted to push myself on whether it is actually different. And as I talk to our customers, I do think it is, because their bandwidth, their team size, their publishing scale, and their time give them different abilities to recoup benefits at certain economies of scale.
I'll say more later. Do we want to hear from a publisher? I can talk a little bit. At ASCE we have about 40 staff members within our publications and standards department, and those 40 staff work with myriad vendors. We have several different product lines, though I know we're talking about journals today specifically.
With 35 journals we're sort of small but mighty. What I would say is that we have all of the same needs as the larger publishers, but because we have limited staff resources, we can't do it all. We really need to rely on our vendors to help us roadmap where we need to be. Obviously, we operate on a multi-year strategic plan for our publications department, as well as an overall organizational strategic plan that our pubs department is folded into.
So we really need to use our vendors as advisors and partners, and we need them to listen to us when we say, hey, here's something we want to develop. We do want to innovate, but we often have a limited budget. On our side, we work with about 650 publishers of various sizes, from publishers with one journal using ScholarOne all the way to thousands of journals hosted on the Silverchair platform.
The way we think about it is that we need to make our tools so intuitive and easy to use that the functionality is there regardless of size and regardless of price point. Ultimately, as you said, the core functions publishers need to fulfill are the same, but with much more limited resources; smaller and mid-size publishers will maybe have access to fewer customizations.
And that's OK, because they don't necessarily need them. But as a platform, whether it's a hosting platform or a peer review platform, we need to give those smaller publishers the core functionality that will help them save time, fulfill their mission, and make their tasks easier overall. Yeah, I can speak to a lot of those points for Penn Press. We manage 28 journals, and we have a team of three that does all of that work.
So we do rely heavily on our vendors. From the perspective of having such a small staff, the ease of use of the technology we interact with is very important, especially since we don't have full-time editors either; our editors are volunteer scholars, professors, and graduate students. We want to make sure the technology is worth their time and attention as well.
So the ease of use of the technology is very important, and, I know we're going to talk more about this, so is affordability. When we talk about peer review, we want to talk a little bit about research integrity and how it affects that.
That's become a big issue. Does technology have a role in research integrity, and how are you using it? Do you want to start? I might be quick with this one. We are not using any research integrity software at this moment; we are relying on people currently. And that's really as much as I can say, so I'd love to hear more about what you're offering to small publishers in this kind of situation.
Maybe I can speak to what we do at ASCE as well, which is that we're trying to do a lot all at once. Obviously, many publishers use iThenticate; we do too. But what we're really doing is a multi-layered approach, where we're relying on our volunteer editors and editorial boards to also have their eyes on submissions, and we're providing really clear guidelines to our reviewers on the types of things we want them looking out for.
Our staff are trained to look for things like citation stacking, inappropriate use of position, that sort of thing. But we have not yet adopted a technology tool for paper mill detection or that sort of thing. Unfortunately, we're a little bit in a reactive rather than proactive position, because things keep flying at us.
We need to comply with the European Accessibility Act all of a sudden, and we're putting forward a lot of DEI initiatives that take time and resources. Everything we're working on is really important, but with limited staff and resources you can only do so many things at once. So we're definitely almost gambling right now, in that we know there could be problems lying under the surface that we don't know about yet.
It's been a really interesting space; it picked up a lot of speed in the last three years, but this is not a new issue. With ScholarOne, we're taking a couple of different approaches, because there are some tools built into ScholarOne that we've had since before this was really at the forefront.
We had an unusual activity tool that was starting to flag things, and that was great. But as paper mills became much more of an issue and so much more front of mind for people, a lot of new vendors popped up addressing what I call slivers of the issue, and our publishers are experimenting with them. They're all very solid in their own right.
They all offer very important checks. Some focus on authors, others on citation abuse or plagiarism. They all have important facets, but we're finding that our clients are experimenting with different vendors on their own. Our stance is that we'll continue to build tools in-house, but it's also really important that we're able to integrate with these vendors, because across more than 650 clients, people are going to want to integrate with the vendors of their choice.
And that's OK. So we see it as a priority to make it easy to pilot these various tools and then let the publisher ultimately choose the one that works best for them. I'll note that what we're seeing a lot of publishers do right now is test tools, and especially on the smaller side, that's hard to do. I've really been enjoying the sessions here.
But some of the discussions of pilots are, oh, you can look at your billions of pages of content, and for a lot of smaller publishers, that's not where they're at. To your point about bandwidth, about having a team of three, that resonates. We had an experience where a publisher was looking at a tool, tested it, and thought it would be good, and then they realized they didn't have enough bandwidth to actually start dealing with the feedback.
Specifically, if a signal is clearly fine, good, move on. If it's clearly bad, you can do something. But that middle area leads to an unclear decision tree: well, what do you do? And that was a lot of work. They had an existing process, and they were hoping the tool would eliminate the need for it.
So part of what we're doing is working with a lot of people to understand how that testing is going. This will probably resonate with a lot of people here, but a tool that's amazing for one portfolio just might not be relevant or might not solve the problem for another; people have different risk structures. So around research integrity we're very much asking people, as you're trying something, what is actually solving your problem, and especially, what are you going to pay for?
Because there's a lot of interest in tools, but especially on the smaller publisher side, people are kind of hoping, oh, are you adding a lot of things that are free or built into the platform? But as people here know, you need a huge data set for some of these problems, or the server costs are not insignificant; these products have costs. So we're also trying to understand who's seeing a savings, in terms of time or mitigating the risk or the problem they're dealing with, in a way that they're actually going to pay that vendor for.
Because that's what we want to integrate; on the smaller publisher side, they're often looking to us to add the one-click integration. They're not going to do it themselves through an API. Yeah, I think that's a really good point. I'll also add that one of the things we think about on my team is that on one side we have these research integrity checks, which assume you've already got the authors and we're making sure this is good research.
But on the flip side, we're doing a lot of thinking about author retention: how are we going to recruit authors, what is the value-add of publishing with ASCE, and how can we make their overall experience better? Building in automated things is really great; it helps our staff and creates efficiencies. But we've also been looking at what we can do to automate and streamline things so that our authors have less of a burden and we keep them happy.
How are you doing that? Well, one pilot or experiment that we did recently, and are still working through, is using the Paperpal Preflight software, rather than doing it on the publisher side where we'd have every single paper go through it. For those of you who aren't familiar with the software, essentially you can set up a series of checks that are run on a submission.
You can customize it: I want every paper to be double spaced, I want continuous line numbers, it needs to have an abstract if it's this particular article type, that sort of thing. So, based on cost, we decided to add this as an optional service for authors rather than something we pay for across more than 20,000 submissions a year.
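To make the idea of configurable submission checks concrete, here is a minimal sketch, assuming a simple rules-based approach rather than Paperpal Preflight's actual API; the Submission fields and check rules are hypothetical illustrations:

# Illustrative only: a rules-based preflight pass, loosely modeled on the checks
# described above. Field names and rules are hypothetical, not a vendor's API.
from dataclasses import dataclass, field

@dataclass
class Submission:
    article_type: str
    line_spacing: float              # e.g., 2.0 for double spacing
    has_continuous_line_numbers: bool
    abstract: str = ""
    keywords: list = field(default_factory=list)

def check_spacing(sub):
    return None if sub.line_spacing >= 2.0 else "Manuscript should be double spaced."

def check_line_numbers(sub):
    return None if sub.has_continuous_line_numbers else "Add continuous line numbers."

def check_abstract(sub):
    # Only certain article types require an abstract in this hypothetical policy.
    if sub.article_type in {"research-article", "review"} and not sub.abstract.strip():
        return "An abstract is required for this article type."
    return None

CHECKS = [check_spacing, check_line_numbers, check_abstract]

def run_preflight(sub: Submission) -> list:
    """Run every configured check and return the list of problems found."""
    return [msg for check in CHECKS if (msg := check(sub)) is not None]

if __name__ == "__main__":
    draft = Submission(article_type="research-article", line_spacing=1.5,
                       has_continuous_line_numbers=False)
    for problem in run_preflight(draft):
        print("FLAG:", problem)

The point of a setup like this is that each rule is cheap to add, remove, or hand to authors as an optional self-check, which is what makes it practical for a small editorial team.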
What we've noticed is that it's getting good use. Authors are able to run their papers through it at a low barrier to entry and low cost to them, and it's creating efficiency on the editorial office side, because papers are coming to us in better shape than they otherwise would have. That's very interesting. Lily, have you had a situation like that, with the quality of papers improving through some of the technology you've used?
Yeah, I'm trying to think of a particular use case. I did sit down with Jocelyn, our director of journals, before coming on this panel, and she mentioned that some of our copy editors are using manuscript cleanup software, but that's about as much as I can relay. I guess the point is that we trust the people we're working with.
Whether it's our copy editors or anybody in the editorial office, we give them free rein to use what is going to help them. Where we're seeing technology really help is in creating a lot of flags, but then we're finding that some publishers end up with so many flags, because there are so many checks, and the work becomes triaging all those flags and turning them into something meaningful.
So we're spending a lot of time thinking through how we can help publishers of all sizes, but with small and mid-sized publishers in mind, with this use case: great, this paper has been flagged, but how do I interpret this percentage, how do I interpret this red flag, what does it really mean for this paper? Is it altogether incorrect, or is it just something I need to review and could pass? How can I reduce the number of papers that I ultimately do need to physically review before deciding it's not a fit? So the technology can help, but only to a point, and I suspect we'll get into AI soon.
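As a sketch of that triage problem, here is what routing flags into pass, needs-review, and escalate buckets might look like; the flag names, scores, and thresholds are all hypothetical, not any particular vendor's output:

# Illustrative only: bucket papers by their worst integrity flag so a small team
# only physically reviews the ambiguous middle band. All values are made up.
from collections import defaultdict

papers = {
    "MS-101": [("plagiarism_overlap", 0.05), ("citation_anomaly", 0.10)],
    "MS-102": [("image_reuse", 0.55), ("tortured_phrases", 0.40)],
    "MS-103": [("paper_mill_similarity", 0.92)],
}

PASS_BELOW = 0.25      # clearly fine: move on
ESCALATE_ABOVE = 0.85  # clearly bad: act immediately

def triage(flags):
    """Return pass / needs_review / escalate based on the worst flag score."""
    worst = max(score for _, score in flags)
    if worst < PASS_BELOW:
        return "pass"
    if worst > ESCALATE_ABOVE:
        return "escalate"
    return "needs_review"

buckets = defaultdict(list)
for paper_id, flags in papers.items():
    buckets[triage(flags)].append(paper_id)

for bucket, ids in sorted(buckets.items()):
    print(f"{bucket}: {ids}")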
But this is a journey, and it also brings to mind for me the small publisher side. It was mentioned earlier, having volunteers versus staff, and with some of these tools that give signals, personalities can come into play.
The lead editor sees a signal, and we've heard this from them: as something gets added they suddenly say, I don't even want to look at that. And you're saying, well, no, this is something we need to look into, and they say, well, this looks bad. So this can lead to power dynamics. And often it's not something where you can say, no, do these five steps.
Do it how I want. There's a lot more negotiation. One of the things we've talked to publishers about is having that decision tree: OK, if you have this signal, what would the user who's going to click actually do? One of the benefits of being small is that it's often a small group, so you can just do a call, whereas with a larger team, if you're talking about dozens or hundreds of people, that's when you often need workflows that really enforce the process.
With a smaller team, you can sometimes talk it through while evaluating tools. I think that is so important, because we've seen people have serious problems where they feel a tool is helpful, but one of the powerful people using it doesn't interpret it correctly or doesn't know how to work through that signal, and that's a complication.
Well, AI is a tool that maybe can help, but meanwhile we're stuck with a lot of tools that require a great deal of time, and time is the one thing none of you have. So this has become a big issue. I'll tell you, this is a bit off topic, but we did a project at Maverick with a client who was spending so much time with their research integrity system that they couldn't really fix the research integrity problems.
They asked us to do a dashboard, and we did a ScholarOne dashboard, and it turned out a lot of those things were not important to them. So they actually went through and said, well, what's important to me in this tool, and this tool, and this tool. You can rely on your vendors, obviously, to come up with those tools.
But in addition, you can tailor them to your particular needs so that they're not so labor intensive, looking for things you don't really care about. And I think the volunteer idea is stellar; that's a great concept. One of the things everybody's using but not using well is metadata. I had a client say to me, what's metadata?
I thought, oh, I'm in trouble. But we know metadata is key, and it's key to hosting capabilities. I'd like the panelists to talk a little bit about hosting and metadata; that is an important part of this survey. So yeah, it's pretty key to the hosting platform as we think about the flow of research: how the content is ingested, found later, used, retrieved, and sent on to other key pieces of infrastructure.
We invest a lot in making sure we have the right plumbing throughout the platform, and, same thing, we rely a lot on integrations with other vendors and on making it easy for other vendors to integrate with us. Because ultimately we are hosting the content, but really we are a conduit that sends the content in a lot of other directions.
As we think about how the content is found, this is where the integrity of the metadata and the tagging is absolutely critical, along with how we work with Google, Google Scholar, and other organizations. We see maintaining those relationships as a lot of our responsibility, so that you all don't have to worry about it. But it is a significant amount of work, and it's usually pretty invisible.
Yeah, I would totally agree. I have to say that ASCE is a member organization, so it's really important that our content is discoverable to the right audiences. Our journals program is made up mostly of academics who submit to and read our content, but we also have a very large practitioner focus within the civil engineering community. People go and visit our platform, which is called the ASCE Library, and they're searching for things, and hopefully they're finding what they need.
But it's our responsibility as the publisher to make sure that, from the very beginning of the stream, we're collecting all of the right metadata so that the end user has what they need. Do you feel confident you're doing that? I feel confident that we are on the way to being good at that. I think this is an area where small, medium, and large publishers actually have the same challenge, which is that you need this quality.
And maybe this is too rose-colored, but my sense is that if you look at the last 10 years, there's a nice ecosystem of technology vendors that allows even very, very small publishers, a single journal hosted at a department at a university, to get exceptionally high quality metadata. If you look at something like the Crossref report with the different metadata scores, I can't remember what it's called now, you can find publishers who have 100% across those checks with a one-person volunteer team; they can get really high quality metadata.
A lot of that is because in these systems you can set up required fields and validations, connect ORCID, things like that. So I think that barrier, compared to 10 years ago, is much, much lower for small publishers than it used to be.
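To illustrate the kind of requirement-and-validation setup being described, here is a minimal sketch; the field names and rules are hypothetical rather than any platform's real schema or Crossref's deposit format, but they show the basic idea of checking required fields, DOI shape, and ORCID format before metadata goes out the door:

# Illustrative only: lightweight required-field validation a small publisher
# might run before deposit. Fields and rules are hypothetical.
import re

ORCID_PATTERN = re.compile(r"^\d{4}-\d{4}-\d{4}-\d{3}[\dX]$")
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def validate_article(record: dict) -> list:
    """Return human-readable problems; an empty list means the record passes."""
    problems = []
    for required in ("title", "doi", "authors", "publication_date", "issn"):
        if not record.get(required):
            problems.append(f"Missing required field: {required}")
    if record.get("doi") and not DOI_PATTERN.match(record["doi"]):
        problems.append("DOI is not well formed")
    for author in record.get("authors", []):
        orcid = author.get("orcid")
        if orcid and not ORCID_PATTERN.match(orcid):
            problems.append(f"Malformed ORCID for {author.get('name', 'unknown author')}")
    return problems

if __name__ == "__main__":
    article = {
        "title": "An Example Article",
        "doi": "10.1234/example.2025.001",
        "authors": [{"name": "A. Author", "orcid": "0000-0002-1825-0097"}],
        "publication_date": "2025-05-29",
        # "issn" intentionally missing to show a failure
    }
    for issue in validate_article(article):
        print("METADATA ISSUE:", issue)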
Yeah, and there's so much to say about metadata. I'm one of the people who oversees our ONIX feed, and from the books perspective, anybody working in book publishing knows how important ONIX is. I'll try to bring that perspective to our journals team as well. For Penn Press, in terms of leveling up, we switched our title management system about four years ago, and before we had our current suite, much of our journals data was kept in spreadsheets.
What's so important is the way you organize your data and how you collect it, and having a tool that isn't just an Excel spreadsheet or a Google Doc or anything that sends you toward shadow systems of "this is how we track our vendors, this is how we track our suppliers over here." The great thing about our new title management system is that now our metadata is centralized.
I do know that data transmission for journals and for books is quite different, but I'm happy to speak to either one. I think metadata is so important that we have to constantly look at it, and it's important to get feedback on whether it's actually working and actually in the right place.
How are you using AI in research integrity or in any part of peer review? There are a lot of possible places: finding reviewers, researching papers, looking at research papers. How are you using AI, if at all, and where do you think it's most useful?
I'll go first. At ASCE we're actually specifically asking our reviewers not to put papers through AI. Just a couple of years ago we saw our first case: somebody disclosed to us that they had used AI to write a manuscript and tried to list ChatGPT as an author.
We were like, oh my gosh, what's going on? But the first thing we did was really to write policies, talk to other publishers about the use of AI, consult the relevant websites, and see if NISO had anything. I think using AI to help generate reviews is really different from having tools that help point you to the right reviewers. We did do an integration within Editorial Manager using the Scopus reviewer finder tool, and that has actually been very helpful for both our large journals and a lot of our more niche journals, where finding reviewers can be challenging, and recruiting folks outside of that first circle and expanding the reviewer pool can also be a challenge.
I think that's one of the helpful things about using a tool like the Scopus reviewer finder: it gets you out of that old boys' club mentality of, oh, I'm going to ask Joe, I know he does research on this, and instead you see who's published on that topic recently and expand your pool that way. We're having a lot of fun with AI at the moment. We have an AI team that has been doing a lot of experimentation, and we really see it as part of our value to experiment with the tools and uncover how far we can go with the technology, so that, again, you don't have to worry about it.
It's about giving you tools. We're involving publishers in how we develop these tools; we bring prototypes to them so they can work with them. The way we think about it is that a lot of research integrity checks already exist, and it's about leveraging AI to help with efficiency there, to help cut through the noise of all these flags, and to help link different flags together where we can to get a full picture of one manuscript.
But we also feel pretty strongly that we need to use AI responsibly and ethically. To your point on AI-generated content or AI-generated summaries: these are great on paper, but we hold a strong responsibility for making sure that any AI-generated content we put in front of a reviewer or an editor is sound and is not making things up, so that it continues to keep trust in the corpus.
So at the moment it's a lot of experimentation, and it's pretty fun. I'll add that we're trying to do a lot of education, because again, with a lot of smaller publishers, depending on the staff makeup and their volunteers, they might know their field but not academic publishing. Even when we say COPE and NISO, there's no guarantee they're going to know those.
So we're trying to translate: as you hear about experiments, who's trying this; when someone says you need to be transparent, what are examples? I find a lot of people take the defensive "don't do this" position. But realistically, one example Matt Hodgkinson shared at a recent conference was that there are journals or publishers with a lot of content where English is not the authors' native language.
What is the policy about using AI to clean up language? There are no novel ideas involved, there can be a big benefit, and there are paid services already doing this, so what's the difference between those services and the tool? Having those discussions is better than just adding an affirmation to the submission form saying, as an author I did not use this, because depending on the field that might be unrealistic, and it doesn't help educate authors. So we've been doing a lot with our publishers, trying to say, where can you add links, where can you add examples?
And on our side, things like blogs as well, because again, a lot of the smaller publisher population, another way to say it is that a lot of them aren't here. I'll just quickly add, similar to what you said, Michelle: I know on the books acquisition side we tell our authors not to use it. For peer review, journals doesn't have an explicit rule, but we do ask that if authors do use it, they must explicitly tell the editor they've been working with.
But that's about as far as we go; we don't use any tools to tell us whether AI was used. We've touched on some of the biggies, peer review and research integrity, but there are a lot of other areas in our publishing programs. Are there some things you could share with the group about techniques you've used, whether technology, vendors, or outsourcing, to help your workflow?
Parts of the peer review process or production process or references, the nitty gritty. I will say, starting at the very beginning of the process, what we've worked really hard to do is make it so that if someone wants to submit to an ASCE journal, the submission policies from journal to journal are exactly the same.
Creating efficiencies at the very beginning of the workflow was really important for us. We do use Wiley as a vendor for our editorial coordinator processing across our journals portfolio, and it's been really helpful to have super clear guidelines and expectations so that it's more plug and play. So it's not necessarily a technology; it's more an approach. Obviously, reducing turnaround time is really important to us, and that creates author satisfaction.
But it also keeps expenses a little lower for us when it comes to vendors, so that we can spend money on unanticipated needs, like when a vendor says, OK, we can do this for you, but it's going to be an additional development cost, which comes up all the time. And we want the bells and whistles; we know we can't have all of them, but we want at least some of them.
So that baseline of keeping things super standardized so we can make it plug and play has been really important for us. That's a really good point about standardization. From the vendor side, I think especially right now it's worth being in touch with your vendors to see what they've added. We just had an experience where someone wanted to do a call, and we reached out.
They had a very specific question, and it was confusing to us. It turns out they'd been using the platform for eight years and, people miss emails, that's fine, we all do, but at some point they just hadn't gone in to look at all the new checkboxes. And I'm seeing some people nodding. As a vendor,
sometimes you can solve a lot of problems because the functionality is already there, but people aren't using it. And again, there are so many emails and announcements that it's easy not to know. Workflows also change: a new editor comes on, something changes, and it turns out a piece of functionality would be perfect for what you're doing now.
But people don't connect the dots. So try, at least every two years, and especially right now with things changing so much, to sit down and say, here's what we do; is there anything we're wasting time on? Your vendors will probably be able to help out. I'll just add quickly, Emily, before you go.
We did something like that with one of our vendors a couple of years ago. We had essentially created this Frankenstein's monster with one of our systems, because we kept saying, well, we want this, we want this, put this on there. And we finally said, all right, let's strip it down. Can you tell us, are we weird? Are we using this system the way it should be used?
What are your other clients doing? And it was pretty helpful. Absolutely. I have several colleagues in the room, and I'm on a mission to make sure that our clients are using the features we release. We have a short release cycle, every three weeks on the platform side, and we do send a lot of emails, and they're comprehensive emails, but there is a lot of information.
I think it is hard. From your perspective, I'm sure it's just, oh, another long email about release notes, great. But it is hard to parse through and realize, actually, I could just turn this on. We're finding that sitting down with our customers and looking at everything that's in the release notes, in some cases going back two years, is just like you said. You're absolutely right.
It's just doing a health check: hey, are you sure you've adopted all the features we've been releasing and working on? This is a win-win on both sides. To answer your earlier question, and some of the things you mentioned as well, Michelle: accessibility has been top of mind for a lot of publishers this year, and really important because of the European Accessibility Act.
There's a lot of guidance we as vendors can provide to our clients, and a lot of work we can do behind the scenes. Once again, we're never going to be your legal advisor on this, but we can at least make sure that you have the tools and that, on the platform side, the content can meet the requirements coming out of the directives, whether in Europe or in the US, next year.
So a lot of that work can be helped. Another use case, further upstream, is finding reviewers, which is a piece of the process that's just really difficult. This is where I think technology and AI, to a degree, can help connect the dots. There are tools available now, but to go further we need to collaborate with publishers and our clients. Just on that: being a squeaky wheel, I think, is super important.
Most customers don't email; they have a problem and they keep it quiet. Now, some people here might be the squeaky wheels and can't imagine that, but you'd be amazed how many people don't. If we do a focus group call, suddenly we hear things; otherwise it's only the people who email all the time. It's a way of moving the vendor toward what you want, so that's something I always try to say: be that person.
Someone here is feeling very validated by that. But yeah, being that person, I think, is great. So we're talking a lot about technology, but communication is also key to technology, and you have to keep it up on an ongoing basis. I think that's a really important process. And as vendors, we also have a role in bringing you all together and comparing notes.
Now, there are some conversations, we had the antitrust statement, I get it, there are some conversations we don't want to have. But it's not that; it's thinking through issues together. It's, oh, what are you doing about the Google AI Overviews that have suddenly come up? How do you manage the fact that some of your traffic is being diverted, or that some of your content is now surfaced on the front page of Google and people don't have to go to your site anymore?
How? Just let's talk about it, let's compare notes. I think that's a really good point, that technology does not stand alone. Yeah, I was just going to add that your squeaky wheel comment is a good one, because in the university press community we're very tight knit, but a lot of the time we might be late adopters of these kinds of technologies just because we don't really know about them, or we don't hear about them until another peer press tells us about them.
So yeah, that's definitely a trend I've seen. I'm going to open this up to questions. Do you have any questions for the group? They have a lot of expertise, so take advantage of it, and I'm sure they'd be happy to answer any questions. All right.
Let's have it. So I wonder if you want to revisit your comment about metadata, because with AI, you can put an unstructured document into ChatGPT, for example, and say, tell me who the authors are, and it will give you the answers; tell me what the keywords are, and it will give you the keywords; and you can even ask it to produce JATS XML, and it'll do that.
So are we spending a lot of money creating the structured data, which I think we need, and does the cost-benefit discussion regarding that data have to be revisited? It's a good question. Do you want to take that? It's a very good question. In the context of experimenting, yes; in the context of uprooting all of our workflows right now on the platform, we're not ready.
There's just too much that depends on what we have; it's such an underpinning of so many processes that we want to be super careful now. But the concept of enriching what is already there, of leveraging AI to get to it faster? Absolutely. The way we're thinking about it now, though, is experimentation. Are you bolder on this one?
Maybe. One of our products is production, so we produce XML and PDF, and we've done a lot of experimentation around this. I think the number one guiding principle in our industry, and I kind of wish it weren't true, because other industries have it easier, is that accuracy is so important.
This is the record. Where I agree with you is that there's an opportunity for this to be way faster. The things we see, to your point, are that you can prepopulate fields. Think back five or ten years to the laborious "I have to enter this in again." We can move past that.
But you still want someone to verify it. What I think is also really important, and the one everyone knows, is references and citations: are those real or not? We've done this; GPT can pull those out. But for XML you ideally want them parsed and structured correctly, and people who have done this know the questions under the hood: is that a book, or is that a journal,
or is that a government document? For linkability, you do want some verification. But this stuff is getting way better; the models a year ago were much worse at this. So I'm pretty bullish. I think manual tagging should already be on its way out, or at least it's heading there, and what you're going to have is a set of tools that are double-checking everything.
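As a sketch of what that double-checking can look like in practice, here is a minimal example, my own illustration rather than anyone's production pipeline: it takes an extracted citation string and asks the public Crossref REST API whether a confident bibliographic match exists before the link is trusted. The sample reference is made up, and a real pipeline would also handle rate limits, retries, and fuzzier matching:

# Illustrative only: verify an extracted citation against the Crossref REST API.
import requests

CROSSREF_API = "https://api.crossref.org/works"

def verify_citation(citation_text: str, min_score: float = 60.0):
    """Query Crossref for the best bibliographic match; return (doi, title, score) or None."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": citation_text, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return None
    best = items[0]
    score = best.get("score", 0.0)
    if score < min_score:  # weak match: route to a human instead of auto-linking
        return None
    title = (best.get("title") or [""])[0]
    return best.get("DOI"), title, score

if __name__ == "__main__":
    ref = "Smith J. A hypothetical study of reinforced concrete beams. J Struct Eng. 2019."
    match = verify_citation(ref)
    print("Matched:", match if match else "no confident match; send to manual review")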
So I think it's going to get way more efficient. I 100% agree with that. I've talked to people who've done experiments where they say, well, let's just accept the output, and it gets enough wrong. Even if you said it's only 1%, in this industry that's an important 1%, and it leads to what was talked about in other sessions: mistrust.
If we have something wrong, and especially if you look at the way articles and preprints get cited as if they were research to justify political points, when you have errors in the record, our industry has a responsibility and a duty to do this well. So while we can get a lot out of it, to me the trick right now is how you verify things, either with code or in a way that isn't terrible for humans, because a lot of the way we do verification right now is laborious and it's not fun.
I think there are things we can do to gamify it, or to say, if four models all think this is right, let's trust it. But trusting one model, I'm not there yet. So I think I'm a little more bullish. And I would say that in terms of the metadata question, it depends on what kind of metadata you're trying to generate. If we're talking about alt text or keywords, there's definitely room for some efficiency.
But if we're talking about descriptions or marketing copy sales points, that's something we've been testing a little in our system. We'll run it by marketing, and marketing will look at it and just say, I don't know about this. So that's where, as you said, Emily, I don't think the technology is quite there; if you're looking at the marketing side, you want to make sure you're getting the right voice in your copy.
What really goes in the back cover copy. So those are my thoughts about the kinds of metadata and using AI for them. I just want to note real quick that I'm also bullish because, for smaller publishers, again, in that cottage industry model, they're more likely to have eyes on it. Publishers that might be sending 10,000 articles through something have, I think, more risk of things slipping through.
That might also be one of the reasons I'm bullish: right now people are typing in or looking at all these things, so making that easier for them is a win. When you're batch processing a million articles, that's when I get very nervous. It is one of the few areas where being a small publisher is actually an advantage: the risk is lower and there's a whole lot less you need to monitor.
One of the few areas. So the answer is not yet. Yes, check back next year and we'll all have a different answer, guaranteed. Anybody else have any questions? Any insights you want from the panelists while we have them?
Thank you very much. Let's give a hand to our guests. Oh, we have a question. I was just wondering: you all said small publishers have limited resources, so what would you prioritize? Sometimes we're in the deep end and want to go after something shiny. That's what I'm trying to figure out: what would be one recommendation for how to prioritize? I think what I'm most interested in is creating efficiencies, so that I can free up staff time for higher-level tasks and strategic priorities rather than chasing our tail. Yes, chasing your tail. I would say so much depends on your mission.
Do you have members? Are they absolutely key to your activities? Are you working with heart surgeons, or mostly volunteers? Heart surgeons' time is better spent doing surgery than dealing with a peer review form. I would say, look at your mission, pick the projects that are most aligned to the core of what you should be doing and what you do really well, and leave the headaches, the technical headaches, to us.
And I would say prioritize community building, because with a small press you don't have a team for research and development, you don't have a team for innovation. So you really need to come to conferences like SSP to have these kinds of discussions and get these kinds of insights. Maybe there are tools that can help us do the kinds of data analytics we can't do in house.
So yeah, prioritize meeting people and having discussions with vendors you haven't really met yet, like these folks. My short, pithy answer is money and time; to elaborate, it depends on what your organization's priority is right now. A lot of people are talking about cutting costs, so talk to vendors, shop around, or look at whether there's a costly part of your workflow.
And the other one, with time: ask the staff whose time you want to save what takes the most of it. We see, again, that that's a wide range of things. Sometimes it's answering emails from your editor, and that's a difficult one. But then go shopping and see: is it training, is there something we can do?
I just say that because sometimes people are thinking, well, should I focus on paper mills or should I focus on image manipulation, and which is the bigger business risk. I think often for smaller publishers it's really, first we need more time or money, to then be able to attack other things. So that's where my mind always goes: time and money. That's all.
Any final words? Then thank you to my panelists. They did a great job.