Name:
2024 Previews Session: New and Noteworthy Product Presentations
Description:
2024 Previews Session: New and Noteworthy Product Presentations
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/978dc1b1-2e90-4c3d-af94-3aa6f134e1ac/videoscrubberimages/Scrubber_1.jpg
Duration:
T01H05M10S
Embed URL:
https://stream.cadmore.media/player/978dc1b1-2e90-4c3d-af94-3aa6f134e1ac
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/978dc1b1-2e90-4c3d-af94-3aa6f134e1ac/previews_session___new_and_noteworthy_product_presentations .mp4?sv=2019-02-02&sr=c&sig=HKregiwTNJ4MspTq3MF3mKlxIQncjmg1DwmrsI4gXNQ%3D&st=2025-04-29T19%3A19%3A31Z&se=2025-04-29T21%3A24%3A31Z&sp=r
Upload Date:
2024-12-03T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Good morning. Welcome to this morning's plenary session. Because it is the preview session, I am choosing to take this time to preview a little bit of our new SSP and Scholarly Kitchen merchandise. This is the last day on which you can purchase this amazing merchandise, including this lovely mug. So if you haven't been to the membership booth in the exhibit hall, please visit today.
Help support the Generations Fund: 100% of the proceeds of this merchandise, and there are other things as well, go to support the Generations Fund, which is at 82% of its goal. So we are very close, and we could use all of your support. And I know that all of you want to sport some of this lovely swag proudly proclaiming your role in the scholarly community.
There's also the SSP originals auction, which is going to close, I believe, around noon today. So if you haven't had a chance to see some of the wonderful creations your colleagues have made, and bid on them, again to support the Generations Fund, please do so. Your last opportunity will be today. So get in there, check it out, and definitely say hello to the membership folks in the booth.
I also want to give a shout-out to our virtual attendees; again, it's great to have them with us online. And all of our sessions, of course, are being recorded, so please go back and take a look at things that you might have missed. Voting for the preview session is going to take place in the Whova app, so if you haven't downloaded that yet, please do so now while we do introductions so that you're ready.
And I think at this point I'm going to introduce our moderators for this session, who are Marianne Calilhanna and Greg Fagan. This mic's on? You can hear me OK? Good morning. Happy Friday, everybody. It's great to see some familiar faces here today.
This is quite a turnout. It's like Oprah giving away cars or something, what's going on here? For those of you who don't know me yet, my name is Greg Fagan and I'm senior director of business development at Aptara. I have the great honor of jointly emceeing today's preview session with my colleague Marianne Calilhanna, who's the VP of marketing at Data Conversion Laboratory.
What an incredible two days it's been so far. We kicked off the show on Wednesday with some engaging industry breakout sessions on a wide variety of topics and a thought-provoking keynote by Deborah Blum on publishers in the age of mistrust. Yesterday we had a great opening plenary with a moderated discussion on the rise of the machines, many informative educational sessions that aligned with our theme of Inflection Point: Setting the Course for Scholarly Communication, and some awesome networking opportunities at breaks, breakfasts, lunches, and receptions.
And I hope that all of you have had a chance to catch up with old friends and colleagues and make some new connections along the way. Today, I'm looking forward to more educational sessions, the Get Involved fair, lunch, the poster session, and the closing plenary session with an Oxford-style debate titled Has the Open Access Movement Failed?
I hope a lot of you get to stick around for that; it promises to be a stimulating debate. But before we get to those sessions, I hope you're sufficiently caffeinated and alert, because we're off to a dynamic start this morning with a series of product presentations where we'll learn more about some of the industry's most innovative new and noteworthy products, platforms, and content, I should say.
Each speaker will have five minutes, and we really mean that, to show us their stuff. After that, we'll all get a chance to vote on the best innovation at the end of the session. Now, having had a sneak peek myself, I can tell you that it's going to be a tough decision. We have some terrific innovations and speakers for you, so get ready for some awesome presentations.
Now I have to read from my phone, so I need to take my glasses off. As a reminder, this session is being live streamed for our virtual attendees. For those of you joining us remotely, please be sure to vote at the end of the session and chat any questions to the presenters in the Whova app so they can follow up later.
With that, I'll pass it off to Marianne to introduce our first speaker. Marianne, over to you. Good morning, everyone. The slides are not advancing. Here we go. OK, I just love the previews so much.
And by the look of this room, so do you. So I'm thrilled to introduce these great presenters. First up, we have Travis Hughley, business development manager with Aries Systems. Travis will speak to us about finding peer reviewers. Welcome, Travis. So when Amazon launched a pilot store in Austin, which is where I live, they were touting a technology called Just Walk Out.
There was lots of buzz about it. The concept was grab and go using AI, with no interaction with any cashier. Hundreds of sensors and cameras were in the room; they would take care of everything, and it felt magical. Recently, a couple of months ago, Amazon re-evaluated the project, because the magic behind it was 1,000 workers in India.
And so it made me really think, as it relates to advanced technology, and especially in this case generative AI, that where it works and where it feels magical really does require a human element. Whether it's designing inputs for an LLM like ChatGPT, independently verifying results because that's what the disclaimer on the app told you to do, or ensuring the company's bottom line is not impacted,
the human element, as Amazon found, is very real. So as it relates to this session, I'm going to focus not just on the human element, but on what we are doing to give the human editors, the experts, the technology tools they need to bring back confidence in the reviewer selection process. I'm sure you're hearing what I've been hearing: the traditional ways of sourcing reviewers are simply not working, or they're working, but not as they should be.
Reviewers are getting burned out and being misaligned with their experience. We know this. Editors lack the suite of tools needed to feel confident about search results. We know this. There's a lack of diversity and representation in the reviewer pool, and many editors need help mitigating unconscious bias and help to avoid overlooking potential conflicts of interest.
So here at Aries we've heard that loud and clear. We recognize there needs to be a better way. We have a long history of partnering with many third-party technology providers, so today we're simply looking a bit closer at our newest integration with Scopus, which combines Find Reviewers using Scopus with Editorial Manager. The feedback we've already gotten from customers supports that this integration is really about in-depth insights and customized searches versus static lists and limited researcher or profile data.
It's about creating a more global and diverse representation within the researcher pool, both geographically and by discipline, versus simply widening an editor's network. And it's about building in safeguards to support reviewer integrity. So as we start, this is me logged in to Editorial Manager. You'll see right off the bat that Scopus is the search source.
As the reviewer selection process begins, editors are then automatically taken to the Scopus interface, and this is really where we're leveraging the metadata from Editorial Manager. This page is what we call a stateless environment, so there's no data being saved or stored, but you can see it will pull in classifications and keywords. It's also auto-extracting co-author information and author information, all of which can be leveraged in a search.
Editors at this point can decide to use these keywords and classifications, or they can choose not to use them and add their own. But this is the initial search. Once the search results come up, there could be up to 100 suggested reviewers. From here, they can take the search to the next level: they can filter by h-index, they can look at some of the subject area categories, and they can make some other parameter changes.
So we're really giving editors the tools to take the search to the next level. On the right, there's a reviewer profile. This allows editors to look at the qualifications and the expertise: is this reviewer a suitable fit? And again, as you scroll down, this is what the profile looks like, on the right in red.
Advanced conflict-of-interest capabilities are also built into this tool. As you'll see here, it's flagging certain conflicts of interest, maybe publishing with the same author, or publishing together within the last five years. You can add conflicts of interest to a separate list if you want to.
If you're choosing someone who's on a conflict-of-interest list, they can still be added, but it will give you a prompt and ask whether you're sure you want to do that. At the end of the day, you close the session, the reviewer information is pulled back into Editorial Manager, the session with Scopus closes, and that's it.
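For readers who want a more concrete picture of the kind of filtering and conflict-of-interest flagging described above, here is a minimal, hypothetical sketch in Python. The record fields, thresholds, and scoring are illustrative assumptions only; they are not the actual Editorial Manager or Scopus integration.

```python
# Hypothetical sketch of reviewer filtering and conflict-of-interest flagging.
# Field names and thresholds are assumptions for illustration, not the real
# Editorial Manager / Scopus data model.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    h_index: int
    subject_areas: set
    coauthors: set = field(default_factory=set)

def suggest_reviewers(candidates, manuscript_authors, keywords,
                      min_h_index=10, limit=100):
    """Rank candidates by keyword overlap, filter by h-index, flag conflicts."""
    ranked = []
    for c in candidates:
        overlap = len(c.subject_areas & keywords)
        if c.h_index < min_h_index or overlap == 0:
            continue
        # Flag (rather than silently drop) candidates who have co-authored
        # with one of the manuscript's authors.
        has_conflict = bool(c.coauthors & manuscript_authors)
        ranked.append((overlap, c.name, has_conflict))
    ranked.sort(reverse=True)
    return [(name, conflict) for _, name, conflict in ranked[:limit]]

if __name__ == "__main__":
    pool = [
        Candidate("R. Garcia", 22, {"bibliometrics", "peer review"}, {"A. Author"}),
        Candidate("S. Chen", 15, {"bibliometrics"}),
    ]
    print(suggest_reviewers(pool, {"A. Author"}, {"bibliometrics"}))
```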
Thank you. Next, I am pleased to introduce Stephanie Orphan. Stephanie is program director with arXiv, and she's going to present on arXiv's HTML papers project and accessibility in research. Welcome, Stephanie. Hit that big one. OK, thank you.
Hello, SSP. So arXiv has a long-standing commitment to openness. There are no fees associated with submitting to arXiv and getting your paper on the platform, and content is openly and freely available to read around the world. Most importantly, we provide a way for researchers to get their work out quickly and into the hands of other researchers.
With this spirit of openness in mind, arXiv has been putting a lot of thought and effort into accessible research papers over the last couple of years, because research isn't truly open without them. This all started back in 2022. Our UX manager, Shamsi Brinn, was conducting research with some of our users who have a print disability. The research was focused at the time on the accessibility of the arXiv website, and what she was universally experiencing was that the researchers were changing the conversation
and basically telling her: your website is not the problem. The problem we face is accessing the research itself. So as you can see here, these are some of the results of our research. Those with a print disability reported having a negative experience with everything from discovering and reading research to preparing and submitting their documents. Shamsi, who I mentioned, is, I want to say, the champion within arXiv of all of our accessibility work,
and she's been leading this project for us. To give you a sense of the scope of the problem: research from the Allen Institute found that only 2.4% of papers are fully accessible, and research that arXiv has conducted found that users of assistive technology report that only 38% of the content they need to access is available to them without additional help. In addition to that, over a quarter of the world's population has been diagnosed with a vision impairment,
and 20% of people in the US alone have dyslexia. These and other factors combine to push people out of STEM, and possibly other fields, due to the high barriers to accessing the tools and information that they need. Through our research, we've heard loud and clear from scientists with disabilities that we should provide HTML.
And so arXiv has stepped up, and we're answering the call. This is what we did. We worked on developing a solution that presents HTML and PDF as co-equal formats. Now, when you submit your paper to arXiv, an HTML version is generated at about the same time as the PDF version, and both of those versions are available at the same time on the site. We accomplished this by collaborating with the LaTeXML team at NIST.
We use their LaTeX-to-HTML converter, and this is a rare instance of moving something from arXivLabs, which is a way to integrate with other services where the action occurs outside of arXiv, into the platform proper. We did that because of the importance of this work. (A rough sketch of what that conversion step can look like appears below.) We rolled out an experimental mode on December 1, 2023, and it's going to remain in experimental mode for a while. What we heard was: don't let perfect be the enemy of the good.
Any accessible content is better than having nothing available. So we're still in listening mode, and we will be iterating on this over time. This is just an example of how the page displays, but I don't want to use my time going over it right now. So what can you do if you want to make research more accessible? If you're a tech shop, share best practices with authors and colleagues.
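As a rough, hedged illustration of the LaTeX-to-HTML conversion step mentioned above, here is a minimal Python sketch that shells out to the LaTeXML toolchain. The exact command and flags are assumptions for illustration; this is not arXiv's production pipeline, and the options supported by your installed LaTeXML version may differ.

```python
# Minimal sketch of converting LaTeX source to HTML with LaTeXML.
# The command name and flags are assumptions; consult the LaTeXML
# documentation for the options your installation actually supports.
import subprocess
from pathlib import Path

def latex_to_html(tex_path: str, out_dir: str = "html_out") -> Path:
    """Produce an HTML rendition of a LaTeX source file alongside the usual PDF build."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    html_file = out / (Path(tex_path).stem + ".html")
    # latexmlc wraps the latexml (TeX -> XML) and latexmlpost (XML -> HTML) steps.
    subprocess.run(
        ["latexmlc", f"--destination={html_file}", tex_path],
        check=True,
    )
    return html_file

if __name__ == "__main__":
    print(latex_to_html("paper.tex"))
```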
I have a link on my final slide. Encourage the conferences and journals you work with to push for accessibility, and please come to arXiv's accessibility forum in September. It's a great way to hear from researchers who experience the barriers and to work together to think about solutions. So I'll just say thank you and leave you with some links to access
and find information about our work. Thanks. Thanks, Stephanie. OK, now let's welcome Chris Maverick, senior product manager. I'm sorry, it's not Chris. It's Hong Zhou. Hong is director of the Intelligent Services Group and AI research and development with Atypon, and Hong will present on author name disambiguation.
The big button. Good morning, everyone. As we all know, the scholarly publishing industry is experiencing a shift from a journal-centric to an author- and researcher-centric approach. So understanding the researchers, our authors, is becoming more and more important. Have you seen this?
In order to understand these authors and researchers, it's important to know who they are, where they have worked, and what they have published before. It sounds like an easy task; actually, it's a really hard problem to solve. I think many of us have had these experiences: different researchers may have the same name, the same researcher may have different names, and the same applies to institution and affiliation information.
Basically, I personally keep receiving requests asking me to verify whether this is my work, my paper, my publication, et cetera. Quite often I wish I really had published that many papers. So we provide an innovative, AI-powered solution to this. It is not an individual tool that just disambiguates authors and affiliations; we offer an end-to-end solution that accepts PDF, Word, or metadata as input, cleans and disambiguates the author and institution profiles, and links them to the ORCID, ROR, and Ringgold public databases.
Then we further enrich these profiles beyond the disambiguation. We enrich all this unified information with the publication history and the affiliation history, based on our in-house publication knowledge graph, which contains metadata for more than 240 million publications. And finally, we verify this.
We identify any potential problems, such as fake authors, any retraction history, or any unusual publishing behavior, which is very important in today's integrity detection. We also apply this to clean our customers' databases. You can see real examples here: one publisher had four duplicate records for the same researcher, and there were three duplicate records on Semantic Scholar.
After running our service, all these duplicated records are merged into one, and we achieve around 90% accuracy on the generated profiles. Previously, you would have to wait two or three weeks to see the results and verify them.
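As a toy illustration of the kind of record merging described here (an assumption for clarity, not Atypon's actual disambiguation pipeline), duplicate author records keyed on a normalized name and affiliation could be merged like this:

```python
# Toy sketch of merging duplicate author records. Illustrative only,
# not the actual Atypon disambiguation service.
from collections import defaultdict
import unicodedata

def normalize(text: str) -> str:
    """Lowercase, strip accents and punctuation, and collapse whitespace."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    cleaned = "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

def merge_author_records(records):
    """Group records on (normalized name, normalized affiliation) and merge each group."""
    groups = defaultdict(list)
    for rec in records:
        groups[(normalize(rec["name"]), normalize(rec["affiliation"]))].append(rec)
    merged = []
    for recs in groups.values():
        merged.append({
            "name": recs[0]["name"],
            "affiliation": recs[0]["affiliation"],
            # Union of all papers attributed to the duplicates.
            "papers": sorted({p for r in recs for p in r["papers"]}),
        })
    return merged

if __name__ == "__main__":
    dupes = [
        {"name": "Hong Zhou", "affiliation": "Atypon", "papers": ["A", "B"]},
        {"name": "hong  zhou", "affiliation": "Atypon ", "papers": ["B", "C"]},
    ]
    print(merge_author_records(dupes))  # one merged record with papers A, B, C
```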
But with the new solution, because everything is integrated on the platform, we can offer an immediate, near real-time, simple, and secure solution. You don't have to wait: the author can immediately see the result and verify it, and you don't need to pass all the information to a third party. We also offer many AI-powered, innovative applications based on all these verified profiles.
Let me show you some real examples of this. For example, auto-generated author profile pages, so the data doesn't have to stay hidden in the database and people can search, browse, and view all of it. We can help the customer target audiences for community engagement or marketing purposes based on interests and expertise. Given a manuscript, we can recommend relevant reviewers based on these enriched profiles, and also identify potential conflicts of interest.
Finally, integrity: we can also help our customers identify any potential integrity issues for these authors. This is a very strong signal for paper mill detection. In fact, this solution has already been integrated as part of the Wiley paper mill detection service, which was released at the London Book Fair two months ago. If you have any questions or need more information,
you're welcome to visit the Wiley booth, and you know where it is. Thank you. Thank you, Hong. OK, please welcome Jamie Carmichael. Jamie is senior director of information and content solutions at Copyright Clearance Center, and Jamie is going to speak with us about OA Intelligence.
OK, Glenn, thank you. I'm going to reset the clock; you reset the clock, if you don't mind. OK, I'm just going to go for it. So thank you for having me today. It's great to be part of this lineup. At CCC, as I think most of you know, we are all about licensing, and it's also true that we're about technology and data solutions that help remove friction in the market, particularly around the transition to open access.
So today I want to introduce you to OA Intelligence, the newest member of our scholarly publications product family. But first, a little bit about the problem we solve. Publishers sit on vast amounts of data and are using it to inform their own business models, whether that's Subscribe to Open deals, transitional agreements, et cetera. And this is really fantastic. And you knew there was a but coming.
The data quality is generally not high enough to build reliable models without tremendous time and effort to cleanse it, and in particular to disambiguate author affiliation, which then slows your financial modeling, which then slows your actual deals and renewal cycles. This image makes me laugh because it looks like our actual product manager, Shannon, running away from all of us.
I don't know if she's in the room, but I hope she sees the humor in that, too. So sales and analytics teams are doing the right things, but not in a very sustainable way. We've talked to publishers who spend hours and hours combing through thousands of rows of data in spreadsheets, or who wait weeks for their data analytics colleagues to turn around reports. Manual data manipulation creates costly overhead that slows your time to market, frustrates your customers, and undoubtedly has gaps that leave money on the table.
OA Intelligence is a simple solution to a very complex problem. We automate your disambiguation, modeling, and analysis practices in three steps, so that you can meet your customer needs at scale. Step one: get your historical publication data into the tool, and we have a number of ways to help you do that. We then disambiguate the author affiliations at the article level to help you get a true picture of who publishes with you across open and subscribed content.
Step two: build your models and test what-if scenarios with a really flexible set of parameters that work for both individual institutions and consortia with multiple members. And step three: analyze the trends for opportunities to meet your strategic goals, deliver on customer expectations, or adjust your strategy where you might need to.
At the heart of the value this brings to you and your sales teams is our AI affiliation-matching process. We leverage the institutional relationships in the Ringgold database to connect historical article publications to academic institutions, consortia, and research funders across the globe. In a test of 500,000 manuscripts across publishers who use our open access workflow platform, RightsLink,
we matched 95% to Ringgold IDs with a high degree of confidence. (A toy sketch of what affiliation-to-identifier matching can look like appears below.) So what does this mean for you? Significant time saved, headaches saved, costs reduced. Better insight into your data, including projecting the impact of changes to research funding policies under OSTP in just a few minutes. And an accurate data set you can actually export to share directly with your customers for trust and transparency.
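To make the affiliation-matching idea more concrete, here is a minimal, hypothetical sketch using simple string similarity. The registry, placeholder identifiers, threshold, and scoring are illustrative assumptions and are not CCC's actual matching process.

```python
# Hypothetical sketch of matching free-text affiliations to organization IDs.
# The registry, placeholder IDs, and threshold are assumptions for illustration;
# this is not CCC's actual affiliation-matching process.
from difflib import SequenceMatcher

# Toy registry of canonical organization names mapped to placeholder identifiers.
REGISTRY = {
    "University of Oxford": "ringgold-id-placeholder-1",
    "Massachusetts Institute of Technology": "ringgold-id-placeholder-2",
}

def match_affiliation(raw: str, threshold: float = 0.8):
    """Return (org_id, score) for the best match, or (None, score) below threshold."""
    best_id, best_score = None, 0.0
    for canonical, org_id in REGISTRY.items():
        score = SequenceMatcher(None, raw.lower(), canonical.lower()).ratio()
        if score > best_score:
            best_id, best_score = org_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

if __name__ == "__main__":
    print(match_affiliation("The University of Oxford"))   # confident match
    print(match_affiliation("Oxford Brookes University"))  # below threshold
```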
No, you do not need to be a RightsLink customer to use this. Yes, it would be easier if you were, because we'd onboard you with all the data that we already have. I have barely scratched the surface here, but please scan this code and come find us at the booth in the exhibit hall. We would love to talk, to show you what we're working on, and, yeah, see how we might be able to solve your problems. So thank you very much.
Thank you. John and our tech team, can we make sure that the clock gets reset? OK, thank you. All right. Next is Romy Beard, head of publisher relations with ChronosHub, and she's going to discuss ChronosHub's new author interface.
Welcome. The big button. Thank you. So ChronosHub is a platform for institutions, funders, and publishers, and our goal is to make the publishing process easier for authors. As you can see on the slide up just now, these are the modules that we have available for publishers, which some of you might be familiar with.
The other day I was driving to a friend's house and I didn't have my iPhone charging cable, because I have a teenage son. So I was driving without my navigation on, and I got stuck in a traffic jam because there was a road closure that I didn't know about. If I'd had navigation on, I could have easily avoided it and taken a different route, or at least I would have known how long it was going to take me to get home.
And this made me think about how frustrating the process is for authors. A lot of the time, they don't always make the best decisions about where to submit their manuscript in the first place, and often during the process they don't know exactly what's going on. There's not enough communication to authors, especially co-authors, who are left at the mercy of corresponding authors to share information with them.
What we launched this week is going to change that. We really want to make the publishing process easier for authors and provide them with better information, so they can make more informed and better decisions for their manuscript and the whole process becomes more transparent for them. So we launched a number of things this week that I'll take you through really quickly. We have a really simple, secure login.
That's one login for all of our publishers' journal portfolios. Authors and corresponding authors can create profiles, and this has a number of features, including instant validation against known identifiers; there's a common theme here today. This is super important because it means that we have clean data at the beginning. We check against known identifiers such as ROR and Ringgold and also match those against the email addresses that the authors use.
And again, this is for authors as well as co-authors. We have a new journal overview, which includes additional metrics that you want to display to your authors, and this makes it really easy as well, because authors can easily compare one journal against another. Maybe one journal has a high impact factor, whereas another journal gives you a quicker first decision. So it's really simple for authors to make better decisions about where to submit and find the best home for their research.
Submission itself is super easy with our submission interface. You upload your manuscript, we extract the metadata, and again, we do instant validation against known identifiers. That goes for institutional affiliations and ORCID iDs, but we also pull out the funder statement and match it against the journal policy. If there's any mismatch there, we alert the author immediately so they know if there's going to be an issue, and their manuscript is less likely to be stuck in a traffic jam later on. (A rough sketch of the kind of identifier check involved appears below.)
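As a hedged illustration of validating a free-text affiliation against a public identifier registry (an assumption for clarity, not ChronosHub's actual implementation), a lookup against ROR's affiliation-matching endpoint might look roughly like this; check the current ROR API documentation before relying on the endpoint or field names.

```python
# Minimal sketch of checking a free-text affiliation against the public ROR
# registry. The endpoint and response fields reflect ROR's v1 affiliation
# matching as commonly documented; treat them as assumptions and verify
# against the current ROR API documentation.
import requests

def lookup_ror(affiliation: str):
    """Return (ror_id, name) for ROR's 'chosen' match, or None if no confident match."""
    resp = requests.get(
        "https://api.ror.org/organizations",
        params={"affiliation": affiliation},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json().get("items", []):
        if item.get("chosen"):
            org = item["organization"]
            return org["id"], org["name"]
    return None

if __name__ == "__main__":
    print(lookup_ror("Department of Chemistry, University of Copenhagen"))
```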
At the heart of the update is our new author dashboard, and this is where everything comes together. This is the one place from which all your interactions with your authors can be accessed. On top of that, each publisher has their own instance, so what you want to display to your authors is configurable: that includes your branding, your colors, only your journals, and you can also link up any underlying systems that you have.
That might be things you use for sub-modules like APC processing or signing the license, but you can also pull in information from other underlying systems, such as peer review, to provide more information on production-related activities. So everything comes together in one place. And this is not everything; this is only the beginning. We're expanding this further.
We are continuously developing it. One of the next things we're looking at is including other roles, such as reviewers, editors, and ultimately publisher admins as well, so that everything is in one place. The platform is also accessible and responsive, which means that authors can access it even when they're on the go, even when they're in the car, though obviously not while they're driving themselves.
It's a real, live platform, so please come and see it at the booth. The first publisher to go live with us is the American Chemical Society, launching in August, but it's also available to other publishers. So please come and see me at booth 116. I'm also in another session this afternoon. I'd love to talk to you about how you can make life easier for your authors.
Thank you. OK, please welcome Adam Day with Clear Skies, who will discuss the Papermill Alarm. Good morning, everyone. We're going to talk about the Papermill Alarm, but first I'm going to talk about this big red line.
What is this big red line? Red is bad, isn't it? Well, imagine that at one end of this red line we have one person, and at the other end we have another person. What the line tells us is that these two people have co-authored the same paper. The red tells us that this paper was from a paper mill, and this is what a paper mill looks like.
Now, what I really like about this image is that where the lines meet, those are people, and the more lines that meet, the naughtier that person is, so the brighter they appear in this visualization. What we have here is a group of people who are working together in a network to do harm to this industry, to this community. I don't really want to spend a lot of time talking about them. (A toy sketch of how such a co-authorship network can be assembled appears below.)
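For readers curious how a co-authorship network like the one described above might be assembled, and how heavily connected authors could be surfaced, here is a toy sketch using networkx. The flagging logic and thresholds are illustrative assumptions only; this is not Clear Skies' actual detection method.

```python
# Toy sketch of a co-authorship network with flagged ("red") links.
# Illustrative assumptions only; not Clear Skies' actual Papermill Alarm.
import itertools
import networkx as nx

def coauthorship_graph(papers):
    """papers: list of dicts with an 'authors' list and a 'flagged' boolean."""
    g = nx.Graph()
    for paper in papers:
        for a, b in itertools.combinations(paper["authors"], 2):
            # An edge stays flagged if any paper linking the pair was flagged.
            already = g.has_edge(a, b) and g[a][b].get("flagged", False)
            g.add_edge(a, b, flagged=paper["flagged"] or already)
    return g

def heavily_flagged_authors(g, min_flagged_links=2):
    """Authors with many flagged co-authorship links (the 'brighter' nodes)."""
    counts = {}
    for a, b, data in g.edges(data=True):
        if data["flagged"]:
            counts[a] = counts.get(a, 0) + 1
            counts[b] = counts.get(b, 0) + 1
    return [author for author, n in counts.items() if n >= min_flagged_links]

if __name__ == "__main__":
    demo = [
        {"authors": ["A", "B", "C"], "flagged": True},
        {"authors": ["A", "D"], "flagged": True},
        {"authors": ["E", "F"], "flagged": False},
    ]
    print(heavily_flagged_authors(coauthorship_graph(demo)))  # ['A', 'B', 'C']
```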
I want to talk about something else, something that's much better. I am delighted to be working with Adrian Stanley, who many of you will know as a past president of the SSP. When we come to conferences, we often talk about coming to network and to build networks. So what we're doing here is building a network that is the opposite of the one that we just saw.
We're building a network based on trust and integrity and honesty, one that will protect this industry from harm and protect science. I hope we get a chance to talk to you today, but if we don't get the chance, there's an email address at the foot of the slide, which is covered up by the text, but hopefully you'll be able to reach out to us. What is the Papermill Alarm?
The Papermill Alarm is the first commercial service dedicated to paper mill detection. We use artificial intelligence, large language models, and network analysis to find signs of paper milling. The Papermill Alarm is available via direct agreement with Clear Skies; however, you can also use a public version of the tool through our friends at the STM Integrity Hub or SCA. You may have also heard about our collaboration with Paradox, which was announced yesterday.
As for our results, we have a very nice one that came out just recently. We looked at the last 12 months of retractions by Hindawi, and we find a signal on 98.9% of them, so basically all of them. We've got a few other things that we can look at here. On the latest Chinese Academy of Sciences Early Warning List, we find a signal on all of the journals on that list. We also looked at the latest list of Web of Science delistings.
We actually don't find a signal on every one of these journals. I'm kidding, we do. We find a signal on every single one of them, with the exception of a few journals, which I think were delisted for reasons other than paper milling. But otherwise, we find them all. An important point here is that, like I said at the start, red is bad.
So where it's red is where we're finding a signal. Almost every journal in the world, if you look at it on a graph like this, is green. Almost everything is green. These are the exceptions, these are the rare cases, and that is what the Papermill Alarm does: it finds those rare cases.
And what happens when you use the Papermill Alarm? Well, as you would expect: you use the Papermill Alarm, you get the alerts, and then your alert rate goes down, as intended. The data is really interesting. One thing we found recently was that the rate of Papermill Alarm alerts on open access content is significantly higher than on subscription content.
We need to be careful here; this is quite nuanced. It's not necessarily true to say that this happens because of open access, and it's also true to say that there are plenty of subscription journals that appear to be targeted by paper mills. If you want to explore the data yourself, then you should use our web app.
If you want to know a bit more about that, I hope you'll come and talk to us, and if we don't get the chance, please reach out. Thank you for your time. All right. Please welcome Dr. Alicia Wise of CLOCKSS, who will speak about preservation.
Good morning, everybody. Have you ever considered what would happen to the scholarship entrusted to your organization if you stopped publishing or ceased operating? Digital archiving for long-term preservation should be a part of every responsible publisher's planning. Your customers require it:
preservation is a standard requirement in agreements between publishers and university libraries and consortia, or through aggregators, and those customers expect it. Your authors expect it, too. Their contributions are part of our cultural and intellectual heritage, and if they are used for research or teaching, authors will also expect their contributions to form part of the scholarly record.
This means the content needs to be preserved in perpetuity, to be available to the readers and researchers of the future, outlasting your current organization. And you need it, too: long-term preservation should be part of your disaster recovery planning and strategies. It provides your organization with insurance for your valuable content and a safety net to meet an array of commercial obligations.
Did you know that despite all these good reasons to ensure publications are safely archived, 25% of academic journals are known to be at risk, and an even higher percentage of academic books are at risk? So today we are launching a new digital preservation guide to help, written jointly by librarians and publishers. The guide is intended for senior leaders in publishing organizations.
The target audience is your board and executives across business, editorial, legal, marketing, metadata, production, rights, and other related functions. The guide is focused on the foundational importance of digital preservation for responsible publishing organizations and the steps that senior leaders can take to facilitate it. Factors which will shape your organization's approach to preservation include what to preserve (the content, related functionality, and formats) and where to preserve materials.
There are many options. There may also be perceived barriers, including access concerns, complexity, cost, potential impacts on business models, and copyright issues. These factors require due consideration, and mitigations for each are outlined in the guide. CLOCKSS is a digital archive developed by and for the publishing industry. Founding publishers include the American Medical Association, the American Physiological Society, Elsevier, Oxford University Press, the Society for Industrial and Applied Mathematics,
Springer Nature, Taylor and Francis, Wiley, and Wolters Kluwer. CLOCKSS is a harmonious community where libraries and publishers work together on developing digital preservation systems and outputs. We're financially secure and independent, a 501(c)(3) operating globally, with participating publishers from 62 countries. We've earned the highest certification score ever awarded to a digital preservation service by the Center for Research Libraries.
And we preserve books, journals, and related materials, including data sets, images, metadata, software, video, and more. We're here to provide advice and support, develop industry standards, and provide digital preservation services if you or others require them. And we're pretty friendly, honestly; come talk to us. But I'm here today not only to share the new guide with you, but to ask for your help.
We would like to ask for your support in championing the vital cause of digital preservation. Your influence and reach could significantly contribute to the protection of digital content and our shared cultural and intellectual heritage. We would be most grateful for your assistance in getting the guide circulated within your organizations and your wider networks. Please, if you're a partner to publishers, you can be a champion too, and we'd be grateful for all of your support.
Together, we can ensure the enduring availability and integrity of our cultural and intellectual heritage for generations to come. Thanks very much. All right. Thank you. Now, please welcome Alan Schiffer. He is managing director at Infolink, and he's going to introduce Infolink.
That big one. All right. Thank you very much. I'm going to try to introduce Infolink in the five minutes I have. So let me just tell you who we are. We are a group of financial services senior managers who are trying to bring what we think are best-in-class techniques for managing fraud and risk to the scholarly publishing industry.
We've been doing this now for 60 years, not in publishing but, sorry, in financial services. In order for you to understand our approach, let me just take you through some of the things we've learned over the almost 50 years I've been doing this in consumer credit. First, fraud moves really quickly. Those out to beat you look for vulnerabilities, and they'll find them.
And these vulnerabilities will move from publisher to publisher. So this is a cat-and-mouse game that's never ending. You'll hide the cheese, they'll find the cheese; you'll hide it again, they'll find it again. And this can go on and on and on, and they're very clever. Second, you need to identify fraud early in the transaction, early in the process.
Most of what I've seen so far in publishing has focused on the right side of this chart, and that is actually where you're most intrusive and most annoying to your authors, when you're starting to talk about taking back papers that have already been published. Your success will come from moving much earlier in the pipeline. We're seeing work being done today in helping peer reviewers look at prospective publications.
We think it has to go beyond that, to submissions, and actually even before submissions. If the intervention is done well, it's actually perceived, and this is what we learned in credit cards, as a customer benefit. If you do it at the end, where it's most irritating, it is quite the opposite. So do it early and do it gently. Third, there's a notion of governance.
And this is something that, frankly, is missing in scholarly publishing but is deeply ingrained in finance. Banks are in the business of assessing risk and managing risk, and so structures for thinking about risk and reward, those trade-offs, are deeply ingrained in financial institutions. They seem to be nowhere, nowhere in publishing.
And that, frankly, needs to change. There need to be, somewhere in the organization, groups of people who think about that trade-off of risk and reward, set targets, set goals, and then ultimately drive that process. That connects to the discussion of trust we just heard before: we're in the trust business and trying to instill trust. Personally, I trust my wife. I trust my children.
I trust everyone in this room. But beyond that, it gets a little dicey, and I would encourage you to think a little less trustingly, not with each other, obviously, and not at home, but I think a little more skepticism might be helpful, and I would encourage you to adopt it. And lastly, fraud is rare, even as we keep talking about identifying it, and there are lots of tools out there now that can identify it in pieces.
But you catch a lot of good guys in that net. To solve that problem, the problem of catching a lot of good actors in the web of bad actors, you need to connect a lot of data in a very sophisticated way and build out what we like to call the author fingerprint: understanding which authors work with which other authors, in which fields of study, and which institutions they are affiliated with,
and looking at all of that data over time, in a longitudinal way, to draw basically a unique fingerprint for each author. That allows you to compare that fingerprint with the fingerprints of those who are not long-term scholars and not long-term publishers, and identify fraud or bad papers. (A toy sketch of such a fingerprint appears below.)
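As a toy illustration only (an assumption for clarity, not Infolink's actual methodology), an author fingerprint could be represented as a simple longitudinal summary built from a publication history:

```python
# Toy illustration of an "author fingerprint": a longitudinal summary of an
# author's co-authors, fields, and affiliations. Illustrative assumption only,
# not Infolink's actual methodology.
from collections import Counter

def author_fingerprint(publications):
    """publications: list of dicts with 'year', 'coauthors', 'field', 'affiliation'."""
    years = sorted(p["year"] for p in publications)
    return {
        "career_span_years": years[-1] - years[0] if years else 0,
        "papers_per_year": Counter(p["year"] for p in publications),
        "recurring_coauthors": Counter(c for p in publications for c in p["coauthors"]),
        "fields": Counter(p["field"] for p in publications),
        "affiliations": Counter(p["affiliation"] for p in publications),
    }

if __name__ == "__main__":
    history = [
        {"year": 2019, "coauthors": ["B. Lee"], "field": "oncology", "affiliation": "Univ A"},
        {"year": 2023, "coauthors": ["B. Lee", "C. Kim"], "field": "oncology", "affiliation": "Univ A"},
    ]
    # A long, stable history looks very different from a burst of unrelated papers.
    print(author_fingerprint(history))
```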
So in that context, we think that the publishers that will be successful at doing this will have to create three distinct and interconnected capabilities. These are governance, which I talked about, this idea of managing risk and reward at a very senior level and actually throughout the organization, a function that largely doesn't exist today and needs to be developed in publishing; that governance working with large databases of interconnected data that build out the author fingerprint; and using that data, under the guidance of governance, to build out multivariate models that focus on particular aspects of publishing.
Those three things each rely on the others, and they all have to be in place in order to succeed. I would love to tell you more about what we do to help you do that, but I'm out of time. I will say that we work with organizations to do all three of those things, and if you're interested in working with us, please find me after this or
call me. Thank you. All right. Next up is Camille Gamboa. She is AVP of corporate communications at Sage, and she's going to be speaking with us about research and policy impact. Welcome. Just use the big button.
All right. So I'd like to start by asking you all a couple of questions, if you can hear me for a second. First, I'd like you to raise your hand if you think that, as a scholarly community, we do a good enough job of recognizing, celebrating, and incentivizing research that makes an impact outside of academia. Raise your hand if you're like, yeah, we've got this.
We're doing pretty well. All right. That's telling. OK, we have a couple. All right. And now raise your hand if you think that we have some room to improve. All right. So I have to say, I have to agree with you all.
In fact, at a conference I spoke at earlier this year in front of a room of 200 faculty members, I asked them if they knew where to go to find where their research was cited by their peers in other scholarly works. And the room looked something like this. They all raised their hands. Of course, they all know where to go to find their citations of their works.
But then when I asked them if they knew where to go to find where their research had been cited in policy documents by governments, policymakers, et cetera, they didn't know where to start. And we all know that citation-based data plays an important role in conversations where research, and even researchers, are assessed. And yet we also know that academic citations only give a partial picture of research impact.
So we're going to look at some data now that focuses on this. This comes from Overton and OpenAlex, and it compares mean scholarly citations per publication, in dark blue, to mean policy citations per publication, in light blue, all grouped by discipline. As you can see, in most disciplines there are a lot more scholarly citations than there are policy citations.
But once we get to political science, psychology, and sociology, the difference gets pretty small. And then once you get to business and economics, there are actually more policy citations than there are scholarly citations per publication. So with this and similar data in mind, we at Sage, and we're an independent company with a mission, wanted to help shift the research assessment conversation so that we can measure, celebrate, and incentivize a fuller picture of impact.
And so we have friends at Overton, who we knew held the largest searchable index of policy documents. We went to them and said, hey, friends at Overton, why don't we work together to create a tool that will allow researchers to discover for themselves where their research is cited in policy? And why don't we let them do it for free? So about a year later, we unveiled Sage Policy Profiles, a free-to-use, browser-based tool that lets researchers easily discover and visualize where their work is cited in policy documents.
It puts it all together in a personalized dashboard where they can export citations and export visualizations, with a shareable link that they can send around to anyone. And it's all powered by Overton, which hosts more than 11 million policy documents, guidelines, and think tank papers from 188 different countries. So we launched the tool in December, and today I want to share some of the reactions from researchers so far.
Here, a professor of educational psychology in North Carolina says: Sage has a tool to see how often your work has been cited in policy documents, 32 times for me, which is 32 more than I knew about. A psychology professor in Canada just learned that his work has been cited by UNESCO, the Royal Society of Canada, and more. A British marketing scholar learned that her work was cited by the European Commission.
A professor of sustainability management in Germany learned about 21 citations. I could go on and on and on. They've always known where to find the citations of their work in other scholarly works, and with this tool they are now just learning about the citations by policymakers. With this knowledge, they can talk about research success through a completely different lens. As one researcher puts it, it can help in conversations about grants, awards, and tenure and promotion.
And importantly, it brings a sense of validation and excitement that the hours of work, the challenges faced, and the solutions proposed are helping shape policies. Now, while we do hope that this tool is helpful for the individual researcher, and for social and behavioral scientists whose work makes an outsized impact on policy, let me be clear:
using the tool is not just about researchers; it's about the entire scholarly ecosystem. Earlier I mentioned that Sage is an independent company with a mission. In this instance, that mission is shifting the research impact conversation from one that centers only on citations in scholarly works to one that expands to include societal impact. We want this to be a community-based tool.
We want it to be a community-owned tool and a community-used tool, and we want anyone and everyone who can benefit from it to try it out. So, looking at a room full of people who are involved in scholarly publishing, I hope we can all agree that, at the heart of it, our reason for existence is to help our content, all of our science-backed scholarly works, make a positive impact on society and improve the human condition.
This tool helps us do just that, so please help me in spreading the word. Remember, it's a free tool, and join us in broadening the research impact conversation. Thank you. And our final previews presentation is by Hannah Heckner Swain, VP of Strategic Partnerships at Silverchair.
Hannah is going to speak about Silverchair's newest offering, called Sensus Impact. Well, those are some tough acts to follow, but how wonderful, and I'm just trying to see it as lucky to present following so many great initiatives, products, and projects that you've heard about, which really demonstrate the investment of publishers and those that serve publishers. I'm here to talk to you all about a product that is meant to showcase the value of publishers: Sensus Impact.
This is an initiative that Silverchair developed with our friends at Oxford University Press, and I'm really excited to talk to you all more about it. As with all good products, Sensus Impact came out of a problem: how do we better connect the efforts of publishers and funders at a time of shifting funder mandates and increased scrutiny on publisher dollars? Because at the end of the day, I feel confident that any publisher worth their salt has the same goal as any funder, and that's to advance science, improve health outcomes, and benefit the public.
Of course, solving this is easier said than done. How do we define the value of research? How do we start working on that? What exactly are we measuring? Do publishers and funders have the same ways to measure things? How do we even track down all of those grants, connect them to all of those articles, and track how they're used? That's a lot of dots to connect.
And we still have room to run with Sensus, but we've really started by identifying some key metrics and signposting to start answering the question of how to communicate measures of value and impact between publishers and funders. On our platform, onto which key metadata and usage data tied to articles with grant IDs published by Oxford University Press have so far been loaded, you can see the page views and downloads of that content on a funder-by-funder microsite basis.
These dashboards are powered by our friends at Hum. You can also see sortable tables that list each of the articles on the funder microsites, with the page view and download information that you saw in the graph before, along with citation data. And you can also see aggregated attention data, powered by data from our friends at Altmetric, that shows, on a funder-by-funder basis,
what other alternative metrics are tied to these articles. And of course, you can also search across all of the content flowing into Sensus on grant and funder facets. (A toy sketch of this kind of per-funder aggregation appears below.) So, as I mentioned, excuse me, at its core Sensus is really aiming to build bridges, and that's not only between funders and publishers, but also between different publishers, between vendors, between initiatives, and between content artifacts.
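As a toy sketch of aggregating article-level metrics on a funder-by-funder basis (the record fields are illustrative assumptions, not the actual Sensus Impact data model), the roll-up behind such a dashboard might look like this:

```python
# Toy sketch of rolling up article-level metrics by funder for a dashboard.
# The record fields are illustrative assumptions, not the Sensus Impact data model.
from collections import defaultdict

def metrics_by_funder(articles):
    """articles: dicts with 'funder', 'grant_id', 'page_views', 'downloads', 'citations'."""
    totals = defaultdict(lambda: {"articles": 0, "page_views": 0, "downloads": 0, "citations": 0})
    for art in articles:
        bucket = totals[art["funder"]]
        bucket["articles"] += 1
        bucket["page_views"] += art["page_views"]
        bucket["downloads"] += art["downloads"]
        bucket["citations"] += art["citations"]
    return dict(totals)

if __name__ == "__main__":
    sample = [
        {"funder": "Funder X", "grant_id": "GRANT-1", "page_views": 120, "downloads": 40, "citations": 3},
        {"funder": "Funder X", "grant_id": "GRANT-2", "page_views": 80, "downloads": 25, "citations": 1},
        {"funder": "Funder Y", "grant_id": "GRANT-3", "page_views": 60, "downloads": 10, "citations": 0},
    ]
    for funder, totals in metrics_by_funder(sample).items():
        print(funder, totals)
```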
While Silverchair is at the helm of this project, it's really been community led since its conception, and we will continue to poll our community of practice to drive the further development of this product. We will also make sure that these impact narratives hinge on the consolidation of data that has historically lived in different places and was many times hard to find and hard to locate, because by combining aggregated platform usage, citations, and alternative metrics at funder-specific dashboard sites, we really hope to facilitate funder-publisher engagement, underline the efforts of publishers, and offer funders a dynamic, at-a-glance view of all of the stories that they're trying to tell.
As mentioned, we're just getting started here. Our community of practice is 64 members strong. That represents publishers, funders, service providers and consultants. And we currently have 18 funder microsites live. A lot of these are US agencies, but we are going to grow to include Canadian and European funders by the end of the year. And we have had over 2,500 visitors.
I hope that in a few years I'm presenting at other SSP sessions about the growth of Sensus, with more publishers participating and more data streams flowing into our environment, so that we can really continue to tell the story of research and the value that publishers are offering to the scholarly ecosystem. Thank you so much for your time. You can reach out to me to get involved with Sensus Impact. You can join our community of practice.
We're having our next meeting on June 24, and you can also visit Sensus Impact; it is publicly available at sensusimpact.com. Thank you. So, a huge thanks to all of our presenters today. Well done. Fantastic. We did a dress rehearsal yesterday afternoon that didn't go quite as smoothly as it did today.
So that's why we do dress rehearsals. At this point, we're going to open the voting to all attendees, both in person and virtually. You'll have about two minutes to complete the voting. Here's how it works: you open the Whova app, and from the home screen you'll scroll down to Additional Resources, go to Polls, and the poll for this session should appear at the top of the list.
Or, if you're on the session details page, you'll see the poll button at the top of the screen. Once you're there, just vote for your favorite innovation and we'll see what happens. Just wanted to show that.
It's all right. So some people are having a hard time finding it. We'll give you some additional time to vote.
Everybody good?
We'll give everybody another minute to complete their voting. All right, I think we can close the voting now. All wonderful presentations.
But the winner of today's preview session is Sage. Yes! I don't have a statuette for you, Camille, I'm sorry, but you get this. I couldn't see that because it was right in front of me. There you go. Yay. Thank you.
Thank you. Congratulations to Sage, and thank you to all of our presenters. If you've ever had to do a five-minute presentation, it is a bit nerve-wracking to do it fast and effectively, so another round of applause for our presenters today. Up next, we have a networking break back in the exhibit hall.
Stop by the membership booth and buy yourself some merchandise. They also have some sweet treats for you there, as the membership booth is hosting its social during this break. Check out the auction, and also, lunch today is in the exhibitors marketplace. We also have the Get Involved volunteer fair going on out in the foyer outside the exhibit hall.
If you're interested in getting involved in SSP, serving on a committee, stop by and talk to the committee chairs. They'll tell you about what their committee does, what's expected, and how you might benefit from getting involved in our volunteer community as well. So have a great afternoon, and we'll see you a little bit later on back in here this afternoon, when we close with the debate.
That's right. Yeah, the people need us to make them smile.