Name:
Understanding the value of open-access usage information Recording
Description:
Understanding the value of open-access usage information Recording
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/200904c0-4889-446d-a901-cb443cb86140/videoscrubberimages/Scrubber_3.jpg
Duration:
T00H38M09S
Embed URL:
https://stream.cadmore.media/player/200904c0-4889-446d-a901-cb443cb86140
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/200904c0-4889-446d-a901-cb443cb86140/Understanding the value of open-access usage information-NIS.mp4?sv=2019-02-02&sr=c&sig=zH%2FaP2ZWykiJHvkrksH5fUyIc8qWeTyKUYmkv47xgYI%3D&st=2024-11-22T05%3A25%3A22Z&se=2024-11-22T07%3A30%3A22Z&sp=r
Upload Date:
2024-03-06T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Welcome to the session, Understanding the Value of Open Access Usage Information.
I'm Stephanie Dawson from the open discovery platform ScienceOpen, and it will be my pleasure to moderate this exciting discussion today. Questions around how to measure, share, and understand open access usage are growing in complexity and urgency, as the global trend towards open access continues to transform workflows in the scholarly communication industry.
This panel discussion includes perspectives from publishers, librarians, and technology providers, who will share their expertise on issues that they face in analyzing the open access usage of their content and understanding or creating value. We'll start with a round of short introductions to the issues at stake, and then open the floor for a lively discussion in which we hope you will participate.
We'll begin and end with perspectives from university presses, and we'll also hear from the library community and from technology providers with potential solutions. Please feel free to share your questions and comments in the chat during the presentations for the live discussion to follow. With that, I'm pleased to introduce the panelists. Kasia Repeta. Hi. Kasia is an analyst for global outreach and publishing systems at Duke University Press, in the journals and collections marketing team.
In her daily work at the press, she utilizes digital systems and data to support the dissemination of journals and collections content. She analyzes digital content and metadata to detect patterns, trends, and relationships that inform decision-making. She was awarded the NISO Plus Scholarship in 2022 and submitted the proposal for this panel discussion.
The second panelist will be Tasha Mellins-Cohen. Tasha is the project director of COUNTER. Data and standards are essential underpinnings for our community, from the metadata standards that help optimize discoverability to the usage metrics that are one aspect of measuring impact. Tasha's combined roles as COUNTER's project director and founder of an independent consultancy business, helping publishers to achieve a sustainable transition to open access,
both rely on that pairing of data and standards. The next panelist will be Yuimi. Hello, hi. Yuimi Hlasten is the electronic resources and scholarly communication librarian at Denison University in Ohio. She performs electronic resource troubleshooting and resource usage data processing. She also manages Denison's Digital Commons faculty collection.
She's passionate about creating user-friendly spreadsheet interfaces for librarians. Over the past couple of years, she has created Google Sheets interfaces written in JavaScript that fetch Crossref API, OpenAlex API, or SUSHI API data, and she has presented this work at the Electronic Resources & Libraries conference and other conferences as well. So definitely somebody who's using open data.
The next panelist is Jay Patel. Hello. Jay is head of sales and business development for Cactus Communications. Jay has over 20 years of experience in utilizing technology to solve key business challenges in publishing, academia, and life sciences. He is dedicated to achieving the United Nations Sustainable Development Goal of quality education by expanding the reach of science and research, making it more engaging and increasing accessibility, particularly in the Global South.
Jay plays a crucial role in making Cactus technology solutions accessible to publishers, societies, and research institutes. And finally, we have Emily Poznanski. Emily is director of the Central European University Press and has worked in open access for over 10 years. During that time, Emily worked on numerous open access book and journal initiatives across STM and HSS, including models to transition to open access.
Formerly, Emily was director of strategy and insights and a member of the executive management group at De Gruyter. With all of that expertise, I am very much looking forward to an interesting discussion. Thanks, Kasia, for kicking off the introductions with a publisher perspective; I'll turn it over to you. Thank you, Stephanie, for this introduction.
A quick word about my organization: Duke University Press is a part of Duke University. We publish scholarly content in the humanities, social sciences, and mathematics. We publish over 60 journals, including seven fully open access journals, and around 150 new books per year. Selected books are included in open e-book initiatives like TOME or Knowledge Unlatched.
As we know, the open access movement has been transforming workflows in the scholarly communication industry. Along with that transformation, workflows that used to be reliable became distorted. Nevertheless, at Duke University Press, we continue to grow our open access journals program and publish new open access books.
We also acknowledge that there are some challenges related to open access, data exchange, and usage evaluation. These challenges are associated with, but not limited to, the following topics that we've noticed at Duke University Press. First, some of the open access content usage gets logged in the usage class called anonymous visitors. Anonymous visitors are content users that are neither logged in nor institutionally authenticated.
For example, last year, on average, 87% of Duke University Press open access journal usage got logged as anonymous visitors. As a result, we usually know very little about these users: they may be affiliated with certain institutions, but their activity will not be attributed to those institutions. Furthermore, from the author's perspective, there is increasing pressure on authors to show the impact of their research beyond citations.
Open access publishing can help them reach new audiences, not just those with easy access to a research library. However, since some of the open access content usage will get logged under the anonymous visitor usage class, it will again be hard for the researcher to characterize those new audiences: their locations, institutional affiliations, et cetera. This raises a question that we are trying to analyze at Duke University Press: what are more holistic views of, and tools for, evaluating open access content readership?
The second challenge: one of the complications inherent in publishing a new open access journal, or transitioning from a subscription model to open access, is the lack of shared practices for evaluating changing usage statistics. For example, while overall readership based on the content usage of a flipped journal may increase, the institutionally registered usage may decrease. That results in disturbances in trend reports, readership measurements, et cetera.
So another question we are trying to answer is: what open access usage data are of value to libraries, and how should this value be presented to them? Another challenge, the third one I want to mention today, is associated with open access data inconsistency. This can be related to open access license tagging inconsistencies across publishers, or to differences in open access coverage in bibliographic databases, or even to the possibility of non-human traffic being reported as a result of negligence in web crawler or do-not-report list management.
That said, what are the reliable, standards-based, and technology-neutral workflows for measurement and exchange of open access usage data? These are some of the challenges and questions that the community and Duke University Press are currently trying to address with this session and the following discussions. I hope that this panel brings some spotlight to the issues and solutions in progress.
And now I will pass on to Tasha. Thank you, Kasia. I couldn't have asked for a better introduction to my presentation. So let's jump into it. As I hope you all know,
COUNTER started in 2003, so this year is our 20th anniversary. We originated as, in the truest sense, a consistent, comparable technical standard for measuring usage of subscription content. When we released Release 5 of the COUNTER Code of Practice in 2017, which required compliance from January 2019, we offered ways for publishers, or report providers, to inform the world of open access usage.
Now, we all know that the situation has changed enormously since 2017, so Release 5.1, which will be going live in a month or two, focuses much more on optimizing delivery of open access reporting. We have done that in two ways. Let's start with global reporting. Kasia has already mentioned this concept of attributed and non-attributed usage.
So the usage that we can link to a specific institution, or a specific set of institutions, is known as attributed usage, and Kasia has mentioned that Duke UP could only really link about 13% of their journal usage to specific institutions last year. So you can see that leaves us with an enormous bulk of usage that is not attributed; that is, it cannot be linked to a specific institution.
Within that attributed and non-attributed split, whether content is paywalled, free to read, or open access is a secondary question. For subscription content, clearly, report providers and publishers break down their total usage to show only the attributed usage for a single institution or consortium. For open access content, report providers don't need to do that breakdown.
What we are suggesting is that report providers should deliver these global reports, so they are literally not doing that attributed/non-attributed breakdown. However, to be really valuable for open access, reports need to be much more granular: it's typical for individual articles or book chapters to be made open access, rather than necessarily the entire journal or the entire book.
So for Release 5.1, we are putting a lot more focus on publishers delivering what we call item reports, which are those very granular reports. We are also obviously encouraging them to deliver those as global reports, not institutionally attributed reports. So here is a call to action, whether that's for NISO or for the libraries and consortia on the screen: libraries and consortia have the option to include global item-level COUNTER reports in their requests and their contracts.
If you as a consortium are doing an open access deal with a publisher, we would encourage you to ask that publisher to offer COUNTER-compliant global item reports. That is, audited reports available through the SUSHI automated harvesting protocol. SUSHI, by the way, while it is delicious, in this particular instance stands for the Standardized Usage Statistics Harvesting Initiative.
And it's how people call for JSON versions of their COUNTER reports. Now, one of the reasons that we're so keen on this global item-level open access usage is that we believe usage is a missing measure of impact. We're human; we like to simplify complex matters. And in the case of measuring impact, historically that has meant that we look at citations, which are very direct.
They definitely show that something has had impact on the scholarly record, but they're very laggy; they take a long time to accrue. So in more recent years, we added altmetrics. They're quite immediate and very quick to accrue, but they are usually reflective of quite fleeting attention rather than lasting impact. We see usage, particularly comparable, consistent usage statistics of the kind produced by COUNTER-compliant platforms, as a third type of impact measure.
Unlike citations, usage typically accrues from the day of publication, and unlike altmetrics, we can be sure that usage reflects some form of engagement with the original published content. Now, COUNTER reports are not aggregated across platforms, so for materials appearing on both a third-party platform and the original publisher platform, you won't get a comprehensive aggregated view.
So there are other projects happening, for example the Open Access eBook Usage Data Trust, looking at how we can really exchange that kind of consistent usage data. I am definitely running short of time, so I'm going to say just one last thing. Yes, we are a metrics provider, but research assessment must be a holistic exercise: no metric should be used alone, and none should be used without an appreciation of the scholarly merits of the work.
So I am going to hand over to you, Yuimi, and stop sharing my screen. All right. Thank you, Tasha. Let me share my screen. OK, hi, I'm Yuimi Hlasten. I'm a librarian at Denison University in Granville, Ohio. Denison is a private liberal arts college with about 2,000 undergraduate students.
And just as Tasha and Kasia previously talked about global reports and unattributed usage, my presentation focuses mainly on attributed usage. At Denison, we don't really track global usage yet, so that is definitely our homework. What makes Denison unique in our usage context is that we are a fully residential campus, meaning our students live on campus for four full years.
So almost all of our electronic resource usage comes from our campus IP range, all captured in COUNTER usage reports. Therefore, we have a good grasp of our usage at our institution. Here's a quick summary of Denison's electronic usage data. As for our e-journal usage, total e-journal usage saw a moderate 15% increase, while OA journal usage increased 61% in fiscal year 2022.
For e-books, total e-book usage saw an 11% decrease, while OA e-book usage increased 78% in fiscal year 2022. And in this slide deck, when I say OA, I mean Gold OA. Moving on to the next slide: the charts you see here are the percentage of our OA usage by each publisher. As you see, a few publishers have a very high OA usage percentage.
For e-journals, the major downloads are coming from Springer Nature and Elsevier. E-book usage is much smaller than e-journal usage, but the biggest downloads are again coming from Springer Nature. Percentage-wise, on average, about 10% of e-journal usage is coming from OA and 1% of e-book usage is coming from OA.
E-books and e-journals combined, our OA readership percentage is about 8.8%. So if you are a Denison student, about one in 11 times you will be reading an OA resource, probably without knowing it. So yes, currently only about 9% of readership at Denison is OA, but we know that OA usage is increasing fast, as you saw in my previous slide, and we will probably continue to see good readership growth in the next few years.
But how did I collect the OA usage I shared with you in the previous slide? This wasn't easy. We do subscribe to usage visualization services, but to get the granularity of usage I just shared in the previous slide, we had to create our own in-house JavaScript usage calculator that fetches SUSHI data and calculates usage.
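The in-house calculator itself isn't shown in the session, but the kind of SUSHI-based calculation Yuimi describes might look roughly like the following sketch. The base URL and query parameters are placeholders, not a real endpoint, and the parsing assumes the COUNTER Release 5 JSON report structure (Report_Items, each with Performance periods containing metric Instances):

```javascript
// Sketch of a SUSHI-based OA usage calculator. The base URL and IDs are
// placeholders; a real endpoint comes from the COUNTER registry.
const SUSHI_BASE = "https://sushi.example-publisher.com/counter/r5";

// Build a COUNTER_SUSHI request URL, e.g. for a TR_J3 report.
function sushiUrl(reportId, params) {
  return `${SUSHI_BASE}/reports/${reportId}?` + new URLSearchParams(params);
}

// Sum a metric (e.g. "Unique_Item_Requests") across Report_Items,
// keeping only items with the given Access_Type ("OA_Gold" or "Controlled").
function totalByAccessType(reportItems, accessType, metric) {
  let total = 0;
  for (const item of reportItems) {
    if (item.Access_Type !== accessType) continue;
    for (const perf of item.Performance || []) {
      for (const inst of perf.Instance || []) {
        if (inst.Metric_Type === metric) total += inst.Count;
      }
    }
  }
  return total;
}

// Gold OA share of unique item requests, as a percentage.
function oaShare(reportItems) {
  const oa = totalByAccessType(reportItems, "OA_Gold", "Unique_Item_Requests");
  const paid = totalByAccessType(reportItems, "Controlled", "Unique_Item_Requests");
  const all = oa + paid;
  return all === 0 ? 0 : (100 * oa) / all;
}
```

In practice, the report would be fetched with `fetch(sushiUrl("tr_j3", {...}))` and the response's `Report_Items` array passed to `oaShare`.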
To track OA usage growth, we use COUNTER Release 5 TR_J3 and TR_B3 reports. Not all usage visualization providers or individual libraries collect or use TR_J3 and TR_B3. To track journal and e-book usage, some usage visualization providers or libraries use TR_J1 and TR_B1, which exclude Gold OA usage, and some visualization providers may use only TR. I will not go into the details of the TR report, but the TR report is a very inclusive report. An ACRL survey suggests adding OA usage to your e-journal and e-book usage reporting, but this is not an absolute requirement, so some libraries are reporting with OA while some are reporting without OA. Oh, and this is my last slide.
And here I summarize what I want to see from publishers, libraries, and usage visualization providers in the near future. So, publishers: please consider becoming COUNTER compliant. COUNTER-compliant reports are what we librarians trust the most. I would love to see more publishers, especially fully OA publishers and preprint servers, provide COUNTER reports to us.
And please provide TR_J3 and TR_B3 so that we can keep track of your content usage. If you don't provide COUNTER usage data, we cannot report your platform usage as part of our annual library usage report at the fiscal year end, and you will never appear in Denison's top database list, top e-journal list, or top e-book list. And please provide SUSHI.
A small university like us doesn't have the manpower to manually collect and calculate multiple COUNTER usage reports, so we rely on in-house programs written in JavaScript. If we can get SUSHI data from you, then we can calculate many different types of COUNTER reports, including TR_J3 and TR_B3. And usage visualization providers:
please consider taking advantage of TR_J3 and TR_B3. Library stakeholders want to see official, credible numbers calculated by major analytics companies like you, with visually compelling charts and graphs. Also, we librarians would not enjoy showing decreasing resource usage to our stakeholders when, in the future, OA usage gets really large.
At Denison, as I said, we only see about 9% of readership as OA now, but we know OA usage is increasing, and so in the future we will have to start reporting our OA usage to our stakeholders. And last but not least, I personally would love to see more readership from smaller publishers, including Global South and Far East publishers. At Denison, we see more and more read-and-publish deals coming from major publishers, and I believe that we will inevitably see OA readership from these major publishers grow accordingly.
And next up is Jay; I'm passing on to you. All right. Thank you, Yuimi. It is very nice to be part of this panel and present. I will be talking about our open access content discovery platform, called R Discovery. R Discovery is a leading mobile app for content discovery and access.
It is absolutely free to use, and for publishers it's also free to include your content in the platform. We also don't serve any advertising. It has been downloaded over 2 million times now, from 190 different countries, and it's really well rated on the app store, upwards of 4.6 out of five. The one thing that we're extremely proud of is that we have a large repository of open access content.
To date, it's about 39 million articles and growing, and we are also bringing on about 2 million preprints, which should be ready soon. The other important thing about our app is that the majority of our users, about 90% of them, come from developing economies and low- and middle-income countries. Those tend to be students, academics, and early career researchers who are mobile-first or mobile-only users.
And they want to access content where they are, when they are, and within their regular lifestyle, and that's where we really try to meet them: on their mobile phone, through our app. When we talk about institutional access, we've seen over 1,000 different institutions represented among our users when we look at our GetFTR data.
So we did a quick analysis of open access usage on the R Discovery platform, looking at December 2022, and about 34% of articles seen on R Discovery are open access. The way we identify them is by tagging the content once we index it, either as open access, just published, or open access and just published. There are a few other tags that we use, but these are the ones specifically for open access.
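The tag-based OA share Jay describes can be sketched roughly as follows. The tag names mirror the ones mentioned in the talk, while the event and article objects are hypothetical, not R Discovery's actual data model:

```javascript
// Rough sketch: share of "View Full Paper" clicks landing on OA-tagged
// content. Tag names follow the talk; the event shape is hypothetical.
const OA_TAGS = new Set(["open access", "open access and just published"]);

function isOpenAccess(article) {
  return article.tags.some((tag) => OA_TAGS.has(tag));
}

// viewEvents: one entry per "View Full Paper" click, each pointing at an
// article together with its index-time tags.
function oaViewSharePercent(viewEvents) {
  if (viewEvents.length === 0) return 0;
  const oaViews = viewEvents.filter((e) => isOpenAccess(e.article));
  return (100 * oaViews.length) / viewEvents.length;
}
```

Run over a month of view events, this yields the kind of "about 50% of views were for open access content" figure quoted below.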
What we saw is that about 50% of those views were for open access content, and as I said, the way we identified it is tagging post-indexing. Now, you might be a little disappointed that we are not COUNTER compliant yet, but we do hope to be, and hopefully in the near future we'll be able to report COUNTER data back not only to publishers, but also to universities and university libraries.
One of the other interesting things we do when we index open access content is create summaries and highlights using artificial intelligence. Over the past few months, we've noticed that engagement with full text tends to increase when summaries and highlights are available, and open access really makes this possible for us.
And we have seen consistently that our users choose to either download or view the full paper around 70% of the time or higher when the summary and highlights are available. The primary reason we do it is that we want to help our users make better-informed decisions about what they choose to read, to make sure they're using their time wisely, and to provide them with the best tools in the app,
so that they can read more and utilize what they learn in their research or their studies. To date, we've developed over 6 million summaries and highlights using AI, and they're all available in the app free of cost, all for open access content that's CC BY licensed. We are also the only mobile app that provides summaries and highlights along with the abstracts in the app.
And as I said, what's critical is that if it wasn't for open access, we wouldn't be able to do this. So we really appreciate and love open access, because it allows us to do interesting things like generating summaries and highlights. And we hope to do more interesting things in the future with open access, like translations and possibly even audio summaries.
So keep your eyes open for that, because that might be coming soon. Now, we also looked at where our traffic is coming from country-wise. This is a breakdown of the top 10 countries when we looked at December 2022. As you can see, India and Indonesia are tops.
But we also have more developed countries like the United States, the United Kingdom, and Germany featured in the top 10, and the top 10 countries account for 59% of the views on open access content. The way we measure this is that every time a user clicks on View Full Paper within the app, we record that. And, as I said, while the majority of our user base is in the Global South, we do tend to have usage in the Global North as well.
So open access usage is not just limited to low- and middle-income countries or to researchers who are lacking funds. In all honesty, open access papers are being accessed and read from many different countries and many different institutions, and we hope to be able to track this better and identify open access content more easily within the app,
so we can report the real-world data back to our publisher partners and our users. Thank you, and I will hand it off to Emily. So, thank you for the great talks; so far I've already been noting down things that I'm learning from you all. My name is Emily, and I'm director of the Central European University Press, a university press based out of Vienna and Budapest.
And I'll be talking a little bit about the difference between books and journals; this is a bit of a shout-out to books, because they're often forgotten in these discussions. So, starting off: why is there a difference between journal and book metrics? One of the primary reasons is the ways in which they are delivered.
So journals are counted according to articles, and those are quite straightforward, unique items. Books are more complicated and harder to measure as a result, and we see that even when we discuss the definition of a book and what falls under that category; I think there are various definitions in the industry of what counts as a book and what doesn't. But books are delivered online in multiple different ways, via whole PDFs, chapters, or other sections, and counting that is difficult. There is also the issue of books having multiple DOIs for the same content, so that's another level making tracking difficult, and there are risks of conflating the metrics. I did want to mention that briefly. Often when we talk in the community, we hear there are too many models and not enough time, and we do see that the bandwidth of librarians, publishers, and others is often taken up by dealing with read-and-publish-style agreements and setting up the modeling.
The systems of tracking and measuring impact do take time and energy, but what we think is that books need some of that time as well. What happens if journal articles move into open access and books are slow to get there, especially in the humanities and social sciences, where books are such an important format? So the transition to OA for books is looking different.
And we see that in the models being developed in the market, but also in how the print book remains a sort of dual product alongside the e-book, which again is a little different from journals. So there is different usage of books, as we know, and we probably know that from the way we consume the two. But one thing we did see when we made our books freely available on the Project MUSE platform at the start of the pandemic period: usage, as you would expect, soared.
But what was interesting is that seven of the 10 most downloaded books were in fact over 10 years old, showing that sort of long tail of book usage, and something that Tasha referred to at the start: this fleeting versus longer-lasting usage that we see across publishing. So when we saw these numbers, we as a press decided to launch a gold open access model.
It's a collective funding model called Opening the Future that allows authors to publish with us without paying a book processing charge, making it a more equitable model, because as we know, book processing charges are high and not many authors or institutions can afford to pay them. And with this Opening the Future model, the first book we published under the program was a historical atlas.
Within the first few months, its usage was over 10,000 downloads, and we wouldn't have achieved that kind of usage if the book had been published in a traditional format. I should note that the print book remains; we keep print alongside our open access books. And so as we grow our Opening the Future books, we started to look at how this impacts usage, and how it impacts usage geographically.
And these are some early stats that we're just starting to dig into now. On the left-hand side, you see a collection of 10 books that we made open access through our Opening the Future program, and on the right-hand side, 10 similar books, similar in scope and published at a similar time. We started looking at how these are used around the world, and what we can see is that open access usage is higher, and the list of top countries using our books is just longer for our open access titles, which is exactly what we want to achieve. I should note that this is taken from just one platform, our partner Project MUSE. What we're doing in parallel is working with the Mellon-funded project Tasha mentioned at the start, on book usage analytics for diverse communities: the Book Analytics Dashboard, which has the funny acronym of the BAD project, and which is looking to aggregate usage across various platforms, with all the complexity that we've just discussed.
So with different platforms counting things in different ways, how do we make sense of that data? We're part of that project, and we're looking to deliver to our authors aggregated stats across the platforms that we work with. So this is one step towards displaying more accurate usage, but there's a long way to go. And I think at this point I will hand back to Stephanie,
so we can discuss what other things we should be counting besides usage stats. Thank you so much. Thanks to all of our panelists for their presentations; I think we're going to have a really exciting discussion. I hope everybody in the audience has enjoyed the presentations. We will now move to a live discussion, so have your questions at the ready.
Thanks a lot.