Name:
Strategic use of COUNTER data - expert panelists discuss how they use COUNTER usage data
Description:
Strategic use of COUNTER data - expert panelists discuss how they use COUNTER usage data
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/9d87509a-e15e-44b6-a9f9-a26b4250909a/videoscrubberimages/Scrubber_1.jpg?sv=2019-02-02&sr=c&sig=pXq%2FEyybkaBPxmPtQxfpsd4wpJKyC6HVvwBVaCrxStA%3D&st=2024-11-21T10%3A56%3A47Z&se=2024-11-21T15%3A01%3A47Z&sp=r
Duration:
T00H56M41S
Embed URL:
https://stream.cadmore.media/player/9d87509a-e15e-44b6-a9f9-a26b4250909a
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/9d87509a-e15e-44b6-a9f9-a26b4250909a/Strategic use of COUNTER data - expert panelists discuss ho.mp4?sv=2019-02-02&sr=c&sig=jKU%2BLHNjnCu1dx0nyXdn%2FntrQDulKKgp6ff5UHBYI9Y%3D&st=2024-11-21T10%3A56%3A48Z&se=2024-11-21T13%3A01%3A48Z&sp=r
Upload Date:
2022-02-04T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
So thank you, I'll hand it over to Ivy to introduce yourself. Thank you. I'm Ivy Anderson from the California Digital Library, which, as I think many of you know, is a sort of quasi-consortial unit that serves the University of California system. I've been at CDL for the last 15 years. I've been the director of collections for the CDL and also, as mentioned, associate executive director.
And in that role, we've extensively used usage data for evaluation of resources, for collection analysis, decisions, and negotiations. I had a similar position at Harvard before I came to CDL, where we also used usage data for cost distribution across the Harvard Library, so that's another use we've made of it. More recently at CDL, as some of you know, we've been very involved in open access transformation and negotiating transformative agreements for open access, and so we've used usage data in that context as well, in slightly different ways. So I'll be happy to talk about all of that as the discussion continues. Thank you, Ivy. Joanna? Yeah, thank you.
My name is Joanna Ball, and I'm head of Roskilde University Library. Roskilde is one of eight universities in Denmark, and it's a small, research-intensive university about 20 miles outside of Copenhagen. Several of the university libraries in Denmark have a relationship with the Royal Danish Library, which is our national library. As part of my role there, I also lead on some open science initiatives across Roskilde, but also Copenhagen and other university libraries.
I've also spent a number of years working within libraries in the UK, and both here in Denmark and in the UK I've been heavily reliant on COUNTER data. I have to say I haven't been the person with the nuts-and-bolts knowledge of the data, but I've been the person who's been presented with it and used it to inform decision making around renewals and cancellations, and also in terms of using it to demonstrate our value as a library.
Thank you. Thank you, Joanna. And Amy? Hi, I'm Amy, and I'm the product manager for intelligence at Jisc. Jisc is the national consortium for UK educational institutions, mainly higher education institutions, but increasingly further education institutions as well. I head a team of data managers and get to work with a number of data sets covering scholarly communications to drive and inform our negotiation work, but also to evaluate existing agreements to ensure that they continue to meet the needs of our members.
And prior to this role, I was working in a UK institution, so I did the hands-on data work using COUNTER stats for both journals and books. Thank you, Amy. And Jill? Hi, I'm Jill Morris, I'm the executive director of the PALCI consortium, which has a new acronym: it stands for the Partnership for Academic Library Collaboration and Innovation. We are 71 academic and research libraries based mostly in Pennsylvania and surrounding states.
And prior to working here as executive director, I was in a number of roles, both for PALCI and previously for another statewide consortium in North Carolina. So for the past 13 years or so, I've worked in consortial negotiations of electronic resources, really looking at usage data from that perspective, as somebody who has to evaluate what works well for a large and diverse group of institutions, and standardized data is a huge part of that.
And so COUNTER has been something important to me for a long time now. I'm also project director for an open source project called CC-Plus, an IMLS-funded project here in the United States. It's an international collaborative effort, which is really exciting, to build out an open source software platform that allows us to harvest, store, and display COUNTER-compliant usage data.
And so for the past three and a half years, we've been working on that, and I'll probably share more information about that shortly; I'll drop a link into the chat in case anybody's interested. But we're just wrapping up the second phase of this project, which is going to allow us to do an open source software release that will put it out there for the world on GitHub to use as needed.
And it really was designed to scale to consortial needs for harvesting usage data. So really excited to be here. Thanks for inviting me. Thank you, Jill. So our first topic here will be how libraries in consortia use COUNTER data, and we would love to know what you all think. So here is a poll.
I'll put the link in the chat; you can also use the QR code here, or go to the slide and enter the number, and you can go ahead and answer. You can select more than one. Do you use COUNTER data for acquisition decisions, renewal and cancellation decisions, formal negotiations, and demonstrating the value of the library to administrators?
It's six, one, six. And more votes coming in.
Maybe we'll call it there. So here, clearly, they're all quite popular, but renewal and cancellation decisions is at the top of the list; that seems to be pretty common across our group here. Then acquisition decisions and formal negotiations, and then demonstrating value a little less. We know how important all of those things are. And so with that, I'm going to stop the share and check the chat real quick. OK, and I think we'll open it up to our illustrious panelists. A question for you all: how have you used usage statistics to inform acquisition, renewal, and cancellation decisions, and what approaches are most helpful?
Joanna, we can start with you. Yeah, of course. It's really interesting to see the results of the survey there, because I'm sure we all use COUNTER data in reviews of our current subscriptions and in looking at them in terms of renewals. I mean, cost-per-download or cost-per-use figures are really important in this regard, certainly in terms of how we review our subscriptions on an annual basis within the Royal Library and in previous institutions.
That figure is really important. Of course, it needs to be balanced in terms of the particular discipline that you're looking at. Different disciplines have different patterns of usage, so although a resource might have low usage, it could still be really important for a small group of researchers or students. But I do think low usage is a really useful thing to take into a conversation with academic members of staff, because the response we quite often get is, yeah, but this resource is really, really important.
But if you can demonstrate it's only been used a few times in the last year, that makes our case as librarians much stronger in terms of being able to argue for another resource that might be of better value to the institution. So both data driven and one piece of the pie? Yeah, it's a mixture, definitely. Ivy, how about you?
I've grabbed the unmute button. So we've used usage data in both renewal and cancellation decisions and also in general negotiations, and we've really tried to use it in a broad way in our general negotiations. This is something that we did before we were that involved in open access negotiations, in more traditional negotiations. We developed a very specific approach to journal evaluation that we called a value algorithm, which combined usage with a number of other factors.
So we combined usage with citation data. We actually had three buckets of value that we identified as utility, value, and cost effectiveness. We looked at usage and citation data (how are our authors citing works in these journals?) as the utility bucket. We looked at things like impact factor or SNIP value for the implicit value of the journal.
And we then looked at cost effectiveness: cost per use and cost per impact or SNIP value. This is something that we did at the individual journal level and rolled up at the package level. It allowed us to really look at our packages across the board, to see which journal packages were giving us the most value or the least value in a sort of multivariate sense, and also to make individual judgments about individual journals in those packages.
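The kind of multi-bucket journal scoring Ivy describes could be sketched roughly as follows. This is a minimal illustration, not CDL's actual algorithm: the field names, the specific score definitions, and the sample figures are all assumptions for the sake of the example.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Journal:
    title: str
    package: str
    uses: int        # COUNTER usage, e.g. annual item requests
    citations: int   # citations by local authors to this journal
    snip: float      # source-normalized impact per paper
    cost: float      # allocated subscription cost

def scores(j: Journal) -> dict:
    """Three illustrative 'buckets': utility, implicit value, cost effectiveness."""
    return {
        "utility": j.uses + j.citations,              # bucket 1: utility
        "value": j.snip,                              # bucket 2: implicit value
        "cost_per_use": j.cost / max(j.uses, 1),      # bucket 3a
        "cost_per_snip": j.cost / j.snip if j.snip else float("inf"),  # bucket 3b
    }

def package_rollup(journals: list) -> dict:
    """Average each metric across a package so packages can be compared."""
    per_journal = [scores(j) for j in journals]
    return {k: mean(s[k] for s in per_journal) for k in per_journal[0]}

# Hypothetical sample data for two journals in one package.
journals = [
    Journal("Journal A", "Pkg X", uses=1200, citations=90, snip=1.4, cost=3000.0),
    Journal("Journal B", "Pkg X", uses=40, citations=2, snip=0.6, cost=2500.0),
]
rollup = package_rollup(journals)
```

In practice one would normalize each bucket by discipline before comparing packages, as the panelists note elsewhere; this sketch only shows the journal-level scoring and package-level roll-up structure.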
We were really very pleased with the results of that tool. It's complex to maintain, and so I would love to see a tool that could do that in a broader way, something like CC-Plus that could store some of this data in a more collective way. But it really allowed us to look very carefully at the value of each package and the value of journals within packages. And we would use that both to make our own decisions about where to pitch our cancellation or retention decisions, but also in our negotiations with publishers, sometimes very effectively, to show them where the value of their journals did not measure up to the value of other journals in similar disciplines based on those criteria.
So that was a very valuable tool for us. With e-books, and this is something that we did before COUNTER 5 came along, we had to aggregate data at the chapter level, but we looked at trying to relate the number of uses per book to what it would cost to purchase those books. How did that measure up against the cost of a package? At what point was the package more cost effective than purchasing the books that hit a certain benchmark number of uses? That was also a very effective tool for our campuses. So I'll stop there. Yes, I think just to pick up on what Ivy said: we too use COUNTER statistics to inform a lot of our negotiations. It's one of the value measures that we use to look across the publishers and identify the journals that are of interest to us.
But we're also increasingly using it in conjunction with other data sets. As Joanna mentioned, the cost-per-download metric is very useful when looking to evaluate an agreement and make sure that it's providing value for money for our members. And we're also now looking at combining it with publication data. If I can just share my screen, I'll very quickly show you one of the ideas that we're working with at the moment.
So we're looking at setting the usage data against the publication data of our members to identify journals which are of real high value, both in the reading aspect and the publication aspect. So this would be this quadrant here. But then we can also identify two of the quadrants that are perhaps of different interest to different members.
So down here, in this case the bottom right, are the ones where you've got high reading but quite low publication, so that is potentially a read-only package of these journals. Or possibly these up here, where there's quite a lot of publication but not as much reading.
And then finally, that lower left quadrant, where they're perhaps not providing that value for our members. As I said, that's just one of the value measures that we look at. Another thing, to pick up on what Joanna was talking about, about being conscious and aware of the differences between the disciplines: we've played about a bit with downloads per paper, because you get perhaps a discipline such as the humanities, where they don't publish as many articles, so they might not generate as much usage.
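The quadrant view Amy describes on screen amounts to classifying each journal on two axes, reading (downloads) and publishing (articles by one's own authors). A minimal sketch, assuming median splits as the thresholds and illustrative labels (neither of which is specified in the talk):

```python
from statistics import median

def classify(titles_data: dict) -> dict:
    """Place each journal in a reading/publishing quadrant.

    titles_data maps title -> (downloads, publications) for one publisher.
    Thresholds are the medians of each axis (an illustrative choice).
    """
    dl_cut = median(d for d, _ in titles_data.values())
    pub_cut = median(p for _, p in titles_data.values())
    quadrants = {}
    for title, (dl, pub) in titles_data.items():
        if dl >= dl_cut and pub >= pub_cut:
            quadrants[title] = "high value: read + publish"
        elif dl >= dl_cut:
            quadrants[title] = "high reading, low publishing (read-only candidate)"
        elif pub >= pub_cut:
            quadrants[title] = "high publishing, low reading"
        else:
            quadrants[title] = "low on both: review for value"
    return quadrants

# Hypothetical figures for four journals from one publisher.
data = {
    "Journal A": (5000, 40),  # read and published in heavily
    "Journal B": (4500, 2),   # read a lot, rarely published in
    "Journal C": (300, 35),   # published in, rarely read
    "Journal D": (200, 1),    # little of either
}
```

A consortium might run this per member institution, since, as the panelists note, the same journal can land in different quadrants for different members.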
We've looked at that against perhaps more of the science subjects, where they publish a lot more and so perhaps attract a lot more usage, to try to ease out that skew. And that makes a lot of sense. Go ahead, Jill. I was just going to tag on and say we use many of the same calculations that others have mentioned, and there was a question in the chat too about using those data points for actually shaping the models that you acquire content with.
And we certainly have done that here as well, where we've looked across our membership, taking many of those various factors into consideration that have already been mentioned, and then looking at what that looks like across many institutions and what models work best for a consortium or a group of institutions based on that distribution of use. And many times I think you can work with a publisher to explain how that value is coming into consideration at all of those institutions.
And oftentimes, many of the publishers we work with are really open to considering new models or new packages, even new offerings based on where they're seeing the value actually being obtained by those institutions. So it certainly opens up conversations when you're able to look at something and talk about it together. I think different institutions value different products differently, and when you're looking at a group of especially a diverse group of institutions, it's interesting to be creative with that data and try to figure out what ends up being a win-win where you can increase access to materials because of how you've analyzed the usage that's happening.
Yeah, I would add to that and say we've used some of this data, including graphics that show quite clearly where a publisher's value is not competitive with other publishers, as part of our negotiation strategy, sometimes quite successfully. How successful depends on the publisher, but I think it's very, very useful to have data-driven conversations around value when it comes to pricing negotiations.
It's hard to refute what the data shows. Yeah, I just wanted to come in on the discussion around value, because I think another use case within individual libraries is that we can use the data to identify resources that should be being used, understand why they're not being used, and then work with the academic staff, and perhaps with the publisher, to run a marketing campaign or an education campaign to get the resource used more heavily.
So it's not always about those hard and fast cancellation or renewal decisions; it can also be something a little bit softer around our academic engagement. That's a good point. Right, a positive: it's not always about cutting and canceling, but encouraging use and looking for resources that are falling in the gaps, which can happen for different reasons, such as discovery and access.
So, related, and I think this question at the end was really interesting: I want to build off of this to look at using data to negotiate different pricing models. Is that one that has come up for you all? Maybe Ivy? We haven't used it in the context of pricing models per se; we've used it in the context of price negotiation, to try and lower prices.
Of course, we're representing a consortium, like Jill and others of you. So sometimes using data around lower use at certain campuses can help us negotiate a lower system price for the system as a whole. Not so much the pricing model, but the actual price, right, showing where the publisher did not benchmark well against other publishers.
Well, I think to keep... oh, go ahead. Yeah, I was just going to agree with Ivy that it's quite good to use that kind of cost-per-paper-used for benchmarking, and to look at how specific journals, and groups of journals, are performing. And we've used it to sort of repackage offers, identifying what's most critical for our institutions to receive. In this past year in particular, we really did need it in several cases where our institutions simply couldn't do what the publishers needed from a price-point standpoint.
And so we sort of worked with them to determine how to find what is most valuable to these institutions, and then designed new offerings around that. So we didn't necessarily change the pricing model, but we actually changed what we were offering and the approach that we were taking: where before we were licensing everything, we were now using COUNTER data to help us find out what was most important and create alternatives.
And many of you do represent groups of libraries; VIVA is in that same category, so we look at highest use at any institution or by groups. There's that holistic aspect to it. How do you think these kinds of issues are different when you're looking at groups of libraries as opposed to individual libraries?
One difference I would note, and it comes from being at a very highly research-intensive university system: often people talk about resources that aren't used or that are used very little. There's very little at our university that is not used. The number of resources that show up with zero usage I could probably count on one hand, or maybe fewer than one hand.
So you're really trying to benchmark more aggregate data than the simple question of, oh, this is not being used; everything is being used. It's really a question of prioritizing, when you have to, what is most used. So I would always assert that there's value in most everything; it is being used by someone somewhere. Often resources whose usage seems low are being used once a week by someone.
That's not unimportant, I think, but it's still an issue of relative value that one has to work with. And I think in my country it's about understanding that context as well within the usage. Jisc represents a large variety of institutions, ranging from very small research institutions to very big research-intensive institutions, to primarily teaching-focused institutions.
We have a funding system for our members, so we quite often look at usage across the different bands, because we want to make sure that any negotiation we enter into equally represents all of our members and meets all of their needs. And building on what Amy just said, I think many times it's helpful to find peer groups of institutions to try and understand where value is for different types of institutions, recognizing that, especially when you're licensing something across an entire group, different institutions will value those resources and materials differently.
So it's never a straight calculation, right? We always have to incorporate things we know about those institutions and figure out where and why those differences are occurring. Sometimes it's related to mission. Sometimes it's related to availability of technology, resources, and staffing at an institution, or the knowledge of the staff in sharing those resources and materials with their patrons in some way.
Previously, I worked for a large, statewide, multi-type consortium where public libraries and academic libraries all had the same set of resources, and usage was all across the board in terms of how different resources were being used in those different scenarios. We actually did a study that benchmarked different peer groups of institutions on particular electronic resources, taking their usage data and figuring out what various factors related to driving usage higher or lower.
And in public libraries, a lot of times it was about capacity of the staff and customization of the public library website, and in academic libraries it was other things, factors like, do they have a proxy server in place? This was 2014, so it was a number of years ago now. But factors like that really made it easier for people to access the materials. So there were different criteria in place for different types of libraries.
And I think that's important to us as well: our role in driving usage, or not, and what those factors might look like. You even have a request for more details of your study, Jill, in the chat. Oh, OK, I'll grab a link. Well, another area of really high concern, we know, for libraries and publishers is open access. COUNTER is not only about paywalled or subscription content by any means, but that can sometimes be a misunderstanding.
The Code of Practice includes metrics for how library users access open content, and with Release 5 there were really important steps to make more of a delineation, more separation, between the two types of content. There's also currently a consultation out with publishers about an optional way to report global usage where users are not attributed to institutions. I know there is a plan to have library focus groups coming for that in the future, so you all certainly have a large stake in this as well.
So let's get some more input from the audience here. Here's a statement for everyone: your library or consortium is investing in open access, and you are interested in the usage of the open access articles and books you have funded. Do you agree or disagree?
And it should be open now. For those who might have joined a little bit later, you can go to Slido and enter this number here, and we should see live voting. I can confirm there are a lot of votes, and almost all happen to agree: 31, 35 and growing.
Quite a lot of votes. Going to close this for now. But you can see, at the very end, we have a little disagreement, but quite a lot of agreement that there is an interest in the usage of open access. So I'll stop the share here and ask our panelists about COUNTER and open access usage metrics: what do you see as the role of usage statistics within the open access ecosystem?
We'll start with Ivy. Thanks, Anne. So I'll say that I used to think usage data was not as important as I've now come to believe; it can be very valuable in open access. And the reason I say I used to think it was less important is thinking about the kind of hybrid open access usage that's creeping into our journal packages, and how that institutional usage was somehow not able to be captured in the same way as closed access. And yet we were making decisions about packages based on the data that we had about closed access, despite the increasing usage of hybrid access and also green open access. I think green open access is a real challenge for the way we use usage data. But when it comes to investing in open access resources, which is sort of a different problem space, I think usage data can be very, very valuable, because we are also having to make decisions about what we invest in and why, and justifying those decisions for ourselves.
And it's not always based on our own publishing activity; it may be based on much more global kinds of values, but also the value to the institution. So I think the work that COUNTER is doing to try and measure usage and develop standards for measuring open access usage, both globally but also at the geolocation and institutional level, will be very valuable going forward as we invest more and more in open access resources not specifically tied to our own publishing activity, but to our usage and the value of that content.
Shall I go next? I'd say we're very interested in COUNTER usage for open access material. And the more we've been thinking about it, the more we're thinking at the article level for institutions, because as we move through transitional agreements and toward more of a pay-to-publish model, we want to be able to monitor and evaluate the publishing aspect of the agreement, and being able to understand the usage of articles that your institution or your consortium has paid to publish is really important.
And I'm quite interested as well in going beyond the mere count of somebody reading it, to look at the broader reach of the article: the geolocation, but also the type of people or the type of organizations that are potentially using these articles. Can we demonstrate impact beyond academia, and can we start to reach out to other areas of society?
And I think being able to demonstrate that, for any open access publication, is of value to both institutions and consortia. I agree that article-level usage, as we move into transformative agreements or agreements that are focused on publishing activity, is super important, and I'm not sure that COUNTER is doing that specific work now, but that's something that I think is really worth discussing.
How do we get article-level data that we can use to evaluate the value of the publishing activity, and how could we justify that work, or not just justify it, but represent its value to our constituency? So, what are the areas we're starting to explore? I know you mentioned the green usage, and we've been looking at perhaps trying to pull in some data from IRs, so the institutional repository usage stats, which I think are COUNTER as well, and somehow matching that with green articles that we know might be deposited via the green route and trying to pull that together. It's very early days.
And Joanna, I saw you, and Ivy as well. Yeah, I saw some comments in the chat about how usage isn't only important at the institutional level; it is much wider than that, and I just wanted to say that I absolutely agree. When we're talking open, we need to move away from focusing on internal usage of our investment in open, and by open I don't necessarily just mean the transformative, transitional deals, I mean open access book initiatives as well: what impact that is having on a much wider scale, not necessarily on usage based in the institutions. I think it also needs a bit of a change of mindset from our point of view as librarians, but a really important one.
I think from the investment perspective, they're both important. I agree that the global usage is much more interesting when it comes to evaluating the impact of open access. But when one is making institutional choices about where and how to invest, as that becomes more of a norm, the ability to tie that investment back to what your own institution is actually using will have value.
So I guess I'm a little bit conflicted about whether institutional-level usage is important for open access or not. I used to think it was not, but I'm beginning to think there's more value there in justifying one's own investment, right? Why would we invest in one resource and not another? It's not necessarily because one is used more widely globally, if we don't actually use it ourselves because we don't have programs, for example, in that discipline.
So I think the role of institutional usage in open access is an interesting question; it's more about one's own investment than about evaluating the value of the resource itself. I would just chime in and agree with that, Ivy. At least here in my consortium, we're not a system of institutions; we are a non-profit organization that is membership based. Libraries can come and go, and they have very different ideas about what investment in open access should look like at each institution.
And so when you start taking that up at consortium scale and trying to argue for an investment in open access, it does matter at the institutional level to know what's happening and how we can present the case for what is best: how can we best invest our dollars collectively in order to serve the community that we work for? So, yeah, I see both sides of that.
I think, you know, when I'm trying to make an argument about investment in open access, it's difficult to not have the institutional level knowledge of how things are being used. And so I think it will continue to be important, at least at that scale, if we're going to make group investments where it's not a system decision and that can be challenging.
Sorry, just to come back on that. Yeah, I do agree it's useful at the local level, but it's perhaps more important that the research published at the local level is getting out there than that the initiatives we invest in are being used locally. So I think it's just a shift in focus. But yeah, certainly for our transformative agreements, where we're investing in the publishing of our authors, we're much more interested in global usage in that context, right?
It doesn't matter so much how our own users are using our authors' work; it matters how our authors' work is having impact in the world. That's actually the much more important value there, no doubt. We were talking with our state-level folks, and it's interesting how open access is now much more broadly understood. It's that faculty research, and communicating the value of it to state funders and beyond, that has really opened up in the past year or so.
A pandemic can do wonders for showing the value of open information. Let's see... well, there's a call in the chat, Amy, for you to share more information if you have a link on your quadrant analysis. We love data visualization; it really takes it to the next level. Well, I think we do have one more audience question, let me see if I can pull it up.
And this is at Slido; you can put in the code and then answer one more question. This is it. Release 5 is under continuous maintenance, which is very different from how COUNTER has done things in the past. This summer there will be a few clarifications, and next year there'll be a lot of attention to open access and the larger update.
But there is so much that COUNTER means, both to people in libraries and people in publishing, and also the folks that process and house that usage data for many publishers; there's really a broad ecosystem here. So it's interesting to hear from you all: what do you think COUNTER should prioritize? More effort to get greater compliance from publishers and vendors, training in understanding usage data and using it to inform decision making, or better metrics for open access content?
We'll see how our poll is going. I see some more coming in. OK, I'm going to call it. So we have, as the top one here, more effort to get greater compliance from publishers and vendors, followed by training in understanding usage data and using it to inform decision making.
And then at the bottom here, we have better metrics for open access content. Let me close this up here, and I can go back to make sure the chat is there. OK, so for our panelists: what do you think the future priorities of COUNTER should be, and what is the library's role in that kind of work? Yes?
Can I say all three? That's cheating now. I was just reflecting on how frustrating it is when publishers don't use COUNTER, because it just means that as librarians we're comparing apples and pears all the time, and it makes it really, really difficult to make data-driven decisions. So I would just encourage COUNTER to keep on doing the good work it is doing to recruit and engage with publishers.
And I think particularly some of perhaps the newer publishers, some of the open access publishers. The other thing was around usage data: I think we do need to be able to understand the data we're getting. And as libraries, we need to be better at making sure that our staff have the skills required not just to understand the data, but to be able to manipulate the data and to enable those decisions.
Because, you know, things are getting increasingly complex. I'm just thinking about the work that we've done at the Royal Library recently in terms of our Elsevier negotiations, and the kind of lift in skills that was required across the library to be able to complete that work. Can I go back? Yes, sure.
Yeah, OK, great. So I think all of those are important. But I want to add one that I think COUNTER is already doing, and I hope to see it continue, which is to facilitate both tooling and collaborative work inside libraries and consortia to support the actual activity of gathering usage data and analyzing it. Because having this standard to rely on is so wonderful.
And it's really on us as library and consortial staff to then go out and apply it and use it, and put it in our license agreements that publishers need to provide this kind of data. And I think the more work that COUNTER can do to help facilitate those kinds of conversations, the more helpful it is, spreading the word that libraries have a role in this too.
And it's not all on COUNTER to do the work; it's a collaborative effort, I think, between consortia and libraries, and we need to make sure that we are doing our part to ensure that publishers are providing us data in a way that's meaningful to us. And I think COUNTER has done a fabulous job of collaborating with the CC-Plus project in helping us design tooling that would support that kind of activity.
The more easily we can gather usage data, use it, consume it, and understand it, the better it will be for all of us, regardless of which of those priorities it falls into. So I would just add that little plug: I think we all have an advocacy role that we can play in collaboration with COUNTER. And I'd love to see COUNTER continue to facilitate; you've done this before, and Lorraine's done a fabulous job of partnership.
And I think just continuing in that role is really critical. Ivy? Well said; I'm not sure I can add anything to that. I mean, COUNTER has played such an important role in creating standards for usage data. That said, using usage data has continued to be challenging because of the variability in vendor adoption.
Also because of changes in the standard over time. Right, so trend data we've always found extremely challenging. COUNTER does provide some guidance around how to, for example, relate the different releases of COUNTER, but some very specific support on how to do trend analysis once a vendor moves from one version of the standard to another would help, since that change is going to affect all their customers globally in presumably a very similar way.
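The cross-release trend problem Ivy describes can be sketched roughly like this: stitch a usage series together across the release boundary while keeping track of which release each year's figure came from. The metric pairing is a real COUNTER concept (Release 4 JR1 full-text requests roughly correspond to Release 5 TR_J1 Total_Item_Requests), but the data below and the one-to-one mapping are purely illustrative, not an official crosswalk.

```python
# Hypothetical sketch: stitching a usage trend across COUNTER releases.
# Figures are invented; the R4->R5 metric pairing is approximate.

r4_usage = {2017: 1200, 2018: 1350}  # R4 JR1 full-text requests
r5_usage = {2019: 1100, 2020: 1500}  # R5 TR_J1 Total_Item_Requests

def combined_trend(r4, r5):
    """Merge the two series, flagging which release each year came from."""
    series = {}
    for year, count in r4.items():
        series[year] = {"count": count, "release": "R4"}
    for year, count in r5.items():
        series[year] = {"count": count, "release": "R5"}
    return dict(sorted(series.items()))

trend = combined_trend(r4_usage, r5_usage)
for year, point in trend.items():
    print(year, point["count"], point["release"])
```

Flagging the release of each data point matters because an apparent dip at the boundary (as in 2019 here) may reflect the changed counting rules rather than a real drop in use.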
And so having some tools around trend analysis could be useful for libraries; it's something that I know we've struggled with quite a bit. But also the work that COUNTER has done in moving into open access. Of course, for many years, as probably most everyone has, we've had clauses in our licenses that require adherence to COUNTER, and we're starting to add open access usage tracking into those clauses as well, to signal to publishers that we do want usage tracking to also be applied to open access.
And that will be important to us. So I think that's, you know, an aspect again of the partnership element: working together to try and advance standards activities is so, so valuable. I don't have much more to add, not really. But I would just agree with that analysis. I think working towards compliance and consistency across all the different providers is kind of the foundation for allowing us to then use usage data in combination with other data sets and, you know, insights from other things.
I mean, I remember being in the library with book usage statistics before Release 5, when you didn't have vendors and providers all subscribing to COUNTER usage for books. And it was fun, shall we say. Yeah, and I think a lot of what you all are talking about, and there are a number of comments in here in the chat, touches on user privacy and on non-compliance.
A lot of it does get into contractual expectations, which are really important for COUNTER to facilitate. I would also say: what do you include for non-compliance? It's really hard. Ideally, you'd have some sort of financial obligation change. I know that within VIVA, we have started adding the optional consortium reports as required in contracts as much as we can.
There are different pieces to it. I would say I've seen an uptick in publishers whose standard licenses claim more ownership of usage data; you have to strike the clauses that don't allow you to get access to your usage data or share it in the ways you'd want to. So could you all speak a little bit about that contractual relationship with publishers and the role that the Code of Practice plays?
Since I raised it, I'll say a few things about it, also responding to the question. So the ability to share usage data is also important for us. And of course, many vendors for many years have tried to restrict one's ability to share usage data. So we negotiate against that when we can; our model provision certainly would allow us to share usage data with other colleagues.
By the same token, we want to restrict the ways in which vendors might be able to monetize that data or use it for other purposes, for data privacy reasons. So of course, personally identifiable information in general is something we have clauses around, for user confidentiality or user privacy, but we try to restrict vendors from reusing the usage data for other purposes as well.
So, you know, we have our interests, vendors have their own interests, and we negotiate over those things. I think the issue of compliance is not so much a punitive one for us; the contract is often really about a dialogue. It's a document, but it's also an advocacy and education tool. So if a vendor is not supporting COUNTER usage, we probably would take it out of the agreement, or we would frame the language as a goal to adopt that usage, right?
Because we're trying to bring the vendor, bring the publisher, along into more standard compliance. So it's not so much about being punitive as it is about gaining adherence to standards. If there is a provision that does require it and the vendor isn't fulfilling it, then it becomes just a normal breach issue; you know, they have a contractual commitment that they're not complying with.
But really, it's more about that discussion with the vendor, through education and advocacy, to help move all of us along in that space. I would be happy to share model license provisions; the license agreement has very good model provisions in that regard, which are, I think, the same as ours. I was just going to say, we very much work with the publishers and suppliers to support them and to help them in adopting COUNTER if they don't adopt, and try to provide that support where possible.
Well, I think we'll open it up; I'm going to look to the chat for questions. I see there are some really interesting points being made here, and one of them is coordinating with other standards such as KBART, and usage and holdings. And I know for consortia that can feel particularly important as we try to carve out what is shared and paid for centrally by all parties, as opposed to what is subscribed to and paid for locally.
Can you speak to the work that you've done to map usage to entitlements, and how that has worked or not worked? Or any future goals, such as maybe CC-Plus? There you go. Well, maybe Amy can speak to it better than me, but we really haven't done it, because it is so difficult to manage. Specifically within the CC-Plus project, one of the pieces of how it was constructed was to enable support of consortium-wide usage statistics gathering at the individual institutional level.
That could then be viewed from a consortium perspective and lens, but also allow individual institutions to add in other resources that they may have acquired outside of the consortium, because we recognize we're only one tiny percentage of what many of these institutions are actually using and acquiring for electronic resources. So that's one way of managing it. But we are certainly looking, for the next phase of CC-Plus, at ways of really helping to bring different data sources in, and then allowing CC-Plus to also export data to other systems.
So if there are other software tools out there that would allow us to make some of those decisions more easily, we would really want to be able to take COUNTER data for certain resources and put it into other software that allows us to do better analysis. I was just going to say, we tried very much to do something along those lines, to map the usage to holdings, and to be honest, with such a large consortium we were just overloaded with data; it was too much really for us to manage to achieve.
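The two-level view described above, consortium-wide totals alongside each institution's own usage including locally acquired resources, can be sketched as a simple rollup. The record layout, institution names, and figures here are invented for illustration; this is not the actual CC-Plus data model.

```python
# Hypothetical sketch of a consortium-level rollup of institutional
# COUNTER totals, in the spirit of the CC-Plus discussion above.
from collections import defaultdict

# (institution, platform, Total_Item_Requests) rows, one per harvested report
rows = [
    ("Univ A", "PlatformX", 5000),
    ("Univ B", "PlatformX", 3000),
    ("Univ A", "PlatformY", 800),  # acquired outside the consortial deal
]

def consortium_view(rows):
    """Sum usage per platform across all member institutions."""
    totals = defaultdict(int)
    for _inst, platform, requests in rows:
        totals[platform] += requests
    return dict(totals)

def institution_view(rows, institution):
    """Per-platform usage for one member, including local-only resources."""
    return {p: r for inst, p, r in rows if inst == institution}

print(consortium_view(rows))            # totals across the consortium
print(institution_view(rows, "Univ A"))  # one member's full picture
```

Keeping the raw per-institution rows, rather than only the consortium totals, is what lets each member fold in resources acquired outside the shared agreements.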
But I think it would be very interesting to see if it was ever, you know, achieved. Yeah, I know people who spend an inordinate amount of time mapping usage, but it is a never-ending challenge, there's no doubt, for a whole variety of reasons, including changes, et cetera. We have a nice question here from James, which is: have you had success in publicly sharing, completely openly, any publisher's usage data reports? And I would just add, as a tag-on, what are the motivations for doing that?
And what are the goals here, if anyone has done that or would think to do that? I know in my previous consortium experience we would publicly post our usage data reports, and we would just put that as part of the terms in our agreements, that we were able to do that. In terms of goals, we did it as a matter of making life relatively simpler for ourselves, because it is difficult to share out usage data at scale for, you know, hundreds of libraries at any single time.
And to have that locked down didn't make a lot of sense, especially when state dollars were going into the acquisition of that content. And so in that situation, we opened up usage reports and made them widely accessible, again putting it in our agreements. That was just a requirement of the way that we did our work; it's our data, and we felt like we needed to be able to share it that way.
Then I see from Marie Kennedy that there is another example of aggregate usage data shared publicly. This kind of goes back to early on; we had questions on the theme of peer groups and metrics. So, really quickly: have you found peer group institutions who share metrics and budgets? That also gets into questions of cost, and I know that is.
Trying to set benchmarks, I might say, would be an aspect of that. Have your institutions done work with trying to benchmark against peer groups, and maybe, to Joanna's earlier point, trying to find gaps in resources that should be more used, in a theoretical sense?
Well, I think a lot of times budget is one of the things that goes into determining who your peer group is. You know, that information is all publicly available for the most part, for most of our public institutions anyway, and our private nonprofit institutions. It's easy to find the library's materials budget data point in most cases, and at least in my experience, libraries naturally look for peers they know are generally running along the same lines in terms of budget. So in that study that I mentioned earlier, and I did drop a link in the chat, you can see one approach that we took in terms of trying to determine what peer groups match.
But we were doing that for resources that the entire state received access to and didn't necessarily pay into out of a particular budget; it was just something that was a state resource. And then when individual institutions are paying a bill for resources, they're going to look for different peers that they know, whether that's somebody locally or across the country.
That might be a competitor in some sense to those institutions; it's important to see that. And I think when you've got consortial relationships in place, you're maybe more likely to share more information with each other, because there is a relationship there and you're able to share data. So I know some of my institutions do that. My hope with CC-Plus is that we're going to actually be able to allow our institutions to more easily make that leap from
"I know that library over there is a peer institution for me based on similar budget structures and so on; let's talk about what's happening and why it's different." Now, when one is talking with administrators, you often present a number, and then the question is, well, what does that number mean? Is that a high number, a low number, or an average number? Is that a good number, without any comparative data?
It's hard to show that something represents good value or average value, or low value or high usage, et cetera. So I think part of the value of sharing is, as Anne said, that benchmarking, right, to have something to measure against to evaluate the numbers; the numbers themselves are just numbers, they don't really tell a story. I would say, because we're working at that kind of consortium level, we don't necessarily benchmark within peer groups or groups of institutions.
But we do look at that kind of benchmarking across our agreements. So we look at value for money across all of our agreements, look at the average and where the different agreements sit, and identify really high-value agreements and low-value agreements for our members. Just to come in from my experience in the UK, and I'm sure UK colleagues, including Amy, can correct me, but in the UK we use SCONUL statistics a lot in terms of benchmarking and explaining to our senior management why we need a particular budget, and also looking at that usage figure as well, which, as I remember it anyway, is one of the SCONUL statistics that we share across UK libraries.
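The agreement-level value comparison described above boils down to computing cost per use for each agreement and comparing it against the average across the portfolio. The agreement names, costs, and usage figures below are entirely invented for illustration, as is the 1.5x threshold for flagging outliers.

```python
# Illustrative cost-per-use benchmarking across license agreements.
# All names and figures are hypothetical, not real negotiation data.
agreements = {
    "Publisher A": {"cost": 100_000, "uses": 250_000},
    "Publisher B": {"cost": 80_000, "uses": 40_000},
    "Publisher C": {"cost": 30_000, "uses": 90_000},
}

def cost_per_use(agreement):
    return agreement["cost"] / agreement["uses"]

cpus = {name: cost_per_use(a) for name, a in agreements.items()}
average = sum(cpus.values()) / len(cpus)

# Flag agreements whose cost per use is well above the portfolio average.
for name, cpu in sorted(cpus.items(), key=lambda kv: kv[1]):
    flag = "LOW VALUE" if cpu > 1.5 * average else ""
    print(f"{name}: {cpu:.2f} per use {flag}")
```

As the panelists note, the ratio alone tells no story; it only becomes meaningful once there is an average, or a peer's figure, to measure it against.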
Well, we are basically at time. I want to thank our panelists so much for the lively conversation. That leads us to say that the data is one piece, but it's the context, it's the story, as Ivy says, that you tell that matters as well. Thank you all for being great participants in the chat. We love getting your questions. Have a wonderful rest of your day, whether it's late in the day or early, wherever you are.
So have a good day and thank you so much for joining us. Thank you, Anne.