Name:
AI and Library Discovery
Description:
AI and Library Discovery
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/949f7281-cfb1-4641-b6ab-52a4e45cd0f7/thumbnails/949f7281-cfb1-4641-b6ab-52a4e45cd0f7.png
Duration:
T00H59M59S
Embed URL:
https://stream.cadmore.media/player/949f7281-cfb1-4641-b6ab-52a4e45cd0f7
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/949f7281-cfb1-4641-b6ab-52a4e45cd0f7/AM21 Session 6C - AI and Library Discovery.mp4?sv=2019-02-02&sr=c&sig=nvxUSPMLlJ7buSFiZSv1imFbK8c3PY3%2BJGGehxpgSR8%3D&st=2024-12-22T16%3A29%3A20Z&se=2024-12-22T18%3A34%3A20Z&sp=r
Upload Date:
2024-02-02T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
ROBERT MCDONALD: Welcome, everyone. My name is Robert McDonald. I'll be moderating today's session. And if you haven't seen already on the shared slides, we're in session 6C, AI and Library Discovery. For those doing social media, today's hashtag is #SSP2021. You can view the full screen by clicking on the theater mode button in the video player in the web browser. And attendees can view closed captions at the bottom of the screen by clicking on the CC icon on the Zoom interface or view the audio transcript to the right of the video by clicking the arrow next to the CC icon and selecting View Full Transcript.
ROBERT MCDONALD: For this session, we're asking that you place all of your questions in the chat box in the web browser in Pathable. It's in the chat box to the right of the video player. We won't be using the built-in Zoom chat function for this session. I'll be looking for those there and making sure we get those to the speakers by the end of the session.
ROBERT MCDONALD: And we'll have that chat active after the session, so feel free to leave comments or questions or continue your conversations as we move through the session. And with that, I'm going to introduce today's speakers. Our first is Ken Chad. Ken's work focuses on technology, especially for libraries and higher education. He's helped universities from Estonia to Turkey and Cambridge to Cairo. He publishes higher education library technology briefing papers on topics such as library systems, the role of the library in teaching and learning outcomes, and textbook and reading list solutions, and is always looking for exciting and worthwhile new projects.
ROBERT MCDONALD: Our second speaker today is Katie Fraser. Katie is a senior librarian for discovery systems and metadata at the University of Nottingham. She manages a team of experts who develop the university's flagship discovery tool, NUsearch, and its institutional repositories, and who curate metadata to make its scholarly collections as visible as possible. And she collaborates with colleagues in IT and with external suppliers to keep the library system lights on.
ROBERT MCDONALD: And with that, I'm going to turn it over to Ken who will start off as our first speaker today. Thank you.
KEN CHAD: Thanks, Robert. So I'm delighted to be talking to you from Brentwood in Essex, north of London in the United Kingdom. It's great to be here and have this opportunity to talk to everyone. So I'm going to move through this presentation reasonably quickly. And you'll see from this first screen that some of the slides have a lot of text. The reason I do that is that if you need to go back to them after we've made the presentation, I hope they'll make sense by themselves.
KEN CHAD: So what we're doing is, Katie and I have both been involved in some work with AI-driven discovery. I've been mostly involved in health care, and Katie comes from the university library side. And I'm going to be talking about a project that I did with Health Education England, based at a university hospital, and talk about how we worked with real users, students, clinicians and so on, and talk about the problems.
KEN CHAD: I put "problems", the ones they need to solve, in inverted commas; I'll talk a bit more about that in the context of resource discovery. I run Ken Chad Consulting, so you can just Google Ken Chad and you'll find me. And that's my strapline: libraries make the world better, so I help people make them more effective. You can go and check me out there and see the kind of projects I do.
KEN CHAD: So my presentation is going to move through these themes. I won't read them all out, but they are put in context and applied to discovery. And we'll walk through these bit by bit as we go, and then come to a conclusion about what we're going to do with all this. So just a few slides on putting AI in context. Now, some of this you may already know, so just a few slides. I think, firstly, what I want to say, and of course most of you will be aware of this, is that artificial intelligence just seems to be everywhere in terms of the literature.
KEN CHAD: So there's books. On the left there, you can see Information Professional. That's the journal of CILIP, which is essentially the library association in the UK. There's the Times Higher Education supplement's artificial intelligence survey. At the top left, The Future of Data, which was simply an insert in a newspaper the other year. So it's everywhere, and we're all aware of that. And I think, just to clarify, I like this definition, if you like, of the AI bucket, because it's not one thing, it's a whole bunch of things.
KEN CHAD: And this comes from an article by someone at Sheffield University. So it encompasses a number of things, of big data analytics, machine learning, and so on. And these definitions or these components, if you like, were the ones that came out from a report to the House of Lords a couple of years ago. And I think the other thing to say, importantly, is AI, of course, as we all know, has all these sci-fi connotations.
KEN CHAD: And what we're really talking about right now is narrow artificial intelligence, which is machine learning AI, a broad set of algorithms solving a specific set of problems. So we're not yet talking about human-capable robots or even super-capable robots. And I think that's an important qualification to make at this stage. And I think this is also one of the key things that comes across as you read the literature around artificial intelligence: it works best when there's large amounts of rich data.
KEN CHAD: And of course, for us in the domain of publishing and libraries, this is exactly where we are. So I think that's a really good message for us: in the end, whoever owns the data is king. Algorithms may be being commoditized; it's the data that's really important. And just putting that into context, as I say, I've been working with Health Education England, which is part of the National Health Service, the NHS, and they recently set up their AI Lab. This kind of thing is going on all over the place, and that's on the right-hand side. So you can see that they acknowledge AI is powered by data.
KEN CHAD: And the NHS has some of the best health data in the world. So it's really keen to exploit that data. And I'll talk about how that works for us in libraries and publishing. So after all those general comments about artificial intelligence, we see that already the people in our domain, ExLibris, part of ProQuest which is part of Clarivate. Now they're already writing about artificial intelligence in the library.
KEN CHAD: OCLC is also doing research and publishing on artificial intelligence. So it's come into our domain and it can now solve some real problems. And that's what we'll be talking about. And I think, just to point out as well, in publishing there was an article recently published by UKSG, which is an organization that works with publishers and content providers and intermediaries.
KEN CHAD: And what you'll see from a lot of these articles and publications is that they're all recognizing that we're still at the early stages. So they're using words like "it will profoundly change the way things work." So it's useful to acknowledge that we are just at these early stages. And there's plenty of people, though, to help. So this is just an example of a few companies that are out there to help.
KEN CHAD: And I'll be talking about one of those in particular, which I'll mention in a moment. So AI is here, it's at the early stages, and there are companies and organizations that can help apply AI to the kinds of problems that you might have, that you want to solve. What I want to look at, though, is specifically artificial intelligence applied to library discovery. And for this project, both Katie and I used a specific product called Yewno, and that integrates with existing discovery products like EBSCO's EDS and Primo, so that it can link out to the full text, the appropriate copy for that person, the copy that's licensed to them.
KEN CHAD: So it integrates with that technology to do that. So this is Yewno Discover. And what I'm not going to do, I just want to say this right at the beginning, is give a product demo of Yewno. If you want to know more about Yewno yourself, then do get in contact with them. But it's a product that's been around; it's used at Stanford and, I think, the University of California, where one of the campuses started to deploy it a couple of months ago.
KEN CHAD: And what it does, really, is take what they sometimes call this huge corpus, that big data of quality material from various publishers, from JSTOR and really all sorts. And the point here is that publishers are quite keen to get their data into this, because it's all about better ways of finding their content. So it does a number of things, importantly this disambiguation.
KEN CHAD: So a classic example is the word depression. Just look at all the different ways that depression is defined in geology, in weather systems, in physiology, in economics and in health. So what artificial intelligence can do, and certainly the system we're talking about, is disambiguate, clarify what we mean by depression. And what I want to say here is that, of the features of artificial intelligence, you'll see that some other kinds of systems can do elements of this.
KEN CHAD: But what artificial intelligence can do is, I guess, do it more cleverly. So it can disambiguate terms. And a key aspect here is that it works on the basis of concepts, not keywords. For us in the UK, Brexit is now a term meaning we've done it, we're now no longer part of the EU. And what happens is these terms change and new terms arrive.
KEN CHAD: And drug terms have brand names and medical names. So it's working not through authority control but through this sense of concepts, which is a bit clearer here: on the left-hand side, you see a snippet of an article about sintering, what they call selective laser sintering. There's no mention there of 3D printing. But 3D printing is typically the thing we understand.
KEN CHAD: So what the system does is understand that what they really mean here, the concept that this article is talking about, is 3D printing, even though that's not in what we might think of as the cataloging metadata or in the text itself. It's the algorithms that are making those kinds of concept connections. And I thought this next one was quite relevant. So this is how it looks in the user interface.
KEN CHAD: So here's a book from 1989. And we're finding the concept of gender equality, which is quite an important concept because it's one of the UN's Sustainable Development Goals set up in 2015. So that term, gender equality, is a relatively modern one, but it's an important term; it's got some solidity because it's used by the UN. So if you want to find things on gender equality, with a tool based on artificial intelligence like this you can find them even though the term isn't in the article. In this instance, what you're seeing is the specific relevant snippet, and there are other snippets too, that's looking specifically at gender equality in the broader context of a book.
KEN CHAD: The other aspect that we'll be talking about later on is this whole way in which it can visualize things. So certainly, as I'm working with clinicians in the health service, you can see that there are various relationships between, at the bottom, paracetamol and, at the top, ibuprofen. So it's useful to look at all those drugs, and you see other terms: analgesic, opioid analgesic. So it creates a mental mind map and shows relationships between different kinds of terms, and the system gives you, as you can see down on the left-hand side, a definition of what paracetamol is.
KEN CHAD: So it's able to visualize those relationships. So that is, broadly speaking, some of what artificial intelligence can do with discovery. And now what I want to do is talk about this specific project and some of the work I did with users. It was at University Hospitals of Derby and Burton, which is one of the biggest NHS trusts in the UK.
KEN CHAD: And I'll take you through that project. So there we are. That's University Hospitals of Derby and Burton. If you ever get a hand injury, this is the place to go; they actually have a world-class center for hand injuries there. So, to quickly move on, what we did, and I'll go through these steps, is initially we established some potential value propositions. What did we think artificial intelligence was going to help us with?
KEN CHAD: And then we used a specific methodology. It's called jobs to be done; you can Google it. It's a specific user experience methodology, and I'll talk a little bit more about that. We applied it in focus groups last year and in one-on-one interviews, and then, of course, the pandemic happened and there was a delay. Then we had some user familiarization with Yewno and did some second interviews.
KEN CHAD: So I'll move through that. Value propositions are important. So first off, what did we think this was going to do? Before we talked to users, you might say we set up a straw man. How did we think Yewno might help? And of course, the way we did this was to talk to the company itself. And I think the important thing there is that I find this idea of value propositions really helpful: it's something that clearly communicates the benefits, and of course it has to have value to the customers.
KEN CHAD: It's not just what you think is great and fantastic. If it's not great and fantastic for the user, it's not going to work as a true value proposition. And as I talk through this methodology, I'm going to flip through it pretty quickly. I'm happy to talk to people about it more. So I have various ways of approaching this little schema of defining value proposition. So saying to people, so what does this value proposition solve, who's it for, how do you deliver that value proposition?
KEN CHAD: And we've seen some screenshots showing how they do that. And is it distinctive? If it's not distinctive, it's not going to be such a strong value proposition. And what we established, broadly speaking, with this artificial intelligence approach to discovery is that the value proposition is: it's going to help you understand an area of research, and ensure the search uncovers relevant material with related terms and relevant variants.
KEN CHAD: So you saw paracetamol and ibuprofen in terms of drugs. The other is: I want to get new ideas. And this is one of the key benefits in searching across a very, very disparate corpus; interdisciplinary search is becoming more important. And: I want to hypothesize. I don't know if concepts are connected; I want to explore new issues. So these were what we established early on as what we thought it might help to do.
KEN CHAD: And then we applied the user experience methodology. And really, here, what we're looking at is: what are the outcomes it can address? How can it help? What are the gain creators, and what barriers does it overcome? And what specific jobs or problems? I use these words, jobs to be done and problems to be solved, interchangeably.
KEN CHAD: What can it really do? And here's a graphic way of how that works. On the left-hand side, you've got your product, which embodies the value propositions, the gain creators and the pain relievers. And how does it do that? Well, you have to define that. And then it solves a particular job or problem that the user has.
KEN CHAD: So what are they up to there? And this could be a group of students. And this one says, I just need to get my assignment done on time. And the other one says, yeah, I want to do that, but I really want to get a good grade. And as I'll come on to, these are some of the ways in which people will evaluate the solution. I think this is really good.
KEN CHAD: The jobs to be done methodology isn't without its controversy. It can be a bit reductive, about how to make a better chainsaw or something, so it does need a little bit of working with; I use it particularly in higher education. I like this quote, though: paradoxically, the literal voice of the customer doesn't translate into meaningful inputs. You have to interpret.
KEN CHAD: If you just ask people what they need, classically, the old cliche from Henry Ford is that they'll just ask for a faster horse, not a car. So it's a way of interrogating people to look at what outcomes they're looking for, and so on. It's not just asking people, so what do you want? It's working around the issues: the job, the problem that needs to be solved. Who actually needs to get that done?
KEN CHAD: What's the circumstance? Are they working at home all the time like they are now? Then we do look at the actual process. And we look at the outcomes, the gains. How does the user actually judge whether the system is working for them and what pain points to overcome? So there are some of the key aspects. And the way we apply this methodology, I find focus groups is a great place to start.
KEN CHAD: It's really good to get people together, and we can identify a number of problems quite quickly. So we ran two two-hour focus groups. We really did well and people really enjoyed it. They interacted with each other and we got some great results from the focus groups. And then here's a typical one we sort of alluded to before.
KEN CHAD: The "I've got my assignment to do." But what's the desired outcome? It's a feasible assignment for which I can get good information and a good grade, in this particular example. And of course, there are lots and lots of different jobs. So the constraint is finding something that actually is feasible; it can be overwhelming. So we're trying to unpick the problem and put it into these kinds of boxes, if you like.
KEN CHAD: I'm rather simplifying this at the moment, but I hope you get the idea of what we're trying to do. So those are the things we're looking at. And then, what I like best, I have to say, is the one-to-one interviews, and I have learned so much from all of these. So here are some examples: we had a clinician who wanted to devise a questionnaire to evaluate hand treatments, as I mentioned.
KEN CHAD: Derby is one of the best places nationally; if you get your hand sawn off, go to Derby and they can put it back on. So they wanted a questionnaire. Another was keeping up to date with a major health topic. Someone was doing research on how we can make sure that AI systems that are autonomous are accountable. I'm not going to go through each of these.
KEN CHAD: But what you see, what these are, is not the process: I want a journal article or I want a book. This is why they want to go through this discovery process and, of course, ultimately end up with some content. So this is what I mean by the jobs, the problems to solve. One of the nurses had a whole wound care plan, something I learnt about: how do you judge the state of wounds? All a bit colorful, I have to say.
KEN CHAD: And of course, there are ones that any university will be familiar with, like the course assignment. So here's the one we're dealing with: establishing a wound care plan. You can see the user experience. This is someone that's been at the NHS trust for over 20 years and worked her way up. She's currently doing a master's in health and leadership. What's the circumstance?
KEN CHAD: And you can imagine computer availability on a busy ward is very limited. PCs are locked down. So she needs to be able to solve the problem on her phone at work. So those are the kinds of things. And again, rather simplistically, I hope you get the idea that you need to describe the user and place them in a circumstance.
KEN CHAD: Because that's also relevant to how they need to get the problem done. And of course, there are lots and lots of these things. So we score them on the basis of how the user rates them in terms of importance and frequency. Is it frustrating? If it's a really important problem that's very frustrating, you know you're on to something that's really relevant and important to solve.
KEN CHAD: And then we go and analyze it. Why do you care? And here is what I think is good about jobs to be done: it looks at things in motivational terms. So it's their reputation; we want to do a good job. And it's interesting, because you can see how these kinds of motivational factors come into the equation. And what's the fundamental problem in the health service?
KEN CHAD: Of course, it's all about patient safety and care. But this was a dietitian, and he wanted to make people's lives easier. Not just to say what's the minimum you can have, but to try and find ways in which he could give them a more imaginative diet, quality of life. So this was from a dietitian. And again, how do they look at that? They want something that's easy and simple to apply.
KEN CHAD: It's visual. It gives some context. What are the barriers? Time barriers, ward commitments, just those general things. I'm not going to go through all of these. But I also liked: what opportunities exist for innovative solutions? These are the kinds of things where we can see the gaps.
KEN CHAD: An app that would be portable and easy to use, something for this wound care job, wound care at its best. So there's a whole bunch of questions. And it just takes time and effort to work through them with people. It looks as if you just put the information in a box, but the way you manage these interviews is something you really have to learn over time.
KEN CHAD: People love to talk. You've got to manage it and get what you need. So, what did we discover from all this? We made a report, published it and gave it to Health Education England at the beginning of the year. And we increased the overall understanding, which is what we set out to do, of the value of AI-based information discovery.
KEN CHAD: We tested and validated use cases. And importantly, we demonstrated Health Education England's and the NHS's understanding of, and capability to work with, innovative technology. So I'm going to mention that: just getting involved with this is really important. And we also provided a model for implementation. So we had clear outputs. We had an understanding of how a particular product works but, more importantly, of how an AI approach helped; we had a clear definition of the research and discovery needs that it met; and we had a toolkit for better deployment.
KEN CHAD: And what I'd say is that when we asked the interviewees, the people I interviewed, they all saw the potential in this approach. As I mentioned at the beginning, we're still early days with this, but sometimes it didn't meet their particular needs. So the whole point of jobs to be done is to figure out what can it do, what doesn't it do? And UHDB is University Hospitals of Derby and Burton.
KEN CHAD: All my piloteers were quite excited by the technology. And they liked that visual layout. So what did they say? I thought this was a great one: one of the students, I think it was from Katie's university, Nottingham, who was doing a placement at Derby, said at first it looks all confusing.
KEN CHAD: But then after a few moments: hang on a moment, I really like this because I can link topics together. I am a visual learner. And I think that's really important for us in this domain. We tend in libraries to be very text-based, and a lot of people are very visual. So that visual approach had a real appeal. Great for looking at broad topics, great for first-time topics, getting the overall landscape.
KEN CHAD: And one of the questions I asked at the end, once they'd had their period of trying it out, was: what would happen if we just stopped access? I'd fallen in love with it; I'd be really sad. And one was so motivated, they even took out an individual subscription. And a nurse said, I think it's going to be very helpful for my master's.
KEN CHAD: In terms of framing, she said, the methodologies and framing the subject. So it was good for an overview. But what we saw in that hands example is that when we tried to get down into precision, it wasn't so good. And people were happy with their existing solutions, which, I have to say, were typically Google and Google Scholar. But it would be great to do this exercise on library discovery services like Primo and EDS, because we'd find, I suspect, that they aren't as good as we might think.
KEN CHAD: So she was happy with the alternatives. And again, note those words: broad landscape. So we're getting the sense now that the key finding was that it was rated very highly for providing a general overview, for exploring a topic area. But where precision was really important, and you can imagine that for a lot of clinical people precision mattered, it didn't meet their needs as well.
KEN CHAD: And that was consistent across all the interviews, actually. So, coming to the end now: what should we do, given all that? Last week, hot off the press, CILIP, the library and information organization in the UK, published its report on the impact of artificial intelligence and machine learning on the information profession. So it's very much about the impact on people.
KEN CHAD: And I thought it was interesting that it said it's in knowledge discovery that AI will have the greatest impact. So that's great; it validates something we worked on. In terms of its recommendations, I thought: identify pathfinder organizations and people who can encapsulate these possibilities, promote knowledge sharing, and then explore the use of AI tools and share what they learn.
KEN CHAD: And I think this is really important because it's easy to focus on, oh, well, it didn't meet that problem. But what we now do need to do-- I feel, and [INAUDIBLE] here, I think is saying the same thing-- is to engage with this technology, both as organizations, as information professionals and so on.
KEN CHAD: So what I'm going to finish off with now is this: I'm going to stop sharing and start sharing again to give you a snapshot. I'm going to share the sound and show you a little video clip.
KEN CHAD: Peter, thanks for joining us. I would like you first to just tell us a little bit about what you do, to put this into context, and then we'll talk a little bit more about your experience of using the artificial intelligence service Yewno.
PETER: Hello. I'm a support worker in dietetics in the NHS. I've got a background in a degree in human biology, so that's sort of where I'm coming from. The way I would use Yewno is: I'd often get a question where I'd have to look up specific products and their ingredients and whether or not they're suitable for a lot of our patients. And sometimes we're doing some research into new medications that come onto the market, and the evidence behind those, and some nutritional supplements, but also products that patients find out in the community.
PETER: And they could have all sorts in. So it's a little bit of that, and trying to find some quite specific answers, that I used it for.
KEN CHAD: OK. Thanks, Peter. So first of all, tell us what your general impressions were when you were first confronted with the product. What did you think?
PETER: So we had some training, and it all made sense at the time of the training. But it's very new. It's very different to what we used before. So it definitely took some getting used to. The visual aspect of it, you can see the potential of it completely, because it allows you to visualize in your own mind. That's how your own mind works: having lots of different connections.
PETER: So you can definitely see the potential. The usability, my initial impression was that it was very difficult. I found it hard to get used to, I think again because it was so new. And that got better with use; it got slightly more intuitive as I went along. My initial impression as well was that the ease of saving documents was fantastic.
PETER: Again, in comparison to the usual products that we would use, it's definitely a step up on that front.
KEN CHAD: So those are the things that worked for you. But you spoke initially about just getting used to it. It does look very different, doesn't it, from earlier products. So did you find it helped you, or what were the things that didn't work for you, so to speak?
PETER: So I'd say the things that didn't work came down to the fact that, in my role, I'm mostly clinical based. So there might be a week where I wouldn't use it at all, because that's not what was required of me in my role; I'd be patient facing for that week, for example. And because it's so different, I'd find that I could lose a lot of the progress I'd made in making it intuitive.
PETER: And I don't feel like I used it enough to really get used to it. I'd say the other thing is that when you're researching something very specific, the whole visual side, giving you a broad view of the topics, which is fantastic, doesn't really apply as much. With traditional databases, perhaps, you can be more specific with your search terms. That's what I found.
KEN CHAD: So unlike a student or researcher, who will often spend a lot of time working on a project, you're a clinical person. You're on the front line with patients. You don't always have the time to go back and familiarize yourself with these tools.
PETER: That's right, yes.
KEN CHAD: Yeah, that's interesting, because we did find that the students found it easier, so to speak. So looking at that approach, as you say, it looks very different; it's using artificial intelligence to bring concepts together. What are your ideas? How do you see a future for this approach?
PETER: Yeah, I absolutely see a future. I think if I were a student, or if I were someone heavily involved in research, it would have been incredibly useful. I think some of the lines that are drawn, and the fact that you can slide in and out and get more or less specificity, are great. And I think something like the cross-subject ideas as well. If you're just initially coming new to a topic and trying to get a real broad view of it, and trying to sort of understand how it might be connected to other things that you already know, I think that's really great, like the visual aspect of it.
PETER: I think it really has potential for that. I think also, if you do a sort of literature search, it's a lot more intuitive than other programs I've used at saving those documents somewhere you can then go back to them, rather than having to sort of trawl through as much. And I also think, if I'd used it more, if I'd had the opportunity and more time to spend on research, I would have found it more and more useful.
PETER: Yeah. That's my vision of it.
KEN CHAD: That's really helpful, Peter. Thanks so much for your help here and your comments. Thanks very much. OK.
PETER: Ken, you had one question. I don't know if you want to get to it now, in context, or come back at the end. I can read it to you if you want to try it now, or we can take it at the end.
KEN CHAD: Yeah, I'm just conscious of time, so maybe let's take them at the end and get to Katie's. Yeah. Thanks.
KATIE FRASER: OK. Thanks, Ken. Yes, a brilliant approach. I'm just going to share my screen. So hopefully you've got my slides OK there. Yes, I'm going to talk a little bit about our experience at the University of Nottingham Libraries. It's a bit of a contrast to Ken's, which is very much going into depth with individual users.
KATIE FRASER: We didn't get the opportunity to do that, but we did get a broad picture and some feedback from across the academic community, which is really interesting. Just to set the context a little: the University of Nottingham is a Russell Group university in the UK. The Russell Group is the research-intensive institutions, essentially.
KATIE FRASER: And we're a large service. We've got eight physical libraries on our UK campuses, and the University of Nottingham also has campuses in China and Malaysia, each with their associated libraries. Back in 2018-- and I did work within libraries at that point, but not within the role I'm in now-- we launched something called the Integrated Scholarly Information project.
KATIE FRASER: And we were really thinking about a step change in how we delivered library discovery technology. There was a 1.8 million pound investment as part of the University of Nottingham's Digital Futures program. I won't tell you about all the strands, but strand 3 was digital content and discovery. And within that, we upgraded the interface of our Primo discovery system, which is our more traditional discovery system.
KATIE FRASER: We implemented the Library Access plugin from Lean Library, which really enhanced access to resources off campus for our users. And we got an initial 12-month, then 18-month, subscription to the next generation discovery tool, Yewno Discover. And we really wanted to learn from that and see whether or not it was going to be useful for us and what we would do going forward.
KATIE FRASER: So you can probably see a slightly smaller picture of how it surfaces within our Primo implementation. We started with Yewno Discover in March 2020, and that date might ring a bell, because that's when the pandemic hit. And we thought quite carefully about what to do at that point.
KATIE FRASER: And we were really conscious that our students and academics couldn't access our physical libraries at that point. There was no way to browse or find resources in kind of a serendipitous manner, which we really felt the loss of. And we thought maybe there was a job to be done in terms of allowing those experiences for staff and students. So we launched Yewno Discover.
KATIE FRASER: And by that, I mean we just told people we had it, by all our communication means. It wasn't until February this year that we also implemented a widget within our core discovery tool, NUsearch. What we did at that point was, when you did a search within NUsearch, as you can see, the Yewno Discover knowledge map surfaced. It picks out interesting strands, those related topics that Yewno has found, trying to lure you into the system, and people would click through from there.
KATIE FRASER: And we had a few hundred accesses a month when we were just telling people about it. As soon as we put it within our main discovery system, we had 1,000-plus sessions a month of people going to look at it. So it clearly expanded the audience, and, yeah, our comms hadn't quite achieved the same as showing it to people in the [INAUDIBLE] discovery tool.
KATIE FRASER: We had a feedback form associated with the implementation, and this is the feedback we got. I've ordered the points in terms of how often we got each response. And, unfortunately, the main response was people saying that the widget was in the way, right in with the search results. In discussion with Yewno, that was the default implementation within Primo.
KATIE FRASER: We could have stuck off to the side where all the filters and things were if it has the same authentic results, [INAUDIBLE] and that might be a good idea. Interestingly, we also got quite a lot of feedback from people. And I think wildly informed by the expectations. And those expectations were based on their experience of using our usual discovery tool. So they'd still say X is missing or x is wrong. We have [INAUDIBLE] specialists within our library, and when something appears differently to expectations or if resources in there, we can fill in that gap quite easily with [INAUDIBLE] discovery tool.
KATIE FRASER: With Yewno, there wasn't the ability to do that. It works off that corpus, as Ken was explaining, and applies machine learning to it. If something isn't within the corpus, I'm sure they can take on board suggestions for things to include, but we couldn't include it dynamically. The definitions that came up were interestingly controversial too.
KATIE FRASER: So when people search for term disambiguating, the definition associated with the term comes from Wikipedia. And I'm sure that's being worked in academic libraries, now that Wikipedia is in a debated topic. Again, conversations with Yewno, if you need to apply a definition to a term Wikipedia is just the only resource that has the coverage you need for a system like this, but we did have some controversy about that.
KATIE FRASER: But we also had a number of people come back and say, this is exactly what we needed; we've been looking for something to change the way that we search, and this is it. So the different proportions were really interesting. There was this small, vocal community of people who really loved it, and a bunch of people who, you could argue, just didn't get it.
KATIE FRASER: So there were a few case studies here. Ken's been talking about a kind of scientific context; [INAUDIBLE] folks in the arts and social sciences here. You've got a picture there of our Arts and Social Sciences library on campus. That concept of definitions was really debated in the social sciences. And we found out that a lot of our courses give students an essay where they have to define their terms quite early on.
KATIE FRASER: Defining those terms, a significant proportion of their marks. So it became rather controversial that a definition was given to them straight away, and that was the law to go and use that Wikipedia definition straight away. So that was a really interesting case study of how it kind of landed in a slightly awkward way.
KATIE FRASER: It was our Interdisciplinary Arts course where it really received the best feedback. And they said the students were beyond disciplinary boundaries. These could be really challenging to them. And going to something like subject database of the [? compounds ?] that we're working within the confines of the discipline. Yewno told students to look and arguably think beyond their discipline and that concept of serendipity in finding things you wouldn't expect was really powerful for them.
KATIE FRASER: And so that was the area where they said, we want you to come in and it's students we think it's going to improve the quality of legislation. And last but not least, we had feedback from law, saying the key problem for them in terms of showing Yewno's undergraduates was the interface doesn't distinguish between jurisdictions. So UK and US law jumbled them together.
KATIE FRASER: Which is fantastic if you're researching a concept and you know what you're doing as an academic. But as a student, when you don't have metadata, kind of saying, hey, wait a minute, this is a different context, you just got that mind map. And then the documents underneath there. They felt it was a bit of risks to show students and they might get a bit confused.
KATIE FRASER: So I guess a lot of people might be thinking this, well, regardless of what will you show your students, good information literacy skills, a key to approaching the tool, they've got to understand what they're looking at. No tool is unbiased, no tool finds the information without gaps. I guess what we learn is that Yewno isn't the best approach code.
KATIE FRASER: So you can't come to the interface and just dive in and start working straight away. You need the context, you need the explanation of what it's doing in order to kind of understand the implications of that. And so maybe it would have been that excited launched by our information literacy teaching. We had hoped to do that originally and we weren't able to because all our teaching was getting moved online really fast.
KATIE FRASER: So there was no opportunity to integrate, everything had to be really targeted. I guess in the context of negative feedback was inevitable. So people just didn't get what they were looking at. And we did put it somewhere where people are going to stumble across it.
KATIE FRASER: But I think what the most interesting thing to me was this concept of disciplinary best into this interdisciplinary study. That when people are looking for terms within a discipline, they have these kind of assumptions about how the system work and Yewno is violating them all over the place. But when we were looking at the disciplinary context, actually, this was an amazing world of opportunity and I love it disciplinary works.
KATIE FRASER: So I can see Yewno really taking off in that context. And somebody who manages on metadata team is in the library. I was also really pleased that people are asking us to change things and showing that they did value even if they don't always understand the day to day work of the work we were doing and kind of improving our discovery tool in the next day.
KATIE FRASER: So challenges, COVID-19 was obviously a big one. Managing two discovery tools split our attention. I've got to say I wasn't expecting the strength of emotion associated with Wikipedia challenge. And we're doing a lot of work around digital accessibility at the moment. The unique interface is a bit of a challenge from accessibility perspective.
KATIE FRASER: On the one hand, it supports different kinds of learning really well but on the other hand, can navigate around it with a screen reader. [INAUDIBLE] tough. And there was little pushback on the approach, generally. Nobody really challenged that. But I think we uncovered some assumptions about how the two would work that were violated because people coming from more traditional discovery tools.
KATIE FRASER: Successes. I mean, just in terms of our understanding within the Library of how AI could help in the future, that was a big success. We improved our grasp on how an AI compliments the current methods of generating curating metadata. So this machine learning was drawing connections between things that we wouldn't be able to do as metadata experts.
KATIE FRASER: And we saw the value of AI making new connections between the topics. I don't have a lot of evidence to back up this last one, because we didn't get that candid. But I'm sure our users found resources they would never found without Yewno discover. It's just that kind of tool. So this is where Ken and I are going to reflect a little bit on what we've learned about AI and discovery.
KATIE FRASER: And the first question we were going to reflect on was what we think are Yewno's strengths. So, Ken do you want to kick off with that one. [INAUDIBLE]
KEN CHAD: Thanks, Katie. I think it's really what you talked about in terms of the interdisciplinary stuff. And what I'd say is that if I'd done that kind of user experience exercise with any discovery tool, we would have encountered issues with them all. I think it's important to say that. So I think what Yewno does, clearly-- as Peter was saying in the interview-- is give you those links, give you an ability to explore a subject in a way that's not so easy with other tools.
KEN CHAD: So I think that's where I place it. I think the visual interface is good, it's not unique. I mean there are other metadata based visual interfaces that people have done, which I don't think that's necessarily, but I think AI does make that approach, notwithstanding your accessibility issues, a really interesting way into the information world.
KATIE FRASER: I really agree. I think possibly what you can miss when you come to Yewno as well is the fact that the map is generated by the artificial intelligence. That makes it completely different from a knowledge map generated from metadata, because it means that you can ask the system to try and generate inferences, to link concepts it hasn't linked together already, which is just mind-boggling when you see it happen.
KATIE FRASER: And so what do you think the library community needs to learn from tools like Yewno?
KEN CHAD: I think this is a really good question. And I was really pleased that the [? Sillett ?] report published last week addressed those issues for the library community. And I'd really say: let's get involved. We've seen what it can do, and we've seen some of the things it isn't addressing. So I think the key message is: get involved. And I hope what we're doing today shares that information. That's the case for me.
KATIE FRASER: I think that's important; I'd agree. And I think we need to learn more about, and become more comfortable with, black box tools within the library community: those things where we can see what inputs go in and what outputs come out, but where we don't quite know how the algorithms in the middle are working. On the one hand, we should be critical and reflective about that, and conscious of the biases that can perhaps go unnoticed.
KATIE FRASER: But on the other hand, we know that people use tools like Google Scholar, then work in exactly that way to find things quickly and easily, and it really solves useless problems. So I think it's something we need to, it's a tool that we need to have in our toolbox. I don't think we could ignore it and hope it'll go away. And the last question is where do you think systems like Yewno in the future?
KEN CHAD: Well, I think it links to your last question. I see a discovery ecosystem where there isn't a single discovery tool; there's a set of options and tools to meet people's different needs. And I think there's an absolute opportunity for an artificially intelligent tool to be more specific and address some of those issues.
KEN CHAD: We could have domain-specific tools in law or in science. I could see that happening. The compliment this more general approach. And I think there's interesting opportunities to work with institutional repositories and mine that kind of data as well. So I think it's getting all kinds of data in and that kind of thing. So that's how I'd see it.
KATIE FRASER: I think [INAUDIBLE] is definitely coming, and I'm really keen to explore how we use our local metadata specialists to promote our own unique scholarly collections, and how automating some of the more routine metadata work allows us to focus more on the work that relates specifically to an institution. So in the future, I hope that systems like Yewno don't just try and act independently, and that the metadata specialist has a role in informing them.
KATIE FRASER: For those who have questions, yeah, should we go to the question a bit?
ROBERT MCDONALD: That sounds great. This one is specifically about focus groups versus one-on-one sessions: what do you see as the pitfalls of focus groups, and how do you overcome them in terms of diversity or commonalities among participants?
KEN CHAD: I think that's a really good question. Excellent. Yeah. I would say that some of those issues about focus groups are also one of the real positives. Peter, whom I interviewed-- you saw him at the end-- one of the things he said was that being in the focus group was fantastic. He was a dietitian, and we had people from all sorts of walks of life in the NHS.
KEN CHAD: And he said it's absolutely fascinating listening to people and say, oh, you've got that problem too, you're trying to address that issue too. So I take what you mean. But I think that's why we do two things. The focus groups, it's a really efficient way to get perhaps 15 people in a room and pull out from those 15 people a quick list of the kinds of problems that people are solving.
KEN CHAD: So it's very efficient. Two two-hour sessions gave us an enormous pot of problems if you like to address. And I thought the interaction was good. And what I'd say and I'm not being racist, perhaps being a bit better but it's true that you need to manage it, facilitating a focus group is really quite hard. And I find you have to control it, but you have to let people go off on their own way too and then pull them back.
KEN CHAD: So there's a whole thing about managing focus groups. The one on one interviews are designed to complement that and be another approach. And I find them just so, so useful. And again, I think you're able to drill down, you're able to deal with the process in some detail, what people think of the outcomes they want.
KEN CHAD: So I think the two go hand in hand, which is why we took that approach. Focus groups are quick, efficient, really get people together, individuals, you really drill down into the detail. I would have loved to have done 50 more of them, to be honest. Excuse me.
KATIE FRASER: I think one of the benefits of talking to Ken and kind of working across the two studies was that we used different methods but got very similar responses. In many ways, that kind of triangulation is what's really important when you're using methods like that.
KEN CHAD: Yeah, I absolutely agree.
PETER: The last part of [INAUDIBLE] question, kind of on that same thing, was: would you say that focus groups are better for getting an idea of how significant the problem might be to a wider audience, rather than just the one-on-ones?
KEN CHAD: Yeah. The one-on-ones are about a specific problem for a specific user in a specific circumstance, so they are quite narrow. Whereas with the focus groups-- I think [INAUDIBLE] got it right-- you can go across a broad set of people. So you use them together, I'd say.
KEN CHAD: And I think they're complementary. And I think as I've been doing this for years now-- I'm an enthusiast, I find it really interesting learning about this-- you have to put some energy in the room and combine that with a level of control. You need to get those informations, you need to know what are the pain points, we need to know what are the ways in which you evaluate success.
KEN CHAD: But you have to let people-- you have to give them a bit of space to just talk. But I think the two together work well.
PETER: Thanks, Ken. And I think this question is for everybody. She understands the benefits of the kind of AI you're talking about with discovery, but is wondering if they're outweighed by the costs at this point, not just financial, but also training, staff time for transition, and guidance. And do you anticipate those costs declining over time?
KEN CHAD: I always anticipate costs declining over time, I suppose. And of course, the exception-- [INTERPOSING VOICES]
PETER: We can do something about price.
KEN CHAD: Yeah, someone called me an insufferable optimist once, but there we go. I also think, as I said, we need to get involved. It is an evolving technology, so let's shape it. As Katie said, there's a whole bunch of issues there; well, let's get stuck in and address them, including the cost. And I also think, as you saw with that interview with Peter at the end, what's intuitive is probably what we're using already, right?
KEN CHAD: It's not that it's inherently intuitive it's what we're used to. So I think as we get used to other things, they become intuitive. So, yeah.
KATIE FRASER: It's professional development at this point, isn't it? We need to understand [? this. ?] And I think there was something in the [? Sillet ?] report about getting your hands dirty with these tools. And I think that's exactly what you need to do: get hands-on with them and understand them. That said, obviously, we've got a challenge coming up, as we come to the end of our subscription, about whether we can afford to keep Yewno with the kind of scale of [? usage ?] that we've got.
KATIE FRASER: But I think even if we have to let it go, we'll be back in two years, three years, five years' time.
ROBERT MCDONALD: Thank you both. I don't see any other questions right now, and I think we've covered them all. And we are getting down to the end of the hour. So I just wanted to thank you both for your wonderful presentations, and to thank all of our attendees-- we had about 36, 37 at the most there-- for all the great questions everybody put forward on today's topic. Thanks, everybody.
ROBERT MCDONALD: I did want to point out that the recording of this will be posted soon after, and it'll be up for a few more months for people who want to come back and review it, or see it for the first time.
KEN CHAD: And we'll upload the presentation, and that will also be on my Ken Chad Consulting website too. So if anyone wants to go back and read it, it'll be there.
ROBERT MCDONALD: All right, thanks, everyone.
KEN CHAD: Bye bye.