Visions of the Future-NISO Plus
https://asa1cadmoremedia.blob.core.windows.net/asset-ea53bab2-4f25-4e53-89d9-bb314316b6e4/Visions of the Future-NISO Plus.mp4
JASON GRIFFEY: Hi, everyone. Welcome to NISO Plus 2022. My name is Jason Griffey. I am the director of strategic initiatives here at NISO as well as the chair of the NISO Plus conference. This session is entitled Visions of the Future, and it began as a lightning talk session. But over the planning process for the conference, it became apparent to us that several of the proposed sessions were really interesting, having to do with a variety of different future-facing services in the information industry.
JASON GRIFFEY: And so we decided to package those together into several short presentations. Thanks so much for being a part of the session. After the video is over, we will have a discussion and conversation about the various projects that you see here. So stick around. Join us in the Zoom afterwards, and we'll see you there.
VINT CERF: Frode, it's a real pleasure always to have a chat with you, especially about your ideas for very active interaction with textual material. One of the most recent additions to your portfolio of products is Visual-Meta. And here, the aim is to make documents more self-aware. Perhaps you can give us a little more detail about how you think about that.
FRODE HEGLAND: Vint, I'd be happy to. Absolutely. So Visual-Meta addresses the problem that documents, particularly published documents, such as academic PDFs, lack access to digital affordances beyond the basic web link. So our aim with Visual-Meta is to give such documents metadata to enable rich interaction, flexible views, and easy citing in a robust way. And the approach is to write stuff at the back of a document.
FRODE HEGLAND: It sounds simple and ridiculous, and it is. So in a normal paper book, one of the first few pages has information about the publisher, the title, and all of that stuff. And that's metadata. It's the metadata you need to cite the document. So Visual-Meta-- well, before we get to Visual-Meta, PDFs currently have the ability to carry metadata, but in practice they just don't.
FRODE HEGLAND: It's complicated to put it in there. So what we've done is take the idea of having the metadata at the front of the book and move it to the back of the document. This is the proceedings of ACM Hypertext from last year. At the end of each document, there is Visual-Meta. So the formatting is inspired by BibTeX. Not everybody knows what BibTeX is, but it's an academic format, part of LaTeX.
FRODE HEGLAND: This bit is actual BibTeX. And to highlight how simple it is, because there's a lot of stuff on the screen: author equals, in curly brackets, the name of the author. Title equals, in curly brackets, the title, and so on. That's basically all it is. It's based on wrappers. So you have the start and end tags for Visual-Meta. Within that, you have the header, which basically just says what version we're using.
FRODE HEGLAND: And then the self-citation bit, that's what we call the actual BibTeX, because that is what someone would use to cite this document. Importantly, there is an introduction in plain text saying what this is. We have grand goals of this being read 200, 300, 400, 500 years in the future. So it says the stuff below is such and such and can be used in this way.
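As a rough illustration of the description above (the wrapper tags, entry key, and field names here are assumptions for this sketch, not the published Visual-Meta specification), such an appendix might look like:

```bibtex
@{visual-meta-start}
@{visual-meta-header-start}
version = {1.0},
generator = {Author}
@{visual-meta-header-end}
@article{hegland2022example,
  author = {Hegland, Frode},
  title = {An Example Document},
  year = {2022}
}
@{visual-meta-end}
```

The point is that everything is plain, human-readable text on the last page of the PDF, so no special software is needed to recover it.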
FRODE HEGLAND: ACM Hypertext uses the mandatory Visual-Meta. There is also optional Visual-Meta to have a full set. This includes document type. And Vint, through one of your connections at Wikimedia, the importance of document type was highlighted: what exact kind of document is this? It can have many different meanings. Author details: maybe the name is Chinese or Arabic. Maybe the affiliations and contact information can be included.
FRODE HEGLAND: Stuff you would normally have in an academic document, you could put in a parsable form here. References are crucial. In a readable way, it says everything this document cites. Document headings mean that you have the structure of the document. You can have a glossary and endnotes. So let's just visually collapse that, and then it gets interesting.
FRODE HEGLAND: This appendix at the end of the document can also have a pre-appendix, which has content that helps verify the document if it's a very important document. It can also have three types of appendices afterwards. One is errata, which is like an old-fashioned piece of paper coming in saying there are errors here and there, a useful part of the history of the document. Then you have another kind of history, which is: who has read the document?
FRODE HEGLAND: And what have they done with it? You could imagine that could be part of, let's say, an intelligence workflow: who has approved certain parts. And finally, there's information about augmented views, which we will get back to. Current implementation: copy and paste a citation. You just copy text from a PDF and paste it into a word processor, and it's pasted as a citation, not just plain text.
FRODE HEGLAND: The reason that works is this. What is copied across is the full Visual-Meta with all the citation information. Plus, there's a field called quote, which has the selected text. It also has the addressing and ID and other things, so you can know where in the document it was. The workflow we've implemented is: you author in Author, export to a PDF with Visual-Meta, and read it in Reader.
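The copy-as-citation behavior described here can be sketched in a few lines. This is a toy illustration under stated assumptions: the copied payload is a flat list of `key = {value}` pairs including a `quote` field, and the output citation format is invented for the example; it is not the actual Reader implementation.

```python
import re

def parse_bibtex_fields(payload: str) -> dict:
    """Pull simple `key = {value}` pairs out of a BibTeX-like payload.

    Toy parser for flat fields only; real BibTeX allows nested braces,
    quoting, and string concatenation that this ignores.
    """
    return {k.strip().lower(): v.strip()
            for k, v in re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", payload)}

def paste_as_citation(payload: str) -> str:
    """Mimic 'paste as citation': the quoted text followed by a short cite."""
    fields = parse_bibtex_fields(payload)
    return f'"{fields["quote"]}" ({fields["author"]}, {fields["year"]}, {fields["title"]})'

# A hypothetical clipboard payload: normal BibTeX fields plus the quote field.
clip = """
author = {Hegland, Frode},
title = {Visual-Meta},
year = {2021},
quote = {metadata travels with the document}
"""
print(paste_as_citation(clip))
```

Because the selected text rides along with the full citation metadata, the receiving application needs no lookup step: everything required to cite is already in the paste.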
FRODE HEGLAND: This is our demonstration workflow of how this can work together. Though, of course, the system is open and used by others. We just looked at copy as citation. We'll now briefly look at clicking on citation and endnotes, fold and find in Reader. So here's a PDF. There's a citation. Click on it to see all the information and the endnotes.
FRODE HEGLAND: We fold the document into a table of contents. You can jump around, and you can select text to see all the occurrences of that text in the document. These are, quote unquote, "rich interactions" that cannot be done in an ordinary PDF. So let's talk about interactive glossary with concept mapping. So you can define any text. I like to define Hamilton.
FRODE HEGLAND: It's simple. Then we'll go into a map view, where I will also write Hamilton. And we've now clicked on my name. There's a line to Hamilton because, in my definition, it says that I am a fan of Hamilton. That's the only reason there is a line. If I click on Hamilton, there is a thin line to indicate that something else refers to it.
FRODE HEGLAND: Here, you see my definition has the phrase "a fan of Hamilton." The reason we do it this way, where you click to see lines, is that if you select everything and have lots of lines, it becomes really, really messy. We'll export that to a PDF. And now, these definitions are turned into glossary terms, simply because glossary terms are what we call definitions when we read. So if you now select text and do the find all occurrences, automatically, for free, if it has a glossary definition, it'll show at the top of the screen.
FRODE HEGLAND: And other definitions will be bold. So you can click around and follow the connections in that way. And now, we're down to the benefits. Vint, do you want to talk about some of these benefits? Or do you want--
VINT CERF: I'll look at them. Well, let's just pause for a moment, and let's talk a little bit about the implications of what you've been able to do so far. The first observation I would make is that by making this simply text at the end of the PDF, you preserve its utility. And over long periods of time, in theory, it could be printed physically. It could be scanned and character recognized, and there are a variety of things.
VINT CERF: So we're not trapped into a particular computing representation of the material because it's in this fungible text form. So that gives us some potential longevity. The second thing is that you slid over the URL references, but I want to emphasize something about those. They are fragile. And if someone's domain name is no longer registered, then a URL that contains that domain name may not resolve, in which case, the reference is not useful.
VINT CERF: So the reference information that you incorporated into Visual-Meta is a much richer and probably a more reliable and resilient form of reference. So that's an important thing. The third thing I would observe is that you've designed this to be extensible. And that, I think, is extremely important because it anticipates that there will be other reasons for and classes of document needing references.
VINT CERF: Since you had document type in as a component, it crossed my mind as you were putting that up that, oh, that's wonderful, because now documents that aren't necessarily simple text could be referenced there. They could be cited. For example, it could be a program. It could be a reference to a virtual reality space, which we might chat a little bit about later. So the extensibility of this design is also extremely important, and its resilience as well.
FRODE HEGLAND: Thank you, Vint. That's very relevant to exactly this point. In terms of web references and so on, the point is so strong, and it's not very well addressed here. One of our goals is exactly that. If you're reading a document and you are citing another document, if you have that document already, you should be able to open it with a click, not go to a download site, for instance. Absolutely.
FRODE HEGLAND: So to go through these interaction benefits: you saw presented advanced interactions, folding, and all of that good stuff; augmented citing through copy and paste, which is not only easy, it's also robust, because there's no typing or human error being introduced. And then, Vint, there's your term, computational text. We can have things like maths and other things that can be manipulated logically on the page.
FRODE HEGLAND: And surfacing data: items can go into this, let's say, a table or an image or a graph, and the variables in that can be encoded in Visual-Meta at the end of the document. So it's no longer just a picture in there. So you could say, out of these million documents, I only want to see the ones that have an xy-axis with certain variables.
FRODE HEGLAND: And then you talked about extensible. Absolutely anybody can add anything, as long as they say this is this. That's simple. But it's also extensible media. This does work on .doc and other types of documents. We just don't have the reading software right now. But recently, it's also been implemented as a WordPress plug-in.
FRODE HEGLAND: If you go to journal.global and select text, a little dialog comes up, and you can choose copy BibTeX. One of the key things there is that change is expensive, but change is necessary. So we're trying to change as little as possible. So again, we're staying with normal BibTeX. We're just adding this field that you can see at the bottom of the screen now, the quote field.
FRODE HEGLAND: So what's copied is the full BibTeX plus the selected text. And then the thing that I'm very excited about is how this can help with VR. So the idea is very much: if you go into a workroom now, you can easily bring your laptop or your computer screen, and that's quite simple. But what you cannot do is take what's on your screen and lift it into the space to share it. You can share it on a flat panel only.
FRODE HEGLAND: But imagine a richly multidimensional knowledge graph being shared and manipulated. That can be done. Already now, my group is experimenting with this using Visual-Meta, because we know the headings, and we know a lot of the structure of the document. But the neat thing that you saw earlier, in this extra appendix, the externals, we call them, is the augmented views.
FRODE HEGLAND: So imagine you put everything everywhere, and when you're done, the location of every item is recorded, all the attributes, the position, the orientation, color, anything, into a new Visual-Meta at the back of the document. So when you then open it again in a VR space, it will be put back where it was. And these pages, by your preference, can either be deleted or they can keep building.
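The recording of placements described here can be sketched minimally. The wrapper tags and field names (`position`, `orientation`) below are invented assumptions, purely to illustrate the idea of appending VR view state to a document as plain text:

```python
def augmented_view_entry(items) -> str:
    """Serialize VR item placements into a BibTeX-like plain-text block.

    Hypothetical format: each item becomes one `item = {...}` line, so the
    block stays human-readable and survives printing and OCR like the rest
    of a Visual-Meta appendix.
    """
    lines = ["@{augmented-views-start}"]
    for it in items:
        lines.append(
            f"item = {{name = {it['name']}; "
            f"position = {','.join(map(str, it['position']))}; "
            f"orientation = {','.join(map(str, it['orientation']))}}}"
        )
    lines.append("@{augmented-views-end}")
    return "\n".join(lines)

# One illustrative placement: x, y, z in meters and rotation in degrees.
placements = [
    {"name": "heading-map", "position": (0.5, 1.2, -2.0), "orientation": (0, 90, 0)},
]
print(augmented_view_entry(placements))
```

On reopening, a viewer would parse the same block and restore each item to its recorded position, which is how the document can "remember" how it was laid out in space.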
FRODE HEGLAND: So over time, you have a record of how this document has been viewed. It is really, really important for me that we own our own data, and I know, Vint, that is a very big thing for you too. And right now, my team, we use the Oculus headset. We're not big fans of Meta. That's for sure. Privacy is an issue, but we know that in about a year, Apple will release its system.
FRODE HEGLAND: And suddenly because Apple is good with consumer products, everybody within the next few years will have their VR headsets. And Apple is no angel either. All these companies will try to own the whole space. So I think it is up to us as a community to make sure we can own the data that goes in and out of VR.
VINT CERF: There's another point to be made here apart from owning data, and that is interoperability of the various systems. And simplicity is our friend here. Specificity is our friend. And going to the trouble of validating interoperability, and somehow encouraging that among the producers of these products, is very important. It is easy to fall into a trap where you want to create a walled garden, and your equipment only works with your designs.
VINT CERF: But if we're really talking about information space, it's very important that all the avenues that should get there are-- essentially, I won't say they're equal necessarily, but at least they're interoperable. And that is not going to be easy to achieve. We've achieved it almost by accident with electronic mail for the most part, but it never worked with instant messaging.
VINT CERF: It turned out to be walled garden products, and the operators of those applications did not agree to interconnect in an actionable way. So I think we have a lot of work to do to persuade the makers of the products and the servers of the products to conclude that interoperability among their various systems is more valuable than isolation.
FRODE HEGLAND: And as you already said, this approach here is completely legacy safe. It doesn't add any data to a PDF other than text. And it can be printed, scanned, and OCR'd, and nothing is lost. We believe we have delivered on our aim of enabling rich interactions, flexible views, and easy citing in a robust way. And we are now looking to expand who uses this system and to build it with the community, completely open, so it can go where actual users want it to go.
FRODE HEGLAND: So thank you guys for listening.
ANA VAN GULICK: Hello, I'm Ana Van Gulick. I'm the government and funder lead at Figshare, and today, I'm going to give you a bit of a summary about the results of our state of open data survey from 2021. This is just a glimpse at the survey, so I do encourage you to go read the full report. Find the results, the expert essays that we've included in the report, and really dig into it a bit more.
ANA VAN GULICK: But I hope this will get you interested in the initial results. We've been conducting the state of open data in collaboration with Springer Nature for six years now. Over that period, we've had more than 21,000 respondents from 192 countries. And this has provided us with a great way to take a sustained look at the state of open data, data sharing, open science, and data management during that period.
ANA VAN GULICK: Our 2021 survey was conducted in the summer of 2021, and we had nearly 4,500 responses that were analyzed as part of the results. You can see that our respondents came from all over the globe, with Europe, Asia, and North America represented most heavily, but also from the southern hemisphere. The fields of interest from our respondents included the biomedical sciences as well as many other scientific fields like the applied sciences and physical and earth sciences, as well as the humanities and social sciences and other fields.
ANA VAN GULICK: Our inferred career stage, based on years since first peer-reviewed publication, indicates that a majority of respondents were late-career researchers. However, nearly 30% were early-career researchers. It's worth noting, with a survey like this about open data, that our survey population is likely to skew towards those already interested in open data, practicing data sharing, and being advocates of open science.
ANA VAN GULICK: However, this still gives us the best glimpse we can at a large population and what's happening with these practices. An important thing to note in the 2021 survey results is, of course, the impact of the COVID-19 pandemic. We've truly seen the importance of open data and open science and having scientific results shared quickly. We've seen that having open data be shared across countries, across regions has really proved important.
ANA VAN GULICK: And we've seen scientific results that haven't even been published yet, in preprints and open data sets, reaching the mainstream media. This was reflected in our survey results: open data had reached new importance. So about a third of our respondents indicated that they had reused their own or someone else's openly accessible data more during the pandemic than before. This could also be because researchers were not able to produce new data as much.
ANA VAN GULICK: So it emphasizes the importance of reusing data sets collected by others. So what I'll do today is walk you through a few key takeaways of our survey. This is just a snapshot. So please do go dig in more. The first takeaway I want to talk about is that surprisingly, given the current global pandemic, there was actually more concern about sharing data than we had ever seen reported previously.
ANA VAN GULICK: The top concern that researchers reported was the misuse of data: how data might be applied, taken out of context, or reused in ways that they did not originally intend. And this could actually be driven by the pandemic and by that quick adoption of results by many different groups and by the media. Researchers are also still concerned about getting credit and about data licensing.
ANA VAN GULICK: Well, another way we can look at these results is to look at them longitudinally because we frequently ask the same questions in our survey year over year so that we can compare the results. So here, we can look at concerns with data sharing over the last four years. So what's going up as a concern? So beyond the concern about data misuse, we also see concerns about the rising costs or just the cost in general of data sharing.
ANA VAN GULICK: I think there's recognition that data sharing does have a real cost, whether that's expert curation or data storage. Data storage has become an issue as data sets themselves have become larger due to the technologies that we have to produce data and analyze data as well as the importance of AI, machine learning, and the ability to combine and mine data sets for results. And it's become recognized that if you need to store a 20-terabyte data set forever or for many, many years, there will be a real cost to just even that data storage itself.
ANA VAN GULICK: And so that's something we need to come together as a community to solve. We still see that researchers are concerned about the incentives for data sharing, whether the effort is worth the reward. Are they going to get credit for doing this? Will they be acknowledged by their funders? Will they be rewarded by their institutions for taking the time to share their data?
ANA VAN GULICK: And there also continue to be concerns about copyright and licensing as well as about sharing sensitive data. Copyright and licensing are always thorny issues to wrap your head around, especially if data sharing is not something that you work on every day. And this is where training may come into play. So as a community, how can we better train researchers across the graduate training cycle, across the faculty cycle?
ANA VAN GULICK: How can we build data management best practices and data sharing knowledge into those curricula so that training isn't being offered just at the time of data sharing but really being built into larger research practices? What's going down, though, is that researchers have fewer concerns about selecting a repository, which is great. The repositories are there.
ANA VAN GULICK: The experts are there, and they know where to share their data. A second key takeaway, which is great news, is that there is now, more than ever, familiarity and compliance with the FAIR data principles, to make data findable, accessible, interoperable, and reusable. These FAIR principles were published more than six years ago. And I think this result speaks to how FAIR has been a great way to do outreach to the research community to explain that simply making data open does not make it reusable, and that one also needs to apply best practices to make it discoverable and reusable both to other researchers and to machines.
ANA VAN GULICK: A third takeaway is that repositories, publishers, and institutional libraries have a key role to play in helping make data openly available. When we asked researchers who they would turn to for help, nearly a third said repositories, a third said publishers, and a third said their institutional libraries. So there's no need for these groups to compete about who gets to help researchers the most.
ANA VAN GULICK: This is something we can all help address together. We all have the same goals of making more data open and making data more FAIR. And how can we best support this with our infrastructure and our training practices? Now, I'll highlight a few key takeaways for a couple of different segments. So first, for institutions: we've just seen that researchers are going to rely upon you for guidance and that they would specifically turn to institutional libraries, which is great to see, because I know so many libraries have spent time, money, and effort building up expertise in data management and data sharing over the past 10 years.
ANA VAN GULICK: And the fact that researchers are now turning to you as experts is really lovely to see, and also turning to your resources, so turning to institutional repositories. Nearly half of our respondents said they had shared data in an institutional repository. 58% of our respondents, on the other hand, did say that they would like greater guidance from their institution in how to comply with data sharing policies.
ANA VAN GULICK: And this is again where we come back to offering training and support over a broad period of time and building that into practices so that librarians are not inundated when researchers are trying to get help at the last minute, when a data management plan is due, or when a data set needs to be shared for a journal publication, but that this guidance can be provided across a long period of time. So a few key takeaways for publishers: researchers were quite motivated to share their data if there was a journal or publisher requirement, and they had reused data from publications.
ANA VAN GULICK: And they thought it was quite important that this data was actually available from a publicly available repository. So they weren't interested in finding broken links to data, or statements that said data available upon reasonable request, but wanted to be able to go to a data set DOI in a trusted repository and easily find that open access data. And publishers have a great amount of power here, I think, to help continue the growth of open data.
ANA VAN GULICK: Researchers are really receptive to this mandate. The other mandates come from funders and government agencies. So our survey respondents were really favorable, actually, to funders requiring data sharing as part of awards, even withholding funding when these requirements are not met, and even more favorable (73%) towards national mandates for making research data open. However, once again, they would like more guidance on how to comply with these policies.
ANA VAN GULICK: How do they build these practices into their workflows? And this is where we can all work together as a community on those efforts. So that's it for me today. I do encourage you to go read the full state of open data report. It's available freely on Figshare as well as the raw data. Do check out some of those expert essays and opinions that have been included in the report.
ANA VAN GULICK: There are some great ones on the role of data curation, on open source collaboration, on data in the life sciences, and even on data support from publishers specifically. So thanks very much, and I look forward to talking to you during the session.
JENNIFER KEMP: Hello, I'm Jennifer Kemp from Crossref, and I'm happy to have the chance to talk about building a more connected scholarly community. Over the past several years at Crossref, we've adjusted our focus a bit from talking about DOIs and persistent identifiers on their own to a broader view of the information that is associated with those identifiers, more and better metadata in short. DOIs are critical, of course, but so is the quality and completeness of the information that's included in each record.
JENNIFER KEMP: More recently, we've evolved this further to concentrate on connections or relationships among the records, which now number over 132 million. Of course, relationships of various kinds among different outputs have long been part of our shared work. Supplementary data, however defined, comes to mind, along with translations, for two examples. Citations are probably the best-known example of relating one output to another, whether using the Crossref service or not.
JENNIFER KEMP: Linking preprints to versions of record and, in some cases, versions of record to preprints has been a very fast-growing relationship type over the past few years. A couple of years ago, we introduced an option for registering peer-review reports. Thankfully, links to data sets are becoming more common, and records between Crossref and DataCite can be linked up. Event data collects online commentary from a number of sources, including Wikipedia, and there are many more.
JENNIFER KEMP: I'm not going to go through all of these. Only some of the connection types are shown here. And a lot of this probably seems very obvious, but we really need the information linked up in the metadata to know, for example, that there is a data set available that is associated with a journal article that may have had a preprint and has software and probably some funding associated with the research behind it.
JENNIFER KEMP: Having information available in an open structured way as opposed to reading about it in a journal article, for example, is especially helpful because machines do a lot of the heavy lifting when it comes to using the metadata. But it also just makes a scholarly record much more powerful and really accurate because then it mirrors all of the outputs and all of the contributors.
JENNIFER KEMP: So at this point, we have about a million and a half relationships in the metadata. So you can just imagine what the full breadth of those connections would be for 132 million records and growing. So what we envision with all of this is a Research Nexus, and you can see it includes a lot of the outputs and connections that I've mentioned already and some others that may not often come to mind in this context.
JENNIFER KEMP: It's ambitious and largely aspirational so far. But hopefully, this helps illustrate just how much potential there is in linking all this information up. This isn't a fixed list. It will evolve over time as the metadata always does. So let me quickly share two recent examples. In November last year, we made available the grant records that are registered by our funder members, which means, among other things, that publishers can use this information in their own records for better linking between funding and published outputs.
JENNIFER KEMP: Last month, we announced that the Research Organization Registry or ROR IDs are available in our open APIs as well. So collecting and including this kind of information and deposits is a necessary and very welcome first step. But the information really needs to be linked together and made openly available. So we know it takes work to do that, to build up these connections, and to make use of them.
JENNIFER KEMP: So we're always looking for ways to help make that easier, including considering a new relationships API. So stay tuned on that. Finally, I want to mention POSI, or the Principles of Open Scholarly Infrastructure, which is a set of guidelines around governance and sustainability for scholarly infrastructure organizations and initiatives, a number of which, as you can see, have signed up to the principles already following a self-assessment.
JENNIFER KEMP: So that list is likely to grow. POSI makes explicit how a lot of these organizations were already working anyway. But because the availability and persistence of all this information requires a healthy network of organizations, we hope it serves as a useful resource for understanding how the research support community can sustainably operate.
JENNIFER KEMP: Like the Research Nexus, some of it is clearly aspirational. Like the metadata, the information is openly available for the community to evaluate. So please do have a look and share your feedback. Thank you.
MIKE NAPOLEONE: Hello, and welcome to our session on accessible discovery via mobile for an economically diverse world. My name is Mike Napoleone. I'm a product manager at EBSCO, and I'm pleased to be joined today by Dr. Monita Shastri, chief librarian at Noma University in India. So, we at EBSCO have been striving to improve research and discovery workflows for libraries and their users through mobile solutions.
MIKE NAPOLEONE: As part of that journey, we released our mobile app to address some specific problems we had found in the market. For example, we heard a lot about academic research being a nonlinear process with intermittent steps along the way, and also being a cross-device process with needs and expectations for seamless synchronization across devices. We also heard a clear need for ubiquitous access to research and making it easy to conduct some of the steps in an anytime, anywhere fashion.
MIKE NAPOLEONE: Ebook downloads on mobile was another problem, and overall just making research accessible to anyone with a smartphone. Our app serves as kind of a Swiss Army Knife in that it performs some key functions as a subset of desktop capabilities but in a portable easy-to-use package. We consider this a start in a way for us to continue learning and iterating further as we continue on this journey for improving research in the markets we serve through mobile solutions.
MIKE NAPOLEONE: As we think more about the opportunity for mobile, we feel mobile has faced barriers in gaining adoption to fully realize its potential. To understand that further, we've made somewhat of a pivot in our research and design approach. Historically, we had modeled our approach around role-based personas, for example, undergraduate students, masters, faculty, librarians, professional, researchers all with key variations across markets.
MIKE NAPOLEONE: This worked to an extent for identifying distinct solutions suited to each role. However, when we look at today's generation of academic and professional researchers across roles, they share common expectations for an efficient and effective research experience, and in this way, there's less variation across roles. But at the same time, it's a more complex world with huge variation within each role, based on factors like their use of and skill with technology, differences in the environment in which they conduct their research, and their goals and intentions in conducting research.
MIKE NAPOLEONE: So, with this, we've shifted to more of a needs-based persona model in order to draw distinctions between users with different types of needs and design solutions accordingly. This transcends roles and even markets. I'll illustrate further with a couple of examples, one being a student with a high level of digital competency, who is very savvy with modern software apps and their functions, and who can adeptly conduct advanced searches in a discovery platform but chooses not to, simply because they were not inspired by a particular class assignment.
MIKE NAPOLEONE: So it's not a limit of technology, but rather just not something that suits their needs in that particular use case. The question here is: how do we tap into their savviness through other research access points, in a familiar way that might pique their curiosity a bit more and gain further engagement? Or how do we get them to use research apps, rather than Google, for researching topics they are interested in, so they are more likely to use some of the advanced features for more robust searching?
MIKE NAPOLEONE: Maybe that involves going to where they are, which could be other apps, and hooking into those apps via interoperability, for example. The second example I'll highlight would be a serendipitous, enthusiastic researcher who might be so locked into their desktop habits for research that they dismiss or overlook opportunities for mobile to play a role in their research. Or maybe they're simply not aware.
MIKE NAPOLEONE: So, again, it's not a limitation of mobile, but just around raising awareness and helping those researchers understand where mobile can fit in a seamless manner. A lot of this is depicted visually in the slide, but these are some of the things that we're thinking about as we continue on this journey. At this point, I'll turn it over to Dr. Shastri for her perspective on the impact and opportunity for mobile in her library.
MONITA SHASTRI: Hi, all. I would like to start my presentation with the philosophy of librarianship in India. Dr. S. R. Ranganathan was the founder of library science in India. In 1931, he gave five laws of library science which are applicable even today. First, books are for use. Second, every reader his or her book. Third, every book its reader. Fourth, save the time of the reader.
MONITA SHASTRI: Fifth, the library is a growing organism. Here, "book" can refer to information in any format, and the readers or users are the techno-savvy generation, who are comfortable with e-formats. Basically, the reason is that e-formats are portable, and their features and search facilities save the readers' time.
MONITA SHASTRI: The fourth law of librarianship in India says to save the time of the user being served. Second, I would like to say that the portability of e-formats, which users can access through palmtops, mobiles, and iPads, is a good feature, and that is why they find them quite comfortable. During our orientation and information literacy sessions, we find that we have to make students understand that information available through search engines such as Google may not be reliable, and they may have to authenticate whether that information is right or wrong.
MONITA SHASTRI: They may also have to check whether the sites they are accessing are authentic. As students, they are not subject experts, and it is a struggle for them to judge whether the information they're accessing is right. But we have subscribed to academic products for them, and they should access those academic products so that they don't have to go through the fuss of authenticating the information.
MONITA SHASTRI: But since they have to access all these things through laptops and desktops, it is becoming a bit clumsy for them. In our orientation in 2021, we introduced the EBSCO Mobile app. Students found this a welcome change because, wherever they go, they're carrying their mobiles. So if they have a problem, they just surf Google on their mobile and get information. But now, with the mobile app, they can even search the academic resources at school.
MONITA SHASTRI: So, as a library professional, I think that such an app will encourage students to use academic resources rather than search engines that do not provide them academic information. Moreover, it is helping them in multitasking, as PDFs can be converted into audio and listened to at their convenience, such as while driving, at the gym, or walking. They can listen to the PDF documents that they need to study.
MONITA SHASTRI: For further upgrades, I would like to suggest that usage reports can be modified and upgraded. Usage reports should be given to the users as well, showing how much work they have done so far. Or they may get some target setting in applications like the EBSCO app, where they can set a target, such as reading at least 10 or 20 articles in a week to complete their literature survey, and the app will tell them that their target of five articles has been met in two days.
MONITA SHASTRI: There are another five articles to go if they have set a target of 10. That usage report will help them understand their progress, motivate them, and help them fulfill their target. Secondly, as a librarian, I want qualitative analysis of the usage of databases. There, I feel the categories of users accessing the database should be known to us: whether students or faculty are using it, which category of students is using it, the publications most accessed by them, and the subject areas most studied.
MONITA SHASTRI: Such details should be available. The third thing that I want EBSCO to work on is whether it can be used on smartwatches or similar wearables, because for today's users, even carrying a mobile is a task. They will not carry their mobile with them everywhere because they are wearing their smartwatches. If they have gone on a walk without their mobile and an idea clicks, they can just go to the app, open it, put in that request, and start listening to the article.
MONITA SHASTRI: So that is how they can continue with their research and then they can do their work. Thank you.
MIKE NAPOLEONE: Thank you, Dr. Shastri. So as we look globally at technology use, it's pretty clear to say that mobile devices are now the most used and most important computing platforms. There are many ways of demonstrating that through data, some of which are shown on this slide and the following one, and by now that's true in almost all industries. As we look at our world of academic research, while we're excited by the progress in advancing mobile for research, we still feel it's underplayed and that mobile can be much more prolific in the research process going forward than it is today.
MIKE NAPOLEONE: We aim to enable this through needs-driven mobile solutions, rather than just making it something that's a sideshow brought along for the ride with desktop applications. The key here is finding the right needs and solutions. The hexagon visual on this slide shows some possible examples of this. For example, this could be around rich and visual ways to consume and interact with full-text content or leveraging other information on the device to deliver highly relevant content and experiences, or through deep interoperability with apps and services to enrich the research process and also embed it in other workflows.
MIKE NAPOLEONE: So those are some areas we aim to continue to explore as we continue on this journey. And we'd be glad to collaborate with anyone interested in doing so. Overall we hope you enjoyed this session and found it insightful, and thank you very much for your time today.
MOHAMMAD ALHAMAD: Hello, everyone. This is Mohammad Alhamad from Missouri State University, and I'm presenting an Uber-like scholarly communication system. Let's start with the driving reason why I'm proposing a new scholarly communication system. At the beginning, we need to consider the current method in which research is disseminated.
MOHAMMAD ALHAMAD: The current model for scholarly publishing originated back in the 17th century, when the growth of experimental science led to the need to share the results of research with fellow scholars. Over time, publishing in scientific journals came to serve additional functions, including as a tool for evaluating individual scientists' performance and validating the quality of research, through measures such as the impact factor and h-index.
MOHAMMAD ALHAMAD: The purpose of scholarly communication remains the same today. Publishing provides scholars the platform to share their theories and discoveries. However, the increased pace and specialization of research in the mid-20th century led to a rapid increase in the number of journals published, which attracted the attention of commercial publishers.
MOHAMMAD ALHAMAD: This introduced a major defect into the current scholarly communication model. Publishers can make large profits by selling research back to the very universities, taxpayers, and grant funders that pay to produce it. They do this not just once but twice: when the research is conducted, and again when the libraries who support these scholars purchase the outputs.
MOHAMMAD ALHAMAD: Furthermore, this model prevents research findings from being disseminated as equitably as they could be. In the current model, publishers set prices with little reference to economic realities; subscription prices have no impact on demand. When specific journals are deemed core to a discipline, libraries have little option but to pay for them.
MOHAMMAD ALHAMAD: As a result, access is not equitable but is based on ability to pay. The consequence of the way the scholarly communication model works is that the results of research are not disseminated as widely as they could be, and publishers monopolize the scholarly communication system. Open access is one of the good solutions to get this scholarly communication system back on track.
MOHAMMAD ALHAMAD: However, there still remain many challenges with open access, including high article processing charges, double-dipping, funding, predatory journals, copyright issues (not to mention academic social networks and Sci-Hub), and, most importantly, the demise of small publishing research societies, to name a few. Then a viable question arises.
MOHAMMAD ALHAMAD: With all the available developed technology, why not rethink the scholarly communication system? And before launching into the solution, let's ask: how many researchers still choose a specific journal to conduct a topic search, and why? It is now common for users to go directly to a library discovery service, a search database, or Google Scholar.
MOHAMMAD ALHAMAD: So in this presentation, I submit an Uber-like scholarly communication system as a platform for communicating research outputs to the community and for sharing across disciplines. This system will include new tools to evaluate individual scientists' performance and validate the quality of research, aside from the impact factor and h-index. Here, the focus would be on the articles rather than the reputation of individual journal titles.
MOHAMMAD ALHAMAD: The quality of scholarly work, of course, would still be maintained through the peer-review process. In fact, reviewers will have profiles and can gain scores for their reviews to maintain credibility and accountability. The new platform will not be a repository system for hosting preprints of research results. Instead, it will be a platform to publish original articles, each with a unique digital object identifier.
MOHAMMAD ALHAMAD: The new system will not necessarily replace the current scholarly communication system. However, it will complement it. This may parallel the development occurring in media where social media and streaming services enrich, complement, and in some cases, have overtaken older forms of media, for example, the radio, newspapers, and cable TV, where the news media technology offered new methods of interacting with each other, managing our well-being, studying, and working.
MOHAMMAD ALHAMAD: Most importantly, it provided us the freedom of choosing when, what, and how we want to read, listen, or watch. It also opened doors for entrepreneurs and talents to innovate and thrive. There is also great potential for big tech to step in and collaborate with libraries to make that happen, considering the success of Highwire, Google Books, and Google Scholar.
MOHAMMAD ALHAMAD: These are my references for my presentation. Thank you. [MUSIC PLAYING]