Name:
The values and challenges of the CRediT taxonomy
Description:
The values and challenges of the CRediT taxonomy
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/38e9d9f5-a463-4ddc-a039-6ba8e630e709/videoscrubberimages/Scrubber_1.jpg?sv=2019-02-02&sr=c&sig=sws%2BGeqTcCR3NZwW78bzpxyiLN5bFonLqWHsQ%2F3hb2s%3D&st=2025-01-15T13%3A29%3A39Z&se=2025-01-15T17%3A34%3A39Z&sp=r
Duration:
T01H00M10S
Embed URL:
https://stream.cadmore.media/player/38e9d9f5-a463-4ddc-a039-6ba8e630e709
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/38e9d9f5-a463-4ddc-a039-6ba8e630e709/5 - The Values and Challenges of the CRediT Taxonomy -HD 108.mov?sv=2019-02-02&sr=c&sig=3maWosS7IRKBkUvyjVqsmuuj0vX6FFCEhk0DWhJri8s%3D&st=2025-01-15T13%3A29%3A43Z&se=2025-01-15T15%3A34%3A43Z&sp=r
Upload Date:
2023-02-13T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
[MUSIC PLAYING]
JASON GRIFFEY: Hello, everyone. And welcome to The Values and Challenges of the CRediT Taxonomy here at NISO Plus 2021. My name is Jason Griffey. I am the Director of Strategic Initiatives here at NISO, as well as the Chair and Director of the NISO Conference. I am doing an introduction here in place of Todd Digby, who is the moderator of this session. He is the Chair of Library Technology Services at the University of Florida.
JASON GRIFFEY: He'll be joining us at the end for the conversation and the question and answer part of this session. But I'm introducing the pre-recorded section, which involves Liz Allen, the Director of Strategic Initiatives at F1000; Richard Wynne, the founder of Rescognito; and Alex Holcombe, a professor at the University of Sydney, all of whom are going to talk through their perspectives and their particular interest areas having to do with the CRediT taxonomy.
JASON GRIFFEY: After the recording finishes, we will move on to a discussion and conversation, and we will see you there. Thank you very much.
LIZ ALLEN: OK, thank you. My name is Liz Allen. I'm the Director of Strategic Initiatives at F1000. And I'm going to talk through my perception of the values and the challenges of the Contributor Role Taxonomy, known as CRediT. And I'm someone who's been involved in the CRediT taxonomy since it was founded in the early 2010s. So in what I'm going to talk through today, I want to give you a little bit of background about the origins and why CRediT came around in the first place.
LIZ ALLEN: I want to talk through some adoption and implementation examples. I want to show you some insights into how it's now being used and its potential to really shed light on how research works, so sometimes known as research on research. And then I'm also going to give you some thoughts and some direction about where we're going next with the CRediT taxonomy. So firstly, in terms of the origins and demand for CRediT, authorship is essentially seen as quite an outdated and static concept in many areas and for many reasons.
LIZ ALLEN: Being an author doesn't really describe the range and nature of what the people listed actually did, and it's long been thought that it would be helpful to be able to do this. Traditionally, people described what they did in relation to an article in an acknowledgment section or somewhere else on the published article. But often, that detail was never available in a form that could be used in any useful way, and what was being listed was quite opaque.
LIZ ALLEN: And authorship listings can actually reflect some quite bad behaviors. There are a lot of assumptions made about the roles and contributions of the people listed as authors, particularly when thinking about author position. So this is quite an old slide from 2005, but I tend to show it in a lot of meetings and discussions around CRediT.
LIZ ALLEN: And some of the issues still persist in terms of the perceptions around the first author and the second author and what the last author did. And even though it's pretty humorous, it's actually quite interesting how some of these behaviors actually influence people's careers and also perceptions around what they've actually contributed to a piece of research. So the question around does authorship really reflect contribution in a really accurate way is something that is talked about a lot.
LIZ ALLEN: There's also a demand and a need for transparency and accountability in scholarly research and scholarly publishing in particular as one of the main routes to making research available. And it's not a new issue. There have been calls for making authorship more transparent since the mid 1990s, and actually before then, particularly for publishers to make the contributions of their authors and the actual authors accountable for the work that they are publishing.
LIZ ALLEN: And there have been calls, particularly in a lot of medical and clinical research areas, for more transparency around those contributions and authorships. And it carries on: there are a lot of recent papers around the challenges of ghost authorship and gift authorship, terms that are used quite a lot across scholarly publishing.
LIZ ALLEN: There is also the need to make sure that people who are putting their names to work and providing information for others to use and reuse actually take responsibility and have some accountability for the work they're being associated with. And this is a really pertinent issue today, as lots of new, exciting models of publishing research that facilitate faster publication are being made available.
LIZ ALLEN: So it is really, really important that researchers are accountable and we can track responsibility and be able to find out who has contributed and is involved in certain pieces of research. Another practical demand for the need for something like CRediT was there has definitely been an upward trend in collaborative and team science across most disciplines. There's much research that shows the shrinking share of lone authorship papers.
LIZ ALLEN: So having a solo author on an article is pretty rare, especially in a number of the STEM disciplines, but actually in other disciplines too. There are many examples of extreme team science where lots and lots of contributors and authors are listed on an article, and a recent paper shows that in lots of physics areas there are now more than 5,000 authors on some papers. That shows that trying to infer authorship contributions from proxies like author position or naming conventions is just unhelpful and doesn't actually tell you anything.
LIZ ALLEN: So the idea of being able to understand what types of contributions people made is really, really useful. And this also supports the move more generally for information around research contributions and people's careers, and how they're actually working to support research. The other main issue is the concept of your career being judged on whether you have published articles as the main output of your research.
LIZ ALLEN: That's another pressure that researchers face. And this has led to lots of people publishing articles and maybe adding their names to work, or, in the opposite direction, not getting credit for work that they've really contributed to. So there are a number of examples of how the publish-or-perish culture that we live within is actually making the publication of knowledge and the sharing of information unhelpful in many ways.
LIZ ALLEN: And again, more practically, there have been lots of moves toward having more information about what authors have contributed. As space limitations have gone away with online publishing, it is now possible to describe and capture that information on a digital record, whereas previously information around contributions was not routinely captured.
LIZ ALLEN: The information about what authors actually did is just really useful. So in 2014, the Contributor Role Taxonomy was developed. It was born out of a collaboration between funders, research institutions, researchers, learned societies, and publishers. There was a workshop held at Harvard University in 2012 where there was a move to create a taxonomy that was simple, easy to use, and usable in the scholarly workflow, and that could better reflect and capture information around the roles that researchers normally perform when publishing a piece of scholarly work.
LIZ ALLEN: And it was intended to complement the concept of authorship, turning much of what was traditionally in the acknowledgment section into some kind of dropdown, structured information that could then be used for a wide variety of purposes, and that could better describe contributors' specific contributions to that output. So a 14-role taxonomy was developed, and it has been used by a number of publishers in the scholarly workflow, and in other workflows, ever since.
LIZ ALLEN: These are the 14 roles, and there's a definition describing what actually constitutes each of those roles. So just to move on to some examples of how CRediT's been implemented, as I mentioned, a number of publishers are now using the Contributor Role Taxonomy through editorial management systems and manuscript submission systems and actually in the publishing workflow to capture contributions during the submission of an article.
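[The slide itself isn't reproduced in the transcript. For reference, these are the 14 roles as published at credit.niso.org, sketched below as the kind of simple structure a submission or lab workflow system might present; the Python representation is illustrative, not any particular system's schema.]

```python
# The 14 contributor roles of the CRediT taxonomy (credit.niso.org),
# listed as a structure a submission system might present as a dropdown.
CREDIT_ROLES = [
    "Conceptualization",
    "Data curation",
    "Formal analysis",
    "Funding acquisition",
    "Investigation",
    "Methodology",
    "Project administration",
    "Resources",
    "Software",
    "Supervision",
    "Validation",
    "Visualization",
    "Writing - original draft",
    "Writing - review & editing",
]
```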
LIZ ALLEN: So that information is captured in a structured format, and it can now be sent to Crossref and other indexes, increasingly as part of the metadata of an article. This is something that needs further development, but once the information has been captured in a structured format, it is there alongside the rest of the article metadata. And there have been a number of calls within publishing circles as well to make sure that CRediT is used in a much more holistic and consistent way, to bring some of that transparency to author contributions and promote integrity, as described in the first article there.
LIZ ALLEN: And there have been other implementations in other parts of the research system. There are now a number of applications being developed that allow researchers to record their contributions as part of a team or as part of a laboratory workflow, so that you can capture that information before you even submit anything to a publisher. That way, a whole team can agree on who's done what.
LIZ ALLEN: And therefore it's publisher-agnostic, so you can capture that information, get it all agreed, and work out how it can be used when you submit an article somewhere. Or you can use it in other workflows. I think other presentations in this session will also talk about how Rescognito has been using it, and also the Tenzing app. And there are a number of universities and academic institutions now looking at the CRediT concept as a way to recognize and support people's careers according to the roles and contributions they've made, instead of relying on proxy measures like whether they were first author or last author on an article.
LIZ ALLEN: So it just gives you a greater nuance of how contributions are being made to different parts of the scientific workflow. There are also a number of initiatives across the world looking at encouraging and supporting collaborative and team science. And again, this is an example from the Academy of Medical Sciences in the UK.
LIZ ALLEN: And in their report, which is about how to support team science contributions from researchers across the career stages, they recommend the use of transparent, standardized contribution information, and specifically the CRediT taxonomy, as a way to improve transparency and recognition for people's contributions. And there's also specific mention of the CRediT taxonomy in the Declaration on Research Assessment (DORA) initiative, which is all about making sure that people are recognized for the wide variety of their research outputs and contributions, as opposed to just relying on proxy measures of where someone has published.
LIZ ALLEN: And they specifically encourage responsible authorship practices and the provision of information around the contributions of each author within published work, as a way to avoid a focus on a narrow definition of contributions and authorship positions. And as I mentioned, CRediT had its origins in the early 2010s, and it's been used by publishers since then, as I've shown you.
LIZ ALLEN: But recently, last year, we were awarded some funds, working through our recent affiliation with NISO, to help take CRediT to the next level-- to make sure that it is being used properly in workflows and is actually living up to the potential that we hope for it. The work is sponsored by the Sloan Foundation and the Wellcome Trust, and it's really exciting to have designated funds to help do a number of things to keep CRediT being used consistently and effectively in research workflows.
LIZ ALLEN: So specifically, we'll be doing work around further implementations in the scholarly workflow, looking at persistent identifiers for those roles, and making sure that they are part of the metadata in any scholarly publishing workflow. What we want to do is, obviously, try to keep it simple. Capturing more information when an article is being submitted is another burden for researchers, so we want to make sure that where it is being captured, it is being captured in a simple and effective way.
LIZ ALLEN: Where you have huge numbers of authors as well, the challenge there is to make sure that we have the information without burdening the authors, so looking at how other applications and other systems might be able to support that in keeping it simple. But nonetheless, even in the current system where there are huge numbers of authors, those authors are still supposed to be able to say who did what anyway in a non-structured way.
LIZ ALLEN: So in a way, CRediT isn't really adding to the burden; it's just adding another opportunity to be more transparent around that. But we are mindful of the need to keep it simple. Through the support from the Sloan Foundation and Wellcome, we'll also be providing more resources and materials, specifically through the NISO site, to make sure that people understand how CRediT can be used and some of the potential opportunities.
LIZ ALLEN: And we'd love more people to come up with ideas about how to put it to best effect. And then finally, we'll be developing a CRediT interest group to keep the taxonomy fit for purpose for the coming years and to cover the broad spectrum of disciplines that it's intended to serve. It was developed from a clinical, STM life sciences, and physical sciences perspective.
LIZ ALLEN: It has been used across all disciplines by many publishers, but we want to make sure that what we have really supports disciplines across the whole spectrum. I also wanted to show some information on how CRediT can be used to support understanding of how research works and how to make the research process more effective. There are a number of studies that really focus on how research is done to best effect, and CRediT can really help us start to look at that.
LIZ ALLEN: I just wanted to show some examples of how it is being used, which is an alternative way of looking at the value of CRediT as well. So there are a number of studies looking at gender and diversity in research. If we have contributor roles on top of those kinds of issues, we can start to look at different contributions and some of the issues faced in different career and diversity paths across the research ecosystem.
LIZ ALLEN: There have also been other studies looking at the division of labor and the evolution of roles, and a lot of work looking at author contributions and author positions across different disciplines, and how different roles and skills are being used throughout the division of labor in certain research areas. Going back to the initial point, there is a lot of work calling for more transparency and recognition in research.
LIZ ALLEN: So CRediT, again, is able to show how that can be opened out, and we can look at different contributions across different disciplines to bring more accountability to how data, for example, and other research outputs are used. And then there has been quite a lot of work around team science and collaboration, and the benefits of team science in large teams.
LIZ ALLEN: And again, it's really interesting to look at different contributions, which disciplines are using large teams, and when particular contributions are really helpful and perhaps when they might not be. So there are lots of really interesting studies showing that having more information and more granularity around contributions can really open our eyes to which parts of the research process particular contributions have been helpful in, which areas draw lots of contributions from certain disciplines, and also many ideas around gender and diversity in research more generally.
LIZ ALLEN: So I just want to leave you with a couple of thoughts and go back to where next. This relates to what we will be doing with the funds that we have received from Sloan and Wellcome. There are some critiques of, and potentially unintended consequences from, adding more information and more granularity to the scholarly record.
LIZ ALLEN: We certainly don't want to bring unintended consequences about. And it's always hard to separate unintended consequences from the good we're trying to do for the system and how we're trying to improve it. So one of the things that might be quite interesting to talk about in the discussion session is adding this kind of contribution information in a structured format.
LIZ ALLEN: Is this going to have unintended consequences by providing another target for evaluation around contributions? Is it going to help or hinder people's careers if they are linked to specific roles? There have been some discussions around this with the CRediT taxonomy: some journals and publishers have been adding estimates of whether you had a major or a minor part in each role you contributed to, adding some kind of granularity around the level of effort.
LIZ ALLEN: And if you do that, does that start to mean there's some fractionalization in how you might judge or assess that contribution? I've always been of the view-- and this is my personal view-- that you wouldn't want to add too much granularity to those levels of contribution, because that is almost providing extra information that is not really helpful. What you want to know is that somebody has contributed to a specific piece of work, and that's kind of enough.
LIZ ALLEN: But whether people do think they need to have a lot more information, whether that actually leads to more bad behavior, or whether there could be misuse of metrics and information in relation to that-- that's another issue that's being discussed. And there was an interesting piece published on the LSE Impact Blog around this: should we welcome the tools to differentiate contributions, or is it a detail too far?
LIZ ALLEN: It's a really interesting read. I would argue that this information was already written, in a very opaque way, in most journals and most articles. And I think, actually, by making the information more transparent and usable, the harm done is probably much less than the harm of having the information and not being able to use it in the first place.
LIZ ALLEN: Because we certainly shouldn't be capturing any information that we can't use, that isn't useful to people. And the feedback we've had around CRediT is that people-- particularly early-career researchers and people working in areas of research such as data curation, software development, visualization, and newer areas like informatics and methodology-- certainly have much more visibility through the CRediT taxonomy than they had previously, when they might not have been the single PI who ends up being first author on many papers.
LIZ ALLEN: So there are lots of discussions around how the CRediT taxonomy is really helping researchers who perhaps aren't the PI but have had a major role to play in the development of certain areas of science, and increasingly so as we move towards a more multidisciplinary and collaborative world. Again, related to the first point, if there's a metric to be had, would people then seek to display certain types of behaviors and actually say that they've contributed to something that perhaps they haven't?
LIZ ALLEN: Again, this is all about people being honest and transparent in the research process. And again, this must happen now, so I can't see why CRediT would make it any worse. But it is an interesting thing to think about. Are people going to be striving to say that they contributed to a specific role that is perhaps a favorable one to say you've contributed to?
LIZ ALLEN: Another thing goes back to the point that one of the things that is really key is to keep things simple. What we don't want to do is add to researchers' burden in any way by requiring extra information that they must supply during the submission process. So we do want to keep it simple, but we do want to make it valuable. If we're going to capture it at all, it needs to be usable, and it needs to be valuable across different fields and to different people-- and relevant, obviously, to the NISO link with others.
LIZ ALLEN: And that's why we're really excited to be partnering with NISO with the CRediT taxonomy. It's so important that any taxonomy is kept up to date and used consistently. So this is something that's obviously a challenge, and we are keen to make sure that this is part of the workflow going forward. I wanted to just mention again the CRediT Community Interest Group is coming soon.
LIZ ALLEN: There's lots of information and resources on the NISO website about CRediT, the standards it relates to, and the systems it works with. So please have a look at the NISO website and sign up if you're interested. And I'll leave it over to the discussion. Hopefully I've given you some food for thought on some of the questions that people might want to raise and some of the things that might be concerns around CRediT.
LIZ ALLEN: But I'd like to think that actually, it's all a very positive thing, and it's being used very fruitfully across the world. So thank you for your time.
RICHARD WYNNE: OK. Hello, everyone. It's a pleasure to meet you, and thank you for participating in this workshop on research credit. My name is Richard Wynne. I'm the founder of Rescognito. We're based in Boston, Massachusetts, and our platform enables the recognition of scholarly contributions throughout the research lifecycle.
RICHARD WYNNE: I've worked in scholarly publishing for more than 30 years, most recently at Aries Systems, the developers of the Editorial Manager peer review system that was acquired by Elsevier in 2019. Because Editorial Manager was adopted by over 7,000 scholarly journals, I've been able to observe up close and personal the numerous problems and opportunities that characterize the publication of research findings.
RICHARD WYNNE: In particular, it was a mystery to me why journals were so slow to implement the CRediT taxonomy. CRediT is an amazing opportunity for publishers to improve their product, and yet a decade since its launch, the adoption has been minimal or suboptimal. Why? By the end of the presentation today, I hope you'll understand why the adoption of CRediT has been so slow and how we can fix that problem.
RICHARD WYNNE: So I'm now going to share my slides. I'm not seeing the option to share, Jason. Oh, yeah. Got it now. I'm going to switch to that and then PowerPoint.
RICHARD WYNNE: OK. First things first-- why bother with CRediT? Well, the alternative is to continue spending $2 trillion a year on research while neglecting to identify, as effectively as possible, who did the work and what they contributed. With such a massive societal investment, I think we owe it to ourselves to do the best job possible of identifying how the funds were spent and how we can improve research decisions in the future.
RICHARD WYNNE: The most fundamental assertion or question we can make about contributors to scholarly content, of course, is who they are. And for two or three hundred years, we've done that using a text string such as Jay Smith or Allison Lee to identify who contributed to scholarly content. Fortunately, we now have ORCID as an unambiguous and persistent identifier, meaning that we can leave behind the problems of using text strings and move to a better way of identifying contributions.
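[Part of what makes an ORCID iD machine-checkable in a way a name string isn't is its built-in check digit. As a small illustrative aside, not part of the talk: ORCID's published checksum algorithm (ISO 7064 MOD 11-2) can be verified in a few lines of Python.]

```python
def orcid_checksum_ok(orcid: str) -> bool:
    """Validate an ORCID iD's check digit (ISO 7064 MOD 11-2).

    Accepts '0000-0002-1825-0097' or a full 'https://orcid.org/...' URL.
    """
    digits = orcid.rsplit("/", 1)[-1].replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:  # first 15 characters must be digits
        if not ch.isdigit():
            return False
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    check = "X" if result == 10 else str(result)  # 10 is written as 'X'
    return digits[-1].upper() == check

# Example iD from ORCID's own documentation:
assert orcid_checksum_ok("0000-0002-1825-0097")
```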
RICHARD WYNNE: So I would say, first of all, ORCID is a fundamental foundation onto which we need to build CRediT, because there's no point in assigning CRediT recognition to someone if it's not associated with an ORCID. What's the point of associating CRediT with just a text string? Not very much. Given the importance of ORCID, about six months ago I analyzed the data from Crossref to see how many ORCID iDs were included in newly published manuscripts on a daily basis.
RICHARD WYNNE: You can see from this analysis that it has climbed to about 7,500 per day, and as of a few days ago, it has reached over 10,000 ORCID iDs a day on a regular basis. Another important metric is the number of ORCID iDs per article. We can see that this has climbed to about 2.1 ORCID iDs per article and is now reaching 2.3. This is a slightly misleading statistic in that the denominator doesn't include articles that have no ORCID iDs, but I think you can see that the number is climbing steadily.
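[The talk doesn't show the analysis code, but the public Crossref REST API (api.crossref.org) exposes the fields needed for this kind of tally. A rough sketch follows, sampling one page of up to 1,000 works created on a given day; unlike the figure Richard quotes, this version divides by all sampled works, including those with no ORCID iDs.]

```python
# Sample newly created Crossref works for one day and count ORCID iDs
# among their authors. One page of results, so a sample rather than a
# full daily total.
import requests

def sample_orcid_stats(date: str) -> tuple[int, float]:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={
            "filter": f"from-created-date:{date},until-created-date:{date}",
            "rows": 1000,
            "select": "DOI,author",
        },
        timeout=60,
    )
    items = resp.json()["message"]["items"]
    orcids = sum(
        1
        for item in items
        for author in item.get("author", [])
        if "ORCID" in author  # present only when an iD was deposited
    )
    return orcids, orcids / max(len(items), 1)

count, per_article = sample_orcid_stats("2021-02-01")
print(f"{count} ORCID iDs in sample; {per_article:.2f} per article")
```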
RICHARD WYNNE: So is the glass half full or half empty? Well, it's certainly still somewhat half empty. A decade on from its launch, ORCID is still not used to identify contributors in most scholarly publishing. Most contributions are still only identified by a text string, and we have to ask whether that's a success. However, adoption of ORCID by researchers has grown strongly, and as I showed in those graphs, it is now reaching quite a high level.
RICHARD WYNNE: What is the second most fundamental question you can ask about contributors to scholarly content? It is, what did they contribute? Historically, contribution has been measured by using citations and citation counting. But can citations really answer this question about what someone contributed? Authors are identified in citations as a list of names, and the contributions of individuals are usually inferred from the position in the list of authors or by some other cultural artifact.
RICHARD WYNNE: So the problem with citations as a measure of contribution is that they're not explicit, they're not granular, they're not transparent, and they don't specify exactly what someone contributed. Citations are a fantastic way to show how ideas are connected, but they're not a great way to identify individual contributions. So fortunately, again, we have CRediT as a great way to answer that question.
RICHARD WYNNE: And I am assuming everyone in this presentation already knows what CRediT is. But in the years since the launch of CRediT, why has the adoption been so limited? Well, the answer is that it's expensive and complicated to implement CRediT using the processes that publishers use today. And this is what I call the journal-publishing sausage machine.
RICHARD WYNNE: Editorial systems are used to collect information from authors. Those data are then passed to a production system, and then a hosting platform such as HighWire, Atypon, or Silverchair is used to present the information to readers. So when we want to add CRediT to this workflow, we have to spend money to collect that information from authors and modify our systems to do that.
RICHARD WYNNE: We have to synchronize relational data, which is how the information was collected, with XML, which is the actual content. We have to have a mechanism to transfer that to a production platform, which is often offshore and a combination of automated and human processes. We then have to transfer those data again to the hosting platform, where they are massaged again for output to other platforms or for presentation in HTML and PDF formats, so that readers can finally view the CRediT assertion that was made upfront.
RICHARD WYNNE: So this leads to a number of cost drivers. The multiple touch points for the assertion are very expensive. There are multiple data formats to support throughout this process, meaning there need to be transformations, which are often fragile. There are multiple handoffs between people and systems, often across different teams on different continents. So this requires continuous synchronization and coordination from one end to the other, and that absorbs management effort and adds cost to the entire process.
RICHARD WYNNE: This is a lockstep workflow leading to a static endpoint, so it's incredibly inflexible, because it's a one-time process. And it's very high friction, so any change you want incurs cost. In an ideal world, the author would make the CRediT assertion and the reader would read the assertion, and all of these interim steps that we have today are expensive.
RICHARD WYNNE: They add no value. They cause delay, and they reduce agility. And this results in publishers cutting corners in the way they present the CRediT information. So this is a screenshot of an example from a publisher website showing CRediT attribution using the initials of the authors. In this format, it's not clear who made the assertion of CRediT.
RICHARD WYNNE: It's not clear who received the credit; in other words, it's not associated with an unambiguous and persistent identifier. It's poorly structured and presented, in that we can't easily see what one individual contributed. It has limited reusability. It has no aggregated or contextualized use, so we can't see what SW did. It's not machine readable.
RICHARD WYNNE: It's a very static presentation, often encapsulated in a PDF. In other words, it's a data cul-de-sac, where there's very little reusability or useful presentation of the data. So the solution to this problem, of course, is to directly connect the author and the reader, with appropriate provenance, to allow the creation and viewing of CRediT. And it won't be any surprise to you that I conceive of Rescognito as a solution to this problem.
RICHARD WYNNE: So is there a better way? I think so. And rather than tell you about it, I'm just going to go ahead and show you. So I will now switch to my browser window. And what we are looking at here is a published article randomly picked. And you can see the DOI for this article is in the article. I'm now going to switch to the Rescognito website, and I'm simply going to add this DOI to the end of the URL.
RICHARD WYNNE: And by the way, I'm using our QA website so we don't add real data here. So when I load this page using the DOI, it presents me with a list of contributors identified by their ORCID iDs. And here I can select what the individual contributors contributed, using CRediT terms. So I can say that Lee contributed conceptualization and data curation, and maybe Helen did the writing of the original draft and the review and editing.
RICHARD WYNNE: And when I click the Recognize button, the system asks me to confirm my ORCID iD, which I do here by clicking Sign In to ORCID. This validates to Rescognito who is making this CRediT assertion. And after the confirmation screen, you'll see that the page is updated, and I can see the recognition of CRediT for each of these authors.
RICHARD WYNNE: So this is all of the recognition for this manuscript. And if I click on one of the individuals, we can see what their contribution was. If I drill down to look at the record for this particular individual, within Rescognito we have something called the Open Ledger, which is a single place where we can see all of the CRediT recognition and other recognition for this individual.
RICHARD WYNNE: So in this case, we can see that I made a data curation assertion. We can see who made the CRediT recognition, Richard Wynne, and we can see who it was attributed to. And there's a link here to the DOI of the object that the CRediT recognition applied to. If I go and look at someone else's record-- in this case, my record-- the other way that we can access the CRediT recognition is simply by clicking the Recognize button next to the published object.
RICHARD WYNNE: So I can go in here and, in this instance, recognize myself for having performed supervision on this particular project. And you can see this will update the recognition for this particular manuscript again, and we can see the additional assertion about recognition here. So one way to view the CRediT recognition is in this ledger format.
RICHARD WYNNE: The other way is a visualization tool, where we can see all of the people who have made assertions about my contributions. And we can also see how those people are interconnected with other people and other organizations that have made CRediT and other recognitions about this person. So by way of a quick digression, I'm going to talk about institutional assertions.
RICHARD WYNNE: This is not CRediT, but it's related. We've worked with the Earth Science Information Partners to allow them to recognize their members for activities such as committee work, conference sessions, conference presentations, conference posters, et cetera. And this, again, is shown in the ledger, where the organization is identified by their [INAUDIBLE] ID. And we can visualize this in a chart here, where we can see what all of the recognitions were, or we can drill down and see who was recognized for an award.
RICHARD WYNNE: So the visualization of CRediT-type information is very important, but equally important is accessing this information in a programmatic way, so that CRediT can be read by computers and AI tools. This is the API link on our QA server; there's a similar link for our live platform. And you can see that it's driven by the ORCID iD of the person you're looking at.
RICHARD WYNNE: So this is my record viewed via the API. And I'm just going to reload the page, and you'll see here that the supervision recognition I just made is now visible in the JSON. It shows who made the recognition, what digital object it related to, and what type of recognition it was. And there's other data in here. And the value of an API of this type is that you can then build other applications on top.
RICHARD WYNNE: So I'm going to switch to a little application that I've built myself. Using my ORCID iD, it calls the API to retrieve the JSON and display it. So if I now refresh this, you'll see that the CRediT recognition I just put in for supervision now appears in this visualization. This is what I mean by making CRediT more useful rather than a dead-end solution: by making it readable as JSON, we can then build other high-value applications on top of it.
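[Rescognito's actual endpoint paths and JSON schema aren't spelled out in the recording, so the URL and the "role" field below are placeholders. The sketch only illustrates the pattern Richard describes: a small application built on top of a recognition API keyed by an ORCID iD.]

```python
# Illustrative only: endpoint path and JSON field names are placeholders.
from collections import Counter
import requests

def fetch_recognitions(orcid: str) -> list[dict]:
    # Placeholder URL standing in for a real recognition API.
    url = f"https://example.org/api/recognitions/{orcid}"
    return requests.get(url, timeout=30).json()

def roles_summary(records: list[dict]) -> Counter:
    # Tally recognition records by CRediT role (field name assumed).
    return Counter(rec["role"] for rec in records)

# The aggregation also works against canned data, e.g.:
sample = [{"role": "Supervision"}, {"role": "Data curation"},
          {"role": "Supervision"}]
print(roles_summary(sample))  # Counter({'Supervision': 2, 'Data curation': 1})
```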
RICHARD WYNNE: So what did you just see? You saw a friction-free CRediT assertion and creation process. It had very high fidelity of attribution, because we're using persistent identifier infrastructure such as ORCID iDs and DOIs, and we're also using ORCID federated validation to ensure that there is provenance for the attributions.
RICHARD WYNNE: We're creating value-added presentations and visualizations of this CRediT information, either aggregated on a particular manuscript or aggregated on an individual's activity across multiple manuscripts. There's superior usability and reusability, in particular because of the JSON API, which lets this information be presented in other applications.
RICHARD WYNNE: And because the process doesn't use that middle chunk of the workflow, it's much, much more flexible. But most importantly of all, it's much less expensive than the traditional process, the manuscript sausage machine that I showed you in the earlier chart. Now, you might have a couple of objections, and I'd be happy to talk about these in the Q&A.
RICHARD WYNNE: First of all, you might say a publisher or gatekeeper is needed to ensure the provenance of the CRediT assertions. Well, what I just showed you is that by leveraging federated access to identifiers, we can make provenance claims that are actually superior to those found in journals today, and that can potentially greatly enhance what journals do.
RICHARD WYNNE: Your second objection could be that CRediT needs to be presented inside the document, and I actually agree with this. I think it's very useful to present the CRediT information inside the document itself. But using an API or a structured link, as I showed during my presentation, this can be achieved very easily. So instead of showing a static piece of CRediT data inside a document, you can show a dynamic, value-added view that can then act as a through street to more exploration and use of the data.
RICHARD WYNNE: So in summary, CRediT is a fantastic opportunity for publishers to add value to the content that they publish. However, the cost of collecting CRediT using existing workflows is prohibitively expensive, and the workflows are inflexible. I don't think CRediT will ever succeed while we're trying to push it through that legacy sausage machine. However, using platforms such as Rescognito, we can decouple CRediT assertions from the content workflow and therefore dramatically reduce the cost and increase the flexibility we have in deploying CRediT.
RICHARD WYNNE: So how do we get CRediT done? Well, of course, I'd be happy to tell you more about Rescognito and to hear your questions and suggestions after you've watched all the presentations. Thank you. It's been a pleasure meeting you.
ALEX HOLCOMBE: OK, I'm Alex Holcombe. The way I like to approach CRediT is from some broader principles: I like to think of CRediT as helping us redress imbalances in the kind of science that gets done and who gets credit for it. The way science is done has changed a lot over the years. It's become much more of a team sport than the endeavor of the lone aristocratic gentleman. As for my background, one thing that really sparked my interest was some efforts we made in response to the replication crisis that we really became aware of in psychology around 2012.
ALEX HOLCOMBE: Here I am in a comic book about the replication crisis. And I've just got a plug here for a metascience conference that I'm involved in, which I encourage you to check out a little later this year. But in response to the replication crisis, one thing that we started doing was soliciting research projects in psychology that involved many more authors than you would traditionally have on a manuscript in experimental psychology.
ALEX HOLCOMBE: Because what we were doing was coediting a new article track where we would have many labs contribute to these replications of classic studies. And so in some of these labs, the people involved, some of them were crucial, but they were really mainly doing data collection. They weren't terribly involved in the writing of the manuscript or even the conception of the project or how the data would be analyzed, et cetera.
ALEX HOLCOMBE: But it was only thanks to that kind of specialization of roles that we were able to get this somewhat definitive experiment done with very large amounts of data. So I realized that the specialization of roles is really important in science. But I noticed, as a journal editor working with these contributors who were sending in their manuscripts, that the authorship guidelines we traditionally have-- for example, from the International Committee of Medical Journal Editors, which sets guidelines that have been adopted by over 3,000 journals-- really have roots in a writing-based conception. As you might expect with the word "author," the idea is that who's an author is all about who writes the paper.
ALEX HOLCOMBE: And their criteria also really emphasize intellectual content, which is never defined, but worries me as potentially an elitist sort of thing where there's a certain core group of people that are the masterminds behind things, and lots of other people, they may be contributing to a research project, but they don't really matter, or they shouldn't be authors on a paper.
ALEX HOLCOMBE: And that really can hinder the specialization of roles that we need in today's science, because it means we're not giving credit to people whose contributions we needed in order to get that work done. It's really unfortunate that we haven't much updated this conception of authorship toward what I might call contributorship, in contrast to other realms of society.
ALEX HOLCOMBE: For example, already 250 years ago, in the very first sentence of Adam Smith's The Wealth of Nations, he noted that the division of labor had, he thought, increased productivity for the world more than any other factor. And similarly, Kant in 1785 was writing things like "where work is not differentiated and divided, where everyone has to be a jack of all trades, the crafts remain at an utterly primitive level." And actually, that's what I felt or recognized at the time.
ALEX HOLCOMBE: Back in graduate school when I was getting my PhD, I had certain specialized skills, but it was clear to me that the route was that I needed to become a-- to get a job, I should be a PI, and I should be able to handle everything in the lab. There wasn't really a route to specialize in something like computer programming even though everybody recognized that we needed more computer programmers in science.
ALEX HOLCOMBE: Now, fortunately, despite authorship guidelines holding us back somewhat, we have nevertheless seen specialization in science. But I contend that science will become more effective more rapidly if we start encouraging specialization when it's appropriate rather than hindering it. So I and many others have advocated for moving away from these traditional authorship guidelines to have a more inclusive framework which you might call contributorship to move away from the word "author," which has got this writing connotation, to recognize broader sets of contributions.
ALEX HOLCOMBE: And we created a tool, which I'm going to get to, to facilitate the reporting of this. But before I get to that, I just want to make note that here, I'm talking about one reason why we should say who did what in a research project, why we should attach names to a research article. And I'm mainly talking here about-- well, I'm really only talking here about assignment of credit. And of course, that's really important for individuals to increase their prestige, get promoted, or just feel good about being recognized, or just morally, that's appropriate.
ALEX HOLCOMBE: But also, of course, there's this other side of we need to credit people who contribute things in order that we can allocate finite scientific resources efficiently. The grant funder-- they need to be able to recognize what teams of researchers are going to be the best for a particular project. And just knowing who did what on all these different research papers is going to facilitate that.
ALEX HOLCOMBE: But there's another kind of reason to say who did what which I'm not going to talk about, but I hope it comes up in this session, which is that we need to also certify who is responsible for the content of a research article so that when questions start getting asked about potential suspect data or something like that, we need to be able to know who we can call on to take responsibility in terms of doing some additional investigation about the records that they had associated with this.
ALEX HOLCOMBE: And I feel like CRediT-- the framework we're talking about-- is more designed for the assignment of credit than of responsibility. And this has always been a tension in authorship, because it's not clear within this list of names who's most responsible when questions start getting asked. But I think that CRediT, by indicating who did what, provides some progress on that issue, because traditionally we just have this list of names attached, and there's a diffusion of responsibility because there's no indication of who did what.
ALEX HOLCOMBE: So if a question gets asked about a particular part of a paper, everyone can point the finger at everyone else without there being any documentation to indicate who was more responsible for that part. So I think CRediT helps a little bit with that, but isn't the full solution. But really, I want to talk about how you indicate who did what in a concrete fashion. So my collaborators and I created a tool that we call Tenzing, which is to help authors document which researchers contributed to which aspects of the research project.
ALEX HOLCOMBE: And we thought this tool was needed for two basic reasons. One is that the roles of the different people involved in a research project should be agreed on before submitting the manuscript to a journal website. Currently, a lot of the time, what happens is that authors don't really discuss which of them contributed to which of the 14 items of the CRediT taxonomy during the project.
ALEX HOLCOMBE: But then, when they submit to a journal website, they see they have to indicate who did what in the CRediT taxonomy. By then, it can be a bit of a problem, because a lot of the researchers, including even the first author, are now mainly working on some other project, and they may not remember very well who did what. And their memory is going to have a self-serving bias, I think, as we know from psychology.
ALEX HOLCOMBE: So this can start creating conflicts, in terms of people saying, well, no, I did more of that, and so on. And there wasn't something out there that we knew about that facilitates researchers documenting this-- well, there are different ways you could document it, but we wanted something specific that would link directly to CRediT.
ALEX HOLCOMBE: Another issue with the present journal process is that submitting articles to journals is already very time consuming, in terms of all the information you have to enter. I've cursed many a journal website. Adding this CRediT burden makes it even more time consuming, so we wanted to make it easier. So what we did is create a tool that interfaces with a basic Google Sheets spreadsheet.
ALEX HOLCOMBE: I hope you can see this. So this is our little article introducing our Tenzing tool, which is named after the Sherpa who probably didn't get as much credit as he deserved for his part in the first ascent of Mount Everest with Edmund Hillary. So this is our Tenzing website. The way it actually works is that you go to this info sheet template, which is basically a Google Sheet in which researchers have one row for each contributor.
ALEX HOLCOMBE: And then there are little checkboxes for each of the categories of the CRediT taxonomy to indicate who did what. We also collect some additional information, which gets spit out in various formats, that helps researchers properly format their manuscripts for a lot of journals. But a lot of this is about providing something that researchers can circulate among their team during the project, before they come to journal submission, so they can get some kind of rough agreement on who did what and have the right expectations about who's going to get recognized for which things when it comes to journal submission.
ALEX HOLCOMBE: So then, as the instructions on our website say, you make a copy of this template, and then you can, of course, add your own information and circulate it among your research team like you can any Google Doc or Google Sheet. And then you download the document in any of various formats; here I'm doing it as a Microsoft Excel file.
ALEX HOLCOMBE: Then, when we go back to the website, you upload your info sheet, and that allows it to format that information in a way that's suitable for pasting into the manuscript that you're going to submit to a journal. So here, for example, is a succinct way of indicating who did what with the CRediT taxonomy that can then be pasted into your journal article manuscript when you send it to a journal.
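[Tenzing itself isn't shown in code in the talk; the sketch below, with invented contributor names, just illustrates the kind of transformation Alex describes: a sheet with one row per contributor and one checkbox column per CRediT role, collapsed into the free-text contributions statement that gets pasted into the manuscript.]

```python
# Collapse a contributor-by-role table into a contributions statement.
import csv, io

SHEET = """\
name,Conceptualization,Data curation,Software
Ana Silva,x,x,
Ben Okafor,,x,x
"""

rows = list(csv.DictReader(io.StringIO(SHEET)))
roles = [field for field in rows[0] if field != "name"]
statement = "; ".join(
    f"{role}: " + ", ".join(r["name"] for r in rows if r[role].strip())
    for role in roles
    if any(r[role].strip() for r in rows)  # skip roles nobody ticked
)
print(statement)
# -> Conceptualization: Ana Silva; Data curation: Ana Silva, Ben Okafor;
#    Software: Ben Okafor
```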
ALEX HOLCOMBE: Because a lot of journals don't have the machine-readable thing where you actually indicate it in a form; instead, you can indicate it in an author contribution section with free text. And then it also provides this little thing which can help with the title page of your manuscript. But what we really want to get to as well is allowing authors to upload the XML data, which is what the journal really uses behind the scenes to make the CRediT information machine readable, rather than authors having to go through all these very time-consuming web forms.
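[For a sense of what that machine-readable XML looks like: JATS, the XML format most journals use behind the scenes, lets a <role> element inside a <contrib> carry vocabulary attributes pointing at CRediT. A minimal Python sketch with invented author details; exact attribute values journals require can vary, so treat this as illustrative.]

```python
# Emit a JATS-style <contrib> fragment carrying a CRediT role.
CONTRIB_TEMPLATE = """\
<contrib contrib-type="author">
  <contrib-id contrib-id-type="orcid">https://orcid.org/{orcid}</contrib-id>
  <name><surname>{surname}</surname><given-names>{given}</given-names></name>
  <role vocab="credit"
        vocab-identifier="https://credit.niso.org/"
        vocab-term="{term}"
        vocab-term-identifier="https://credit.niso.org/contributor-roles/{slug}/">{term}</role>
</contrib>"""

print(CONTRIB_TEMPLATE.format(
    orcid="0000-0002-1825-0097",  # example iD from ORCID's documentation
    surname="Silva", given="Ana",  # invented author for illustration
    term="Data curation", slug="data-curation",
))
```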
ALEX HOLCOMBE: If you've got, like, 30 authors, it could take you a lot of time to tick off, for all 30 of them, which parts of the CRediT taxonomy they actually contributed to. So we've contacted publishers about being able to upload something like this to their journal management systems, but we don't have anybody who's done that yet. So that's our little tool. And I look forward to hearing other people's thoughts about CRediT and how to make it easier for authors to use.
ALEX HOLCOMBE: Thank you. [MUSIC PLAYING]