Name:
Transparency in Research to Support Credibility and Trust
Description:
Transparency in Research to Support Credibility and Trust
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/cb204bd1-7747-4817-80dc-6168802c984b/thumbnails/cb204bd1-7747-4817-80dc-6168802c984b.png
Duration:
T00H58M03S
Embed URL:
https://stream.cadmore.media/player/cb204bd1-7747-4817-80dc-6168802c984b
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/cb204bd1-7747-4817-80dc-6168802c984b/GMT20251007-165911_Recording.cutfile.20251007221601544_1920x.mp4?sv=2019-02-02&sr=c&sig=9UqPEF1PQpf%2Fq9BfcGS1qa11sZS1gFKZKEqAvleLugI%3D&st=2026-04-03T14%3A33%3A03Z&se=2026-04-03T16%3A38%3A03Z&sp=r
Upload Date:
2026-04-03T14:38:03.4924876Z
Transcript:
Language: EN.
Segment:0 .
Good afternoon. Well, that's loud. Good afternoon, everyone. Hope you enjoyed your lunch and the roundtable discussions before lunch. I know we ran over a little bit at our table because we got so enthralled in the discussion, and the time just flies by when you start talking.
So we have a session planned for that post-lunch, settling-back-in stretch: Transparency in Research to Support Credibility and Trust. We're joined by two speakers here in the room, Jonathan and Mia, and online by Jeff, whom many of you know. And before we kick off, I just wanted to talk a little bit about transparency.
So I'm on the New Directions Seminar planning committee, as mentioned. I was a little bit late to the meetings and I was told, you can work on the transparency session. And I was like, can you be a little bit more specific? And they're like, no, no, we can't. So I thought, well, let's think about what transparency means.
Because obviously it's a huge topic. Shout-out to Megan for bringing that up this morning. I think we all want more transparency in things. And I was thinking this morning, listening to the remarks, about transparency and silence as two ends of a spectrum: to be completely opaque, you just don't say anything. And we're all operating somewhere on that spectrum.
So when I thought about reaching out to speakers to join us here today, I wanted to represent the varying levels of transparency, but also the different audiences for transparency. Because when you think about wanting to be transparent: what is the purpose of that? Who are you being transparent to? Who are you being transparent for? What does transparency bring
to all the stakeholders in our industry? So we're going to move through a progression of different levels of transparency, with different audiences and different purposes, with our speakers. And when we get to the question period, we hope that you can not only fill in perspectives we have not been able to represent here today, but also make the connections about what transparency means to you and your constituency.
So I am now going to go through the code of conduct slides. And, this worked just a minute ago, what is it, gremlins? All right, step back. Oh, there we go. You have to do some kind of a weird gesture in between each one, and then it will advance.
So there we go: code of conduct. And then I will hand over to Jonathan to kick us off. Sure, we'll just watch. All right. So, hello, everyone. I am Jonathan Schultz. I am the senior director of journal operations for the American Heart Association, one of the many societies represented in this room today.
And I'm here to talk about conflict of interest and transparency. Did I get that right? Yep, there we go. Conflict of interest disclosure is very important to the American Heart Association. We have a very comprehensive conflict of interest policy that is available online; you can find it here.
It really comes down to how we help establish public trust in our science and in the information that we put out. Our policy covers volunteers, editors, leaders, and officers. As an example, editors-in-chief can't be, let me get this right, they can't be a PI on an industry-sponsored trial while serving as an editor. We get a lot of pushback on that, but we hold a very tight line there.
We have similar principles for authors and reviewers of our journal articles, but of course we can't be as strict there, so we really focus on emphasizing disclosure and transparency, which aligns with the International Committee of Medical Journal Editors' position that transparency is the key for author disclosures.
Because transparency really is essential. As we tell our authors: provide readers of your manuscript with information about your other interests that could influence how they receive and understand your work. And we ask for both actual and perceived COI. Actual is the real potential conflicts of interest; perceived is what a reader could potentially think was influencing your work.
A lot of times we will get pushback from authors who will say, I don't want to disclose this, it really has nothing to do with anything, but I'm not sure. And anytime they say that, we say, well, just disclose it. Be safe. Always be transparent. Because they have the concern, and rightly so, that we're in a time, especially now, of weaponized disclosures, where somebody will look at this and go, you were sponsored by this, or you have this relationship with this organization or this pharma or whatever, and that means all of your work is suspect.
That is a real concern. The rebuttal, though, is that the cover-up is usually worse than the crime. There are a lot of journalists, like the ones pictured, or whistleblowers, or independent investigators who, if they were to discover that you have a relationship that was not disclosed, that looks so much worse than if you had disclosed it in the manuscript; it feels like you're hiding something.
You can rationalize it away, but it definitely makes you sound guilty if you haven't disclosed it. So we always err on the side of being as transparent as possible. But, full disclosure, collecting COI is a pain in the butt, so what I really want to focus on is how we've tried to make it easier for authors, and for everyone, to collect these conflict of interest disclosures. The way we used to do it in the past, like probably a lot of people, is we had a blank form that basically said at the top, here's what we consider a conflict of interest to disclose.
And it was a blank form, basically, and that caused a problem. It's the same thing as a blank page: you go to write something down, you're looking at a blank page, and you don't know what to put. Maybe you've done this a lot and you already have a spreadsheet or a Word doc saved that you can copy and paste in.
But a lot of people haven't done that. So we were getting a lot of incomplete forms. We would have to go back afterwards; readers would alert us, oh, this person was funded by such-and-such. We were getting a lot of incomplete information, plus the information we were collecting was just coming through as free text; it wasn't structured in any way.
So there's really nothing you can do with it, and it made it very hard to track. So a couple of years ago, the American Heart Association decided that to combat this, to improve this whole process across the board, we were going to work with Convey. Convey is a web-based financial and relationship disclosure platform, and you can read the rest of the description there.
What is important about it is that it's not tied to the AHA; it's actually produced by the AAMC. And it's not tied to a specific article, a specific journal, or any organization. It's tied to the individual, so it's an individual's repository of all their relationships.
We adopted it society-wide, so we use it to collect conflict of interest forms for editors, for committee members, for volunteer staff disclosures, for all of our writing groups, and all of our journal author disclosures are collected in Convey. The form we use in Convey is based on the ICMJE form.
And since we started this in about May 2021, we've collected over 180,000 author disclosures. All right, so a little bit about our process. We have it integrated into our tracking system, eJournalPress (EJP). For all of our manuscripts at the revision stage, all authors receive a link to complete their CTA or license-to-publish form and also a link to complete their COI form.
Previously, as I mentioned, that was a link to just a blank form in the system. Now they receive a link to Convey, where they complete their COI disclosures in the Convey system, and the PDF gets sent back to us along with structured XML from Convey. When authors encounter it for the first time, they do have to create an account, and they do have to enter all their relationships, which can take a bit of time.
There are specific forms asking specific questions: when did the relationship start, what were the payments, that sort of thing. But going forward, users only have to update their relationships, or if nothing has changed, they just select the ones that are relevant to the manuscript, answer a few questions, and they're done.
So ideally the second, third, fourth time you come back goes really quickly; it should only take a few minutes to complete a form versus a very long time. Convey also has some tools built in. For example, it links directly to the Centers for Medicare and Medicaid Services' public database, where clinicians' reported industry relationships are listed for anyone to see.
So it brings those up and you can compare the two, and you can update your relationships directly from that. We think there are a lot of benefits to this. We definitely think it's an improved author experience; it's much more guided. It's not that blank form they were getting, and they're putting much more complete information in there.
And, like I said, it makes it easier: the first time is a bit of a pain, but the second and third times are a lot easier. We're getting the information in a much more standard and reusable fashion because it comes back as XML data. We can also export all sorts of reports and information from Convey itself, and that's really helpful for the society-wide disclosures.
For example, for our editors, we have over 300 editors who handle manuscripts, and we're able to export a report of all of their disclosures that we then post online as a PDF. Doing that is much easier now that we have a way to collect and track it all easily.
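To make the "standard and reusable" point concrete, a minimal sketch of this kind of aggregation step might look like the Python below: it rolls per-author structured disclosure files into one society-wide report. The XML element names and the file layout here are assumptions for illustration only, not Convey's actual export schema.

import csv
import glob
import xml.etree.ElementTree as ET

# Hypothetical layout: one structured disclosure XML file per author.
rows = []
for path in glob.glob("disclosures/*.xml"):
    root = ET.parse(path).getroot()
    author = root.findtext("author/name", default="")
    for rel in root.findall("relationship"):  # assumed element name
        rows.append({
            "author": author,
            "organization": rel.findtext("organization", default=""),
            "type": rel.findtext("type", default=""),
            "amount": rel.findtext("amount", default=""),
        })

with open("society_wide_disclosures.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["author", "organization", "type", "amount"])
    writer.writeheader()
    writer.writerows(rows)

Because the disclosures arrive as structured data rather than free text, a roll-up like this, or the posted PDF report Jonathan mentions, becomes a few lines of scripting instead of a manual copy-and-paste exercise.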
Of course, we had some lessons learned. I will say users don't like setting up new accounts; that's still the one complaint that we get. We got a lot of those at the beginning, as people were first encountering this, but over time it's gotten a little better. I will also say, full disclosure, since we're talking about COI, I am not compensated by Convey in any way.
I'm doing this of my own accord, but there is a network effect: the more people, organizations, and journals that are on it, the more people already have a Convey account with their relationships in it, so we get fewer complaints on our end. So I evangelize about it in that regard. If you do something like this, I definitely encourage you to consider the author experience, because you do want to make this better for people.
I will say that when we adopted the initial ICMJE form, it was a very different form from what we're using right now, because in addition to the repository part, where authors could just select what they considered a conflict of interest from their repository, it also asked a bunch of redundant questions, like: are you sure? Are you sure?
Are you sure? And then you had to enter more information. So the second time was taking just as long as the first time. We went back to our legal team and said, do we really need to ask all these additional questions? And they said no. So we now have a much simpler, much more straightforward form that is much easier for authors in the long run.
And right now, our next step is looking into how we can improve the tracking and checking process once we get this information back. We provide it to corresponding authors, and we provide it to staff so they can compare the individual author forms with what's in the manuscript. But that's still a manual process, and it takes a lot of time. So we're looking at ways to automate that process, to compare things a little bit better.
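As a sketch of what that automated cross-check could look like, the snippet below flags organizations that appear in a structured disclosure file but never show up in the manuscript's printed COI statement. The element names, file names, and the simple case-insensitive matching are illustrative assumptions, not the AHA's or Convey's actual implementation; a real tool would also need fuzzy matching for organization aliases and abbreviations.

import xml.etree.ElementTree as ET

def disclosed_orgs(xml_path: str) -> set[str]:
    # Collect organization names from a (hypothetical) structured disclosure file.
    root = ET.parse(xml_path).getroot()
    return {
        rel.findtext("organization", default="").strip()
        for rel in root.findall("relationship")  # assumed element name
    }

def missing_from_statement(xml_path: str, statement_text: str) -> list[str]:
    # Return orgs present in the structured disclosure but absent from the statement text.
    text = statement_text.lower()
    return sorted(
        org for org in disclosed_orgs(xml_path)
        if org and org.lower() not in text
    )

if __name__ == "__main__":
    statement = open("manuscript_coi_statement.txt").read()  # hypothetical file
    for org in missing_from_statement("author_disclosure.xml", statement):
        print(f"Check: '{org}' is in the disclosure form but not in the manuscript statement")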
So that's about it. I think we're doing questions at the end, right? Yes. OK, so I will hand it off. What was the move supposed to do? Are you going to just, like, Wonder Woman?
Wonder Woman, yeah. OK. Hi, everybody. It's Mia; yeah, my name is there. OK, good. My name is Mia Ricci. I am the director of publications operations for AGU. You're in my building.
Welcome, everybody. Hey, I have to do the plug. OK, so I am here to talk about publishing policies for ethical, equitable, and transparent research collaboration. I'm reading from a lot of notes, and there's going to be a lot of words on the screen, partly because I'm very bad at synthesizing for PowerPoint. But the other part is that the project I'm about to talk to you about is a community project with very intentional language, and I cannot paraphrase it.
A lot of what I put on the screen is wording that has been discussed and argued over, and I can't mess it up, so I'm going to read some of it here. This session is about transparency in research to support credibility and trust. We talked this morning about trust in science by the public, but there also needs to be trust between publishers and researchers within our scholarly communication ecosystem.
There's work to be done on that part, a lot of work. Transparency can mean many things. Today I'm focusing on research collaborations: how there need to be better publishing policies and guidelines to promote transparency in research partnerships, in community relationships, and in research data management and research data sharing. An example of this is a community-led project to create guidelines for Indigenous data and Indigenous data governance in scholarly publishing.
This project is a collaboration between AGU, NISO, the Collaboratory for Indigenous Data Governance, and the Research Institute. There has been increasing representation of Indigenous knowledge in Western science. There's a big movement around open data, open software, open code, and sharing, and open data increases access to complex and large data sets for innovation, for discovery, for decision making.
But Indigenous peoples' governance rights over this data remain limited. So I want to talk to you a little bit about the CARE Principles. If you're not familiar with them, they are a set of principles and a framework for Indigenous data sovereignty. CARE stands for Collective benefit, Authority to control, Responsibility, and Ethics.
Increasingly, the CARE Principles for Indigenous data governance are being applied and operationalized in various phases of the data ecosystem. However, there have been inconsistent practices across the data lifecycle and among data actors. What we did in this project is convene scholars, publishers, editors, and metadata experts to develop publishing guidelines that operationalize Indigenous data sovereignty across the data and research lifecycle, to make sure that data used in our journals or books is not only FAIR, which I think a lot of
you have already heard about, findable, accessible, interoperable, and reusable, but also CARE compliant. There is a question that was asked in an article when the CARE Principles were announced a few years ago, by Tahu Kukutai and colleagues, who asked, quote: how do we, university research institutions, move from extractive, transactional, colonial data practices and reconfigure power relationships to put Indigenous data in Indigenous hands?
In the slide here, there's a graphic that might be hard for you all to see. This graphic was created by Professor Stephanie Russo Carroll, one of my partners in this project and an author of the CARE Principles. The graphic shows that publishers are one of the entities with institutional responsibility for Indigenous data governance that upholds Indigenous rights and interests.
So this is a lot, and we have been working on this project for over two years at this point. There are now 100-plus participants; I think a few of you are in this room. It is truly a community project, but it is led by our Indigenous partners, Indigenous researchers. We had four workshops between November and July. The guidelines themselves will be published before the end of the year, and I will be knocking on your virtual doors to talk to you about them and maybe help you if you're interested in thinking about this for your organization.
So I'm going to give you a little bit of a preview of what it is. What does it look like to operationalize the CARE Principles in publishing? It's basically including Indigenous voices across the entire ecosystem, within the various stages of publishing. The graphic on the screen right now was created by my colleague Kristina Vrouwenvelder.
This is the traditional publishing workflow. You might not recognize it exactly for your own organization, but for the majority there's roughly this process. Publications, as mentioned, are one component of the scholarly research ecosystem. But even within our process there's all of this: there's authorship, there's attribution, there's metadata, there's editorial policies, there's peer review, there's indexing, there's linking back to communities, all kinds of stuff.
So there need to be some kind of guidelines to guide our industry. But these guidelines have to be created in collaboration with the Indigenous community, with the people who own the data, the people who live on the lands where the research is being done. Anyway, I'm just illustrating that there's a lot here, and it's a layered and complicated process, but it's something that can be done, and it can be done together if we do it as a community.
The guidelines themselves are structured following the various stages of the typical scholarly publishing workflow. There are about 25 pages so far and 60-plus recommendations, which I know is a lot. Who has time? You have time. You could do maybe one or two, but there are 25 pages and 60-plus recommendations.
What's great about it is that within each set of recommendations there are examples. We don't talk to each other a lot as publishers, as journals; one journal might be piloting one thing and another is doing another thing. Just from this project alone, I learned about so many things people have been doing in this area that I was simply not aware of.
So the guidelines are a lot, but from there you'll be able to see, for whatever stage of publishing you work in, whether you're on an editorial team, a production team, in marketing, in metadata, there's an example of something. I think this is going to be tremendously valuable. I want to take you through a few examples. At our final workshop in July, we asked all the participants to rank their top three or four from the 60-plus recommendations:
which ones do you think you could most likely take home to your organization and start to explore and implement? And this is the top four. I'm not going to read it all to you, but I'm going to give an example of the second item, which is asking authors to clearly describe Indigenous peoples' engagement in their research at submission. There is an example; journals are already doing this.
The Canadian Journal of Public Health has been doing this for a while; this QR code leads to their press release about it. And these are the questions that every author who submits to this journal must answer when uploading their manuscript. So there are folks out there who are experimenting with this and doing it.
And we should know about it. I would love to know the language, and maybe it's something I can model. At the same workshop, we asked our Indigenous scholars: what do you want publishers and the publishing ecosystem to do? What is your top list? You're not a publisher; you're an Indigenous scholar, an Indigenous person.
What do you want them to do the most? And these are their picks. Number one is they want us to rethink the way we think about authorship and reflect that in our requirements, to increase community recognition. Let's see, I think, oh yeah, that's right. The examples here are from AGU; those are my journals. One example: you might have heard that not just AGU but a few other journals have been adopting what they call a parachute science policy.
At AGU, we launched ours last year, called the inclusion in global research policy. If you're not familiar with the concept of parachute science or helicopter research, it is when researchers from high-income countries or communities fly in to do research in a low-resource or low-income country, and then they fly out and they don't recognize their local partners. They don't credit the folks they worked with.
They do damage; it's unethical and it's harmful. But this has been happening since the beginning of research, since the beginning of international research collaboration. So now journals are doing their part by asking questions about this. For AGU, this policy is an extension of our authorship policy. You are asked at submission to look at our policy and to include your local collaborators as co-authors if they meet our authorship criteria, meaning they have reviewed the paper and agreed to all of it, or else you have to acknowledge them in the acknowledgments section.
You also have to include an inclusion in global research statement saying how you addressed the ethical and scientific considerations. Did you get permits? All that good stuff. When I first started telling our editors about this, and we have about 1,000, they were like, oh, Mia, this is terrible. Everyone's going to hate us.
This is impossible. It's been a year, and we have not gotten a single complaint. Not a single complaint. Wait a minute, I want to give you an example. What? Bob? Next slide. Thank you.
Bob, I think I'm already way over time. Should I keep going? OK, I have a case study, and then I can show you what it all looks like. The case study is with one of our journals, called JGR Biogeosciences. The editor-in-chief wrote to me and said: I got this paper. The data was collected on Indigenous land, so the CARE Principles should be applied.
The authors made their data and code available for peer review, which is required for our journals, but they are restricting access to the public if the paper is accepted. And because these guidelines that Mia and her collaborators have been working on for two years are not yet available, what should I do as an editor-in-chief? Can I get some advice? What should I tell the authors about their data statement?
So I thought about this for a while, and I reminded her that we are actually already doing a lot of the recommendations that are in the guidelines. Upholding Indigenous data sovereignty in scholarly publishing looks like a bunch of different things, and that includes having an open data policy that pays attention to where the data is coming from.
We're already doing that. Our data policy also recognizes CARE; it mentions CARE specifically, but it also says that data or software should be as open as possible and as closed as necessary, meaning you have to be flexible. In a case where the data belongs to a community, you have to respect that.
Then we have the inclusion in global research policy, the parachute science policy; again, authors are now asked to detail their partnerships with local communities, which includes Indigenous peoples. And the last bullet I put in there is that our editors are trained and engaged on DEI best practices, because for us at AGU, discussing diversity, equity, and inclusion is something we do a lot.
It's a tricky conversation, but because we have now done it for so many years, it's part of who we are as an organization. Editors aren't surprised when I come to them, and I don't have to fight every single time to explain why it's good to think about our local partners, why sharing credit is good, all of that. So I put that in there.
So anyway, two more slides. Sorry, Jeff. OK, so next: the editorial board consulted with each other, and they decided that, one, the data must be made available to reviewers and editors for peer review; two, the CARE Principles do not apply to the code in this particular manuscript, so the code or software must be publicly available;
and three, the authors are asked to provide an inclusion in global research statement. When they talked to the authors, they also gave them a little more, what's it called, guidance. They said to the authors: yes, we understand why the data is restricted, it belongs to the Indigenous community, but why not say that, for transparency, in your data statement?
So they asked the authors to mention the CARE Principles, because the way it was written, it sounded like just a high barrier to accessing the data rather than a process of respect and communication around the data. It needs to be detailed so the reader understands. And restricted data that is managed by an Indigenous community has to offer multiple ways to request access.
It can't just be one person's email; it has to be sustainable, so it's a generic email address. So they talked to the authors, who were actually very excited to work on this, and they did it. They worked together. There's a lot of text up here, but this is from the finished paper.
The top part is the statement about their global collaboration. Within it, there is an acknowledgment: thank you to the Indigenous knowledge holders, the permits, the organizations they worked with. And then within the data statement, it also specifically mentions all those things the editor asked about: how the raw data is managed, and that it's stored jointly with the community,
not just by the researcher and the research lab, all that stuff. So this is what it looks like for this one case study. Wait, hold on, Jeff, I have a closing note. It's supposed to be inspiring, but I don't know, I wrote it this morning. Transparency instills trust.
It's not just about shining a brighter light on what we do as publishers; it's also recognizing the areas that are lacking and doing the work to improve them. I feel really encouraged that there are a lot of folks in our industry doing this work, making meaningful engagement, building the scaffolding, and making structural changes together with community members.
So it's not just us in our ivory towers, but us with community, with communities that we have not historically included. I think this co-creation, community-centered approach is one of the ways forward for transparency. I'll stop there. OK, thank you. Well, that was awesome and super inspiring. And I know when we all spoke as a group in preparation for this, we talked about how we were really going to be presenting these different ideas.
But I do think we all kind of came from the same perspective of thinking about how important the trustworthy and transparent elements of publishing are. So even though we were focusing on really different topics, it felt really cohesive even as we were starting to map it out. So fantastic segue. Thank you so much, Mia. And in the spirit of transparency, this is not where I thought I would be when I first agreed to give this presentation.
So I'm disappointed that I missed all the other great conversations going on there today. I do hope to be in person tomorrow if I can make it back in time, so I hope to get a chance to see many of you there. I am Jeff Lang, and I am the founder and CEO of Figure 2. So you can go ahead, whoever's advancing my slides and doing the weird pointy thing.
Thank you very much. I wanted to take one step back from transparency for a moment and get into that proverbial tree falling in the woods: if you publish your data, will anyone see it? Which I think is the mindset of a number of authors these days, and many times when they're doing other kinds of transparency work in addition to their data. And I think the real challenge is not only whether anyone will be able to follow a link from your paper to find it,
but what will they be able to do with all of that data that you make available when you are trying to be transparent? Here's an example, on the right side of the screen, of a very well organized box of 3.5-inch floppy disks. It's nice that it's in these two different collections, and there are labels, and there seems to be some organizational scheme here.
But after all of this time, would you necessarily be able to get useful data off of these just because the data is available? Are you able to access it? Are you able to do anything with the files? Even if you can access the files, would you be able to do anything with the data that's in them? And that's, I think, one of the major concerns about data transparency and data management statements these days: even if the data continues to exist in this state, will it continue to be useful?
And will people who have the opportunity to see that data continue to have trust in what they're reading and in the paper that makes assumptions based on it? Next slide, please. Here is another issue when it comes to trust. Some of you may have seen news recently about, I'm using air quotes here, but they're on the slide as well, an "actress", really a character, Tilly Norwood, who is being presented as a character that can be licensed out to movie studios as if she were an actress.
And the production studio that created and is promoting this character has been advertising a number of services for faster, cheaper, and less environmentally impactful video footage, the way they describe it. Some people will inevitably find these compelling reasons to use AI rather than traditional methods of gathering data, i.e. recording video.
And of course, this could already be happening in scholarly communications as well, and certainly is likely to in the future, if not yet. To address this new reality, all results will eventually need to display the hallmarks of good publication metadata, attribution metadata, things of that nature. This will be true of every element of the publication, not just of the article itself.
Cameras can already capture a lot of this metadata. That's really good, because you'll want to know where a picture was taken, who it was taken by, and when. But there will also need to be signatures on those files, so that we can trust that the camera said to have taken the picture really produced the picture you are seeing. Those trust markers will help us feel confident that what we are seeing is real, and not something fabricated or done as a shortcut to make it easier to produce that data.
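To make the capture-metadata point concrete, here is a minimal sketch, assuming the Pillow library, that lists whatever EXIF fields a submitted image already carries. This only reads embedded metadata; it does not verify any cryptographic signature, which is the harder trust-marker problem being described here. The file name is hypothetical.

from PIL import Image
from PIL.ExifTags import TAGS

def capture_metadata(path: str) -> dict:
    # Map numeric EXIF tag IDs to readable names; values are whatever the camera wrote.
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    meta = capture_metadata("submitted_figure.jpg")
    for field in ("Make", "Model", "DateTime", "Software"):
        print(field, "->", meta.get(field, "missing"))

If fields like Make, Model, and DateTime come back missing, that is not proof of anything nefarious, but it is exactly the kind of gap that file-level signatures and trust markers would eventually need to close.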
Next slide, please. We, of course, already have a lot of solutions for some of these problems, many of them invented or perfected by librarians. Evidence already follows a chain of custody: as soon as it's taken into custody, it's put in certain places, it's treated a certain way, and metadata and other processes are followed to make sure it is trustworthy.
The same, of course, is true for all kinds of collectibles, and other historical archives and artifacts follow that same process. It's hard enough for pictures, but when you think about these other materials, there's already a much more complex process in place. Yet for some of the data we're talking about here, there may be hundreds, or maybe dozens, if not fewer, people in the world who are truly capable of understanding what's going on in some of these data sets.
Will we expect those people to follow these processes in a manual way, the way much of this data is managed today? I think what we're going to need to do is build up a set of tools that make it easy to follow these kinds of processes, so that individual researchers don't have to go through a major effort to get that same level of trust and accountability applied to their work, and so that authors can easily get the kind of information that builds trust through transparency without having to do a lot of backflips to get there.
Next slide, please. Following my theme here of those 3.5-inch disks, I don't know how many of you remember that you actually could lock those disks back in the day. You had a way to slide that little plastic tab, and even maybe break it off if you wanted it to be permanently saved. But just because something can't be changed, or hasn't been changed, doesn't necessarily mean it is trustworthy and transparent.
There are other elements that make data trustworthy and transparent. Next, please. Here's a case where we use accessibility tools all the time as ways to make things more transparent and trustworthy. Certainly, if you're watching cinema in another language, it's helpful to be able to understand it through some of these accessibility tools. But for myself and a number of other folks I've spoken to, even material in my own language can be made more understandable by having the closed captions showing on the screen.
Next slide, please. So that's the problem I'm looking to tackle with Figure 2. Accessibility features make figures more understandable by people of all ability types. Figure 2 makes web-native visualizations that get a DOI and get archived along with the rest of the scholarly record, and they can be linked from a preprint PDF caption or embedded straight into a publisher's website.
Next slide, please. With Figure 2, your data and your figure are the same thing. There's no need to archive the data separately: your figure is your data, and your primary data is the foundation of your figure. So anyone who reaches your Figure 2 figure can compare it with the source. This goes for tabular data like spreadsheets and for imaging like cell lines.
Figure 2 lets authors use AI responsibly to save time configuring their figures, while keeping the source data separate and out of the reach of the AI itself. The result is a figure that you can trust, because you can trace it back to its source. You can go beyond the pixels and get to the data that tells the full story. So in my opinion, the combination of these tools that help us follow the provenance of a work
and make it more accessible for everyone, in the end, has the effect of making it both understandable, transparent, and ultimately more trustworthy. Thanks, and I hope to talk to a couple of you in the session room tomorrow. Thanks to all of our speakers. I have a couple of questions, but I'm cognizant that you all may have better questions than I have.
So if you have a question, please, and I'm having trouble seeing through this see-through podium, but I'm doing my best, please make your way to the microphone. Just be mindful that not everyone here knows everyone yet, so please introduce yourself and give your affiliation for everyone, especially for folks online who may be seeing you in a smaller format.
OK, well, we have a question. Yes? Ryan Johnson, I'm the head of research services in the library at Georgetown University. My question is for Mia, and it's actually a two-part question. I'm curious, first of all, what you mean by Indigenous. My colleagues work with researchers from all over the world on topics all over the world. Is Indigenous an American concept or an international one?
I ask just because that's a term which can mean different things in different places. And the second half of this is: for research that follows the CARE piece, the CARE standards, is that represented in the metadata so that, say in a bibliographic tool, I can limit to research that meets that standard? Is that an available metadata standard for researchers on the back end?
I'm coming at it from a different direction, but those are the questions in my head as you were talking. I know those are hard questions; I don't know how to fully answer them. But on the first question, what is Indigenous, why did we decide on Indigenous, and whether it's an American-centric, a US-centric word:
this is something that came up during the conversations. Some people use First Nations, some just use the word community, some use Aboriginal, and it really is different around the world. I don't think the goal of the guidelines is to decide which term is the official one. But for this document, as we're writing it, the project team, the original organizations that worked on it, were mostly US based.
So we decided on that. But we have a big caveat in the introduction section, which you will see when it's published, that talks about the language we picked, how it's not perfect and it's evolving, and that people should really take this as a starting point, take it from there, and see how it applies. And I think the goal is also to make this a living, evolving community.
I think we will be doing a Slack or something where people can join in and share their experience and feedback. So, great question. The second one, I don't know the answer to, but I will note it. We actually have quite a few metadata experts on the team. My part is the editorial policy part, but we have folks from ORCID, from NISO, and from Crossref on the team looking into that kind of stuff.
And yeah, that's a good question. Yeah. I have a follow-up to that first part. Do you envision that the principles could be used not just with Indigenous populations, regardless of what they're called, but, like what you said about parachute research and helicopter research, could they be applied to populations in different places?
Yeah, I mean, the parachute science policy, that one is actually applicable to all kinds of international research collaboration. But these guidelines are specific to Indigenous data. A lot of the examples are from organizations that work on that; you'll see them throughout the document, and it will link back to resources those organizations have published.
But yeah, maybe people will take it and get inspired to think more broadly, I don't know. Yeah. Jen? Hi, Jennifer. I think this is a question for Mia, but I'd be curious for others in the room as well. The NSF has a policy about getting permissions from Indigenous communities for research, that kind of thing, and there's a rule about it that came out last year, I think.
I don't know if your policy aligns with that, or if it's something anybody is paying attention to or enforcing these days. I don't know if anybody else has experience with it. Yeah, I think they do align. I don't know the details of the NSF rule, but I will tell you that, prior to this year, for one of the workshops we got a grant funded by the NSF.
So it's work that they've supported, and they've seen the details of what we aim to do. So yes, they've given money for it. We have not been able to use that money this year, but we'll see. Yeah, thank you. Great, I have a question, actually, for Jonathan. Whenever you're adding something new that authors are going to have to do, obviously there's a lot of discussion around that.
We do a lot of work with editorial boards, and I remember I was on a call one time, and one of the items on the agenda was: should the journal require ORCID? So the person on the call asked, should the journal require ORCID? And one person spoke up and said, oh, I can never remember my ORCID; I hate having to look that up.
And basically they decided not to do it based on that discussion, rather than framing it as: here are all the benefits you could have, now let's have a discussion. So when the idea first came up to move from just a blank form to using a system that was going to structure things, what kind of internal conversations were there?
What were the barriers that were put up, and how has that evolved over time? Yeah, I mean, it's definitely something that came up, and like I said, we still get complaints. There are people who to this day will refuse to be an author on one of our manuscripts because they don't want to go through this hassle.
I think that's a relatively small number; like I said, we've had over 180,000 author disclosures, so there are still a lot of people doing it. I will fully admit we probably should have done a better communication plan for authors and editors. It took a lot of work to get the implementation in place in EJP, and so when we were ready to launch it, we were like, let's just do it, when we probably needed to spend more time on that. It helped that it was something the AHA adopted association-wide,
because we could point to that. We had a lot of material already where we were saying, this is why we're doing it, really focusing on the mission of the AHA: transparency, the importance of transparency, the importance of making sure all of this is collected and made as public as possible. So I don't know if that exactly answers the question, but yeah, it's definitely something we took into consideration.
And I know we probably could have done a better job, but we're still working on that. I know when we had our planning call, Mia, you mentioned you could identify with Jonathan's experience, because you all were starting to get into the idea of collecting conflicts of interest. Who here from the AGU folks has been traumatized by this? Raise your hands.
There you go, my folks. Yes, this is something that we tried very recently and we failed at, in the sense that it was a hot mess. We're earth and space science. We have 24 journals, and one of them is a health journal, GeoHealth, so the health folks, they know about COI over there.
They've been doing it for a long time. The rest of them know only the very basics of it. So we tried to do the form. We were just going to do the form, full transparency, it was going to be amazing. And staff and editors, there were just so many complaints.
Nobody knew what it meant. Is this a COI? Is that a COI? This isn't needed at my institution for my research; I work with, I don't know, the moon. So we had to really backtrack. And when she says that, it could be true, right? We really did.
We really do work with the moon. But we realized that there also needs to be learning about the community: what do they know at their institutions, what kind of language are they familiar with, are there other journals doing this in our field? There's a lot of work to do before just putting a form out there, and a lot of upskilling with our editors and our staff to answer some of these questions.
So we kind of backtracked, and we're starting with something simple: we have a policy about conflicts of interest, please read it, disclose, and put it in your paper, but we're not using the exact structured form language yet. So it's something that we're still working on. But yeah, that was really hard. I'm sure folks can really identify with that. Christina? Hi there.
I'm Christina Drummond. I'm the executive director of the Open Access Book Usage Data Trust. First of all, thank you so much for spending this time talking about transparency and trust. My question really comes at transparency in the age of AI, when we are now learning so much more about all the potential creative uses, and I'll use that in an optimistic way, but also potential harmful uses of data that we make transparent with the best intentions, and then it gets scraped and repurposed and reused in all kinds of creative ways.
So I'm curious whether all three of you can speak to how you're thinking about the ethics. I like to use the word catastrophizing, but how, in your organizations, have you thought about what happens if this information is public? Does it need to be as controlled as necessary for these processes?
How about Jeff? Jeff? Or you can punt. We'll start with Jeff. I like this one. And I think it's both a challenge and an opportunity, because you're right, there are people who will inevitably try to use this for nefarious purposes.
But I think the vast majority of people are going to be using it in ways that simplify their process. And if we don't give them tools that help them simplify their process in ways that maintain provenance and make it easy to do the right thing, they will find their own pathways to doing it, in ways that we may have questions about and that may become problematic. So I think the right solution here, from an ethical standpoint, is to make sure we're finding ways to say: if you're going to use these tools, here are the ways we think are the right way to do it.
Here are things that make your job easier, as opposed to things that make your job harder, in order to comply with these policies. And I actually think, in the end, this could solve a lot of the problems we have, encouraging people to do things that can be appropriately tagged and appropriately identified if we start from the beginning. That's actually long been the dream of the taxonomy folks and others who want metadata included from the beginning.
We can say there's a reason now to use these tools, and they add the benefit of being able to add all of this extra metadata and provenance and security information right from the start. So, sure, these tools will be used in some ways for things that I think are problematic, but the more we can say, here's the right way to do it, and it's easier and maybe more fun, and maybe the output is something you like better,
I think that will be a compelling story for anybody. I'm curious, how have you been thinking about it in your organization, with this conversation? That's a great question, and actually a good pitch for tomorrow, because we'll talk about that at the first panel tomorrow. I don't have my speaker thing, but yes, we've thought a lot about this in terms of trust indicators.
We're actually using something called the Dataspace Protocol, which is emerging in Europe, to provide the mechanism to have what they call a data control plane. So if you have data, what are the controls you need to put in place that align with your data sharing and use policies? But in addition to that, if you think of the data we share, the metadata it needs, and this control over those data elements, how we align it with CARE, for example, you still need a mechanism to hold everyone accountable.
We can't just wait until the lawyers get involved after the harm has occurred. So to that extent, this additional layer around community governance, and having a trusted, neutral entity that can hold parties accountable, is a key piece of that. I'll talk more about that tomorrow. Yeah, no, thank you for bringing that up. I was talking to somebody about this, maybe over lunch:
how some of this work feels insurmountable. It's so big, and it requires an uprooting of the infrastructure, the architecture underneath. And it can't be something that a publisher does alone, or something that a librarian can do by themselves, or something that ORCID is just going to do on its own. It has to be everybody. But who's doing it? When do you do it?
Who's funding it? How long is it going to take? And, oh yeah, society is crumbling a little bit; there's all of that too. But to me, I've been in publishing for almost 20 years, and I do think in the last two or three years there have been more of these coalitions, these community efforts.
And, for better or worse, I think AI and the worries about the ethics, the concern, are giving us more of a sense of urgency to work together. So I really look forward to your talk tomorrow. Thank you. Yeah, just before Stacy asks a question: Jeff, when you were talking, I was thinking, could we run into equity issues? It's great if everyone has the fanciest new, I'm just going to say mass spectrometer because I think that sounds fancy, but maybe it doesn't.
Maybe it's an older model at a less-resourced institution or in an older part of the world, and they've got some equipment that's not up to date. Is there a risk that folks could be locked out of participating because they can't meet these super high standards of proof that the top journals are requiring? Yeah, I mean, it certainly is a risk. To your point, some of these tools you're talking about are generational tools, in the sense that somebody gets one and keeps it in a lab for 30 years, maybe more in some cases.
And so I think, you know, what the archivists have found for the most part, is that you don't have to necessarily go back to the very first instance. You don't have to know that this piece of history passed from the hands of George Washington into, you know, the next person and the next person. If you have that data, that's fantastic. That all gets added to the metadata and it gets added to the record.
So that people who are looking at it can know what level of provenance information is available. Any level of provenance information improves transparency and improves the trustworthiness of the material, so the sooner you start gathering it, the better. But to limit things and say we will only accept material with the highest level of provenance because it came directly from a machine, I think we're a very long way from expecting anything like that.
But in order to be able to make that common, we also need to work on those tools to make sure that we can make it easy to do and that it doesn't become disruptive. So I think the sooner we start, the better. But that doesn't mean that things that didn't start initially in the system will somehow be off limits because they didn't have that, you know, that highest level of provenance.
Thanks, Stacey. Hi, everyone. Stacey Berk, American Physiological Society. Mia and I were at the same table, and we were talking about AI, and how it's happening so rapidly and we don't know how to guide our authors. So, Jeff, I was wondering: are you partnering directly with authors? Is this an author service directly?
Or is it something where you're partnering with publishers at a system level and having them do it? Certainly, from my perspective, yes, I'm working directly with authors. We're reaching out to them; we're about to launch our tool and get authors coming in the door. But when we think about it broadly, I don't think it works as a standalone solution.
And maybe the same is true for the work that you're doing, Mia, and, Jonathan, I think absolutely: you're looking for information from others. The publisher can't add that information in each of the cases you're hearing about today. It doesn't work unless you're partnering with the authors. The farther upstream we partner with people, the sooner we can make their process easier.
I think of the conflict of interest information, the idea that you could have it in more than one place and use it in more than one system; I heard Jonathan say the more people using it, the less onerous it becomes for his group. I think that's true for all of these tools. We have to think not just about how we target specific areas, though we may pick priorities; we have to start thinking about how we cast a wide net and make this a common use case, a common practice, for everyone in any kind of data creation sphere, certainly for researchers primarily, but more broadly as well.
And I'm just going to piggyback again on what Mia is saying; she's been very influential today. No, but I mean, all of us working together. That's why these community meetings are so great: to learn about this. I'm already thinking, OK, we've got to connect with these folks. Yeah, me too.
And see how we're doing this and how we can improve our processes internally. So, Jeff, I'll be in touch. Thank you, Stacy. I also want to say that I know very little about metadata. Not everybody is supposed to know everything about everything.
My expertise is in talking to editors and getting them behind trying something new, putting my editor hat on and my staff hat on; that's what I know. I don't know a lot about the under-the-hood stuff, but there are folks out there who do. When I first joined this project, I thought, I'm in way over my head.
This is impossible. But actually, over the two years, we really needed everybody with their little bits of expertise piled on top of each other to create this kind of big thing. And now we're at the end of the process and I'm super stoked; I think it's going to be great. I'm sure some people are going to hate it, like with anything, but there are going to be people who are really excited and who say, this part,
actually, you should do it this way, I know one journal doing it that way, and then it gets better and snowballs. So, I don't know. Yeah, and I think the metadata question raises an interesting issue. NISO is always looking for ideas for new working groups that could become recommended practices, that could become standards, and I'd encourage anyone who has an idea, maybe one that hasn't really happened yet, where you're at the point where you're coming across
something and there's not a metadata field to express it: when that point does come, or maybe there are already people in the group working on it, I would encourage you to move into the standards space. There's one example: there's this thing called the TK Labels, or BC Labels. They're Traditional Knowledge Labels, and it's like a label the way the Open Data badge is a badge: a badge that says, if you have this badge, your data is accessible.
It's the same idea: a TK Label means the data belongs to a community, it's traditional knowledge, and it's been published with permission, and so on. The group working on this, Local Contexts is the name of the folks, is working with ORCID to build it into the infrastructure. So there are some people already working on this. Yeah, I don't know
why I mentioned it, but I got excited; I was like, it's happening, look into it. And Jeff, are the necessary tags and the like available to make sure the different types of conflicts of interest are expressed adequately? Sorry, Jonathan, the two J's. Yes, that's one of the benefits of it: it comes into us with the organization tagged, the amount tagged, all that kind of stuff.
So we can track it a little bit better. Yeah, and I do have a question for Jeff that was swirling around in my brain. I know it's kind of early days at this point for Figure 2, but I'm sure there are going to be metadata implications for tracking all of the kinds of things that are coming up. Are you already having conversations with standards folks along those lines?
A number of the standards folks, actually. So far we've been focused more on the preservation side of things, but I think that corresponds a lot with the metadata. Speaking with one of the folks at one of the preservation groups, they said, no one ever comes to us early. They were so excited to have an opportunity to be in on the ground floor, thinking about how we take this data that isn't simply easy to print out and make sure it is permanently accessible through multiple different levels of fail-safe.
So working with the community is a huge part of making sure this is not simply another fun tool to use, but actually something that fits into the packaging of the scholarly record so that people will use it. That's so interesting, because I've been working in the preservation space for quite a while, and I never hear of anybody coming in early in the process. So they must have just about fallen over themselves to help you with that.
So I appreciate that. If we have no more questions in the room, and hopefully you've stayed awake for this post-lunch session, I will just ask for one more round of applause for our speakers. Thank you, guys. And it looks like Letty has an announcement, so we'll hand over to Letty. We have a short break to prep for the next session, so don't go far.
We do have a few minutes. We are not taking a formal coffee break or anything this afternoon; we will have little breaks in between each panel. So this is your moment, but don't go far. We'll be back in a few minutes. Wonderful.