Name:
Trust in Science
Description:
Trust in Science
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/dc6cc9e1-d194-4c37-a661-1f5c6e71a57c/thumbnails/dc6cc9e1-d194-4c37-a661-1f5c6e71a57c.jpg
Duration:
T01H00M58S
Embed URL:
https://stream.cadmore.media/player/dc6cc9e1-d194-4c37-a661-1f5c6e71a57c
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/dc6cc9e1-d194-4c37-a661-1f5c6e71a57c/GMT20210325-150048_Recording_1760x900.mp4?sv=2019-02-02&sr=c&sig=KrH7zmEJJHe1R542ZF2fomtC8AToC791y8wuM8%2FUb70%3D&st=2024-11-26T09%3A27%3A09Z&se=2024-11-26T11%3A32%3A09Z&sp=r
Upload Date:
2024-02-02T00:00:00.0000000
Transcript:
Language: EN.
DR. KATHERINE PHILIPS GREFE: Thank you, and welcome to today's "Ask the Experts" panel. We are pleased that you can join us for today's discussion on trust in science. My name is Katie Grefe. I'm the working group lead for the SSP Education Committee and Associate Director of Education at the American Society of Clinical Oncology. Before we get started, I would like to thank our sponsor for today's event, Silverchair. I also have a few housekeeping items to review.
DR. KATHERINE PHILIPS GREFE: Your phones have been muted automatically, but please use the Q&A feature in Zoom to submit your questions for the moderator and panelists. Our agenda today includes time to cover whatever questions you may have, so please don't be shy about participating. At the conclusion of today's discussion, you'll receive a post-event evaluation via email. We encourage you to provide feedback to help shape future SSP programming.
DR. KATHERINE PHILIPS GREFE: It is now my pleasure to introduce the moderator for today's discussion. Anita de Waard is Vice President of Research Collaborations at Elsevier. Her work focuses on working with academic and industry partners on projects pertaining to emerging modes and frameworks for scholarly communication. Since 1997, she has worked on bridging the gap between science publishing and computational and information technologies, collaborating with groups in Europe and the US.
DR. KATHERINE PHILIPS GREFE: Anita has a degree in Low-Temperature Physics from Leiden University, and worked in Moscow before joining Elsevier as a physics publisher in 1988. Anita.
ANITA DE WAARD: Thank you so much, Katie, and welcome to everyone joining today's "Ask the Experts" panel. I am thrilled and honored to be joined by three experts, and I am very happy that you can all join us. Today's format will be that we will start with asking a couple of questions of these experts, which they will answer, and, after that, we do want to open the floor. We really want this to be an interactive session, so please post your questions in the chat, and we will try to address them as we move forward.
ANITA DE WAARD: So, my name is Anita de Waard. I use she/her pronouns, and it's my pleasure to introduce the three panelists. I will ask them in the order of backwards from the slide that you see, so we'll start with Eefke Smit. Eefke is the Director of Standards and Technology at the International Association of STM Publishers. She also coordinates the activities for STM members in the areas of technology developments.
ANITA DE WAARD: She works on the annual STM Tech Trends reports, and coordinates the work of the Future Lab group, and supports a number of other task forces, representing STM in a variety of industry-wide standards organizations and projects. She's been active in academic publishing for more than 30 years, and, as Katie said, we go way back, Eefke and myself. We met, I believe, in 1989, and it's really an honor and a pleasure to invite her on this panel.
ANITA DE WAARD: The second panelist is Richard Sever. He is the Assistant Director at Cold Spring Harbor Laboratory Press, and he is the Co-Founder of the pre-print servers medRxiv and bioRxiv. He also serves as Executive Editor for the Cold Spring Harbor Perspectives and Protocols journals, and he has a degree in Biochemistry from Oxford, and worked in Cambridge, and has been working as an editor for a long time.
ANITA DE WAARD: He moved to Cold Spring Harbor in 2008. Our third panelist is Tracey Brown. She is the director of Sense about Science, and has been the director since 2002. It is a charity that really works on sound science and evidence, and explaining these to a larger audience. They've launched initiatives such as AllTrials and the Ask for Evidence campaign, which really engages the public in requesting evidence for any claims, and she was made an OBE in June 2017, so it's quite exciting to have Tracey with us.
ANITA DE WAARD: And, also, she was made an honorary professor at the University College London in Public Policy in 2020. So I think these are three fantastic panelists to discuss today's topic, trust in science. And I'm going to ask the panelists to please, in turn, respond, first, to two different questions. First of all, how do you define "trust in science?" What do you mean by "trust," and whose "science" are we talking about?
ANITA DE WAARD: And then, what are some of the key challenges that you are grappling with? So I'd like to hand over, first, to Eefke, please, to address that. Thank you.
EEFKE SMIT: Thank you, Anita. Wonderful to be on this panel, not just because it's always fun if our roads cross, but even more because I think you chose this very timely topic, especially now that we are one year into the pandemic, and we could see, this year, that people wanted so badly to put a lot of trust in science. You know, when we saw people growing more desperate around the world, I sometimes had the fear, myself, that people wanted an overdose of trust in science, as sort of the final rescue to come from there.
EEFKE SMIT: And, if you then start thinking about what that "trust" of your first question really is, I observe, again with some fear, that people often mix up the two concepts of "trust" and "truth." You know, people have trust in things if they can see they're true and if, over time, it's proven that something is true. So a lot of people mix the terms "trust" and "truth," and that makes it very interesting, sort of philosophically, because truth, in science, is a very fluid thing. Something is true in science until it's replaced by a newer truth, which also means that we don't know everything for certain, especially in the case of a pandemic like this, because, you know, who knew about this virus, and what it would do, and about the medicines, and the treatments?
EEFKE SMIT: And, very often, it's difficult for people to grasp the idea that a lot of thinking in science is probabilistic. You know, probabilistic thinking requires a certain twist of the mind. And, also, that science is not a consensus-based activity. So, part of your question was also, what are we grappling with if we're talking about trust in science? And I think it's the mix of all these things: that trust has something to do with truth, that truth in science is a fluid thing, that it also requires understanding probabilistic thinking, and knowing that, in many areas, things move on, and, at a certain point, are no longer true, but are replaced by a better truth.
EEFKE SMIT: And, yeah, if you then look at what "trust" really is-- and you mentioned that, in STM, we annually do a Tech Trends forecast-- the motto of this year's report, and this is a little sneak preview, is "Seeking the Land of Trust and Truth," which is exactly, of course, in the middle of this discussion. And, for that exercise, I sort of parsed out the concept of "trust" into five elements, and we can discuss those later in more detail, but it depends a lot on transparency, on reliability, on predictability, on responsibility and accountability, and on self-correction mechanisms, and I would see those five as the pillars of trust in science.
EEFKE SMIT: I hope I answered your question a little bit in this way.
ANITA DE WAARD: Fantastic. Thank you so much. Richard, I'd love to hear your thoughts on both points.
RICHARD SEVER: Yeah, well, I mean, I think that what I would say is that trust is kind of multi-faceted. There's several aspects to it. And, as you mentioned, it depends who we're thinking of. I mean, I think that there are three, kind of, components, in some respects. There's the trust in the scientific method, which is the understanding of the scientific method, the understanding of what it means to make observations, to draw conclusions, and develop hypotheses, and test those hypotheses based on those observations.
RICHARD SEVER: And so, you know, that requires an understanding of what science is, and what it should be, and how it works, which not everybody has. Then there's the aspect of trust in the integrity and honesty of scientists, and this gets to the, kind of, transparency point, that people are motivated, and are genuinely seeking the right answers, and don't have conflicts that lead to kind of a loss of trust in an individual.
RICHARD SEVER: And then, finally, there's trust in the methods and the data that are used to draw conclusions, and those are all looked at in slightly different ways. There's also kind of an orthogonal way of breaking it down, which is the trust of the general public in science and scientists-- something that, given the audience here of publishers, will concern some publishers more than others-- and then there's the trust of other scientists in the work that is presented, and that is something that is of great interest to publishers.
RICHARD SEVER: I think some of the issues that we're grappling with, in the former case of trust in science from the general public, is that there's a bit of a gulf of understanding among the general public about how scientific consensus is arrived at. I mean, we've seen this with the autism and MMR connections, where people say, oh, well, you know, my kid had the vaccine and then got autism shortly afterwards, and the challenge is explaining the difference between correlation and causation, and that you have to do a study to establish that, because it just happens that the features that you see in autistic individuals emerge around the time people get all these vaccines.
RICHARD SEVER: That doesn't mean the two things are connected. So it's explaining that, explaining that, you know, if one person gives a treatment to a patient and they seem to get better, that doesn't necessarily mean that the treatment works. You know, the science itself, you arrive at consensus by a number of studies where observations are made, they're built on, and some kind of consensus is established through reproducibility of that work, and then, right at the end of the process, there's actionable studies, like clinical trials, and that's the kind of scientific method in the biomedical sphere.
RICHARD SEVER: I think, on the issue of trust among scientists, I think one of the challenges that we grapple with is we know that people's careers are critically dependent on getting, for want of a better word, "exciting" results, so there's pressure at every step of the way, from PhD through to somebody who's trying to get tenure, to yield positive results, and one worries that that pressure can become so extreme that you get selective reporting, and, in the worst cases, outright fraud, and this kind of gets to the point of the need for transparency that Eefke made.
ANITA DE WAARD: Thank you so much. So, I really appreciate that you're distinguishing between the trust by the scientists in the science and the trust of the general public in the process of science. And I think that's a great point to ask Tracey to comment on these points. Thanks.
TRACEY BROWN: Thank you. Thanks, Anita. Well, I very much agree with what Richard just said about making these distinctions between, sort of, institutions, and scientists themselves, and methods, and data. And, in many ways, we could look at the emergence of the COVID crisis as an opportunity for people to see a lot of the processes of science reaching consensus, the methods to cut through the noise, and try and work out what's really going on, and so on.
TRACEY BROWN: People have experienced that live and in real time in a way that I just can't see any other comparison to, and I think there is a lot to be embraced about that. You know, if I talk to my mom now about models, she knows that I'm talking about data, rather than about cars or fashion shoes, and this is quite a step up in terms of the conversation that many nations are having with their publics, and what a great thing, in many ways.
TRACEY BROWN: But I do think there's a concerning preoccupation with trust that we perhaps need to have a bit of a look at here. All the indicators tell us that trust in science is, if anything, at an all-time high in many countries-- the Pew Research last September showed that, apart from, I think, Malaysia, where it's a bit shaky, most countries around the world have, if anything, seen an increase in trust in science, and in medical professionals, and so on-- and yet there is a real feeling of anxiety about the extent to which scientifically based information has got a purchase in the public mind.
TRACEY BROWN: And I get this very strong kind of existential worry coming from the research community, and scholarly publishing, and other places-- whether we talk about the infodemic or we talk about fake news-- a real concern about losing that connection with people. And so I think we have to ask ourselves what we mean by trust in that context, and I find it very useful to think about a distinction that's been described by the philosopher Matt Bennett: you have epistemic trust, trust in the information that you're given, and recommendation trust.
TRACEY BROWN: And I think what the COVID crisis has done is put those two things right up there together, and it's a very confused and blurred line. It's like, well, do I trust you to tell me what's going on and to use adequate methods to try and find that out? Do I trust you to tell me how to run my life, and, what's more, am I prepared to actually run my life that way? That's another question, and they're completely mixed up together. And, in most nations, the advisory system, the supply of the data, and all that, is completely mixed up with the policymaking processes, and the guesswork, and all of that that's going on.
TRACEY BROWN: So I think that's an important thing for us to distinguish, because we know that, when we get into that domain, people's trust in political systems and politicians is a whole other thing, and that hasn't seen such high numbers-- up there in the 70-80% trust levels-- by any stretch. And I think what we need to be careful of in the scholarly community is over-internalizing a crisis of trust, to the point where we don't see the opportunity, and I do think there is a really big opportunity here to help people to navigate the world of information.
TRACEY BROWN: But if we come at it from a rather defensive, sort of backfoot position, then I think we won't see that, and I feel I'm very much borne out in that worry by the amount of public blaming that I see going on kind of on Twitter and in academic forums, and a sort of almost real, visceral kind of reaction against public conversations about research, which concerns me. So I think we need to kind of come at it with a more open kind of understanding that people rightly are suspicious of the authorities in their lives.
TRACEY BROWN: It's right for people to know that there are some interests in society that don't serve theirs. I think that's reasonable, and we ought to separate out from that our understanding of the information that's useful to us in our lives, and our decisions, and accountability, and so on. One of the things that I think we grapple with-- and I've said this about science-- is, you know, how we equip the public.
TRACEY BROWN: We equip people to see these things for themselves, to understand the relative merits of information, and we equip the research community to respond to that adequately. But one of the things, I think, that's troublesome for us, at the moment, is a big part of that is clarity about the status of research findings, and, you know, we've popularized an understanding of peer review. I think, you know, I'm rather proud of our global campaign to get people to talk about whether something is peer-reviewed.
TRACEY BROWN: It even was featured on The Simpsons. It's, you know, something that is in the vernacular for most people, for most of us, or most media reporting, anyway. But I feel that, at the moment, the research community itself is unclear about how to describe the status of research findings, or is finding itself behaving in ways that don't fit what it thinks is right-- like, in a very productive way, sorting out what it thinks about the latest CDC figures on Twitter, for example.
TRACEY BROWN: A really, really great exchange of views, but what status does that have now, when the picking-apart of the model there was perhaps more rigorous than the peer review of the model that was published? And so what to say about that? And that makes it very difficult for us to think about how to equip the public.
ANITA DE WAARD: Thank you so much, Tracey. Wow, already, in this opening round, we've had five definitions, and then three definitions, and then this very brilliant, philosophical distinction between epistemic and recommendation trust. I have the feeling we could easily fill an hour with this. But I'd like to bring us back a little bit to the activities that each of your organizations has undertaken regarding trust in science. So, I'd love to turn it back to Eefke: could you say a bit about how STM views this topic of trust in science, and what some of the initiatives are that you are doing within your organization?
EEFKE SMIT: Thanks, again. And, actually, the activities we undertake directly tune in with several of the things that Richard mentioned, and several of the things that Tracey mentioned. Maybe, for the people who don't know, STM is a member organization of international scientific and academic publishers. Jointly, our members publish around 70% to 80% of all peer-reviewed literature, and, of course, with mentioning peer review, we're already in the heart of the trust discussion again.
EEFKE SMIT: Indeed, as Tracey said, I think that one of the benefits of last year, when people were suddenly all looking to science to bring the big solutions to the pandemic, is that, suddenly, the general audience understands, or at least starts to get a notion of, peer review. Preprints became so important, because people were chasing after the first results of all kinds of studies. First, you saw The New York Times explaining whether something was peer-reviewed or not peer-reviewed, and then, in a wave thereafter, you saw, in most of the other newspapers, but also television programs, in many, many countries, that, suddenly, journalists were emphasizing whether something was peer-reviewed or not. Maybe, a few years ago, the difference was something well-known only in our circles, but, suddenly, it goes wider.
EEFKE SMIT: Well, back to my own organization, because that was your question. I gave you the list of the five elements that I think are the pillars of trust, and they directly coincide with everything that Richard mentioned. So we have several projects going on, which we have put under the umbrella of a trust and integrity program, providing trust and integrity tools for publishers, because we think, and we hope, that publishers can be very important actors in improving trust in science.
EEFKE SMIT: And, for example, we have a very successful program now on research data, where we share between the publishers best practices for sharing research data, how to make it easier to share them, how to link to them between articles, and, also, have some common conventions on how to cite data, and that will, of course, help the transparency of the research methods and findings, et cetera.
EEFKE SMIT: We also have another project going on, called "transparent peer review," where we've been constructing, with a lot of other stakeholders-- researchers, funders, and many other parties involved-- a taxonomy on peer review, because peer review is so important, but has remained surprisingly opaque in terms of what happens and what does not. You know, if a journal says, "This is all peer-reviewed," then, actually, "peer review" can mean 20 or 30 different things: single-blind, double-blind, anonymous, post-publication, pre-publication, whatever. So this taxonomy is a first step towards best practices and, at least, we hope, will be used by publishers, so that they indicate, journal by journal, and even article by article, what kind of peer review actually took place.
EEFKE SMIT: A third project is in a much earlier stage. It's about tracking, tracing, and identifying parallel duplicate submissions, because they overload the peer review system and make the whole publication system less transparent. We also were part of a NISO project on reproducibility badging. Richard mentioned the importance of reproducibility to improve the transparency of science.
EEFKE SMIT: Another project is on image alteration detection, and we're also exploring data peer review. Each of them relates to those five elements that I mentioned: transparency, reliability, predictability, self-correction-- oh, yes, because I forgot to mention another project, on a taxonomy for retraction policies, so that the self-correction mechanism becomes a bit more clear.
ANITA DE WAARD: Thanks so much. And, actually, Eefke, you were already addressing a bit of the next question, which is what the publishers do.
EEFKE SMIT: I am.
ANITA DE WAARD: I think, obviously, as a publisher member organization, a lot of these efforts are of interest to a lot of publishers. And it also seems that they mostly focus on improving the process of the publication of science, which sets it a little bit apart from-- and that transparency, as we've just discussed, is one of the elements by which to create trust for the general audience.
ANITA DE WAARD: Anyway, thanks so much. I'd like to go to Richard because, of course, we've mentioned peer review many, many times. I think, if you made a word cloud of the conversation so far, that word would feature prominently. And, of course, where you say "peer review," some say "preprints." So it would be really interesting to hear what your organizations do currently to support trust in science.
ANITA DE WAARD: Thanks.
RICHARD SEVER: Yeah, well, I guess the first thing to say is to mention that, of course, I'm embedded in a larger organization, Cold Spring Harbor Laboratory itself, and so there are a number of things there. I mean, I think we've always recognized that education is important, and so we have a DNA Learning Center at Cold Spring Harbor, which teaches kids on Long Island, and in Harlem, and in Brooklyn, and various other locations about biology, molecular biology, and genetics, in particular.
RICHARD SEVER: So we think that's very important. And I think, you know, on a kind of personal level, I and a number of faculty members have been giving talks to the general public, recognizing that it's important for people who know about science-- and that can be an editor or a scientist-- to engage. I did one, recently, with a New York state senator, explaining how vaccines work, and so I think there's an obligation on everybody within this ecosystem who has knowledge about things like this to get involved in that kind of communication.
RICHARD SEVER: And, on the sort of dissemination side-- you know, the trust in science among scientists, which I think you're kind of referring to-- obviously, Cold Spring Harbor has a number of journals, and we'll probably talk about best practices and transparency, et cetera, which we obviously follow. On the preprint side of things, one thing that we do think is important, as I mentioned, is this issue about the bias towards positive and exciting results. There has always been a feeling within the academic community that journals want those types of results, and are not interested in the more pedestrian findings-- you know, that something doesn't work, or that they can't reproduce it.
RICHARD SEVER: And, so, one of the things, of course, that a preprint server does is it makes no judgment on the quality of the work. So I think that's one of the things that I have been saying to many, many academics. You know, I routinely hear that, you know, somebody makes a knockout mouse, and then somebody else says, oh, well, you know, I made the same mouse, and it doesn't have that phenotype, and then I say, well, write a paper.
RICHARD SEVER: Put it on bioRxiv. Then we can have that conversation. So I think providing a venue that is not selective is important, and could address some of that skewing-- not necessarily of the results, but of what results are actually disseminated. And, to mention transparency, another thing that we've done on bioRxiv and medRxiv is recognize that peer review is potentially a much more multidimensional thing.
RICHARD SEVER: You know, when we think of peer review, we tend to think of this notion of sending a paper off to two people and giving them 14 days to get a report back. But peer review, in the broader academic sense, if you go back hundreds of years, is about having work assessed by other academics. So one of the things that we do on bioRxiv and medRxiv is try to aggregate conversations that are happening on social media, in dedicated discussion areas, and in peer reviews.
RICHARD SEVER: And so a number of journals and a number of independent peer review entities are now posting peer reviews alongside preprints, so that you can imagine that-- I mean, if a paper is published, we already link from bioRxiv to that published paper, so that somebody can say, this preprint, that thing you read before that was kind of caveat emptor, now has been reviewed. Here is the journal it's in.
RICHARD SEVER: But, also, as I said, aggregating a number of other trust signals around the paper-- or, potentially, signals that may make you think that you shouldn't trust that paper. And that's, you know, where a lot of our attention is focused right now, but I think, beyond that, there are probably all the things that we'll go on to talk about in terms of what publishers can and should do.
ANITA DE WAARD: Thanks so much. And that is, indeed, going to be the next question. So, Tracey, if it's at all possible, it would be great if you could say a bit about, first of all, how Sense about Science, obviously, has contributed to trust in science, and maybe you could then follow with what you think publishers should do, and then we'll go around in the other direction. Thank you.
TRACEY BROWN: Yeah, nice segue. I'm very happy to do that. So, the thing for us, sort of looking at the demands of recent times, has been very much about a big expansion of what it means to ask for evidence, and so we've found ourselves, over the last year, spending a lot of time helping people understand limitations and uncertainty, and asking about those, and dealing with volume.
TRACEY BROWN: You know, journalists are not equipped to deal with the kind of volume coming out of the research community at the moment, and just sifting it on the basis of those things that are published in, say, elite journals, or very high-profile journals, is not good enough. In fact, it took journalists down some really wrong paths early in the pandemic, as well. So helping different sectors-- the policy world and the media world-- to think about the sorts of questions they should be asking has been a real stretch for us over the last year in terms of demand.
TRACEY BROWN: And, in particular, one thing that's really struck me, and that Sense about Science is working on a lot now, and will be doing globally for the next five years, I think, is making sense of models and data. So, really, as a society-- in every society-- we do not have the skills to ask the right questions. I mean, an example of it was at the beginning of the pandemic. Everyone said, well, why are you using a flu model? Why aren't you using a model based on SARS-CoV-2?
TRACEY BROWN: It's like, well, because you don't actually have any data to build that model. But, you know, the fact was that you had very senior people in government departments exasperated that we didn't have a model for the coronavirus rather than flu, and not understanding why. And so I think, you know, it really brings home to us that even the people in the position of commissioning the outputs of research are really not in a position to ask the right questions yet.
TRACEY BROWN: We need to equip people, so that's what we are spending our time doing. I just wanted to make a distinction between motivated and unmotivated audiences, because motivated audiences are people like journalists, whom you can persuade that it's in their interests: they do journalism better when they know how to ask good, skeptical questions, or get to the bottom of things, or understand what we'd expect to see versus what we do see, and all those things. Whereas there are lots of unmotivated audiences, who have had to be interested in science at various flashpoints in their lives, and COVID is one.
TRACEY BROWN: And I think there's an interesting, different position for that, which is-- so here's the conversation I'll relay to you. It was, if you don't have the answer, why are you here? And that's a really fair question for someone to say. Why should they? They're not there to be interested in science. It's like, well, if you haven't got something to give me as a scientist, as the answer, then why are you even in the conversation with me?
TRACEY BROWN: And the answer from the scientific community has to be, to make sure that someone else doesn't pretend to you that they have the answer. So I think there's a different orientation there towards helping the public to protect themselves from misinformation, and that's something that's been really a big part of what we do. Publishers, though, and I'm not going too far from our cause to talk about what publishers can do, I think there are four things that publishers should do.
TRACEY BROWN: Number one, I think we need to expand, broaden that commentary about peer review to include the kinds of things that Richard and Eefke have been talking about, make sure that, as part of publishing a paper, there's much more conversation about things like emerging versus settled science, you know, looking at what other things have gone on before. I think we should be insisting on much better reviews of prior work in a lot of papers.
TRACEY BROWN: I really do think so. And I also just think we need to talk a lot more about what has gone on-- what has and hasn't been looked at-- in a paper. I mean, we've already got this situation where people are very confused about whether data included with a paper was reviewed as well, and, well, it usually wasn't. But I think there needs to be a much better conversation that goes with it, and a more nuanced one. The second is, in the discourse about scholarly publishing and what it is, to think about the question that faces most people who are users of scholarly discourse outside of the research community, which is: how much weight can we put on it?
TRACEY BROWN: And I think that's usually what people are asking, in one shape or other. That's what people want to know. And I think, if the scholarly community internalizes that question a bit, then its outreach beyond the walls of the research community would be more effective. The third thing is to support public education, big time: via the research community, by encouraging researchers who publish to talk about the status, and standing, and context of their findings, as well as the findings themselves.
TRACEY BROWN: But also directly-- I mean, you guys have a lot of media contact, a lot of policymaker contact, but also direct contact with the public, influencers, and others-- and, you know, take seriously the fact that we have a big campaign ahead of us. We need to launch a campaign for understanding epistemology, basically. And, fourthly, come up with a new word for "epistemology," and my version of that, at the moment, is "evidence know-how." That's what we talk to the public about: having evidence know-how.
TRACEY BROWN: But I think that's what publishers need to do, is say, it's time now to realize that this stuff that just was never talked about, like, what's the nature of what we know, well, suddenly, the nature of what we know is very much at the center of decisions we take as a society, and I think publishers really have to grasp that in a very full-blooded way.
ANITA DE WAARD: That's fantastic. And I love this point of education being very much at the forefront. As a former physics teacher, I hugely support that. And, also, Richard's work, I think, is fantastic-- as a publishing house, to undertake a role in educating both kids and adults about science. So, I'd like to go to Richard. Thanks.
ANITA DE WAARD: And, actually, after this round of questions, we will get to the audience questions, so thanks so much. Richard, any thoughts to add on what can publishers do? Thank you.
RICHARD SEVER: Well, I think the critical thing is for publishers to recognize their role as stewards, certifiers, and correctors of the academic record. And, in some respects, the watchwords here are "transparency"-- again, as Eefke mentioned, there's more and more recognition that transparency is really going to help here-- "documentation," and "standards." So, if you're a publisher, or a journal-- and a good example of this is in genomics-- if you want to report that you've found this gene or whatever, then you have to provide an accession number linked to the NCBI database where somebody can go find it.
RICHARD SEVER: So, I mean, you know, Tracey said, ask for evidence. One of the things that you can do as a publisher is say, I need to see the evidence. We need to document it. We need to make sure that what you've done is according to a known standard, so that anybody who reads it can compare it against other work. So I think that's critical. Beyond that, I think, I mean, as I said before, it's being open to results that aren't so exciting, to get away from this skewing.
RICHARD SEVER: I think that is the big worry, as I said before: that the academic record is skewed to things that are exciting and will advance people's careers, because they seem novel, rather than the less exciting, more pedestrian things. And, you know, given that replication is absolutely critical, it's about providing homes for those types of data. I mentioned correcting the record.
RICHARD SEVER: I think we do a pretty bad job, right now, of correcting the record. You know, people talk about the number of retractions, but I think everybody would say that the number of retractions is probably much, much lower than the number of papers that should be retracted. I mean, every scientist knows of papers that are out there that nobody really believes. That doesn't mean there was anything bad, or that the motivation was wrong when the experiments were done.
RICHARD SEVER: But, you know, I mean, trying to move into a scenario in which we have a more multidimensional way of assessing papers is one thing, but, also, there are papers where we can do a better job of signaling that, you know, we don't believe this. We don't believe the Science paper that says that DNA can basically include arsenic in the backbone, instead of phosphorus.
RICHARD SEVER: Nobody believes that. You know, we need to do a better job of signaling things like that. But I think we're kind of moving in the right direction with a number of these things. More and more people are ensuring that conflict of interest statements are present. We do that on bioRxiv, even. We can do a better job of examining-- I think people often see journals as entities that administer peer review, and that all the examination of the content is done by academics.
RICHARD SEVER: But, actually, I don't think journals do a good enough job of saying what good journals do internally: checking on ethics, enforcing standards. Academic editors are notoriously bad at doing this. So, actually, you know, journal staff do this, and it's important. They're the ones that will make sure that, you know, the X-ray coordinates are in the right database, the gene sequences are in the right database, that these were done according to standards.
RICHARD SEVER: So I think it's important that journals do this, and explain what they do, really, to distinguish between this notion of peer review as just some kind of stamp that happens because a couple of people have given it a thumbs up, and getting towards a kind of better constellation of signals around a paper that can give readers trust in it.
ANITA DE WAARD: So, I think that's really interesting, and you're really talking about this multidimensional view, but also including what I thought was very interesting when Tracey said "weight"-- what weight is attributed to this? And there are cases, of course, where there are nefarious publications that are driven by, say, political interests, to which, in general, the scientific community cannot attach any weight, but that is never very clearly communicated, I think.
ANITA DE WAARD: I'd like to move to Eefke, just to answer, and maybe you've already addressed it, but just the question of what can publishers do to support trust in science? And, after that, I'd like to turn it over, because we have a question from the audience, and then we'll move into a broader discussion. Thank you.
EEFKE SMIT: Yeah, great. Thank you. And I'll keep it very short, because I agree with everything that Tracey and Richard have mentioned. I think, in that sense, we're all thinking along the same lines. Of the many things you mentioned, I would like to emphasize two. Tracey mentioned peer review. I think that really requires high priority, especially transparency about peer review.
EEFKE SMIT: I won't go into it deeper, because I mentioned it before already. But I would also like to emphasize, and Richard touched upon that as well, that I think we should have more common practice around self-correction mechanisms. So clearer retraction policies are really very important, and what I wanted to say about that, also, is, first of all, the taboo around retractions should go.
EEFKE SMIT: You know, nobody should be ashamed, because, as I said at the very beginning, science is something where truth evolves over time, and, at a certain point, there could be all kinds of reasons for a paper to be retracted. But I think that there are fewer retractions than there should be, as Richard also noted, because there's too much of a taboo around it. And then, if we talk about self-corrections, I think it would also be good to start a conversation on how preprints and peer-reviewed publications relate to each other, because you now see a growing number of cases where the published article refers to the preprint, or the preprint refers to the published article.
EEFKE SMIT: But, also, there, we should maybe talk about retractions. You know, when does something disappear from a preprint server? If it never finds its way into an article, or it never gets through peer review, do you still want it to be around, or should there come a point at which someone says, well, we tried to keep it afloat, but it can no longer stay? Because one of the things that also jeopardizes the trust in science is that there's too much debris around.
EEFKE SMIT: You know, it's like the satellite debris in orbit around the Earth, out in space. Some things were put out there but never removed, and there's just more and more of them. So I think those self-correction mechanisms are also very important.
ANITA DE WAARD: Thanks so much. I want to take one second to have either Richard or Tracey respond to any of this, and then I want to move to the audience questions. And thank you so much for typing them; we will address them in order. Otherwise, I'll move on. So, the first question in the audience was for Tracey, really.
ANITA DE WAARD: Is this really what we're talking about, or is it more about communicating uncertainty? So, not saying, well, science is right because it's science, and done by scientists-- if I understand the question correctly-- but rather, and I think this was mentioned before, and you alluded to it as well, the process of arriving at a shared truth. And maybe this also ties to your point about weight, and to both Richard's and Eefke's points about transparency.
ANITA DE WAARD: So I want to turn over to Tracey, first, and then Eefke and Richard.
TRACEY BROWN: Yeah, I don't think uncertainty is the whole of it. I mean, the question, really, is that we need to have the conversation along the lines of: what's a very good way of asking this question? You know, what's a testable question here? Because, if we're looking at it from the public's point of view-- you know, what should we do about climate, or what should we do about underachievement in schools, or what should we do about a pandemic-- we say, which are the testable questions, and what's a really good way to answer those? And then part of that is saying, and what are the limitations of what we've got in answering those?
TRACEY BROWN: And, you know, I would say that, right across the board, in every area of research-- I see someone's mentioned others, and social sciences, which is my background-- you know, I think we need to ask more searching questions about whether we have really designed the study to find the answer. I mean, you know, I appreciate all that Eefke has said about the quality of what's out there, and what Richard was saying about the need to indicate that there's a lot floating around that we all know isn't right.
TRACEY BROWN: What I'm shocked by is the number of journals that accept papers where, clearly, the study was not designed and statistically powered to answer the question it set. I mean, there's just so much dross. You know, a good question, a really interesting area to do the research in, we want an answer-- but that study will not find that answer, or it will give us such wide confidence intervals that we can't really say anything much useful about it.
TRACEY BROWN: And so I think that is definitely a conversation to be had publicly more: the quality of the research. I would like to see a lot less research, done better, myself, and I think that probably goes right across the board. But, yeah, there's a lot of just poorly designed research out there, and I think there are also limitations on what's available to us to try and answer the question with, or what we can ethically do to find the answer to the question. And these are all things where I think we can have relatively broad, public conversations about limitations, so I would talk about limitations more than, specifically, uncertainty.
ANITA DE WAARD: Thanks. Richard?
RICHARD SEVER: Yeah, I mean, I agree with everything Tracey said. I would also go back on this issue of weight. I would go back to the point I made earlier about conveying the difference between isolated observations. I mean, you know, the classic cases are these things with hydroxychloroquine for the treatment of COVID-19. You had very small numbers of people who felt that it was making a difference, but they weren't particularly well-controlled, and that is different.
RICHARD SEVER: Then you get a larger number of studies of people giving it to patients and watching what happens, and then you get to the point with the clinical trial-- the RECOVERY trial performed in the UK-- which categorically shows it doesn't work. And so it's important that publishers explain those differences and present them in context, in the way Tracey said. I mean, we've just seen a similar example, and this isn't published, but we've just seen this with the AstraZeneca trial, where entire nations stopped administering AstraZeneca vaccines because of this concern about thrombosis.
RICHARD SEVER: But, you know, if you look at the actual numbers, you would conclude that vaccines protect against thrombosis, because you give a million people the vaccine and fewer of them have got thrombosis than the control group. But, you know, everybody has to learn that, and so I think it's important that publishers explain that, particularly when presenting things like case reports. I think that, on the weight issue, there's also a kind of duty not to amplify lone voices.
RICHARD SEVER: You know, I mean, I always joke that, on one of my journals, if anybody likens themselves to Galileo or Stanley Prusiner, who discovered prions, my instinctive response is, yes, those people did it, but 99.99% of the time when somebody says that, they're wrong. You know, so, if you think you're like Galileo or Prusiner, the chances are that you aren't. And, so, you know, there was a question about social media that you mentioned earlier, Anita, and I think that's another thing publishers should do: recognize that extraordinary claims require extraordinary evidence, and, if you have an extraordinary claim that you're publishing, then, you know, put that in context, and don't amplify it. You know, there's a massive rush around the whole web towards clickbait headlines.
RICHARD SEVER: Let's not have scientific journals go down that route.
ANITA DE WAARD: Really good point. We have 10 minutes left, and we have two very interesting questions that I really would like to get to, so I'd like to move to the question about the humanities, and a comment that these were overlooked in the conversation. I think it's a very good point, and thank you very much, Laddy, for raising this point. Is there a particular thought on how humanities and social sciences practices can play a role in this discussion in the industry as a whole, because we really are talking much more about the STEM, biology area?
ANITA DE WAARD: I'm going to hand it to Eefke first, I believe.
EEFKE SMIT: Yes, sure. For background, I'm a social scientist myself. And, actually, in the comments I made, I didn't, at least not consciously, make any distinction between the disciplines. But, of course, we do know that methods and approaches differ greatly between the traditional sciences and the social sciences, and, even more, the humanities.
EEFKE SMIT: I think that everything we said about transparency, about methods, and especially about data applies in the social sciences too, maybe even more than in the hard sciences, because measurements can be less, sort of, maybe I should use the word "calibrated," or reproducibility is more difficult, because, you know, in social sciences, it's often about people, and it depends a lot on which group of people you have in your study.
EEFKE SMIT: So I would tend to say that all these things we said about transparency, and trust, and reliability, and self-correction mechanisms count just as heavily here as in the other disciplines.
ANITA DE WAARD: Sorry, I was muted. Tracey, yeah, thanks.
TRACEY BROWN: I thought it was a little late to save me. So, I just would say that I think the issues are similar, in a way. And, if we stick to the whole discussion that we have here about information being reliable-- I mean, maybe "reliable" is the thing we were also talking about-- you know, I think there's more of it across the social sciences. You have, for example, ethnography.
TRACEY BROWN: It has, I suppose I would say, similar issues to what Richard just said about case reports, really. So the danger is that we end up being quite dismissive of big areas of social science which are discursive in their nature. And the way that I tend to deal with this when I'm talking to skeptical politicians, and so on, and media, is that I see a lot of what comes from that side of the social sciences, the more ethnographic side, as an early warning system, rather like a case report is, and some of it will turn out to be false alarms, and some of it will give us the basis for things that we want to investigate in a more rigorous fashion.
TRACEY BROWN: But, you know, it's really important that we have this constant commentary on society, but I think that the line between commentary and scientific investigation is a much bigger, grayer one around some areas in the social sciences. But everything else pertains, and I would like to see a lot more discussion about, for example, how we look at social trends. I feel that that's something that social sciences, and social science publishers, need to be much more active in, and trend analysis is a particular concern for social sciences that isn't really there in, like, medical publishing, for example.
TRACEY BROWN: So, to come back to that issue of putting things in context, there's often a historical context to social developments that needs to be brought into play, as well as a, kind of, what did other research find, but it's also what do we know previously? That's really important. So I think there are some unique challenges in the social sciences, too.
ANITA DE WAARD: That's wonderful. And, of course, one can argue that that should be there in the physical sciences as well, because there are social elements there too. Thank you so much. I'd like to move on, because we have about five minutes left, to the question from Alice Meadows. Thank you so much for the last few questions, but I think this is the last question that we have time for, and, Alice, thank you so much for joining.
ANITA DE WAARD: Alice asks, "The transparency issue is critical, so what can we do to make it easier for publishers and others, like funders, to be more open about these processes, like reviewer selection, acceptance criteria, et cetera?" So I'd like to give that question to Richard first.
RICHARD SEVER: Well, I mean, I think, on transparency, one advantage of the journal system, as opposed to, say, preprints, is that you can refuse to publish the paper until somebody does something. So, you know, I mean, I think that's the biggest weapon that you have. We know people are judged on the formal journal publication that's been certified, in a way. PLOS Biology does this.
RICHARD SEVER: They say, you know, we will not publish your paper until you deposit all the data in a recognized repository. So you have a great stick there. You're like, we're just not going to publish it until you do that. So, I mean, I think that's the most obvious thing publishers can do, and that comes down to enforcing standards, and that's everything from pedantry about genetic nomenclature through to conflict of interest declarations, and verification of patient consent forms, that type of thing.
RICHARD SEVER: So, I mean, as I say, I think you wield that stick, and you say, you know, you've got to follow these standards. These are the ethical procedures. This is the transparency that we require. And, if you don't do it, then you can't publish, then.
ANITA DE WAARD: That's great. Eefke, I'd like to turn to you, because I know that a lot of your efforts have to do with this. And I think, in particular, I like the comment that, sometimes, we publishers speak in a bit of jargon about peer review issues and such, so how do we explain this to academics and, if possible, the greater public, also taking into account Tracey's point that we need to be much clearer?
EEFKE SMIT: Well, this is exactly one of the aims, and at the heart, of our transparent peer review project, because this is exactly what we try to do: provide a system by which publishers can make it much clearer what kind of peer review took place. I really think that that is step one. One other point I would like to make, because we're almost at the top of the hour: I'm very happy that the focus of this discussion is on what publishers can do, because it all starts with us, but I also want to emphasize that this should be an activity and an exercise across the whole ecosystem of research.
EEFKE SMIT: You know, we need to do this with the researchers, but also with the funders and the policymakers. And, especially, I sometimes think that publishers and funders should work together more, because funders set a lot of policies and standards at the start of the research process, you know, when the grants are given, and publishers act at the very end of it, and I think, if we combined forces, we could set a lot of these quality and integrity standards jointly, and it would really close the circle of the research cycle.
ANITA DE WAARD: Thank you. And so, to that point, to Tracey, who is not a publisher, and I'm very happy you're on our panel. And I just want to add that Alice said she's really talking about the lack of transparency by publishers about the processes of review, and evaluation, and acceptance of manuscripts. Do you have any thoughts about that? Thank you so much.
TRACEY BROWN: Well, I really agree with Alice about that, and I think it's a really easy win, as well, for publishers to take steps in that direction. Just my kind of closing thoughts, really, are that, if you think about what's being said, not so much by me, actually, but by Richard and Eefke, and especially by Richard, we're kind of looking at the role of publishers becoming a bit more like a watchdog, aren't we? We're sort of moving away from a world in which it's all just about what's new, interesting, important, and that seems, to me, a very important thing to do, and it's a harder thing to do, when you've just got a sea of stuff to sort through, and submissions at a volume never heard of before.
TRACEY BROWN: That's important. But, also, what we're saying is that there's a gatekeeper role and there's a watchdog role-- effectively moving from gatekeeper to watchdog-- and there's a role in explaining that to the public, because, effectively, it's a public interest service that you provide when you do anything to safeguard quality, and standards, and adherence to standards. And I think, maybe, pursue that much more self-consciously, and then we're in the domain of trustworthy behavior, but also giving people the basis on which to put their confidence in sources, and in methods, and in information.
TRACEY BROWN: I think that's really what lies before us.
ANITA DE WAARD: Thank you so much. I love that as a closing statement. Richard, Eefke, I want to give you one minute, or 20 seconds, to maybe say a last few words.
RICHARD SEVER: I mean, I've got almost nothing to add. I think that was great, what Eefke and Tracey said. I thought it was a great point about the ecosystem. I would just add institutions to that-- institutions and employers-- and anybody who has worked at a journal knows the headaches that you deal with around retractions and ethical issues, when there's been bad behavior by academics, and you want to have a good, collaborative relationship with the institution, rather than a scenario in which everybody's trying to sweep things under the carpet.
ANITA DE WAARD: Thank you. Eefke, one last word?
EEFKE SMIT: I agree with it all.
ANITA DE WAARD: Great. Laura, thank you for your question. I'm sorry we didn't get to it. It's really about incentives, and I can only say, maybe this is a good topic for our next SSP "Ask the Experts" panel. Thank you, and, with that, I hand it back over to Katie.
DR. KATHERINE PHILIPS GREFE: Thank you. Thank you for participating in today's virtual discussion group, and many thanks to Silverchair for sponsoring this event. And thank you, also, to our panel for an engaging discussion. You will receive a post-event evaluation via email, and we encourage you to provide feedback and help us determine topics for future events. Please check out the SSP website for information on future events, such as those seen here.
DR. KATHERINE PHILIPS GREFE: Today's discussion was recorded, and all registrants will receive a link to the recording when it has been posted on the SSP website. This concludes our session. Have a wonderful day.