Name:
Addressing problems in peer review: metadata, incentives, etc Recording
Description:
Addressing problems in peer review: metadata, incentives, etc Recording
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/42e2f105-aaae-4d07-9453-1e2c50ea4b95/videoscrubberimages/Scrubber_3.jpg
Duration:
T00H37M05S
Embed URL:
https://stream.cadmore.media/player/42e2f105-aaae-4d07-9453-1e2c50ea4b95
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/42e2f105-aaae-4d07-9453-1e2c50ea4b95/Addressing problems in peer review metadata incentives etc-N.mp4?sv=2019-02-02&sr=c&sig=DUdZ2ZXytrtWfhiRl6SITK8CJTgS6hyHLEIx7N9wljY%3D&st=2025-01-22T04%3A25%3A26Z&se=2025-01-22T06%3A30%3A26Z&sp=r
Upload Date:
2024-03-06T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
CHRIS LEONARD: Hello and welcome to this NISO Plus 2023 session on addressing problems in peer review.
CHRIS LEONARD: My name is Chris Leonard, and I'm the chair of the session. My day job involves looking at the big problems our industry faces, and they don't come much bigger than peer review. Whilst it is a reasonably modern invention, it's starting to creak at the seams and some publishers, reviewers and authors are looking for something a bit better, a bit quicker, a bit more reliable. So to that end, we have some amazing speakers here today who are going to examine what can be done in terms of processes, workflows and incentives in order to improve the current state of peer review.
CHRIS LEONARD: There is even a question about whether we should reappraise peer review as a worthwhile activity at all. I'm going to let each speaker introduce themselves at the start of their 10 minute presentation, and at the end of the final presentation we will come together again to discuss what we've heard and to answer some of your questions. So without much more ado... Let's go.
JASMINE WALLACE: Hi, my name is Jasmine Wallace and I am the senior production manager at PLOS. Today I'll be talking to you about addressing problems in peer review. To get started, I really like this word cloud, which was put together by a working group after a session on open peer review and transparency. What it highlights is what we start to see as the main problems that arise in scientific publishing and peer review.
JASMINE WALLACE: I start here because I think we're all aware of a lot of the problems that tend to come up, primarily regarding the reviewer. Oftentimes they are seen as the biggest lag in the system. But what I want to highlight, as we move forward and navigate this post-pandemic space, is that we should reconsider how we see the reviewer. As we move towards more open science and open science practices, I think it's inevitable that we'll see an influx of new people integrating with our systems, making it all the more important that we focus on how they integrate into those systems as our reviewers.
JASMINE WALLACE: If we are already seeing preexisting problems with the reviewers, I think we should take some time to focus on them a little bit more, especially when thinking about some of the tools we're creating right now. The biggest concern about machine learning is the idea that these machines can learn errors, they can learn bad behaviors, they can learn bad habits. So I think it's well within our role to make sure that we're developing our reviewers, that we're making a better experience for them as we're navigating, of course, open science.
JASMINE WALLACE: So I'll start by asking this question of whether or not your reviewer experience is ideal. How is the reviewer experience for your group? Are you doing everything in your power to mitigate the challenges or barriers that prevent access to what we consider qualified reviewers? Are we really making it a better experience for them? Are we working better with them and for them, so that as we are developing these tools
JASMINE WALLACE: and asking them to use them, we're ensuring that they are having the experience we want? Right, so the very first thing I think you can focus on when it comes to your reviewer experience is easing up on them. I know this sounds almost rudimentary, but make sure your processes are very, very easy, and also consider reviewers as whole people. I think that comes up a lot as we're looking towards our groups moving forward.
JASMINE WALLACE: This idea of the whole person means thinking about their needs that have now surfaced. We had a lot of things that were put into place during the pandemic, for example increases in and extensions of time frames, and we made a lot of movements and changes. But I think that those types of things are something we should still be integrating into our newly developed processes.
JASMINE WALLACE: So ensure that the user experience for our reviewers is ideal. When it comes to making things easy, are you doing some of the easy asks within your systems and processes to make this happen? One really simple area to focus on is your reviewer invitations. Can reviewers accept or decline invites with ease? Can they invite someone new with ease?
JASMINE WALLACE: Oftentimes we need two, three, four, several iterations of reviewers and rounds of review. But are we making it an easy process to add new people when someone declines to do the review in the first place? Can your reviewers easily extend deadlines? That sounds simple, but sometimes the process just to grant an extension is pretty extensive, making it not an easy thing for reviewers to do.
JASMINE WALLACE: And are you giving your reviewers enough time? This is not just the notion of how many days we give reviewers to perform a review; that's the basic idea behind giving a reviewer enough time. Are you also taking time to look into your processes and ask: are we sending these out on the weekends? Are we accounting for weekend dates? Are we really giving these people enough time to perform the reviews that we actually want them to do?
JASMINE WALLACE: Another easy task is reviewer reminders. No one wants to be constantly nagged or pinged, if you will, about anything. But at times we do have to do that, and I think that that is acceptable. One thing to really consider about this user experience for the reviewer is the frequency of your reminders. How many notifications are they actually receiving?
JASMINE WALLACE: When are your reminders actually being sent? Oftentimes we are sending these reminders via a script that's run by our peer review systems. But are we aware that they're going out at 2 in the morning on a Saturday? Is that really an ideal experience? Again, we're working with a group of people who are volunteers, so we want to make this more engaging and easy for them to do and participate in.
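A minimal sketch of the reminder-scheduling point above: shifting automated sends off weekends and out of the small hours. The function name and the 09:00-17:00 window are illustrative assumptions, not the behavior or API of any particular peer review system.

```python
from datetime import datetime, timedelta

BUSINESS_START, BUSINESS_END = 9, 17  # assumed "reasonable hours" window (local time)

def next_reasonable_send_time(scheduled: datetime) -> datetime:
    """Push a reminder scheduled for a weekend or the middle of the night
    to the next weekday morning instead."""
    send = scheduled
    # Saturday is weekday 5, Sunday is 6: move off the weekend first.
    while send.weekday() >= 5:
        send = (send + timedelta(days=1)).replace(hour=BUSINESS_START, minute=0)
    # Move out-of-hours sends (e.g. 2 a.m.) into the business window.
    if send.hour < BUSINESS_START or send.hour >= BUSINESS_END:
        if send.hour >= BUSINESS_END:
            send += timedelta(days=1)
        send = send.replace(hour=BUSINESS_START, minute=0)
        while send.weekday() >= 5:  # the day shift may land on a weekend again
            send = (send + timedelta(days=1)).replace(hour=BUSINESS_START, minute=0)
    return send

# A script firing at 2 a.m. on a Saturday gets deferred to Monday at 09:00.
print(next_reasonable_send_time(datetime(2023, 2, 18, 2, 0)))
```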
JASMINE WALLACE: So after we've done that work of making our processes a little bit easier for the reviewers, and we're starting to gear this experience up to be an easier one, we're going to start doing things like being more flexible and accounting for all of these other things that the reviewers may start to need, or are starting to need.
JASMINE WALLACE: You want to build in other time savings. You want to account for that lag that you're going to have with the reviewers. We know they're going to take longer. We know they're going to need more extensions. We know that they're going to have these fluctuating needs. So account for that lag, and build in time savings in other areas.
JASMINE WALLACE: So let's say you don't know where those areas are. I think a lot of times we are kind of pushing our systems and our processes to the max. But I think that in order to do this well, you want to perform some audits: some system audits, some process audits, some workflow audits. I lean into the system audit because I think it directly reflects whether or not your reviewers can easily accept or decline an invitation. There are new features that are introduced by our peer review systems, and oftentimes we don't know what those new features are.
JASMINE WALLACE: We're not integrating them into our processes; we're just doing things the way we've always done them. But take some time and do a system audit to ensure that everything your system can do, you are doing, and you're doing it well. Maybe it's sending out invites to reviewers automatically, whatever that may be.
JASMINE WALLACE: Just take the time to look for those time savings within your system. Similar to that are the process audit and the workflow audit. Are you in fact doing everything you can to make sure that your processes are efficient, so that the reviewers are having a good experience with your peer review? Then you want to take some time to really prepare for review. Again, I think this is one that seems like a really easy step, but people tend to forget it or don't often focus on it.
JASMINE WALLACE: But you want to make sure that, prior to review, you remember that these people have asked for extensions. They may need more time. They're in their labs. They're dealing with the concerns of their actual science. When they get that paper in their hands, we want to make sure that that paper is in fact ready to be reviewed.
JASMINE WALLACE: It shouldn't have any complications or things that would need to be sent back to the author. So try to use some of these new tools that are starting to surface for preliminary checks on papers and manuscripts, and use those as you're shaping this user experience for your reviewer. Then you also want to make sure your reviewers are aware of their role in your processes.
JASMINE WALLACE: I know this sounds like something that you would automatically do, or that reviewers will do on their own, but oftentimes we leave our reviewers to their own biases, right? We ask them to go read the reviewer qualifications and expect them to fully understand what that looks like. But in my experience, I've found that reviewers are not always aware of how the process works.
JASMINE WALLACE: Sometimes they submit their reviews and they don't even know what happens after they submit a review. So they do a lot of follow up. Again, this puts strain on the reviewer, so you want to make sure you're thinking about how they are experiencing your peer review processes, ensuring that they're very aware of what their expectations are in the process.
JASMINE WALLACE: How they play a role in this is very important. Again, they're the volunteers. They're the expert groups that we tend to rely on to give us the feedback that we, in fact, use to make these published works. So let's make this easy for them, right? And build in these cost savings when we can. And then we can start to talk about including new people, as I mentioned in the beginning. As we move towards a more open science space, we're going to see an influx of new people.
JASMINE WALLACE: That's almost a guarantee. But when you're introducing these people into the systems, and thinking about how we go about doing that, you want to make sure that they are set up well. So the first thing I challenge you to do is educate your stakeholders: anyone having an impact on the system who is going to have a decision-making role in whether or not new people are even introduced.
JASMINE WALLACE: Right, so you want to educate your stakeholders on why they should include new reviewers. Not just "oh, we need early career researchers, this is the trend, let's get more new people in," or bringing in people from underrepresented groups, which I totally agree with, but make sure your community is aware of why we're including these new people, so they can be more supportive of the needs around that introduction.
JASMINE WALLACE: You want to gather the data to determine where to focus, not just picking haphazardly, but really reassuring your community that you're focusing on the areas that have that need. And then develop a plan to engage with these new communities. That's very, very important, because if you just set it up and let people come in, you're not really thinking about how that plays out in their engagement.
JASMINE WALLACE: And again, when you get this group of people in, we want to keep them in. We want them to be really effective reviewers. So again, make sure your spaces are actually inclusive. Not just introducing new people, not just running the early career researcher program or whatever you're calling it at your organization, not just doing these things, but really supporting these new people, especially, again, those from underrepresented communities, and making sure they're set up for success.
JASMINE WALLACE: This may mean different things for you, but take into consideration things like training programs, really ensuring that they are equipped with everything they need to do a really good job, right? Make sure that there is proper movement in your system. So again, you're integrating these new people into your system and you're setting them up for success. But in that training, are you finding ways to grow them as individuals, or are they just going to remain pretty decent reviewers? Or is the expectation for them to one day be an editor-in-chief and take your journals to the next level?
JASMINE WALLACE: So those are my highlights for today of areas to focus on with this reviewer experience. I think it lightens the load for us if we're equipped with better people in our systems; training them and making sure they're properly equipped to do really good jobs helps as we're integrating new things like platform introductions or open review, for example. And if you've not tried it, open review doesn't change the review output.
JASMINE WALLACE: At least, studies have shown that it doesn't really change the output. But again, what we can start to do is make sure that these reviewers are more engaged, that they are more a part of our systems, and that we're showing them: we care about you, we want you to come back and be good players in the space. Thank you.
TIM VINES: All right. Thank you, Jasmine. That was amazing. OK so I took this title a little bit, literally. I'm Tim Vines. I'm CEO of an organization called DataSeer. We use AI to help promote compliance with open data policies, open science policies in general. But before that, I was a managing editor at a journal called Molecular Ecology.
TIM VINES: I also ran an independent peer review platform called Axios Review. So I've been thinking around problems in peer review for a while now. I'm just going to move on to the next slide, which summarizes all of my talk in one go. It's a questioning of what is really a problem in peer review, and an emphasis that the only way we're really going to be able to effectively improve peer review is by having a very clear picture of what is a bug and what is a feature.
TIM VINES: And so what do I mean by that? So this is a thing we come across when we're considering any kind of system in that it's got lots of aspects to it. And some parts are actually features of the system and other parts are bugs. And to be able to fix the bugs, we need to work out whether or not they're actually bugs. So here's a good one.
TIM VINES: Peer review is too slow. This is a very, very common complaint. But this is actually reflected on the other side: it may be a feature that peer review takes a while, because it allows people to properly evaluate the manuscript. From this point of view, peer review is an essential process whereby the community works out whether or not something is true, and therefore it should take some time, because if we're doing it far too quickly, then we're not able to give things the consideration that they need.
TIM VINES: And therefore, for bad work, work that is flawed or needs a considerable amount of updating before it can be put out into the public space, taking your time can allow us to really decide what to do and how this sort of substandard work can be improved, and the system as a whole may currently achieve a good balance of speed and validation. But on the other hand, it could be a bug. It could just take too long.
TIM VINES: And these are problems because it slows down science. Especially during the pandemic, we saw that there was an urgent need for information and we needed that information to be validated, and by and large we achieved that, with some missteps. A common complaint is that it stops important research results from reaching researchers. And indeed, peer review wastes researcher time: they finish an article and they have to wait months before it becomes part of the public record.
TIM VINES: So here's another view: anonymous reviewers are horrible. They're always so horrible to me and rude about my paper. Anonymity in this case could be viewed as a solution. It promotes reviewer objectivity, in that reviewers can say what they want to say without fear of retribution from the authors, particularly if the authors are powerful academics in their field, people that are potentially sitting on grant panels that they're involved with.
TIM VINES: And so you want to be able to give your frank and honest opinion about an article without worrying that your career is going to be jeopardized. Another aspect of anonymity that people talk about is that it makes the reviewers unaccountable. That is not the case. Editors are the key linchpin of this process and the reviewers are accountable to the editor.
TIM VINES: If the reviewers are spouting nonsense, then the editor will communicate this to the authors: don't listen to this reviewer, listen to this one instead. At the same time, the editor also vouches for the reviewers' expertise. That allows the reviewers to be anonymous whilst the editor still says, yes, these are experts that are equipped to evaluate your paper, and therefore you should feel confident in listening to their opinions.
TIM VINES: Because without that editor there, they are just opinions, just random people giving their opinions. But because the editor is there, the reviewers can actually express themselves and be vouched for by this editor. But ultimately it can also be a problem: you can be rude to the authors and apparently there are no consequences.
TIM VINES: That's a big perception. Authors never get to find out who the reviewers are, and so they can't alert the editors to the fact that this person may have an ax to grind. And on top of that, it fuels suspicion that the process is rigged, that what's going on behind the curtain is being unfair to the authors, and that their work is not getting the fair hearing it deserves.
TIM VINES: So what I want to contend is that peer review has been around for decades in its current form, based on a journal led by an editor, where the reviewers are typically anonymous. Single-blind, editor-led peer review is actually a fairly optimal solution for the problem that we have set ourselves, that is, evaluating manuscripts before they become part of the version of record.
TIM VINES: And the fact that something is an anachronism that has been around for a while does not mean it is wrong. So here's a clock. This is one of the earliest clocks, from 1386, or at least the earliest clock with this kind of structure, and the basic format is identical hundreds and hundreds of years later. And what do we risk if we start misdiagnosing features of the system as problems?
TIM VINES: We risk breaking the system altogether. So here is somebody fixing a clock. It's got the same sort of arrangement. It's got hands, it's got numbers, it's circular, but it's useless. It doesn't do anything. And this is because the person has not understood what the hands do. And so generally fixing problems with peer review requires a clear understanding of what is a bug and what is a feature.
TIM VINES: That's my main point. So, in my opinion, what is wrong with peer review? What do we need to fix? These are just points I want to put out there for discussion. The assessment of articles for robustness, that is, is this good enough to be in this journal, and the assessment for fit, that is, is this article important enough to be in this journal and within the scope of the journal?
TIM VINES: These are orthogonal processes and they should be separated. That is, a perfectly good manuscript should not be rejected on the grounds of fit. And this is a problem that's partially solved with manuscript cascades, so that an article that's judged not to be sufficiently important or exciting for one journal, but is perfectly technically valid, can be moved to another journal where it better fits.
TIM VINES: So these processes are a solution to this problem, but maybe we need a broader solution. And there's also compliance checking. There are three broad strands of peer review that I see. The two I've mentioned here: assessment of quality, which is done by external peer reviewers, and assessment of fit, which is generally done by the editors and the editor-in-chief.
TIM VINES: And then there is also assessment of compliance, and this is generally done by the editorial office. Have the authors plagiarized? Have the authors given a proper conflicts of interest statement? Have the authors used a standard checklist like PRISMA? All of these sorts of compliance elements are an important part of the process, and generally this is assessed by an editorial office.
TIM VINES: However, compliance checking is very spotty, depending on the amount of resources that are available to the journal and on whether it's part of the policies of the journal. And so this is an area that needs a huge amount of attention. And since it is fairly consistent and fairly objective, I would say that the compliance assessment aspect of peer review is ripe for replacement with artificial intelligence or natural language processing based processes.
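As a rough illustration of the kind of automated compliance screening described here, a minimal rule-based sketch might look like the following. The check names and patterns are hypothetical examples, not the implementation of DataSeer or any other tool; a production system would rely on trained NLP models and journal-specific policies.

```python
import re

# Hypothetical compliance checks; the patterns are illustrative only.
CHECKS = {
    "conflict_of_interest": re.compile(r"(conflicts? of interest|competing interests?)", re.I),
    "data_availability":    re.compile(r"(data availability|data (are|is) available)", re.I),
    "ethics_approval":      re.compile(r"(ethics (committee|approval)|institutional review board|IRB)", re.I),
    "prisma_checklist":     re.compile(r"\bPRISMA\b"),
}

def screen_manuscript(text: str) -> dict:
    """Flag each compliance element as present (True) or apparently missing (False)."""
    return {name: bool(pattern.search(text)) for name, pattern in CHECKS.items()}

# Made-up manuscript excerpt for demonstration.
sample = ("Competing interests: the authors declare none. "
          "Data are available from the corresponding author on request.")
for check, present in screen_manuscript(sample).items():
    print(f"{check}: {'found' if present else 'missing - route to editorial office'}")
```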
TIM VINES: And that is where I'm going to stop. Thank you very much.
FRED ATHERDEN: My name is Fred. I'm Head of Production Operations at eLife. For those of you unfamiliar with eLife, it's a non-profit organization which has been operating an open access journal for the last 10 years, and it peer reviews preprints in the life sciences and medicine. Our aim is to improve the way that research is practiced and shared. So I wanted to start with outlining some of the problems with peer review, which has led to some of the decisions that eLife has made since its inception and most recently in announcing a new model of publishing.
FRED ATHERDEN: The first is that the nuance of peer review and the feedback of reviewers and editors is not well captured when assessments are reduced to binary accept or reject decisions, and this fails to completely shield the scientific literature from error. Indeed, the perception that peer review is an effective filter contributes to the proliferation of errors, as it can cause researchers to let down their guard against flawed research.
FRED ATHERDEN: And the emphasis placed on directing papers into certain journals has turned journals into the de facto currency of academic careers and institutionalized the practice of assessment based on where, rather than what, is being published. The dismal replication rate for works published in leading journals evidences that this is not really a true reflection of their quality, and the practice has led to influence by bias and what's in vogue.
FRED ATHERDEN: The current publication system is derived from an era where print necessitated pre-publication peer review. So continuing this practice in a time of relatively cheap and instant publication has needlessly slowed the communication of discovery and invention and the communication of scholarly output. A system designed from scratch today would arguably place peer review and the curation of content after its publication.
FRED ATHERDEN: Confidential peer review is also incredibly wasteful. When a paper is rejected from a journal with no transparent peer review process, the evaluation is completely lost to the reader, and upon resubmission to a separate journal, the same facets of the research may be interrogated and queried once again. Consultation between reviewers is lacking. The prevalence of cascade journals has provided a solution to this problem, but that solution mostly suits publishers, not always authors, nor does it always suit scientific communities.
FRED ATHERDEN: And unreasonable or unachievable requests for revision from reviewers have led researchers to undertake unnecessary experiments or research. For this reason, and various others that I've elided over, the system is fundamentally unfair to authors. So, by way of a quick introduction to how eLife got to where it is: in 2012, it was launched with a mandate from a collection of prominent funders in the biomedical sciences to improve the way that science publishing works.
FRED ATHERDEN: When the journal launched, it had a number of notable features. It was an open access journal, and it implemented a new consultative peer review process whereby reviews are discussed between reviewers and a reviewing editor and a decision is made collaboratively. In 2016 eLife committed to transparency in the peer review process by publishing peer review materials for all articles.
FRED ATHERDEN: Previously, this had only been done for a selection of articles. Shortly following the appointment of our latest editor-in-chief at the end of 2020, eLife moved to exclusively reviewing preprints, recognizing the positive effect that preprints can have on the speed and democratization of access to scientific content. The review process was changed to produce two outputs: public reviews, which would be posted alongside the preprint on the preprint server for the benefit of readers, and recommendations for the authors, which would be published in the decision letter.
FRED ATHERDEN: So at the end of last year, eLife announced the implementation of a new process, many parts of which were already part of eLife's existing process that I've just outlined. For example, eLife is still exclusively reviewing preprints and producing public reviews and assessments to be published alongside them. So here's how the new process works.
FRED ATHERDEN: After submission to eLife, editors will decide whether to send the paper out for review. This decision will no longer be based on whether an article is potentially worthy of publication in eLife, but instead on whether editors are confident that eLife can produce high quality reviews that will be of significant value to interested readers. As in the current process, if authors haven't already posted a preprint, eLife will help facilitate doing so, and eLife's normal, consultative, preexisting peer review process will occur.
FRED ATHERDEN: The key change in the new process is that, following peer review, eLife will no longer make a binary accept/reject decision. The review process will also be used to craft concise assessments which summarize a consensus between the reviewers, commenting on the strength of evidence and the significance of findings via a controlled set of terms. Every paper will be published in a new format that we're calling a reviewed preprint, which is a journal-esque paper containing a rendition of the authors' preprint,
FRED ATHERDEN: the eLife assessment, and individual public reviews from the reviewers. The authors will also be able to correct any factual errors in the public reviews prior to them being posted, and they'll also have the option to respond to the reviews, which would be published as an author response and included in the reviewed preprint. Following publication of the reviewed preprint, authors will have the option to revise their preprint based on recommendations from the reviewers and editors.
FRED ATHERDEN: If they do opt to revise, then following re-review a revised version of the reviewed preprint will be published, along with updated versions of the assessment and the public reviews. At any point after publication of the reviewed preprint, the authors will be able to declare a final version of their article as a version of record, which will have stricter rules around reporting standards, ethics, competing interests, and the availability of data, code and materials.
FRED ATHERDEN: This version of record will mark the formal end of the publication process and will be sent downstream to indexes such as PubMed so that the work can become part of the existing formal scientific literature. So what are the benefits of this process? So our intention is that authors will benefit from a process that has clear outcomes and reviewers will benefit from this process as well.
FRED ATHERDEN: But more importantly, it restores authors' autonomy, since they can opt not to revise the paper if they don't wish to. The lack of a reject decision following peer review also avoids cascading reviews in the ecosystem, and the potential doubling or more of effort by reviewers. Following the publication of the reviewed preprint, authors are also free to submit to another journal if they wish,
FRED ATHERDEN: provided they haven't published their version of record at eLife. And the transparent and public nature of the reviewed preprint and the eLife reviews can permit reviewers working for that next journal to gain from this process and make a more informed assessment of the work. We believe that it will provide a richer, more nuanced assessment of the work than where it's published (the journal title), while still being compact and digestible by readers who are not experts in the particular field.
FRED ATHERDEN: And finally, the work and the reviews of it are published much more quickly than under a traditional journal system, because there's not a second or third stage where reviews, or rather revisions, are required prior to publication. So thank you very much for listening to me. I look forward to hearing any questions or thoughts you might have in the Q&A session.
FRED ATHERDEN: Or please feel free to reach out to me by email.
ADAM MASTROIANNI: Hi, thanks for having me. My name is Adam Mastroianni. I'm trained as an experimental psychologist, but really, what I do is I write a science newsletter, slash blog, slash whatever you want to call it, called Experimental History. And one of the pieces that I wrote recently, which is why I'm here today, is called 'The rise and fall of peer review', which set off a lot of discussion on Twitter, got me yelled at, and one person tried to get me fired.
ADAM MASTROIANNI: And so I'm going to tell you today a few of the points that I make there that will hopefully get you thinking. You can go on Experimental History and read this in full if you're interested. The first point is that the way that we publish science today is historically really weird. I think we have this myth in our heads that somewhere in the 1600s or 1700s we invented the scientific journal.
ADAM MASTROIANNI: And we invented modern peer review at the same time, and since then, that's the way we've been doing it. There's actually been some recent historical work showing that it's much more complicated than that. In fact, scientific publishing was a real hodgepodge up until really the 1960s. Different societies were publishing in different ways, individuals were putting out their manuscripts, people were sending each other letters.
ADAM MASTROIANNI: The things that we now call scientific journals had different ways that they so-called peer reviewed things, some of which amounted to the whole society getting together and voting on it; some of it was just an editor taking a peek and choosing what to publish. One tidbit that puts this in perspective: Einstein had only one paper that was ever peer reviewed, and he was so surprised and upset that he retracted his paper and published it elsewhere.
ADAM MASTROIANNI: So this is just to complicate the idea that this is how we've been doing things for a very long time, when in fact this pre-publication system of peer review is very new, or at least doing this universally is very new. The second point is that this system has extraordinary costs and uncertain benefits. The costs are very obvious. There's one estimate that peer review takes 15,000 person-years of labor per year, and if you value that at a postdoc salary, that's $1,000,000,000.
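(A quick back-of-envelope reading of that figure, assuming a postdoc salary of roughly $67,000 per year, which is an assumption here rather than a number given in the talk: 15,000 person-years × $67,000 per person-year ≈ $1.0 billion per year.)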
ADAM MASTROIANNI: And obviously some people are getting paid way more than that, not me. But this obviously takes a lot of time, and time is money. Does it have the extraordinary benefits that we would expect from an intervention with extraordinary costs? I think the answer is pretty obviously no. There have been studies now where they deliberately put errors into papers, send them out to reviewers, and see how many they catch.
ADAM MASTROIANNI: And the average is 25%. These are major errors, things like: this study claims to be a randomized controlled trial but didn't in fact randomize people to conditions, and things like that. Another way of measuring the value of peer review in terms of ensuring the quality of research is to ask how often we hear stories like: somebody tried to publish a fake paper, but fortunately the reviewers caught them and then they got fired.
ADAM MASTROIANNI: My answer to that is I can't find a single instance of that happening. Maybe it has. But usually when we hear stories of people being caught doing fraudulent research, which we do hear, it happens after they've published many, many peer reviewed papers. And really the way that this happens is that someone takes a close look at the paper, which didn't happen at the peer review stage, and finds out that the numbers don't add up.
ADAM MASTROIANNI: This is another of the limitations of peer review: the most important thing that could go wrong in a paper is that something went wrong at the data stage, but we know that most people don't actually look at the data when they review a paper. It takes way too long. Richard Smith, the former editor of the British Medical Journal, tried to do a lot of different studies on peer review when he was in charge there.
ADAM MASTROIANNI: And he sums it up as: it is interesting that scientists believe in peer review and tend not to believe in God, because there is no evidence one way or the other for God, but there's a lot of evidence that peer review doesn't work. Which leads me to the third point, which is that maybe it's worth trying something else. We used to live in a world with a lot of diversity in the way that people did research and communicated it with one another.
ADAM MASTROIANNI: And now we live basically in a scientific monoculture, where if you want to contribute to the scholarly discourse, there's one main way that you have to do it: you have to put your research in such a form that it can be published in a peer reviewed paper. I think it makes total sense that that is part of how scientific publishing works. But I don't think it makes any sense that this is how every single part of scientific publishing should work.
ADAM MASTROIANNI: So here's my small part of that experiment. Late last year I published a paper by posting it on the internet. So I'm an experimental psychologist. I ran a bunch of studies investigating a bias in human imagination, and I was trying to write this up for a scientific journal, and I found I couldn't do it without basically lying at some point.
ADAM MASTROIANNI: Because if you want to get something past peer review, you have to tell some, maybe not lies, but creative untruths, like: I definitely know why all these things happened, or I didn't forget why I ran study eight. But those things weren't true for me. I did forget why I ran study eight; this happens when you work on a lot of projects. I don't understand why all these things happened. And so my collaborator Ethan and I wrote this paper being completely honest, writing it in normal language so anyone could read it from beginning to end, and posted it on the internet.
ADAM MASTROIANNI: And I thought that maybe no one would pay attention, but in fact, just from posting it on PsyArXiv, you can see it has now been downloaded or viewed over 50,000 times, another 50,000 or so on my blog. And I think this is a promising way that some research could work, and I think it's a worthwhile way of experimenting with contributing to scientific discourse. I know it's not the way that would work for everyone. So, in fact, so many people reacted to my piece that I wrote a follow up piece reacting to those reactions.
ADAM MASTROIANNI: And so you can read it there. And so part of what I understand is that some people really like the system the way that it is. Maybe people have been very successful in that system, and I think that's great. And really the message that I want to get across is the way that we do things now is not the way that we've always done things, that we've lost a lot of diversity and that it's worth experimenting with other things and bringing back some of the diversity that we've lost.
ADAM MASTROIANNI: This is my part of that experiment, and I look forward to seeing yours. And with that, I'd rather leave more time for discussion because I think there's a lot to talk about. So you can see my blog there or email me or get at me on Twitter. Thanks for having me.
CHRIS LEONARD: That's great. Thank you to all of our speakers there. We heard some fascinating views. Please now come and join us in the live session where there is a Q&A session. We'll be able to answer some of your questions. Thank you.