Name:
                                The Evolving Knowledge Ecosystem
                            
                            
                                Description:
                                The Evolving Knowledge Ecosystem
                            
                            
                                Thumbnail URL:
                                https://cadmoremediastorage.blob.core.windows.net/fbe4eaf1-cbdc-4fd0-a7b5-b802e354d6c1/thumbnails/fbe4eaf1-cbdc-4fd0-a7b5-b802e354d6c1.png
                            
                            
                                Duration:
                                T01H01M14S
                            
                            
                                Embed URL:
                                https://stream.cadmore.media/player/fbe4eaf1-cbdc-4fd0-a7b5-b802e354d6c1
                            
                            
                                Content URL:
                                https://cadmoreoriginalmedia.blob.core.windows.net/fbe4eaf1-cbdc-4fd0-a7b5-b802e354d6c1/plenary_session_the_evolving_knowledge_ecosystem (1080p).mp4?sv=2019-02-02&sr=c&sig=FXtbAaVqgFHu1mtz0LjxlB%2BVOuN%2F1fKEOQ8GT5NdrwY%3D&st=2025-10-30T14%3A22%3A21Z&se=2025-10-30T16%3A27%3A21Z&sp=r
                            
                            
                                Upload Date:
                                2024-02-23T00:00:00.0000000
                            
                            
                                Transcript:
                                Language: EN. 
Segment:0 . 
 Good morning.   All right.  Good morning, everyone.  I'd like to welcome you to our plenary this morning.  I'm excited for the second day of the conference.  I hope everybody had a good time last night  and got to do some fun networking.   
Um, I just want to take a quick moment just to, of course, recognize our sponsors, and encourage everyone — all the in-person folks — to take a trip down to the exhibit hall.  There are some interesting conversations to be had and interesting topics to learn about.  I'd like to again really, really thank our program chairs, Lori Carlin, Tim Lloyd and Emily Farrell, and the entire annual meeting program committee.   
You guys have done a wonderful job.  A quick thank you.   Just a quick reminder that you can use the program app — I cannot say the name.  Thank you, Gabe.  Um, use it for the agenda for the meeting, and also to connect with fellow attendees, both in person and virtual.   
Um, we'd like everyone to please, you know, again, take a moment to thank and say hello to everyone online.  And I apologize to anyone that I walked up to last night with an iPad and just put it in your face and didn't explain that I was live streaming to our virtual attendees.  I think there's going to be more of that today.  So if it shows up in your face, just say hi and share a little bit about yourself.   
Just a reminder that our Wi-Fi network is SSP 2023 and the password is Areas2023, with a capital A. As always, please remember to silence your mobile devices, and remember that this is a meeting environment that fosters open dialogue and the free expression of ideas, free of harassment, discrimination and hostile conduct.  Creating that environment is a shared responsibility for all participants.   
Please be respectful of others and observe our code of conduct.  If you need assistance during the meeting, please stop by the registration desk.  Recordings of all the sessions will be made available within 24 hours.  And at this stage, I just want to introduce our plenary sponsor from Morressier.   Hello, good morning.   
Oh, well, that was quiet.  Good morning!  A little bit of energy, just to help Roger out here as well.  So: Othman Al-Talib, chief growth officer at Morressier.  This is actually my first SSP, and really like a "hello industry" or a "hello, world" event for me, as I just joined last year.  So really great to meet the people that I've already met, and hopefully the people that I haven't met.   
It would be great to meet you at some point these next few days.  I was told before coming up here that I have to tell a joke.  And then I was also told, but if you tell a joke, make sure you don't breach the code of conduct.  So then, you know, I'm a bit off center with that.  So I looked up dad jokes, unfortunately.   
And I tried to look up publishing dad jokes, and it got worse and worse as I was looking it up for the last 15 minutes.  But I found one that was interesting: what do two tectonic plates say when they run into each other?  My fault.  There you go.   
Exactly.  So, just to give a little bit about Morressier: we started, you know, trying to build innovative workflows for early-stage research — posters, presentations, abstracts.  We then graduated into proceedings and publishing workflows, and then journal publishing workflows.  So we're really excited about all of the work that's been happening over the last few years and all of the great partners that we've been working with, and really looking forward to chatting with more people.  And with research integrity being top of mind across the industry, that's something that we're heavily investing in.   
And so we really would love to have conversations.  We're in booth 107, so if you'd like to stop by and just say hi, or trade better jokes for next time, please do.  And thank you, welcome, and on to the great session.   That was great.  And you've got to give him credit — anyone else that comes to the mic from now through the rest of the conference, be prepared with your jokes.   
Precedent has been set.  So at this time, I would like to introduce the annual meeting program co-chair, Tim Lloyd, to introduce our moderator and speakers.  There he is.   Way to get me to rewrite my short speech with one minute's notice.  No jokes.   
I'm going to make it brief — that's what I'm going to do.  And firstly, welcome, everyone.  Welcome to our virtual attendees who are watching this being streamed.  This is day two.  This is when all the fun begins.  We have a great couple of days of programming for you, so we're really excited that you're here to join us.   
When we were planning the keynotes for this year's conference, one of our goals was to have a thoughtful and informed strategic discussion on consequential topics in our industry, from speakers that represent influential organizations.  We felt that all too often discussion was limited to quotable bite-sized chunks designed for the press or social media.  We wanted a plenary that was different.   
One that helped us, the audience,  to get an insight into the future shape of the industry.  And one that was comfortable digging into nuance.  The moderator for this session was a critical component  of success.  They needed a strategic understanding  of the key issues we face as an industry  to ensure the discussion covers weighty matters of consequence,  not the material of press releases.   
They needed the experience and credibility to orchestrate an effective conversation, and the independence to be comfortable asking pointed questions when needed.  So I couldn't be more delighted to introduce our moderator, Roger Schonfeld.  Roger is the vice president of organizational strategy for ITHAKA, and director of Ithaka S+R's libraries, scholarly communication and museums program.   
Most importantly, for today, he proved to be a perfect fit.  As the moderator of this plenary session,  Roger will take it from here and introduce the participants.  Over to you.  Thank you, everyone, for participating.  We really appreciate this.  Thanks, Tim.   Well, good.   
Good morning, everyone.  And thank you for joining us today.  This panel is ultimately going to be talking about the second digital transformation of research publishing.  Many of us were here for the first digital transformation, when digitization and born-digital publishing used largely print processes and practices, but brought access to research to a far broader segment of society.   
But today we're living through a second digital transformation, in which the adoption of digitally native business models and formats and business practices is taking hold across so many of our organizations.  And we're also seeing the risks associated with digitally native threats to the work that we do.  So today, I'm just thrilled to have had the opportunity to assemble what I think is just an extraordinary panel of publishing leaders.   
I'm just going to introduce each of them very quickly.  Amy Brand, who is the director of the MIT Press; Greg Gordon, who is the managing director of SSRN and knowledge lifecycle at Elsevier; Julia Kostova, who is the director of publishing development at Frontiers; and Nandita Quaderi, who is the editor-in-chief of the Web of Science at Clarivate.  We have some really different and, I think, complementary perspectives to bring to you today, and we're thrilled to have a chance to do that.   
So I want to start by noting that it's very, very easy for all of us to get caught up in the day-to-day of our roles inside of our organizations.  A conference like this one is an opportunity for us to step back and really think about what are the bigger issues that we need to confront, that we have an opportunity to deal with.  It's very easy also to argue by analogy that research publishing is a content business like music or the movies.   
Right?  And we sometimes hear those kinds of conversations.  So I wanted to start by asking the panel to step back and really start with a big question: what is the purpose of scholarly publishing?  And we'll start with Amy and then hear from others as well.   Sure.  So I think of scholarly publishing as being core to the academic ecosystem, but in a number of different ways.   
Working within a university, at a university press, I'm very aware that what we're doing is providing services and information back to the academic community that relate not only to curation and quality control, but also to sources of information that feed into academic career advancement.  I'm aware of what we do largely in our books program, but also in our journals program — how we translate research for the public, for the media, for policy purposes.   
And so I think of scholarly publishing not just in terms of delivering research content to the world under a variety of business models, but as being really integral to the academic and scholarly ecosystem.   Greg?   Thanks, Roger.  You know, I guess helping found SSRN over 20 years ago, and then spending the last several years at Elsevier, has kind of given me this perspective.   
People often make fun of the title — what the heck does "knowledge lifecycle" mean?  And I think the short answer is, if you have a title that nobody understands, you get to do whatever you want.  But that aside for a moment.  So I guess I think about this broader contribution to knowledge in general.  The thing I really, really like about what we get to do is we have this opportunity for arguing, for discourse.   
Sometimes that goes down a bit of a rat hole.  But for the most part, the challenging of ideas and the ability to build on the knowledge of others, and to continue this kind of communal evolution of knowledge, is what I really, really like about what I get to do every day.  And we don't have to all agree, but I think we do all agree that the evolution of knowledge and solving hard problems is a pretty good reason to get up in the morning.   
So that's kind of my overview.   Sure.  Yeah, thank you, Roger, for this question.  I think it's a good question to keep coming back to, precisely because, as you pointed out, you know, we often get caught up in the day-to-day, and this is a good moment to step back and think, why are we doing what we're doing, right?  I mean, I will agree with everything that you guys said.   
I do think that certainly scholarly publishing has multiple purposes, including supporting career advancement for researchers, contributing to knowledge, and delivering services to the communities that we serve — plural, right?  And beyond that, or actually as part of that, I think one of the core purposes is validating and disseminating knowledge broadly.   
Right.  And here I want to go maybe a step further than what was already said.  Frontiers' motto is healthy lives on a healthy planet.  Right?  So we really think about our mission as making all science open, so that all of us may live healthy lives on a healthy planet.  And here's where I'm kind of going beyond.  I say this because I think it nicely summarizes and synthesizes how we view the purpose of scholarly publishing: beyond, you know, supporting career advancement for the communities that we serve, beyond ensuring that they have access to the latest research knowledge.   
I think disseminating research widely is critical for bringing about the solutions that we need to deal with the crises that we are faced with — whether it's the climate crisis, public health crises, pandemics, and so on and so forth.  And so our position, and certainly my personal view, is that with political will, with global cooperation, and with scientific breakthroughs at scale — because incremental progress is not going to cut it where we are — I think we can respond best in managing these crises.   
Right.  And so, based on that, I think our success as a society is going to depend on that widespread sharing of knowledge — the latest scientific knowledge, all of it, at scale, right? — so that we may respond appropriately, given the gravity of many situations that we are faced with.  And I think that this is an important role that we as publishers can play, right, in ensuring that there is wide dissemination of the latest research to inform how we respond to these crises.   
And I think this is a new mandate, an expanded mandate from what the industry has traditionally focused on.  And I think it is a very exciting one.   I'm just going to start by apologizing for relying on my notes more than I would normally, but my brain seems to have stayed in London, even though my body is here in Portland.  So I can't really disagree with anything that's been said there.   
Absolutely, the original purpose of scholarly communication was for scholars to communicate their ideas with each other and, through critique and review, to build that provisional corpus of knowledge, which was then the foundation for further research.  And I also think it's true to say that scholars, being human, have always used that as a way of establishing their primacy, by boosting their reputations and asserting their leadership in a given field.   
And publishers have long been custodians  of the scholarly record.  They're the ones that are responsible for deciding what  goes into the scholarly record, for doing the quality control,  and also for deciding what's excluded  from the scholarly record.  And that's a big responsibility.  And as the pressure to publish and to cite  gets harder and harder, bigger and bigger,  that responsibility gets harder and harder for publishers.   
And that's why we also now need an additional layer of curation.  And that's where companies like Web of Science come in: to see which publishers are doing a good job of being custodians, and which perhaps are doing a slightly less... better job.  A worse job.  That's the word I'm looking for.   
Apologies, people.   I'll probably stop talking now.   I think it's very interesting to hear both a set of mutually  reinforcing perspectives, but also  with some real distinctiveness in them.  And so I really appreciate that.  I want to turn us now to a series  of more pointed questions, I guess we could say.   
So of all of the strange things that have happened in the last several years, we've seen this sector emerge as a vehicle for research fraud and academic misconduct.  And Elisabeth Bik spoke eloquently and extensively yesterday about some of the particular dynamics that we're faced with in that respect.  We have also seen research publishing become a societal vector for misinformation, more broadly even than the specific fraud and misconduct issues.   
And Nandita, you just sort of mentioned this: at the most basic level, publishers are responsible for what they publish.  And so there's a responsibility to the scholarly record that all of us should feel.  Many publishers have been investing more and more in addressing this set of issues — in professionals and infrastructure to block fraudulent or otherwise wrong submissions as they're coming in.   
So we know that there's activity already taking place.  The question that I want to put to the panel is, beyond what's already being done today — right? beyond that — what more should your organizations and our sector be doing to address research integrity and, importantly, societal trust in science?  Nandita, I'd like to start with you.  The Web of Science is probably more responsible than any other party for determining the metes and bounds of what counts as the trusted scholarly record.   
So would you like to start?   Absolutely.  I think I'd like to start by echoing what everyone, including Elisabeth, has said so far: that this is a multi-stakeholder problem.  None of us can work in isolation anymore, and we need to communicate and collaborate.  And this is already happening.  So last week I had the privilege of being involved in a summit that was co-organized by STM and COPE, called United2Act.   
And this brought lots of different stakeholders together, including funders and research institutes and publishers, to tackle the problem of paper mills.  And that was actually written up in Nature yesterday, and I really recommend everyone have a quick read, so I won't really go into what was said there.  But the way I think of it, there are sort of three levels of intervention that are needed.  At the top level, we need to remove the perverse incentives that are currently in place, which are really to do with the overreliance on — and, in fact, the misuse of — bibliometric indicators in research assessment.  That's really pushing this driver of quantity over quality, and cite, cite, cite — to cite and to be cited.  At the second level, we need better measures to identify and to block bad content from entering the publication record.   
And the third stage — and this is where I think Web of Science is mostly involved — is to acknowledge that the record, unfortunately, is already polluted, and we need to clean it up and reduce further pollution as much as possible.  And so at the Web of Science, we apply a very rigorous selection process that filters out the journals, books and proceedings that don't meet our quality criteria.   
And to give you a sense of context, only 15% of journals that want to be evaluated actually make it through into the Web of Science.  And it's worth noting that this is a selective, but not a competitive, process.  We're not looking for high citation activity; we're just looking for journals that do what it says on the tin.  And even with that sort of minimal barrier, it's only 15%.  And then once a journal is within the Web of Science, it's not in forever.   
We periodically reevaluate journals to make sure that they still meet our criteria once they're in.  And with the increasing amount of fraudulent activity, we are having to spend more and more time re-evaluating indexed journals, and this comes at the expense of evaluating journals for the first time.  And we've always been very responsive to community and customer feedback when we're trying to prioritize which journals to reevaluate — things like PubPeer, Retraction Watch, the work of Elisabeth and other super-sleuths.   
But that's always been quite reactive, if you like.  So what we've really been investing in over the last year are some AI tools so that we are able to be much more proactive.  And so we've got an MVP now, which we brought out this year.  And what that does is help us to focus on which journals are showing signs of concern, so we can really target those reevaluations.  The tool doesn't tell us to delist something; it just points our attention to the right place.   
And that's somewhere where we need to improve: we've got an MVP now, and we need to become more sophisticated in what we do.  We also need to become more transparent.  So some of you may have noticed earlier in the year there was quite a lot of brouhaha about us delisting 50-odd journals, and it caused some confusion, because we said we delisted the journals but we didn't actually name the journals.   
And it's clear that more transparency is needed.  So as of last month, we are now publishing a monthly update of which journals enter the Web of Science and which journals are delisted — and, importantly, the reasons for delisting, because sometimes journals are delisted for non-editorial reasons, just for production reasons: we can't get hold of the publisher, they're not responding, they're not sending us content.   
So not all delistings are the same, and we'll be much more clear about that in future.  And another thing worth pointing out, where we can get better: when a publisher retracts an article, we don't remove that article from the Web of Science.  What we do is flag it as retracted, and that isn't 100% accurate.  Some publishers are clearer than others in marking things as retractions.   
And so we have to bring in data from other sources, such as Crossref, for example.  And again, there's room for efficiencies and improvement there.  And the final point I want to mention is about this month's JCR release and about the JIF.  As you probably know, the Journal Impact Factor is one of the most misused metrics in research assessment.  It's a journal-level metric that is often used as a proxy for researcher performance as well.   
And so at the moment, only a certain set of journals within the Web of Science get a Journal Impact Factor — and they are the most impactful journals, in terms of scholarly impact, in the social sciences and the sciences.  What we're doing from this month's JCR release is giving all journals in the Web of Science a Journal Impact Factor.   
So what that does is the JIF then becomes an indicator of trustworthiness, rather than just of high scholarly impact.  And I think that's important.  When the JIF was introduced back in 1975, we didn't have the problems we have now, and there wasn't really that need to have a clear line between trustworthy and untrustworthy.  Whereas now, that is the boundary I think we need to defend, more than impactful versus non-impactful.   
Thanks.  Amy, you've thought a lot about some of the standards and practices that inform this work.  Would you like to speak?   Sure, sure.  I thought, you know, Nandita did a great job of touching on the major issues around — I mean, largely kind of top-down ways of creating processes and filters and best practices.   
I think another way to think about it, which is complementary, is how do we build in, through our infrastructure, through our metadata, through our signaling systems, other ways of having machine-readable tracking of accountability and trust.  So even things like, you know, the CRediT taxonomy that we worked on with NISO and others, which forces an author to disclose what exactly they contributed, create that accountability.   
But even more so, you know, having transparency around the kind of peer review that an article — or, say, potentially a preprint — has undergone is a way of adding information about trust and provenance into the systems that we rely on.  So I think that that's extremely important.  But, you know, I would agree with what you said at first about the major issue really being the kind of incentives that we have now in our business models, especially with gold open access, you know, to value quantity over quality — with academic incentives probably being the largest driver as well.   
And the issues that we're seeing now, and, you know, the issues that our keynote speakers spoke about yesterday.   OK.  Um, you know, I love that phrase, "perverse incentives."  I mean, it just holds so much of the problem that we have.  And, you know, the way I think about this is that it's a balancing act.   
In other words, if we wanted to take one paper and publish it every five years, we probably could do a pretty good job of figuring out whether it was trustworthy or not.  The problem is that that doesn't work — it especially doesn't work to solve the hard problems that Julia was just talking about.  And so it is this balancing act.  You know, from a preprint standpoint, SSRN has always been doing a level of review — certainly not peer review, but a level of review — so that anything we wanted to put on the platform was as trustworthy as we could make it within the time frame that we had.   
And again, all of those things are kind of gotchas when you're starting to actually bring them into the real world.  But to me, this balancing act also kind of starts at the grassroots level.  I did a TED talk a few years ago, and in that TED talk I said that we are all our own editors.  And what I suggested was we spend five minutes actually figuring out what we're reading.   
So don't trust the bus that's going down MLK Boulevard outside.  Actually spend five minutes being responsible for learning what you're reading.  Actually do a little bit of research on your own and try to figure out: is it trustworthy?  Take some of the indicators that Amy was talking about, take some of the things that are available to us, and don't believe that something is trustworthy just because it's got a bunch of downloads or because it's from a popular or highly cited journal.  Actually take some responsibility on your own to figure out whether you should trust it.  It's the same thing I say to my children: don't believe everything that you read.  And I think that's a good nugget for us to take away from the trust questions — that we all have a level of responsibility, and shouldn't just expect that the publishers or ChatGPT are suddenly going to solve this problem for us.   
We can argue, yes, about that.   Then let's argue with Greg.  I think several of us might like to do that.  Go ahead, Julia.   Well, I think, I mean, fraudulent science, and just broad unethical behaviors that exploit the faith and the goodwill that exist in our field and ultimately erode trust —   
I think those are issues that the entire industry is grappling with, right?  And so what I'm hearing is precisely what we need to be doing: a wide range of responses from multiple stakeholders, not just publishers.  Right?  This is the point that I think has come up repeatedly over the last several sessions.  And I think, you know, some of the measures that you shared, Nandita — and Amy as well — are a step in the right direction, in bringing transparency in, in including signals or indicators of trustworthiness or the lack thereof.   
Right?  I mean, at Frontiers, you know, we've responded strongly to this in a variety of ways, including by pioneering, in 2019, an AI tool that runs a wide range of checks, including on figures and images — which, as we heard yesterday, is where the need is really great.  And this was in 2019.  This was before most of us could use "AI" in a sentence correctly.   
Right.  So, you know, we've since expanded these capabilities.  And I think, to Roger's point, this is something that we will continue to have to invest in, because it is important.  So our investment will continue, and so will our participation and cooperation in industry-wide initiatives to respond to and to manage these situations.   
Right.  So I think that's the first part of it, right?  I firmly believe that we as publishers, you know, need to set, and of course uphold, the highest possible standards of quality, of ethics, of operations — right? — to make sure that the quality of what we publish is top notch.  And that means investment, technology, training, resources, cooperation.  I think this is really our social purpose as a business: to ensure the integrity of the scientific record.   
And this is a point that I think was made really quite eloquently.  So that's the first bit.  The second point that I want to make here has to do with trust in science and what we could do to bolster it — with how we collectively make sense of new knowledge, right?   
So particularly in an environment where there is distrust — sometimes manufactured doubts, right — that we find ourselves wittingly or unwittingly caught up in.  So a colleague of mine recently gave me a book by Naomi Oreskes — she's a historian of science and, I think, a geologist — and the book is called Why Trust Science?, and it is published by our friends at Princeton University Press.   
And her argument there is that trust in science comes not only from evidence, but also from what she calls the public consensus around it.  Right?  And so when that consensus is fragmented or siloed, when there is no transparency around the methods that allowed us to reach certain conclusions, when the research is incomprehensible or is reported inaccurately in the media, and so on and so forth, that only fragments and compromises our collective sense-making further, right?   
By contrast, whatever consensus we can achieve, right, I think is going to be stronger when it is backed up by the latest knowledge, by the latest data, that is openly accessible, that is globally accessible, shared in full and in a transparent way.  And so, just to bring this down to some of the pragmatic questions that we deal with:   
You know, we heard yesterday from Dr. Bik about the importance of open research and of transparency for the integrity of scholarly data.  Right?  It is, you know, easier to verify, easier to validate things, when everything is accessible for this kind of validation.  And so this is my organization's mission, and this is how we see ourselves contributing to, I guess, a vibrant knowledge ecosystem that really has an important role to play.   
Amy?   Yeah, well, I actually have reactions to both of those things.  There was this wonderful conference last week at the National Academies, co-hosted by the National Academies and the Nobel Foundation, about truth, trust and hope.  And it was largely about disinformation.  And a lot of the research on public lack of trust in science shows that what really works is not persuasion.   
It's kind of creating this sort of sense of belonging in the community of science.  And, as you said, science is very social — and yet our structures are such that there are many people that feel excluded from it.  And that has been a big issue.  I learned so much at that conference, and we can talk more about it later.   
What I was responding to, with respect to what you said, Greg, is that it just doesn't scale.  You know, you hear that from the preprint community.  I'm not saying that we don't have to read and interpret, but, you know, I've often heard it said, oh, we'll just post preprints and everybody will read them, and researchers will decide for themselves what's good science and what's not.  You know, all of us, in the fields in which we have different levels of expertise, don't have enough expertise to do that, and certainly don't have enough time in the day.   
So realistically, whether it's to read a whole article — consume it, understand it, work with it — or to skim through it, we will always be relying on different forms of signals: whether it's the journal brand, whether it's the impact factor, whether it's an author we know, whether it's, you know, that we can tell it was peer reviewed 10 times.  And so that reading and interpreting is, unfortunately, you know, just not the way the world works.   
Well, let me be clear.  Maybe Roger's aligned somewhat, but I'm probably the biggest fan of preprints up here, at least.  I'm not in any way, shape, or form saying that you should trust a preprint just because it's been shared.  What I am saying is that the process of sharing knowledge needs to get faster, it needs to get more iterative, and we need to learn from each other faster.   
So I do think that preprints hold a very important role in that process.  What I am saying, though, very clearly, is that I don't think it's fair for any of us to depend on some external source — MIT Press, Elsevier, Frontiers, anybody — to be responsible for all of the trust in science.  I agree that communities make a big difference.  I also think that we all individually need to own some level of responsibility in the process.   
And if Elsevier says, OK, this is a great paper, and you read it and it's not, then you should say it's not a great paper.  Just because Elsevier says it's great doesn't make it great.  And so I think that we all have an individual responsibility in this process, not solely.  And I do agree that there are a growing number of tools for nefarious actors to create fraudulent research in this process.   
But I just think that we all have to own some level of responsibility, and not depend on the big houses to be the ones that basically hold all the responsibility.   So how does that play itself out?  And I don't want to put this just to you, Greg; maybe others will want to respond as well.  But how does this process of generating trust in science — which I think is a little bit of what we're ultimately talking about here — how does that play itself out in an extraordinarily polarized information environment, where there are bad actors who intentionally misconstrue science and who have used, by no means just preprints —   
I mean, all parts of the scholarly record in recent years — to do so.  I have trouble with, you know — we all should be critical thinkers about the world around us, but increasingly few of us actually are.  And I'm not talking about folks within our community necessarily, but in the society broadly that we rely on, ultimately, to fund and support the work that we do.   
So I don't know if there's exactly a question in that, but I feel like there's some set of responsibility here, and I'm wondering where that falls.  Maybe it's outside of the work of our organizations here to some substantial degree.   There's a lot to unpack there.  You know, I think it was 1973 when Granovetter wrote that article, "The Strength of Weak Ties" — right? — and that your strong-tie communities, the communities that you're talking about, reinforce your beliefs, and that you actually learn from the weak ties.   
You really learn more from the connections that you make with communities that you have a weaker tie with than from the stronger ties of the people that reinforce your beliefs.  And so I think that that exposure to those other ideas — with algorithms and social media and other things that have come about in the last decade or so — is actually part of this problem: a lot of people will read the stuff that's fed to them instead of looking for something to consume that is not, basically, you know, right there at the ready.   
I think that, you know, from an SSRN perspective, when somebody submits a paper to SSRN, we have a team of PhDs and early-stage professors who basically then look for other places to place it.  So — I'm an accountant, actually — if I wrote a paper about Sarbanes-Oxley and I wanted to put it into financial accounting, it could also wind up in regulatory economics or corporate governance or securities.   
And so it's that sharing of ideas: here's the way the accountant looked at this, here's the way the economist looked at that.  And so I think it's that sharing of perspectives and ideas that actually helps create some of these weak-tie connections, which I believe start to break down some of the constraints and some of the problems that we have in the way that things are siloed right now.   Well, I mean, the cognitive scientist in me just wants to say that human beings are so gullible, you know.  And really, yes, we should be more discerning.   
But, you know, part of what we're seeing — and I think we actually have almost, or have, crossed that line into this disinformation dystopia, certainly in the near future.  And I think we need those external mechanisms for understanding the provenance of the information that we're consuming, for sure.   I think you're absolutely right.  Human beings are gullible, but also we want to be right, right?   
We want to be on the winning side.  And I think that's the problem we have now.  The way we consume our information is often by using algorithms that serve us something up, and then serve us something else up, and take us deeper and deeper into our particular rabbit hole of choice.  And this weak-tie business — this sort of being confronted with views that oppose our own —   
that's not the world we're living in now.  We're, you know, having these views reinforced on us.  And I don't know how we can break that unless the algorithms change or we take responsibility ourselves.  But it's hard to take responsibility, I think.   I absolutely agree.  I'm not saying it's easy.   
I'm just saying I agree with everything that you're saying.  I'm just saying that I certainly don't have the ability to go change the algorithms.  So, I mean, I think that we have to take responsibility.  And I do actually agree with you, in the sense that when I speak to my team about trust signals, I use this analogy: when you cross the road, you've got the green man or woman, and the red one.   
Just because it's green, it doesn't mean you abdicate responsibility to look left or right.  You still have to check there's not a great big car coming at you.  So I think it is a mixture between agreed, consensual trust symbols and individual responsibility.   So I'd like to move us on now from this very interesting colloquy to another question, which really takes us more in the direction of the business of the work that we all do.   
Many of us have been following some of the winds of consolidation that seem to blow almost inexorably through different parts of the scholarly publishing sector.  I've spoken in recent months with some analysts and observers who predict that any organization with less than $50 or $100 million in publishing revenue will struggle to remain independent in the long run, whether through an outright acquisition or some other kind of services agreement.   
So I'd like to ask the panel about the consequences of a scenario in which all publishing is ultimately wrapped up into, I don't know, 5 or 10 or maybe 15 major publishing houses.  Amy, I want to start with you.  You've been a strong voice for the importance of publishers thinking at a network level.  So tell me a little bit about how you see this.   Sure.  I agree.   
I mean, the data speaks for itself: we are going to see more ongoing consolidation in this industry.  But I also think those of us who work on the mission-driven side, in universities, have to do what we can to work against it and build hedges against it.  I think the thing that's less talked about is this unintentional but still kind of unholy alliance between the large commercial sector and its profit motives — especially for commercial publicly held companies —   
and what we see in some of the stronger voices in the open access movement that call for open at any cost.  Because where that has really landed us is the kind of dominant open access business model that is enriching these larger commercial companies and creating the kinds of problems that we were just talking about, with respect to incentivizing the publication of quantity over quality and devaluing those mechanisms that build curation and trust into the system.   
And so that really is my biggest concern, and why I spend a lot of time trying to campaign for more institutional investment in publishing as an alternative — if only for the sort of bibliodiversity that we all value.  So, you know, those are really my main points on that.  I think that this is exacerbated by large language models.  But even putting that aside, you can't ignore the fact that all the content and data being made open through the growth of open access — which we're also very much a part of — being provided under unrestricted licenses like CC0 and CC BY, also plays directly into this commercial growth, into the pockets, if you will, of these same commercial companies, by serving as content or fodder for the other tools and analytics and technologies that they sell to the research community.   
So we're in a bit of a perfect storm, where if you believe, as I certainly do, as Julia does, and, you know, even Elsevier does, that we want to make as much content as open as possible, I think we have to be really careful and discerning about what is really going on.  And I'm not even saying that there are bad actors or bad intentions here.  But, you know, we see this right now, right in front of our eyes, with respect to the way in which generative AI is consuming information.   
And I think we'll have a chance to come back to that question and issues around copyright and large language models.  But yeah.  Greg, Elsevier has grown in part through acquisitions.  Tell us a little bit about what that looks like from your perspective.   Um, well, first, I don't disagree with anything Amy has said — and for the record, we're actually good friends, although it may be a little questionable at times.   
Not at all.   No, no, I'm just kidding.  I don't disagree with anything that you've said.  You know, Elsevier is a very successful business that's run extraordinarily well by a bunch of very, very smart people.  And its objective is to be a profitable company,   
so that it continues to be a profitable company and to support science and do all the great things that it's done, from a commercial side, for the communities and for open science in general.  It would be silly for them to look at open access and say, oh gosh, we can't do that.  I mean, they would figure out how to actually do it, and do it profitably, and make a successful business out of it.   
And they've done that.  You know, I think one of the core problems is that the industry that we're in is really hard, and it's very, very expensive.  I mean, the amount of money that Elsevier spends just combating paper mills would bankrupt many small publishers.  The magnitude of some of these problems is so great that even an Elsevier or a Springer or a Frontiers or a Clarivate can't handle the cost alone.   
That's why we have to do these things and solve these problems together.  But to me, it's this balancing act of trying to figure out where you provide the most value in the process.  And, you know, if you can provide value, then you should continue to provide that value, whether it's inside of a large publisher or independently.  But part of the consolidation is certainly because some people have created businesses.   
I know a handful of people — I saw them last night and had a glass of wine with them — who were trying to create a business precisely so they can sell it to Elsevier or somebody else.  And that's their business model.  And God bless them, that's great.  But a lot of the niche providers of science, I think, should be able to continue to provide that.  It's just really hard to do.   
And very expensive.   Julia, Frontiers is a relatively new entrant into the publishing landscape.  What does this question look like from your perspective?   Yeah, I mean, from my perspective, and certainly harking back to the first question that you posed to the panel, Roger, I think the purpose of publishing is to disseminate research widely, right?   
This is our starting point; this is our end point.  And it's certainly, you know, our mission to make all research available openly.  I'll take a more Solomonic angle here, and I'll say that I think openly available science can, and probably should, be delivered by more than one publishing model or framework, right?   
I do think that competition spurs innovation.  I think that it prevents the incumbents from becoming complacent, if you wish, or losing focus on the needs of their customers and partners.  And I think competition is good for author choice.  It's good for the robust quality of the science that we want to publish.  So I think this is an important piece to keep in mind, because in a competitive landscape, all publishers need to maintain high quality, right, and to continue to meet their authors' and their partners' needs, because otherwise they will just vote with their feet.   
And I do fear that consolidation of the scale that we're talking about here, Roger, probably limits that.  I mean, granted, maybe in some cases it can provide some smaller players access to technology that they otherwise might not be able to develop themselves, or, you know, maybe scale and reach and certain expertise around workflows, or what have you, that by themselves would be difficult for them to put together.   
But ultimately, I just feel that this kind of highly consolidated market, or an oligopoly, is not the right starting point for us.  I think it has not served researchers well.  It has not served our community well.  It has not served many of the stakeholders that are part of our ecosystem well.  And so I think that's where I'll pause.   Nandita, what are some of the consequences of consolidation on the scholarly record — on the quality of the scholarly record?   
So I thought an interesting point to start answering this question would be to look at how consolidation has ended up concentrating content amongst the biggest publishers, at the journal level versus the article level.  I'm sorry, the numbers are tiny here, so I'm going to have to really squint.  So, unsurprisingly to me at least, the concentration has been more at the article level than the journal level.  At the journal level, for example, if you look at the top five publishers, they contribute 42% of the journals in the Web of Science; if you take that to the top ten, that's 51%.  If you look at the article level, the top five contribute 54% of content, and a whopping 67% of content comes from the top ten publishers.  And there are some notable things here.  If you look at the journal level, the top three publishers are Springer Nature, Taylor & Francis and Elsevier: Springer Nature has about 12% of the journals, and T&F and Elsevier around 10% each.  If we look at articles, Elsevier has 19%, followed by Springer Nature at 12% and Wiley at 9%.  But what's particularly interesting at the article level is that we get the emergence of the two big born-OA publishers: MDPI comes in at number four at 8%, and Frontiers comes in at number six at 3%.  And obviously neither of those has been involved in the M&A game.   
But I think this just shows how the unit of value has shifted from the journal level to the article level, and I think that's borne out by these very top-level numbers.  And then I was trying to think, beyond what the panel has so eloquently said already, how could these differences be explained by looking at what's motivating the acquirers?   
And you can think of many different reasons: they might want to accelerate their penetration into the market; they might want to retain more manuscripts through better transfer cascades; they might want to expand from their existing coverage in the arts and humanities into the sciences, and vice versa.  And then I kind of fell asleep before I could really bottom out that argument.   
But I think if we delve into that, it might give us a little bit more understanding of how things are going to play out in the future, because I think it's still very early days to see how things play out.  I think some of the things that we've opined here will happen, but we're very early.  I mean, a commercial acquisition happens, and then it can take years, if not decades, for true operational and cultural integration to happen.   
So you've got this umbrella company — say, an Elsevier — but different bits of it operate under very different circumstances, with different priorities.  So yeah, time will tell.   Those numbers are really interesting.  Thank you for sharing them.  We're going to finish up with one final question, which has already been hinted at.   
We're living through a mania about generative AI — and just because it's a mania doesn't mean it's not going to have transformative effects for research publishing as well.  I have been fairly nonplussed about this, particularly for STEM.  We've long been foreseeing a transformation, a transition from human authorship and readership towards machine-to-machine communication, and so that transition, if it is happening, will continue, perhaps be accelerated, but not be fundamentally changed by these large language models.   
So I want to ask this panel about your organizations' strategies for enabling machine-to-machine research communication.  It's an enormous transformation, both ethically as well as competitively.  What kinds of opportunities do you foresee?  Greg, I think we'll start with you here.   Um, you know, I mean, I actually maybe see it as more of a spike in the process than you do,   
Roger.  And I do think, as Nandita just said, that we have no idea how much generative AI is going to affect our jobs.  I mean, I think it's going to create an opportunity for the nefarious actors to push more content into our submission systems and challenge them.  And I think, as Elisabeth showed last night, it's going to make the super-sleuthing of unethical behavior even harder.   
I mean, I was on a strategy call with some technologists a week and a half ago, and they were talking literally about this concept of using an LLM to detect LLMs that are creating fake submissions.  And I was just trying to wrap my mind around this whole process of the machine catching the machine.  It's this whack-a-mole of machines, back and forth.  And it'd be fun to watch if you're a fan of, like, Tron or something like that back in the day.   
But from an Elsevier standpoint, to be quite frank, we're still really focused more on this side of the fence than on trying to figure out this whole machine-to-machine question.  I know there's a bunch of smart people thinking about it, but I don't think that we're anywhere near ready to actually have that be part of the real world.   
Yeah, I mean, it's a great concept.  I love it, but it's still fraught with perils.  I'll leave it at that.   Amy?   Um, so, I mean, there's no way that a modest-sized university press like ours is going to be on the cutting edge, right,   
of using large language models in our workflows and in how we publish.  But we are one of the leading publishers on AI and machine learning and are doing a lot of publishing in this area, so that's one way that our organization is responding.  We work with the university press community on developing best practices — around requiring authors, for example, to disclose that they've used these tools in their writing.  And personally, in the work I do —   
the kind that goes beyond my day job at the press — I'm spending a lot of time talking with other stakeholders.  I'm much less concerned about, you know, a long-form or a short-form piece being partially written by an artificial intelligence than I am about how this exacerbates all the issues that we've been discussing around trust and information.   
And I am also concerned about copyright.  I've found, as I've gotten older, I've gotten a bit more conservative on this topic, for a number of reasons, having mostly to do with the fact that if you are trying to do sustainable open access publishing, you need protections to do so around the open works that you're publishing — and, you know, hence my not being a supporter of, say, controlled digital lending.  But I'm watching the space now where there are conversations around this, and I'm seeing Creative Commons trying to make the argument that the consumption of published scholarship by these models should be considered fair use, because it is transformative and not exploitative.   
I'm not sure I agree with that, and I think it's something that we should all be thinking about — what the unintended consequences of that might be for all of us.   Yeah, thank you.  I too will preface my remarks by saying that I think it's too early in the game for us to be able to make grand pronouncements and predictions about the implications of this technology for our industry or society as a whole.   
I think we all probably get a little overwhelmed by the endless pontification on this topic in the media.  And part of it is that I think we as a society are still looking to understand a lot of issues here, right?  Whether it's, you know, the data that feeds these models, whether it's the implications of some of the harmful or incorrect results that are being rendered, whether it's IP, whether it's the ethical concerns around this — concerns about trust and expertise.   
I think those are all legitimate issues, and it's just too early in the game for us to draw meaningful conclusions, or kernels of wisdom there, if you wish.  Right?  Those are big questions.  And to Amy's point, as a research publisher, you know, one of the ways in which we are looking to contribute to this conversation is by facilitating the kind of debate and dialogue and just in-depth discussion around some of those issues in some of our publications, including Frontiers in Artificial Intelligence,   
one of several journals that we have that are looking to better understand the implications of this.  From a pragmatic angle, I do think that this technology has the potential to be transformative, right?  I don't think that we're just talking about glorified search here, you know, and I think we are really kind of on the cusp of a paradigm shift.  All of the consequences, intended and unintended, you know, still need to be understood better.   
But I think that this has the potential — sorry, can you guys hear me? — this has the potential to be transformative.  And we're already seeing some very promising applications — thank you — in biomedical research, in neuroscience, that I think really have the potential to amplify how research is done.  And this is not anything to smirk at.  I think this is really important, right?   
Frontiers has long been interested in this technology, as I mentioned earlier, and we're exploring potential applications to what we do and how we can do it better.  One key point that I want to make, though, is that we want to take advantage of this technology in a responsible and accountable way — so, not to replace, but to complement and to enable the important human element here.   
So we're really talking about a human-led, technology-enabled kind of approach to this.  But I think there are a lot of exciting opportunities, the risks and the unintended consequences notwithstanding.   So I'm going to come at it from a slightly different angle.  For hundreds of years now, we have operated in a journal-based system, and articles are selected, they're collected,   
they're read within a particular journal aimed at a particular scholarly audience — a human audience.  As we've moved to digital, we've already seen a shift to an article-based system.  We can see that, for example, with the growth in preprints, and articles are no longer necessarily grouped with similar content aimed at a particular set of human experts.   
We don't scan a journal from cover to cover anymore, and so there's already been some loss in the opportunities for serendipitous discovery.  However, the articles themselves are still constructed as narratives that appeal to a human readership.  You know, humans appreciate the flow of context that comes from the typical structure of an article, with an introduction, methods, results and a discussion.  Not so much with machines —   
I don't think they care about that quite so much.  And so we're increasingly seeing output that's more geared to machine-to-machine communication, with a machine readership.  And we see the atomization of articles into discrete parts, which are published independently, often in very different locations as well, rather than being published as a single unit.  And I think this is the big change that we're beginning to see now, and as the range of publishing outputs evolves, we're trying to evolve at the Web of Science as well.   
So we've been indexing journals for decades now, since 1964.  We began indexing data in 2012, when we launched DCI, the Data Citation Index.  And earlier this year, we launched the Preprint Citation Index, which indexes preprints.  So it's about trying to cover all these different types of outputs and joining them together.   Well, we are —   
I think we could have continued a colloquy on a number of different topics today, but I can see we're in the red on time now.  So I just want to thank the panelists.  This has been terrific — information from so many different, really valuable perspectives.  So thank you.  Thank you very much.   
Thank you.  Thank you.  Thank you.