Name:
Charleston Trendspotting: Forecasting the Future of Trust and Transparency
Description:
Charleston Trendspotting: Forecasting the Future of Trust and Transparency
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/7ce5e11f-8277-4560-8430-7b7c601fb17b/videoscrubberimages/Scrubber_0.jpg
Duration:
T00H47M11S
Embed URL:
https://stream.cadmore.media/player/7ce5e11f-8277-4560-8430-7b7c601fb17b
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/7ce5e11f-8277-4560-8430-7b7c601fb17b/session_1a__charleston_trendspotting__forecasting_the_future.mp4?sv=2019-02-02&sr=c&sig=iuj87oK49I28uu8hgI%2FQAfB2IkBLAHay22sY67mFTgw%3D&st=2024-11-20T03%3A39%3A17Z&se=2024-11-20T05%3A44%3A17Z&sp=r
Upload Date:
2024-02-02T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Hello, everyone. We're going to go ahead and get started. I'm Leah Hinds, executive director of the Charleston Hub, and I'd like to welcome you today to our Charleston Trendspotting Initiative session. I'm thrilled to be here with all of you in Portland, and I'd like to thank the SSP staff and board of directors for gathering us here together. The Charleston Conference is proud to be a sponsor of the meeting, and we appreciate the ongoing support of SSP and the partnership we've developed with them over the years.
Just a little bit of backstory for any of you who aren't familiar with the Charleston Conference or Against the Grain. The conference was founded in 1980 by Katina Strauch; we're in our 43rd year this year. At the time, she was a newly hired collection development librarian at the College of Charleston. She had no travel budget to attend meetings and meet with the big names in the publishing world.
So she thought, I live in a cool city, I'm going to invite some people here to talk to me. It's grown from a group of about 25 people in 1980 to almost 3,000 last year, and we're planning for a good attendance this year as well in November. Against the Grain, or ATG as we fondly call it, grew out of the conference as a way to continue the conversations we were having, in a journal published six times a year.
It covers the same sorts of topics as the conference, and the goals of both the Charleston Conference and Against the Grain are to bring together librarians, publishers, and vendors to discuss issues of importance to us all and to develop strategies and solutions to face those issues together. So we're going to be participating in a futures activity today.
Lisa's going to go through more about what futures thinking is and what the activity will be. It's designed to be a very interactive, hands-on session, as you can probably see from the arts and crafts materials in front of you on the tables. We're going to be doing some small group activities later on; at that time, we may ask you to join together and regroup into larger groups, but we'll cover that when we get there.
So this is, I believe, the sixth year of the Trendspotting Initiative. How many people here have attended one of our sessions before? Anyone? OK, have any of you attended more than one of our sessions before? Next year we need to bring a prize for whoever has attended the most Trendspotting sessions.
The initiative started from a conversation at the Charleston Conference between our founder, Katina Strauch, and some attendees that stressed the importance of being proactive rather than reactive to the trends coming through our industry in the near future and how they will impact the world of libraries and publishing. We met for the first time as an organized group at the 2017 Charleston Conference, as a Futures Lab project, and then in 2018 with a refocused format and the updated title of the Trend Lab.
We've done preconferences at SSP and sessions in Charleston, and reports on the results of these meetings have been published in Against the Grain each time. We took a break in 2020, as a lot of things did. We experimented with virtual workshops in 2021, and we started back last year with our regular meetings. We did community polling in April of 2022 to ask for community input in identifying trends that impact our work.
We used that information to shape the discussion for workshops at both the SSP and Charleston 2022 meetings. Last year, we examined PESTLE trends with a focus on the T of that acronym, looking at the top 25 technology trends in our industry. And this year, we're taking a different approach with the futures wheel activity that Lisa will explain shortly. So here's our agenda for today. We're currently in the welcome and introductions portion of the day.
Lisa will do Futures Thinking 101. We'll have a futures wheel activity, small groups reporting back to the large group, and then a wrap-up after that. So with that, I'm going to introduce Lisa and turn things over to her to get started. Lisa Janicke Hinchliffe is professor and coordinator for research and teaching professional development at the University of Illinois Urbana-Champaign.
She's also the project director for the Trendspotting Initiative and the Trend Lab leader. Many of you know her already from her active roles in presenting, writing, teaching, and conducting research in scholarly communications. So thank you so much, Lisa, for your continued hard work on this project. Thank you, Leah.
And thanks again to Charleston and SSP for continuing to support this work. Don't be frightened by the craft supplies; it's not an art project, it's just something to help us with our thinking here today. Some of you will be familiar with futures thinking, but one of the things we want to do is make sure we squarely situate the kind of conversation we're going to have today.
I think in many ways it's a great follow-on to the plenary we just heard. I hope you had a chance to listen to that plenary and some of the things Roger was trying to get a longer-term and strategic perspective on; this is going to be a nice follow-up to that, I think. So futures thinking is a way of addressing, anticipating, and maybe even helping to shape the future.
It's not gazing into a crystal ball about what will be. It's thinking about what could be, as a way of asking, can we prepare for what could be? But also, if we don't like some of the could-bes, what do we do to disrupt that potential future? So it helps us think about how policies, strategies, and actions can promote a desirable future and help prevent the futures we consider undesirable.
I think this morning we heard about a lot of current trends related to trust and transparency that are undesirable; some of these things are not going in the best direction. Other things we heard are pretty desirable, so how do we keep going in those directions? Our goal today is not to look into a crystal ball and say, in 2030, here's what trust and transparency will look like,
but instead to have a dialogue, some thinking that you can take back to your own organizations, understanding what is possible in order to strengthen your own leadership and inform your decision making. So again, our goal is identifying, assessing, and perhaps shaping the way that systems and relationships develop over time. One aspect of futures thinking is that it requires careful and thoughtful analysis of current conditions:
the pressures on the system, the risks in the current system, the resources we have, and the potential implications of current trends. It's a process to reveal potential futures, not to predict a particular future. That's why we say futures thinking; the futures right now are plural for us, there are multiple possibilities. Only when we look backwards do we have the ability to say there was one thing that happened.
The other thing we want to think about is the context of these three words: probability, plausibility, and feasibility. As we think about the potential ways things could develop, we can think about what's probable, what's plausible, and what's feasible with the resources and the like that we have. A common mistake in futures thinking, which is why I want to call it out right now, is that people conflate their values and their theories about how the world should be
with how the world currently is. We need to stay firmly grounded in the way the current world is, which is not to say we don't have aspirations about how we want it to be, but the futures we create will be founded on what's currently the case and then informed by our beliefs and strategies about how we want it to be. So there's a place for the ideology and the vision, but we describe the world as it is so that we have a better chance of creating the world we want to have.
Otherwise, we end up with unintended consequences through naive thinking about the way the world is, and it actually decreases the likelihood that we can take effective action to get where we want to go. As Leah already mentioned, we often use a different technique every year, because we know people come back for this same session, but we don't want to do the same session.
This year's technique is called a futures wheel. It's a structured brainstorming technique and a visual method of exploring consequences, centered on a specific change or trend. We identify something that has happened, and perhaps is continuing to happen. It can either be episodic, something that happened that causes us to think the world might be changing, or a trend that we've observed.
Just to give an example, episodically, we could have the whole thing that happened around Hindawi journals, with Wiley retracting hundreds of articles and those journals getting delisted from Clarivate; that was a big episodic moment around research integrity in our industry. We also have the trend of paper mills growing in impact in the field.
So either is fine, one of those two things, either an episodic event or an overall trend. The way this works, in the most abstracted way, is that you put the trend or the event in the middle. OK, that makes pretty good sense. And from there, we then ask, what are the first-order consequences? Because this happened,
here's what could happen, and/or because of this trend, here's what we expect to see happen: a first-order consequence. So the change is in the middle, the thing that happened or is happening, and then we get the first-order effects. And I'm trying to compensate for bad color in this Wikipedia image.
You get your first-order effects, and then from the first-order effects you say, OK, if those things happen, what could happen? Those are your second-order effects. Now, you can actually play this out to third- and fourth-order effects, but the further out you get, the more speculative you're being. Most futurists advise going no more than four orders out; I personally think three is a little more manageable.
And what we start to see is that some of the effects show up multiple times. So all paths lead to, say, a good thing or a bad thing. If all paths are leading to a bad thing, no matter what pathway we see, it's going to be bad, and we're really going to want to try to intervene in that. If all paths lead to something good, we can say, excellent, we're on the road,
we're in good shape. How do we keep that going? How do we do it faster? OK, so here's an example that you can't read in detail at all, but it's from Mind Tools. If you have a device, you might even Google "futures wheel Mind Tools." What they have here is an example specific to a particular company.
And unfortunately, I think it's an example many of us have had to deal with of late: the change in the middle is a 20% budget cut, so it's not a positive change. From that 20% budget cut, we get first-order effects like problems with morale, not being able to invest in IT, people can't go to external training, and we can't take on any new staff.
OK, so those are the first-order effects for this one company. And then you start to see things like, OK, well, then it's going to be difficult to increase productivity, it's going to be difficult to expand sales. These are obviously negative. But off the no-external-training effect, it says, well, then we're going to do more internal training, and that actually will be positive in the sense that we'll get skill sharing and on-the-job training.
So again, the specifics of this one don't particularly matter; I just want you to see another example. And you'll see that some threads peter out while others start to really blow up. If we look at the green and the blue, if you were able to read them, you can see some things show up time and time again. So no matter what, a 20% budget cut results in low staff morale. I don't think that's a particularly surprising thing, but it helps us see that this is going to be a major issue.
And if that's going to be a major issue, then maybe it needs major attention. So how do you do this? There's a really straightforward process because, as I said, it's a structured brainstorming technique. You choose a change or a trend for your central focus. Then you identify those first-order consequences, positive and negative.
And I really want to emphasize that you make sure not to think only about the potential negative consequences, because it's really easy for us to doom, you know, to say things are bad. There could be really positive consequences too. I mean, for Wiley, there were some really immediate negative consequences of their action of stopping publishing those journals, the stock price, et cetera.
But even Wiley says there is a silver lining here and a good place that we'll get to, right? So there are some positives. Then do the second order, the third order if you want, the fourth order, but please don't go beyond four. Then you have to look at your map, your futures wheel, and say, OK, synthesizing this, what are the implications?
What are the big takeaways? What are the biggest threats and the biggest opportunities? Then you evaluate the desirability of those things, and then, of course, eventually you would identify actions to shape those implications. The further down that list we get, the harder it's going to be for us here, since we're not sitting with a team from our own company, so you might have to think about what we could generally do as opposed to specifically.
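(For anyone who wants to tinker with the idea after the session, here is a minimal sketch of how a futures wheel could be captured and scanned for consequences that recur across paths. It is purely illustrative: the nested-dictionary structure and the example effect names are assumptions that loosely echo the Mind Tools budget-cut example described above, not the workshop's actual materials.)

from collections import Counter

# Illustrative only: effect names loosely echo the Mind Tools budget-cut
# example described above; this is not the actual diagram.
central_change = "20% budget cut"          # the change or trend at the center
first_order = {
    "low staff morale": {
        "harder to increase productivity": {},
    },
    "no external training": {
        "more internal training": {},      # a positive branch
        "low staff morale": {},            # same effect reached by another path
    },
    "cannot take on new staff": {
        "harder to expand sales": {},
        "low staff morale": {},
    },
}

def count_effects(effects, counts=None):
    """Walk the wheel depth-first and count how often each effect appears
    at any order, so consequences that recur across paths stand out."""
    if counts is None:
        counts = Counter()
    for effect, consequences in effects.items():
        counts[effect] += 1
        count_effects(consequences, counts)
    return counts

print(f"Central change: {central_change}")
for effect, n in count_effects(first_order).most_common():
    flag = "  <- shows up on multiple paths; may need major attention" if n > 1 else ""
    print(f"{effect} (seen {n}x){flag}")

(Run as written, the recurring "low staff morale" effect is the one that gets flagged, which mirrors the point above: whatever shows up on multiple branches probably deserves the most attention.)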
We are now going to turn to the part of the workshop that is actually doing this work, an opportunity for us to do a deep dive. Our first task is to identify potential changes or trends of interest to us. And though we would encourage you to choose trust, transparency, and transformation, the theme of this conference, feel free to brainstorm other recent changes that you think are impactful in this industry.
I think it won't be surprising if somebody puts down the OSTP memo. You might want to choose an aspect of that memo, since it covers open access, open data, metadata, and so on. Or it might be that you want to do just the NIH data plan, the open data plan, because that's a very specific one. It could be that most recent report that came out in Europe that said, hey, we want to go all in on diamond, right?
And what are the implications of that report? This is where it gets a little tricky, because you sat down at your tables without necessarily having anything in common with everyone else. So we're going to do a little bit of brainstorming, and you're going to have a chance to come to something. Also, some of your tables probably aren't quite large enough.
We want at least about four people at a table, and we'd also like you at a table that has a piece of flipchart paper and Post-it notes. So some of you who chose the back pews, we need you to come on up. I'd like you to go ahead and spend some time introducing yourselves to each other, since you're going to be working together, and to do some brainstorming
about what. As you introduce yourself, you could even say what's the biggest thing on your mind as far as an event or a trend that you'd like to spend some time thinking about today. OK, so let's go ahead and spend about seven minutes going through that process, get everyone resituated, and then we'll start to work through the process. OK, I'm starting to get the same question from everyone, which means we should move on to the next thing.
Hopefully, as you did your introductions, you started to get a sense of some of the issues people are facing. You might have said, oh, at our table there are a lot of different perspectives. It's a little different than a company setting, where you'd probably come together around the thing you need to figure out and talk about, as opposed to here, where we're kind of having to agree.
Leah and I decided it was better to have you brainstorm at your tables and at least agree on something than for us to assign you a topic, because for some of these topics you have to know something about them, right? If you haven't read the memo, that's going to be a really tough implications conversation. So hopefully you had that chance to hear trends, and you have to choose one at your table.
I just want to remind you as well: you'll have a good conversation about the topic, but you're also learning the method. There's a meta thing happening here of learning this method so you can apply it in other places. I know some tables have already chosen; they're really, really efficient. Other tables are like, we have so many issues.
At some point you're just going to have to choose, so don't belabor it too long. Once you have chosen, you should have multiple different colored Post-it notes. I would try to do a little bit of, OK, what are the first-order consequences? And just to let you know where we're going, then you're going to put your second-order consequences. Are you impressed with my PowerPoint clip art,
making all these little things? And then you can even do a third. OK, so just going back here, we're going to start by identifying the change in the center. You can write on these sheets of paper; we're not preserving them. And then start to talk about, OK, what would this mean if this is the case?
I'll use the one table that's decided on the trend toward an increase in co-authorship, both co-authoring itself and the number of coauthors. What are the implications of that? You can ask at your table, well, if you're a librarian, what's the implication you see as a librarian? What's the implication you see as a publisher?
And then you can also think topically: what are the implications for business models? What are the implications for integrity? Some of those challenging questions. We're going to have a significant amount of time for you to work on your first-, second-, et cetera, order consequences. I will definitely come back up here, though, interrupt you, and make sure you've gone on to your second-order consequences at some point.
So use those Post-it notes. We've got a few extras, and you can horse-trade with other tables if they need a different color that you have, et cetera. OK, we're good. Flag me down if you have questions; I'll be walking around. OK, let's do just a little bit of a check-in.
Hopefully you've got some first-order consequences and you're ready to move on to maybe some second-order or even some third-order ones. But I want to repeat some of the tools for figuring out the ordering of consequences. Sometimes it helps if we have a little bit of phrasing we can use to organize our thinking. You've got this change at the center, and because of this change, this could happen.
Because of this change, this could happen: that's the first order. And then for the thing you wrote down, the "this could happen," you say, if that happened, what could happen? So we're working in the hypothetical after the thing that is our center. If this happened, that would happen.
And then if that happened, what else could happen? And if that happened, what would happen? So either an if-then statement sometimes works for people, or, as I find easier, because of x, then y; because of y, maybe z; because of z, maybe... et cetera. A way of pushing this out is to use that structure, because I had a few people say, my head has now lost track of the first- and second-order effects.
So let's go ahead and work on this for about another 10 minutes. I'm hearing great conversations. Make sure you're getting stuff on the Post-it notes. All right. Let's keep going. OK, are we ready to stand and deliver? Do you have somebody whose hand is going to go up? Now, when I ask who is speaking for your table, please, let's see those hands, that you've got somebody at your table.
There are two tables that are currently avoiding eye contact. All right. Because this is being recorded, and also because the acoustics are not great in the room, I'm going to ask that you come up to the mic, actually come all the way up here, and speak outward rather than as if you're coming to ask me a question. Let's do our best to pretend this is a good room for doing this workshop in.
So who is going to not only speak for their table but also break the ice? All right, we've got someone in the back here. Yay! It's just a minute; you don't have to solve the whole world. So, our table talked about AI, unsurprisingly, and there were a lot of positive and potentially negative implications that we could think of.
But when we got down to it, it seemed like the main things that were going to result from it were potentially a loss of human roles that deal with more basic or simple functionality, but then a need for new human roles to provide oversight and higher-order thinking, and the emotional nuance that hopefully only humans can bring to the work, and to then review what the AI had done and provide that level of oversight.
So in terms of actions that would be needed, there's potentially going to be a need to retrain, reskill, whatever word you want to use, to help people move out of simpler roles and into more complex roles that deal with AI and with the products and outputs of AI.
So the trend we identified was open as the destination. The implications were the equity and inequity trade-offs that came out of that, so an increase in equity on some sides. But we did have a lot of undesirable implications. For actions to shape the implications, we didn't get very far on this, but somehow to move towards open as a means to an end rather than as the destination itself, moving towards more equitable production, more community production and co-production of scholarly work.
And yeah. You can see, you can actually line up, come on. Hi, so we talked about the increase in content submitted to journals. The implications were twofold. A positive would be an increase in the diversity of content, since more is coming in,
but there are stresses from the lack of peer reviewers and things like that, and there can be stresses on systems if journals and associations are not prepared for this increase in submissions. The positive consequences are the ones that are desirable, so that would be the increase in the diversity of content and authors from a variety of backgrounds. And so the actions would be to improve recruitment and training of peer reviewers and incentives for reviewing.
So I have the first duplicate topic here, because we also talked about AI. I wish I could have brought the sheet that we created with me, but I'll try to remember what we talked about. We found that there were some clear positives and negatives that could come out of it as we tried to narrow down the scope of the discussion.
One of the big positives is an equity dimension, particularly with regard to expression in English, but also as a researcher tool to help with bibliographic searches, keeping up with research, and everything like that. And there are negatives. We were worried about outsourcing judgment to AI, and that this could have a knock-on consequence of an erosion of the evaluation of expertise, which has a further knock-on consequence of eroding trust in the scientific project as a whole.
We were also worried about fear, and about overestimating or underestimating the extent and scope of the powers of AI technology as a whole. As for remedies, or gestures, or whatever it is that we could do about all of this, we settled on things like having clear policies from the perspective of publishers and also other stakeholders, but entering into it with a spirit of curiosity about what it is researchers actually need, rather than just handing down diktats from on high; that we do value and would like transparency and disclosure about the use of AI; and, feeding from this curiosity angle, more education about what AI is and can and cannot do, to avoid fearmongering, to avoid outsourcing judgment, and really to think about what it does to people in terms of morale and in terms of society, and how it affects people positively and negatively.
Thank you. I think we've unearthed our own trend here, because we also focused on AI. But we were specifically looking at the capacity of large language models to generate text that could plausibly be human-written for human readers. Many of the issues that we've heard about so far, the broader concerns about trust,
came up. But some of the things we were specifically interested in are whether we maybe need new definitions of author, and whether fields end up diverging as different fields say, you know, authorship is the production of new ideas or the supervision of an experiment, but the actual writing, that's something we're fine to delegate to a machine learning model.
So there's that question of whether this is something that pushes fields apart, and how we negotiate differences there, create industry standards, and also create appropriate guardrails. It was quite a wide-ranging discussion. And on the desirability of the implications, the main thing we were most concerned about is that the speed at which change is happening means that none of the implications are desirable, even though in themselves they might be;
just the rapidity with which we're having to adapt is really difficult. So I think that was a big one. In terms of immediate steps, beyond taking precautionary measures, creating guardrails related to submission and updating them regularly, or trying to think about new techniques for peer review and how we use different kinds of software to spot or flag things, it is really to try to have informed conversations about large language models and how they work, what they're doing, and what their possibilities and potentials are.
Doing that requires having sensible conversations with experts, and that requires having a better understanding of what field-specific terms mean, because sometimes a lot of nuance can get lost in interdisciplinary communication. So that was something we were thinking would be a way forward. Thank you.
I brought the biggest snow card there is. So we talked about the increase in co-authorship. And I'm just going to take credit for my entire table; it was just me, these are all my ideas, and no one should question it. So we had positives and negatives, which I was really happy about;
I feel like that brought it a little bit more into reality. So if there are more co-authors over time, as a negative, it could be a devaluation of the role of an author. There's a common lack of understanding of what it means to have authorship. Let's see, we have an inflated candidate pool for tenured positions at universities, which was a really fascinating part of our conversation.
When it gets into equity, we have an inequity and equity split. As far as inequity goes, you have the possibility of senior leaders taking credit over junior staff, and what those junior staff need to do to get their names onto the author list. If you're author number ten, it should probably be questioned why that person is being included in the list.
For equity, as a positive, we had that it could mean more recognition for groups and identities and perspectives that may not historically have been recognized as co-authors. With those co-authors being listed, we have diversified backgrounds and expertise contributing to the article. And as far as actions go, there could be a means to identify what each person contributed to a journal article.
So say we're talking about someone who works on the data for an article. If there's a way to say this person dealt with just the data, does that help the person? Possibly. Does that devalue them if before they were just listed as one of the authors? There are a lot of ways that can go, so it was a great, good conversation. Thanks. The rise in co-authorship, with LLMs.
So hey, guys. The trend that we picked was the increasingly challenging peer review process. There were a number of implications we identified; the majority of them were negative, but there were a few positives, so I'll run you through a few of them. Reviewer fatigue was one, slowing speed to publication was another, and an increase in reviewer fraud.
Those were obviously all negative implications. A positive one was seeing more reviewer solutions out there in the market, some of which may be a better choice for some aspects of finding reviewers, because they can do all the boring work at scale and leave humans to do the bit that they're good at. So that was a really nice, positive one. We also talked about second-order effects, things like entrenching biases in peer review by relying on the pool of reviewers who, you know, respond to you.
The flip side of that means that people who want to review but aren't in your network aren't getting that opportunity to progress, and obviously those were both negative implications. We also touched on things like reduced quality of research output if the peer review process is compromised, and a general lack of trust in published research and in peer review overall.
So yeah, like I say, what we came out with were overwhelmingly negative implications, but there were a few positives. We talked a little bit about solutions; we didn't have loads of time to dive into this. One thing we talked about was some aspect of scalable recognition. We talked about compensation; I think we decided that would be great, but it wasn't scalable. So some sort of scalable recognition for peer review that would be valuable for researchers in terms of their career, to incentivize them.
We also talked about support for editors to help them find and connect with pools of reviewers who weren't in their existing network, just to help them spread the load. Thank you. OK, my group talked about degraded or decreased trust in scholarly research.
It's a big problem, and it was a really wide-ranging discussion that's difficult to do justice to. Obviously there were some negative consequences we could immediately tie to that. Lack of trust can translate to decreased engagement and lack of demand for content: usage goes down, revenues go down, librarians cancel subscriptions. And then there's also increased pressure on librarians to try to educate their students and the other folks at their institutions about
the research process. For smaller publishers, it makes that threat of consolidation even more imminent, because they may not have the resources to deal with this. Those are clearly all negative so far, but we did talk about opportunities that this brings, for example, opportunities for collaboration in the publishing community.
How can we work together to address this? And by the way, there is a standard for communicating things like retractions, and if anyone wants to know about it, I can talk to you about it. But we also talked about other ways we can address retractions or fraudulent research. Could we have better processes to deal with that? There are also some opportunities here to perhaps educate more people on critical thinking.
I'm thinking of something Amy Brand said earlier today; I think it was Amy Brand who said humans are really gullible, and just how can we make critical thinking a more central part of our curriculum at universities. Team, did I leave anything out? OK, great. Thank you.
So I think we got everyone. And I hope you have a sense of just how deep you were able to go with this structured approach. Especially for some of these really thorny topics, it moves us a little bit beyond just opinion and gives us a way to explore different aspects, because otherwise we end up in conversations of "I like it" and "I don't," and we're all over the place. It at least gives us a tool.
This is what I would put forward, but I'm also fine with you saying, no, I don't like this tool. So as our final piece, we want to take a few minutes to reflect on the method itself that you learned here today and see if you have any questions or comments about the method before we wrap up. Any reflections?
If you come up to the mic, that's great. Otherwise I'll repeat it, if I can, so that other people can hear it. Yeah, great. Thank you for being willing to come up. This is just a comment: I actually found it really helpful to split the thinking into first, second, and third order, because it helped me, and I think the group, categorize your thoughts.
You get them all out, and then you have to think about the order that they came in, which made it a lot easier, once we had it on paper, to then think about the implications and the solutions, right? Rather than just a big mess of stuff that might happen, you're automatically having to categorize it. So I can see myself using this again.
Thank you. I think it was really useful. Thank you. Oh, yeah. I think what I'd add is that the conclusions aren't yet the actions;
I think there needs to be something that follows up from this to say, OK, here's a key takeaway, this is what we actually do. Mhm, yeah. I mean, that's definitely the challenge of doing it in this kind of context. It's a little bit easier in a business context, a company context, a library context, where you'd say, OK, so then what do we, as an organization, take action on?
It's a little harder in this sort of setting especially. I do think there are things you could do: if it was a half day and we chose a topic and had people read ahead of time, you could come out of a more structured conversation like this with some sort of white paper of recommended industry actions from, call yourselves a blue-ribbon panel or whatever, that kind of thing.
But you definitely have to have done a little bit more prep work on getting people prepared that this is going to be the topic, and make sure you have the right people in the room to really generate that. In your company, it might even be challenging to figure out who the right people in the room are, but you at least have a context for figuring out actions.
Yeah, so I am very... Oh, go ahead. Yeah, just a follow-up on that. I'm sorry I missed the first part. No worries. So part of the question is, in identifying trends, is there any good guidance or best practice on how to identify one that requires multiple stakeholders to engage on it within the organization?
And I might provide some follow-up questions. That's the first-order question. So there's definitely work in the futures thinking world. There are a number of associations of futurists, and there's also the Institute for the Future, IFTF, in Palo Alto, which does a lot of trainings for companies too. There's much more there around, what's your current organizational challenge?
So there are different models there. And as Leah mentioned, we have written up some of these; we always do some sort of write-up. When we did the scenario-based futures thinking activity, we did a call-out to the community and then we did a voting process on which of these scenarios people wanted to talk about because they felt most salient.
So there are different ways you can get to what matters for us. The second-order question, just because I imagine these conversations can really grow, is whether there is a cap on the number of people that you want, such that the conversation still works. Right? Yeah.
So, I mean, I think it's more a matter of thinking that if you have a large group, then you're going to break it up. Let's say there are 50 people at your company who need to be part of this conversation. You're not going to have 50 people work on one flip chart; you're going to have them in groups of five to six, which is generally considered the good number, where they work on their own.
And then you have to do a process of consolidating those, so you might need an iterative process in order to get all of that input. But one thing with brainstorming: brainstorming is a process of trying to get as many ideas out as possible. I think part of what your question might imply is, OK, how do we get from the many ideas to prioritizing? That might be a different process than the futures wheel, and you might even need a different structured technique for doing it.
And the strategic planning literature may have some of that to offer us as well, for when you have a million ideas, then how do you narrow them down? Typically, what we've done in the past is evaluate them for desirability, feasibility, and one other -ability that I can't remember off the top of my head. No, no. Yeah, so anyway, the last thing standing between you and the poster session
and then lunch is this last slide. If you attend the Charleston Conference, we typically run the same session, so you could come pre-prepared with your trend or your event that's happened. It's also really fun for us, because this tends to be mostly publishers with some librarians mixed in, while at Charleston we have mostly librarians with some publishers mixed in.
So it's always interesting to run these two sessions and see. Leah and I also want to invite you: if you have another event that you're involved in, we're very happy to come and facilitate any number of the approaches that we've done over the years, if that's of interest to you. So we'll stick around. But otherwise, thank you so much for your really engaged participation.
You made the photographer so happy with all these action shots; I don't know if you saw how much time he spent in this room, since pictures of people doing something are much more interesting. If you would leave the things on the table, Leah and I do want to go around and take a photo of them as part of our understanding of how this exercise worked and how we can improve it for the future.
So thank you so much for joining us, and we'll see you at the rest of the conference. We'd be happy to. We'll probably put some of them in either the Charleston blog or Against the Grain. But if there's something you...