Name:
Measuring Success and Ensuring Progress: Accountability and Metrics in DEIA Programs
Description:
Measuring Success and Ensuring Progress: Accountability and Metrics in DEIA Programs
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/c97d7c1e-9cfd-4152-8e1c-08c56ae9fea4/videoscrubberimages/Scrubber_1.jpg
Duration:
T01H01M02S
Embed URL:
https://stream.cadmore.media/player/c97d7c1e-9cfd-4152-8e1c-08c56ae9fea4
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/c97d7c1e-9cfd-4152-8e1c-08c56ae9fea4/SSP2025 5-30 1330 - Session 5A.mp4?sv=2019-02-02&sr=c&sig=H6pWZgkS6iIrsBRGGB5Z4eXVV%2FU%2BZPRr8f%2FtLg2QCsE%3D&st=2025-12-05T20%3A52%3A47Z&se=2025-12-05T22%3A57%3A47Z&sp=r
Upload Date:
2025-08-15T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
Thank you all so much for joining us today. We have a nice sized group here. We so appreciate you all sticking it out until the end to be here for the last few sessions of the day. This session is Measuring Success and Ensuring Progress: Accountability and Metrics in DEIA Programs. My name is Steph Pollack. My pronouns are she/her.
I'm the equity, diversity and inclusion lead for the journals team at the American Psychological Association. I am joined today by my two wonderful magical colleagues over here, Camille Lemieux and Alyssa Clark. Alyssa is an inclusive publishing and psychological safety consultant, and Camille is the manager for global DEI at Springer Nature. So we're just going to give a little bit of context for our conversation today.
We are at a critical point in our journey towards fostering a more inclusive, representative, and authentic industry. There have been a lot of conversations about the importance of DEI over the last several years, but we've seen comparatively fewer conversations about how to measure and ensure progress against the many commitments that we've made. For those of you who were at the awards lunch yesterday, Heather Staines mentioned that it's been just a short five years since George Floyd's murder, and especially in the US, we're now seeing systemic efforts to dismantle DEIA values and programs.
But as Demita Snow, our esteemed President-elect, has said, this work was happening long before we called it DEIA, and it will continue as long as there is someone there to do it. So how do we ensure, excuse me, that the work continues to have an impact? How do we decide when and where we allocate what are often really finite resources to ensure that we're continuing to effect change? So these are the conversations that we're going to have today.
So just a quick reminder about SSP's code of conduct. We're going to assume that if you're here today, it's because you want to foster a more equitable, inclusive, accessible, representative industry. The code of conduct affirms that our sessions should foster open dialogue and expression of ideas free from harassment, discrimination, or hostility. So we just ask that you join our discussion today assuming positive intent of your peers, with an eye towards, of course, respect, community, and solutions.
So we're going to start off with a fun little audience poll just to shake off the cobwebs from lunch a little bit and to get a sense as to where you all are in your journeys as we have these conversations today. So we're going to do the poll first, and then we're going to turn to a dialogue style panel with the three of us talking about what each of our organizations are doing to measure and assess progress on DEI.
So we're using Slido, fancy Slido. There's a QR code here. If you're joining manually, you can go to slido.com. The join code is 70038. I'll just pause for a couple seconds so you all can join. Great, is everyone feeling good? So the first question we have for you is: where is your organization on its DEIA journey?
So some of you might be familiar with the Harvard Business Review maturity model for measuring progress on DEIA. You can be anywhere from aware, where your organization is maybe aware of it but hasn't really taken any meaningful action, to compliant, tactical, and integrated. These are the midpoints, places where you're thinking about it, but it's not really systemic or sustainable. Maybe the work is a little bit piecemeal or heavily done by volunteers, as opposed to paid positions.
And then the culminating piece there is being sustainable, right. So it's integrated, it's consistent throughout the organization, it's aligned with the organization's values, and it's everywhere internally and externally. So we're getting some live results here. And it looks like, OK, so about 40%, just a little more, OK.
We're still getting some live responses, but it looks like most of you have your programs integrated into your organization, which is great to see. You're almost at that culminating place, where perhaps your organization has been doing this work for a while, you've made some progress, and you're close. You're so close to having it sort of sustainably implemented. So that's great to see.
It sounds like most of us are a little bit farther along on the DEIA journey. OK, our next question, with that in mind, knowing most of you are a little bit further along: does your organization actually have goals that it's setting related to DEIA? Some of us don't know, which is telling. I think that is an answer in and of itself.
Whether your organization does or if you're not aware, then maybe there's some progress to be made in communicating that work. But a lot of you have, which is great. All right. Third question. The focus of our conversation today.
Do you actually use metrics and data to measure progress on DEIA? I love to see that a lot of people are saying yes. And a few responses always interest me; maybe there's one or two people saying no, and we don't plan to. I'm curious, I would like to learn a little bit more about why that might be, but it sounds like most of us are here because we are doing this work, and we want to build community in ways to do it better, which is great, I love that.
All right. Thank you for participating. So we're going to move now into our panel discussion. I'm going to sit down next to my colleagues and we'll have a little conversation. We have four questions to map out our conversation. And then hopefully we'll have plenty of time for Q&A at the end. So with that, Alyssa or Camille, whoever wants to start, how are you deciding what to measure at your organization.
Sure, OK. So hi everyone, I'm Alyssa. I am going to be talking a lot about work that I did as the head of DEI at Cell Press over the past year, specifically in two categories. We're going to frame this today with, first of all, psychological safety work, and then second of all, a really overarching project that we did under the broader Elsevier DEIA effort, around editor, author, and reviewer demographics and the collection of that kind of data.
So what we're looking to measure fundamentally when we're talking about any demographic data gathering or really any type of measurement in the area of demographics of the people we're serving, we're trying to answer questions about who is part of the scientific conversation that we're facilitating as publishers. And most of the time, I would say the unspoken ideology behind that question is we want to advance diversity in science.
Or more plainly, we want to be sure we're not making decisions that leave people out of a conversation that we believe would benefit from the inclusion of a wide range of voices. We want to be especially sure that we're not leaving people out who have in past conversations, often been deliberately excluded. So we want to make sure we get information in pretty much every demographic category where author behavior or characteristics might differ in meaningful ways.
So especially where it might affect the research itself: whose stories get told, what problems get time and money, who gets to set priorities in a given field or discipline. So those are the things that come up when we start to say, OK, what points of data do we want to gather? What do we want to measure? And creating those categories in and of itself can get a little sticky depending on who your audience is.
So you also have to decide: is this an American audience? Is this an international audience? What categories are we going to define? Because those categories can be very different. If you have an international audience, for example, we can't just use traditional American demographic categories; those aren't going to make any sense to our authors. So we want to make sure that we're measuring data in a way that makes sense to the people who are providing the data.
And that might give us useful information about what kind of behaviors those authors might be carrying out. Thank you, Alyssa. So I will be focusing more on the internal employee experience, since that is my role at Springer Nature. So in terms of deciding what to measure for employee success and inclusion and equity and diversity, we have a very globally dispersed company.
So as Alyssa mentioned, we do collect some demographic data from our employees across the globe, which is about 10,000 people. And part of that is to help us identify different populations to then focus on in terms of our programs. So when we're looking at how we decide what to measure, we always start with: OK, what are we going to do with this data?
So when it comes to the demographic piece, we want to make sure that we can identify people based on various demographic and diversity dimensions, so that we can, for example, lean on our employee networks, who are cultivating communities with specific marginalized groups, and make sure that in our DEIA trainings, in our policy making and revising, and in our day-to-day processes, we know whether these specific groups are experiencing more or less inclusion than others.
We know they're reporting more or less access to promotional opportunities, things like that. Outside of the demographic piece, we have a broader organizational strategy for our DEIA work, and that really guides a lot of what we measure. So I work with all of our program managers to say, OK, what is success for this program. Whether it's a mentorship program or an internship program or various trainings within DEIA, we want to have really clear and achievable goals for those initiatives.
So for a training program, for example, I think we know from research literature that training can lead to behavior change, but it's more common that training is going to lead to knowledge change. So let's make sure we're measuring that knowledge change first and foremost. So that helps guide some of the metrics setting. So I'll talk more about examples in a little bit.
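As an illustration of measuring knowledge change rather than satisfaction, here is a minimal sketch of a pre/post comparison; the participant IDs and quiz scores are hypothetical, and a real evaluation would use whatever assessment the training team designs.

```python
# Hedged sketch: measuring knowledge change (not just satisfaction) around a
# training, using hypothetical pre- and post-training quiz scores for the
# same participants. A paired mean difference is the simplest summary.
from statistics import mean

pre_scores  = {"p01": 4, "p02": 6, "p03": 5, "p04": 7}   # out of 10, before
post_scores = {"p01": 7, "p02": 8, "p03": 6, "p04": 7}   # same quiz, after

def mean_knowledge_gain(pre: dict, post: dict) -> float:
    """Average per-person change in quiz score from before to after training."""
    return mean(post[p] - pre[p] for p in pre)

print(f"Average knowledge gain: {mean_knowledge_gain(pre_scores, post_scores):+.1f} points")
```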
But that's a bit of the framework we use. Thank you, Camille. Is this working? Working now, OK, thank you. I love that piece about knowledge change versus behavior change. I think that's such a key piece that is often missed when we're talking about measuring impact and success.
So I'm so glad you brought that up. For APA, for the two programs that I'm going to be talking about today, the first is our editorial fellowship program. So this is a program for early career psychologists who have demonstrated a commitment to, or who work primarily with, underserved populations. The goal there is to really broaden participation and pathways to leadership on the journal editorship side of things.
So whereas Camille was talking about staff internally, I'm talking a little bit more about representation and experiences for folks in the research pathway. So that's one program that we'll be talking about. The other one that I'm going to talk about is our EDI toolkit for journal editors. I should have said before: at APA, our mantra is that we're leading with equity, so our acronym is EDI, equity, diversity, and inclusion.
Of course, it means the same thing that we're talking about today. So the second resource is our EDI toolkit for journal editors that was published first in 2021. And just with great timing, in January of this year, we published a second edition of that toolkit with a lot of updates and built out resources for journal editors to use as they are thinking about how they want to create a more inclusive and representative science.
So the toolkit, again, is really thinking about: what standards are you asking authors to adhere to when they submit to the journal? How are you conducting peer review? What considerations do you have for the ways that you are asking authors to conduct science to begin with? So again, it's looking at that research pathway. So those are the two programs that I'm going to be talking about.
And for both, I will say, how we are deciding what to measure is to start with a vision. What do we want our program, our community, our field of research to look like in five years, and 10 years, and 20 years? And then how do we get there? If you're mapping out the ideal of what you want it to look like, then you can assess what you need to measure in order to get to that point.
And part of that conversation is thinking about what defines success for these programs. So for the Fellows, we're looking at everything from rate of participation. APA publishes 90 journals; how many of those journals are offering editorial fellowships? And then how many folks have applied to serve in those editorial fellowship positions? We're also looking at that more qualitative data.
What is their experience once they're in the program? Do they feel supported by their editors? Obviously, Alyssa mentioned psychological safety; do they feel psychologically safe in these fellowship roles? Are they learning new skills? Are they building their networks? These are all things that we can use to measure the success of a program like that.
And then we're also looking at long-term impact. The fellowships are typically 12 months or one year long. The hope is that you don't just serve as a fellow and then go off into the abyss, never to be seen again, right. The whole goal here is to build community. And so are folks who are participating in this fellowship program then going on to serve in other leadership roles at other journals, whether or not they're published by APA? The goal is that we are expanding the pathway to leadership for folks throughout our community of psychology journals.
And then for the toolkit, the things that we're measuring are things like, are the editors even aware of the toolkit. We publish this, we try to push it out. But are people actually even aware that this is a resource available to them. And when they are aware, is it useful to them. Or is this something that they're like, great, another set of guidelines that you're expecting me to follow. And then the next step for us there is to assess how many journals are actually implementing the different initiatives that are listed in the toolkit.
So I should have said before, the toolkit is a huge resource. And our newest version, our second edition, has about 45 actions listed out that editors can take to effect change as it relates to EDI in their journals. And so one thing that we're tracking is how many of the journals that we publish have now started to adopt those initiatives, those actions that are recommended.
So that's what we're thinking about when we are deciding what to measure. So the next question for us, now that we know what we're measuring, how are we doing that right. How are we measuring. How are we collecting. How are we storing data. Because I think especially as it relates to demographic information, that's a huge conversation around, protecting PII and ensuring that people's very, very sensitive personal information is protected.
So let's go to that next question. Whoever wants to respond first, please feel free. No pressure. I'm happy to go first, but I hate to go first every time. So I'm going to talk a little bit about the kind of thorny issues of storing the demographic data first, and then we'll move on to psych safety, because that's a lot easier.
So for collecting data on editor and author demographics, that's both pretty reliable and pretty limited, because we are fundamentally looking at self-reported data. We know that's the most valid: you have to ask, how do you describe yourself? What do you want to say about yourself in the systems that we're giving you to do so?
And so people are justifiably apprehensive about how some of that data is going to be used. And of course, we have to be sure that the storage is GDPR compliant and access restricted and as anonymized as it can be. So when we see that data, we don't have personally identifiable information associated with it. But somewhere in the system that does exist, and we just have to make sure that's separated from where most people can see it.
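As one illustration of that separation, here is a minimal Python sketch, with all names and structures hypothetical: self-reported answers are stored under a random token, and the token-to-person mapping lives in a separate, access-restricted store that can be erased to honor GDPR requests.

```python
# Hypothetical sketch: keep self-reported demographics separate from PII.
# Analysts only ever see the pseudonymous responses table; the token->person
# mapping lives in a separate, access-restricted store.
import secrets

identity_vault = {}      # restricted: token -> submitting author's email
responses = []           # what analysts see: token + self-reported answers

def record_response(email: str, answers: dict) -> None:
    """Store a survey response under a random token, keeping PII apart."""
    token = secrets.token_hex(16)
    identity_vault[token] = email          # locked down, erasable on request
    responses.append({"token": token, **answers})

def erase_person(email: str) -> None:
    """Honor a GDPR erasure request by dropping the link to the person."""
    for token, stored_email in list(identity_vault.items()):
        if stored_email == email:
            del identity_vault[token]

record_response("author@example.org",
                {"region": "Global South", "gender": "prefer not to say"})
```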
But we will often see, when we're gathering this sort of data, really large numbers of 'prefer not to say' or 'did not answer.' So we're limited in what we can assume about people who just choose not to answer our questions, and we kind of have to throw out some of that data. So a lot of the data collecting is a process of communicating with our author groups to say, this is transparently what we're doing with the data.
This is why we want you to give it to us. This is why you should care. And it's just a trust building process. Anytime you're collecting author data on those sensitive topics to say, I promise we'll be using it this way. And this is how you can look into what we're doing to encourage them to give you that self-reported data, because that's going to be really the basis of a lot of your decisions.
And of course, for those programs, the core metric for how we're measuring is demographic change. And those are correlational data: we ran this program, and we don't know for certain whether it increased the proportion of authors from a specific demographic. We have good reasons to believe that it might have. But there's a kind of complex web of conclusions we can draw based on those things and how the demographics change over time.
Frankly, it's a lot easier to talk about psychological safety and the measuring, collecting, and storing of data there, just because it's such a widely validated measure. There are pretty clear definitions of what psychological safety is, and of the internal consistency of the kind of validated measure we can use to judge how psychologically safe a particular team is. And then there are basically seven questions that we ask in the psychological safety index.
And those individual scores that a team has on those questions are just really useful data, longitudinally speaking. So we can ask them questions. We can do workshops. We can ask them questions again in six months. We can ask them the questions again in three years and get a sense for where the team is going, how it feels, where the organization is going, how everyone is feeling, and even on a very granular level, what are the correlations between the answers to certain questions and certain demographics that we have.
So those are just really useful measurements, because they're so widely used across a huge range of industries, not just in publishing, from companies the size of Google all the way down to small consulting teams; a lot of people measure psychological safety with this particular index. So it's a very reliable measure that we know how to use, and we're lucky to have that. We don't have that for most of the measurements we use.
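As a hedged sketch of how a team-level score from a seven-item index like the one described above can be computed and tracked across survey waves: this assumes a 1-5 Likert scale and that some negatively worded items are reverse-scored; the item indices and responses below are made up, and the exact scoring follows whichever validated instrument an organization actually uses.

```python
# Hedged sketch: scoring a seven-item psychological safety survey per team,
# then tracking the team mean longitudinally across survey waves.
from statistics import mean

REVERSE_SCORED = {0, 2, 4}   # hypothetical indices of negatively worded items
SCALE_MAX = 5

def person_score(answers: list[int]) -> float:
    """Average a respondent's seven item scores, flipping reversed items."""
    adjusted = [(SCALE_MAX + 1 - a) if i in REVERSE_SCORED else a
                for i, a in enumerate(answers)]
    return mean(adjusted)

def team_score(team_responses: list[list[int]]) -> float:
    """Team-level psychological safety = mean of individual scores."""
    return mean(person_score(r) for r in team_responses)

# Track the same team across waves (e.g. before and after workshops).
waves = {
    "2024-01": [[4, 2, 5, 4, 2, 4, 5], [3, 3, 4, 4, 3, 3, 4]],
    "2024-07": [[5, 1, 5, 5, 2, 5, 5], [4, 2, 5, 4, 2, 4, 5]],
}
for wave, responses in waves.items():
    print(wave, round(team_score(responses), 2))
```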
We're just doing our best to figure out whether what we're measuring is a good indicator of what we want to see. But the more we do this sort of work, the more we'll get those broadly reliable measures and numbers over time. Yeah, I think I'll be quick on this question compared to the others, but in terms of how we're measuring, collecting, and storing employee data, we follow GDPR since we're a Europe-based company, so that makes things pretty straightforward.
I think in terms of how employees are experiencing their work environment, though, I mentioned we have an annual DEI survey and an annual employee engagement survey that we use. For the DEI survey, we also ask voluntary questions about the diversity dimensions that I mentioned earlier, in addition to measures of inclusion, psychological safety, and equity.
So we can then analyze that information by different demographic groups, different business areas, different locations, et cetera. I think in general, most of our employee feedback is coming through surveys because they're just quick to implement and quick for people to take. But the point Alyssa alluded to, moving from just requesting information from people to communicating why it's beneficial for them to share that information, I think is crucial, and something we're continuing to iterate on in terms of how we're communicating that.
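A minimal sketch of that kind of cut by demographic group and business area, assuming the survey results land in a table with hypothetical columns and a composite inclusion score; small groups are suppressed so individuals can't be re-identified (real thresholds are often five or more responses).

```python
# Hedged sketch: cutting inclusion survey scores by demographic group and
# business area. Column names and scores are hypothetical illustrations.
import pandas as pd

df = pd.DataFrame({
    "business_area": ["Journals", "Journals", "Journals", "Books", "Books", "Books"],
    "gender":        ["woman", "woman", "man", "man", "man", "non-binary"],
    "inclusion":     [4.2, 3.8, 4.5, 4.0, 3.6, 2.8],   # 1-5 composite score
})

MIN_CELL = 2   # toy privacy threshold; production surveys usually use 5+

summary = (df.groupby(["business_area", "gender"])["inclusion"]
             .agg(["mean", "count"])
             .reset_index())
summary.loc[summary["count"] < MIN_CELL, "mean"] = None   # suppress small cells
print(summary)
```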
But I think it's really important for us to recognize that getting this information about people's experiences in the workplace is a privilege. I tell people all the time: please share how you're doing, especially if you're experiencing something negative like workplace bullying or harassment or whatever else, so that we know and we can address it.
So I think having survey data really helps us to direct our attention to the areas of need that people in only one location, or in one business area, may be experiencing. It's just helpful in that way. I'm happy to talk more about that if it's of interest later on. Thanks, Camille. So we also use survey data.
So clearly there's a common thread here. Yeah, so for the Fellows, we conduct what we call a midpoint survey each year, when the Fellows are halfway through their terms. They've had enough time to get used to the program and get a sense of what the work is, but they still have that opportunity where it's fresh in their minds. So we do a midpoint survey on the different parameters of the things that I mentioned before.
So, finding ways to measure psych safety: do they feel psychologically safe? Do they feel supported? Do they feel like they can voice opinions or express ideas to the editors they're serving under? Their overall satisfaction with the program, their willingness to recommend the program to others, all of those parameters that we use to assess: did they actually find value in their time in this work?
Do they feel well compensated? Is the time commitment reasonable? All of that. And then we also collect qualitative feedback from our editors. Our editors are the ones who, while they don't conduct interviews, are selecting the Fellows and mentoring them. So the editors are the touch point, the through point, for this program.
And so we meet at least annually with our editors at a council of editors governance meeting. Typically virtually. And usually that's an opportunity for them to express this is what's going well. This is what is perhaps not going so well. Here are the questions I have about the program, and that sort of qualitative, almost informal process for them to provide feedback is really valuable because you're getting that real time perspective.
And then we do use demographic data. At this point, we're not using demographic data to set any sort of goals or parameters; it's really just to understand who is represented in this group of Fellows. And then for the toolkit, again, more surveys. So we do an annual survey, again to our editors, asking them, as a baseline: are you even aware of this resource? And if you are, how useful is it, or does it seem, to you?
And part of that survey also includes asking them piece by piece. So again, going through each of those 45 recommended actions that we have in the toolkit. Do you plan to implement this. Have you already implemented it. Were you not aware of it or you are aware of it, but you're not planning to implement. So it's kind of a long survey.
It's a little bit laborious for our editors, but it is just a really clean way for us to understand, across the whole view of our program, which areas are of interest to our editors and which areas maybe need some more education or training for editors to find that piece useful. A lot of this is really manual and really low tech.
We have a master tracking sheet to see which journals have adopted which recommendations in that toolkit, and then we calculate the percent change in uptake each quarter. We then check that against things like our submission guidelines tab. So, just as an example, if a journal says that they are offering CRediT, if they are using the Contributor Roles Taxonomy to more transparently identify who is responsible for what in the journal, that is something that we're tracking as an EDI initiative, because it fosters more representation and a broader perspective of who can be involved as an author. That goes in the tracking sheet.
And we also make sure that that's showing up on our submission guidelines tab. So those are the ways that we're hopefully identifying how and whether the toolkit is useful, but also measuring in real time the effects that we hope that toolkit is having, which is to actually change how the science and the research is conducted. So that's my response to that second question.
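A minimal sketch of what that low-tech tracking sheet and quarterly percent-change calculation might look like; the journal names, recommendation names, and counts here are hypothetical, not APA's actual figures.

```python
# Hedged sketch: count how many journals have adopted each toolkit
# recommendation and the quarter-over-quarter change in uptake.
TOTAL_JOURNALS = 90

# journal -> set of adopted recommendations, snapshotted each quarter
q1 = {"Journal A": {"CRediT", "positionality statements"},
      "Journal B": {"CRediT"}}
q2 = {"Journal A": {"CRediT", "positionality statements"},
      "Journal B": {"CRediT", "positionality statements"},
      "Journal C": {"CRediT"}}

def uptake(snapshot: dict, recommendation: str) -> float:
    """Percent of all published journals that have adopted a recommendation."""
    adopters = sum(recommendation in recs for recs in snapshot.values())
    return 100 * adopters / TOTAL_JOURNALS

for rec in ("CRediT", "positionality statements"):
    change = uptake(q2, rec) - uptake(q1, rec)
    print(f"{rec}: {uptake(q2, rec):.1f}% (+{change:.1f} pts this quarter)")
```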
So moving on. How are you all using the data that you have to inform decisions about your work, whether it is where to allocate resources or time? Yeah, just how are you using this data once you have it? It's fine, it's fine. So psychological safety is one of those things that you have to constantly work on, because it's fundamentally a social measure.
It's not an individual measure; it's a measure of how a given team is interacting with itself. So every time that team changes, you introduce new variables. Every time someone gets a promotion, every time someone leaves, all those things are constantly in flux. So we're really just using those measures on a consistent, longitudinal basis to tell you how a given team is doing and whether there's something that we can flag that we need to fix. Because we know, based on the fact that it's a validated measure, that if we don't have a pretty high level of psychological safety, we're not going to see the results that we're looking for in things like innovation, employee retention, and creativity in general.
There are a lot of things that are influenced by that. So that's a consistent built in tracking point that will flag for you when something's going wrong, or what teams you can probably rely on to be a little more self-sufficient. As far as the demographics for our community, our editors, our authors, our reviewers, how we use those can be very specific and very complex.
The easiest way I think that I can say that we use those is to figure out what is not working. So if we are spending a lot of time and resources on a particular initiative to increase a certain percentage of authors in a certain population, and we don't see those numbers change, then we need to stop doing that, because that's a waste of time and energy. So a lot of the kind of low hanging fruit is changing what we're doing or stopping certain initiatives based on a lack of moving the needle in the ways that we want to move it.
If we do see the needle moving, that's great. I always recommend that people have a failure threshold and a success threshold, and then a pretty wide range in between. So if I'm trying to increase my percentage of authors from the Global South and it's at 10%, is 12% enough to continue doing what we're doing? If we hit 11%, are we going to say, OK, we're going to stop doing what we're doing and try to do something else?
If we hit 16%, that's, yeah, great, an absolute success; let's keep doing it. If we're at 14%, maybe we can do it a little better. But defining for yourself what those failure, success, and redirect numbers are, I think, is really important as you create these programs. And those demographic numbers are really the basis for how you can assess whether you need to continue, shift, redirect, or just absolutely stop whatever activities you're doing and direct those resources elsewhere.
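A small sketch of that failure/success/redirect rule, reusing the hypothetical Global South percentages from the example above; the function and threshold values are illustrative only.

```python
# Hedged sketch: turn a demographic metric into a stop/adjust/continue call,
# with the failure and success thresholds defined up front.
def decide(current_pct: float, failure_at: float, success_at: float) -> str:
    """Map the current percentage onto a predefined decision."""
    if current_pct <= failure_at:
        return "stop: redirect these resources elsewhere"
    if current_pct >= success_at:
        return "success: keep doing what you're doing"
    return "in between: continue, but look for ways to do it better"

baseline = 10.0          # % of authors from the Global South before the program
print(decide(11.0, failure_at=11.0, success_at=16.0))  # -> stop
print(decide(14.0, failure_at=11.0, success_at=16.0))  # -> in between
print(decide(16.0, failure_at=11.0, success_at=16.0))  # -> success
```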
There's a lot more to be said about that, but I'll hand it to Camille. Yeah, it's a big, big topic, I think. So in terms of how we're using employee-specific data: I've mentioned our survey a couple times. We work directly with all the business leaders across the organization to help them develop action plans for their area, and then we follow up with them periodically to see how it's going, basically, and help them measure their success along the way.
A few examples I can give for our DEI programs generally: a couple of years ago, we launched this DEI learning journey that was intended as a year-long training where people could opt in to learn about lots of different topics related to building an inclusive work environment. So there were trainings offered.
I think it's about three to four trainings offered each quarter, and the idea was that people would sign up for the year and take a bunch of these trainings, and we could set people up to become DEI champions in the company. Well, in the first year of the program, we found through our internal team evaluation that most people at the company who were taking the trainings were only signing up for one or two during the year, an average of two, really.
So the idea of having this year-long, opt-in program wasn't matching the reality of what was happening. So with that, we revised the program after that first year to really focus on introducing people to DEI topics that they wouldn't otherwise encounter, leveraging the fact that we're a global company. There are so many nuances in what equity means, or in what diversity dimensions are emphasized in different locations.
So that's our way to get people in the door of DEI, and then we can leverage their participation in those trainings to introduce them to other aspects of DEI going on at the company. So that's one way we can course correct. Yeah, plenty of other examples I could share, but I'm also curious to hear from you all later on, if you have any examples yourselves or any challenges you're facing there.
Yeah, so that theme of course correction definitely resonates with how we're using data at APA as well to inform decisions around EDI. I will also say another piece there is that it helps us determine how to allocate finite resources. Most of the work that is happening with regards to EDI is either through my role or through staff members who already have full workloads on their plates.
And the hope is that we're integrating this work into the work that they're already doing. But of course, you can only ask so much of your team and your colleagues. And so a lot of this data is used to help us decide, OK, of the people, time, energy, and monetary resources that we have, what needs the most attention. And so for the Fellows, the survey feedback really helped us evolve and update the program, course correct as Camille said. So for example, one of the pieces of feedback that we got from Fellows in the first years was that folks seemed to be having varying experiences with the level of mentorship that they were getting from their editors. We launched this program where we were like, yeah, you get a mentor, have fun, but we forgot to train the mentors in how to best support their Fellows.
And so it turned out that Fellows were having really widely varying experiences with how often they were meeting with editors or what level of support they were getting. And so, having received that feedback through the survey data, we were able to course correct and build on it the next year. We built out an editors' guide for managing Fellows, which has best practices and recommendations and expectations for what it means to have a fellow serving with your journal.
And it also just set the standard, so that there's a baseline expectation of what everyone's experience will be. We also received feedback from the Fellows that they wanted to see a more formal orientation. So that was something else that we did the following year: instead of just launching them into the ether, we gave them a more formalized introduction and training, for example in how to use the peer review system, and in how to get to know each other, so that they can build community with one another and not feel isolated on each of their journals. And that's something that we're continuing to try to do as well. And then for the toolkit, as I said before, kind of talking about finite resources, the survey to editors really helped us determine which areas needed more attention or more resources in terms of time, training, staffing, that kind of thing.
So, for example, one of the recommendations that we have in our toolkit is that we invite editors to encourage their authors and their reviewers to think about citational justice. So how can you be critical and thoughtful in how you are building out your reference list, so that you're not continually citing the same legacy, quote unquote, classic articles that are perhaps outdated or typically authored by people from positions of social power?
And so, this at the time was a really new idea. And the response that we got from editors when we had this in the toolkit was: how am I supposed to know how an author on a citation list identifies? I don't have the capacity to do that. The feedback that we got from the survey was a lot of people saying, no, I'm not interested in doing this because I don't even really understand how I would go about doing it.
That allowed us to understand that we needed to provide more training and nuance around what it means to pursue citational justice. So we brought in an expert who had done a lot of research and writing on this work to have a conversation with our editors at an annual editors event that we had. When we did the second edition updates to the toolkit, we built out a much more fleshed-out version of that recommendation that now has resources and tools that people can use to assess their reference lists for gender and racial balance, as imperfect as some of those tools are.
That's at least there as a starting point to have those conversations. So the survey feedback, again, that's really how we're using those data points: to build upon and create a more sustainable and meaningful experience for the goals that we're setting for our journals. This I will just share very quickly. I'm sorry, the graphic here is very small, and I won't go into detail, but this is just an example of how we present the data to assess our progress on uptake.
So you can see the x-axis just has each of the different parameters that we are measuring for the recommendations that are in our toolkit, and the y-axis is the percentage of journal uptake. The darker color is the baseline from January 2022, and the lighter color is uptake as of, I think, the end of April this year.
And so this is updated quarterly. It's kind of a real-time demonstration to us of where we started and where we are now. As you can see, there are a lot of things like CRediT, right? CRediT was something that very few journals had offered at the beginning, and now I think almost 70% of our journals offer it. So this is also useful for us because it tells us, to Alyssa's point from before, what's going well, right?
What maybe needs a little less attention because we've gotten there and how can we reallocate those resources to other areas. So that's just an example of what that looks like in terms of presenting the data to ourselves. So final question and then hopefully we'll have time for Q&A. What advice do you have for your peers in the room.
I mean, it sounds like a lot of us are already measuring the impact of our programs. So perhaps this is a question we can pose to the audience as well. But for us at the table, what advice do you have for your colleagues? Yeah, I have a few points of advice. The first one, from a sad experience I'm sure we all have had, is document everything, because the loss of institutional knowledge in this field is huge.
So people are doing things, they're running programs, and then they leave and you've lost everything that they've ever done. A lot of this comes from working with a large organization; in my experience, say, Cell Press has 60-something journals, and every editorial team had a specific thing that they were doing on five different parameters. And one editor leaves or moves, and all of that practice and knowledge is lost.
And so there's a lot of wrangling to be done to get people to write down what they're doing in an accessible format so that knowledge is not lost. So that's a lot of administrative work, but it's pretty vital work if you're trying to gather good data on what you've been doing and how it's been working, because someone might leave and then all of a sudden your numbers change in a way that you have no idea why.
And it's because they were doing something that you weren't aware of. So really, that's the irritating administrative bit. As far as the more conceptual bit: a lot of us who are in this space have things in our heads that we have to keep reminding other people of. And one of those is that your methods are not your goals. Or as I heard it put the other day, your outputs are not your outcomes.
And so people get really attached to the ways they have of doing things, the programs that they're putting into place. And if those aren't moving the needle, pull a different lever instead. It's OK to drop what you're doing and say, this is not the best way to accomplish this, and really get people to dial in on their goals. What do you want to achieve here?
What is your strategy here? And it's not just, I want to publish so-and-so many papers from so-and-so authors. Why is that important to you? What do you want to achieve by that? Do you want to achieve greater visibility? Do you want a certain population to be given a larger voice in science? Do you want to get more submissions from these people?
Those are all slightly different goals. And so you need to know exactly what your goal is before you can attempt to measure whether what you're doing is the best way to achieve it. A lot of us in this space who work in DEIA have this in our heads, but your editor doesn't, and production staff don't necessarily, because this isn't their main job.
So constantly reminding people like, hey, no one's going to judge you for stopping a program. It's psychological safety. You can fail. You can move on. You can change. But like, if this isn't achieving your goal, change your method. You don't have to change the goal.
That's all right. Yeah, I think, especially since many of you said that you're measuring DEI in some way, a big part of it, similarly to what Alyssa is saying, is only measure what you're going to use to make decisions. When I first started my role, I think there was an expectation of, we've got to create some metrics.
And I kept going back to, well, what are we going to do with the information that we're gathering? So I think being really intentional about making sure we're only asking questions that will help us, or help leadership, make decisions about something is so crucial. For example, for DEI trainings, or any trainings, a really common feedback question is, how satisfied are you with this training?
Whenever I see that, I think, I don't know what I'm going to do with this information. Generally, people are pretty satisfied, and it doesn't tell me what they're satisfied with or what they're unsatisfied with. What's something else we can ask that will actually tell us if this was successful? Can we ask whether they feel comfortable explaining a specific concept to a colleague, or whether they feel that they gained more knowledge about a topic than before they took the training?
There are some surprising findings there. Sometimes we get a dud training, where a lot of the people who take it are already knowledgeable in the topic, and we think, OK, we're not reaching the right people here, or the training was too simplified, so let's move on from there. So I just think being intentional about why we're asking the questions that we're asking is a big piece of advice.
That's great advice, and definitely as it relates to GDPR: what are the parameters within which we can actually use the data that we're collecting? The piece of advice I would give is don't underestimate the value of qualitative data when doing this work. Numbers are good, numbers are important. Simon Holt said this in his session yesterday.
I hate that I have to make a business case for DEIA, but that is where we are. So the numbers matter, but the human stories are what are at the heart of this work, and oftentimes the most important changes that we're trying to effect are the people parts of it. So, as much as you can, use qualitative data as part of this work that you're trying to do; speak to the humanity, speak to the individual person's story, and use that to make your business case.
Don't just rely on the numbers, but also use that qualitative je ne sais quoi. The other thing I would say is be intentional in thinking about how you can set your goals so that they're aligned with your organization's vision and mission. I think that's a really key way to protect DEIA work in this often hostile climate. If the work that you're doing is tied directly to the mission and vision of your organization, it's a lot harder to walk away from it.
So how can you use data? How can you set goals and benchmark metrics that tie in directly with your organization's vision, so that you can more effectively measure that work? So I think that is the last of our discussion. We have a good amount of time for Q&A. I will also just say we want to point to these two sets of resources that came up as we were planning this session.
So there are QR codes on the screen here. The first is the C4DISC Antiracism Toolkit for Organizations. They have a chapter on using measurement and metrics, so we strongly recommend you review that; there are a lot of great resources in there. And then the newer toolkit, the Focus toolkit for editors, also from C4DISC, has a chapter on collecting and reporting demographic data specifically.
So there are some best practices and standards there, specifically for collecting demographic data about the folks serving in those leadership roles. Those are two great resources to cap off our conversation. With that, we can move into Q&A. I can walk around with the mic. We just ask that you speak into the mic if you can, so that everyone can hear. We'll get started. Does anyone have any questions?
Yay, and yeah, questions. Or we'd love to hear what you all are doing at your organizations, or what you're curious about, or thorny issues, experiences, all that. Great. Hi, Tracy Ryan from Sage. I'd be interested in hearing if you're experiencing survey fatigue, especially with your internal staff.
And if so, how you've addressed that. Yeah, I can speak to that on the Springer Nature side. Thankfully, we haven't noticed any severe fatigue at all. I think that a big part of that is how we're communicating about the value of taking these surveys and what we're going to do with the information. So I think if people can see that we're actually acting on what they're giving feedback on, and that it's not a huge time commitment to share their thoughts, the more likely they are to participate.
And thankfully, we continue to get feedback on other programs. So it's something that we definitely keep an eye on. But so far, we've been really blessed to have people who are willing to share their feedback. Yeah so I can say that we have experienced survey fatigue, especially with our Fellows the first year, and I think a lot of that had to do with timing. So the first year that we did that sort of midpoint survey, we rolled it out in September.
That's when folks who are in academia are more likely to be back on campus. And the response rate was great; it was like 80% of the Fellows had responded. The next year we were like, oh, that was great, let's do it earlier. We tried to roll it out in June and we got a 30% response rate, so timing definitely matters. But again, an opportunity to course correct.
So we said, OK, this year we're going to wait until September to roll out the survey, even if it means we have a little bit less time to do something with the responses. Other questions. Do you have any advice for people who work at journals or organizations that can no longer publicly do DEI work.
I was on my journal's committee for diversity, equity, and inclusion. I work for the National Academy of Sciences, which is a federal contractor, so it was kind of cut overnight, and it was one of my favorite parts of my job. And so I'm so happy that we can be at SSP and have these conversations. But I'm sure there's a legal aspect to it as well.
But do you have any advice for people who want to continue doing that work, but are facing some institutional barriers for the first time? I guess I have a couple pieces of advice. First of all, I don't know, it's late in the weekend, I can throw some shade: the people doing the censorship are not looking that closely; they're not using their full brains. You can just reword things. You can just do what you need to do and not call it what they're looking for, and you're probably going to be OK. That's one of the reasons I'm in consulting now. You can hire consultants.
You can say these are business services. You can say we're looking to increase our bottom line by broadening the geographical locations of our authors. You can say things like that. You can't have a specific program called DEI anymore, but to Demita's frequently repeated point, you can still do the work.
You just can't call it that anymore, and it's irritating, but there's a lot you can still do; you just have to hide it a little bit better sometimes. So I think that's where we're at. I'm happy to hear anybody else's thoughts or advice based on your experiences. Yeah, I think just building on from that: this work has always been for the benefit of everyone.
We want people to be able to be comfortable at work so that they can perform their best and feel good about what they're contributing. I think a big part of why we look at things like psychological safety and bullying at work and microaggressions is that these are not just morally valuable things to improve, but also things that hinder an employee's ability to simply perform.
Generally, if we need to make that case for it, I think that's going to continue; it's a human case. And hopefully, just as you said, reframing things to be clearer about why we're doing what we're doing will help. But it is just a really scary climate, and I think every organization is navigating it differently, but hopefully having these conversations at SSP is helpful for knowledge sharing.
I will say, to Camille's point, that a lot of the work that we do just in making everyone safer and more comfortable, and things like psychological safety specifically, affects underrepresented communities more. It is more important that a diverse team be psychologically safe, because it is less easy for team members to understand each other's subtext. And you don't have to say that to know that.
And so we still need to be doing this work, these projects, and we know that they will benefit specifically the people that we want them to benefit. But we don't have to say that. And, I mentioned this before, but as much as you can, tie the work that you are trying to do into the mission of the organization, to their point, even if you have to couch the language, which I hate that we have to do.
It's awful, it's awful, but we do. If you can tie it into that mission, then it's a lot harder for them to turn away from it, because it is in line with what this organization has stated it's trying to do. Other folks?
So I had a question teed up in case we were all a little sleepy, which is fair. What is something that you wish you were collecting that you're not collecting right now. And I can start with an answer if you all want to ruminate on that for a second. So something we are not doing, but that I very much wish we could was measuring uptake at the article level. So right now we're really only tracking at the journal level which journals have offered which standards.
I would love, in a world of infinite resources and time and staffing and everything else, for us to be able to get down to that article level and say how many of the articles that we're publishing, for example, have positionality statements? How many have constraints-on-generality statements? Different pieces like that. To be able to get to that micro level would be amazing, because I think it informs the real research experience: it's one thing to require something in a submission standard, it's another thing to actually see it happening in the article.
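As a hedged sketch of where that kind of article-level tracking might start, here is an illustrative scan of article text for a positionality statement or constraints-on-generality section; a real pipeline would work from structured article XML with far more careful matching, and the section names and sample text here are assumptions.

```python
# Hedged sketch: flag whether tracked sections appear in an article's text.
import re

SECTION_PATTERNS = {
    "positionality_statement": re.compile(r"\bpositionality statement\b", re.I),
    "constraints_on_generality": re.compile(r"\bconstraints? on generality\b", re.I),
}

def article_uptake(full_text: str) -> dict:
    """Return which tracked sections are present in one article's full text."""
    return {name: bool(pattern.search(full_text))
            for name, pattern in SECTION_PATTERNS.items()}

sample = "... Method ... Positionality Statement: The authors identify as ..."
print(article_uptake(sample))
# {'positionality_statement': True, 'constraints_on_generality': False}
```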
So if anyone has ideas and wants to collaborate on ways to build that out, come talk to me afterwards. That's definitely been on my wish list for a long time. I would love to do some analysis of pay gaps beyond gender, and of promotions across various demographic groups, and I'm hoping to do that in the future. We're hoping, fingers crossed, to start a self-ID campaign so that we can start to integrate some of our human resource systems with some demographic information, but we'll see if that's successful. I want to see a lot more measurement of disability and neurodiversity, and to see how those affect all of these other metrics. I wish we were at the point where we saw more of that. I wish we were at the point where more people felt comfortable reporting those things, because fundamentally, a lot of what we want to measure is also the kind of information that can be used for evil.
So you have to have a fundamentally cooperative population who trusts you to do that. But I would love to be measuring more of that. Any lingering ideas or thoughts from the group? It doesn't just have to be questions; as Camille said, are there things that are working well, or that are challenging, in what you all are doing?
Yeah, I'm Sharon from APA Publishing. Do you have advice for editors who have concerns about keeping up with the standards of DEI for their journals? I know for a fact some of our editors are just very concerned with, for example, asking reviewers their ethnicities and so on.
So, Sharon and I work together, full disclosure. But I can say that is definitely a question that we've seen. For editors it's like, oh God, another set of resources that you're asking me to adhere to. What about the fatigue of our authors, of our reviewers? As Sharon said, the response that we have for most of them is that these are tools for you to build upon the work that you're doing.
It's not prescriptive. The hope is that this is just something that you can start to integrate in ways that make sense for your journal and for your specific subfield, because the way that you build out DEI for one area of even psychology is going to look very different across subfields. So that is how we present this: these are tools in your tool belt. It's not a requirement for you to adhere to all of these things.
Our hope is that over the long term, that will start to become more of the norm. But right now, especially right now in this political climate, maybe that's not possible, but at least it's out there so that when the time is ready, we can do that work. Any thoughts from my esteemed panelists. Yeah, a lot of the work that I did with editors was trying to streamline and coordinate different journals and different editors to do the same things, to create some of that standardization.
And I would say one of the things that I encountered was that everyone is doing something, right? We're all kind of trying to move the needle where we want to. And like all science, the field develops and we find a new way to do it better, and we've got to do that. So I think sometimes framing it that way to your editors can help them think of it this way.
Like, OK, the science on this has advanced. We are now redirecting. We are doing this instead of that. I didn't find that I had editors complaining to me about using time and resources so much as about stopping what they were doing and starting to do something else. Just making it very clear that this is a process of learning and developing as a field, as all of our disciplines do, and framing it as a science, can sometimes get past those concerns.
So we are at time. So we can close out. We will just say, thank you so much. I hope this was a productive conversation for all of you. Again, thank you so much for sticking it out until the end of Friday and we appreciate your time. Take care.