Name:
SSP Innovation Showcase (Summer 2022)
Description:
SSP Innovation Showcase (Summer 2022)
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/9ba3f56d-250e-4f44-8d5e-bdf38785ff5a/thumbnails/9ba3f56d-250e-4f44-8d5e-bdf38785ff5a.png
Duration:
T01H00M46S
Embed URL:
https://stream.cadmore.media/player/9ba3f56d-250e-4f44-8d5e-bdf38785ff5a
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/9ba3f56d-250e-4f44-8d5e-bdf38785ff5a/Innovation Showcase July 2022.mp4?sv=2019-02-02&sr=c&sig=acv9g0NW833HF%2BxkGiZqukx9YZJLH4j7dnGvZggPLsM%3D&st=2024-12-21T12%3A46%3A09Z&se=2024-12-21T14%3A51%3A09Z&sp=r
Upload Date:
2024-02-02T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Hello everyone, and welcome to today's Innovation Showcase, hosted by the SSP Advancement Committee. We're very pleased that you can join us today. I am Julia Kostova. When I'm not volunteering for SSP, I'm director of publishing development for Frontiers, the sixth-largest and third-most-cited publisher.
But today I'm here as a member of the SSP Advancement Committee and chair of the Generations Fund committee. The Generations Fund is SSP's endowment fund, founded to provide permanent, sustainable funding for SSP's valued fellowship, mentorship, and D&I programs. We appreciate the support of every organization and individual who has donated to it, and we ask all of you to support the Generations Fund to ensure the future of the next generation of scholarly publishing professionals.
Before we start today, I have a few housekeeping items. Your phone will be muted automatically in consideration of our presenters and your fellow webinar participants. If you have questions for the panelists or experience technical issues during the webinar, please use the Q&A box; we will be monitoring it and will be able to respond there. You may also use the chat feature to chat with the panelists and other attendees.
And don't worry: if we don't get to your question, we will also share contact information for all our speakers today following the broadcast. This one-hour session will be recorded and made available in SSP's on-demand library following today's broadcast. Closed captions are also available; there is a CC button in the lower right-hand corner of the screen.
A quick note on SSP's code of conduct for today's meeting. We are committed to diversity, equity, and providing an inclusive meeting environment, fostering open dialogue free of harassment, discrimination, and hostile conduct. We ask all participants, whether speaking or in the chat, to conduct themselves in an orderly, respectful, and fair manner.
Today, we're going to hear from seven speakers about the latest industry innovations in a series of short presentations. We'll have some time at the end of the presentations for some Q&A, and each presenter will also provide a QR code or a link should you want to discuss further offline. Our first speaker today is Gerasimos Razis of Atypon. I'll pass it on to him now. Gerasimos?
Great, thank you. Give me a minute to share my screen. Can you please verify that you can see it? Indeed, you can see my screen? Yes? Great, thank you. Thank you.
So, hi, everyone. Thanks for being here with us today. I am Gerasimos Razis, a senior product manager in Atypon's taxonomies group, and today we will be discussing how at Atypon we rely on AI to help our customers solve their content tagging challenges. For those not familiar with Atypon, Atypon is an online publishing platform provider currently hosting approximately 50% of English scientific content.
I assume that most of you are already familiar with what a taxonomy is, but let us quickly give a definition. A taxonomy is a set of topics, or tags, that can be used for a plethora of purposes, like organizing content, inferring user interests, et cetera. Let us quickly see some benefits: taxonomies can be used for enhancing content and its discoverability, which can take place on site, for example through topic pages, topic facets, personalized recommendations, and personalized search, but can also take place off site.
Off site, we are of course talking about SEO, search engine optimization; however, we are not limited to that. We can also infer user interests, create content bundles, improve content curation, et cetera. Now, let us quickly discuss the existing approaches in the traditional workflow of creating a taxonomy. Firstly, publishers can manually create their own bespoke taxonomy.
Secondly, third-party vendors can be employed for manually or semi-manually creating a bespoke taxonomy. Or publishers can rely on publicly available taxonomies such as MeSH. However, there are some inherent challenges to these approaches. First and foremost, manual or rule-based (semi-manual, if you prefer) tagging is impractical at large scale, because too much effort is needed to tag the documents and maintain those tags.
Although bespoke taxonomies may fit your current content needs very well, this may not be the case in the future. For example, COVID is a trend now; it exists in the majority of taxonomies. However, three years ago no one was aware of such a term. Finally, domain and subject matter experts are required, which is costly, and of course humans are prone to error and bias.
Apart from that, untagged content is fragmented into publisher-specific data silos, without the ability to seamlessly query, retrieve, tag, or recommend the content. To address these challenges, we provide a solution: the Atypon taxonomy, a vast multidisciplinary taxonomy spanning 19 disciplines and containing more than 250,000 tags, which we derived with a very specific methodology from the Microsoft Academic taxonomy, which in turn relies on the MeSH taxonomy and Wikipedia terms. By relying on a combination of AI and collective intelligence, we continuously curate and expand this taxonomy.
You can see an instance of our taxonomy in the presented figure. And of course, it comes by default with the Auto-Tagger, which is an automatic classification solution. The unique advantage of this solution is that no training is required: it can be used directly on any type of content, not only on documents but also on videos, audio, et cetera.
Let's quickly discuss our Auto-Tagger. We have been consistently participating in, and winning, the BioASQ competition over the last five years; BioASQ is a well-known international competition on content classification. We recently won last year's competition against 30 teams from multiple countries, and our solution is 8% more accurate compared to the official solution of the NIH.
And our auto-taggers are becoming even better each year. Why? Because we compete against the best in the domain, and any advancements that are derived are then incorporated into our system and thus made available directly to our clients. Of course, this builds on the years of strong R&D investment that we have put into this solution. And thanks to this superior solution, ever more customers are using our auto-taggers.
Currently, approximately 15 publishers are using, or are ready to use, our Auto-Tagger, and they are satisfied with it; we provide a testimonial from one of our publishers here. Now, let us quickly check how the Auto-Tagger is configured. It relies on two steps. During the first step, the auto-classifier is trained with a set of training examples, as you can see on the left-hand side of your screen, and what we call a trained model is derived.
During the second step, new untagged articles are passed through the Auto-Tagger and automatically tagged using the trained model that was the outcome of the previous stage. As you can see, a confidence score is associated with each tag, which measures how relevant that tag is to the content. This helps curators save a lot of effort, as it allows them to concentrate only on the items with very low scores.
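To make the two-step pattern concrete, here is a minimal sketch using scikit-learn. Atypon's actual system is proprietary; the training documents, tags, and the 0.5 review threshold below are invented for illustration.

```python
# Minimal sketch of the two-step auto-tagging pattern: train a classifier
# on already-tagged examples, then tag new articles with a confidence score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: train on already-tagged example documents (toy data).
training_docs = ["mrna vaccine trial results", "deep learning for imaging"]
training_tags = ["immunology", "machine learning"]  # one tag each, for brevity
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(training_docs, training_tags)

# Step 2: tag new, untagged articles and keep a confidence per tag.
def auto_tag(doc: str, threshold: float = 0.5):
    probs = model.predict_proba([doc])[0]
    tag, confidence = max(zip(model.classes_, probs), key=lambda t: t[1])
    # Low-confidence items are routed to human curators, as in the talk.
    needs_review = confidence < threshold
    return tag, confidence, needs_review

print(auto_tag("a convolutional network for tumour detection"))
```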
Our most recent advancement of the tagging solution generated 27% better results compared to the previous solution. How did we do that? We relied on more advanced deep learning technology for our new solution. Apart from the product outcomes described on the second slide, AI-based content tagging can of course also drive your business outcomes.
For example, your site traffic can be increased as users discover content that covers their needs more efficiently and thus stay on your sites longer. This leads to increased revenues and, of course, increased profits. Also, content monetization is increased, because it targets the right audience. And from the users' perspective, these benefits lead to a greater user experience and satisfaction.
Moreover, the topics allow you to gain and improve understanding of your audience and, of course, of market needs. Finally, since publishers are now relieved of the laborious processes of training the auto-classifier, tagging the content, and maintaining and updating the taxonomy, you can spend more time focusing on the more important aspects of your business.
Of course, one thing that we need to mention is that the Auto-Tagger is not only applicable to the Atypon taxonomy we described: it can be applied to any custom taxonomy or to any publicly available taxonomy, such as the MeSH one. If you want to know more about who we are or what we do, feel free to subscribe to our community newsletter to stay continuously updated about our AI advances, and of course follow any of the links available at the bottom of the screen.
Thank you for your attention. I don't know if there is time for any questions or any other clarifications that might be required. Great. Gerasimos, thank you so much for this overview of new innovations. Our next speaker is going to be Murugesh Mayandi from Straive. Murugesh, I'll pass it on to you now. Thank you very much.
Hi, my name is Murugesh Mayandi, based out of Atlanta. I lead the data solutions business at Straive. A lot of you would know Straive from our content solutions perspective, but I wanted to take this time to share some things from our data solutions perspective as well. If you'll go to the next slide, please. The way we look at it, every company that is investing in data is a data company, irrespective of the industry.
And that's how everybody needs to think about it as well. It's not just about what we are doing in our own products and services, but how we are using the data that we know about our organization, our products and services, and our interactions and engagement with the client. To be a better organization, to be a better provider, that is very critical. Most mature organizations, if you look at it, are creating digital twins of their own operations and products and services,
so that they are able to understand what is happening at each point of interaction and engagement, and, as the products move through the value cycle, to see how they can become better at what they are doing. Banking is a great example that I always look at to see what they are doing and how we can learn from them. Today, banks are not just using data about what their customers are doing to see how they can interact; they are thinking about what they can do to understand why a customer is going to call, even before they call.
If somebody is calling me, or is going to call me, because their card was declined, I know that already, and so I can address it even before they explain why they are calling and get frustrated about not getting the solution they are looking for. Right? So we have got to be able to use all the data to see how we bring value.
Next slide, please. From a publishing lifecycle perspective, here is a view of the various data streams that we need to be thinking about. Most organizations focus on what is core to their business, and in this case it's the content data stream: understanding what our assets are, what we are delivering to our clients, and how it is being used.
But we have got to look at it holistically, in a 360-degree view, to see what else we can understand about our main products: whether it's the people who are consuming them, the organizations that are consuming them, our financial models, or, most importantly, the touch points with our customers in terms of experience and social engagement, to really understand what our products and services are being used for.
Are they really being used the right way, or are they not connecting with the right engagement needs, so that I'm having churn issues or other challenges where I'm losing membership, or customer satisfaction scores that are not equal to what I'm looking for? So understanding your data streams and creating a holistic strategy that connects all of this is very important.
Next slide, please. Now, if we do that, we can look at some of the possibilities: how can we better understand our customers, authors, and any parties we engage with; how can I better classify and curate my content so that it is consumed by the right parties in the right manner, thereby improving customer experience; what additional content is required; how do I explore new content that will bring in new services and capabilities; and what more do I have to do to keep my existing customers happy, or potential customers engaged, so that I'm able to deliver the value I'm promising from my publishing or any other service? Next slide, please. Now, the challenges that we have typically seen fall into these four buckets, at a very high level.
Number one, organizations, as they've grown, have created these data streams in silos and are only looking at them individually: financial data as its own silo, operational data as its own silo, but not connecting them to ask what the impact is of an operational lever not working, such that it is not giving me the financial lever I'm looking for, or which operational lever is impacting what my customer experience lever needs to be.
There are also gaps; not everybody needs to be mature in every data stream, so there are gaps, but understanding them and creating a plan of what is required is critical. Last but not least is quality and governance: do we have good-quality data that we can trust and make decisions on? That is also very critical.
Now, how do we solve for all of this? On the next slide, if we can move there, please, is something that we've created which will help you. Now, this is not a silver bullet, to say that you need a platform like the Straive Data Platform for solving these things. But what we've done with the Straive Data Platform is create a framework and a solution that is easily customizable and configurable.
So the five steps, or the five modules, required for solving any of the data problems that we spoke about are an overarching solution that can extract your data from any of your sources, enrich, transform, or curate the data, validate and trust the quality of the data, and consume it in whatever shape or form you need. Now, think about being able to build this as a pipeline that can be replicated for any source that you are looking at, whether it is internal or external to you, and to build it incrementally.
That's what we have enabled through the Straive Data Platform. It is primarily built on AWS, but you can think of this being built on any solution; you can do this yourself, and it's not necessary that you look only at us. What is essential is this framework and solution approach: you build it once and are able to use it for any number of pipelines you are looking at, configure it for any source, and create an overarching solution that can solve for all your data incrementally. Don't look for a big bang, solving everything at one point in time; instead, incrementally build, fill your gaps, and create value through the data, whether it is your operational data, your financial data, or your client experience data. Fill in the gaps and incrementally derive value out of your data sources.
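As an illustration of that build-once, configure-per-source idea, here is a toy pipeline framework in Python. The stage names (extract, enrich, validate, consume) mirror the modules just listed, but the code is a sketch, not the Straive Data Platform itself.

```python
# Toy sketch of a reusable pipeline: one framework, configured per source.
from dataclasses import dataclass, field
from typing import Callable, Iterable

Record = dict
Stage = Callable[[Record], Record]

@dataclass
class Pipeline:
    extract: Callable[[], Iterable[Record]]              # pull from any source
    enrich: list[Stage] = field(default_factory=list)    # transform / curate steps
    validate: Callable[[Record], bool] = lambda r: True  # quality & governance gate
    consume: Callable[[Record], None] = print            # deliver in any shape

    def run(self) -> None:
        for record in self.extract():
            for step in self.enrich:
                record = step(record)
            if self.validate(record):   # only trusted data flows downstream
                self.consume(record)

# Configure the same framework for a hypothetical financial data stream.
pipeline = Pipeline(
    extract=lambda: [{"account": "A-1", "revenue": "1200"}],
    enrich=[lambda r: {**r, "revenue": float(r["revenue"])}],
    validate=lambda r: r["revenue"] >= 0,
)
pipeline.run()
```

The same Pipeline object could be instantiated again for an operational or client-experience source, which is the incremental, gap-filling approach described above.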
We'll go to the next slide, please. An example of what we've done is for a leading information provider. They wanted people data, and we employed the people data module, with which we were able to look at executive information from company websites. So think of being able to point this module at all company websites; in this case, we pointed it at roughly 3 million companies and 18 million contacts.
We are able to automatically extract not just a one-time download but to monitor changes: when people move from company to company, we can see that somebody has moved and that the record needs to be updated. So we created a very seamless pipeline that automatically updates this data and produces good-quality data that can be used to ask who my contacts are, who the new contacts coming into the system are, and how I can leverage them for my operations.
Those are the things that we've been able to solve for in this case. And we'll go to the next slide, please. Lastly, to summarize, the critical thing I will leave with you is: please look at all your available data streams, not just what you know today, but take a holistic, 360-degree view of your operations, your client experiences, your product needs, and your knowledge needs. Those are the four things that I'll say.
Look at knowledge, your product, your customer experience, and your operations, and see what data streams are required. Do you have them, and how can you build an integrated platform solution that will deliver value incrementally over a period of time, while exploring new data streams and getting value out of them? Thank you. Great, thanks very much, Murugesh.
Thanks for this. Let's move on to our next speaker today, Joe Adams from Morressier. Joe, I'll pass it on to you. Hello? Yes, hi, everyone. My name is Joe Adams. I'm the senior director for sales and partnerships at Morressier, and over the next seven minutes, I'm going to be talking to you about the power of integration.
Next slide, please. Thank you. So Morressier is really focused on being an integrated, end-to-end solutions provider for better user experience and faster research breakthroughs. And that sounds like a lot of interesting words, but over the next few slides, I'll talk you through what we mean by that. So next slide, please.
Well, I think we might have skipped one. No, OK, this is fine. So Morressier focuses a lot on early-stage research, and this early-stage research is pulled through from conference submissions of abstracts, posters, presentations, data sets, and so on. It's really a vast amount of content that's not easily shared with the world.
And there are reasons for that. Most of the time it's because it's not being properly funneled through clear submission workflows; it's being presented very briefly at conferences and not shared very well after that. So often this early-stage research is shared in a very small window of time, and the next time you get to see that research is when it comes out 18 to 24 months later in a preprint or a journal paper.
So really, what we're doing at Morressier is trying to find an integrated way of looping this early-stage research in. Next slide, please. Within the Morressier platform we have a number of different tools which all integrate with one another. The first of these tools is a peer review workflow: a really streamlined approach to calls for content, abstracts, proceedings, and so on, with in-depth integrity checks throughout, making sure that all of this early-stage research entering the research ecosystem is credible, interesting content.
From here, we have integrated conference hubs, which are really our virtual event platform. What we're trying to do here is make sure that all of the content that's run through our submission and review systems is then put into a format in which it's easily shared and engaged with. So our virtual event platforms are really built around the opportunity to network, discuss, and explore new research that's coming into the world.
And from there, the real goal is to make sure that this research continues its journey. So it's published in research libraries, all within the platform, all integrated, and it brings in all of the different format types that you would like: not just the abstracts and papers, but also videos, presentations, and data sets. These are all pulled through to the research libraries, synced with ORCID IDs and assigned DOIs, and that enables this early-stage research to enter the wider scholarly ecosystem.
Next slide, please. So how do we do that? By making sure that all of the different content types can be pulled through into our workflows tool. That really means that each piece of content, whether it's a video, a data set, a presentation, or a paper, can be linked with one another, reviewed properly, and given the appropriate integrity checks. One of the big problems that a lot of publishers and societies are seeing is that this early-stage research is not going through the right checks and balances, and therefore there are retractions later down the line and so on,
which causes reputational damage. That doesn't happen with the Morressier platform: it's a really stringent review process, with suitable levels of customization, that enables multimedia formats to be pulled through those workflow stages. Now, from here, content gets pulled through to the conference research library.
Now, what's really important, and I was interested to hear from the earlier presentation about the importance of taxonomies, is that within the research libraries, for Morressier, it's all broken down into clear taxonomies to enable easy search, easy viewing, and cross-pollination of ideas, but equally to make sure that there's a feed into journals. What I mean by that is that there's a way to identify the early-stage research that journal editors and publishers can really pull apart and think: this is something that we need to be commissioning around or building on.
Next slide, please. So as a salesperson, I'm a particular fan of funnels, and this is really showing not just how the platform integrates all of the tools we have within it, but how we integrate with the broader research world, I suppose. So the submission workflows, which I've talked about, really pave the way for quality submissions to enter the research ecosystem.
And this then enables them to be shared in research libraries. So when I shared that inverted funnel earlier, where you can see the huge volume of early-stage research, this research is often not being shared and therefore not being monetized. When it's pulled through to research libraries, you have the opportunity to monetize this content through revenues such as APC or subscription models, or even corporate sales and advertising.
And this is stuff that's not being monetized at all at the moment, so there are new revenue streams there, generated through the Morressier platform. From there, it enables the journals, publishers, and societies that we work and partner with to look at upcoming trends: what's the research that's being engaged with the most? Who are the most popular authors?
What are the topics coming through that we should be commissioning around? As we move as an industry to more of an article-based economy, selecting the right journal articles from this early-stage research is increasingly important. And it's not just important because we're getting the right research out into the world; it's also important because we're diversifying revenue streams, and every publisher, society, and organization is really interested in doing that.
And really, that's central to the Morressier goal here: to find an integrated way to pull all of this early-stage research into the wider world and make sure it's shared and engaged with. Next slide, please. So that's it for me. I don't think we'll have time for questions, but please feel free to message me after or reach out to me on LinkedIn.
Thank you. Joe, thanks so much for telling us about Morressier's efforts in supporting early-stage research. Our next speaker today is Doug Clendenen from Gutenberg Technology. Doug, over to you. Great, thank you, Julia, and thanks to SSP for organizing this webinar. It's been great so far.
Hello, everybody. I'm Doug Clendenen, EVP of strategy at Gutenberg Technology. For those of you who aren't familiar with Gutenberg, we have a SaaS platform that publishers such as Wiley and Cengage use to automate and streamline the content creation and distribution of textbooks, ebooks, and courseware. What I want to talk to you about today is a problem that we hear about from a lot of our publishing clients, which is how expensive it is to create engaging courseware.
And this is particularly problematic for low-enrollment courses, where the demand really doesn't justify the expense for a publisher to create a course. The good news is that with recent innovations in the transformation of legacy content, rapid authoring and assembly, and push-button distribution, you're able to take a much more agile approach to creating courses.
Slide. So, as I alluded to, it's very expensive to create courseware, which is a particularly big problem for low-enrollment courses. Some data points from our clients: it costs anywhere from $45,000 to create a course, up to $250,000 or more, depending on how complex the course is. And if you're looking at what the break-even point is for a course to be economically viable from a publisher's perspective, what we've seen with clients is that it takes anywhere from 10,000 up to 50,000 units sold over a three-year period, depending on how many digital components there are to the course and how complex it is.
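As a worked check of those figures: with a hypothetical per-unit net margin (the talk quotes only the course costs and the resulting unit ranges), the break-even arithmetic looks like this.

```python
# Break-even sanity check for the figures quoted above. The per-unit
# margins are hypothetical assumptions chosen to reproduce the stated
# 10,000-50,000-unit range; the talk does not give margin figures.
def break_even_units(course_cost: float, margin_per_unit: float) -> float:
    """Units that must be sold before revenue covers development cost."""
    return course_cost / margin_per_unit

# A simple $45,000 course at ~$4.50 net per unit needs ~10,000 units;
# a complex $250,000 course at ~$5.00 needs ~50,000 units.
print(break_even_units(45_000, 4.50))    # 10000.0
print(break_even_units(250_000, 5.00))   # 50000.0
```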
So again, the economics really start to break down if you're a publisher looking to create a course for a low-enrollment subject. Slide. So what is often used in place of a course are ebooks, and ebooks are great. They certainly come with a lot of advantages: they're relatively inexpensive to produce, and they're available on a variety of mobile platforms.
You can distribute them through third-party channels like Amazon or VitalSource. But they also come with a lot of disadvantages, to name a few: they're very hard to update, they're very easy to pirate, it's hard to integrate related activities and assessments, and you're not able to take advantage of a lot of the features that an LMS would provide.
So from an end-learner experience, it's really not optimal. Slide. The reason developing courseware is so expensive is that developing a course typically runs along separate print and digital tracks, so you have multiple redundant workflows, with a lot of starts and stops between the various stages.
There are a lot of handoffs between teams, and it's a very manual process as well. All this adds to the cost of developing a course. It also really lengthens the time to market, and it limits the amount of agility you have with your content, because more often than not, your content will get trapped in a certain product or content format. Slide. So part of the solution to this problem is implementing what's known as a lean content development approach, where you're able to consolidate multiple workflows into one workflow, or one single source of truth, if you will, and then you can have multiple teams working on the same title or course at the same time.
So you can have your authors, your editors, your project managers, and potentially third-party vendors as well, all working at the same time in a real-time collaborative environment, with many fewer starts and stops. And on the production side of things, it's very automated: from one single source of truth, you're able to distribute at the click of a button to print, to digital, or to a third-party
LMS. Slide. Once you implement a lean content development approach, you're able to start to implement what we call Instant Course, which is a very quick way to essentially take an existing book and rapidly transform it. So you take an EPUB file and use our content transformation tool, which transforms the EPUB file into various component XML files.
You get that into an authoring environment, rapidly update and assemble the course, and then, at the click of a button, distribute the course via Common Cartridge or a SCORM package; or, if it's an LTI-compliant LMS, you can do it that way, all at the click of a button. So it's a very fast way to create a kind of minimum viable product version of the course. Slide. If I walk you through the process at a very high level, these are some screenshots from our platform.
Step 1, again, is that you transform the content: you take an existing EPUB file, extract the content, and transform it into component XML files based on the structure you're looking for, with the various chapters, sections, and subsections. This particular tool is highly automated, so there are not a lot of manual adjustments you're going to have to make after you go through the transformation. Slide.
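For illustration, a minimal sketch of what that first transformation step could look like: an EPUB is a zip of XHTML content documents plus an OPF manifest, so the chapter and section components can be enumerated from the package spine. Gutenberg's actual tool is proprietary; the OEBPS/content.opf path and file name below are common-case assumptions.

```python
# Enumerate an EPUB's content documents (one per chapter/section) in
# reading order, as a starting point for splitting into component files.
import zipfile
import xml.etree.ElementTree as ET

NS = {"opf": "http://www.idpf.org/2007/opf"}

def list_components(epub_path: str) -> list[str]:
    """Return the EPUB's content documents in spine (reading) order."""
    with zipfile.ZipFile(epub_path) as epub:
        # The OPF package file is assumed at the common OEBPS location;
        # a robust tool would read META-INF/container.xml to find it.
        opf = ET.fromstring(epub.read("OEBPS/content.opf"))
        items = {i.get("id"): i.get("href")
                 for i in opf.find("opf:manifest", NS)}
        return [items[ref.get("idref")]
                for ref in opf.find("opf:spine", NS)]

# Each returned href (an XHTML file) would then be ingested into the
# authoring environment as a component XML file.
print(list_components("textbook.epub"))
```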
Once the raw course is in the authoring environment, you can easily update the content and change the scope and sequence if you want, or you can leave it as is and let the instructors deal with it. Slide. After you've got the basic course updated, you start layering in different assets. So if you want to start adding in different assessments, for example, these can be as complex or as simple as you like.
Again, this is all within one single platform, so it really streamlines the ability to add assessments to a course. Slide. Once the assessments are in, you continue to add various assets: videos, interactivities, or any supplemental material you want to add. Again, you do that all within the same platform. Slide. And lastly, it's ready for distribution.
So at the click of a button, these are the out-of-the-box export formats: you can export via SCORM, Common Cartridge, or LTI to whatever you're looking to distribute the content to. Slide. In summary, what you're doing with Instant Course is taking an existing ebook and quickly transforming it into a course. Ingesting it into an authoring environment allows you to build the course quickly, get feedback from instructors and students very quickly, iterate on the course going forward, and keep improving it, if you will.
At the end of the day, you end up with a much more engaging learning experience for your students while creating new revenue opportunities for publishers. And that's it. I think I'm right at the seven-minute mark. If you'd like to learn more, please scan the QR code, and I'm happy to fill in any additional details you're looking for. Thank you.
Thanks very much to Doug for this overview of Instant Course. Our next speaker is Mary Sweeney from Researcher. Mary? Hi, everyone. The theme of my talk today is that content discovery has been too complicated for too long. The power of scientific collaboration and communication has never been more evident than during the pandemic.
The last two years have really highlighted the importance of fast and efficient means of distributing knowledge to solve the complex global problems of today. But there are two workflow obstacles which are slowing down the pace of progress. Next slide, please. Firstly, discovery: over 3 million research papers are published every year, and as people's lives get busier, it's increasingly difficult for academics and researchers
to keep on top of all the studies being published in their field. Secondly, discussion: COVID almost destroyed in-person conferences, and hybrid is now the norm. Over 69% of researchers that we surveyed think that the lack of networking opportunities and not having a platform to discuss their research is an issue when attending conferences online.
So how do we set about solving these obstacles? Next slide, please. Researcher is an app built by researchers, for researchers. It was launched at the back end of 2017 and has grown organically; we now have 2.5 million users, and the app aggregates over 22,000 content sources: journals, preprints, conference proceedings, and industry research from all academic areas.
We also include research produced by businesses, and you can access different media types: videos, podcasts, and blogs. So we make staying up to date easy. How do researchers use the app? Next slide, please. Users can set themselves up to follow journals or authors, or they can set up keyword filters including any combination of keywords.
It's a bit like following people on Instagram or Twitter: when a new, relevant article has been published, it appears in their feed. Users can also follow companies or universities, or choose to only follow open access journals or preprints. Users can bookmark relevant papers, send them to colleagues and peers, and even share them directly on social media. We also integrate with reference managers.
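A toy sketch of how such keyword-filter feeds could work is below; the matching logic and data model are invented for illustration, not Researcher's actual code.

```python
# Toy feed matcher: a paper appears in a user's feed if it comes from a
# followed journal or matches one of their keyword combinations.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    journal: str

@dataclass
class UserFilters:
    journals: set[str]
    keyword_sets: list[set[str]]   # each set is one AND-combination of keywords

    def matches(self, paper: Paper) -> bool:
        if paper.journal in self.journals:
            return True
        words = set(paper.title.lower().split())
        return any(ks <= words for ks in self.keyword_sets)

filters = UserFilters(journals={"Nature"},
                      keyword_sets=[{"crispr", "delivery"}])
new_papers = [Paper("In vivo CRISPR delivery vectors", "Cell")]
feed = [p for p in new_papers if filters.matches(p)]
print(feed)   # matched on the keyword combination, not the journal
```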
We don't host any full-text articles. All traffic is driven back to the publisher to read the full text, and if the text is behind a paywall, that will show up at the publisher's end as a denial. Users can, however, get through to full-text versions of paywalled content, if available, by using their institutional access credentials. Next slide, please.
Researcher not only helps researchers to discover content; it also helps publishers to disseminate content and build relationships with their core communities. In the summer of 2020, we opened up the app to publishers who wanted to advertise with us and reach our core community of users. And we now offer a suite of publisher solutions, including in-app advertising, full webinar services, animation, podcast and video creation, event sponsorship, lead generation, survey creation, and more.
So please do get in touch if you would like to learn more about any of these services. Actually, can you go back a slide, please? To help publishers get even more out of the app, we launched the Researcher Profile Manager. The Profile Manager is an all-in-one content management, analytics, and advertising platform which enables publishers to connect directly with their followers on the app.
The Profile Manager gives content owners the ability to manage their presence on Researcher. They can view detailed analytics about how users are engaging with their content, and they can create and manage ad campaigns and post custom content into the feeds of Researcher users who follow them. Some journals have followings in the tens of thousands.
So if you are a publisher or an author service provider and are not using the platform yet, it's well worth setting up an account, as we allow all profiles five free posts per month. Publishers who are currently using the Profile Manager are using it to recruit editorial board members, promote events, boost top-performing papers, and update their followers with the latest news about, say, transformative agreements, calls for thematic series or special issues, and their blogs and white papers.
Also from the Profile Manager, content owners can create and schedule live events, which brings me on to the second workflow problem that we've set out to solve: discussion. Researcher Live is Researcher's latest mobile feature. It's a light and versatile platform for live events whose format combines an informal conference session with a podcast.
Researcher Live was created in response to the rising demand for better-connected, freely available academic communication where content and information are at the forefront. It is audio only; speakers can add slides to their presentation and engage with their listeners via an in-platform Q&A function. Speakers can choose the date and time of their event, as well as the format of their talk:
whether it's a single lecture, a discussion between several academics, a panel Q&A, or something else. Publishers who create their own events via their Researcher profile can send out scheduled posts to their target audience and trigger push notifications when events are about to start. Researcher Live also allows publishers to promote their journals, create brand awareness of new journals, invite submissions, and directly interact with potential authors to promote author services.
It also allows them to offer an opportunity to their authors, encouraging them to promote their published papers on Researcher Live and generate engagement and impact. Researcher Live enhances the dissemination of research; it connects all aspects of academia, helps researchers gain skills to interact with publishers and researchers globally, and helps to grow the impact of their work. And with every session made discoverable on Researcher, on the website and in the app, Researcher Live is growing a wealth of resources for everyone to access.
Next slide, please. Researcher's editorial team provides quality assurance for sessions hosted on the platform. They also commission exciting series on hot topics. These series are then offered for sponsorship, putting brands at the heart of conversations that move science forward. So we ran a live series on bioconjugation in January, which was sponsored by a biotech company.
And as you can see, it had some really great results. So next slide, please. I think I've just run out of time, but if you'd like to learn more about how to get the most out of our Profile Manager, or you would like to run some events with us on Researcher Live, or talk about a potential partnership, then please do get in touch. Thank you for listening.
Thank you very much, Mary, for telling us about Researcher and, more recently, Researcher Live. Our next speaker is John Challis from Hum. John, I'll pass it on to you now. Thank you. So if you just advance to the next slide, please. For most scholarly publishers, data is the missing step in digital transformation.
Digital transformation in publishing began with the development of the internet and the digitization of content. In particular, publishers need to understand the intersection of people and content, and Hum is how publishers get there. It brings a new class of software to the publishing world, something called a customer data platform.
We've taken this technology, which was developed for the business-to-consumer e-commerce world, and essentially fine-tuned it specifically for publishers. Our ambitions are as big as you can get: to fuel that next transformative stage in publishing by providing you with the tools and insights you need to become more data driven and to use first-party data, that's data you own, to develop deep, genuine, and beneficial relationships with your audiences, whether that's a reader, author, reviewer, librarian, member, learner, what have you.
So, I talked about customer data platforms. Sorry, next slide, please. Customer data platforms are a form of packaged software that collects data from all sources, and this is important because it gives you a 360-degree view of your audience member in one spot.
It builds complete customer profiles and then actions this data by pushing it back out into other parts of your technology stack. And it does all of this in real time. We'll talk about a couple of use cases in a second. Next slide, please. The first thing you need to know is that for scholarly publishers there are four first-class objects.
Every time someone does something on your site, if you're running Hum, four profiles are updated. First is the profile of the person, whether they're a reader, reviewer, or subscriber; whether they're anonymous doesn't really matter, that is, it doesn't matter if you know who they are or not. Every time a person does an activity, that person's profile is updated. And if there is intelligence about what institution they may be at, perhaps from an IP address range through some of our integration partners, or because they've come in through an institutional system,
you'll also know information about that institution. In addition, the piece of content that they've looked at has a profile, and that profile is also updated. It doesn't matter what kind of content it is: it could be a journal article, it could be a blog post, it could be an email that you send out, a mass email or a newsletter; it could be an event or a course.
So, content in the broadest possible way that publishers tend to think of content. Now, that content is about something, and the fourth profile that gets updated is the topic that it's about. Earlier we heard from Atypon about their tool for taxonomic identification of content. Hum
has a tool as well; it's called Q-Bert. We train it for every customer, so it is trained on each client's specific disciplinary topics, and then the tool crawls the content and assigns topics to every piece of content as it's created, and to every piece of content in the backlist. This is in addition, I should say, to any taxonomy you may already have; and if you use things like author-supplied keywords, those are kept as well.
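Schematically, the four-profiles-per-event pattern just described might look like the following sketch. Hum's internal data model is not public, so the structures and field names here are illustrative only.

```python
# Schematic: one activity updates four first-class profiles -- the
# person, the institution, the content item, and its topics.
from collections import defaultdict

profiles = {
    "person": defaultdict(list),
    "institution": defaultdict(list),
    "content": defaultdict(list),
    "topic": defaultdict(list),
}

def record_event(person_id, institution_id, content_id, topics, action):
    """Append one activity to all four profile types."""
    event = {"action": action, "content": content_id}
    profiles["person"][person_id].append(event)
    if institution_id:                  # institution may be unknown/anonymous
        profiles["institution"][institution_id].append(event)
    profiles["content"][content_id].append({"action": action, "by": person_id})
    for topic in topics:                # topics come from content tagging
        profiles["topic"][topic].append(event)

record_event("u42", "uni-01", "article-7", ["genomics"], action="read")
print(profiles["topic"]["genomics"])
```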
Next slide, please. At the heart of Hum is the ability to use AI without the need for a data scientist. Many smaller scholarly publishers don't have data scientists on staff, and providing them tools that require one isn't really helpful. So this is, for us, about democratizing AI. I list here some of the capabilities: we talked about Q-Bert already as our natural language processing tool to tag content.
But we have other things as well. We have engagement scoring, which allows us to understand how content is performing, and how it's performing with individual segments. We use attribution modeling, as part of fractional attribution, for propensity modeling: when somebody has done something like renew a subscription, we look back to see what the five things they did before that were, and whether we see a pattern in which things are effective at causing people to renew a subscription, for example.
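A toy version of that look-back idea, counting which actions most often precede a conversion; real fractional-attribution models are more sophisticated, and the event names here are invented.

```python
# Count the actions that most often appear in the window of n events
# preceding a conversion -- a crude stand-in for attribution modeling.
from collections import Counter

def pre_conversion_patterns(event_log, conversion="renew_subscription", n=5):
    """Tally which actions precede each conversion event."""
    counts = Counter()
    for i, event in enumerate(event_log):
        if event == conversion:
            counts.update(event_log[max(0, i - n):i])   # the n prior actions
    return counts

log = ["read_article", "open_newsletter", "attend_webinar",
       "read_article", "visit_pricing", "renew_subscription"]
print(pre_conversion_patterns(log).most_common(3))
```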
And then finally, lookalike modeling. If anybody has a marketing background and is familiar with how you use Facebook or Twitter or LinkedIn lookalike campaigns, taking advantage of their trove of first-party data, you'll understand what this is as well. Next slide, please. So what makes Hum special? First are the capabilities: the content intelligence, the ability to segment,
the ability to offer personalization, and the ability to explore your audience and your content in real time. I'll show you a couple of live slides of that. Then our approach: we are not built for data scientists, we are highly cost-effective, and because a lot of publishers aren't set up to support this sort of thing, we handle all systems integrations if clients prefer that.
All right. Next slide, please. And if you could just hit it, it should play a video. Here we go. So this is an example of our out-of-the-box content intelligence tools, which allow you to do things like look at content that's underperforming. That's content where, when people land on it, they read it and deeply engage.
We measure not just whether somebody lands on something, but how far into it they read. Underperforming content is content where everybody who gets there likes it, but not many people get there. These are the sorts of tools that publishers are interested in. Next slide, please. And again, thank you.
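As a sketch, the underperforming-content signal just described (deep engagement, low traffic) could be expressed like this; the thresholds and statistics are invented for the example.

```python
# Flag content with high average read depth but low traffic --
# well-liked by those who find it, but found by too few.
def find_underperforming(content_stats, min_depth=0.75, max_visits=100):
    """Return content IDs with deep engagement and low visit counts."""
    return [cid for cid, s in content_stats.items()
            if s["avg_read_depth"] >= min_depth and s["visits"] <= max_visits]

stats = {
    "article-1": {"visits": 5000, "avg_read_depth": 0.30},   # popular, skimmed
    "article-2": {"visits": 60,   "avg_read_depth": 0.90},   # loved, but unseen
}
print(find_underperforming(stats))   # ['article-2']
```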
This is an example of a personal profile. For everybody who interacts with Hum, whether they're identified or not, we can see all the properties: the digital sites they have visited, and all of their demographic and contact information, collected in one place. And if you choose, you can go in and see the full activity log of every single thing they've done on the site. Next slide, please. And then finally, this is Audience Explorer.
This is where you can go in and build a segment, and when I say you don't need a data scientist, here's full Boolean segment creation, live. You can see I've got an audience of about 1.1 million people, and 2,743 of them fit the criteria that I set. As I add AND/OR criteria, it tells me how many people fit into the segment. And if I want to create that segment, you click Create Segment, you give it a name, and you're done.
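A small sketch of that Boolean segment building, with a live count of matching profiles; the field names and audience data are illustrative.

```python
# Build a segment from AND/OR criteria over audience profiles and
# report how many people match, as in the Audience Explorer demo.
AUDIENCE = [
    {"id": 1, "role": "reader",   "country": "US", "opened_newsletter": True},
    {"id": 2, "role": "reviewer", "country": "DE", "opened_newsletter": False},
]

def build_segment(audience, all_of=(), any_of=()):
    """all_of: criteria ANDed together; any_of: criteria ORed together."""
    def ok(person):
        return (all(person.get(k) == v for k, v in all_of)
                and (not any_of or any(person.get(k) == v for k, v in any_of)))
    return [p for p in audience if ok(p)]

segment = build_segment(AUDIENCE,
                        all_of=[("opened_newsletter", True)],
                        any_of=[("country", "US"), ("country", "GB")])
print(len(segment), "of", len(AUDIENCE), "fit the criteria")
```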
Next slide, please. Finally, personalization. There is no reason why publishers couldn't do what Amazon or Netflix does and highly personalize what people see. Our widgets can run on any of your sites, whether they're event sites, CMS sites, whatnot, and allow you to make personal recommendations to people based on what they've done, not what they've told you
they're interested in: what they've actually done on your digital properties. What else are other people who are looking at this also looking at? More like this, others with similar interests, topic-based browsing, and so on. Next slide, please. So this is my summary: Hum makes publishers data driven.
It offers accessible, advanced AI, and it lets you turn your first-party data into a monetizable and strategic asset. Next slide. We'd love to continue the conversation; here are some ways to do it, with links to a couple of resources for you, and there's my email address. I'd love to chat.
Thank you. Great, thanks, John, thanks very much. Our final presenter today is Dan Valen from figshare. Dan, on to you. Thanks, Julia. Hi, my name is Dan Valen, and I am the head of strategic development at figshare. For those unfamiliar with figshare, we are a repository publishing solution for storing, accessing, and citing digital research outputs.
And we loosely define that as everything from data to preprints to video media to software and code and more; we accept all file types is kind of the tagline there. We're actually part of the Digital Science family of tools, so if you've heard of Altmetric or Dimensions or Ripeta, for example, those are sister companies within the Digital Science portfolio.
And at figshare, we've been working to provide research data management and preprint solutions to a number of different stakeholders across the research space, and that includes everything from academic publishers and societies to research institutions, colleges and universities, to funders, government agencies, biopharmaceutical companies, and the like. So this has really helped us to gain a global perspective on the needs of researchers, authors, and those research facilitators.
And in line with the surge in open access over the past few years, we've also seen a huge uptick in the publishing of research data, as well as a lot of preprint activity during this time. And so that's what I'm here to talk about today. Next slide, please. One of the things we've observed as we celebrate our 10th year as a company, and because it's our 10th year I'm contractually obliged to include that in every single presentation this year, is the rise in data publishing and in data policies from publishers and research funders.
And so the community focus has shifted as a result. As data publishing has become more commonplace, the push isn't necessarily to get researchers to publish their data; it's to get them to publish good, useful, FAIR data that's findable, accessible, interoperable, and reusable. I'll use that term a few times; visit GO FAIR if you want more info on that community standard, too. So, that type of data.
And so in talking to our own publisher customers and partners, we're hearing that the unspoken reality is that their editorial staff are overwhelmed by the dramatic rise in submissions. There's so much more that editorial teams now have to check and filter in these submissions before they even start doing their own specialized editorial work, and that's just for the publications themselves.
So when you factor in research data, it potentially creates more work for all parties involved. And that's where figshare and figshare curation services come in. Next slide, please. Figshare+ is one of the solutions I'm going to talk about today. It is not a streaming service, but it is a way to publish big data.
And so figshare+ was born out of a need. Just the next slide. Yeah, sorry, one more. Yeah, we were just on that slide. OK, there you go. Apologies, Dan? No, that's OK. Um, yeah.
Figshare+ was born out of a need that we observed based on support requests we ourselves were receiving at support@figshare.com, and that is: for a number of research disciplines, and really for generalist repository users, there isn't a publishing solution for what we loosely define as big data, you know, data sets that are 100 to 300 GB or more.
And so individual researchers were reaching out, looking for a way to publish data that their funder's or publisher's journal policy was perhaps requiring, and it provided us at figshare an opportunity to explore the best ways to develop a solution and service for end users. Next slide, please. And so we launched figshare+.
Figshare+ now provides a way for researchers to publish big data sets, addressing an explicit need in scholarly communication. And in creating figshare+, we wanted to ensure a way for researchers to publish their data, just like the three R's, in a reusable, replicable, and reproducible way. The thinking here is that researchers wouldn't be paying to publish a data set unless it was mandated by a publisher or funder, and thus it would more likely be associated with a final publication or journal article.
And so in order to make published research as FAIR as possible, we decided to dip our toes into curation and provide that as part of the service and experience of figshare+. Next slide, please. Building off our successful pilot with the NIH, where curation of all data sets submitted to the NIH figshare portal was a feature, and with the help of the community, we created a number of comprehensive guides around best practices for publishing data in a FAIR way.
Really, one of the flagship tools among figshare's enterprise tools is the figshare portal. There's a little blurb about it here; this is what we actually spin up for enterprise clients, and it's ultimately what we created for figshare+. You can view it at plus.figshare.com. And with figshare+, we really wanted to ensure that all of the big data on figshare had enough metadata to actually be reusable, and to do so in a way that was expected and defined by community standards.
And so with every item submitted to figshare+, we actually work with the author to make sure that their data is well described, that their files are grouped and organized in the most understandable way possible, and that they adhere to all the other best practices of the wider data publishing community. And since we soft-launched figshare+ in October of 2021, we've seen a steady increase in submissions month over month.
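As a rough illustration of the completeness checks such a curation pass covers, here is a toy checker; figshare's curation is a human-led service, and the required fields below are assumptions, not figshare's actual checklist.

```python
# Toy metadata-completeness check, in the spirit of the curation pass
# described above: flag what a curator would hand back to the author.
REQUIRED_FIELDS = ["title", "description", "authors", "license", "keywords"]

def curation_report(record: dict) -> list[str]:
    """List missing metadata and file problems for one submission."""
    issues = [f"missing or empty: {f}" for f in REQUIRED_FIELDS
              if not record.get(f)]
    if not record.get("files"):
        issues.append("no files attached")
    return issues

submission = {"title": "Sensor readings 2021", "authors": ["A. Researcher"],
              "files": ["readings.csv"]}
print(curation_report(submission))
# ['missing or empty: description', 'missing or empty: license',
#  'missing or empty: keywords']
```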
And this has also led us to think about ways to scale the service beyond just the big data space. So next slide, please. Yeah, and that's how we landed on the figshare curation service for publishers. The figshare curation service provides a way for users of figshare's enterprise tools, such as figshare for publishers, to outsource the checking of submissions for completeness.
And it really ensures that the research showcased under a given brand or journal or organization is, again, that word, as FAIR as possible. You could even think about this as an extension of author services. You know, as we talked about earlier, there's never been more of a need to provide valuable services to your authors as they publish in a given journal.
figshare curation services provides a hands-on consultation around data submission, ensuring that the metadata is complete, that it's linked to the appropriate funding agencies for compliance, and that it's linked to the appropriate journal article. Next and last slide, please. Yeah, so that's it; that was a quick primer on figshare+ and figshare curation services.
If you have any questions or would like to get in touch, you can use the QR code or link here, or reach out at info@figshare.com. Thank you, everyone. Great. Dan, thank you very much for this presentation. And thank you to all of you for participating in today's Innovation Showcase. We hope that you enjoyed hearing about the latest in publishing technology and services.
As a reminder, you will be receiving a follow-up email, and if you have any additional questions for the presenters, please use the QR code or the link provided in that email and they will be happy to follow up. Please also visit the SSP website for information on upcoming programs. Our next event is the Ask the Experts seminar on publishing ethics, to be held on July 28th, and do not forget to register for our annual New Directions seminar, hosted both online and in person in Washington, DC, on September 21st and 22nd.
And before we conclude today, I want to appeal to all of you to donate to SSP's Generations Fund to support future generations of scholarly publishing professionals. This concludes our session today. Thank you very much for your attention and for joining us, and have a great rest of your week.