Name:
PS24 The Trough of AI Disillusionment: What have we learned?
Description:
PS24 The Trough of AI Disillusionment: What have we learned?
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/833d0e73-a238-4320-8a3c-e6566124bf8c/thumbnails/833d0e73-a238-4320-8a3c-e6566124bf8c.png
Duration:
T00H39M29S
Embed URL:
https://stream.cadmore.media/player/833d0e73-a238-4320-8a3c-e6566124bf8c
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/833d0e73-a238-4320-8a3c-e6566124bf8c/PS24 Trough of Disillusionment.mov?sv=2019-02-02&sr=c&sig=6WUmCNfyzZm0Q05SQQYtY%2B7HSpPOe2LDRU44%2FYE83Nc%3D&st=2025-01-29T03%3A02%3A09Z&se=2025-01-29T05%3A07%3A09Z&sp=r
Upload Date:
2025-01-29T03:07:09.3771568Z
Transcript:
Language: EN.
Segment:0 .
EMILIE DELQUIE: All right. I think we're good. We have a good contingent back. Welcome back. So, final stretch of the day. For this one, we're doing a panel. We have four specialists who have been working in the space of AI and doing all sorts of things for the last two-plus years, and in some cases three or four, because AI really didn't start when ChatGPT made the news.
EMILIE DELQUIE: And so today, it's going to be a conversation between the five of us, just to give you a feel for some of the things that our four panelists have done over the years, what they've learned along the way, and some of the things they tried that didn't work out. We're hoping to share some tips that you can bring back to the office: some ideas on things you don't need to try because somebody else already did, and others that are keeping the panel very optimistic about the opportunities.
EMILIE DELQUIE: I think we're going to start with a poll to get a feel for where everybody's at a little bit on the AI journey. So if you don't mind opening the app, there's a quick question and I need to grab it. So specifically, exactly that. Where are you on the journey? Are you observing, just kind of getting oriented?
EMILIE DELQUIE: Are you considering applications? Are you experimenting, prototyping? All right. People are seeing it. So if you don't mind just going into the app and giving us a feel for where everybody's at. All right. This is a very active app.
EMILIE DELQUIE: It's like, OK, no. Every time we almost get a consensus, no. A winner, maybe? OK. All right. It appears that experimenting, prototyping is in the lead with 40% and followed by ideating and considering applications.
EMILIE DELQUIE: All right. So hopefully this session gives you a few more ideas that you may not have had until today or that you didn't think were possible. So let's spend a little bit of time with our panelists, getting to know who they are. And so yeah, the first question is, tell us a little bit about you and how you've been involved with AI and the kind of projects you've been working on.
KATE EISENBERG: OK. Let's start with me. I'm Kate Eisenberg. I'm a family physician, and I also have a PhD in epidemiology. So I like working at the intersection of data, technology, and health care. I'm with EBSCO, and I specifically work with DynaMed and Dynamic Health, which are their clinical reference products for health-care professionals to use, largely at the point of care when they're seeing patients.
KATE EISENBERG: And so for the past 18 months, I've been one of the leads on our project called Dyna AI, applying generative AI to our reference content using a retrieval-augmented generation approach, which I know is familiar to a lot of people here, to reduce hallucinations and keep the question-and-answer service we're providing within our own body of content. So I've been involved from the inception of the project, which actually started with building out a policy and principles, because we had to get ourselves comfortable with moving forward.
KATE EISENBERG: So we had to decide on our own approach to designing a clinical evaluation framework so that we could feel comfortable in what we were building from a clinical perspective all the way through to standing up a beta product. And we do have a commercially available product today, as of just this summer. So it's been a really rapid evolution. The technology has evolved along with us, having to understand how those policy and principles apply in practice as we go has been quite an interesting experience as well.
KATE EISENBERG: And I'm fortunate to have worked with a team that has been very supportive of the idea that we're really doing this, we're really going to build a safe and responsible foundation, appropriately so, for working in health care with this product we're building, which we think has real value in helping our users access information more quickly, personally, and effectively.
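A minimal sketch of the retrieval-augmented generation pattern described above, assuming a toy embed() function and an invented two-entry corpus; none of the names or content reflect EBSCO's actual implementation.

```python
# Toy retrieval-augmented generation (RAG) loop: retrieve the most relevant
# passages from your own corpus, then constrain the model to answer only from
# them. embed() is a bag-of-words stand-in for a real embedding model.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, corpus, k=2):
    q = embed(question)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def build_prompt(question, passages):
    context = "\n".join(f"- {p}" for p in passages)
    return ("Answer using ONLY the reference passages below. "
            "If the answer is not in them, say you don't know.\n"
            f"Passages:\n{context}\n\nQuestion: {question}")

corpus = [
    "Reference entry: first-line treatment for condition X is therapy A.",
    "Reference entry: recommended follow-up interval for screening Y is two years.",
]
question = "What is the first-line treatment for condition X?"
print(build_prompt(question, retrieve(question, corpus)))
# The assembled prompt would then be sent to a chat model of your choice.
```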
DYLAN DIGIOIA: So I'm Dylan DiGioia. I'm here with Hum. I'm our Director of Engineering. Part of what I do at Hum is I lead our research and development efforts. So a lot of that has been with our Alchemist AI product. I've been with Hum about three years. I've been working in machine learning and AI for quite a bit longer than that. But we've had AI in our system since long before ChatGPT broke the internet.
DYLAN DIGIOIA: And we had been experimenting with small language models, building up to our Lodestone model, which is an in-house small language model for embeddings that helps us understand your customers better and allows you to market and do sales against your content more easily and more effectively. Alongside all of that, we've also done a lot with large language models, including, as anyone who saw Jake Minturn present last night knows, a topic taxonomy that we can build, structure, and organize directly from your content, as well as tagging all of your content with those topics and tags, regardless of where it's coming from or how we get it, allowing for better analytics and a better understanding of your corpus.
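A rough sketch of nearest-neighbor topic tagging with embeddings, in the spirit of what is described above; embed() is a toy stand-in rather than Hum's Lodestone model, and the document and topic labels are invented for illustration.

```python
# Nearest-neighbor topic tagging: embed each candidate topic and each document,
# then keep the topics whose vectors sit closest to the document. embed() is a
# toy bag-of-words stand-in; a real embedding model would replace it.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def tag(document, topics, top_k=2):
    d = embed(document)
    return sorted(topics, key=lambda t: cosine(d, embed(t)), reverse=True)[:top_k]

print(tag("New open access study on diabetes care and patient outcomes",
          ["diabetes care", "open access publishing", "oncology", "peer review policy"]))
```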
DYLAN DIGIOIA: It's been a crazy ride, I think, for all of us in this industry and elsewhere. I remember the days that everybody started looking at ChatGPT, and I was getting phone calls, have you ever heard of these LLM things? And I was like, yeah, yeah, I have a little bit here and there. And it's just been a really fun place to be at a startup working with the innovative people and technologies in such an awesome industry.
DYLAN DIGIOIA: Working for the common good. We talked a lot about purpose earlier in a lot of the panels, and it's cool to bring purpose and technology together in the way that a lot of us have been able to lately. So that's what I've been most excited about, and that's what gets me up in the morning.
JEREMY LITTLE: All right, I guess I'm next. My name is Jeremy Little. I am the Technical Lead for the AI team at Silverchair. So our team is really focused on bringing applied AI into our existing platform, which has a number of applications across our client content. Lots of integrating models the way Hum has been doing. And we've really been learning a lot along the way. So I've been in this position for about a year and a half now. And we have a couple of products that are currently sort of in betas.
JEREMY LITTLE: We have some staggered launches of those products throughout the next year, which we're really excited to get into the market and see how people react. But beyond just those products, I think our team's mindset has really been around experimentation, collaboration, and really just learning. I think we have a really vibrant community with our client base that is really engaged with the efforts that we've been making.
JEREMY LITTLE: And it's been really cool to see all the enthusiasm around that. So this mindset has really resulted in one of our prouder, I guess, internal testing suites which we call the Playground. And we have lots of playground-related terminology, which may sneak into this discussion later. And the Playground is really about allowing our client ecosystem to experiment with AI tools, models, and data analysis over top of their content. So it's sort of a siloed environment where we can really quickly launch beta products and get expert feedback on it.
JEREMY LITTLE: And it really encapsulates what we're trying to do, which is learn along the way and make valuable AI accessible to the publishing industry.
MOHAMED ELSHENAWY: So hello, everyone. My name is Mohamed Elshenawy. I am CTO and Co-founder of a startup called Sinai AI. We started it about six months ago. Basically, we are using AI to automate book derivatives, including translations, audio, and summaries. So we are trying to define a new experience for e-book readers. We noticed that e-book readers basically just display the content of the book.
MOHAMED ELSHENAWY: Right now, we are looking into how to bring in intelligence and what we can do beyond just displaying the content. And we released our MVP, which is the good news for the startup right now. Before that, I worked as Head of Machine Learning and AI at another startup called adam.ai. It's a meeting management platform. I released a feature there for automatic note-taking.
MOHAMED ELSHENAWY: Basically, it records the meeting, transcribes it, and then generates insights, including decisions, actions, risks, and notes. We also worked on a RAG agent. Basically, you can ask the RAG agent about your previous meetings: when did we take these actions? When was the last time I met this person? And so on.
MOHAMED ELSHENAWY: So I have been in AI for a while. I did my PhD at the University of Toronto. And I'm excited to be here.
EMILIE DELQUIE: Excellent. Thank you all very much. Let's see. Jeremy and Dylan. Could you give us an example-- let me redo it. Kate and Dylan. Sorry. Can you give me an example of something that you have tried-- just one example-- something that you've tried along the journey as you were developing your products and that has sort of worked really, really well.
EMILIE DELQUIE: Something positive, that has worked really well?
DYLAN DIGIOIA: Yeah, I can start. That has worked really well, I think. So I'm going to give a couple of things and be a little provocative, but I'll try to be quick enough so that Kate can jump in too. One thing I will say that the technology is what the technology is, and it gets better over time. But the way that we approach the projects and the way that we approach the products is just as important, maybe more important, than the technology itself.
DYLAN DIGIOIA: So at Hum, we think a lot about the maturity and where we are at with every project or product. So are we ideating? Are we just thinking about what could be and the possibilities that we're going after and how can we apply some rigor to, is this product really going to work for us? Is the technological feasibility there from just first principles standpoint?
DYLAN DIGIOIA: Is the market there? And all of that is just ideation. And then the very next thing you do is not jump straight all the way to the product, which is what some of us think of when we're thinking, this stuff works so well. Like, of course, you can just jump right to it. You have to go through experimentation and evaluation of your ideas.
DYLAN DIGIOIA: Confirm or deny your hypotheses. You have to go through usually a lot of prototyping and then an alpha release and maybe a beta release and tons of feedback and incorporating all of that and passing it all back. And only then, maybe can you get to a real commercially viable product that you're ready to put out there and ship. And that's not new or different, really, than all data projects of the past.
DYLAN DIGIOIA: But people-- what we want to prevent as people working in this space is seeing the shiny thing because AI is so exciting and thinking, oh, I can skip all those steps. And then asking why do the LLMs always fail? Every project I do always fails. And that's how you end up in the trough of disillusionment. But it's usually because you've skipped a lot of steps. Or maybe you didn't skip those steps, but you weren't really prepared for the journey.
DYLAN DIGIOIA: So that's one tactic, just kind of keeping yourself there. I'll also give a really specific thing for this industry, which is metadata extraction from content and from other tech sources works incredibly well. That's been highly successful for Hum. We found lots of topic modeling works really well, lots of named entity extraction, which 5, 10 years ago was very difficult. Tell me who wrote this, the name of the author.
DYLAN DIGIOIA: Tell me who the funder was. All of that has gotten just so much better with LLMs. And there's lots of avenues to potentially monetize that or just find better uses, better user experiences and stuff, through that deeper and more enriched structured data.
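A hedged sketch of the kind of LLM-based metadata extraction described here (authors, funders, topics); call_llm() is a hypothetical placeholder for whichever chat-completion API you use, and the JSON schema is illustrative, not a real production pipeline.

```python
# Sketch of structured metadata extraction with an LLM: ask for strict JSON so
# the result can flow into analytics or tagging pipelines. call_llm() is a
# hypothetical placeholder; swap in your provider's chat-completion call.
import json

EXTRACTION_PROMPT = """Extract metadata from the article text below.
Return strict JSON with keys: "title", "authors" (list), "funders" (list), "topics" (list).
Use empty lists or strings for anything not stated.

Article text:
{article_text}
"""

def call_llm(prompt):
    raise NotImplementedError("placeholder: call your chat-completion API here")

def extract_metadata(article_text):
    raw = call_llm(EXTRACTION_PROMPT.format(article_text=article_text))
    return json.loads(raw)  # validate against your own schema before storing
```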
KATE EISENBERG: Yeah, I'm going to echo that the success we've had has been based on the approach. And because this was so new, I think, you don't always sit down to write a formal policy before you start a project. So being really disciplined about understanding where you are is important, and not getting ahead of ourselves. Maybe you have a roadmap in mind, but you really do need to ask whether it makes sense to take the next step, because the technology has been evolving so quickly that what you understand about how the product works today may be different than how it's going to be in a week or two.
KATE EISENBERG: So I've found we've had to remain really adaptable in that way. I think this is specific to our context, but it probably does generalize: we have really taken the cross-functional team approach very seriously. We've very deeply embedded our clinical folks onto the AI team building our product, and then really branched out from there and been very quick to bring in additional clinical experts, because having those subject matter experts is what we're really good at.
KATE EISENBERG: And it's been a really deep investment in having that qualitative people power involved in the quantitative kind of technology side. But for us, it's been very successful to just have really deep and well-integrated feedback loops from our clinical folks assessing and understanding where we are with the product.
EMILIE DELQUIE: Thank you. Jeremy and Mohamed, you get the opposite question. You've both been experimenting a lot, and there's been a lot of learning along the way, a lot of trial and error. Can you share an example of something that you tried and thought for sure was going to work but didn't? And how quickly did you decide to just move on?
JEREMY LITTLE: Yeah, that's a valuable question, especially given the topic that we have here. So a lot of my work now is sort of at the intersection between tech and product. So I'm actually going to cheat and give you an answer for both of those two things. On the tech side of things, I think when we see new paradigm shifts and we see groundbreaking technology like LLMs and just the open sourcing of many models, I think there tends to be a number of startups, new products and companies, that just blow up out of nowhere and get super high valuations and gain all this hype.
JEREMY LITTLE: And it's really difficult for us as a technology provider to sift through all of those new innovations and all those new companies without falling into the honey trap that is this amazing new feature that maybe actually isn't really all the way there. So I think along the way, we've had to sift through a number of projects and libraries and codebases that don't actually do what they say they would do and haven't really scaled the way that we thought they would.
JEREMY LITTLE: So looking back on it now, I don't think we could have made decisions necessarily better. But we do know what the right decisions would have been then. So maybe that's what I would have done differently is just know what I know now about the actual tech, right? If only. So that's from the tech side. I think from the product side, I'm actually going to say a lot of what you guys just said, which is I think early on, we were just so enthusiastic, like all the VCs out there, about all the AI opportunities that we just jumped into productization right away.
JEREMY LITTLE: And we really didn't fully understand some of it; the tech was mostly leading the way, but it was really about the consumer base and what the market needed. I think we were probably a little too enthusiastic about the first idea that came along and didn't do our due diligence on market research or on experimentation and beta testing before actually going down those paths. I don't think this really hurt us in the long run, because this problem is actually what led to our development of the Playground, which has been so valuable for solving exactly this problem and is really our new launch pad for betas and things like that.
JEREMY LITTLE: So I think there was some silver lining in there, but I think those are probably the two sides of the coin of what we would have done differently.
MOHAMED ELSHENAWY: For us, I would add one more thing. LLMs are generally very easy to prototype with. I remember when they asked me to do this RAG agent, I was able to show them something the next week; in the next meeting I was able to show them something like, here we go, this is a meeting, you can ask about the meeting and get responses and so on. But when we started working on the actual product, we were hit by different challenges.
MOHAMED ELSHENAWY: One of them is data quality. And this is a very important topic when you are working with AI. Basically, you want to make sure that the data you have matches the strategies you have as well. So data quality is an issue. We found that many people who were using the system were not entering their meeting notes in the way we expected.
MOHAMED ELSHENAWY: And that results in different output than we expect from the agent itself. We also had to deal a lot with privacy issues and sign different agreements just to be able to use the data for experimentation. We had to use our own data, from meetings generated by our team, to validate the idea and so on. So those are the kinds of things you don't-- as you said, Jeremy, people are very enthusiastic about it in the beginning, and you want to see something fast, but they don't think through all the challenges you have to go through, which are basically: is your data good enough to add value beyond the LLM?
MOHAMED ELSHENAWY: Did you solve all the problems related to data privacy and so on, so that you can use this data to actually generate the value that you promise? Those are the types of things we learned on our journey.
EMILIE DELQUIE: Excellent. Kate, you have the benefit of having a product now live in the market. It's still early days. Based on what you see and the user behavior that you see-- and again, you've done a lot of market research-- but given what you're seeing from end users today, what are some of the lessons learned, and how does it differ from what you were observing during your testing?
EMILIE DELQUIE: Can you share some of the lessons learned for the audience to help anticipate what can happen when the product is actually live?
KATE EISENBERG: Yeah, it's a great question, and it is still early days. We've been collecting data as we go on user perceptions of AI as well, and we're seeing such a broad range of perceptions, and similarly a broad range in level of interest and level of uptake. It's early for everybody, and in health care, people are appropriately cautious around new technologies.
KATE EISENBERG: So we're seeing all the way from, I'm just not interested, I just don't think it's there yet, I'll look at it but I'm not ready to use that in my practice, all the way to, I can't get enough of this, can we roll this out for my full institution? Right now, I know I use my own experience as an example. Since I've been building it, I feel very comfortable using the product as well. And I've been surprised at the way that that's evolved.
KATE EISENBERG: So I think there's just an opportunity to learn even from our own internal clinical teams because we do have our folks with their hands on it. So, for example, for maintaining our license or our board certification, we have to do continuing medical education. And that's open book, and it is just stellar at being an open-book resource for that kind of situation. So I'm finding myself turning to it for that, and that's not what we designed.
KATE EISENBERG: That's not what we intended when we stood it up. But we're noticing, wow, it's really strong in this particular example. And it's really very strong at these other kind of specific clinical scenarios. So I think it's been really exciting to realize that this is not fully something we're going to design. It's something that we are going to learn as we go, and we're going to be really learning from our initial users in our next phases of users about how they're actually using it in practice.
KATE EISENBERG: So it's a very exciting place to be because there is so much learning and growth. And I think because of our approach, we've been very agile about incorporating that feedback and understanding what it means for the next phase of priorities. I think early on for us, there was such an overwhelming list of things to do when you first stand something up and you see all the challenges.
KATE EISENBERG: And now that we've worked through a lot of that, it is exciting because we can really start to be very responsive in a more nuanced way to our users' needs.
EMILIE DELQUIE: Thank you. Dylan and Mohamed, you've both made comments about the importance of having a healthy data to leverage the full potential of all the LLMs. Can you share some tips with the audience for publishers who want to make the best of their content? Can you share with them some tips on what they can do today to maintain the health of their data or to optimize the health of their content so that they can, down the line, make the best of LLMs?
MOHAMED ELSHENAWY: So for me, what I see as important is that when you build your AI strategy, you have some goals, and those goals will typically be linked to the data that you have. If the data is open and it's on the internet, the LLM will most of the time be able to handle it on its own. So if you are adding new value, it means that you have extra data that the LLM will use to generate different responses and create value for the user.
MOHAMED ELSHENAWY: It can be a different use case, but I'm just talking generally. So when you define your AI strategy, align it with the data that you have. I don't know if it is a committee or someone specific who is actually able to answer the questions about the data: is it complete, is it clean, can you actually work with it? This is very important to answer before actually saying, OK, I'm going to build this feature.
MOHAMED ELSHENAWY: Otherwise, you will just need more data to actually build something useful. That's how I think about it. The other thing is how to define value. For us, value is basically an equation that has benefits and cost. In the numerator, we have the benefits: can we help our customers or users achieve what they are looking for?
MOHAMED ELSHENAWY: And what's the likelihood that we will be able to achieve what they are looking for? And in the denominator, we have the costs. Can we bring those costs down? And the cost is not only money. It can be time, effort, [INAUDIBLE]. So it is very important to understand the value of the product you are going to design as part of your AI strategy, or the product that you will include in your processes or operations to actually speed things up in your company or organization.
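One literal reading of the value equation described above, as a small sketch; treating value as expected benefit divided by total cost is an interpretation for illustration, not a formula given by the panel, and the numbers are invented.

```python
# One interpretation of the value equation above: expected benefit in the
# numerator, total cost (not only money, but also time and effort) in the denominator.
def value(benefit, likelihood, money, time, effort):
    cost = money + time + effort
    return (benefit * likelihood) / cost if cost else float("inf")

# A feature with a strong benefit but a low chance of landing and a high cost scores poorly.
print(value(benefit=8, likelihood=0.3, money=5, time=4, effort=3))  # 0.2
```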
DYLAN DIGIOIA: I would say, in terms of data cleanliness and data readiness in general, the biggest thing we see in the industry is less that the data is messy or defunct or difficult in terms of its cleanliness, which is very typical of other industries, which many of us probably know. What we find is the access to the data is not centralized. It's not easy to work with, it's not easy to gather for a lot of publishers.
DYLAN DIGIOIA: So we'll have AI teams at publishers that are actually struggling to just get a hold of all of their content to use for a particular use case in a way that's easy for them, that's against an API or something that they understand or a database. A lot of times it's kind of hidden in these-- not hidden, but tucked away in these systems that they don't have a lot of access to. So at Hum, one of the things we help people a lot with is getting all of that content into one centralized location and then federating access to that in a controlled way, but in a way that allows these AI teams to actually run experiments and do beta testing.
DYLAN DIGIOIA: And I imagine Jeremy's thinking the same things about Playground. It's like you have to have the access to start doing things, so you need to be able to get your hands on it. And sometimes that's the harder part than the cleanliness. We have a lot of very clean data.
EMILIE DELQUIE: Thank you. I want to make sure we have time for questions for the audience as well. So I'll just ask one last question. Quick answer with Jeremy. We'll start with you. If you have one tip, one tactic, something simple, something that everyone in the audience can take back to the office and suggest or start doing tomorrow, what would be that one thing?
JEREMY LITTLE: I think it's really important, to stay out of this trough of disillusionment, to recognize what the role of your company is. I think a lot of companies have been rushing into AI and throwing money and people at this problem when it doesn't necessarily make sense for that organization to be doing it. So if you're a technology org, it's almost inevitable that you're going to have some form of ML engineer in your organization soon if you don't already. But a lot of the people here today are not technology-first.
JEREMY LITTLE: And I think people like that need to be cautious of trying to throw time and dollars at this problem when maybe the applications should be left to bigger players or your technology partners themselves. I think having experts in your company is important and reflecting on when the applications make sense is also important. But I think you need to be very deliberate about when you actually start investing in this and whether or not it makes sense.
MOHAMED ELSHENAWY: I agree with what Jeremy said. And I would add: start small, start with a small use case that you are very interested in. See what you can achieve, and then you can build up on that. It's hard to invest a lot of money into something that you are not very sure about. And when you start small and do experimentation and prototyping, you start learning exactly what is most important for you and for your customers, and your strategy becomes clearer and much more effective, I would say.
KATE EISENBERG: Yeah, I think start small, but do start somewhere, because that's how you get the learning. What we found is there really is a certain amount of learning by doing. And you don't have to overinvest in a direction that doesn't make sense, but just taking one or two small steps may help you understand what makes sense for you. You may have some really rapid learning around, you know what, maybe we actually could take this further than we thought.
KATE EISENBERG: Or you know what, seeing what we see now, this doesn't make sense for us. And again, because the technology is evolving so quickly, that may change in a few months, but it may give you a baseline of where you are now. So I've just found it extremely valuable. Every time we've taken one step forward, there's just been so much learning, and it's helped us understand what makes sense from there.
DYLAN DIGIOIA: One thing I think is kind of interesting-- and the panel earlier was talking about a little bit-- having small data sets that you can prototype with. A lot of our organizations have different restrictions and different protocols depending on the journal, depending on the papers, depending on how they were published, when they were published, all of these things. And being able to go back and without even starting a project, just identify a safe set of data that you can prototype and test against.
DYLAN DIGIOIA: Whether that's 50 papers or 100 or 1,000 that are open access, so you're less concerned about losing that data out to the world if someone's experimenting, or whether it's journals where you have a very friendly relationship and very eager editors on the other side who can help you review and understand the outputs of whatever you're experimenting on. If you can identify that prototype content, which has the ability to make your experiments very impactful, and you can get it set up in a way that your AI engineers or your technologists can use it, then you're already ahead of the game.
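A small sketch of what identifying a safe prototype set might look like in practice; the field names, license check, and sample records are assumptions for illustration, not a real publisher schema.

```python
# Carving out a safe prototype set: keep open-access items or items from
# journals that have opted in, and cap the size so experiments stay cheap.
def prototype_set(articles, friendly_journals, max_items=100):
    safe = [a for a in articles
            if a.get("license", "").startswith("CC-") or a.get("journal") in friendly_journals]
    return safe[:max_items]

articles = [
    {"id": "a1", "journal": "Journal A", "license": "CC-BY"},
    {"id": "a2", "journal": "Journal B", "license": "proprietary"},
    {"id": "a3", "journal": "Journal C", "license": "proprietary"},
]
print(prototype_set(articles, friendly_journals={"Journal B"}))  # keeps a1 and a2
```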
EMILIE DELQUIE: Thank you. All right. We have about five minutes for questions for our panelists.
AUDIENCE: I'll just-- I'll yell. This is for-- oh, OK. Yeah, thank you. For all of you: what are some of the lessons you've learned in going through this process around whether you go outside to invest in support resources or invest in building internally, almost like build versus buy, but with AI in mind, in how you develop in-house?
KATE EISENBERG: Can I start? Sure. I think, again, it's about that self-awareness and being very deliberate and intentional. When I think back, there were a couple of times when we hit a wall, and in retrospect, knowing what we know now, it would have been great to recognize that at the time: that either we needed to look outside of ourselves to get around that wall, or just wait.
KATE EISENBERG: And things are moving so quickly that the situation might be different down the line. But I think experimenting but recognizing quickly when something is just outside of your skill set or is going to require more resources, people, time, money than you're prepared to commit. But for me, at least with this, it's been more rapid kind of self checks like that of, where are we with this and did we accidentally wander down a rabbit hole where we're kind of putting too many people towards a problem that we might want to pull back from?
JEREMY LITTLE: Yeah, I agree with that. I also think it depends on how your organization is set up and what the actual end goal is. So some orgs just have some natural skill or some technologists that maybe have experience with these technologies. I think if you have access to those resources and it doesn't take a lot of investment, the barrier to entry is much lower. So it makes sense to go through prototyping like that.
JEREMY LITTLE: But if your organization is fairly non-technical, you really have to have a compelling use case to go out and upskill people and hire in from an outside organization or consultancy if the use case is really worth it. So I think it depends on the structure and the composition of your org before the problems even started.
DYLAN DIGIOIA: I 100% agree with Jeremy on that. I mean, Hum is, of course, an AI company, so we're upskilling internally a lot. But from all of y'all's perspective, it's valuable in ideation, and incredibly valuable in experimentation, to have people who have the skills to actually do these things and the knowledge of your industry, of your own business, and of exactly how you play in the space.
DYLAN DIGIOIA: So whether you get that by upskilling people internally or by partnering with someone else, it's just invaluable to have and maintain that knowledge and that closeness to the problem.
MOHAMED ELSHENAWY: I don't have anything to add, but I'll just talk about my experience. When adam.ai was doing this meeting management, it was a typical web-based application. They approached me and said, OK, we want to have a strategy for AI, and then they hired me. That was before LLMs; basically, we were using BERT and things like that.
MOHAMED ELSHENAWY: So one approach is to hire someone, if you have really decided you want to go down that path, and then this person will help you develop the strategy, do the hiring, and other things. But another approach can be to just try with ChatGPT. A prompt is basically the question you ask the LLM. You can learn a lot about the capabilities of the system if you just give it the data in the prompt-- we call it in-context learning-- and see if the LLM is able to answer the questions; then you know whether it's a valid use case or not.
MOHAMED ELSHENAWY: That can be another approach and that's it.
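A sketch of the quick in-context-learning check described above: paste a small sample of your own data into the prompt and see whether an off-the-shelf model can answer the questions you care about before building anything; ask_llm() is a hypothetical placeholder and the sample note is invented.

```python
# Quick feasibility check via in-context learning: put a small sample of your
# own data directly in the prompt and see whether an off-the-shelf model can
# answer the questions you care about before you build anything.
SAMPLE_NOTES = "Meeting 2024-03-02: decided to delay the launch; action: Sam to draft a revised timeline."

def feasibility_prompt(question):
    return (f"Here are our meeting notes:\n{SAMPLE_NOTES}\n\n"
            f"Question: {question}\nAnswer only from the notes.")

def ask_llm(prompt):
    raise NotImplementedError("placeholder: call your chat-completion API here")

# If the model answers correctly from the pasted notes, the use case is worth
# prototyping further; if not, you learned that cheaply.
# print(ask_llm(feasibility_prompt("When did we decide to delay the launch?")))
```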
EMILIE DELQUIE: Excellent. Thank you. I'll take the last question because we're coming to the end, and we need to give a space for the keynote. This is a question for all the panelists and for you all as well. And if you want to put the answer in the app at the same time as I ask it to the panelist, I think we can all benefit from the answer. If somebody is interested in staying up to date with the latest development in AI, what's a good go-to resource?
EMILIE DELQUIE: And so I'm asking you all really for the purpose of crowdsourcing, giving each other ideas. But while you're typing in the app, what are you reading to stay up to date with AI developments?
JEREMY LITTLE: I think from the tech side, I'm constantly watching presentations from OpenAI and Microsoft and all the big tech players around these. They have conferences and presentations all the time. And I think keeping up to date with what the big players are doing is crucial. But for less technical people who maybe don't want to be as in the weeds or spend that much time, there are tons of podcasts.
JEREMY LITTLE: Hard Fork by The New York Times covers things like this. And just listening to one a week or a month can really help you keep a pulse on where the industry is and where it's heading. So I think your involvement depends on how technical you want to be with it.
EMILIE DELQUIE: Thank you. I'm starting to see answers. Thank you. Thank you. For those who say podcast, feel free to say which ones. Thank you, but it's a lot of podcasts. How about you, Dylan?
DYLAN DIGIOIA: I would say a lot of it's about timeliness. Like, how up to date do you want to be in the LLM space, knowing that not everything that drops is accurate on day one? We had a pretty crazy thing happen just a week or so ago, where the best open-source model in the world turned out to maybe just be a hoax. Not 100% sure. But Twitter is great for really highly up-to-date information. I also find LinkedIn useful, especially networking with your peers.
DYLAN DIGIOIA: Everyone in here is thinking at least a little bit about AI. I see a lot of great articles and much of it is very industry focused on LinkedIn, which is just excellent.
KATE EISENBERG: I would second that. I think for me, I'm both looking at the medical literature and also at some of these broader technology areas that I might not necessarily follow otherwise. And I've found that just curating my LinkedIn feed really works: people highlight developments that are specific to my work as well as the broader developments. I'm also particularly liking the New England Journal's artificial intelligence journal for health care.
KATE EISENBERG: I'm just finding a lot of that is really right on for where the field is in my area.
EMILIE DELQUIE: Excellent. The big red clock behind me is very much flashing, so that's our cue. I just want to thank Kate, Dylan, Jeremy, and Mohamed again for participating in this panel and for your insights. We'll be at the reception if anybody wants to continue the discussion. But please join me in thanking the panel, and until the next one. [APPLAUSE]