Name:
LLMs: Redefining "Value" while Upholding "Values"
Description:
LLMs: Redefining "Value" while Upholding "Values"
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/c49c7752-6555-41b0-b968-d0d40bea7cdc/videoscrubberimages/Scrubber_1.jpg
Duration:
T00H25M50S
Embed URL:
https://stream.cadmore.media/player/c49c7752-6555-41b0-b968-d0d40bea7cdc
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/c49c7752-6555-41b0-b968-d0d40bea7cdc/SSP2025 5-28 1415 - Industry Breakout - TNQTech.mp4?sv=2019-02-02&sr=c&sig=B1TIae77TXjr7NB2Ni7cMyEzOpi3Yt2RutzmhOEkeP8%3D&st=2025-06-15T19%3A03%3A01Z&se=2025-06-15T21%3A08%3A01Z&sp=r
Upload Date:
2025-06-06T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
A very good afternoon. I am Shanti Krishnamoorthy from TNQ Tech. TNQ has been providing technology and production services for many years, and this year has been very significant for us: we became part of the Lumina Datamatics family. We are now a team of 6,000 talented people, offering a wide spectrum of services and technology capabilities to serve the publishing industry.
What makes this merger exciting is not the scale, and not the 6,000 people. It is the possibilities it opens up for all of us. Together we can think bigger, build things faster, and tackle publishers' challenges and problem statements with more agility, more ideas, and a wider set of capabilities. It's lovely and wonderful to see familiar faces here today.
It's a privilege to connect with friends and peers in this community, and hats off to those who picked this topic for SSP. Thank you. A little about myself: I've been with TNQ for 20 years, managing across operations, R&D, and technology. It's been a journey of learning, adapting, and growing with the industry. I recently stepped up into the role of business head at TNQ.
A whole new chapter of my journey. With me today is Neil, who heads up our technology and product portfolio and plays a key role in shaping our technology, product, and AI strategy. Both of us will be sharing our perspectives today: our stories, our learnings, and the journey we are on. Let me take a moment to talk about the questions we have been asking over the years, because in many ways, those questions are the ones that shaped our journey.
We have been coming to SSP for a while now, and each time we are here, we try to share perspectives grounded in what we have actually done. For us, it is never just about theory; it is about trying things, building, learning, and coming here to reflect, share, and ask.
First, to clarify, the person you are seeing on the slide is not a clone of mine. And unlike the image, I would like to believe I haven't lost a few hairs in the process of exploring LLMs. Jokes apart, this slide tells the story of the questions we have been asking and how we have evolved over the years. Back in 2020, we were here talking about how we embraced machine learning: getting meaningful data, building models, and applying them in our production work.
We had started that journey way back in 2010, and it was an enriching experience for us. Then ChatGPT arrived in November 2022, a moment that left all of us trying to figure out what it meant. So in 2023, we came back to SSP to share our early thoughts and understanding of ChatGPT. We tried to demystify the technology behind it, and also to reflect on what it could mean for our industry and the new possibilities it was opening up.
By 2024, the conversation had shifted. It was no longer about whether we wanted to use LLMs; it was about when and how to use them. So we started asking: how do we use LLMs in a way that makes sense for us? What is the right framework to determine when the use of an LLM will deliver value?
And how can we strike a balance between AI and the human experience? Now we are here in 2025 once again, but on a different plane. LLMs have evolved, we have evolved; we have done more hands-on work, built real solutions, and learned how to extract value in a way that feels meaningful for us. But as always, the more we explore, the more we realize how much we don't know.
That's why we keep coming back to SSP. It's not about sharing our answers. It's about asking questions and learning from all of you, understanding your problem statements, and continuing this conversation together. We explore, we learn, we share. We listen, we get new ideas, we go back to the lab and try again.
It's not a conclusion; it's a conversation, a journey we are all on together. So let me take a real use case we have built: a simple taxonomy solution for a publisher. I met this publisher at least eight months earlier. They had already spent half a million trying to automate taxonomy generation from their XML files.
The taxonomy was based on a predefined vocabulary, and their existing solution was not meeting the business need. It had poor accuracy, very limited business value, a high curation burden on SMEs, and an author experience that was obviously not great. So we thought, OK, a simple LLM solution would work; a generative model would be a great magic wand for this problem. I was really overconfident about the solution.
The problem statement was clear, so I brought it to our data scientist. She jumped at it and said, give me a week, I'll solve this; it's a simple use case for a generative model. So she applied a generative AI model, specifically an LLM, for term generation. But within a week, with a slightly sad tone, she came back to me and said, we are almost there.
But the accuracy is only 80%. OK, so what should we do? We just require a little bit of fine-tuning; otherwise, it's ready to go. OK, great. But she also told me one funny thing, which I still remember: besides the model getting fine-tuned, your SMEs also need some fine-tuning in the evaluation process. So I turned to my domain expert and asked, how do you feel?
Her reaction was: yes, it's nice, better than what I expected. But you see, it's bringing in some weird terms like disease and humidity, which means reduced discoverability and relevance for our audience. This is absolutely valid. And please remember, I am a domain expert myself; like all domain experts, I have biases of my own.
So here I was, caught between business values on one side, which are about speed, automation, and efficiency, and editorial values on the other, which are about precision, domain relevance, reader usefulness, searchability, and so on. Words like disease and humidity are not wrong in a linguistic sense. The model is doing what it was trained to do: draw on a vast amount of generalized knowledge.
But the issue is not about correctness; it's about contextual relevance in a highly structured content environment like ours. Those values, I realized, come only from human SMEs. So the expert in the loop was brought in. This experience taught us that LLMs are powerful. They can play the role of anything: a creator, a judge, an advisor, a mentor, anything new.
But the presence of an expert in the loop helped us balance value and values. Obviously, with AI involved, we had to ensure the data security frameworks were robust; and with all of this, I still have to make profits. The solution was tested, piloted, and is now being implemented. To sum up: values must guide the creation of value. I am repeating myself on purpose: values must guide the creation of value.
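The expert-in-the-loop triage described here can be sketched roughly as follows. This is a minimal illustration only: the function names, the sample vocabulary, and the stubbed-out LLM call are assumptions for the sketch, not the actual solution that was built. The model proposes candidate terms, terms already in the publisher's controlled vocabulary pass through automatically, and everything else is queued for the SME.

```python
def propose_terms(document_text):
    # Stand-in for a real LLM call that suggests taxonomy terms
    # for a document; the returned terms here are illustrative.
    return ["immunology", "vaccination", "humidity", "disease"]

def triage_terms(candidates, controlled_vocabulary):
    # Split LLM suggestions into terms accepted automatically
    # (already in the vocabulary) and terms routed to SME review.
    accepted, needs_review = [], []
    for term in candidates:
        if term in controlled_vocabulary:
            accepted.append(term)
        else:
            needs_review.append(term)
    return accepted, needs_review

vocabulary = {"immunology", "vaccination", "epidemiology"}
accepted, needs_review = triage_terms(propose_terms("sample article"), vocabulary)
print(accepted)       # ['immunology', 'vaccination']
print(needs_review)   # ['humidity', 'disease'] -> routed to the expert
```

The needs_review queue is where the human expert, the "values" side of the equation, enters the pipeline.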
With this note, I'm going to stop and request Neil to present what we learned from other industries and what we, as an organization, are doing going forward. Thank you.

Thank you, Shanti. I think what Shanti didn't say out loud is what she says whenever AI is involved; that's her code word for: Neil, please fix this problem, but please don't break anything in the process. And that's how it's been for us: rolling up our sleeves, trying to make AI work, and realizing, sometimes the hard way, that AI isn't a silver bullet. It's been humbling. We have built, we have stumbled, we have learned. And through it all we kept asking: what's working for us, but also, what's happening out there?
Other industries, other companies: big bets, bold moves, and sometimes big lessons. Take Klarna, for example. If you don't know them, Klarna is a fintech heavyweight in Europe, and they made headlines last year when they went all in on automating their customer service with chatbots. The goal was faster responses, lower cost, higher efficiency.
It all sounds good, right? But here's what they found. When customers reach out, especially when they are stressed, worried, or frustrated, they are not just looking for an answer. They are looking for empathy, a listening ear, a human who gets it before they solve it. The AI could give answers, sure, but it couldn't care. It couldn't build trust.
It couldn't make a customer feel heard. So Klarna made another bold move: they are bringing back humans, because they learned the hard way that in the rush to automate, they had lost something essential. Now, they are not throwing AI out of the window, far from it. They are still investing in AI to drive efficiency, speed, and cost savings.
But here is the important part: they are now doubling down on the human side, making sure the human service part of Klarna becomes stronger, not weaker. And that lesson really stuck with us. If even the boldest, most AI-forward companies like Klarna are realizing that the human element is non-negotiable, we thought: what about us?
What's our compass? What's our path? We realized we needed to pause and reflect on two words that often come up in conversation, and which happen to be the theme of SSP this year: value and values. So we went back to them. Value is what businesses chase, and it's important: efficiency, innovation, quality, responsiveness, all the outcomes that make things faster, better, cheaper.
And values? Well, that's the tricky part. It's a big word. But we do know there are a few things that matter to us: privacy, reliability, accountability, and most importantly, trust. Things that make us feel we are doing the right thing, not just the fast thing. But here's the catch: you can't have all of both.
If you chase only value, you might cut corners on your values. And if you only hold tight to your values, you might miss out on delivering value. So this Venn diagram is not just a graphic; it's a reality check for us. There are parts of value that don't overlap with values, and we have decided we are OK letting some of that go, but not the other way around.
What we aim to focus on, our true north, is the intersection you see here: the sweet spot where value and values meet. Because in a world that's moving fast, where AI promises you the world, it's easy to lose sight of what really matters. We want to build things that are not just smart, but responsible. Not just fast, but fair. Not just innovative, but trustworthy.
And we believe that, for us, this is the only way to create outcomes we can be proud of, outcomes that last. Once we got clear on what really matters, our compass and our path, the next question we had to ask ourselves was: AI is evolving so fast, but are we evolving with it? You see here two axes: tech breakthrough and human breakthrough. Shanti, I, and some of our colleagues kept debating this for hours and hours.
While tech breakthroughs keep happening, is it fair to speak of human breakthroughs? With Shanti wearing her copy editor hat, we finally settled on this: maybe it is fair to call it a human breakthrough, but it's still a work in progress for us. Here is what we have seen. Tech is moving at breakneck speed; every week there's a new model, a new feature, a new headline.
OpenAI, Google, Microsoft: they're all racing ahead, pushing boundaries we can't even predict. And we have felt that firsthand. Like Shanti was saying about our earlier taxonomy project: it had its breakthroughs, but we learned the hard way that it takes more than just great models to solve complex, real-life problems.
So what we have realized is this: we can't predict where AI will go next, but what we can do, what's fully in our hands, is to prepare for it. Prepare to unlearn, prepare to relearn, prepare to adapt, because that's the only way we'll stay ready for whatever comes next. And yes, we love a good 2x2 matrix. Last time we were here, we used it to explore the different paradigms of human and AI, and now we have brought it back with a different lens.
So let me walk you through it. The first quadrant you see here is where we hold back. We stay cautious and think, let's wait until things settle. It feels safe, but the reality is we are not growing. We are not evolving with the tech, and we are not building capabilities. In a world moving this fast, standing still is really just falling behind. Then, in the next quadrant, we have a thoughtful, careful team: asking the right questions, being nuanced, being cautious, maybe even too cautious. We are learning.
Yes, we are building judgment. But because we are so measured, so slow to adopt the tools, we are not keeping pace with how fast AI is evolving. So while we are smart, the impact stays small. We don't scale; we don't move fast enough to unlock real transformation. The third quadrant is when we lean completely into tech. We chase every shiny new model, every tool, every new feature. We go fast, but we don't build the right understanding.
Our ecosystem isn't ready. Our people aren't ready. We haven't built the guardrails, and we haven't thought through the real risks or the deeper questions. So while it might look impressive at first glance, it's not built to last. There are gaps, blind spots, and loose ends everywhere.
If I'm absolutely honest, when I look at what is happening around us, most developments fall in this quadrant. That is not a criticism, just a fact. And let me say, we are not immune to this either. We have been through all of these quadrants ourselves, and at times we still find ourselves in one of them. We played it safe and felt stuck. We've been cautious and stayed small.
We chased the shiny things and learned the hard way. And then there is the zone that matters the most, the zone we try to chase. This is where we do keep up with the pace of AI breakthroughs. We are not sitting back waiting for the dust to settle, and we are not getting left behind while the tech races ahead. We are actively exploring the latest and greatest models, the newest tools, the cutting-edge capabilities: staying in the game, staying curious, staying open.
But here's the thing: you are able to move fast because your people are ready. That's how humans break through: by learning to unlearn, to relearn, to adapt, and to bring judgment, context, and critical thinking into the mix. That's what gives you the confidence to explore boldly without being reckless. And that's the real transformation for us. While AI is breaking through, we are keeping up, because our people are breaking through too. That's the zone where value and values come together for us, and that's the zone we are aiming for.
But of course, it's one thing to talk about all this in theory. The real question is: what does it actually look like on the ground, in the day-to-day? That's where the quiet part comes in, the part that often gets overlooked but matters the most. Let me show you what we have been doing. First, we built this LLM workspace; what you see is a screenshot of it, or, as we like to call it, our LLM playground.
It's a safe space where our workforce across the company can explore different AI models, try out ideas, and get hands-on without worrying too much about cost, data privacy, or compliance, because we have already put guardrails in place. We realized early on that just telling people to try AI wasn't enough. You have to create an environment where they feel safe to try, to experiment, and sometimes to get it wrong, without worrying that they'll break the system, leak data, or get an angry email from IT.
So the LLM workspace became the playground where curiosity meets guardrails, and it has worked. We are seeing more people trying LLMs in their day-to-day work, feeling a little less hesitant, a little more confident, and sometimes even surprising themselves with what's possible. It has given us some interesting insights too: which teams are the most curious, which functions are using AI the most, and where we might need to tweak access, or even our policies.
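As a rough illustration of what such a guardrail layer in front of the models might look like, here is a minimal sketch. The team names, policy values, and stubbed-out model call are hypothetical assumptions for illustration only, not the actual workspace implementation: each team gets an allowed set of models and a usage limit, and every call is counted, which is also what yields the usage insights per team.

```python
from collections import Counter

# Hypothetical per-team policy: which models a team may use, and a daily cap.
POLICY = {
    "editorial": {"allowed_models": {"gpt-4o", "claude"}, "daily_limit": 200},
    "hr":        {"allowed_models": {"gpt-4o"},           "daily_limit": 50},
}

usage = Counter()  # team -> number of calls made today

def call_model(team, model, prompt):
    # Enforce the team's policy before forwarding the prompt to a model.
    policy = POLICY.get(team)
    if policy is None or model not in policy["allowed_models"]:
        raise PermissionError(f"team {team!r} may not use model {model!r}")
    if usage[team] >= policy["daily_limit"]:
        raise RuntimeError(f"team {team!r} hit its daily limit")
    usage[team] += 1
    return f"[{model}] response to: {prompt}"  # stand-in for a real API call

print(call_model("hr", "gpt-4o", "Summarise this policy document"))
```

The usage counter doubles as the analytics source: reading it back tells you which functions are using AI the most and where access policies might need tweaking.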
For instance, we have built custom spaces for our HR, finance, and other cross-functional teams, because AI curiosity isn't limited to the usual suspects. And that's been one of the best parts of this journey: realizing that the magic isn't just in the model; it's in the minds of the people using it. Then, once people were comfortable with LLMs in their own context, we started running regular LLM hackathons. What you see here is a snapshot from one of those sessions.
These are one-day sprints where cross-functional teams come together, brainstorm real problems, and build AI-powered solutions end to end. It's been eye-opening for us and for our folks as well, because people realize that when AI doesn't give you what you expect, it's often about how you asked, what you asked, and how clear your problem statement was. In fact, in one of our recent hackathons, we asked participants: did the LLM disappoint you?
And the answer, almost every time, was: not really. Because here is what we learned: the LLM wasn't the bottleneck; we were. The real challenge wasn't the model. It was the clarity of our thinking, the quality of our questions, and how well we could work with each other and with the AI. So the hackathons weren't just about testing AI; they turned out to be about testing ourselves: how we think, collaborate, and learn to work with AI not as a tool, but as a partner.
That's been the real breakthrough for us. At the same time, we have been adding mindful nudges into our systems: small, subtle reminders in our HR platform to encourage thoughtful and responsible use of AI. It's a simple thing, yes, but it's interesting how often these little reminders pop up at just the right moment, keeping AI front of mind in a healthy and grounded way. And while we are learning and taking thoughtful steps, we know we don't know everything.
That's why we are learning from experts too. We are looking at global standards like ISO/IEC 42001 for responsible AI management, and certification is what we are working towards. It's helping us learn from the best and revisit our policies, systems, and processes step by step. So we are not just talking about responsible AI; we are building it in and doing it right. That's what gives us the confidence to explore AI's breakthroughs without losing sight of our values or our judgment.
Because in the end, it's the quiet part that matters the most, and that's where we are choosing to focus. With that, I'll stop before Shanti reminds me that I might be evolving, but the clock isn't slowing down. Thank you.

Thank you, Neil. Before we wrap up: we have talked a little about our journey and how we implemented this within the organization.
I want to share a small detail from our own space, something that reflects that journey, for all of you who have visited us in Chennai, India. This is an image from one of the conference rooms of the TNQ office. The design on the glass is a parquet pattern, P-A-R-Q-U-E-T for anybody who is interested, because I often make the spelling mistake. If you look at it closely, you will see it evolve from left to right.
The pattern becomes more intricate layer by layer, small changes building on each other to form something more complex and more meaningful. You can also read it right to left, where you remove one line at a time from your complex problem and end up in a simpler working space. For us, this is a quiet reminder that big change does not happen overnight. It comes from showing up, making thoughtful moves, and learning continuously.
AI is overwhelming for some of us: fast-moving, full of unknowns, always raising new questions. And it is normal to feel that way, at least nowadays. But what matters is to keep moving forward, to keep exploring, reflecting, and adapting. That's how growth happens. So what is next for us as an organization?
We are already seeing the next wave: AI agents are taking shape. Soon we won't just manage people; we may be managing intelligent agents that support different parts of our work. We are working on a few POCs with agentic AI. This will be a big shift, a new learning curve for all of us. But as we have experienced ourselves, if we stay thoughtful, take steady steps, and create space for humans to grow alongside technology, we can navigate it, not as individuals, but as a group.
We must give ourselves permission to explore with intention, to take risks with care, and to stay curious without losing sight of our values. Because in the end, it is this groundwork, these quiet LEGO blocks, that is going to matter a lot. So with that, I'm going to request Neil to recap. I know we are at the tail end of the session, and we don't want to bore the audience. Please go ahead.
Thank you, Shanti. Yes, we have spoken a lot, so here's the quick version. When we step back and look at the whole journey, a few things stand out for us. First, value and values: they don't always play nicely together, but when they do, that's where the magic happens. That's the sweet spot.
Like we said, we are aiming not just to build fast, but to build right. Second, the real breakthroughs don't come from technology alone; they happen when we humans learn, grow, and figure out how to use the technology wisely. And third, the biggest shifts don't always feel like big moments when you are in them. They build quietly, slowly. Then suddenly you look back, after a few months or after a few of these moments, and realize: oh wow, we have come a long way. AI will keep evolving, and the questions will keep changing. No one has it all figured out; we are all in it together. What we can do is stay thoughtful, stay curious, and give ourselves and our teams the space to explore, make mistakes, and grow.
It's not about chasing every shiny new thing; it's about creating the right conditions for real breakthroughs. So that's us: somewhere between figuring it out and hoping we don't break it in the process. Shanti, back to you.

Thank you. Usually on the last slide we ask for questions, but in this session, at least on this topic, we have questions too. What we shared today is just the journey we have been on; there is so much more for us to learn, explore, and build together, given the knowledge hub this industry is. If you are experimenting with AI, whether it worked, failed, or surprised you, we would love to hear from you. Got questions, insights, challenges? Bring them on.
No question is too small and no story is too early. Come and find us at booths 8 and 210. Let's talk and share. Neil and I would love to discuss further how we are exploring and leveraging AI to solve real, complex publishing problems. We will be happy to showcase some of our latest work, use cases that we are genuinely excited about. We are looking forward to connecting with all of you with curiosity, honesty, and a lot of ambition.
Thank you so much. Thank you.