Welcome, everyone, to the AI in Business podcast. I'm Matthew DeMello, editorial director here at Emerj AI Research. Today's guest is Nina Edwards, vice president of emerging technology and innovation at Prudential Insurance. Nina joins us on today's program to explore why, at least in the eyes of MIT, 95% of AI pilots fail to deliver enterprise value, and how to turn productivity gains into measurable ROI by rethinking pre-AI metrics. Our conversation also covers practical workflow changes like creating protected sandboxes to slash approval cycles from months to days, standardizing enterprise KPI glossaries for unified cycle time and exception rate tracking, shifting humans from doing to deciding in a human-centered operating model, and developing outcome charters with AI-ready business value targets to sustain momentum from pilots to scale.
Today's episode is part of a special series on agentic AI solutions and human-led automation in regulated industries, sponsored by Moody's. But first: are you driving AI transformation at your organization, or maybe guiding critical decisions on AI investments, strategy, or deployment? If so, the AI in Business podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption.
You don't need to be an engineer or a technical expert to be on the program. If you're involved in AI implementation, decision making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit emerj.com and fill out our thought leader submission form. That's emerj.com, and click on Be an Expert. You can also click the link in the description of today's show on your preferred podcast platform.
That's emerj.com/expertone. Again, that's emerj.com/expertone. Without further ado, here's our conversation with Nina. Welcome to the program. It's a great pleasure having you.
Thank you so much, Matthew. It's so nice to be here.
Yes, absolutely. We love talking to folks from Prudential. I think there's a lot of stuff going on there, and also with your wider experience, especially for the question that we're hearing over and over again in 2025 on the show. Even more than agentic AI or anything else, I think everybody's asking, how are we turning pilots into success?
That really seems to be the big question. I was at an industrial manufacturing conference last week, and a lot of the questions were about how we turn this 95% figure around. Ninety-five percent of pilots don't make it to actual, actionable AI. Enterprise leaders we're hearing on the show are investing heavily in the experimentation part, and I think that might even be part of the problem. It's a lot of that mentality of, oh, we're just experimenting.
But most organizations are still struggling no matter what to translate proofs of concept into measurable operational gains. Early wins remain isolated, and teams can't connect local efficiencies to broader business outcomes, or at least that's what they're telling us on the show. Just from your perspective, where are you seeing AI initiatives stall in large enterprises, and how big is that problem?
Yeah. You know, that MIT report really shook things up. It really opened a lot of eyes, and it was probably the hottest news across the enterprise, probably for many companies, in terms of, okay, well, a, I'm not alone, and b, you know, maybe we're not so bad.
And maybe this slower, steadier route made sense this time around, just given everything that we know about AI today, or this era of AI, I should say. I think there's a lot of different reasons why you have that 95% figure come up and people's eyes open wide. I'm gonna talk about maybe just one of them that I think is probably not talked about enough. Right?
So I don't really think it's the tech or the models that are weak when we talk about why things fail. I think it stalls because enterprises still measure and operate using pre-AI assumptions. So what do I mean by that? What you see everywhere is AI generating massive productivity gains. That's usually why people tend to jump in head first.
Right? They're like, we need to do this. Boards are really excited about it. They want you to move forward and do something with AI. We're seeing the productivity gains around faster code.
We're seeing it on instant documentation and automated touch points, but the leadership doesn't see the meaningful ROI because the system absorbs none of that velocity. Right? So take engineering teams who generate thousands of lines of correct boilerplate code with AI assistance. The productivity jumps, but the ROI doesn't move, because deployments still follow quarterly system release cycles and the legacy approval chains, right?
All of that AI-created velocity gets trapped inside the system, and the system can't really absorb the speed. So the metrics say no value even though the work itself has moved radically faster, right? You're kind of like, well, if I measure it by how I used to measure things before, we didn't really do anything. But if I take the technology for what it's really doing, there probably is more value that's been unlocked that I've just not been able to identify or track in the right way. So you'll hear that AI cuts deployment by 60%, but deployment's still quarterly and the cycle time doesn't change.
So the ROI looks really flat. Not a compelling story.
Well, just right there, with how you're describing quarterly, would you see that ROI if they had a different time measurement system? Is it merely a matter of time, or is our quarterly focus just downstream from the entire infrastructure of the tax system, which I know our corporate tax lawyers are very familiar with, so that it just gets chopped up in such a way you never see the difference?
I think it just gets chopped up in such a way that you don't see the difference. I don't even know if the time-bound element of quarterly is really what's going to hamper you being able to categorize or see that ROI. It's really about looking at the productivity not from a velocity standpoint, but from what you usually would look at, like time saved or something like that. When you think about the time saved by developers when they're using an automated code assistant, they'll come back immediately and say, well, you know, I saved seventeen. Let's throw out a random figure.
Let's say something really bold, like, oh, I saved seventeen hours. If you really break down that seventeen hours, it's three minutes one day, five hours the next day, two hours one day, one hour another. I can't deploy seventeen hours at the end of that, right? And I can't deploy five minutes after one day.
Right? But what I can talk about, instead of how much time was saved at the end of the day, is how much faster I was as a result of this. You see the difference that I'm trying to make?
It's a slight distinction, but it's a better value story at the end of the day, and it hits closer to why we're using the technology.
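To make that distinction concrete, here is a minimal back-of-the-envelope sketch, not from the conversation and using purely hypothetical numbers, of why a pile of saved-time fragments can total an impressive figure while the cycle-time metric, and therefore the ROI dashboard, stays flat.

```python
# Hypothetical illustration: summing "hours saved" fragments looks impressive,
# but the deployable signal is the change in end-to-end cycle time.

saved_fragments_hours = [0.05, 5, 2, 1, 0.08, 3, 1.5, 4.37]  # scattered daily savings

total_saved = sum(saved_fragments_hours)
print(f"Hours saved (old metric): {total_saved:.1f} h")  # ~17 h, never deployable as one block

# Velocity framing: how much faster did the work move end to end?
cycle_time_before_days = 90   # quarterly release cadence
cycle_time_after_days = 90    # coding is faster, but the release cadence is unchanged
speedup = cycle_time_before_days / cycle_time_after_days
print(f"Cycle-time speedup: {speedup:.2f}x")  # 1.00x, so the dashboard reports "no value"
```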
You're mentioning story here, and I get the impression that a lot of this is about showing it back to the board in a way that they're going to understand the real productivity change. Just cutting right to the chase there, what do you think is the best way to tell that story?
So for me, if we keep it in the context of the ROI, it actually works best when the ROI matches the workflows that the AI actually creates. So you're looking at it from it being faster, iterative, and insight-driven, all of the things that AI claims to deliver. I think we need to always go back to what the end goal was and make sure that the metrics we're trying to capture align to that at the end of the day. Right?
Mhmm.
If we go away from the coding example and consider service teams, right? The AI drafts 80% of the customer replies instantly. And it's sort of along the lines of what you were talking about before with legal and the reviews that they have to do.
So I've drafted 80%, but the legal SLA for review is still two weeks. Right? The cycle time doesn't improve. So the ROI dashboard says, oh, there's no measurable impact. But you did just create 80% of the drafts faster than you did before.
Again, you're looking at it not from what you used to track from a human standpoint, but from what the technology is now supposed to deliver. These are two different measurements, and I think they need to make sure that they delineate between the two in order to actually put together that value story.
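For the same reason, a worked example of the service-team case may help: when the legal review SLA dominates the end-to-end path, making the drafting step dramatically faster barely moves cycle time. The stage names and durations below are hypothetical, a sketch of the reasoning rather than any real workflow.

```python
# Hypothetical service-team workflow, in days, showing why faster drafting
# alone doesn't move end-to-end cycle time while the legal review SLA is fixed.

workflow_before = {"draft_reply": 2.0, "legal_review": 14.0, "send": 0.5}
workflow_after  = {"draft_reply": 0.1, "legal_review": 14.0, "send": 0.5}  # AI drafts most replies instantly

before = sum(workflow_before.values())
after = sum(workflow_after.values())
print(f"Cycle time: {before:.1f} -> {after:.1f} days ({(1 - after / before):.0%} faster)")
# Drafting got ~20x faster, but the end-to-end number barely moves, so a
# dashboard tracking only cycle time reports "no measurable impact" until
# the review SLA itself is redesigned.
```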
Absolutely. And I think especially when you're trying to manage them, or really get this in front of folks who are far removed from where the change is actually happening, you gotta keep a lot of that storytelling in the short term. Thinking a little bit more about the long term, what does it look like when you start to see the enterprise become system-ready, especially for AI operating at scale, and you start to get past these changes around looking at ROI in a different way to sell the short-term gains?
Yeah, it's a great question. I think protected sandboxes are the first shift here. Right? There was one retail bank, I can't remember the name right now, that created tiered sandboxes with spend caps,
Sure.
redacted data, and automated logging. Right? The result of them doing all of that is that they reduced their approval cycles from months to days, and they finally captured value in ROI metrics like time to operationalization, not just hours saved. I know I keep going back to hours saved, because it's always the one that people love to use in the enterprise to say, okay, yeah, we did something.
Right.
Beyond that, the next step is probably around shared truth. So there was a major health care payer that created an enterprise KPI glossary that unified the definitions of cycle time, exception rates, automation percentage, and the rework that was avoided. Right? So in standardizing all of the KPIs, for the first time they could quantify which AI pilots actually produce deployable capacity versus one-off time savings. Right?
So that unlocked scale, because the leadership finally had comparable ROI signals. Right? The other piece of this is the human-centered operating model, which I kind of alluded to when we were talking about, okay, it's not that the other metrics go away. You just need to delineate between the human ones versus the ones that are really attributable to AI. So the human-centered operating model, this is where humans are reorganized around that new speed.
Right? So this is changing what they do, how they prioritize, and how work flows across the system. This is what converts the speed into outcomes. So to be crystal clear here, this is humans shifting from the doing to the deciding.
Right? So from gathering to governing, from processing to prioritizing, from checking to sequencing, and so on. And then the AI handles the high-frequency, repetitive, and deterministic pieces of what you're trying to do at the end of the day. And if I bring it back to insurance, there was another insurer, and this is not a Prudential example, that moved underwriters out of data-gathering mode and into decision-sequencing mode.
And they didn't track the documents reviewed. They tracked the time to decision. That shift alone cut cycle times from days to hours and fundamentally changed the entire ROI story. So system-ready here really means ROI-ready, at least in the context of the discussion we're having today. Because if the KPIs can't capture the value, the value really never appears.
It's kind of like what the kids would say about Instagram: if you didn't take the picture, were you really there? That's really your evidence at the end of the day. And I wanna stress, I don't want to purport the idea that if you're ROI-ready and you have all of your ducks in a row when it comes to KPIs, you're 100% system-ready for AI working at scale.
It is just a part of it, but I do think it is a really critical lever in the broader set of conditions that make enterprise systems ready for AI. For me, if I think about complete system readiness, this is gonna require the right processes. The operating model needs to be tight. Governance, data flows, and sandboxes, which we touched on somewhat, are gonna be important. And then the most important piece: talent design.
Do not forget your people throughout the process. They are so integral to this. I know that there is sort of a blanket assumption that the tech can do everything. It's not gonna move without the people. So all of these pieces have to work together in order for AI to scale.
And when they come together, that's when you see the value become really visible and also deployable. We don't just wanna see it, we wanna be able to leverage it and really use it.
Absolutely. And just going back to what you were saying and thinking about KPIs from a human lens versus the system lens, a point I've made on the show in the past: I think when you're at your AI white belt, all manual processes are the enemy, and you gotta go in there with lasers and get them out, and that's where you want the technology. I think when you get to your AI black belt, it becomes, no.
Manual processes were not the enemy. We need to be selective about which processes stay manual, because that's the stuff that's front of mind.
Yeah, going back to what you were saying about training your people. We've talked a lot about KPIs really telling that story upstairs. Maybe thinking a little bit more about the team and the human resources that you're working with, small h, small r: what should leaders do now to move from demos to deliverables when it comes to their human talent?
That is a good question. You really gotta start from the beginning, like you do for any major transformation within your company. You need to start thinking about your AI literacy and your AI fluency within the company. What larger companies are typically used to right now is that something big comes into the enterprise, something that you want, that you know you're gonna spend a lot of money on, and that is gonna touch all areas of your business. And you say, well, let's go to L&D and spin up a lot of training for folks to become literate in whatever this thing is.
And I think that that is a good start, but you can't stop there. I mean, just in the nature of how we started this conversation, Matt, right? You were like, well, things are moving so quickly. This was the conversation before, and now we're talking agentic, and we're talking this. With the L&D piece of this, you have to ground people in the fundamentals.
Right? So what does the AI journey look like? What was the AI from ten years ago that we've already been doing? And how does it connect to the AI of today and what we're trying to do within the firm? Those foundational courses on prompt engineering, etcetera.
These are good tools in the tool chest for all of your folks within your organization. Beyond that, the messaging and communication around what you're actually doing in the firm, and how it's going to impact all of the different areas, needs to be consistent and transparent. Leaders that are charged with driving the AI strategy need to make sure that they're talking about it all the time. And then there's this gray piece in the middle, and I talk about this all the time with folks on my team. This piece of it is really not so much about the company messaging and the foundational learning, but it's like, okay, so now we kind of understand these concepts.
How do we get to play with them before they become embedded into our processes? How can we better understand the tech in a more tangible way? What do we do with it? This is when you start thinking about the sandboxes that we were talking about before, and introducing safe spaces for people to start playing with the tech, so they don't just read about it and get tested on it in that internal L&D course they had to take. But now they can say, okay, based on that coursework, I have an idea.
It's not ready for prime time, but let me see if I can play with it. And they have that sandbox in order to do so. So now the human is getting to understand the tech to the point where they can work side by side with it, because now they know each other to a certain extent. Right? The tech has already been trained on what the human does.
Now the human needs to understand the tech.
And that's such a two-way street that is really hard to explain to folks who are pre-transformation and have not become acquainted with these systems, and it's often the case that that's where board leaders are starting from, of course. We've talked a lot about that beginning of the process, how to get started, how to sell the board. I think the next biggest question ends up being, once you have momentum, once you have early gains, how do you sustain that momentum? How do you get those AI systems to mature, especially as expectations rise? I'm really interested in what you think gets forgotten in that process, especially as demos are moving to deliverables and pilots are becoming more mature.
So I think there are a couple of things. It's so funny, because I feel like we spent, or actually, I won't say we, I spent, a lot of time talking about the ROI perspective of this upfront, and I still think it is one of the biggest things that gets missed once you get into giving that story.
Hey, more emphasis on it? Go for it. Don't let me stop you.
So if I bring it back to the ROI just one more time, we'll rewind it and bring it back to the ROI again. Step one around it is that, okay, you have your pilot, you have everything going: create an outcome charter. And I know when people hear charter, they're like, ugh, I don't wanna create any more documents, etcetera.
It's not a model accuracy target, it's a business value target. So there is value in doing it. Right? This is something that you can reuse. So take the pain out of the word charter and just realize that this is something that you're going to do kind of one and done.
And it's going to really focus on the business value target with the AI-ready KPIs. Right? So there was a claims team that recently defined ROI around exception rates and cycle time, not hours saved. Within one quarter, because they had this charter already in place, they already had a clear financial story that leaders could go run with and really invest behind. So this not only helps with the pilots and the POCs that are in progress, but it'll help with the upfront pain sometimes of getting folks on board with some of the stuff that you want to do from an AI perspective.
Because one of the things the boards kind of come in and say is, oh, yeah, we want you to do something. Then you do something, you didn't really measure the ROI well, and they're like, well, we don't want you to do anything unless the ROI lives in this bucket.
Right? So now, if you can tell the story better upfront, before they're even clamoring around about what it is that you've done before that wasn't up to snuff with the ROI, then you can tell that story from the beginning. You can tell it backwards and say, this is where we're gonna end up, this is how I know, and this is how we're charting it.
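As a rough illustration of what such an outcome charter might contain, here is a minimal sketch. The field names, pilot name, and targets are all hypothetical, invented for the example; the point is that the charter anchors a pilot to business-value KPIs like cycle time and exception rate rather than model accuracy or hours saved.

```python
# Minimal sketch of a hypothetical "outcome charter": business-value targets,
# not model metrics. All names and numbers are invented for illustration.

outcome_charter = {
    "initiative": "claims-triage-assistant",          # hypothetical pilot name
    "business_outcome": "faster, cleaner claims decisions",
    "kpis": {
        "cycle_time_days":    {"baseline": 6.0, "target": 2.0},
        "exception_rate_pct": {"baseline": 12.0, "target": 7.0},
    },
    "explicitly_not_tracked": ["hours_saved", "model_accuracy"],
    "review_cadence": "weekly operational AI review",
    "owner": "claims operations lead",
}

def on_track(kpis: dict, observed: dict) -> dict:
    """Compare observed KPI values against charter targets (lower is better here)."""
    return {name: observed[name] <= spec["target"] for name, spec in kpis.items()}

print(on_track(outcome_charter["kpis"],
               {"cycle_time_days": 2.4, "exception_rate_pct": 6.5}))
```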
The second bit around this is to install AI operations cadences. You want to keep talking about it, a lot. I can't remember the name, so I'm not gonna give any names, but there was one financial institution that I read about that replaced model review meetings with weekly operational AI reviews, so that they could track time to decision, error rates, customer impact, and the rework avoided.
So that changed the ROI conversation again, from hypothetical to measurable. I think it's important for firms to be able to evidence this stuff because, again, like I said from the beginning, I don't think the problem is with the tech at all. There isn't anything wrong with the AI at the end of the day, but telling the story has gotten incredibly difficult for people in terms of really showcasing the value. And then the third piece of this is really normalizing the language, or said another way, standardizing the language.
And what do I mean by standardizing the language? If I say that to a lawyer, it just means, okay, you've cleared up some stuff in contracts. What I really mean here is standardizing the definitions and the labels, all of the KPIs, the skills and role names, like we were talking about with the people that are gonna be involved, and even the workflow terminology. This is even gonna hit on the other favorite topic, agentic AI. Basically, this will help you standardize what good will look like at the end of the day when it comes to any of your AI deployments.
Right? So an enterprise-wide skills or KPI glossary that's embedded in, say, Workday and Jira is gonna make sure that your staffing, your funding, and your evaluation all speak the same ROI language. Right? And this is what's really gonna unlock the KPIs and ROI that will finally show up, and you'll finally be like, okay, I'm happy with this deployment.
The tech did what it needed to do, and it gave me the results that I needed at the end of the day to go back to the board, or even just turbocharge a workflow or a process that we were looking to reimagine.
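To illustrate the glossary idea, here is a minimal sketch of what a shared KPI glossary could look like in code. The entries and definitions are hypothetical, and a real one would live in the systems of record Nina mentions, such as Workday or Jira, rather than a Python dictionary; the point is a single definition and unit per term that every team reports against.

```python
# Hypothetical enterprise KPI glossary: one shared definition and unit per term.

KPI_GLOSSARY = {
    "cycle_time": {
        "definition": "elapsed time from request received to decision delivered",
        "unit": "hours",
    },
    "exception_rate": {
        "definition": "share of items requiring manual intervention after AI handling",
        "unit": "percent",
    },
    "rework_avoided": {
        "definition": "items that previously needed a second pass and no longer do",
        "unit": "count per month",
    },
}

def report(metric: str, value: float) -> str:
    """Format a metric against the shared glossary; unknown labels fail loudly."""
    entry = KPI_GLOSSARY[metric]  # KeyError means a team is using a non-standard label
    return f"{metric} = {value} {entry['unit']} ({entry['definition']})"

print(report("cycle_time", 18.0))
```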
Absolutely. And I really appreciate you saying that about jargon and definitions, because I think, especially on the data science side, data scientists really underestimate how much is going to get garbled on impact when they encounter SMEs, in terms of what they thought
Oh, yeah.
that the words they use have a concrete definition. But no, you're using the same words over and over again with different meanings.
And even in the business, language fragmentation, you'll see it in things like analyst productivity. Right? If you say analyst productivity in one area of the company versus another, they have completely different meanings. The signals are just too noisy to justify the scaling investment.
Right? So if we double-click into that analyst productivity term, you can have operations saying productivity is the number of claims processed per analyst. Right? The AI drafts the summaries, and so volume increases by 30%. Right?
That's great.
Yep.
But then you have finance that comes in on the back end, and it's like, oh, well, analyst productivity is really cost per analyst, or cost to serve, rather.
In that example, AI reduces the overtime and the cost decreases by 8%. Both good metrics, right? But you can't really pull them together and say, okay, well, this is what we mean by analyst productivity.
So one group is measuring volume, the other group is measuring cost. Those numbers can't be compared or rolled up in a way to justify anything to anyone from an enterprise investment perspective.
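A small sketch of that collision, with hypothetical numbers: both teams report a figure under the same label, but the definitions differ, so the values cannot be rolled up into one enterprise signal.

```python
# Hypothetical "analyst productivity" collision: same KPI name, two definitions.

ops_report = {
    "metric": "analyst_productivity",
    "definition": "claims processed per analyst per week",
    "value": 130,     # up ~30% after AI-drafted summaries (invented figure)
}

finance_report = {
    "metric": "analyst_productivity",
    "definition": "fully loaded cost to serve per claim",
    "value": 41.50,   # down ~8% after overtime reductions (invented figure)
}

# Same label, different definitions: a naive roll-up would be meaningless.
assert ops_report["metric"] == finance_report["metric"]
assert ops_report["definition"] != finance_report["definition"]
print("Cannot aggregate: same KPI name, incompatible definitions.")
```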
Very, very important to head those off at the pass. And I think the culture really experiences this on some level with artificial intelligence in general, in that there are huge, enormous differences between generative and deterministic AI. From a product standpoint, in any way that you would use one or the other, they don't deserve to be compared. Really, you're comparing rocket ships versus calculators.
100%. 100%.
And that really ends up complicating the language fragmentation problem. I do feel like that's one of those terms, it's like saying sustainability in 2005: in about five years, this is all anybody's gonna be talking about, especially as this becomes a bigger problem for folks. I know we're right up on time.
I really appreciate the extra couple of minutes you gave us. Nina, thank you so much for being with us on today's program.
Thank you so much, Matt. I can't wait to do it again.
Wrapping up today's episode, I think there were at least three critical takeaways for enterprise leaders in data and AI to take from our conversation today with Nina Edwards, vice president of emerging technology and innovation at Prudential Insurance. First, rethink pre-AI metrics to capture AI's true value: shift from measuring hours saved to tracking velocity gains like faster cycle times and deployable capacity, since productivity boosts get trapped in legacy quarterly cycles and approval chains. Second, standardize language across the enterprise with a unified KPI glossary that defines terms such as cycle time, exception rates, and rework avoided, enabling comparable ROI signals that connect local efficiencies to business outcomes. Finally, adopt a human-centered operating model by reorganizing teams from doing to deciding, paired with protected sandboxes and outcome charters targeting business value, to scale pilots into sustained transformation.
Interested in putting your AI product in front of household names in the Fortune 500? Connect directly with enterprise leaders at market-leading companies. Emerj can position your brand where enterprise decision makers turn for insight, research, and guidance. Visit emerj.com/sponsor for more information. Again, that's emerj.com/sponsor.
If you enjoyed or benefited from the insights of today's episode, consider leaving us a review on Apple Podcasts and let us know what you learned, found helpful, or just liked most about the show. Also, don't forget to follow us on X, formerly known as Twitter, at Emerj, that's spelled, again, e-m-e-r-j, as well as our LinkedIn page. I'm your host, at least for today, Matthew DeMello, editorial director here at Emerj AI Research. On behalf of Daniel Faggella, our CEO and head of research, as well as the rest of the team here at Emerj, thanks so much for joining us today, and we'll catch you next time on the AI in Business podcast.
Rewiring Systems to Scale AI From Demos to Deliverables - Nina Edwards of Prudential Insurance