Today's guest is Suvaleena Paul, Assistant Vice President and Senior Analyst in Fraud, Innovation, and Analysis at Bank of America. Suvaleena joins Emerj Editorial Director Matthew DeMello to discuss ...
Welcome, everyone, to the AI in Business podcast. I'm Matthew DeMello, editorial director here at Emerj AI Research. Today's guest is Suvaleena Paul, assistant vice president and senior analyst in fraud, innovation, and analysis at Bank of America. Suvaleena joins us today to explore how AI-driven analytics are transforming fraud detection and prevention in the financial sector. Suvaleena breaks down practical strategies for embedding AI into fraud workflows, improving threat identification and response times.
The conversation also highlights key workflow enhancements that enable faster decision making and measurable ROI by reducing fraud risk and operational inefficiencies. Just a quick note for our audience that the views expressed on today's show by Suvaleena do not reflect those of Bank of America or its leadership. But first, are you driving AI transformation at your organization, or maybe you're guiding critical decisions on AI investments, strategy, or deployment? If so, the AI in Business podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon and AI pioneers like Yoshua Bengio.
With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the program. If you're involved in AI implementation, decision making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit emerj.com and fill out our thought leader submission form. That's emerj.com, and click on Be an Expert.
You can also click the link in the description of today's show on your preferred podcast platform. That's emerj.com/expertone. Again, that's emerj.com/expertone. Without further ado, here's our conversation with Suvaleena Paul. Suvaleena, welcome to the program.
It's a pleasure having you.
Hi. Hi, Matthew. It's a great pleasure being here and I'm very excited to discuss all the interesting things that we spoke about and dive deep into it.
Absolutely. I think a lot of what we're seeing on the fraud side, especially in banking, translates into the retail space. Heck, it even translates, with some limited visibility, into B2B spaces as well. But wherever we're seeing customer workflows, there are a lot of lessons to take in terms of solving these challenges from the lens of data. Among those challenges, we're hearing fraudsters grow more sophisticated with tools like AI and access to massive digital footprints, being able to make a hundred email sign-on attempts at once.
The speed and subtlety of the attacks are increasing, from synthetic identities to biometric bypasses and real-time account takeovers. We're seeing a lot of different threats right now that, a couple of years ago, we thought would start to really metastasize, and they're right on time. For leaders in fraud strategy across industries, the challenge isn't just keeping up. It's anticipating what's next and preparing systems to adapt in real time. I'm starting there because I know your team has put a lot of effort into moving off the quarterly rounds of measurement that we're so used to, especially from my background in tax, and thinking more in real time, and technology is helping you get there.
How are you seeing the fraud landscape evolve and what new threats do you expect to emerge in the next year and a half or two years?
Honestly, the pace at which fraud is evolving right now is unlike anything we have seen before. And one of the biggest shifts is the rise of synthetic identities, like you mentioned. Fraudsters are using a mix of real and fake data to create completely new personas that can pass the KYC checks and slowly build trust. A layer of it also comes in on the scam side. Because we're talking about synthetic IDs, I'm reminded of a particular type of scam called a love scam, wherein a person, probably overseas, will actually get into a relationship with a customer, be it an older person or a younger person, will build a friendship and a relationship over four or five months, gain trust, and actually get into a love relationship, convincing that person, I am your boyfriend or girlfriend. And at the end of those four or five months, they'll say, I'm stuck in a very difficult situation, or, I have been arrested.
I need $10,000 or something like that, or even more. And the person, because they are so invested and they believe this person is real, ends up sending over the money. And even if not that, they're probably gonna cook up something so believable, like, I need your bank credentials because I cannot take out the money from my account, so why don't you give me your credentials, I'll just log in and do the transfer myself, and stuff like that. So it's really interesting, and people are so vulnerable to this.
Like, as a bank, whenever we detect patterns like this, we go ahead and send out alerts, we warn the customers. And at times, they override it on their own. They're like, no. We do not wanna go ahead with your alert. We wanna go ahead and pay this person.
So sorry for digressing. But, yeah, these are some of the very, very crazy things that are happening. And pair that with behavioral spoofing and AI-powered tools that mimic how legitimate users behave, and you've got attacks that are almost invisible to legacy systems. We are also seeing some very real-time account takeovers where bad actors don't just steal identities anymore. They actually suppress the alerts and reroute the messages, and a similar kind of incident actually happened.
The case came to me probably in the first year of my tenure at Bank of America, wherein we analyzed and found out that someone known, probably a spouse or a girlfriend or boyfriend, I don't know what, got a hold of the phone device for logging into the customer's account, and they added their own fingerprint. And the first thing they did after logging in was suppress all the alerts and scrape out all the security questions, and then they ended up draining out $800,000 over six months. And the customer didn't even know anything, because there weren't any traditional red flags. It seemed like the customer was making the transactions, and hence it did not raise any flags from our end. But this was a very unique case that helped us put in four or five strategies back to back that are cumulatively gonna handle such cases going ahead.
So, yeah, stuff like that just keeps happening. And looking ahead, like you asked, what do we see in the next eighteen to twenty-four months? I think fraud will blend in even more. Instead of brute force or suspicious spikes, we'll see what I call clean fraud, where everything looks right on the surface, the timing, the behavior. However, some subtle mismatches will reveal something off, and we probably need to be more like the FBI here in the fraud space in the financial institutions.
So detection has to evolve just as fast, down to the second, not just to the day.
Yeah. Very much the challenges that we've heard across banking. You mentioned we have to be the FBI. We wrapped an interview not too long ago with Nick Lewis at Standard Chartered Bank, who leads the crime operations unit.
I'm tweaking his title a little bit to keep it short. But he also mentioned that one of the bigger trends we've seen over the last few years, in terms of government enforcement, is that they're really leaning on the financial institutions. You have the data. It's your book report.
We're just gonna check your homework and make sure that you're targeting folks the right way, but we expect you to at least have an eye on this. And the big development, which we haven't heard yet across the show, but I think it's something folks have been anticipating for the last few years, is that the cat-and-mouse game has kind of caught up with the derivatives, and now we have the exponential scaling that we see on the generative side. It's not just the synthetic data, but also that fraudsters are deploying these systems on a mass sign-on basis, creating a lot of content for folks to sift through. And it's all specifically tailored to target the KYC operations and standards that we already have in place. Those regulations very much date from before the machine learning period, the deterministic phase, where we first saw a lot of this technology. Now it's at the full-blown generative phase. We're seeing the mouse side of the table really start to work at scale.
One of the advantages, and it's a limited advantage, at least on the banking side, is that in terms of a numbers game, you outnumber these organizations, at least in terms of manpower, and that's one of the things they have to compensate for, which we're seeing in terms of the scaling of the technology. But how are you seeing folks on the banking side of the table start to think about these problems and try to build solutions that counter at that same scale?
Well, we've had to completely rethink our approach. Where we used to review fraud rules on a quarterly basis, like you said, we are now operating in what is called a rapid-response model. If we see a pattern emerge, we may only have hours to act. So the team is set up to pivot quickly and deploy countermeasures almost in real time. Probably a month ago or so, there was something that came up on a Friday evening at 6 PM.
We have this entire process wherein alerts come in and we get regular hourly emails saying if there's a spike or if there is a behavior or pattern change. So we actually brought in folks, and it was almost an overnight endeavor, wherein we controlled the entire thing and got it under control within five hours or so. So, yep, it is very, very real time. And, obviously, we are also investing in a lot of enriched risk scoring that includes not just where the transaction came from, or whether a new device got added, and stuff like that. We also need to know things like how old the email address is, whether the phone number has been used for other accounts, and what the digital footprint looks like overall.
On top of that, there's also behavioral biometric data. If you go on the website, wherever you move your cursor and for how long you stay there, how many times you type in your password and backspace, everything can be captured. And we have technologies in place to capture all of that, bring in all these data, make a beautiful concoction cocktail, and then finally get down to a pattern that we can actually follow and put down a rule. So this is what we are doing. And, of course, we're constantly balancing the equation between fraud mitigation and customer experience.
Because it's not just about stopping the bad guys. It's also about doing that without blocking or frustrating our legitimate users. We do not want to tamper with our customer experience, because it matters a lot. Trust me, Matthew. It matters a lot.
Absolutely.

If we put in an alert that wrongfully, as a false positive, blocks out someone's account, they get really, really mad and it escalates. So we have to maintain a very sweet balance between the two, wherein we actually stop the fraud but also don't harass or bother the legitimate customers, especially the ones who've been with us a long time. So we measure success not just by the fraud dollars stopped, but also by how much friction we add in the process. It's all about stopping more while interrupting less.
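The behavioral signals she describes, cursor dwell time, typing rhythm, backspace counts, can be sketched as a simple feature-extraction step. This is a toy illustration only; the event schema and field names are invented for the example and are not Bank of America's actual capture pipeline:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical event record: one row per captured UI event in a session.
@dataclass
class UIEvent:
    kind: str       # "dwell", "keypress", or "backspace"
    millis: float   # dwell duration or inter-key gap, in milliseconds

def session_features(events: list[UIEvent]) -> dict[str, float]:
    """Collapse a session's raw events into coarse behavioral features."""
    dwells = [e.millis for e in events if e.kind == "dwell"]
    gaps = [e.millis for e in events if e.kind == "keypress"]
    backspaces = sum(1 for e in events if e.kind == "backspace")
    return {
        "avg_dwell_ms": mean(dwells) if dwells else 0.0,
        "avg_keygap_ms": mean(gaps) if gaps else 0.0,
        "backspace_count": float(backspaces),
    }

session = [UIEvent("dwell", 1200), UIEvent("keypress", 90),
           UIEvent("keypress", 110), UIEvent("backspace", 0)]
print(session_features(session))
```

Features like these would then feed the pattern-matching and rule-building steps she mentions, alongside transaction and device signals.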
Absolutely. And I think as the scale escalates, so does the risk. You mentioned really taking a deeper look at the enriched risk scoring as a way to think more deeply about the signals that you're getting and how best to act on them, in terms of that balance with the customer experience and the business. I'm wondering what you can tell us about what goes into that enriched risk scoring vis-a-vis the problems we've been differentiating, fraud at scale versus kind of the first-generation stuff we saw when AI first appeared.
Obviously, these scoring techniques weren't in place before, and that is why, when AI first came in, the problem was exactly this: we were putting in rules that had a lot of false positives, like I said. And at times, there are people who just genuinely forget their password and who aren't trying to log in to someone else's account. So these are the kinds of things. We have put in more technologies and more processes to gather more historical information about this, so that we can handle it customer by customer, or rather cluster it. And probably there are a thousand customers who have this kind of behavior and who are legit.
So we have a different kind of algorithm running for them. We have a lot of ways of clustering and clubbing these kinds of behaviors and patterns. And for each one of these clusters, we have a different algorithm running. And based on how risky the behavior looks, we assign a score, and there are a couple of scoring metrics. This is something that I cannot talk about in detail.
Of course. Yeah. Right.
We don't wanna tip off the bad guys too much.
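The cluster-by-cluster scoring idea can be sketched roughly as below. The cluster names, the single feature, and the z-score-style metric are all assumptions for illustration; as noted above, the bank's actual metrics aren't public. The point is that the same raw number is scored against the baseline of its own behavioral cluster rather than one global threshold:

```python
from statistics import mean, stdev

# Hypothetical clusters: historical failed-logins-per-week for customers
# who share a behavior pattern. Values are invented for the example.
clusters = {
    "frequent_forgetters": [2.0, 3.0, 2.5, 3.5, 2.0],
    "stable_users":        [0.0, 0.0, 1.0, 0.0, 0.0],
}

def risk_score(cluster: str, failed_logins: float) -> float:
    """Standardized deviation from the cluster's own historical norm."""
    hist = clusters[cluster]
    sigma = stdev(hist) or 1.0   # guard against a zero-variance cluster
    return (failed_logins - mean(hist)) / sigma

# Three failed logins is routine for one cluster, anomalous for the other.
print(round(risk_score("frequent_forgetters", 3.0), 2))
print(round(risk_score("stable_users", 3.0), 2))
```

This is why the false-positive problem she describes shrinks: customers who genuinely forget passwords score as normal within their own cluster.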
Since you brought in machine learning and AI, one of the most cutting-edge areas, I won't call it a technology, but an upcoming space where things are gonna be really, really helpful, and a space that's gonna blow up, is graph theory in data science. Especially in the financial space, in terms of fraud prevention and risk mitigation, we use graph theory to cluster the sources of fraud, first-party fraud and then the second line of fraud, the third line of fraud. And basically, at times it's not just one person who's in play. It is a line of people. Two people in between are selected as mules.
They have no clue that they're actually carrying out a fraudulent transaction. Someone is just directing them to do it, and they go ahead and do it. Stuff like that. But graph theory is something very, very interesting, and it's an upcoming thing.
We are also leveraging it to some extent, and I hope that's gonna increase over time.
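The graph idea she describes can be illustrated with a minimal sketch: accounts as nodes, transfers as directed edges, and a mule chain showing up as a path from a flagged origin to a cash-out point. The account IDs and the graph itself are invented for the example:

```python
from collections import deque

# Toy transfer graph: each key sends money to the accounts in its list.
transfers = {
    "A": ["B"],        # flagged origin sends to B
    "B": ["C", "D"],   # B fans out to two possible mules
    "C": ["X"],        # X is a known cash-out account
    "D": [],
}

def chains_to(graph, source, sink):
    """Return every directed path from source to sink (breadth-first)."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        if path[-1] == sink:
            paths.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:   # avoid revisiting, i.e. cycles
                queue.append(path + [nxt])
    return paths

print(chains_to(transfers, "A", "X"))
```

In this sketch, the accounts sitting mid-path are the candidate mules: they touch the money without being the origin or the cash-out point.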
A lot of what you pulled apart right there, in terms of graph theory and enriched risk scoring, and the move that we're seeing a lot, from quarterly measurement to real-time strategy, has a much larger application than just what we're seeing in financial services; it applies to just about every industry. But I'm wondering how even places like Bank of America, the big enterprise banks, are thinking about translating those priorities from quarterly to real-time strategy, and what needs to be there in terms of signal architecture.
Great question. We've moved away from the old model of using these blunt thresholds, like three logins in five minutes equals fraud. That kind of logic is too rigid for today's fraud landscape. Instead, we are building layered signal architectures that look at behavior, transaction context, device intelligence, and more. And, like I said before, also the biometric data.
Bringing all of this together, think of it like this: I want the system to let good customers fly and bad ones crash. So that means personalizing our fraud detection based on who's interacting, how they usually behave, and what context they are operating in. We use device fingerprinting, login behavior, and even network velocity to determine if something is off, sometimes within milliseconds. And the feedback loop is key. When we get something right and approve more good transactions, the data comes right back into the system to keep improving the detection architecture.
It's not just about loosening the thresholds; it's about getting smarter with the signals we trust.
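The contrast with a blunt "three logins in five minutes" rule can be sketched as a weighted multi-signal score with a crude feedback loop. The signal names, weights, and update rule here are illustrative assumptions, not the production architecture she describes:

```python
# Each signal contributes a weight instead of any one signal deciding alone.
# A trusted signal can even subtract risk (known_network).
weights = {"new_device": 0.5, "odd_hour": 0.2,
           "velocity_spike": 0.6, "known_network": -0.4}

def combined_score(signals: dict[str, bool]) -> float:
    """Sum the weights of every signal that fired for this event."""
    return sum(w for name, w in weights.items() if signals.get(name))

def feed_back(signals: dict[str, bool], was_fraud: bool, lr: float = 0.05):
    """Nudge the weights of fired signals toward the confirmed outcome."""
    for name, fired in signals.items():
        if fired and name in weights:
            weights[name] += lr if was_fraud else -lr

event = {"new_device": True, "odd_hour": True, "known_network": True}
print(combined_score(event))        # 0.5 + 0.2 - 0.4
feed_back(event, was_fraud=False)   # analyst confirmed it was legitimate
```

The feedback step is the loop she emphasizes: confirmed outcomes flow back into the detection logic instead of waiting for a quarterly rule review.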
Absolutely. And because we're talking about a very regulated space, we have, on one end, the super-organized bad guys who are leveraging these latest technologies. We've got customers in the middle who might be duped. You talked about catfishing earlier in the show, which is the first time we've heard it on the program.
Although catfishing has been around for ten or fifteen years, to see it weaponized at this scale is really something in terms of the technology. But on the other side of the table, you also need to stay compliant with regulations. A lot of these regulations date back to before machine learning, in terms of what their standards are for KYC, and they're being outdated by this cat-and-mouse game we're starting to see play out at scale. What advice do you have for other enterprise leaders in financial services spaces like banking that are incredibly regulated, in terms of finding that balance between going after the bad guys, letting the good customers go, and making sure that their operations are compliant?
So, yeah, this is where it gets tricky. We are absolutely excited about what AI can do, especially for the pattern recognition, anomaly detection, and reducing false positives. But in a highly regulated environment, you can't just plug in an external AI model and call it a day. So for us, the safest and most strategic path forward is using internal closed loop AI systems that are tightly governed. We keep external vendors at an arm's length unless they are fully vetted.
And even then, we are cautious. AI can be an all-powerful ally, but if it's not built and controlled correctly, it can become a vulnerability as well.
Absolutely.
So we don't want to expose that in front of the bad guys, and the stakes are huge. The fraudsters have nothing to lose, but we do. And, frankly, that asymmetry is what keeps us focused. We want speed, yes, but not at the expense of compliance, privacy, or customer trust. So, to your compliance question, this is what I'd say.
It's a rolling process wherein we keep revisiting whatever our existing processes are. And even when we are building internal tools and different models, we have to keep going back and rechecking their performance, their viability. Whether it's on a weekly or a monthly basis depends on the importance level of the function. We have to keep rechecking how they are performing, and at what rate they are catching fraud, identifying the true frauds without coming up with a whole lot of false positives. So, yeah, my team is leaning into a lot of in-house AI tools, ones for which we know exactly how they are trained, tested, and monitored.
So governance is currently not just a checkbox. It's a foundation. And this is, I feel, not just within Bank of America. We'll see this across most of the big financial institutions. Whenever I meet leaders from other banks, this is exactly what I get to hear from their end as well.
And this is a challenge for sure, but people are coming up with a lot of innovative ways of tackling it within the company, for sure.
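The recurring model checks she describes can be sketched as a simple review cycle that compares a period's alerts against confirmed outcomes and flags a model whose precision drops, i.e. whose false positives grow too numerous. The threshold and record layout are assumptions for the example:

```python
def review_cycle(alerts: list[dict], min_precision: float = 0.8) -> tuple[float, bool]:
    """alerts: [{"flagged": bool, "confirmed_fraud": bool}, ...]
    Returns (precision, needs_re_review)."""
    flagged = [a for a in alerts if a["flagged"]]
    if not flagged:
        return 1.0, False                      # nothing flagged, nothing to fault
    true_hits = sum(a["confirmed_fraud"] for a in flagged)
    precision = true_hits / len(flagged)
    return precision, precision < min_precision

# One review period: three alerts fired, only two were real fraud.
week = [
    {"flagged": True,  "confirmed_fraud": True},
    {"flagged": True,  "confirmed_fraud": False},
    {"flagged": True,  "confirmed_fraud": True},
    {"flagged": False, "confirmed_fraud": False},
]
print(review_cycle(week))
```

Running a check like this weekly or monthly, per model, matches the cadence she describes: the importance of the function sets the frequency, and a failing score triggers the re-review rather than waiting for a fixed quarterly cycle.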
Absolutely. And, I mean, it shows how a lot of these things kind of row in the same direction. The compliance is there to make sure you're taking in enough data so that the government can kind of play teacher and student with you: let me check your homework and make sure that you're doing it right, in this new method of enforcement. That's led to a very new dynamic, very different from forty years ago, where you see the enterprise folks really get ahead of the regulations in terms of their own scrutiny, making sure they can meet those new compliance challenges, because it means more for them in terms of business challenges in the short term than it did forty years ago. Very, very fascinating stuff, especially to hear from Bank of America.
Suvaleena, really appreciate you being with us these last twenty, twenty-five minutes or so and giving us an inside look. Thanks so much for being with us this week.
Thank you so much. It was a pleasure talking to you. Thank you, Matthew.
Wrapping up today's episode, I think there were three critical takeaways for enterprise leaders focused on fraud prevention, risk management, and AI-driven innovation. First, integrating AI into fraud detection workflows can significantly enhance threat identification speed and accuracy, reducing exposure to financial loss. Second, fostering cross-functional collaboration between analytics, fraud teams, and IT is critical to embedding AI insights effectively into operational processes. Finally, continuous testing and refinement of AI models ensure sustained ROI by adapting to emerging fraud patterns and maintaining regulatory compliance. Interested in putting your AI product in front of household names in the Fortune 500?
Connect directly with enterprise leaders at market-leading companies. Emerj can position your brand where enterprise decision makers turn for insight, research, and guidance. Visit emerj.com/sponsor for more information. Again, that's emerj.com/sponsor. If you enjoyed or benefited from the insights of today's episode, consider leaving us a review on Apple Podcasts and let us know what you learned, found helpful, or just liked most about the show.
Also, don't forget to follow us on X, formerly known as Twitter, at Emerj, and that's spelled, again, e m e r j, as well as our LinkedIn page. I'm your host, at least for today, Matthew DeMello, editorial director here at Emerj AI Research. On behalf of Daniel Faggella, our CEO and head of research, as well as the rest of the team here at Emerj, thanks so much for joining us today, and we'll catch you next time on the AI in Business podcast.
Why False Positives Are Costing Banks More Than Fraud - with Suvaleena Paul of Bank of America