Naveen Kumar, Head of Insider Risk Analytics and Detection at TD Bank, discusses the critical security and governance challenges preventing AI adoption in banking. The conversation covers foundational risks like data leakage, prompt injection, and shadow AI, while emphasizing practical solutions including role-based access controls, treating AI agents as quasi-employees, and phased deployment strategies. Kumar stresses that successful AI implementation in regulated industries requires balancing innovation with regulatory compliance through disciplined data controls, human oversight, and conservative approaches to automation in compliance workflows.
Kumar identifies six critical security challenges preventing AI adoption in regulated financial institutions: data leakage from internal investigation notes, prompt injection attacks (social engineering of AI), model inversion revealing internal AI architecture, shadow AI tools operating outside IT governance, hallucinations producing confidently incorrect outputs, and model drift. These challenges represent fundamental risks that must be addressed before scaling AI in banking.
Kumar discusses how purpose-fit AI with role-based access controls can mitigate both security and hallucination risks. By limiting AI responses based on user roles (HR sees HR data, investigators see flagged employees, finance sees nothing irrelevant), institutions can prevent unauthorized data access while improving output accuracy through proper context.
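The role-based filtering Kumar describes can be sketched as a pre-prompt context filter. This is a minimal illustration, not TD Bank's implementation; the role names, scopes, and record fields are assumptions chosen to mirror the examples in the paragraph above.

```python
# Hypothetical sketch: restrict the context an AI assistant may draw on,
# by caller role, before any prompt is assembled. Role scopes and record
# fields are illustrative assumptions, not an actual production schema.

ROLE_SCOPES = {
    "hr": {"hr"},
    "investigator": {"hr", "investigation"},
    "finance": {"finance"},
}

def filter_context(records, role):
    """Return only the records this role is entitled to see."""
    allowed = ROLE_SCOPES.get(role, set())
    return [r for r in records if r["domain"] in allowed]

records = [
    {"id": 1, "domain": "hr", "text": "leave history"},
    {"id": 2, "domain": "investigation", "text": "flagged employee note"},
    {"id": 3, "domain": "finance", "text": "ledger entry"},
]

print([r["id"] for r in filter_context(records, "hr")])            # [1]
print([r["id"] for r in filter_context(records, "investigator")])  # [1, 2]
print([r["id"] for r in filter_context(records, "analyst")])       # [] (unknown role sees nothing)
```

Filtering before prompt assembly, rather than after generation, addresses both concerns in the paragraph: the model never receives data the user is not entitled to, and the narrower context reduces the surface for off-topic or hallucinated answers.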
Kumar outlines a comprehensive AI governance approach that includes full data visibility, role-based access, guardrails as 'invisible force fields', and treating AI agents like employees with similar oversight and approval processes. This framework includes tracking what data AI touches, who reviews its work, and implementing hybrid deployment strategies.
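One piece of this framework, tracking what data an AI agent touches, can be sketched as an append-only audit log keyed by agent identity, much as an access log would be kept for a human employee. The class and field names below are illustrative assumptions, not a described system.

```python
# Hypothetical sketch of "treat AI agents like employees": every dataset
# an agent touches is logged with a timestamp, so reviewers can answer
# "what data did this agent use?" after the fact. Names are assumptions.

from datetime import datetime, timezone

class AgentAuditLog:
    def __init__(self):
        self.entries = []  # append-only access record

    def record_access(self, agent_id, dataset):
        self.entries.append({
            "agent": agent_id,
            "dataset": dataset,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def datasets_touched(self, agent_id):
        """All distinct datasets a given agent has accessed."""
        return sorted({e["dataset"] for e in self.entries if e["agent"] == agent_id})

log = AgentAuditLog()
log.record_access("agent-7", "kyc_alerts")
log.record_access("agent-7", "case_notes")
log.record_access("agent-7", "kyc_alerts")
print(log.datasets_touched("agent-7"))  # ['case_notes', 'kyc_alerts']
```

The same pattern extends naturally to logging who reviewed each output, giving the human-oversight trail the paragraph describes.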
Kumar advocates for phased approaches to AI deployment in regulated environments, starting with specific use cases and limited data availability rather than comprehensive solutions. He emphasizes conservative approaches for compliance use cases, maintaining human-in-the-loop for high-stakes decisions, and using tiered alert systems where AI handles lower-risk items autonomously.
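The tiered alert idea above can be sketched as a simple risk-score router: low-risk items are handled autonomously, everything else goes to a human. The tier names and thresholds are illustrative assumptions, not values from the episode.

```python
# Hypothetical sketch of a tiered alert workflow: the AI auto-closes only
# low-risk alerts; mid-risk alerts get human-in-the-loop review and the
# highest-risk ones are escalated. Thresholds here are assumptions.

def route_alert(risk_score, auto_close_below=0.2, escalate_above=0.8):
    """Route an alert by risk score (0.0 = benign, 1.0 = highest risk)."""
    if risk_score < auto_close_below:
        return "auto_close"       # AI handles autonomously
    if risk_score > escalate_above:
        return "escalate_senior"  # high stakes: senior analyst review
    return "human_review"         # default: human-in-the-loop

print(route_alert(0.05))  # auto_close
print(route_alert(0.5))   # human_review
print(route_alert(0.95))  # escalate_senior
```

Keeping the autonomous band narrow and the default path human-reviewed reflects the conservative posture Kumar recommends for compliance use cases.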
Kumar provides actionable steps for leaders starting AI initiatives: create safe sandboxes for experimentation, build comprehensive AI inventories, involve compliance early in development, reward responsible innovation over flashy projects, and measure success by reduced incidents and better detection rather than just innovation metrics.
Kumar emphasizes that data quality remains the foundational element of AI success, with the 'garbage in, garbage out' principle still holding true despite being overused. He notes that human-in-the-loop review and judgment have proven more valuable than was expected back in early 2023, serving as a critical speed limit on AI deployment.
Governing AI for Fraud, Compliance, and Automation at Scale - with Naveen Kumar of TD Bank