Today's guest is Mathew Paruthickal, Global Head of Data Architecture, Utilization, and AI Engineering at Sanofi. Founded in 1973, Sanofi is a French multinational pharmaceutical and healthcare company...
Mathew Paruthickal, Global Head of Data Architecture at Sanofi, discusses building trustworthy AI systems in life sciences through phased implementation, human-in-the-loop governance, and embedded compliance. Key insights include the crawl-walk-run approach to connecting structured and unstructured data, building explainability and auditability into engineering pipelines from day one, and creating self-improving systems through continuous feedback loops. The conversation emphasizes that in regulated industries, AI amplifies rather than replaces human judgment, with regulatory precision, patient safety, and compliance as non-negotiable priorities.
Paruthickal outlines the crawl-walk-run methodology for integrating diverse data sources in regulated environments. The approach emphasizes starting with small user bases, demonstrating quick wins, and building modular systems that layer capabilities over time while maintaining business alignment through co-authored problem statements.
Discussion of how life sciences organizations must architect AI systems with traceability and compliance as day-one priorities. Paruthickal explains the technical requirements for creating explainable AI frameworks that can track data provenance, document versions, and provide audit trails for regulatory purposes.
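The provenance-and-audit-trail requirement described above can be illustrated with a minimal sketch. The class and field names here are hypothetical, not from Sanofi's systems; the point is that each AI output carries the source-document versions and timestamp a regulator would ask for.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: an AI-generated answer that carries the provenance
# needed for a regulatory audit trail -- which source documents (and which
# versions) it drew on, which model produced it, and when.
@dataclass
class AuditedAnswer:
    text: str
    model_version: str
    source_docs: list  # (doc_id, doc_version) pairs the answer cites
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def audit_record(self) -> dict:
        """Flatten the answer into a single audit-log entry."""
        return {
            "model": self.model_version,
            "sources": [f"{d}@{v}" for d, v in self.source_docs],
            "timestamp": self.created_at,
        }

answer = AuditedAnswer(
    text="Dosage guidance summary...",
    model_version="rag-v1.2",
    source_docs=[("label-2024-03", "v7"), ("csr-1142", "v2")],
)
record = answer.audit_record()
```

Persisting `audit_record()` output alongside every generation is one way to make explainability a day-one pipeline property rather than a retrofit.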
Exploration of the two critical human roles in AI systems: experts who verify outputs and domain specialists who design explainable systems. Paruthickal emphasizes that in life sciences, humans must certify, sign off, and be held accountable, with AI serving to amplify rather than replace human judgment.
Concrete use cases showing how AI transforms manual-heavy workflows in adverse event reporting, ICSR document generation, and clinical narrative writing. Paruthickal details specific techniques including auto-extraction from free-form text, LLM-based pre-filling, and reinforcement learning for model fine-tuning.
Framework for creating proactive AI systems that surface risks and optimize workflows through continuous learning. Paruthickal explains how capturing human overrides, corrections, and approvals creates training signals that improve AI accuracy over time, while monitoring ensures ongoing trust and regulatory compliance.
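A minimal sketch of the feedback loop described above, with hypothetical names: human reviewers approve or correct AI drafts, each decision is logged as a labeled signal, unchanged approvals feed a trust metric, and corrections become fine-tuning pairs.

```python
# Hypothetical sketch: capture human overrides, corrections, and approvals
# as training signals, as described in the episode.
class FeedbackLog:
    def __init__(self):
        self.events = []  # (draft, final, action) tuples

    def record(self, draft: str, final: str):
        """Log a reviewer decision; identical text means approval."""
        action = "approved" if draft == final else "corrected"
        self.events.append((draft, final, action))

    def acceptance_rate(self) -> float:
        """Share of drafts accepted unchanged -- a simple trust metric."""
        if not self.events:
            return 0.0
        ok = sum(1 for _, _, a in self.events if a == "approved")
        return ok / len(self.events)

    def training_pairs(self):
        """Corrected drafts become (input, target) pairs for fine-tuning."""
        return [(d, f) for d, f, a in self.events if a == "corrected"]

log = FeedbackLog()
log.record("Patient experienced headache.", "Patient experienced headache.")
log.record("Event was mild.", "Event was moderate, per reviewer.")
```

Monitoring `acceptance_rate()` over time is one way to verify that the loop is actually improving accuracy and sustaining reviewer trust.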
"Waking Up" Data in Clinical Workflows with AI - with Mathew Paruthickal of Sanofi