Today's guest is Michael Finley, Chief Technology Officer at AnswerRocket. Founded in 2013, AnswerRocket builds enterprise AI agents delivering measurable outcomes for Fortune 2000 clients across cons...
Michael Finley, CTO of AnswerRocket, explains what makes AI agents enterprise-ready beyond chatbots and demos. True agents require business strategy alignment, valid data access through proper tools, and software-grade engineering with testing and governance. Success depends on rapid iteration cycles (6 weeks not 6 months), avoiding over-constraining agent capabilities, and maintaining model independence. The key is treating LLMs as software requiring packaging, testing, and monitoring while leveraging existing enterprise tools and workflows for cross-functional value.
Finley distinguishes real agents from chatbots by their ability to make decisions and take actions autonomously. Enterprise agents need three core elements: understanding business strategy (not just RAG over documents), access to valid structured data that models can properly consume, and well-designed tools that prevent hallucination. The critical insight is that LLMs haven't been pre-trained on specific business problems like pricing lollipops, so proper data presentation through tools is essential.
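The "valid data through well-designed tools" point can be sketched in code. This is a minimal illustration, not AnswerRocket's implementation: the function name `get_sku_pricing`, the `PriceRecord` type, and the in-memory table are all hypothetical. The key design choice is that the tool returns either validated structured data or an explicit not-found signal, so the model never has to invent a price it was never trained on.

```python
from dataclasses import dataclass

@dataclass
class PriceRecord:
    sku: str
    unit_price: float
    currency: str
    as_of: str  # ISO date the figure is valid for

# Stand-in for a governed, authorized data source the agent may query.
_PRICE_DB = {
    "LOLLIPOP-12CT": PriceRecord("LOLLIPOP-12CT", 3.49, "USD", "2024-01-15"),
}

def get_sku_pricing(sku: str) -> dict:
    """Tool exposed to the LLM: returns validated data, or an explicit
    'not found' result, so the model cannot hallucinate a price."""
    record = _PRICE_DB.get(sku)
    if record is None:
        return {"found": False, "sku": sku}
    return {"found": True, **record.__dict__}
```

An agent runtime would register this function as a callable tool; the explicit `found` flag gives the model an honest path to say "I don't know" instead of guessing.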
A common failure pattern occurs when teams repeatedly narrow agent capabilities after each demo iteration, reducing scope from 100% to 30% to eventually 1% of the original intent. This happens when teams add excessive guardrails out of nervousness about non-deterministic outputs. The result is agents that feel inferior to consumer ChatGPT, not because the underlying models are less capable, but because of poor engineering. The solution is treating agents as battle-hardened software requiring proper packaging and testing rather than progressive restriction.
Enterprises must avoid binding too tightly to specific model providers' APIs and ecosystems despite hyperscaler commitments. The competitive landscape means no single provider will always have the best model. Critical governance includes controlling model versions (not auto-subscribing to 'latest'), regression testing model changes like software updates, and treating LLMs as software requiring packaging, testing, release cycles, and ongoing monitoring. Model selection should be based on the simple contract: prompts + tools = actions.
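The version-pinning and regression-testing governance described above can be sketched as follows. This is an assumption-laden illustration: the pinned model ID is an example, `GOLDEN_CASES` is a made-up suite, and `complete` is a generic stand-in for any provider's completion call, which is exactly what keeps the `prompts + tools = actions` contract provider-independent.

```python
# Pin an explicit model version; never auto-subscribe to a 'latest' alias.
PINNED_MODEL = "example-model-2024-08-06"  # illustrative ID

# Golden prompts with expected answer fragments: the regression suite a
# candidate model version must pass before promotion, like any software release.
GOLDEN_CASES = [
    ("Which tool returns SKU prices?", "get_sku_pricing"),
    ("What currency is SKU pricing reported in?", "USD"),
]

def run_regression(model_id: str, complete) -> bool:
    """`complete(model_id, prompt) -> str` abstracts the provider API,
    so swapping providers only means swapping this callable."""
    return all(expected in complete(model_id, prompt)
               for prompt, expected in GOLDEN_CASES)
```

In practice the suite would be far larger and scored more loosely than substring matching, but the shape is the same: a model upgrade is a release candidate, not an automatic event.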
While data governance remains critical (authorized access, version control, proper provisioning), LLMs eliminate the need for extensive data harmonization projects. Unlike traditional systems requiring 18-month normalization efforts to join disparate sources, LLMs can consume multiple data sources separately and synthesize them in context - similar to how humans read multiple reports and draw conclusions. This creates network effects where value builds exponentially across sales, competitor, warehouse, distributor, and weather data without requiring unified schemas.
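The "no harmonization project" idea can be shown concretely. In this sketch (source names and payloads are illustrative), each source is serialized independently and dropped into the prompt as its own labeled block; nothing forces the sources into a shared schema, and the model synthesizes across them the way an analyst reads several reports side by side.

```python
import json

def build_context(sources: dict[str, object]) -> str:
    """Serialize each data source as a separate labeled block for the
    prompt, with no schema unification across sources."""
    blocks = [
        f"### Source: {name}\n{json.dumps(payload, indent=2)}"
        for name, payload in sources.items()
    ]
    return "\n\n".join(blocks)

context = build_context({
    "sales": {"region": "NE", "units": 12000},
    "weather": {"region": "NE", "forecast": "heatwave"},
    "distributor": {"region": "NE", "stock_days": 4},
})
```

Adding a new source (competitor data, warehouse data) is one more dictionary entry, which is where the network effect comes from: each addition compounds with every existing source at near-zero integration cost.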
Success indicators focus on actual usage - an agent so valuable it can't be 'ripped out of users' hands'. Critical approach: deploy in 6 weeks not 6 months, then iterate 10 times over that period. Choose problems where money is already being spent (not new initiatives) to ensure measurable ROI, and monitor what is NOT being used after AI deployment. Avoid chatbot-only strategies where ROI measurement is unclear - focus on jobs AI can partially automate with human oversight.
Enterprises have decades of business knowledge embedded in Excel sheets, reports, notebooks, and utilities across departments. These become LLM tools or agents that can be accessed across functions. Example: supply chain commodity pricing risk tools should inform advertising spend decisions. This eliminates 90-day, 12-gate processes by giving LLMs access to existing departmental tools. Agents can orchestrate entire workflows (like 90-day supplier onboarding) and escalate to humans only when exceptions occur - the 'vibe admin' pattern.
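The cross-functional tool and 'vibe admin' escalation patterns above can be sketched together. Everything here is hypothetical: `commodity_risk_score` stands in for decades-old spreadsheet logic now exposed as a function, and `advise_ad_spend` shows an advertising-side agent consuming a supply-chain tool, escalating to a human only when the tool hits a case it cannot handle.

```python
class NeedsHumanReview(Exception):
    """Raised when the agent should stop and escalate to a person."""

def commodity_risk_score(commodity: str, exposure_usd: float) -> float:
    """Stand-in for legacy supply-chain spreadsheet logic, wrapped as a tool."""
    base = {"sugar": 0.4, "cocoa": 0.7}.get(commodity)
    if base is None:
        # The exception case: no model for this commodity, so escalate.
        raise NeedsHumanReview(f"No risk model for {commodity!r}")
    return min(1.0, base * (1 + exposure_usd / 1_000_000))

def advise_ad_spend(commodity: str, exposure_usd: float, budget: float) -> float:
    """Cross-functional use: an advertising agent consults the
    supply-chain risk tool to scale recommended spend."""
    risk = commodity_risk_score(commodity, exposure_usd)
    return round(budget * (1 - risk), 2)
```

The agent runs the happy path end to end; only the `NeedsHumanReview` exception pulls a person in, which is the inversion of the 90-day, 12-gate process where humans are in every step by default.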
Agent-to-agent communication leverages existing enterprise integration patterns from Java Bean era - nothing fundamentally new for enterprise IT. The shift is from people communicating to applications communicating to now agents/LLMs communicating. While user experience may look similar to chatbots, agents are fundamentally more powerful: cognitively loaded, making decisions, taking actions, and going as far as possible before requiring human authorization. The key is proper design ensuring human oversight at critical decision points.
Turning Consumer Goods Data into Real-Time Business Decisions - with Michael Finley of AnswerRocket