Matt Fitzpatrick is the CEO of Invisible Technologies, leading the company's mission to make AI work. Since joining as CEO in January 2025, he has raised $100M, hit the $200M ARR milestone and acceler...
This is 20 VC with me, Harry Stebbings, and this is the last episode of twenty twenty five. Now if you're wondering why I sound like Mick Jagger, no. It is not because I have been partying like a maniac and lost my voice over the Christmas break. It's because I have been walking four marathons in four days with my mother to raise money for multiple sclerosis sufferers. We've raised $50,000 in the last three days.
I would love your support if you wanna donate to MS sufferers, but that is why I sound like Mick Jagger. But to the show today, and data is everything in the world of model performance. Turing, Mercor, and today's guest, Invisible, are among the few who have reached several hundred million dollars in revenue. And as I said, I'm thrilled to be joined today by Matt Fitzpatrick, CEO of Invisible Technologies. Now since joining as CEO in January 2025, he's achieved some incredible milestones.
Most significantly, he's raised over 100,000,000 for the company. And as I said, he's hit the rarefied air of over $200,000,000 in annual recurring revenue. This was an incredible show recorded in person in London, and I cannot wait to hear your feedback. But before we dive into the show today, are you drowning in AI tools? ChatGPT for writing, Notion for docs, Gmail for email, Slack for comms, and you're constantly copy pasting between them all losing context and losing time?
This is the AI productivity tax, and it's killing your output. At twenty VC, we're all about speed of execution, and Superhuman is the AI productivity suite that gives you superpowers everywhere you work. With the intelligence of Grammarly, Mail, and Coda built in, you can get things done faster and collaborate seamlessly. Finally, AI that works where you work, however you work. Superhuman gets you going from day one with zero learning curve and is personalized to sound like you at your best, not like everyone else using generic AI.
Get AI that works where you work. Unlock your superhuman potential. Learn more at superhuman.com/podcast. That's superhuman.com/podcast. And speaking of tools that give you an edge, that's exactly what AlphaSense does for decision making.
As an investor, I'm always on the lookout for tools that really transform how I work. Tools that don't just save time, but fundamentally change how I uncover insights. That's exactly what AlphaSense does. With the acquisition of Tegus, AlphaSense is now the ultimate research platform built for professionals who need insights they can trust fast. I've used Tegus before for company deep dives right here on the podcast.
It's been an incredible resource for expert insights. But now with AlphaSense leading the way, it combines those insights with premium content, top broker research, and cutting edge generative AI. The result, a platform that works like a supercharged junior analyst delivering trusted insights and analysis on demand. AlphaSense has completely reimagined fundamental research, helping you uncover opportunities from perspectives you didn't even know existed. It's faster, it's smarter, and it's built to give you the edge in every decision you make.
To any VC listeners, don't miss your chance to try AlphaSense for free. Visit alphasense.com/20 to unlock your trial. That's alphasense.com/20. And if AlphaSense helps you make smarter decisions, Daily Body Coach helps you build smarter habits. You know how so many founders and execs say they'll finally take care of their health once things slow down?
Well, they never do. Running a business is a marathon made of high intensity sprints, and taking care of yourself is what gets you through those times performing at your best, both professionally and personally. This is exactly where Daily Body Coach comes in. Daily Body Coach is a complete high touch service for busy founders and executives, combining personalized nutrition and training with psychology based coaching to help you not just follow a plan, but actually build the systems, habits, mindset to stay at the top of your game. Built by an exited founder and led by certified experts with masters and PhD level credentials, Daily Body Coach is fully tailored to your life, whether you're traveling, dining out, or in back to back meetings.
You get daily accountability, data driven insights from DEXA scans, and blood work and a highly certified team backing you. If you're serious about performing at your best physically and mentally, go to dailybodycoach.com/20vc. That's dailybodycoach.com/20vc and take the next step. You have now arrived at your destination. Matt, I am so excited for this dude.
I think Invisible is one of the most incredible but also, I'm sorry to say this, under discussed businesses when I look at the incredible achievements that you've had over the last few years. So thank you so much for joining me.
Thank you for having me. I really enjoy the show.
Can you just talk to me about how does, like, a ten year McKinsey stalwart become CEO of, like, one of the fastest growing data companies in tech? How does that transition happen?
I would say my McKinsey journey was nontraditional. I spent twelve years there. I was a senior partner, and I led a group called QuantumBlack Labs, which is the firm's global tech development group. So about ten years ago, McKinsey actually started hiring engineers, and I was a big part of this, in pretty big numbers. And when I started, we had about 100 engineers total in the firm.
By the time I left, we had 7,000. I oversaw about a fifth of that group: all the application development, all of the data warehouse infrastructure, and all of the Gen AI builds globally. And so that journey was really interesting, and, you know, over the course of it, I spent a variety of my time competing with other large enterprise AI businesses. And I got to know the founder, Francis, really well about three or four years ago now. We actually met in a totally not work related, kind of social context, at basically a forum called Dialog.
Don't know if you've heard of it, but you basically talk about different ideas. We bonded over
I keep getting reminded of this. It's in, like, Hawaii, though. It's in many different locations too.
I really enjoy it because you actually don't talk about work at all. You're not allowed to talk about your job. You spend time talking about history, politics, technology. What does everyone from San Francisco do? They don't talk about it for two days,
which is the sign of a retreat. Exactly. Exactly.
But I actually think it's one of the few events I've been to where people are not talking their own book. They're not trying to convince you of anything, and, you know, I've made a bunch of really good adult friendships out of that. And so Francis and I got to know each other from that four years ago, and there had been another CEO in the two years before I joined who was actually based in Australia, interestingly. And so when the business got to a certain scale, it was just time to have a US based CEO that could help take the business to the next level. And, you know, it was actually Francis who approached me and kinda pretty directly said, do you wanna be our next CEO?
And that was kind of what happened.
Was it a no brainer?
Look. I think when you walk away from a really stable job that you really enjoy, that's always difficult. The sliver of McKinsey that I was doing, I found to be one of the most intellectual day to day jobs ever. I was working with all the Fortune 1,000 on every different AI topic daily, and particularly in the early machine learning days kind of ten years ago, I think we built some really interesting stuff. But, yeah, I think it was kind of a no brainer in some ways, because when you think about it, this is the most interesting time to run a company on a topic that has probably existed in our lifetime, maybe the two thousands aside, and to run a company like Invisible right now is fascinating.
The rate at which you can build, the people you can recruit, the interest of customers in this topic. And so I felt like I'd spent ten years learning one topic, and now I had a chance to run a business and build it the way I wanted to build it on that topic, and that's just something you can't pass up. And even though, you know, I walked away from a fair amount, I'm much more excited about building something for the next two decades out of this.
When we think about, like, decision making frameworks, I always have one, which is like, find someone who you respect and admire. So for me, it's Pat Grady, who's the head of Sequoia. I've known him for ten years. He's a great father, investor, and husband. Three things that I care about.
And whenever I have a tough decision, I'm like, what would Pat do? And most of the time, I get to the answer by asking that question in that framework. If I were to ask you, what do you ask yourself? How do you find direction when struggling with a decision?
I'm not a particularly materialistic person. You know, I think when I was coming out of college, for example, everyone was focused on going into large finance jobs, which at that time was pre financial crisis, obviously, where a lot of that was. And I think a lot of what I think about is doing work day to day that I really enjoy with people I really enjoy, and then building something. And I do think I really enjoyed the decade I spent building at McKinsey. I think that was an incredibly interesting experience to stand up something of that scale within an existing institution.
And then, I read a ton about everything from military history to current entrepreneurs to enterprise executives I really admire, and I have a kind of small group of people whose opinions I ask pretty regularly. And probably the most telling piece of advice: my girlfriend and my main mentor, both of them, when I asked, within two minutes were like, absolutely, do this. My main mentor is a guy named Somesh Khanna, who had been a senior partner at McKinsey for a long time and is on the board of a whole variety of different companies today. And I remember we got lunch, I walked him through the opportunity, I said, listen, it's a big risk. And he goes, the only risk is if you don't take this, and the amount of regret you'll have not giving it a go.
I totally agree with that one. I was once given advice that whatever you think you should do, hold that close and then let your girlfriend tell you what you should do. And that's why you still have a relationship.
That's a great piece of advice.
It was from someone who's been married for forty years, and so it's worked well for him. We were chatting before and I said, listen, where do we have to go? I always think that the best conversations are led by passion. The first one that you said was there's a gap or a chasm between model performance and adoption. When we break that down, can you explain to me what you meant by that and how we see that in action?
Yeah. And let me set the context, and I'll go into more detail later. Invisible is an interesting business in that we both train all the large language models with reinforcement learning from human feedback, and we are at the core a modular software platform where, in an enterprise context, we deploy all different enterprise use cases. And I think the cognitive dissonance that has occurred over the last couple of years is model performance has increased exponentially. I don't think anyone would doubt that.
If you look at all the public benchmarks, models increased 40 to 60% in performance over the last two years. And consumer adoption has been also exponential. So KPMG just released that 60% of consumers use Gen AI weekly now, but the enterprise has not. You know, I think in the enterprise, MIT just released this report that 5% of Gen AI deployments are working in any form. You know, I think you've seen Gartner saying 40% of enterprise projects will likely be canceled by 2027, and I think the reason for that is deployment in the enterprise is a lot more than just the models themselves.
It's the data infrastructure to support those models. It's the redesign of workflows. It's the process of figuring out which operational leader takes accountability for that. And most importantly, it's trust. It's observability.
It's all the things that, you know, I spent a decade building things like credit models in banking. And in those cases, you need to go through model risk management, testing, training, validation. And so I think that whole process is in the first inning in the enterprise. I think it's going to take a decade, not two years. And I do think that is the core mission that we think a lot about: I actually think the evolution of AI deployment will follow what the model builders have done for the last couple of years.
You'll see banks and health care firms start to do the same sort of testing and validation over this period. And then the rest of the enterprise will be over the next five, six years after that, and that's the journey that we're focused on.
I was speaking at one of the largest banks in the world. It's an absolute joke that they get a university dropout like me to speak at that, like, largest retreat. I find it very fun. But I left, and I messaged the team, and I just said, oh my god. They're toast.
And they're toast because I spoke about the amazing tool they should implement internally. And the CTO laughed at me. He was like, dude, there's no way that we can ever adopt your off the shelf search engine optimization for, you know, LLMs tool because of data, because of security, because of permissions. And I was like, wow. Everything that you just said there, I listened to.
Yeah. But that was once you got in the door. Are enterprises even open for business? You see Goldman Sachs developing a huge amount of their own tools. Are they open for AI business?
Yeah. It's a great question. I think it depends a bit on the sector. I think there are sectors like banking that are very focused on building this internally. I think that is a reality.
Do you think that will work, the internal build, for them?
So it's interesting. If you look at the MIT report, which is the one I mentioned that says 5% of models are making it to production right now, they actually cite a stat that externally driven builds are two x as effective as internal team builds. I actually think there's an interesting kind of ten year pattern on this, which is: ten years ago, everyone bought software. Right? Like, your tech team did not try and build anything; you bought, you know, often way too many apps, but you bought 15 different apps, and that was what the technology team did.
And then I think with the advent of cloud, you started to have a world where the technology functions started to think about building things. Like, maybe they started to have some more custom applications that wrapped around that. I think Gen AI has 5x ed that, where now an internal team has been given this enormous budget and been told, kind of, go have at it, and I think that's complicated, because when you hire somebody to build, any vendor of any kind, you're pretty disciplined about what are you delivering on what timeline, what's the ROI of it, what are the milestones. And I don't think that discipline exists in the same way in internal builds.
I also think that the talent levels the internal teams often have are challenging. And so
When you say the internal team builds are challenging, there are some things that you can't say, but I can. The perception from external or from general kind of tech crowds is that internal teams at, I don't know, you name your boring large enterprise, are just really low quality. You're not getting the top tier AI engineers. You're not getting top tier devs. Is that true?
Look, I think the amount of talent that knows how to do this well is not large, and so that finite group mostly works in AI startups of various forms, right, and large tech companies, and so I do think there's real risk to the process of figuring this out from first principles in enterprises, right? And I think that's part of the cycle that we're going through right now is a lot of internal groups have gone through the process of saying, we must do this all internally. But the reality is, if you think about the fact that this is an open architecture ecosystem and you're going to adopt things like MCP or, you know, all the new voice agents that come out, you actually want a modular open architecture where you can use all the best tech available and figure out how to link it together. And I think the desire to shape that all internally has been challenged. Like, I'll give you one of the more interesting examples I can discuss.
I was talking to an ecommerce retailer that had built an agent to handle their returns process, and they spent $25,000,000 building this agent. And this was after I'd met them, after they built it. I said, at the end of it, how did you define if this agent worked or not? And they're like, well, we built our own eval tool. This is not a joke. And we basically analyzed a mix of speed of call resolution and sentiment.
The problem with that is, what if the agent hallucinates and says, here's $2,000,000? That actually gets resolved quickly, and that person's happy. And so they had built this entire system from first principles, and what ended up happening was a couple months later, they shut it down and moved back to a deterministic flow, and that's not surprising to me at all. And so I do think that's a little bit of the adoption curve we're in: over the next two years, you're going to see the CFO function put different guardrails on how this stuff is built and say, what is the ROI? What are you investing in?
What's the metric? What's the return? And that will change the adoption curve. But right now, there have been a lot of science projects. I think that is realistic.
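To make that eval failure concrete: below is a minimal sketch of the dynamic Matt describes, where an eval scoring only resolution speed and sentiment rewards an agent that hallucinates a $2,000,000 refund, while a simple policy guardrail catches it. The names, weights, and thresholds are invented for illustration; this is not the retailer's actual tool.

```python
# Hypothetical illustration of the eval-design failure described above:
# scoring only speed and sentiment rewards an agent that hallucinates a
# huge refund, because the call resolves fast and the caller is happy.
from dataclasses import dataclass

@dataclass
class ReturnCall:
    resolution_seconds: float   # how fast the call closed
    sentiment: float            # 0..1, caller satisfaction
    refund_issued: float        # dollars the agent promised
    policy_max_refund: float    # what policy actually allows

def naive_score(call: ReturnCall) -> float:
    """Speed plus sentiment only, the eval the retailer reportedly built."""
    speed = max(0.0, 1.0 - call.resolution_seconds / 600.0)
    return 0.5 * speed + 0.5 * call.sentiment

def guarded_score(call: ReturnCall) -> float:
    """Same score, but a policy guardrail zeroes out hallucinated refunds."""
    if call.refund_issued > call.policy_max_refund:
        return 0.0  # fast and friendly is irrelevant if the action was wrong
    return naive_score(call)

# A hallucinated $2M refund: resolved in 90 seconds, delighted customer.
bad_call = ReturnCall(resolution_seconds=90, sentiment=0.95,
                      refund_issued=2_000_000, policy_max_refund=80)

print(f"naive eval:   {naive_score(bad_call):.2f}")    # ~0.90, looks great
print(f"guarded eval: {guarded_score(bad_call):.2f}")  # 0.00, correctly fails
```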
Okay. You know, we have hundreds of thousands of listeners and many of them are CEOs. If you are a CEO thinking about your CFO being equipped to buy and to manage in this new environment, what should they be thinking about? And do we have the right CFO talent pool to manage this new environment?
Yeah. So I think one misconception is that that leader has to be highly technical to make that decision, and I would actually argue they don't at all. They just need the same muscle memory they've used in the past, which would be: what do you need to get a Gen AI initiative working? You need good data that you can work off of for that specific initiative, clear milestones and outputs, clear line ownership of the initiative, and then, probably most importantly, you want to actually anchor it in milestones and outcomes where you pay as it works. So I think the other interesting context for a lot of this is what I would call the Accenture paradigm of the last twenty years, right?
Which is, if you think about the wrapper that's been around software for the last twenty years, you know, our founder, Francis Pedraza, had the founding principle of Invisible: if there's an app for everything, how come nothing works? And it's an interesting concept, right? Because what ended up happening is you bought 50 apps, you had Accenture come in and you paid them $200,000,000 over two years to try and layer them all together, and often you ended up a couple years in with no working data, no linkages between them, and that kind of layers of sediment has been how the tech paradigm worked in the enterprise for the last twenty years. And I think what's different now is if you're thinking about a specific Gen AI initiative, like a contact center, let's say, you don't need to operate that way. You can think about what are the operational metrics you want in your contact center.
You wanna think about call resolution, call performance, cost per call, routing logic. You know, you can then look at both internal and a set of vendors who will deliver those metrics and make an evaluation. And if the vendor doesn't work, you fire them. And I think there's a very very clear way to get ROI in this, which is figure out the list of three to four things that move the needle for your business. Focus on those three to four.
Don't spend money on a thousand science projects. Take your best four operational leaders and put them on those four things. Don't locate it in the tech function. That's the main advice I give people: your Gen AI initiative should be led by the business. That could be your head of call center, that could be your head of operations, but each of those people with clear operational KPIs will get this stuff working. And there are a bunch of companies that have, but it's just a very different approach than "I'm building Gen AI" as an example.
It's really interesting you said don't invest in a bunch of science projects, do three to four initiatives. Okay. Let's do three to four initiatives again. Let's put on that CEO hat. Contact center, it's just a big one that is homogenous across everything.
Matt, there's so many players in the contact center space. I'm a CEO. I'm not a Silicon Valley guy. How am I meant to understand whether we go for Sierra or Decagon or Zendesk of old or Intercom or any of the other players that we've seen in the space? How do you advise the biggest CEOs on buying in a wave of new innovation?
I think this is the other big challenge of general adoption: you're an average CTO, COO. You've got two hundred and fifty vendors a week pitching you. All of them sound pretty similar. In fact, I was with a customer yesterday who literally started the meeting by saying, how are you different than the other two hundred and fifty people that have pitched me this week? So, this is the dynamic.
We have an oversaturation of companies that all sound relatively similar on agents, and, to make your question even more pointed, a lot of them don't work. You know, I think you've got a fair number of the enterprise agent companies where, like, Salesforce AI Research released this report that if you test a lot of the out of the box agents on single turn and multi turn workflows, they're about 58% accurate on single turn and 33% accurate on multi turn workflows, which means they don't really work. And so you've got this challenge: 200 companies will be pitching you. You don't really know how to select, and you're worried you're gonna pick someone that's effectively a charlatan and it won't work. And the more you have a market where there's a lot of excitement, the more you do have that risk.
Right? So I think the simplest advice I give, by the way, is how we sell, quote unquote, is start with proof of concepts, start with what we call solution sprints. Don't pay a dollar until you prove the tech works. So, like, we don't actually sell anything. We meet a customer, we say we will do it for free for eight weeks and prove to you the tech works.
And that's a very simple way. If your tech works, you'll show it.
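A rough aside on those Salesforce numbers: if you assume each turn of a workflow succeeds independently at roughly the single-turn rate, accuracy compounds per turn, which lines up with the reported drop from about 58% single turn to about 33% multi turn. The independence assumption is a simplification for illustration, not a claim about how the study was run.

```python
# Back-of-the-envelope: if each turn of a workflow succeeds independently
# with probability p, a k-turn workflow succeeds with probability p**k.
# (Independence is a simplifying assumption, not a claim about the study.)
p = 0.58  # reported single-turn accuracy

for turns in range(1, 5):
    print(f"{turns} turn(s): {p ** turns:.0%}")
# 1 turn(s): 58%
# 2 turn(s): 34%   close to the reported ~33% multi-turn figure
# 3 turn(s): 20%
# 4 turn(s): 11%
```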
It's an expensive way to do business.
It is and it's not. So let me give you an example of how one of our deployments works, because I think it's fair enough if the answer is that, you know, it takes you two years to build anything. But, like, I'll give you an example. So our AI software platform is effectively five modular components. So Neuron, which is our data platform, brings together structured and unstructured data.
Axon, which is our AI agent builder; Atomic, which is effectively a process builder, where we can build any custom software workflow; then we have Meridian, our expert marketplace, where we have 1,300,000 experts a year on any topic you can imagine that we bring into those workflows; and then Synapse, which is our evaluation platform on all of it. Now, we can take those five things and configure them to almost any different enterprise context. So, an example: we serve food and beverage, public sector, asset management, agriculture, sports, oil and gas, a whole host of different sectors using that same modular architecture. I think we end up scaling pretty materially once we show that the tech works. We're working with a company called LifespanMD, which is a concierge medicine business across the US and internationally.
And what we're doing for them is we're building them an entire tech backbone where they have an enormous amount of fragmented data across EHR, ERP systems, notes, everything else. All of their data sits in a pretty fragmented format. So, we're using Neuron to bring all that data together. We do that very, very fast. So, if Accenture would take two years, we can usually do it in two to three months.
We're then, on the back of that, building a lot of different intelligence and reporting so they can look at things like patient journeys over time, labs, genomics data, how much you use, like, the Oura Ring or anything else like that if they want to look at wearables, so they have a lot of detail on what any patient is doing at one time. And then, on top of that, we layer the ability to interrogate it and ask lots of different questions, like, let me look at who's used peptides, male, between 36 and 50, and what have been the results. So, we're using Axon to build all that, and then we fine tune the model to do that, and we actually also, on top of that, build lots of specific custom agents for things like scheduling. So, what you get at the end of that is a transformed tech enabled business with all of those different components. Now, that does take us a little while to stand up, but once that is there, it's effectively hyper personalized software.
And that is my view on where this whole industry goes is you move from SaaS, out of the box SaaS, to much more hyper personalization using the specific data of an individual customer, and that is what we do.
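As a concrete sketch of the modular idea, here is roughly what composing a data unification step with a cohort query might look like for a deployment like the LifespanMD one described above. Every class and function name here is hypothetical, invented for illustration; this is not Invisible's actual API.

```python
# Hypothetical sketch of a modular deployment: unify fragmented records,
# then answer a cohort question over them. Names are invented for
# illustration; this is not Invisible's actual API.
from dataclasses import dataclass, field

@dataclass
class Patient:
    patient_id: str
    sex: str
    age: int
    treatments: list
    outcomes: dict = field(default_factory=dict)

def unify(ehr_rows: list, wearable_rows: list) -> list:
    """'Neuron'-style step: merge fragmented sources into one record set."""
    by_id = {r["patient_id"]: Patient(r["patient_id"], r["sex"], r["age"],
                                      r.get("treatments", []))
             for r in ehr_rows}
    for w in wearable_rows:  # fold wearable metrics into patient outcomes
        if w["patient_id"] in by_id:
            by_id[w["patient_id"]].outcomes.update(w["metrics"])
    return list(by_id.values())

def cohort_query(patients: list) -> list:
    """'Axon'-style step: males 36-50 who used peptides, with their results."""
    return [(p.patient_id, p.outcomes) for p in patients
            if p.sex == "M" and 36 <= p.age <= 50 and "peptides" in p.treatments]

ehr = [{"patient_id": "p1", "sex": "M", "age": 42, "treatments": ["peptides"]},
       {"patient_id": "p2", "sex": "F", "age": 39, "treatments": []}]
wearables = [{"patient_id": "p1", "metrics": {"sleep_score": 82}}]

print(cohort_query(unify(ehr, wearables)))  # [('p1', {'sleep_score': 82})]
```

The point of the modularity, as described, is that the same pieces get reconfigured per customer rather than rebuilt from scratch each time.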
Do you think you can work with enterprise today with Gen AI and with AI implementation without an intense fully deployed engineer mechanism?
I don't think you can. So we've doubled down. A huge part of what we do is forward deployed engineers. So we now have eight offices in eight cities, 450 people. We're fully focused on forward deployed engineering.
And I can tell you from a decade of my prior life, you just cannot do this with out of the box SaaS. It does not work.
What do the economics of FDEs look like? Obviously, Palantir has made it the most sexy thing ever. I love the way tech crowds work where it's like we all just kind of get super excited by an acronym. It's like, this is the coolest thing. But what do the economics look like?
Well, one thing I'll say is forward deployed engineering has come to mean a lot of different things. So a lot of forward deployed engineering, I think, you know, across the broader market is more like kind of solutions engineering, where it's people that kind of answer your questions and show up at your office. I think forward deployed engineering done well is executing a very specific workflow build. So you're effectively configuring a set of core platforms to build something hyper specific for that customer, and usually, one of the questions is how good your platform is, because, for example, you could argue Accenture is forward deployed engineering. Right? But that build may take three years.
And in our case, I think we've built modularity and built a lot of the new software development workflows into what we do. And so, usually, our forward deployed engineering motions are about three months. So, we will come on board, customize everything to the hyper specific way a customer wants it, and then build something on that basis that works. And it does require ongoing fine tuning. So, that's the other big difference that people should acknowledge, right, is that you can't fine tune a model in an enterprise context and just leave it for four years and hope it continues to work.
I could give you 100 examples, but take healthcare: GLP-1s launch, and you do need to fine tune the model for the new context of the market, and so we do view it that way, but
I'm very naive, so forgive me on this. So do they pay additional for, like, FDEs to come? Do you pay additional in terms of ongoing maintenance? Just on the economics of it.
For many of our competitors, they do charge. We do not charge anything for FDEs.
Why not?
I think it goes back to my general premise that the best way to differentiate in this market is to prove that your tech works. And so the way that we do this is we say you will pay when the software is up and running, and we're able to do a lot with small one to two person FDE teams. And so once that's stood up and running, then we do have ongoing software. You know, I think the paradigm that we're evolving from is, over the last twenty years, the system of record layer was where a lot of the value sat. And what we're building is hyper personalized system of agility layers, kind of what sits atop that. I think the Accenture paradigm is what people are afraid of, and it's very hard to convince somebody they're going to pay time and materials until it gets working, and so I spend less on sellers and more on forward deployed engineers.
That's my simple math.
I always think, you know, the biggest mistake that people have is they don't put on the hat of their customer.
Yeah. Yeah.
I think the reason the show has been successful is because I put on the hat of different customers. A lot of the customers that we have are startup founders who create amazing products, and everyone wants to sell into enterprise. That's where the money is. Yeah. If I'm a startup founder thinking, do we need FDEs?
How do we do FDEs? How do we move into an FDE model? What would you say to them that they should know if they're thinking about starting that model or potentially needing that model, knowing all that you know?
I think it depends a lot on the nature of the business and what you're trying to build. If you're trying to build a knowledge management system of public filings for finance, for example, you don't need FDEs, because what you're building there is a repository of information that people can access. You've seen similar things in health care, for example. If you're trying to change workflows, you do need FDEs. I think that's the simple paradigm difference in my mind: if you're building something where the hardest part is getting adoption and workflow embedding and you need to actually change the way a company works, then yes, forward deployed engineers are the only way to do it.
It's interesting. There aren't that many folks that have expertise doing that. So, it's a hard thing to train and learn, but I do think it is the only way to get the enterprise working.
You've said several times, hey, don't pay until you prove that it works. And you said earlier, pay as it works. That's not the SaaS business that we've been trained on, Matt. I'm a SaaS investor. How does the pricing model of the future look in this very new environment?
Let me step back for a second. An interesting thing, if you look at the economics of SaaS in the enterprise five to ten years ago: look at any large public enterprise software business and then look at how much of their revenue is actually services, and I think you could kind of argue that out of the box software has always been a lie to some degree. It's a weird thing to say, but they always had a ton of configuration, and they just dressed it up to some degree. I think SaaS was even more challenging than that, because often with the unit economics of SaaS, you're selling at a much smaller cost per customer. The SaaS business that worked was actually about selling something where the out of the box setup was quick enough that you could make it work with the sales team, where you didn't have to do lots of configuration, because the minute you had to bring in FDEs in a SaaS context, your economics broke instantly.
Right? And what I'd say then on the enterprise side, the way people made it work is why Accenture grew so much, why Cognizant grew so much, why TCS grew so much. I'll give an example: take Insurtechs. Right?
Every one of the major Insurtechs, like a Duck Creek, what they have is a set of core data schemas and a series of analytical logic in the front end. And the ones that did really well had momentum and push from the SIs that got them going, and so their economics were geared by having somebody else do all the services around what you did; then you got something standing up at the end that worked. I think the challenge with Gen AI is that motion doesn't really work, because what ends up being built at the end of the day is something that is hyper specific to that customer. Like, if you actually think about the nature of it, fine tuning an LLM or creating a knowledge management system, it's not a box. It's not.
It is something that uses a lot of different consistent tooling, but it has to be customized. And so the way we do that is we stand that up, we get it working, and at the end of it, usually two to three months in, the payment happens when we pass user acceptance testing and validation, and it works. And here's the other thing I'll say: we use SaaS as a paradigm because that's how software has worked, but machine learning has been around the enterprise for a long time. I was building machine learning models ten years ago. That's always been a motion that looked like this. So what's happening now is we're starting to realize that the Gen AI adoption paradigm in the enterprise works the same way that ML did.
When we look at the different products that we have today, the expert platform is one I think that gets a lot of attention. How much of the business today is the expert platform? I find companies are lumped into categories. It's easier. And you have your Mercors, your Surges, your Invisibles, and you're kind of put in this: are you all just talent marketplaces?
And no one wants to be a talent marketplace, it seems. And I'm like, how much of your revenue is the talent marketplace, and why does no one want to be a talent marketplace?
I actually think the AI training space has many different players that do have many different business models within it. There's four to five, but actually they're all quite different. I think of us much more as an AI training platform than just a talent marketplace, meaning we have 1,300,000 experts that come through the marketplace, but a lot of the expertise we've built over the last ten years is the ability to... Here's the simplistic question I think that AI training asks. You have to be able to source any expert in the world on twenty four hours' notice. You have to be able to source a PhD in astrophysics from Oxford, put them into a digital assembly line, and four days later generate perfect, statistically validated data that will be compared head to head with somebody else's data, and make sure that that is perfect at the end.
That is an incredibly difficult thing to do. And so, actually, a lot of what I saw when I took over Invisible was that motion was incredibly applicable to the next phase of the enterprise as well, which is the fine tuning motions, the training, the ability to statistically validate for an enterprise use case like claims processing. It's the same motion. Like, I actually think AI training will be used next in banking and health care, and then after that in many other different enterprise contexts. And so the historical business I took over in 2024 was pretty materially weighted to the AI training side of the house, but I came in with a thesis that enterprise would be a huge source of growth.
And I think as you see next year evolve, you know, I think we've confirmed 12 enterprise deals in the last forty five days, so we see pretty good momentum on that side of the business, and I think that's where we will evolve is to doing both. I think the five core platforms we have allow us to serve a whole host of different end markets, and I do think that's very different than the other AI training players you mentioned. I think we're the only player that spans that broad based view in the same way.
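One hedged sketch of what "statistically validated and compared head to head" could look like in practice: score each batch of expert labels against a small gold set and only ship the batch that clears an agreement threshold. The threshold, fields, and data here are invented for illustration; real validation pipelines are certainly more involved.

```python
# Illustrative sketch of head-to-head validation: compare two vendors'
# labels against a small gold-standard set and only ship a batch that
# clears an agreement threshold. Data and threshold are invented.
def agreement(labels: dict, gold: dict) -> float:
    """Fraction of gold items the vendor labeled correctly."""
    return sum(labels.get(item) == truth for item, truth in gold.items()) / len(gold)

def head_to_head(batch_a: dict, batch_b: dict, gold: dict,
                 threshold: float = 0.95) -> dict:
    score_a, score_b = agreement(batch_a, gold), agreement(batch_b, gold)
    return {"A": score_a, "B": score_b,
            "winner": "A" if score_a >= score_b else "B",
            "ships": max(score_a, score_b) >= threshold}

gold     = {"q1": "yes", "q2": "no", "q3": "yes", "q4": "no"}
vendor_a = {"q1": "yes", "q2": "no", "q3": "yes", "q4": "no"}   # 4/4 correct
vendor_b = {"q1": "yes", "q2": "yes", "q3": "yes", "q4": "no"}  # 3/4 correct

print(head_to_head(vendor_a, vendor_b, gold))
# {'A': 1.0, 'B': 0.75, 'winner': 'A', 'ships': True}
```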
On the talent marketplace side, how much of the business is that today then?
I won't say an exact number, but it was a pretty material percentage of twenty twenty four.
Okay, got you. So it's
a pretty material percentage. The one thing that's also striking is the concentration of revenue to a couple of core players. When you look at other providers, it's like two players that make up more than 50% of revenues for pretty much every provider. Is that the same for you? And how do you think about what that revenue makeup will be given the enterprise diversification that you're talking about?
Yeah. I do think this is a space where there are not that many players that are actually building LLMs. So by definition, the whole space has concentration. I think I would not disagree with that. I do think that's one of the really interesting things for us on the enterprise side: we have materially more diversification now in the number of customers we serve on a whole different range of topics.
I also think you're seeing more kind of early stage model builders as well that are building on hyper specific topics. And so that's the other part of where we see expansion in
the total customer base. When you come to negotiations with a client, given the revenue concentration, how do you play that staring contest? Because essentially they go, we know that we are one of your core customers and we will squeeze you on price. And you go, I know I'm one of your core data providers. I will stand firm.
How do you handle that negotiation? Because it is a staring contest of sorts.
I think people are willing to pay for good data. That's my simple view: if you think about the importance of these models, if you think about the cost of compute that is actually a huge chunk of the cost base, if you think about how one week of bad data burns a lot of compute. I think what we've seen, the reason it's been the same four to five players in this market for a couple of years now, is it's really hard to do well. And so people are willing to pay for good data. And so I think we have a very collaborative dynamic with all of our customers on that front. You know, I think that when you provide a service that's helpful, people are willing to pay for it.
And if you provide a service that doesn't work, people don't pay for it. And so the interesting thing I would say on that front is the discussion topics anchor around, again, proven value. So we'll get a topic that'll come in, like a multimodal audio model, for example, and we'll go head to head with somebody on that that week. At the end of it, we win or we lose. And so if you win and your data is way better, people are willing to pay for that.
I had a chat last night with a board member of another of the companies in the space, and he said two things that really stood out to me. He said, I'm just drastically shocked at the lack of price sensitivity from the core customers. Like, they're willing to pay pretty much anything. Is that the case, or is that a bit of an exaggeration?
I think it's an exaggeration. I think in any if you think about, like, classic economics, people are willing to pay a fair price for good data. And so I don't think we operate in a model of trying to give anything unreasonable. I think there's actually fairly standard price bounds across all the players here.
Is data commoditized? When I think about, like, pricing power, I'm a massive fan of Hamilton Helmer, Seven Powers. It's an amazing book. Yeah. When you think about, like, pricing premiums, you get that through not being a commodity, through owning supply of a rare asset.
Yeah. Is there a commoditization of data and we're kind of in a race to the bottom on the pricing of that data? Or do you own the supply of vet workflow data for surgeons in Oklahoma? Yeah.
So let me take that. I'll actually start with the market context, and then I'll actually use Seven Powers. It is a great book. I'll use one of his frameworks for that. Like, I think the market context that is somewhat misunderstood here is the way that human data becomes more and more important over the next decade, and I think the reason for that is if you thought of the different types of things you could train off of.
So, synthetic data gets mentioned a lot, but, like, most of the time synthetic data is useful for things like, let's say, base truth information like math, where there is a clear output that is right or wrong. Now, let's take all of the different reasoning tasks, like a multi step reasoning task. I mean, a simple one: what movie would I select based on, you know, these five preferences? And then let's take that question and add into it audio, video, multimodal language, the ability to do it in 45 languages, the language context, the ability to think about computational biology, Hindi versus French versus English versus English with a Southern accent. That paradigm is actually incredibly hard to train on, and we're still in the first inning of a lot of those permutations of complexity, is what I would say. And so for a multi step reasoning task that requires a PhD, in multiple different languages, human feedback is going to be important for the next decade. I have a strong belief on that, and when I chose to take this job, that was actually one of my core convictions: the enterprise is gonna need that too, because if you take legal services, for example, a lot of the way you're gonna need to validate that is with legal expertise.
There's no corpus of information you can train from. So, I would start with the idea that on the market tailwind for the next ten years, we're actually in the first inning, because there's the LLMs, then there's the more sophisticated enterprises, and then there's everyone else that needs to train, validate, and move to fine tuning. So, by contrast, there's the pre training and LLM work, but then, to fine tune a model to a specific context, most companies in the enterprise don't even know what that is yet, and that whole process we're in the first inning of. So, I think the market demand is going to continue to grow pretty materially for a decade. The Hamilton Helmer framework is an interesting one because, my favorite example is, he talks a little about what he calls institutional memory.
He mentions the Toyota production system as an example, right, where Toyota would literally say to people, this is exactly how our factories are set up, and nobody could replicate it. Right? I think the interesting thing about this space and why you've had a consistent set of folks doing it for a while is to go through the process of every week having to spin up. We have 1,300,000 active agents or experts that come into the pool. At any given week, we have 26,000 of those that we've selected that have to start in twenty four hours and produce perfect data.
Think about the challenge of scaling an organization that for five years can do that at really high quality and consistently turn and evolve to the different permutations of the market, new ideas of training. It's really hard to do. And I think that was what got me most excited when I took the Invisible job: the question of can you make AI work in a really complicated context? Very few companies know how to do that on the enterprise side, or on the training side for that matter, and so I thought that was a really unique institutional memory context. It is a digital assembly line, no different than an auto factory, and I think that is a hard thing to replicate.
The other really interesting area that this board member said to me was he very much agreed with you. He said exactly the same words as you in terms of first innings of data, in terms of just how much market size will increase. He said the other thing that I really didn't understand when I made the investment was the specialization of data and how we are moving into the acquisition of this kind of insanely niche data supply pools where it's not like cat, hedge, zebra crossing. Zebra crossing is a what would you guys call it? A pedestrian path?
No. I did not see the specialization and the unbundling. Is that something that you see too in terms of these very micro niche specialized data requirements?
Absolutely. I think, you know, five years ago, this space was what I would call cat dog commodity labeling. And I think there was a lot of Google Sheets in that era, and you've seen some comments on it. Like, this sector has evolved the same way most technology sectors do, where it started with Google Sheets and cat dog labeling, and it's evolved to real digital assembly lines, huge velocity of expertise, and incredibly specific expertise. So, I'll give a funny example: we have to be able to validate an architectural expert on seventeenth century French architecture who speaks French.
I mean, that is a complex thing to do on twenty four hours' notice. Right? And so there's the ability to source, assess, validate. And I think one of the advantages for us is, because we have five years of data on who's been good at what task, there's real institutional data memory in how you do that selection and assessment. I think that's one of the core advantages we have from that.
How important
is pay? You know, I think a couple of other providers have said that, bluntly, it's about how much you pay. You pay more than the others, you'll get the good talent.
So, a weird analogy: I think of our business like Uber. We source talent at the price at which people will do the work that is asked of them, right? So, the same way, if you're standing on a street corner, your question is, can I find a ride that will pick me up at this moment within three minutes? And for that matter, that's a different price if it's raining, a different price if you're in, you know, Rio de Janeiro versus London, right? The price depends on the market context and the specific place you are.
I think expert pay is the same dynamic, really. A lot of what we're doing is what I call price discovery, and so the nuance I would add to what you're saying is you can overpay a really bad expert, and that is a total waste of everyone's time. And so, what I think our customers appreciate is we can tell you, between a $150 expert and a $130 expert, the difference in expertise you get.
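A quick worked example of the overpaying point: what matters is cost per accepted unit of work, not the headline rate, so a cheaper expert with a low acceptance rate can end up being the expensive one. The rates and acceptance numbers below are made up for illustration.

```python
# Worked example: the cheaper expert can be the more expensive one once you
# divide by how much of their work actually passes review. Rates and
# acceptance numbers are made up for illustration.
def cost_per_accepted_task(hourly_rate: float, tasks_per_hour: float,
                           acceptance_rate: float) -> float:
    return hourly_rate / (tasks_per_hour * acceptance_rate)

strong = cost_per_accepted_task(hourly_rate=150, tasks_per_hour=4, acceptance_rate=0.95)
weak   = cost_per_accepted_task(hourly_rate=130, tasks_per_hour=4, acceptance_rate=0.70)

print(f"$150/hr expert: ${strong:.2f} per accepted task")  # ~$39.47
print(f"$130/hr expert: ${weak:.2f} per accepted task")    # ~$46.43
```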
Do you think you have control of a finite supply of data providers? If you look at the seven powers in Hamilton Helmer, one of them is, like, acquiring finite supply.
So I actually don't think finite supply matters, and what I mean by that is I think the expertise needed varies so much month to month that if you tried to create a world where you bottled up whatever supply it is, it would change in three months, and we actually relish that concept. I actually think the dynamic, again, is why I would use Uber and Lyft; you could use Airbnb and Vrbo in the same context. I don't think experts go on five platforms, right? I think actually what you want is a two way marketplace, where you need enough demand for people to be interested and enough expertise coming in. I think the reason we get 1,300,000 inbounds is because of that kind of supply demand balance. So, I don't think this moves to a world, and actually I would never say we move to a world, where there is one player coming out of this. I think there's benefits to everyone to having numerous players that do AI training.
And so it's a question of being one of the players that has that balance. You said
there about kind of the switching of preference of like, oh, three months ago it was this that you want, now it's something completely different. Switching cost is another. When you have data providers in this way, are there inherent barriers to switching? Is there any loyalty?
Yeah, no, I think that if you've learned how to do a certain data task really well, there's incredible value in that. Let's take the enterprise context again because I do think it's a good one. So, I'll give you an example. We're doing a lot of fine tuning on some pretty interesting topics. One example, we worked with SAIC, Vantor, and the US Navy on fine tuning a model for underwater drone swarms.
So the question on that, if you think about it...
Niche. Very niche.
That's why I
use it
as an example to answer your question. So if you thought of it in that context, you've got a bunch of underwater unmanned vehicles and they're getting in all the drone and sensor data from the interaction patterns of those vehicles. And what they want to know is, you know, an object is in the water near them. What do they do? Do they react?
Do they pull back? Do they alert another drone? Do they engage? What are the topics of that? So, fine tuning a model to take in all that complex sensor data, fine tune it, train it, and build a decision making framework for those drones: there's a lot of logic built into that, and I think that's why it's been a great partnership with SAIC and Vantor, because we built logic on how to do that, and, you know, I think that there is real sustainability in the expertise you build up.
And so the way I think about, like, our enterprise motion, for example, is every sector is led by somebody with deep, deep sector expertise, and we do build real logic on those topics, and I think the same is true for multimodal video and audio. It's true for legal. I actually think a lot of the training work, even on the model builder side now. One interesting view I have is people talk a lot about the public benchmarks. One question you get a lot is, are we reaching a point where models are not improving? I actually think about it very differently, which is the models are now all moving down hyper specific things where there's not a public benchmark for them by definition, right?
Like, they're moving to more very specific tasks that are very different and not something you can publicly benchmark in the same way, and that's why we do see more and more model improvement every day, both in model builders and enterprises, on these specific tasks.
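Going back to the drone swarm example, here is a hypothetical sketch of what a single supervised fine tuning record for that kind of decision task might look like: serialized sensor context in, a vetted action and rationale out. The schema, action set, and values are all invented; the real SAIC and Vantor work is certainly more involved.

```python
# Hypothetical shape of one supervised fine-tuning record for the drone
# decision task described above: serialized sensor context in, a vetted
# action and rationale out. Schema and action set are invented.
import json

ACTIONS = ["monitor", "pull_back", "alert_peer", "engage"]

def make_record(sensor_context: dict, expert_action: str, rationale: str) -> dict:
    assert expert_action in ACTIONS, "expert label must be a known action"
    prompt = ("Given the following sensor readings from an unmanned underwater "
              f"vehicle, choose one action from {ACTIONS} and justify it.\n"
              + json.dumps(sensor_context, sort_keys=True))
    completion = json.dumps({"action": expert_action, "rationale": rationale})
    return {"prompt": prompt, "completion": completion}

record = make_record(
    sensor_context={"object_range_m": 40, "object_speed_mps": 1.2,
                    "acoustic_signature": "biologic-like", "depth_m": 120},
    expert_action="monitor",
    rationale="Signature is consistent with marine life; hold position and track.",
)
print(record["completion"])
```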
You said about kind of the benchmarks, and I'm just so interested in it. Wait. Gemini 3 killed it. It's the best ever. And then yesterday, Opus 4.5 killed it.
It's the best ever. Next week, Sam's gonna release one. Does it matter? Like, are we in a world of such transient flux where really we should detach ourselves from these updates that only last for days?
Look, I think the benchmarks are a useful framework for society to gauge progress on this topic, and it's a very often discussed topic, so people want a way to answer the question of how are the models improving, and I can tell you, like, unequivocally, the answer is yes. I mean, I think by every measure you look at, they are, and, you know, they're not only improving on the benchmarks, but even on specific tasks like research for investments, for example. You can see the models are much better at doing certain tasks, and I think what you're seeing start to happen is people, and we're doing this as well, are building very specific work based benchmarks to calibrate certain things, like how well does the model do on building an LBO model, for example, and you're going to see more and more benchmarks cited. Now, the complexity then becomes, if you move from five main benchmarks, like SWE-bench and others, to 600 benchmarks, then you kind of lose track of who's doing well at which things. But I think my interesting view on that would be: I'm not sure the benchmark progress is what determines enterprise adoption.
And what I mean by that is if you take the fact that the models have improved exponentially over the last couple of years, and you say consumer adoption has been massive, right? Like, KPMG had this report that 60% of consumers use this on a weekly basis. The adoption curve on enterprise is not going to be a question of generalizability. It's going to be a question of hyper specific performance on a specific task. Right?
And so there isn't actually a benchmark for that. Like, you know, let's take an investment summary document for a private equity firm. Right? There's no benchmark to say, for firm one, this is how you write investment committee memos; does this generate something that looks, with 99% precision, like something you would roll out?
There's no benchmark to do that, and so what I see as the adoption curve is actually the fine tuning and inference layer of actually testing that, getting to a place where that firm could say, this looks good, I'm okay with this, you've tested it. Like, machine learning has a context for this. Don't know if you've heard, but the banks do this thing called model risk management, where they actually do a whole host of validation and testing on things like redlining before they roll a model out. That's what the enterprise is going to have to do, and so it's not that the model improvement doesn't matter. I actually think the benchmarks are a good way to get some sense of model improvement, but they're almost orthogonal to enterprise uptake. I think enterprise uptake depends on trust and precision on specific tasks at 99% accuracy, not generalizability.
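A minimal sketch of the kind of work based benchmark being described: grade outputs against a firm's own acceptance checks and gate rollout on a hard precision threshold, the way model risk management gates a credit model. The checks and where the 99% threshold sits are illustrative assumptions, not any firm's actual criteria.

```python
# Illustrative work-based benchmark: grade model outputs against a firm's
# own acceptance checks and gate rollout on a precision threshold, the way
# model risk management gates a credit model. The checks are invented.
def passes_firm_checks(memo: str) -> bool:
    """Stand-in for firm-specific acceptance criteria on an IC memo."""
    required_sections = ("Thesis", "Risks", "Valuation")
    return all(section in memo for section in required_sections)

def benchmark(outputs: list, threshold: float = 0.99) -> dict:
    precision = sum(passes_firm_checks(m) for m in outputs) / len(outputs)
    return {"precision": precision, "approved_for_rollout": precision >= threshold}

memos = ["Thesis: ...\nRisks: ...\nValuation: ..."] * 99 + ["Thesis only"]
print(benchmark(memos))
# {'precision': 0.99, 'approved_for_rollout': True}
```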
If those specific tasks are removed in the way that you said, like summary docs for investments, often it's done by more junior people in the earlier stages of their career when they are building and kind of scaling those skills. Do you think we will have a talent pipeline problem if we do remove a lot of those junior roles, which we are seeing in certain cases already and I think we'll continue to see, where we won't actually have the graduation pathways that lead to the leaders that we have today because we've removed those junior roles?
I don't, actually. So I think one of the challenges is that the adoption curve of this stuff is gonna take a lot longer than people expect. You know, I said this to you earlier: I think on enterprise, this is a five to ten year adoption journey, not one to two. And so I think you have a dynamic where people have a lot of time to react and to think about what's useful in addition to that. And so, I actually find a lot of the people coming out of college right now are some of the highest adopters of this and are the most useful for these kinds of tools.
And so, we're hiring more and more people of that profile, not less. But with the usage curve of that group of people, certain tasks will not be done, but there will be many more. So, I'll give you an example: accounting. If you worked at a bank, for example, or any accounting firm in the nineteen eighties, this is absurd to think about, but you literally calculated revenue and financial statements with a slide rule. Like, people literally would sit there and they would generate a financial statement manually on paper with a slide rule, and that was how people did accounting.
Now, Excel comes around. That becomes the main tool everyone uses to do accounting. And so, in theory, you'd have fewer accountants, because you went from manual generation with slide rules to Excel, which actually makes it way easier to do that. You look today, we have about the exact same number of accountants, in fact, the same number of junior accountants. And what's happened is people do way more sophisticated accounting scenarios with the tools they have.
It's this old idea of Jevons paradox, which is you increase consumption with advanced technology, and so the number of accountants didn't go down. You actually had way more accounting. In fact, every FP&A function is probably larger now than it was twenty five years ago, because the work people do is more sophisticated.
I do wanna go back to
what we said about kind of market composition. Yeah. And how we see the different players. You said, like, Uber and Lyft. Is this a market where there's one and two players and they take the dominant market share and then there's everyone else? Is it a cloud market where it's much more evenly distributed?
How do you project that out in, say, a ten year horizon?
In both AI training and in enterprise, I don't think the answer is one player. Interestingly, in the enterprise, historically it's probably been Palantir and not many others, so that's part of why you've seen more people want alternative options, and part of the reason you've seen so much excitement on enterprise AI recently. I think most of these markets end up with three, four, or five players. I don't actually think it's even two; markets tend to create choice for consumers, and that's a good thing.
Right? I think you'll have some specialization on certain topics, maybe some better at coding, some better at specialist tasks, some better at PhD-level work, but I think it'll stay a market with a fair amount of choice.
When you look at the landscape, who do you most respect and what do you learn from them?
I would say Palantir is a company I probably respect the most in enterprise AI.
It's really interesting. You see them as a competitor more than a Surge or a Mercor or Turing or any of the others?
All of those players are competitors in different ways to different parts of our business. I call out Palantir because they realized, ten years before the rest of the tech market, that forward deployed engineering and customization would be important, and that was a very countercultural leap at the time. You know? Because, look.
I mean, I spent a lot of time running forward deployed engineering teams, and most of what I saw was players like Accenture. What was called tech services back then was not a place anyone wanted to play in, and so Palantir spent a decade building good tech before anyone realized this was important. And so I have a ton of respect for that and the culture they built out of it. On the AI training side, I won't comment on anyone specific.
I think all the players in the space are good, and they all do different things well.
There are large revenue numbers thrown out. Yeah. Are they revenue? Because I've done shows before with them and I got battered, bluntly, when people were like, oh, it's not revenue, Harry, and you can't categorize it as revenue. Is it GMV, not revenue?
Are we playing fast and loose with the truth on revenue versus kind of bookings?
I think it is revenue. I think that the rate you get on every project is different. The margin you make on every project is different. So I do think it is revenue and I think that the
Can you help me understand? Sorry, and I'm very naive. If I'm acquiring amazing talent and I get paid for that and then I have to pay them and then I get my take at the end of that, how is that different than booking on Airbnb where I get my take from a location, but I have to pay out to the owner?
Oh, good question. Well, Airbnb has one consistent fee; that's the difference. For us, there's actually a fair amount of variation based on the skill of the expert. You don't have a consistent rate relative to the booking amount; that's the biggest difference. So there's huge variety depending on the project and the expert type of what you book.
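A hypothetical worked example of the distinction: with a fixed take rate, the fee is the natural revenue line and the rest is GMV; with variable per-project rates and margins, the full billings are the revenue line. All figures below are invented for illustration.

```python
# Hypothetical contrast between a fixed-take-rate marketplace (where
# only the fee is typically counted as revenue, the rest being GMV)
# and project work where the rate and margin vary deal by deal.
# All figures are illustrative.

bookings = [1_000.0, 5_000.0, 20_000.0]

# Marketplace model: one consistent fee, e.g. 15% of every booking.
TAKE_RATE = 0.15
marketplace_revenue = sum(b * TAKE_RATE for b in bookings)

# Services model: each project is priced separately, so the margin
# (and therefore what you keep) differs per engagement.
project_margins = [0.55, 0.30, 0.42]
services_revenue = sum(bookings)  # full contract value billed to client
services_gross_profit = sum(b * m for b, m in zip(bookings, project_margins))

print(f"GMV:                   {sum(bookings):>10,.0f}")
print(f"Marketplace revenue:   {marketplace_revenue:>10,.0f}")  # fee only
print(f"Services revenue:      {services_revenue:>10,.0f}")     # full billings
print(f"Services gross profit: {services_gross_profit:>10,.0f}")
```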
Are there any other big misconceptions that you think are pronounced in the industry, where you consistently think, I wish people would change the way they think about this?
Look, the biggest one is the view, and this was the main pushback I always got when I first started this job, that synthetic data will take over and you just will not need human feedback two, three years from now. And it's interesting: from first principles, that actually doesn't make very much sense if you think it through, right? If you think about the diversity of tasks that exist in the world, and then how long it would take you to get comfortable with the accuracy, it doesn't make any sense. I'll take legal services because it's a really interesting one. A lot of the legal data in the world sits with big law firms.
It doesn't even exist in the public domain. If you take the corpus of publicly available information, that's been commoditized for years at this point, right? Most of the logic is incredibly contextual to language, culture, multimodal context, and the information stored in individual companies. And so the only way to actually do the fine tuning process consistently, and get it accurate for any specific context, is RLHF. In my McKinsey days, my McKinsey QuantumBlack days, that was the thing I realized was different about traditional ML models versus Gen AI. In machine learning, you can backtest; you can get to a clearly statistically validated outcome without any human intervention.
On the Gen AI side, you are going to need humans in the loop for decades to come, and that is something most people are starting to realize. It's always confusing to people when they hear, oh, that's how models are trained on the back end; they didn't realize that's how the statistical validation works. And so I think that's been an interesting evolution curve as people start to realize that.
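A rough sketch of that contrast, under made-up assumptions: the classical model is scored automatically against labeled history, while the generative outputs are routed to a human reviewer because there is no ground-truth label to score against.

```python
# Rough sketch of the contrast described above. A classical ML model can
# be validated statistically against held-out history ("backtesting");
# a generative model's free-form outputs get routed to human reviewers.
# Everything here (data, reviewer logic) is a made-up illustration.

def backtest(model, history):
    """Classical ML: score predictions against known outcomes, no humans needed."""
    hits = sum(model(x) == y for x, y in history)
    return hits / len(history)

def human_review_queue(generations, reviewer):
    """Gen AI: there is no ground-truth label, so a person rates each output."""
    return [(text, reviewer(text)) for text in generations]

# Classical path: a trivial model and labeled history.
history = [(1, 1), (2, 0), (3, 1), (4, 0)]
model = lambda x: x % 2  # toy "model"
print("backtest accuracy:", backtest(model, history))  # 1.0, auto-validated

# Generative path: outputs need a human in the loop.
drafts = ["summary draft A", "summary draft B"]
reviewer = lambda text: "approve" if "A" in text else "revise"  # stand-in human
print(human_review_queue(drafts, reviewer))
```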
You're profitable, right?
This year, we started to invest a lot more. One of the big differences: historically, Invisible had only raised 7,000,000 of primary capital in its entire nine year journey. We initially announced 150,000,000 and have now raised 130,000,000, and I'm investing very heavily in technology. So we will not be profitable this year.
Can you just take me to that decision? Because this was going to be my question: that's a very clear decision around profitability, and profitability often comes at the expense of growth, naturally. Can you take me through that decision making and how you thought about it?
Yeah. Look, I mean, to me, it was a simple one, which is if you think about the dynamics of return on capital, you can either harvest capital or invest capital, and your decision to invest depends on the growth you see as a result of that investment. And I think we're in the greatest environment for growth that has ever existed. I think Invisible is really uniquely positioned to capitalize on that growth too, and so I think of our five core platforms, I think of the growth vectors across both AI training and enterprise, and there were just way too many different things I thought were interesting to invest in. It was the clear best use of capital.
And look, I'm trying to build this for the next ten to twenty years, and if you wanna build enterprise value over ten to twenty years, now is the time to invest and build. And, yeah, I hope we never get to the harvest stage, but it's definitely not now.
Where are you not investing that you want to be investing?
I think the simplest answer is actual physical world interactions. What I mean by that is a lot of the most interesting data, which we don't even really have access to yet, is things that exist in the physical world and are more complicated to acquire and organize. I'll give you an example. We're serving one of the largest agricultural conglomerates in The US on herd safety, so actually monitoring risk factors, like when you should send a vet for their herd of cows, basically.
That whole process relies on us actually sending forward deployed engineers to farms, dropping Starlink terminals into those farms, and building out custom computer vision models in those contexts. And I think there are so many different physical world contexts that become really, really interesting, but it does take cost and capital to build those out. Like, you know, I think oil and gas, oil rigs are an interesting one as an example. And so I think physical world interaction patterns are some of the most interesting growth vectors for this, but they do take time and money to invest in, robotics being another big part of that.
One area of investment that I think is interesting is brand. How do you think about Invisible's brand today?
Well, it's interesting. When I took over, if you looked at the entire public internet, I think there was one article available. And so we've definitely spent a lot more time this year thinking about it. Was that a deliberate decision? I think so, to some degree. Invisible has a culture of, we believe in doing great work for customers, and we were not really focused on telling the whole world about that.
Does that become detrimental to the business at some point though?
Yeah, look, I do think branding matters a lot. I spend about 70% of my time on the road, I go to a lot of conferences, things like that, and I think building a brand is really important for trust, for awareness, for engagement. And how you tell that story is really important too. I'm very much a believer in one of my favorite ideas: Marc Andreessen has this idea that when private and public narratives diverge, that is the risk or the opportunity. Meaning, if you say things you don't believe to be true, or if everyone's saying things they don't believe to be true, then what is the actual private narrative?
So I think it's been very important to me to make sure of that.
So can you just help me understand that?
Yeah. So hypothetically, if I was going around saying we have an out of the box agent that does everything and then that wasn't actually true, that's what either creates opportunity for others or risk for us. That's how I think about it. And so I think what's been very important for me and how
Is that not our industry? I'm sorry. I don't mean to pick a fight with Marc Andreessen, but like, hello, Marc. Like, our job is to sell and then deliver later. Like, I'm looking at thinking, well, I'm fucked.
Well, you know, I guess it's all a question of degrees. In my mind, I wanna say things where the narrative is the same to the public, to what our team thinks, and to what our customers experience. That's part of why I have focused on saying some of the nuances of what's not working and not claiming everything works out of the box. It's a different approach, but it's been core to how we've thought about building the brand: we are building this around trust, where I want a company we work with to know that if I say this will work, it will work. And I think you only get one chance to do that right.
Do you agree with fake it till you make it?
That's such an interesting question. I think it depends on what faking it means, right? One of the things I think is really complicated about Gen AI is that it's nondeterministic.
So if you've never built a machine learning model to do pricing in industrial manufacturing, you can still understand what data is available, understand how the price is being set today, and get pretty comfortable that what you say you will build will work, and I think that is okay. The challenge of nondeterministic systems is there is more risk to faking it until you make it. Meaning, you can go out and say your agent will do anything, but then you actually have to deliver an agent that works. I think that's part of the interesting contracting dynamics you're asking about.
A lot of the contracts people will sign right now are, I'll sign for 50 agents to be delivered, but then the question is: do you deliver the agents? Do they work? And so that is a different thing than SaaS, to go back to your earlier question. If I deliver a SaaS box, I know it will work. If I deliver an agent in the current world, it's different. There was actually a report AWS came out with today; it's interesting: something like 70% of agents are actually not even AI agents as you'd think about them.
Most agentic processes today are actually traditional script writing and traditional automation, right? And that's why I don't self identify as an agent company at all. We do AI agents; AI workflows are a core part of what we do. But we do data, we do training and fine tuning, and agents are one tool in the toolkit, because I think too much emphasis on them, a lot of the time, won't work.
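To illustrate the distinction he's drawing, here is a minimal sketch. The `call_llm` function is a hypothetical stand-in for a model API, not a real library call, and the routing rules are invented.

```python
# Minimal sketch of the distinction: a "traditional automation" step is a
# deterministic script, while an AI agent puts a model's (non-deterministic)
# decision in the loop. `call_llm` is a hypothetical stand-in, not a real API.

def traditional_automation(invoice: dict) -> str:
    # Deterministic rules: same input always yields the same output.
    if invoice["amount"] > 10_000:
        return "route_to_manager"
    return "auto_approve"

def call_llm(prompt: str) -> str:
    # Placeholder for a model call; in reality this is non-deterministic,
    # which is exactly why its decisions need testing and fine tuning.
    return "route_to_manager"

def ai_agent_step(invoice: dict) -> str:
    prompt = f"Decide how to route this invoice: {invoice}"
    decision = call_llm(prompt)
    # Guardrail: constrain the model to an allowed action set.
    allowed = {"auto_approve", "route_to_manager"}
    return decision if decision in allowed else "route_to_manager"

invoice = {"vendor": "Acme", "amount": 12_500}
print(traditional_automation(invoice))  # always the same answer
print(ai_agent_step(invoice))           # depends on the model's output
```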
Did you see the video of the robot going around the house recently? It was like the worst thing ever. It took, like, eleven minutes to take out a cloth, and they made it do it again. And then at the end, it was like, and this was controlled by Simon in the back room. And you're like, the shittest robot ever was then controlled by some weird dude in your back bedroom. Like, this is so shit.
Yeah, I did see that. And look, I think robotics is another one that will take longer, but will be really interesting when it works. But by the way, I think even in that case, you'll need more task specific robotics, not just broad based.
Have you ever faked it till you made it and been caught out? And did you learn anything from it?
So when I first started working in this, it wasn't even called AI back then; data analytics was what it was actually called. This was probably twelve or thirteen years ago now. The firm gave me a pretty interesting purview to explore where I could build out AI offerings across different sectors and customer bases, and I don't think I knew what I was going to build, candidly. The interesting dynamic is I had a lot of conviction, partly because of some of the things I did before, that AI could be really useful on a whole host of things, from inventory forecasting and pricing to credit underwriting.
If you just thought intuitively about the sources of data, and the fact that so much of the software in America is over 20 years old: most of that data is massively fragmented, not clean, and so a lot of the decisioning that happens in the enterprise is done in a really fragmented way. And this is what I did know. I knew that if you took your average sales rep making a call, most of the time they were googling stuff to try and figure out what information they needed. Not now, but twelve years ago, they had very little information on the script: customer information, what they might sell. So I had a lot of conviction that that would work.
I did not know what would be most interesting. In fact, there were areas I thought would be really interesting, like banking, that were actually much harder to do this in consistently. It was somewhat like you mentioned earlier with banks: the average bank spends 93% of its tech cost on maintenance initiatives. 7% goes into building new things.
It's my favorite thing. I just had one of the CEOs of a big vibe coding platform on, and he was like, if SaaS is dead, we wanna build our own
I've heard that stuff. Yeah.
Yeah. Yeah. And I'm just like maintaining, provisioning, updating. Are you high?
Yeah. If you've never gone through InfoSec and approvals at a bank, the banks are banks. And look, for very good reason, banks are much more complicated to do a build like that in, right? And so I think what
This event that I was at last week was at a bank. They have 6,500 people in KYC alone. Six and a half thousand people.
It's a great example. When I was doing that in the early days, partly because there was very little media coverage or interest in it, I was figuring stuff out from first principles. So the degree to which I faked it till I made it was that I had to find colleagues and customers who trusted me enough to let me co-iterate and develop stuff with them. I had to figure out a way to recruit really good people. I actually think if you take any business very simplistically, it's a question of: can you build trust with customers and co-iterate to develop and make things work, and then can you recruit unbelievable people to deliver that? It really comes down to recruiting in a lot of ways. I think that's actually the number one thing we focus on.
I think of us as a talent company as much as anything else. Not to use a sports analogy, but Nick Saban did not build Alabama football with the process; he built it by recruiting the best football players in the country. I think about it the same way: you have to recruit great people. So to some degree, in the early days, ten, twelve years ago, I was setting a vision, trying to figure stuff out, and iterating a lot.
And I do think we ended up building a lot of things that really worked, but it took time and iteration as much as anything else. It took iteration and trust. So I would say the counterintuitive thing is I didn't fake it; I never told people it would definitely work. My entire approach was to say: I think this will work, this is my reasoning why, let's build it.
And actually, a lot of people were very comfortable with that. I think if you go in and say, I have an out of the box AI that solves all your problems, people are pretty skeptical.
I do just want to stay on recruiting, because again, I think the show is successful because you put up your hand and you're like, as a startup CEO, one of your biggest jobs is to recruit great people. Having recruited people across different companies now, both McKinsey and obviously Invisible, what would you advise startup CEOs in the earlier stages, knowing all you know now, about what it takes to be great at recruiting, acquiring, and retaining great talent?
Yeah, it's probably the topic I spend an enormous amount of time focused on, probably the topic I think about the most, because I actually do think if you get amazing people, everything else will follow.
So you agree with the maxim of hire great people and let them do their work? Because people have kind of pushed back on that now.
Yeah, though not just hire: hire, retain, and evolve great people, because you have to give them a platform they enjoy day to day. Two things I believe that are somewhat counterintuitive. First, when you recruit a great person, I don't think about role most of the time. People are very role focused, like, I will hire this person and they will only do oil and gas, as an example. But the reality is really good people will run five to six different roles and work across seven to eight different products.
Particularly on the business side, you may have somebody that does everything from delivery to sales to account lead, and you can be comfortable with that if you hire great all-around athletes. The second thing is it has to be fun. My view, and one of the narratives that has gotten a bit lost in the last couple of years, is that if you have a culture that is brutal to work at, people will leave. They might stay around as long as your stock's high, but they're not going to stay. You have to create an environment where people really enjoy going to work every day, where they're intellectually challenged, and where they feel like they can unleash creativity.
And so I spend a ton of time thinking about that.
Can you just, I don't want to argue back, but I want to build great companies myself. Yep. I'm trying to with 20VC, and I try to build good cultures. Revolut is a brutal culture to work at, famous for it. But Nick has famously always told me culture's fucking bullshit.
Winning is what matters. When people win, they learn more, they earn more and they grow.
Yeah.
And that really is culture. Brutality in bounds drives humans. Is that wrong?
No, no, I think it's actually right. Let me caveat what I said: I think it's also the nature of the business I'm in being AI. That's a very true statement if what you're trying to do is scale a relatively consistent business model to do one or two things; then it is a function of execution and hiring people to go into very specific roles and do very specific things well. The difference is a lot of what we do is fundamentally research and exploration. In the AI world, it is a different dynamic, in that you're trying to figure out very specific problems to solve with customers and build really unique tech, and in that world you do have a different cultural dynamic.
It is a research culture as much as it is an implementation culture.
Is that difficult then? We do a show every Thursday that has blown up, which is incredibly nice for us as a business, where we have Jason Lemkin and Rory O'Driscoll. We talk about news, and we talked about Sam Altman and war mode. Can you do a war mode in a culture of research and AI, where it's maybe more thoughtful? Does that work?
Yeah, there are definitely parts of our business, like our delivery and operations teams, that are in war mode quite a bit of the time. So again, I'm more describing general, countercultural beliefs I have on how to hire certain sets of great people; I don't think it applies to every single function of the company. I would agree with that. You definitely have to be able to push really hard to deliver certain outputs, and I think we do a great job of that, but I also believe in ideas like: every great engineer should be able to spend 30% of their time on new projects as well as sprinting on the existing ones.
I think it's paradigms like that that are important.
What decision are you scared to make that you think about often?
Yeah. The simplest answer I'd have is that growth in this industry relates to the amount of capital you raise, back to your earlier question on investment. There's a world in which hyperscale growth is possible, but you have to invest a lot more to do that. Every new customer you onboard, it does cost money to do the forward deployed engineering work. You invest more in your tech.
So there's an interesting question: do you run a business for consistent, steady growth for twenty years, or do you try and build something that gets to $50,000,000,000 to $100,000,000,000 and becomes game changing? We have very much tried to operate in a way where we have a path to profitability more than anything else, but we are going to invest in the near term, because I think it is a very interesting time to do that.
I know you don't like to name names, but I can. Like, when Mercor raises, like, $2,000,000,000, are you like, fuck, we need to raise more fucking money?
It's interesting: if you look at the players in our space, there have been very different levels of capital raised, and people have had more or less success. A lot of our investment is in different areas than many of our peer set in AI training are focused on. A lot of it's in the enterprise, in core software platforms that are maybe a little bit different from what others are focused on. So you can raise a lot of money, and the question is where you spend it. Again, most of the capital we need in the next five years is more enterprise focused.
I think we've actually built something on the IT side I feel very, very good about.
We were talking about recruiting before I went off on a tangent there. You now have offices despite being a remote company for several years. Does remote not work?
Yeah. We were a fully remote company for nine years until I took over. We've now gone largely in person. We do have some folks who work remote, but we now have offices in New York, the old Pinterest space in San Francisco, London, Paris, Poland, DC, and we're just opening Austin, Texas now. The interesting thing I've experienced is that with remote, you really struggle to build culture in the same way. Since we moved away from remote, there's just a way stronger, more positive culture from colocation; people enjoy their work and get to know their coworkers a lot better as a result.
It gives us a lot more depth with customers to be colocated in cities where we spend time with them. Take London and Paris: we need to be colocated with the customers there. It can't just be someone on a Zoom screen in New York. Do you see productivity increase? Exponentially, yes.
Take engineering as an example. You can execute engineering tasks remotely, but the process of working through really thorny problems is different. I've tripled the size of the engineering team, and the interesting thing is the vast majority of those people wanted to be in person. Now, I'm not saying that's true of all engineers, but it was interesting how many people, particularly the more junior tenures, were like, I want to be colocated, I want to work through things. And I don't even mandate office attendance.
I just have the offices, and we have huge appetite for them. We have 40 people in our London office; I was with many of them last night, and they were all commenting on how many of them come in voluntarily, even on a Friday when they might not need to, because they like being around their peers. I would actually bifurcate two separate things that I don't think are related.
One is the hours you work, seven days a week, very flexibly depending on when client needs exist; the other is physical colocation. I don't think they're related. Meaning, if stuff comes up on a Saturday or you're pushing on a new product build, people will work that Saturday, but if you do that from your home, that's totally fine. Office culture to me is, as a thought experiment over a year, there's a diminishing return from being in the office all the time, where you lose flexibility. As an example, if we were remote 100% of the time, that would not work at all. If we were physically in the office six days a week, I think that is overkill.
You lose great people; particularly senior enterprise folks don't wanna be in the office on Saturdays. What we've found is a nice balance: people come to the office most days, they really enjoy being with their colleagues, they work most days, but they can do it from their own home on the weekends, and I think that sort of flexibility is good.
Final one before we do a quick fire. What did you believe about management that you now no longer believe?
Two things I would highlight. One is that control is a bit of a fallacy, depending on the volume of things you have going on. To the question earlier on hiring great people: if, let's say a couple of years from now, you're serving a couple hundred customers on different topics, you need values, consistent tooling, consistent approaches, but you need to empower all those teams at the edge to operate. One of the big focuses I've had over the course of the year is to reduce a lot of our hierarchy and make the organization way flatter, so that people at the edge serving customers are empowered to make decisions. They have decision making frameworks, they have consistent tooling, but they are empowered. Trying to control that centrally maybe works in a manufacturing business, but you add a lot of latency to decision making.
Interestingly, there's a lot of military history that would say the same thing: if you look at the function of an army, at some point it moves to people in the field making the decisions, so you have to have the training, the strategy, and the recruiting to do that, and then you have to empower your teams to work. I think about a lot of this very similarly. The second thing, and I think a lot about this, is that in the AI world at least, strategy is a somewhat overrated concept. What I mean by that is, I was talking to a CEO in the biotech space, and he was saying that strategy is very important for them because every time they make a capital decision, it's a seven year capital cycle. In that case, strategy makes a lot of sense.
But in the AI world, one thing that's been interesting to me is that every three months the entire world changes, and I just had to get very comfortable with that dynamic. You have to think about your investment life cycle as core beliefs you hold, plus 30% to 40% of things you iterate constantly based on new tech. There is tech you're going to build, say a new voice agent comes out, that will become obsolete, and you have to be very comfortable that you're building an interoperable set of frameworks you can integrate the new tech into; that has to be a core function of the business. Five year strategic planning is not a useful exercise right now in a lot of ways. You want to think about five years in terms of the cultural context you build and the organizational, institutional memory, to use the Seven Powers framework, but the actual iteration cycles are much, much faster. And if you don't react quickly, that does not sustain.
The interesting flip side of that is enterprise sales cycles, for example, are much longer, so it's not like you can't survive otherwise. But the big thing is a lot of the tech being developed changes every two to three months, and so you need to be constantly incorporating that into what you build.
Final, final, final one before we move to a quick fire. You talked about always traveling, and you mentioned a girlfriend earlier. How do you make that work? And what would you advise me, like, tips and tricks to not have a severely pissed off girlfriend most of the time?
I think the first thing is to find a great girl who understands that you are really passionate about what you're doing and is supportive of that. I think my girlfriend Claudia has been great on that front. I am very appreciative of that. But look, I mean, it's tough. I'm on the road probably 60% of the time.
If you look at my last four or five weeks: Riyadh, Geneva, Paris, Berlin, London, San Francisco, Boston, Singapore, now London again. So, I mean, that's a lot. Do you enjoy this? I do in some ways. I feel very lucky to be building something at this particular time with a group of people I love working with. This happens to be what I spent my last decade doing, and it happens to suddenly be what a lot of people wanna do, which is great.
And so I feel very lucky, and every day I wake up and see what else I can do to push that forward. I do kind of live on the road, but some of the things I've tried: you figure out things like FaceTime, you make sure you keep the cadence of interaction high, because being on the road is tough. But I also don't think it's forever. I'm in that fun stage where we kind of went zero to one and now we're trying to go one to n, but we're not yet a fully mature public company or anything like that, and so I think she's been very understanding throughout that process.
Are you ready for a quick fire round? Yeah. Okay. OpenAI at $500,000,000,000 or Anthropic at $360,000,000,000, which would you rather invest in?
I do not comment on any players in the model builder space, for a variety of reasons.
You can see the discomfort around it. What's the most underrated infra company today?
Databricks, and you're going to be like, well, they're very highly rated. But look, I think their tech is great, and it's interesting: in a lot of ways, the most useful foundation for AI is really good Databricks infrastructure. When I hear a customer has them, I'm always very happy.
What's the best advice that you've been given that you most frequently go back to?
We kind of talked about this a little earlier, but when I took the role, I asked a CEO I respect a lot for his advice on the best way to think about a team. And he said, look, your job as a CEO is to do three things really well: recruit great people, create a culture where they love working together and build great things, and try to make them all extremely rich. It's a funny framework, but an interesting way to think about my responsibility to employees.
I wanna find great people, help them enjoy each other, and then build something that becomes big and helps all of them achieve their dreams.
What's one widely held belief about AI that you think is completely wrong?
That out of the box agents will solve everything with the push of a button. That is, I think, the biggest misconception. Many people were hoping the adoption curve would be: buy something, push it into my business, and it takes a whole process and fixes it. They're realizing it requires training, fine tuning, and a whole host of process redesign and business ownership.
You are me today. You have a new $400,000,000 fund, and you're a partner in the fund with me. Where should we be investing when most people are not? Because everyone is investing in agents out of the box.
Well, look, it's an interesting question, because a lot of the reason people are investing in out of the box agents is that they're trying to apply the SaaS paradigm of what's worked historically to AI, which is challenging. The model building layer is clearly producing amazing returns. The AI agent layer is more complicated right now, and the application layer is tricky too; you hear a lot of commentary on this, like many of the applications may or may not work. They're not really getting full workflow embedding.
They're more of a nice to have in a workflow context. So my counterintuitive take would be that one interesting question of the paradigm now is whether new companies built around AI get distribution faster than big companies figure out how to adopt it. Some of the most interesting new businesses are actual businesses using AI in the physical world, businesses that are AI native and will be highly disruptive. You mentioned Revolut and banking, for example, or you could go to loan servicing. There are many different areas where people are standing up new businesses.
One of the most interesting stats I've heard recently: if you look at Y Combinator's recent class, I think it has two x the revenue of any prior class, and many of those are businesses that are actually serving a customer need, not selling that customer software, if that makes sense. So one way to do this is to bet on AI agents, which are more of a SaaS paradigm, selling stuff to customers, and the other way to think about it is: what are the business models that will change because of this? I think Gen AI native services businesses, like tax accountants, etcetera, are really interesting examples of that.
Again, you're a partner with me in the fund. Yeah. Do we just get used to a world of lower margins? Is that how this business plays out? Is the world of 70%, 80% software margins over?
First of all, I'd challenge that 70%, 80% software margins actually ever existed. What I mean by that is there's the gross margin, but if you look at profitability and public software multiples, it's fascinating. In the last two years, you've seen public software multiples go from 20x to 10x, partly because of growth changes and partly because, as they've tried to move to profitability, their growth slows materially. I actually would take the flip side of this, which is that the integrated players will be very, very profitable because of the way they grow: they'll be able to acquire customers faster and build them things that are good faster. They won't have the box stickiness, but I would argue a lot of those software companies, below the line, were not that profitable.
When you look forward to the next ten years, final one, what are you most excited about? For me, my mother's got MS; I look at potential advancements in discovery and treatment pathways. What are you most excited for? I'd like to end on a tone of optimism.
Yeah, despite some of my, what I'd call, realism on enterprise adoption, I actually am an AI optimist, and I think the current narrative on some of the risks is far outweighed by some of the benefits. Just to give a couple of examples, and I'll go through a few, including healthcare. Take energy: there's a lot of question around data center implications for energy, but if you do the math right now, data centers are about 1% of total global electricity usage, and AI data centers are 0.25% to 0.5% of the total, so actually really small. I don't know if you know this, but cooling, electric air conditioners, is 14% to 20% of global electricity usage.
AI has so many different applications in grid optimization and cooling that, I mean, the World Economic Forum just came out and said it's going to be massively net positive from an environmental impact standpoint. So energy is one where, if you think about all the energy needs we're going to have and the investment now going into clean energy because of all this, I think we will actually be in a much better place ten years from now. Healthcare is another interesting one. If you look at US healthcare, we spend $14,000 per capita per year on patients in The US, so that's our rough spend. That's 2.5 to 3x what Germany and Canada spend, as an example.
If you then break down the context of that, roughly 9% of it is administrative, something like 25% of it is waste, and then the actual cost of care is really challenging. I mean, Johns Hopkins just released a stat that two hundred and fifty thousand deaths a year happen because of avoidable errors. And you see things in AI like twenty percent better identification of breast cancer risk, for example. So healthcare is another one where the cost framework has not been good over the last twenty years, and the cost of care improvements will be really material if AI works well.
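As a back-of-envelope check on the figures as quoted (the speaker's numbers, not independently verified), the arithmetic works out as follows:

```python
# Back-of-envelope version of the healthcare figures as quoted above:
# US per-capita spend vs peers, and the rough split of that spend.
# All inputs are the speaker's cited numbers, not independent data.

us_per_capita = 14_000       # USD per year, as cited
peer_multiple = (2.5, 3.0)   # vs Germany / Canada, as cited
peer_range = tuple(round(us_per_capita / m) for m in reversed(peer_multiple))
print(f"Implied peer spend: ${peer_range[0]:,}-${peer_range[1]:,} per capita")
# -> roughly $4,667-$5,600

admin_share, waste_share = 0.09, 0.25  # as cited
print(f"Administrative:  ${us_per_capita * admin_share:,.0f}")
print(f"Waste (approx.): ${us_per_capita * waste_share:,.0f}")
print(f"Remainder:       ${us_per_capita * (1 - admin_share - waste_share):,.0f}")
```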
The one I'm probably most excited about is education. If you're a kid growing up in any socioeconomically disadvantaged city in the world, your ability to learn about any topic on earth incredibly quickly is better now than it has ever been at any point in history. With just an Internet connection, you can pick any topic on earth and go learn it. And one of the reasons that's particularly important is that the educational system we've had for the last ten, fifteen, actually fifty years doesn't really work. We have massive K through 12 challenges with STEM topics in The US, for example.
We have huge learning gaps, largely driven by sociodemographic context, and most of our educational system is based around teaching people biology, English, and history, and not basic things like FICO scores or how to code. And to add to all that, the college system has created a student debt crisis, where way too many people are going to colleges that are not worth going to and taking on enormous amounts of debt to do it. So I think the way our educational system functions will shift materially. We're a talent assessment company: an enormous number of the people we bring in did not go to college, and we assess them on cognitive aptitude and skill. The really positive note I would leave on is that the topics people learn, and the way we look at resumes and screen and assess people, will move in a really positive direction, and a very different one than we've had the last hundred years.
I'm absolutely thrilled to hear that there is value in non-college grads or dropouts, as a dropout myself. This has been so much fun to do, Matt. Thank you so much for being so flexible with the topics. You've been fantastic, dude.
Thank you for having me.