The biggest mistake people make on culture is they think of it as this very abstract thing. And my favorite quote on this is from the samurai, from Bushido, where they say, look, culture is not a set of beliefs, it's a set of actions. So the way for the US to compete is the way the US always competes. We're an open society, which means everybody can contribute, everybody can work on things. We're not top down.
And the way they get everybody to work on things is to have the technology be open and give everybody a shot at it, and then that's how we're competitive. I think when you have new technology, it's easy for policymakers to make really obvious, ridiculous mistakes that end up being super harmful.
Today on the podcast, we're sharing a conversation from Columbia Business School with a16z cofounder Ben Horowitz. Ben is a Columbia College alum from the class of '88 and joined Dean Costis Maglaras for a discussion on AI, culture, and leadership in times of disruption. They cover how open source AI and blockchain could shape the global race for technological leadership, and why culture, not just strategy, determines which companies thrive through disruption. Let's get into it.
What a wonderful, wonderful way to start the semester by inviting an incredible leader and an alum of the college, Ben Horowitz, to join us to talk about a variety of things. So I'm not going to spend time introducing Ben, but I'm going to share a small anecdote, because Ben ran a company in the Bay Area from the late '90s until the mid-2000s that I, without knowing Ben back then, visited with a bunch of MBA students, I think in '99 or 2000, as part of our Silicon Valley trip back in January. The name of the company was Loudcloud, which was the second company you worked at, after Netscape. So he has seen the entire trajectory of the internet era of Silicon Valley, and then around the late 2000s started Andreessen Horowitz with your partner, which has been one of the leading venture capital firms. So I want to start by talking about AI.
We're going to talk about venture capital. We're going to talk about leadership and the types of teams and people that you look for. But I was reading this morning about Anthropic closing their latest round at a $183,000,000,000 valuation, which speaks a little bit about AI, and speaks also a little bit about how venture capital has changed, because that's a private company that is approaching a $200,000,000,000 valuation. Incredible growth, incredible change in capabilities. Where do you think we are now in that AI cycle?
And you were a war veteran from the 2000s. So in some sense, maybe you can give us your insight about that and then launch from there.
Well, I think we're early in the cycle in the sense that we just got the technology working like four years ago. So if you think about technology cycles, they tend to run twenty-five-year sort of arcs. So we're really, really early on. I think there is a question now of how big is the next set of breakthroughs compared to the last set? So if you look at it, you could call gradient descent like a 10 out of 10, and then the transformer and reinforcement learning maybe eight out of 10 breakthroughs.
Is there another 10 out of 10 breakthrough, or even an eight out of 10 breakthrough, on the horizon? We haven't seen it yet, so we'll see. There are certainly companies kind of working on that. And so the big thing is, is there another kind of big discontinuous change in, I'll just call it probabilistic computing since AI tends to freak people out, or are we just gonna keep kind of building on the breakthroughs that we've had to date? And that's, I would say, an open question right now.
When you think about adoption and disruption in the economy, how far out do you think that is going to be? And what sectors do you think may start getting affected? Large sectors, big corporates and
I think it's kind of like both overrated and underrated in terms of the dislocation. So if you look at the long arc of automation, going back to the 1750s when everybody worked in agriculture, nobody from 1750 would think any of the jobs that we have now make any sense. They're all ridiculous, like completely frivolous ideas. And so it's not clear like what the jobs will be kind of in the next thirty, forty years. But like I said, the jobs that we have now were unimaginable.
Nobody would think somebody doing graphic design, or certainly being like a marketing executive, was an actual job, if that makes any sense at all. So we'll see on that. And then the other thing is-
You're speaking to an MBA crowd.
If you think about computers, so like deterministic computers, what we've had since the kind of '40s and '50s, obviously a lot of things have changed. And many, many, many jobs are gone because of it. But it was much more gradual than I think people would have thought it would be when it happened. And some of the changes... like the whole private equity industry was created because of the spreadsheet, because it turned out that a huge portion of every business was just people manually calculating what you'd calculate in a spreadsheet and a model. So basically private equity companies were like, oh, we use a spreadsheet, take that company over and then get all the money out and so forth.
And so that created that whole industry, but nobody would've put that together in advance. It's just like weird side effects of the tech. And I think what we're seeing in AI is it's kind of starting to automate the mundane and then move to kind of over time, maybe it will eliminate that job, but the job is kind of morphing as it goes. So I'll give you an example. So my business partner, Mark, and I had dinner with a kind of fairly famous person in Hollywood who's making a movie now, and basically half the movie is AI.
But the way that's working is they're taking an open source model. By the way, the open source video models are getting very, very good. And normally in Hollywood, when you shoot dailies, you might shoot a scene like 10 or 20 times. Now they'll shoot it like three times and have the AI generate the other 17 takes. And it's indistinguishable, so it really improves the economics of the current moviemaking industry, which have gotten extremely difficult with the way distribution has changed, and it's gonna make it much easier for many more people to make movies.
But I think that the way Hollywood would view AI right now is that it's just taking all the jobs. Right? Like, it's just gonna write all the movies, make all the movies. I think that's not gonna happen. It's just not gonna be that way.
It will change. I think there'll be a new medium that's different than movies, the way movies were different than plays using the technology. So things are gonna change. I think it's gonna affect every single sector, but not in ways that you would easily anticipate. By the way, every writer in Hollywood is already using AI to help them write dialogue that they don't feel like writing and all that kind of thing.
So that's already going on. But that's not eliminating those positions. It's just kind of enabling them to kind of work faster and better.
You mentioned open source. Where do you fall on that? I think I know where you guys fall on that spectrum, but maybe you can tell us a little bit about your thinking about open source, and perhaps also talk about US-China and the competition in AI in that context.
Yeah. So, well, with open source, in AI there's kind of the open source algorithm, which is not that big a deal, but then open weights is the bigger thing, because you've trained the model and it's encoded in the weights. And in that encoding there is the quality of the model, but also the subtle things like the values of the model: the model's interpretation of history, the model's interpretation of culture, human rights, all those kinds of things are in the weights. So the impact of open source... if you think about the control layer of every single thing you use, every device in the world is gonna be AI, right? Like, you're gonna be able to talk to it.
What those weights are matters in terms of the kind of global culture of the world, and how people think about everything from race issues to political issues to free speech to Tiananmen Square, what actually happened, that kind of thing, is all encoded in the weights. And so whoever has the dominant open source model has a big impact on the way global society ends up evolving. Right now... so, a combination of things happened at the beginning of AI. One, just the way the US companies evolved in conjunction with US policy. US policy under the Biden administration was very anti-open source.
And so the US companies ended up being all closed source. And the dominant open source models are now from China, DeepSeek being the one that I would say not only most US companies use, but basically everyone in academia uses: DeepSeek and Chinese open source models, not US open source models. So we've certainly, I think, lost the lead on open source to China. And OpenAI open sourced their last model. The problem with going from proprietary to open source is it doesn't have, what do you call it, the vibes.
So open source is very vibe oriented, with the community and the way developers think and so forth. So if something evolves in open source, it ends up being a little different than if it doesn't. So I think it's really important. I think the reason the Biden administration didn't want the products open source was... let me describe the rationale, and then I'll say why it was delusional. The rationale was, okay, we have a lead in AI over China.
I don't know. We had all these pseudo smart people running around saying we have a two year lead and a three year lead. Like, I don't know how you would know that, but they were wrong, it turns out. And that this was like the Manhattan Project, and we had to keep the AI a super secret. Now it's delusional on several fronts.
One, obviously, their AI is really good, and their open source models are actually ahead of ours. So we don't have a lead. But the kinda dumber thing about it was, look, if you go into Google or OpenAI or any of these places, do you know how many Chinese nationals work for Google and OpenAI? Like, a lot. And you think the Chinese government doesn't have access to any of them?
Come on. And you think there's security? There are no SCIFs there. All that stuff's getting stolen anyway. Let's be serious.
There is no information that companies in the US are really locking down. So the way for the US to compete is the way the US always competes. We're an open society, which means everybody can contribute, everybody can work on things, we're not top down. And the way they get everybody to work on things is to have the technology be open and give everybody a shot at it. And then that's how we're competitive, not by keeping everything a secret.
We're actually the opposite of that. We're terrible at keeping secrets. And so we have to go to our strengths. And so that's just a dumb mistake. But I think when you have new technology, it's easy for policymakers to make really obvious, ridiculous mistakes that end up being super harmful.
And so we have to be careful here.
So when thinking about AI and national security, are you concerned about that?
Well, I think there's a real concern on AI and national security, but it's not in terms of keeping the AI a secret, because we can't. Look, if that was a viable strategy, then great, but it's not a viable strategy. We'd have to reshape the entire way society works.
And by the way, even on the Manhattan Project, the Russians got all the nuclear secrets. They got everything, including the most secret part, which was the trigger mechanism on how to set off the bomb. They got all of that. And so even then, with no internet, with the whole thing locked down, with it in a secret space and all that kind of thing, we couldn't keep it a secret. So in the age of the internet... and by the way, China's really good at spying. This is one of the reasons why there's so much tension between the two countries.
It's like almost like a national pride thing to be good at spying in China. So they're really good at it, and we're really bad at defending against it. So like that just is what it is. Now, having said that, all of defense, like war is gonna be very, very AI based. We've already seen this in the Ukraine with the drones and so forth.
But, like, robot soldiers, autonomous submarines, autonomous drones, all that stuff is basically here. And so the whole nature of warfare, I think, is changing, and we have to take that very, very seriously. But I think that means competing in AI. And the best thing for the world is that not one country has the AI to rule them all. That's the worst scenario where anybody is too powerful.
I think a balance of power in AI is good, which is why open source is good, which is why us developing the technology as fast as we can is important. It's why the private sector integrating with the government in the US is important. China's much better at that than we are, so we have to get better. But keeping things a secret, I don't think is gonna work. I mean, I actually don't even think keeping the chips to ourselves is gonna work.
So far we thought, okay, if we stop the export of NVIDIA chips to China, that will stop them from building powerful models. It really hasn't. So a lot of these ideas just end up retarding the growth of US technology and the industry, as opposed to doing anything for national security.
You mentioned the previous administration and we talked about their attitude. I want to ask you a question about regulation. I've had so many conversations with European leaders about that. Maybe you do as well.
Sorry, I shouldn't laugh.
And why don't you share your thinking a little bit about the American situation and sort of the global situation?
Yeah. So it's funny, every panel I've been on or kind of time I've been at a conference with European leaders, they always say that whether they're in the press or industry or the regulatory bodies, they say the same thing. Well, Europe may not be the leaders in innovation, but we're the leaders in regulation. And I'm like, you realize you're saying the same thing. So Europe kind of got down this path, which is known as the kind of precautionary principle in terms of regulation, which means you don't just regulate things that are kind of known harmful, you try and anticipate with the technology anything that might go wrong.
And this is, I think, a very dangerous principle, because if you think about it, we would never have released the automobile. We'd never have released any technology. I think it started in the nuclear era, and one could argue that we had the answer to the kind of climate issues in 1973, and if we would've just built out nuclear power instead of burning oil and coal, we would've been in much better shape. And if you look at the safety record of nuclear, it's much better than oil, where people blow up on oil rigs all the time. And I think more people are killed every year in the oil business than have been killed in the history of nuclear power.
So these regulatory things have impact. In the case of AI, there are kind of several categories that people are talking about regulating. So there's kind of the speech stuff, and Europe is very big on this: can it say hateful things? Can the AI say political views that we disagree with?
This kind of thing. So very similar to social media and kind of that category of things. And do we need to stop the AI from doing that? And then there's kind of another section, which is, okay, can it tell you instructions to make a bomb or a bioweapon or that kind of thing? And then there's another regulatory category, which is, I think, the one most people use to kind of get their way on the other things: well, what if the AI becomes sentient and turns into the Terminator?
We've got to stop that now. Or kind of the related one, which is a little more technologically believable, but not exactly, is takeoff. Have you heard of this thing, takeoff? So takeoff is the idea that, okay, the AI learns how to improve itself, and then it improves itself so fast that it just goes crazy and becomes a super brain and decides to kill all the people to get itself more electricity and stuff, kinda like The Matrix. Okay.
So let me see if I can deal with those. And then there's another one, which is around copyright, which is important, but probably not on everybody's mind as much. So if you look at the technology, the way to think about it is there's the foundation, the models themselves. And it's important, by the way, that everybody who works on this stuff calls it models and not AI or intelligence and so forth. And there's a reason for that, because what it is is a mathematical model that can predict things. So it's a giant version of the kind of mathematical models that you all study to do basic things.
So if you wanna calculate, when Galileo dropped a cannonball off the Tower of Pisa, you drop it off the first floor and the second floor, but then you could write a math equation to figure out what happens when you drop it off, like, the 12th floor. How fast does it fall? So that's a model with maybe like a couple of variables. So think then, what if you had a model with 200,000,000,000 variables? That's an AI model.
And then you can predict things like, okay, what words should I write next if I'm writing an essay on this? Like, you can predict that. And that's what's going on. So it's math. And inside, the model is just doing a lot of matrix multiplication, linear algebra, that kind of thing.
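To make the "it's just a predictive model" point concrete, here is a minimal sketch (not from the episode; the toy text and function names are mine) that puts the two examples side by side: a falling-object model with essentially one parameter, and a toy next-word predictor built from counts. A large language model is the same basic idea as the second one, just with billions of learned parameters instead of a word-count table.

```python
import math
from collections import Counter, defaultdict

def fall_time(height_m: float, g: float = 9.81) -> float:
    """Seconds for an object to fall height_m meters, ignoring air resistance."""
    return math.sqrt(2 * height_m / g)

def train_bigram(text: str):
    """Count which word follows which: a tiny 'model' fit to data."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1
    return following

def predict_next(model, word: str):
    """Predict the most likely next word, if we've seen this word before."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

if __name__ == "__main__":
    # A couple of measured floors pin down the one parameter g; the same equation
    # then predicts a floor nobody dropped a cannonball from (~36 m up).
    print(f"Fall from ~36 m: {fall_time(36):.1f} seconds")

    toy_text = "the model predicts the next word and the model predicts the next token"
    model = train_bigram(toy_text)
    print("After 'the', the toy model predicts:", predict_next(model, "the"))
```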
So you can regulate the model, or you can regulate the applications on top of the model. So I think when we're talking about publishing how to make a bioweapon or how to make a bomb or that kind of thing, that's already illegal. And the AI shouldn't get a pass on that because it's AI. So if you build an application like ChatGPT that publishes the rules of making the bomb, you ought to go to jail. That should not be allowed, and that's not allowed.
And I think that falls under regular law. Then the question is, okay, do you need to regulate the model itself? And the challenge with regulating the model is that the regulations are basically all of the form: you can do math, but not too much math. Like, you do too much math, we're going to throw you in jail.
But if you do just this much math, it's okay. And how much math is too much math? The problem in that thinking is, when you talk about sentient AI or takeoff, you're talking about thought experiments that nobody knows how to build. And I think there are very good arguments, and we do know how to reason about these systems, that takeoff is not gonna happen, and that we have no idea how to make takeoff happen. And so it's kind of one of these things like, well, with the laws of physics, I can do a thought experiment that says if you travel faster than the speed of light, you can go backwards in time.
So do we now need to regulate time travel and outlaw whole branches of physics in order to stop people from traveling back in time and changing the future or changing the present and screwing everything up for us? That's probably too aggressive. And we're really getting into that territory when we talk about sentient AI. We don't even know what makes people sentient. We literally don't.
You know who knows the most about consciousness? Anesthesiologists because they know how to turn it off. But that's the extent of what we know about consciousness. So we definitely don't know how to build it, and we definitely haven't built it today. There's no AI that's conscious or has free will or any of these things.
And so when you get into regulating those kinds of ideas... and I'm not saying that AI can't be used to improve AI. It absolutely can. But computers have been improving computers, like, since we started building them. But that's different than takeoff, because takeoff requires a verification step that nobody knows how to do. And so you get into very, very theoretical cases, and then you write a law that prevents you from competing with China at all, and that gets very dangerous.
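One way to read the verification point (a conceptual sketch of my own, not something from the episode; the function names and toy objective are hypothetical): any loop in which a system improves itself still has to check each candidate change against some external evaluation, and the loop can only move as fast as that check.

```python
import random

def propose_change(params):
    """Hypothetical: the system proposes a tweak to its own parameters."""
    return [p + random.gauss(0, 0.1) for p in params]

def evaluate(params):
    """Hypothetical stand-in for a real benchmark: the expensive, unavoidable
    verification step. You only know a change helped by actually testing it."""
    return -sum((p - 1.0) ** 2 for p in params)  # toy objective, peaks at all ones

def self_improvement_loop(params, rounds=20):
    score = evaluate(params)
    for _ in range(rounds):
        candidate = propose_change(params)
        candidate_score = evaluate(candidate)  # the loop can't go faster than this check
        if candidate_score > score:            # keep only verified improvements
            params, score = candidate, candidate_score
    return params, score

if __name__ == "__main__":
    print(self_improvement_loop([0.0, 0.0, 0.0]))
```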
And so I just say, like, we have to be really, really smart about how we think about regulation and how that goes. Copyright is another one. So copyright, should you be allowed to have an AI listen to all the music and then reproduce Michael Jackson? No. Definitely, that's gotta be illegal because that's a clear violation of copyright.
But then can you let it read a bunch of stuff that's copyrighted and create a statistical model to make the AI better, but not be able to reproduce it? Well, that gets very tricky if you don't allow that, because, by the way, that's what people do, right? You read a lot of stuff and then you write something, and it's affected by all the stuff you read. And by the way, competitively with China, they're absolutely able to do that. And the amount of data you train on dramatically improves the quality of the model.
And so you're gonna have worse models if you don't allow that. So that's a trickier one, but this is where you have to be very careful with regulation, to not kill the competitiveness while not actually gaining any safety. And so that's a big debate right now, and it's something we're working on a lot.
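For a sense of what "you can do math, but not too much math" looks like in practice, here is a hedged sketch (my addition, not from the episode): the rules that exist today key off estimated training compute. The 6 × parameters × tokens rule of thumb is a common approximation, and the threshold numbers below are the ones generally cited for the EU AI Act and the 2023 US executive order; treat all of it as illustrative rather than legal guidance.

```python
def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rule-of-thumb estimate of total training compute: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

# Illustrative thresholds as generally cited; check the actual texts before relying on them.
THRESHOLDS = {
    "EU AI Act systemic-risk presumption": 1e25,
    "2023 US executive order reporting threshold": 1e26,
}

def check_model(parameters: float, training_tokens: float) -> None:
    flops = estimated_training_flops(parameters, training_tokens)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    for name, limit in THRESHOLDS.items():
        side = "over" if flops > limit else "under"
        print(f"  {side} the {name} ({limit:.0e} FLOPs)")

if __name__ == "__main__":
    # e.g. a 200-billion-parameter model trained on 10 trillion tokens
    check_model(parameters=200e9, training_tokens=10e12)
```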
Let me ask you one question and then I wanna move on to crypto and venture and leadership. You mentioned machines building machines, and I think of a colleague of mine who is a roboticist. Yeah. What are you thinking about physical or embodied AI, and are you guys invested in that? Do you think that that's something that is going to be big over the next ten, twenty, thirty years?
How do you feel about that?
Yeah, no, no. I definitely think it's gonna be big, and it's gonna be very important. The biggest industry is probably gonna be robotics. It's gonna be super important. I don't think there's any question.
I think it's further away than anybody is saying. So if you think about, like, the full humanoid robot, well, just to give you kind of a timescale idea. So in 2006, I think, Sebastian Thrun won the DARPA challenge and had an autonomous car drive across the country. And now in 2025, we're just getting the Waymo cars and things that you can put on the road. So nineteen years to kind of solve that problem.
And why did it take so long? And by the way, the self driving robot problem is a much easier problem than the humanoid robot problem because the data is primarily two dimensional. And then we had like all the map data already and so forth. So it was a lot easier to get there. If you think about the robot data, it's many more dimensions.
Picking up a glass versus picking up a shirt, or an egg, is very different. So there's all these subtleties to it. And then with self-driving, if you look between 2012 and 2025, say, and ask what took so long, it turns out that the universe is very fat tailed and human behavior is very fat tailed. And so, in working with the Waymo team, the things that were extremely hard to deal with were, like, somebody driving 75 in a 25 zone, or somebody just running out in the middle of the street for no reason, or that kind of thing. It was very, very difficult to make the car safe around those kinds of use cases because they just weren't in the data set.
And then if you think about robots, we don't have any data on that. And you don't get the data from video because you have to pick stuff up, you have to do things, and so forth. And then these humanoid robots are like, they're extremely heavy. The battery problem is hard. And the models that we have, so to just feed an LLM enough data until it can drive a robot, I can tell you hasn't been working yet.
And so then there's a question, do you need another kind of model? There are a lot of people working on so called real world models. Fei Fei Li's got a new company doing that called World Labs and so forth. But it's gonna take a while to get there. And you can tell in the video models that they're not suited for robots because you can't do things like move the camera angle because it doesn't understand what's in the picture.
And that's okay for a video. It's not okay for a robot. So there's gonna be a lot of things that we have to do before we get to robots. But there's certainly a lot of effort going into it. And in terms of a US competitive space, probably the most worrisome thing right now is that the entire robot supply chain currently is in China.
So every robot supply chain company is Chinese based. I think there's one in Germany, but it's all founded by Chinese nationals.
I think it was bought by China.
Just from, like, a strategic, okay, do you get your supply chain cut off kind of thing? That's something that we probably have to work on. And it's not the most complicated thing to build, the supply chain, but it's something that if we don't do, we're gonna be in the same situation that we're in with rare earth minerals and chips and these kinds of things.
Quick question about crypto before we talk about people.
All right.
We're going to... yeah, crypto is changing quite a bit, and there's been a lot of momentum in the last year or so. How do you feel about crypto and blockchain applications? And do you envision that over the next five years, we may start to see the technology being applied in other areas apart from where it is right now?
Yeah. So crypto is a super interesting kind of technology, and probably, if Satoshi Nakamoto weren't a pseudonymous person whose identity nobody knows, he probably would've won the Nobel Prize for mathematics and economics on the Bitcoin paper. So it's a very interesting and powerful technology. I think the way to think about it in the context of AI is, if you look at the evolution of computing, it's always been in kind of two pillars. One is computers and the other is networks.
So starting with microwaves and integrated circuits, and going to mainframes and SNA, to PCs and LANs, to the Internet and the smartphone, they're very different technology bases, but one without the other is never nearly as powerful. And if you think about AI, what is the network that AI needs? So first of all, in order for AI to be really valuable, it has to be an economic actor. So AI agents have to be able to buy things, have to be able to get money, that kind of thing. And if you're an AI, you're not allowed to have a credit card.
You have to be a person. You have to have a bank account. You have to have social all these kinds of things. So credit cards don't work as money for AIs. So the logical thing, the Internet native money is crypto.
It's a bearer instrument. You can use it and so forth. And we've already actually seen new AI banks that are crypto based, where AIs can kind of get KYC'ed and that kind of stuff. That's, sorry, know your customer, anti-money laundering laws, these kinds of things. So crypto is kind of like the economic network for AI.
It's also if you think about things like bots, how do you know something's a human? Crypto's the answer for, like, proving that you're a human being. Crypto turns out to be the answer for provenance. So, like, is this a deep fake? Like, is this really me, or is this like a fake video of me?
How do I verify that it's actually me? And then if I verify that it's actually me, where should that registry of truth live? Should we trust the US government on what's true? Should we trust Google on what's true?
Or should we trust the game-theoretic mathematical properties of a blockchain on what's true? So it's a very kind of valuable piece of infrastructure. And then finally, one of the things that AI is best at, and probably the biggest security risk that nobody talks about, is just breaking into stuff. It's really, really good at it.
And not just breaking into things technologically, but also social engineering and that kind of stuff. It's amazing. And so the current architectures of where your data is and where your information is and where your money is are kind of not well suited for an AI world. They're just giant honey pots of stuff to steal for somebody who uses the AI. And the right architectural answer to that is a public key infrastructure, where you keep your data yourself and then you deliver a zero-knowledge proof.
Yes, I'm creditworthy, but you don't have to see my bank account information to know that I'm creditworthy. I'm not gonna give you that. I'm just gonna prove to you that I'm that. And that's a crypto solution. So it ends up being like a very, very interesting technology in an AI world.
And I think that's where a lot of the new developments are gonna be.
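Here is a minimal sketch of the provenance idea (my illustration, not anything Ben describes in detail): the creator signs a hash of the content with a private key, and anyone holding the matching public key can check that the content really came from them and wasn't altered. Where that public key registry lives, with a government, a company, or a blockchain, is exactly the trust question raised above. The code assumes the third-party `cryptography` package and is illustrative only.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """The creator signs a hash of the content, producing a provenance tag."""
    return private_key.sign(hashlib.sha256(content).digest())

def verify_content(public_key: Ed25519PublicKey, content: bytes, signature: bytes) -> bool:
    """Anyone with the creator's public key can check the content is authentic and unmodified."""
    try:
        public_key.verify(signature, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    creator_key = Ed25519PrivateKey.generate()
    video = b"original video bytes"
    tag = sign_content(creator_key, video)

    print(verify_content(creator_key.public_key(), video, tag))              # True
    print(verify_content(creator_key.public_key(), b"deepfake bytes", tag))  # False
```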
Good. Two, three more questions, and then we'll open it up a little bit to the audience. We started by talking about Anthropic at $183,000,000,000; OpenAI may be closing at half a trillion. What's happening with the venture capital industry? And are traditional models changing for seed stage, etcetera?
Why have we seen that change? I would say we started seeing it with the Ubers and Airbnbs, and now it has gone even further.
Yeah. So I think what's happened is, and this is another regulatory thing. No good deed goes unpunished, I would say. So if you go back to the '90s, it shows you how old I am. If you go back to the '90s, in those days, companies went public.
Amazon went public, I think, with a $300,000,000 valuation. When we went public at Netscape, the quarter prior to when we went public, we had $10,000,000 in revenue and so forth. And then what happened was kind of a series of regulatory steps. And some of them are so obscure that you'd never know about them, things like order handling rules, decimalization, Reg FD, just like a series of regulatory things, Sarbanes Oxley, many of which came after the great dot com crash and telecom crash. And the result of those things is going public became very, very onerous and very difficult.
So you definitely couldn't do it at a $300,000,000 valuation, because, one, the cost of being public, just in terms of lawyers, accounting, D&O insurance, and so forth, was so high it'd be a massive percentage of your revenue. So that's thing one. Then secondly, because of the way things like Reg FD changed, there's this kind of asymmetric situation between the company and the short sellers. So the short sellers became much, much more powerful, because they were able to do things to manipulate the stock where a company could no longer defend itself in the way it used to be able to defend itself. And so that made it more dangerous.
And then of course you get sued like crazy. So all that happened and made kind of companies stay private longer. And then the result of companies staying private longer was that the private market capital markets massively developed. So all of these huge money pools started putting money into the private markets. And so what does that mean?
Well, it means that, okay, look, if OpenAI can raise $30,000,000,000 in the private markets, what is the value of being public? That you can get sued more, that you have to do an earnings call every quarter? The trade-off becomes a bad trade to go public. And that's kind of where we are today. I think, look, for the good of the country, the best answer is we fix the public markets.
But in the meanwhile, what's happened is, as a venture capital firm, you kind of have to expand your capabilities all the way up into the very, very high end of the markets and really take over a lot of the role that investment banks have previously had. And that's just kind of been what's happened. We'll see where it goes. I think right now it's on track to continue. I think the other underlying thing in your question is, how in the hell is Anthropic worth so much money?
And look, I think that the answer to that is these products... the biggest takeaway from the AI products is how well they work. So OpenAI went to $10,000,000,000 in revenue in four years, and we've never seen anything like that. And when you look at that, you say, well, why is that? And it's like, well, how well does ChatGPT work? It works awesome, way better than other technology products you bought in the past.
The stuff works really well. Cursor works unbelievably well. And so I think that because the products work so much better than anything that we've had in the past, they grow much faster. And as a result of them growing much faster, the valuations grow much faster. But the numbers are there to justify the valuations in a way that, in the dot-com era, they weren't.
So it's a different phenomenon. Now, in AI land, if there's another big breakthrough in AI, then somebody could get a dramatically better product, and then the valuations aren't sustainable and so forth. But that's very theoretical compared to the dot-com era. I could go on for days about what exactly happened during the dot-com era, but this isn't the same. It may have issues, but they're not the same issues.
So there were at least two students that brought books of yours for you to sign when we stepped in. So you wrote the book, The Hard Thing About Hard Things.
Yeah, ain't nothing easy.
Yeah. And for the many MBA students in the audience, what's one of the sort of counterintuitive hard things that you think about and people need to know about?
And there are so many things. Actually, my friend Ali Ghodsi, who runs Databricks, brought one up a couple of days ago: Ben, one of the best things you told me was, I can't develop my people. Which I thought was like, oh, wow, I said that. But I actually had written a post on it, and it's kind of a CEO thing that's not true for managers. And let me explain what I mean by that. If you're a manager, a product manager or an engineering manager or this kind of thing, you know exactly how to do the job that you hire people into.
And so you can develop them, you can train them, you can teach them to be a better engineer, a better engineering manager, or a better accountant or whatever it is. But as CEO, you're hiring, like, a CFO, a head of HR, a head of marketing, and you probably don't know how to do any of those jobs. So if they're not doing a good job and you're spending your time developing them, but you don't know how to do that job, what are you doing? And the bigger problem is, one, you're not gonna improve them, because you don't know what you're doing. And then secondly, you're taking time away from what you need to be doing.
If you think about what the CEO needs to do, they have to set the direction for the company. They've got to articulate that. They've got to make sure the company is organized properly. They've got to make sure the best people are in place. They have to make decisions that only they can make.
And if they don't make them, then the entire company slows down. So if you're not doing that and trying to develop someone who you have no idea how to develop, that's just a huge mistake. And it was a very sad lesson for me. In fact, I wrote a post on it called the sad truth about developing executives. And I think the rap quote that I used was from Weezy. And it was, The truth is hard to swallow and hard to say too.
Now I graduated from that bullshit and I hate school. And that's how I feel about that lesson. I just hate the fact that I learned it, but it's very true.
In another book that you wrote, What You Do Is Who You Are, you focus on culture. This is something that we speak a lot about here as well. What are, in some sense, some of the things that people need to be thinking about? How do they set culture, how do they influence culture within their organizations, the importance of that, and how have you actually put it to work in your own organization?
Yeah. So I think that the biggest mistake people make on culture is they think of it as this very abstract thing. And my favorite quote on this is from the samurai, from Bushido, where they say, look, a culture is not a set of beliefs, it's a set of actions. And when you think about it in the organizational context, that's the way you have to think about it. So people go, oh, well, culture is integrity, or we have each other's backs, or this. And it's like, right.
Everybody can interpret that however they want. So your culture is probably hypocrisy, if that's how you define it, because nobody's doing that. And by the way, the whole thing with these kinds of virtues, I would just call them, is you only actually break them under stress. So it's like, how many of you think you're honest, like you're an honest person? Okay, now think about how many people you know who you would consider to be totally honest.
I bet it's a way lower percentage than the people who raised their hand. And why is that? It's because... everybody's honest until it's gonna cost you something, right? Oh, are you gonna be honest if it's gonna cost you your job? Are you gonna be honest if it costs you your marriage?
Are you gonna be honest in that situation? That's a whole nother thing, right? And so honesty, all the virtues, are like that. They're only kind of tested under stress. And so you can't just define the ideal of something you want.
You have to define the exact behavior. Like, how do you want people to show up every day? Because culture is a daily thing. It's not a quarterly thing; you don't put it in the annual review, like, do you follow the culture?
It's like, well, yeah, sure. I mean, who even knows how to evaluate it at that time? So it's what you do every day. And so you wanna think about: what are the behaviors that indicate the thing that you want? So I'll give you one example from our firm. One of the difficult things that we really wanted to do as a venture capital firm is, like, let's be very, very respectful of the people building the companies, the entrepreneurs, and never kind of make them feel small in any way.
And every venture capital firm would say, You wanna do that. But the problem with venture capital is I have the money, you have an idea, you come to me to get the money, I decide whether you get the money or not. So if that's my daily thing, then I might feel like the big person and I might wanna make you feel like the small person. Like, no, I don't think that's a good idea. And so like, how do you stop that?
So we put in a thing. Like, I can tell people not to do that, but there's all this kind of other incentive that's making them do that. So what I said is, like, if you're ever late to a meeting with an entrepreneur, it's a $10-a-minute fine. I don't care if you had to go to the bathroom, I don't care if you were on an important phone call; if you're five minutes late, you owe me $50 right now and you pay on the spot. Why did I do that? Well, because I want you to think that nothing is more important in your job or in your day than being on time for that meeting with that entrepreneur, because what they're doing is extremely hard and you have to respect that, and you have to respect it by showing up on time.
And I don't care what your excuse is. If you were getting married, you wouldn't go to the bathroom and be late to the altar. So I know you can do it. So don't give me that. And that programs people, right?
Because every day you're meeting with entrepreneurs, you know, okay, this is what we're about. We gotta do that. Similarly, on that, I'm like, look, if somebody wants to do something larger than themselves and make the world a better place, we're for that. We're dream builders. We're not dream killers.
So if you get on X and say, oh, that's a dumb idea that they're selling dollars for 85¢, da da da, you're fired. That's it. Gone. I don't care. Because we don't do that.
And so you put in rules that seem maybe absurd, but they set the... it's a cultural marker for, like, okay, this is who we are. And if you wanna come work here, you gotta be like that. And so that's a little bit of how you think about culture. I wrote a whole book on it. If you're interested in this, there are many other aspects.
But I think the worst thing you can do is just go have an off-site and yabba dabba doo about the values that you all have and write up a bunch of flowery language about how you're like this.
Okay. I promise we'll end on time, so I think we're gonna end here. Ben, thank you so much for coming in.
Thank you.
Thanks for listening to the a16z podcast. If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/a16z. We've got more great conversations coming your way. See you next time. This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product.
This podcast has been produced by a third party and may include paid promotional advertisements, other company references, and individuals unaffiliated with a16z. Such advertisements, companies, and individuals are not endorsed by AH Capital Management, L.L.C., a16z, or any of its affiliates. Information is from sources deemed reliable on the date of publication, but a16z does not guarantee its accuracy.