AI is reshaping the tech landscape, but a big question remains: is this just another platform shift, or something closer to electricity or computing in scale and impact? Some industries may be transformed…
ChatGPT has got 800 or 900 million weekly active users. And if you're the kind of person who is using this for hours every day, ask yourself why five times more people look at it, get it, know what it is, have an account, know how to use it, and can't think of anything to do with it this week or next week. The term AI is a little bit like the term technology. When something's been around for a while, it's not AI anymore. Is machine learning still AI?
I don't know. In actual general usage, AI seems to mean new stuff.
And AGI seems to mean new, scary stuff.
AGI seems to be a little bit like this. Like, either it's already here and it's just more software, or it's five years away and will always be five years away. We don't know the physical limits of this technology, and so we don't know how much better it can get. You've got Sam Altman saying, we've got PhD-level researchers right now. And Demis Hassabis says, what? No.
We don't. Shut up. Very new, very, very big, very, very exciting, world-changing things tend to lead to bubbles. So, yeah, if we're not in a bubble now, we will be.
Is AI just another platform shift or the biggest transformation since electricity? Benedict Evans, technology analyst and former a16z partner, has spent years studying waves like PCs, the Internet, and cell phones to understand what actually changed and who captured the value. Now he's turned that same lens on AI, and the picture is far more complex than benchmarks or hype cycles suggest. Some industries may be rewritten from the ground up. Others may barely notice.
Tech giants like Google, Meta, Amazon, and Apple are racing to reinvent themselves before someone else does. Yet for all the excitement, most people still struggle to find something they truly need AI for every single day, a disconnect Benedict thinks is an important signal about where we really are in the curve. In today's episode, we get into where bottlenecks emerge, why adoption looks the way it does, what kinds of products still haven't shown up, and how history can actually guide us here. And finally, what would have to happen over the next few years for us to look back and say AI wasn't just another wave, it was bigger than the Internet.
Benedict, welcome back to the a16z Podcast.
Good to be back.
We're here to discuss your latest presentation, AI Eats the World. So for those who haven't read it yet, maybe we can share the high level thesis and maybe contextualize it in light of recent AI presentations. I'm curious how your thinking has evolved.
Yeah. It's funny. One of the slides in the deck references a conversation I had with a big-company CMO who said, we've all had lots of AI presentations now. We've had the Google one and the Microsoft one. We've had the Bain one and the BCG one.
We've had the one from Accenture and the one from our ad agency. So now what? So there are 90-odd slides, and there's a bunch of different things I'm trying to get at. One of them is, I think, just to say, well, if this is a platform shift, or more than a platform shift, how do platform shifts tend to work?
What are the things that we tend to see in it? And how many of those patterns can we see being repeated now? And of course, some of the patterns that come out of that are things like bubbles, but others are that lots of stuff changes inside the tech industry. And there are winners and losers, and people who were dominant end up becoming irrelevant. And then there are new billion, trillion dollar companies created.
But then there's also what does this mean outside the tech industry? Because if we think back over the last waves of platform shifts, there were some industries where this changed everything and created and uncreated industries. There are others where this was just kind of a useful tool. So, you know, if you're in the newspaper business, the last thirty years look very different to if you were in the cement business, where the Internet was just kind of useful but didn't really change the nature of your industry very much. And so what I tried to do is give people a sense of, well, what is it that's going on in tech?
How much money are we spending? What are we trying to do? What are the unanswered questions? What might or might not happen within the tech industry? But then outside technology, how does this tend to play out?
What seems to be happening at the moment? How is this manifesting into tools and deployment and new use cases and new behaviors? And then, as we kind of step back from all of this again, how many times have we been through all of this before? It's funny. I went on a podcast this summer, and my sort of opening line was something like, well, I'm a centrist.
I think this is as big a deal as the Internet or smartphones, but only as big a deal as the Internet or smartphones. And there are, like, 200 YouTube commenters under it saying, this moron, he doesn't understand how big this is. And I think, well
Those are
pretty big. It was kind of a big deal. And, you know, I sort of finish the deck by looking at elevators, because I live in an apartment building in Manhattan, and we have an attended elevator, which means there are no buttons. There's an accelerator and a brake, and the doorman gets in and drives you to your floor, like a streetcar.
And in the fifties, Otis deployed automatic elevators, and then you get in and you press a button. And they marketed it by saying it's got electronic politeness, which meant the infrared beam. And today, when you get into an elevator, you don't say, ah, I'm using an electronic elevator, it's automatic. It's just a lift. Which is what happened with databases and with the web and with smartphones.
And I kind of think the same thing is happening now. It's funny. I've done a couple of polls on this on LinkedIn and Threads. So, is machine learning still AI? The term AI is a little bit like the term technology or automation. It only really applies when something's new.
When something's been around for a while, it's not AI anymore. So a database certainly isn't AI anymore. Is machine learning still AI? I don't know. And there's obviously, like, the academic-definition crowd, where people say, this guy's an idiot.
Now, of course, I'm going to explain the definition of AI to you. But in actual general usage, AI seems to mean new stuff.
Yeah. And AGI seems, you know, like new, scary stuff.
Yeah. It's funny. I was thinking about this. There's an old theologians' joke that the problem for Jews is that you wait and wait and wait for the Messiah, and he never comes. And the problem for Christians is that he came and nothing happened.
You know? The world didn't change. There was still sin. For all practical purposes, nothing happened. And AGI seems to be a little bit like this.
Like, either it's already here, and so you've got Sam Altman saying, we've got PhD-level researchers right now. And Demis Hassabis says, what? No. We don't. Shut up.
And so either it's already here and it's just more software, or it's five years away and will always be five years away.
Yeah. Let's compare back to previous platform shifts, because some people look at, you know, something like the Internet and say, hey, there were net new trillion-dollar companies, Facebook and Google, that were created from it, and all sorts of new emerging winners. Whereas they look at something like mobile and say, hey,
there were big companies like Uber and Snap and Instagram and WhatsApp, but these were billion-dollar outcomes or tens-of-billions outcomes. Really, the big winners were in fact Facebook and Google. And so in some sense, mobile perhaps was sustaining. Feel free to quibble with the definitions of sustaining and disruptive, but sustaining in the sense that maybe more of the value went to incumbents, the companies that existed prior to the shift. I'm curious how you think about AI in light of that. Are more of the gains going to go to net new companies like OpenAI and Anthropic and others that follow, or are more of the gains gonna be captured by Microsoft and Google and Meta and companies that existed prior?
So I think there are several answers to this. One of them is, you kind of have to be careful about framings and structures, because you end up arguing about the framing and the definition rather than arguing about what's gonna happen. They're all useful, but they've all got holes in them. And, you know, there's a bunch of things that mobile changed fundamentally. It shifted us from the web to apps, for example.
And it gave everybody in the world a pocket computer. So even today, there are less than a billion consumer PCs on earth, and there are something between five and six billion smartphones. And it made possible things that would not have been possible without it, whether that's TikTok or, arguably, I think, things like online dating. And you can map those against dollar value. You can also map those against kind of structural change in consumer behavior and access to information.
And I think you could certainly argue that Meta would be a much smaller company if it wasn't for mobile, for example. So you can argue the puts and calls on this stuff a lot. Certainly not all platform shifts do the same thing. And, you know, you can do the sort of standard teleology of saying, well, there were mainframes and then PCs and then the web and then smartphones. But you kind of wanna put SaaS in there somewhere, and you kind of wanna put open source in there, and maybe you wanna put databases.
And so these are kind of useful framings, but they're not predictive. They don't tell you what's gonna happen. They just give you one way of understanding some of the patterns that we have here. And of course, the big debate around generative AI is: is this just another platform shift, or is it something more than that? And of course, the problem is we don't know, and we don't have any way of knowing other than waiting to see.
So this may be as big as PCs or the web or SaaS or open source or something, or maybe as big as computing, and then you've got the very overexcited people living in group houses in Berkeley who think this is as big as fire or something. Well, great. But does this create new companies? I mean, you go back to mobile, there was a time when people thought that blogs were going to be different to the web, which seems weird now. Like, Google needed a separate blog search. Seriously, this was a thing.
There was a time when it was really not clear, and I think you can kind of generalize this point. You go back to the Internet in the mid nineties. We kind of knew this was gonna be a big thing. We didn't really know it was gonna be the web. And before that, we didn't know it was gonna be the Internet.
We knew there were gonna be networks; it wasn't clear it was gonna be the Internet. Then it wasn't clear it was gonna be the web. Then it wasn't really clear how the web was gonna work. And when Netscape launched, Mark Zuckerberg was in junior high or elementary school or something, and Larry and Sergei were students, and Amazon was a bookstore.
So you can know it but not know it, and you could make the same point about smartphones. Like, we knew everyone was gonna have an Internet-connected thing in their pocket, but it was not clear it was basically going to be won by a has-been PC company from the eighties and a search engine company. It was not clear it wasn't gonna be Nokia or Microsoft. So I think you have to be super careful in making kind of deterministic predictions about this. What you can do is say, well, when this stuff happens, everything changes.
And that's happened five or 10 times before.
I'm curious how you got conviction in this prediction that, hey, AI is gonna be as big as the Internet, which, of course, is pretty big, but you're not yet at the conviction that it's going to be any bigger. What inspires that sort of statement? And what might change your mind either way, that it might not be as big as the Internet, or that perhaps it might be bigger?
Well, so I think, you know, I made a diagram of kind of S-curves going up the slide, and someone said, well, what's the axis on this diagram? I don't wanna get into, is this 5% bigger than the Internet, or is it 20% bigger? I think the question is more like, is it another of these industry cycles, or is it a much more fundamental change in what technology can be? Is it more like computing or electricity, a sort of structural change, rather than, here's a whole bunch more stuff we can do with computers? I think that's the question.
And there's a funny sort of disconnect, I think, in looking at debates about this within tech, because, you know, I watched one of the OpenAI live streams a couple of weeks ago. And they spend the first twenty minutes talking about how they're gonna have, like, human-level, PhD-level AI researchers, like, next year. And then the second half of the stream is, oh, and here's our API stack that's going to enable hundreds of thousands of new software developers, just like Windows, and in fact, they literally quote Bill Gates. And you think, well, those can't both be true. Like, either I've got a thing which is a PhD-level AI researcher, which by implication is, like, a PhD-level CPA.
Yeah. Or I've got a new piece of software that does my taxes for me. And, well, which is it? Either this thing is going to be, like, human level, and that's a very, very challenging, problematic, complicated statement, or this is going to let us make more software that can do things that software couldn't do before. And I think there's a real, like, schizophrenia in conversations around this.
Because, on one side, it's scaling laws, it's gonna scale all the way, and meanwhile, it's, hey, look how good it is at writing code. And, again, well, is it writing code, or do we not need software anymore? Because in principle, if the models keep scaling, nobody's gonna write code anymore.
You'll just say to the model, like, hey. Can you do this thing for me?
Yeah. Is it a little bit of a hedge or, like, a sequencing thing? Or
Well, some of it's a sequencing thing. But, you know, in principle, if you think this stuff is gonna keep scaling, like, why are you investing in a software company?
Yeah.
Like, because, you know, people will just have this, like, god in a box that can do everything. Right. And I think this is the kind of funny challenge, and this is, I think, the fundamental way that this is different from previous platform shifts. With the Internet, or with mobile, or indeed with mainframes, you didn't know what was gonna happen in the next couple of years. You didn't know what Amazon would become, and you didn't know how Netscape was gonna work out, and you didn't know what next year's iPhone was gonna be, back ten years ago when we cared about that. But you kind of knew the physical limits.
Like, you knew in 1995 that telcos were not gonna give everybody gigabit fiber next year. And you knew that the iPhone wasn't gonna, like, have a year's battery life and unroll and have a projector and fly or whatever. But we don't know the physical limits of this technology, because we don't really have a good theoretical understanding of why it works so well, nor indeed do we have a good theoretical understanding of what human intelligence is. And so we don't know how much better it can get. So you could do a chart and say, well, this is a roadmap for modems, and this is a roadmap for DSL, and this is how fast DSL will be.
And then you can make some guesses about how quickly telcos will deploy DSL. And then you can say, well, clearly, we're not gonna be able to replace broadcast TV with streaming in 1998. But we don't have an equivalent way of modeling this stuff, of knowing what the fundamental capability is going to look like in three years, which gets you to this kind of slightly vibes-based forecasting, where no one really knows. So, you know, Geoff Hinton says, well, I feel like. And Demis Hassabis says, well, I feel like. But no one knows.
And then Karpathy goes on Dwarkesh's podcast and says, I feel like it's a decade out.
Yeah. I know. Well, I saw this meme of, what's his name, Ilya Sutskever. When he says, like, the answer will reveal itself, somebody memed it, I was gonna say photoshopped, but of course it wouldn't have been Photoshop, and turned him into a Buddhist monk wearing, like, an orange outfit.
Like, the future will reveal itself. But this is the problem. We don't know. We don't have a way of modeling this.
Yeah. And so let's connect this to the upfront investment that some of these companies are making. Because we don't know, is there a risk of overinvestment leading to some potential bubble-like mechanics? How do you think about that question?
Well, deterministically, very new, very, very big, very, very exciting, world-changing things tend to lead to bubbles. Yeah. And I don't think anybody would dispute that you can see some bubbly behavior now. You can argue about what kind of bubble, but, again, that doesn't have very much predictive power. And, you know, one of the features of bubbles is that everything goes up all at once, and everyone looks like a genius, and everyone leverages and cross-leverages and does circular revenue, and that's great until it's not. And then you get a kind of ratchet effect as it goes back down again.
So, yeah, if we're not in a bubble now, we will be. I remember Marc Andreessen saying, you know, 1997 was not a bubble. '98 was not a bubble. '99 was a bubble. Are we in '97 now, or '98, or '99?
You know, if we could predict that, we'd live in a parallel universe. There are, I suppose, maybe two more specific, more tangible answers to this. The first of them is, we don't really know what the compute requirements of this stuff are going to be, and we don't have a way of forecasting that except, like, "more." And forecasting that feels a lot like trying to forecast bandwidth use in the late nineties.
Imagine if you were trying to do the algebra on that. You'd say, well, this many users. How much bandwidth does a web page use? How will that change? How will that change if bandwidth gets faster?
What happens with video? What kind of video? What bit rate of video? How long do people watch a video? How much video?
And then you could build the spreadsheet, and it would tell you what global bandwidth consumption would be in ten years, and then you could try and use that to back-calculate how many routers this is gonna sell. And you could get a number, but it wouldn't be the number. You know? There'd be a hundredfold range of possible outcomes from that. And you could make the same point about the algebra of compute consumption now.
So, you know, right now, we have a bunch of rational actors saying, well, this stuff is transformative and a huge threat, and we can't keep up with demand for it now. And as far as we know, the demand is going to keep going up. And we've had a variety of quotes from all of the hyperscalers basically saying the downside of not investing is bigger than the downside of overinvesting. That kind of thing always works well until it doesn't. Yeah.
And I saw a slightly strange quote from Mark Zuckerberg saying, well, if it turns out that we've overinvested, we can just resell the capacity. And I thought, let me just, like, stop you there, Mark. Because if it turns out that you can't use your capacity, everybody else will have loads of spare capacity as well. Yeah. All these people now who are desperate for more capacity, if it turns out we can get the same results for a hundredth of the compute, that will be true for everyone else too, not just you.
Yeah. So, yeah, in an investment cycle like this, you tend to get overinvestment. But beyond that, there are very limited predictions you can make about what's going to happen. I think the more useful way to look at this is to think, well, you've got these kind of transformative capabilities that are already increasing the value of your existing products, if you're Google or Meta or Amazon. And you're going to be able to use them to build a bunch more stuff.
And why would you want to let somebody else do that rather than doing it yourself, as long as you're able to keep funding and selling what you're building? Yeah. And it may well turn out that we have an evolution of models in the next year that means you can get the same result for a hundredth of the compute that you're using today, bearing in mind that the cost is already going down, pick your numbers, twenty, thirty, forty times a year. Yeah. But then the usage is going up.
So, as I said, it's like trying to predict bandwidth consumption in the late nineties, early two thousands. You can throw all the parameters in, but it doesn't get you to something useful. You just kind of have to step back and say, yeah, but is this Internet thing any good?
Well, yeah, I'm curious whether you see the bottlenecks as more on the supply side or the demand side. More technical constraints, or just, is it any good? Are there enough use cases to justify this type of spend? What are you seeing, and what are you predicting?
So, maybe two answers to this question. The first of them is, I think we've had a sort of bifurcation of the questions. So there are now very, very detailed conversations about chips, and then very, very detailed conversations about data centers, and about funding for data centers, and then about, what is a new enterprise SaaS company built on AI? What margins will it have? And how much money does it need to raise?
And so there are venture capital conversations. There are many different conversations, within which, like, I don't know anything about chips. You know, I can spell ultraviolet, but I don't know what an ultraviolet process is. It's, like, more violet? I don't know.
And so you've got, you know, it's like the Milton Friedman line: no one knows how to make a pencil. I think a second answer might be, there are two kinds of generative AI deployment. One of them is, there are places where it's very easy and obvious right now to see what you would do with this, which is basically software development, marketing, point solutions for many very boring, very specific enterprise use cases, and also basically people like us, who have kind of very open, very free-form, very flexible jobs with many different things going on, and who are always looking for ways to optimize that.
Yeah. And so you get people in Silicon Valley who are like, you know, I spend all my time in ChatGPT. I don't use Google anymore. I've replaced my CRM with this. And then obviously, if you're writing code, this works really well, and if you're in marketing.
You know, all these stories of big companies where they're making 300 assets where they would have made 30. And then Accenture and Bain and McKinsey and Infosys and so on sitting and solving very specific problems inside big companies. Then there's a whole bunch of other people who look at it and they're like, it's okay. And you go and look at the usage data and you see, okay, ChatGPT has got 800 or 900 million weekly active users. 5% of people are paying.
And then you go and look at all the survey data, and, you know, it's very fragmented and inconsistent, but it all sort of points to something like 10 or 15% of people in the developed world using this every day. Another 20 or 30% of people are using it every week. And if you're the kind of person who is using this for hours every day, ask yourself why five times more people look at it, get it, know what it is, have an account, know how to use it, and can't think of anything to do with it this week or next week. Why is that? Yeah.
Is it because it's early? And it's not like a young people thing either, incidentally. And so is that just because it's early? Is it because of the error rates? Is it because you have to map it against what you do every day?
And one analogy I always used to use, which isn't in the current presentation but has been in previous ones, is: imagine you're an accountant and you see a software spreadsheet for the first time. This thing can do a month of work in ten minutes, almost literally. Yeah. You wanna recalculate that ten-year DCF with a different discount rate? I've done it before you've finished asking me to, and that would have been like a day or two days or three days of work to recalculate all those numbers.
Great. Now imagine you're a lawyer and you see it. And you think, well, that's great. My accountant should see it. Maybe I'll use it next week when I'm making a table of my billable hours, but that's not what I do all day.
And Excel doesn't do things that a lawyer does every day. And I think there's this other class of person that's like, I'm not sure what to do with this. And some of that is habit. Some of that is, like, realizing, no, instead of doing it that way, I could do it this way.
But that's also what products are. Like, every entrepreneur who came into a16z when I was there, from 2014 to 2019, and I'm sure now, you could look at any company that comes in and say, that's basically a database. That's basically a CRM. That's basically Oracle or Google Docs. Except that they realized there's this problem or this workflow inside this industry, and worked out how to use a database or a CRM, basically concepts from five, ten, twenty years ago, to solve that problem for people in that industry, and go in and sell it to them, and work out how to get them to use it. And so this is why, you look at data on this, and depending on how you count it, the typical big company in the US today has 400 to 500 SaaS apps.
400 to 500 SaaS applications, and they're all basically doing something you could do in Oracle or Excel or email. Yeah. I'm monologuing, I'm afraid. But this is the other side of what you do with these things.
Do you just go to the bot and ask it to do a thing for you? Or does an enterprise salesperson come to your boss and sell you a thing that means now you press a button and it analyzes this process that you needed, that you never realized you were even doing?
Yes.
And I feel like that's, I mean, that's why there are AI software companies, right? Isn't that what they're doing? They're unbundling ChatGPT just as the enterprise software company of ten years ago was unbundling Oracle or Excel.
Do you have the view that what Excel did for accountants, AI is now doing for coders and developers, but it hasn't quite figured out that sort of daily, critical workflow for other job positions? And so it's unclear, for people who aren't developers, why they should be using this for many hours a day. Or
I think there's a lot of people who don't have tasks that work very well with this. Yeah. And then there's a lot of people who need it to be wrapped in a product and a workflow and tooling and UX, and someone to come and say, hey, have you realized you could do it with this? I had this conversation in the summer with Balaji, who's another former a16z person.
And he was making this point about validation, because these things still get stuff wrong, and people in the Valley often kind of hand-wave this away. But, you know, there are questions that have specific answers, where it needs to be the right answer or one of a limited set of right answers. Can you validate that mechanistically? If not, is it efficient to validate it with people? So, with the marketing use case, it's a lot more efficient to get a machine to make you 200 pictures and then have a person look at them and pick 10 that are good than to have people make 10 good images. Even if you're gonna make 500 images and pick 100 that are good, that's a lot more efficient than having a person make 100 images.
But on the other hand, if you're doing something like data entry, and I wrote something about this when OpenAI launched Deep Research. Their whole marketing case is that it goes off and collects data about the mobile market. I used to be a mobile analyst. The numbers are all wrong.
Their use case of look how useful this is, and their numbers are wrong. In some cases, they're wrong because they've literally transcribed the number incorrectly from the source. In other cases, they're wrong because they've used a source that they shouldn't have used. But if I'd asked an intern to do it for me, the intern would probably have picked that up. And to the point about verification, if I'm gonna ask a machine to copy 200 numbers out of 200 PDFs, and then I'm gonna have to check all 200 of those numbers, I might as well just do it myself.
Yeah. So you've got, like, a whole swirling matrix of how you map this against existing problems. But the other side of it is, how do you map this against new things that you couldn't have done before? And this comes back to my point about platform shifts, because I see people looking at ChatGPT or looking at generative AI and saying, well, this is useless because it makes mistakes. And I think that's kind of like looking at an Apple II in the late seventies and saying, could you use these to run banks?
To which the answer is no. But that's kind of the wrong question.
Right.
Like, could you do professional video editing inside Netscape? No. But that's the wrong question. Right. And later, yeah,
twenty years later, you can. But meanwhile, it does a whole bunch of other stuff. The same with mobile. Like, can you use mobile to replace, you know, your five-screen professional programming rig? No. Therefore, it can't replace PCs.
Well, guess what? Five billion people have got a smartphone, and seven or eight hundred million people have got a consumer PC. So it kind of did, yeah, but by doing a different thing. And the point of this is, like, the new thing, this is, you know, the disruption framing you mentioned earlier.
The new thing is generally not very good, or terrible, at the stuff that was important to the old thing, but it does something else. Right. And a lot of the question is, okay, there's a class of old tasks that generative AI is good at. There are also many more old tasks that generative AI is maybe not very good at.
But then there's a whole bunch of other things that you would never have done before that generative AI is really, really good at. And then how do you find those or think of those? And how much of that is the user thinking of it faced with a general purpose chatbot? How much of that is the entrepreneur saying, hey. I've just realized that there's this thing that I can do that you couldn't do before, and here you are.
I've given you a product with a button that will do it for you.
Right.
And that's why there are software companies.
Right. And on mobile, some of the new use cases: getting in strangers' cars, you know, we mentioned Lyft and Uber, or dating people you met via an app, or lending your spare bedroom out, etcetera. And those were net new companies that were built around those behaviors. And I think, for AI, there's still a question of what are those net new behaviors?
We're starting to see some in terms of people engaging and talking with chatbots instead of humans, or in addition. And then there's a question of, hey, are these done by the model providers that currently exist, or are these done by net new companies, both on the enterprise and the consumer side?
Well, the question is always how far up the stack does a new thing go. Yeah. And I was talking about this with another former a16z person, who pointed out that, like, in the mid-nineties, people kind of argued that, well, the operating system does all of it, and the Windows apps are basically just thin Win32 wrappers. Yeah.
And, you know, Office is basically just a thin Win32 wrapper. Like, all the important stuff is being done by the OS, whether it's the document management and printing and storage and display, which is all stuff that used to be done by apps. Like, on DOS, the apps had to do printing. The apps had to manage the display. Move to Windows, and, like, 90% of the stuff that the app used to do is now being done by Windows.
Yeah. And so Office is just a thin Win32 wrapper, and all the hard stuff is being done by the OS. And it turns out, well, again, frameworks are useful, but that's maybe not a useful way of thinking about what's going on. And the same thing now: how much does this need dedicated understanding of how that market works, what that market is, and what you would do with it? I mean, I remember when we were at a16z, there was an investment in a company called Everlaw, which is legal discovery in the cloud.
Yeah. And so machine learning happens, and now they can do translation. Are they worried that lawyers are gonna say, well, we don't need you guys anymore; we're just gonna go get a translation app and a sentiment analysis app from AWS? No.
That's not how law firms work. Law firms wanna buy a thing that solves this. They wanna buy legal discovery software. They don't wanna go out and write their own API calls. I mean, very, very big law firms might, but a typical law firm isn't gonna do that. People buy solutions.
They don't buy technologies. And the same thing here, like how far up the stack do these models go? How much can you turn things into a widget? How much can you turn things into an LLM request? And how much now does it turn out that you need that dedicated UI?
The funny thing is you can see this around Google, because Google had this whole idea that everything would just be a Google query, and Google would work out what the query was. And guess what? Now you've got Google Flights, which is not a Google query. You hit a certain point. And one of the interesting things about this, and I think it's interesting to think about what a GUI is doing: the obvious thing that a GUI is doing is that it enables Office to have 500 features, and you can find them all.
Or at least you don't have to memorize keyboard commands. You can now have effectively infinite features, and you can just keep adding menus and dialogue boxes. Eventually you run out of screen space for dialogue boxes, but you can have hundreds of features without people needing to memorize keyboard commands. The other side of it is, you're in that dialogue box, or you're in that screen in that workflow in Workday or Salesforce or whatever the enterprise software is, or the airline website or Airbnb or whatever it is.
There aren't 600 buttons on the screen. There are seven buttons on the screen, because a bunch of people at that company have sat down and thought: what is it that the users should be asked here? What questions should we give them? What choices should there be at this point in the flow? And that reflects a lot of institutional knowledge and a lot of learning and a lot of testing, a lot of really careful thought about how this should work.
And then you give somebody a raw prompt, and you just say, okay, you tell the thing how to do the thing. And you've kind of gotta shut your eyes, screw your eyes up, and think from first principles: how does all of this work? It's kinda like, well, I always used to talk about machine learning as giving you infinite interns. Imagine you've got a task and you've got an intern, and the intern doesn't know what venture capital is.
How helpful are they gonna be?
And they don't know that companies publish quarterly reports, and that we've got a Bloomberg account that lets us look up multiples, and that you should probably use PitchBook for this data rather than using Google. This is my point about deep research. Like, no, you should use this source and not that source. Do you want to have to work that out from scratch, or do you want a bunch of people who know a lot about this stuff to have spent five years working out what the choices should be on the screen for you to click on?
I mean, it's the old user interface saying: the computer should never ask you a question that it should be able to work out by itself. You go to a blank, raw chatbot screen, and it's asking you literally everything. It's not just asking you one question. It's asking you absolutely everything about what it is that you want and how it's gonna work out how to do it.
And so, you know, you wrote about how ChatGPT isn't so much a product as a chatbot disguised as a product. I'm curious: when we look back at this platform shift, do you think there will be another iPhone-esque or Excel-esque product that defines the platform shift in a way that ChatGPT won't? Or is it that the world has to catch up to how to use ChatGPT, or something like ChatGPT?
So both of these both of these can be true because there was a lot of like, it took time to realize how you would use Google Maps and what you could do with Google and how you could use Instagram. And all of these products have evolved a huge amount over time. So some of it is, like, you grow towards realizing what you could do with this. Like, you realize that's just a Google query now. You realize that you could just do it like that.
And you realize, I spent hours doing this, and I just realized, oh, I could actually just make a pivot table. Yeah. The other side of it is, you're still then expecting people to work it out themselves from first principles. And it's kind of useful to have 10,000 really clever people sitting and trying to work out what those things are and then showing it to you as a product. I think another side of this is, like, there were always these precursors.
So, like, there were lots of other things before Instagram. Yeah. You know, YouTube didn't start as YouTube. It started as video dating, I think. There were lots of attempts to do online dating that all kind of worked, until Tinder kind of pulled the whole thing inside out.
And so there were always lots of things that were, what's the phrase? Local maxima. In fact, this is where we were with the iPhone in particular. I was working in mobile for the previous decade, and it didn't feel like we were waiting for a thing.
It felt like it was kind of working. Like, every year the networks got faster and the phones got better, and it got a little bit better every year. And we had apps and we had app stores and we had 3G and we had cameras, and every year was a bit better. And then the iPhone arrives, and it just blows the chart up: you've got this line doing this, and then there's a line that does that. Although, remember, the iPhone also took two years before it worked, as the price was wrong and the feature set was wrong and the distribution model didn't quite work.
And so, yeah, you can think everything is going well, and then something comes along and you realize: oh, no. Which is the same for Google.
You know? Like, search was a thing before Google. It just wasn't very good. And there was lots of social stuff before Facebook, and that was the thing that catalyzed it. So I just think, deterministically, this whole thing is so early that it feels like, of course, there are going to be dozens, hundreds of new things.
Otherwise, a16zs would just kind of shut down and give the money back to the LPs, because, right, the foundation models will just do the whole thing. And, like, I don't think you're gonna do that. At least I hope not.
No. No. If we have any regrets from the last few years, it's not going bigger. I think we didn't fully appreciate how much specialization there would be, whether it's voice or image generation or any subsector you pick: that there would be net new companies created that would be better than the model providers, that there would even be multiple model providers. In the Web 2.0 era, we'd always bet on the category winner.
Right? And the category winner would take most of the market. But these markets are so big, and there's so much expertise and specialization, that, one, there can be winners in every category; it's not just the model providers taking everything. And, two, even within every category, including the model providers, there can be multiple winners and increasing specialization. The markets are just big enough to contain multiple winners.
I think that's right. And I think the categories themselves aren't clear. Right. Many times, you think this is a category, and it turns out, no, it was actually that whole other thing, and the categories kind of get unbundled and bundled and recombined in different ways.
I mean, I remember I was a student in 1995, and I think I had four or five different web servers on my PC. Because Tim Berners-Lee's original web browser had a web editor in it: he thought this was kind of like a network drive, a sharing system, and didn't realize it was really a publishing system. So you would have your web pages on your PC, and you'd leave your PC turned on, and that would be how your colleagues would look at your Word documents or your web pages. And so, again, we just don't know, and I just kind of keep coming back to this point.
I feel like most of the questions we're asking at the moment are probably the wrong questions. Picking up on a strand within what you just said, though: one of the things I'm thinking about a lot is looking at OpenAI, because I'm fascinated by disconnects. And we've got this interesting disconnect now, which is that if you look at the benchmark scores, you've got these general purpose benchmarks where the models are basically all the same. And, yes, if you're spending hours a day with this, then you've got an opinion about, oh, I like Claude's tone of voice more than I like GPT, and I like GPT-5.1 more than GPT-4.9 or whatever the hell it's called.
If you're using this once a week, you really don't notice this stuff. The benchmark scores are all roughly the same, but the usage isn't. Claude has basically no consumer usage, even though on the benchmark scores it's the same. And then it's ChatGPT, and then halfway down the chart it's Meta and Google.
And the funny thing is, you read all the AI newsletters, and it's like, Meta's lost. They're out of the game. They're dead. Mark Zuckerberg is spending a billion dollars a researcher to get back in the game. But from the consumer side, well, it's distribution.
And the interesting thing here, what I'm kind of circling around, is that the model for a casual consumer user certainly is a commodity, and there are no network effects or winner-takes-all effects yet. Those may emerge, but we don't have them yet. And things like memory aren't network effects; they're stickiness, but they can be copied. So how is it that you compete? Do you just compete on being the recognized brand and adding more features and services and capabilities, and people just don't switch away, which is kind of what happened with Chrome, for example?
There's not a network effect for Chrome, and it's not actually much better. Maybe it's a bit better than Safari, but you use Chrome because you use Chrome. Or is it that you get left behind on distribution, or on network effects that emerge somewhere else? And meanwhile, you don't have your own infrastructure. So I suppose what I'm getting at is: you've got these eight or nine hundred million weekly active users, but that feels very fragile, because all you've really got is the power of the default and the brand.
You don't have a network effect. You don't really have feature lock in. You don't have a broader ecosystem. You also don't have your own infrastructure, so you don't control your cost base. You don't have a cost advantage.
You get a bill every month from Satya. So you've kind of got to scramble as fast as you can in both of those directions. On the one side, build product and build stuff on top of the model, which is our earlier conversation: is it just the model? Yeah. Now you've gotta build stuff on top of the model in every direction.
It's a browser. It's a social video app. It's an app platform. It's this. It's that.
It's like the meme of the guy with the map with all the strings on it. Yeah. Know? It's all of these things. We're gonna build all of them yesterday.
And then, in parallel, it's infrastructure. Like, we've got a deal with, sorry, not OpenAI, with NVIDIA, with Broadcom, with AMD, with Oracle, and, well, with petrodollars. Because you're kind of scrambling to get from this amazing technical breakthrough and these eight or nine hundred million weekly actives to something that has really sticky, defensible, sustainable business value and product value.
Yeah. And so, as you're evaluating the competitive landscape among the hyperscalers, what are the questions that you think are gonna be most important in determining who's gonna gain durable competitive advantages, or how this competition is gonna play out?
Well, this kinda comes back to your point about sustaining advantage, and we talked about Google. Like, if we think about the shift to mobile in particular: for Meta, this turned out to be transformative. Like, it made the products way more useful. Yeah. For Google, it turned out mobile search is just search.
And maps changed things, probably, and YouTube changed a bit. But basically, for Google, search is search, and mobile search just means more people doing more search more of the time. Yeah. And the default view now would seem to be, well, Gemini is as good as anybody else's next week, like, the new model. I haven't looked at the benchmarks for GPT-5.1, which is out today.
Is it better than Gemini? Probably. Will it still be better next month? No. So that's a given.
Like, you've got a frontier model. Fine. What does that cost? It costs, well, you pick a number: $250 billion a year, $100 billion a year.
Which is our earlier conversation about CapEx. Okay. So Google can pay that, because they've got the money. They've got the cash flow from everything else. And so you do that, and with your existing products, you optimize search.
You optimize your ad business. You build new experiences. Maybe you invent the iPhone of AI. Maybe there is no iPhone of AI. Maybe someone else does it, and you do an Android and just copy it.
So fine. It's a new mobile. We'll just carry on. Search is search. AI is AI.
We'll do the new thing. We'll make it a feature. We'll just carry on doing it. For Meta, it feels like there are bigger questions about what this means for search, for content and social and experience and recommendation, which makes it all the more imperative that they have their own models, just as it is for Google. For Amazon, okay: on the one side, it's commodity infra, and we'll sell it as commodity infra.
And on the other side, well, maybe stepping back: if you're not a hyperscaler, if you're a web publisher, a marketer, a brand, an advertiser, a media company, you could make a list of questions. Well, you don't even know what the questions are right now. Yeah. What happens if I ask a chatbot a thing instead of asking Google?
Even if it's Google: from Google's point of view, fine, I'll ask Google's chatbot. But as a marketer, what does that mean? What happens if I ask for a recipe and the LLM just gives me the answer? What does that mean if my business is having recipes?
Yeah. And there's a kind of split here, and this is also an Amazon question: how does the purchasing decision happen? How does the decision to buy a thing that I didn't know existed before happen? What happens if I wave my phone at my living room and say, what should I buy?
Where does that take me in ways that it wouldn't have taken me in the past? So there are a lot of questions further downstream, and that goes upstream to Meta and, to some extent, to Google. It's a much bigger question in the long term for Amazon: do LLMs mean that Amazon can finally do really good at-scale recommendation and discovery and suggestion in ways that it couldn't in the past, because of the pure commodity retailing model that it has? And then Apple is sort of off on one side.
You know, interestingly, they produced this incredibly compelling vision of what Siri should be two years ago. It just turned out that they couldn't make it. Interestingly, nobody else could have made it either. You go back and watch the Siri demo that they gave, and you think: okay, so we've got multimodal, instantaneous, on-device, tool-using, agentic, multi-platform e-commerce in real time, with no prompt injection problems and zero error rates.
Well, that sounds good. I mean, has anyone got that working? Like, no. Google and OpenAI didn't have that working. I don't think Google or OpenAI could deliver the Siri demo that Apple gave two years ago.
I mean, they could do the demo, but they couldn't consistently, reliably make it work. That demo, that product, isn't in Android today. And Apple, to me, has the most intellectually interesting question here. I saw Craig Federighi make this point, which is, like: we don't have our own chatbot. Fine. We also don't have YouTube or Uber.
Explain why that is different, which is a harder question to answer than it sounds. And of course, the answer is: if this actually fundamentally changes the nature of computing, then it's a problem. If it's just a service that you use, like Google, then that's not a problem, which is kind of the point about where does Siri go. But the interesting counter-example here would be to think about what happened to Microsoft in the web era, which is that the entire dev environment gets away from them, and no one builds Windows apps after, like, 2001 or something. But you need to use the Internet.
To use the Internet, you need a PC. And what PC are you gonna buy? Well, Apple's not really a player at that time, or just getting back into the game. Linux is obviously not an option for any normal person. So you buy a Windows PC.
So basically, Microsoft loses the platform war and sells an order of magnitude more PCs. Well, not Microsoft selling them, but an order of magnitude more Windows PCs get sold as a result of this thing that Microsoft lost. And then it takes until mobile for them to lose the device as well as the development environment. So here's the question: if all the new stuff is built on AI and I'm accessing it in an app that I download from the App Store, to what extent is this a problem for Apple? You would need a much more fundamental shift in what was happening for that to be a problem for Apple.
And even if you take the full, like, the rapture arrives, and we all just kinda go and sleep in pods like the guys in, not Up. What is it? The one with the robot that's collecting the trash?
Which one is that? WALL-E. WALL-E. Yeah.
You know, the guys in the pods in that movie. Maybe we'll be like that, in which case, fine. But there's a sort of mid case, which is that the whole nature of software changes, and there are no apps anymore, and you just go and ask the LLM a thing. Fine. What is the device on which you ask the LLM a thing?
Well, it's probably gonna have a nice big color screen, and it's probably gonna have, like, a one-day battery life. Probably a good microphone, probably a good camera. Yeah. It kinda sounds like an iPhone.
Yeah.
Am I going to buy the one that's a tenth of the price and just use the LLM on it? No, because I'll still want the good camera and the good screen and the good battery life. So there are a bunch of interesting strategic questions when you start poking away. Well, what does this mean for Amazon?
Those are completely different questions to what it means for Google, or what it means for Apple, what it means for Facebook, what it means for Salesforce, or what it means for Uber. And then, right back to what we were saying at the beginning of this conversation: what does this mean for Uber? Well, their operations get x percent more efficient, and now the fraud detection works. And, okay, maybe there are autonomous cars. Different conversation.
But presume no autonomous cars; that's a whole other conversation. Otherwise, as Uber, what does this change? Well, not a huge amount.
I wanna zoom out a little bit from this whole framing. So you've been doing these presentations for a while now; you bumped them up to twice a year because so much is changing. And one of the things you do in each presentation, you're famous for asking really great questions and chronicling what the important questions are to be asking. I'm curious, as you reflect on the questions you were asking post-ChatGPT in 2022, or post-GPT-3 rather: to what extent do we have some direction on some of those questions, and to what extent are they the same questions, or new and different questions?
Or, put another way: if I woke up from a coma after reading your original presentation, let's say the one after the GPT-3 launch, and then saw this one now, what were the most surprising things, or the things we learned that updated those questions?
So I think we have a lot of new questions this year. I feel like you could make a list of maybe half a dozen questions in '23: open source, China, NVIDIA, does scaling continue, what happens to images, how long does OpenAI's lead remain? And those questions didn't really change in '23 and '24.
And most of those questions are kind of still there. Like, the NVIDIA question hasn't really changed. And the answer on, you know, how many models will there be?
The answer is: okay, whoever can spend a couple of billion dollars can have a frontier model. And that was, I think, pretty obvious in '23; it just took a while for everyone to understand it. And big models and small models: will we have small models running on devices? No, because the capabilities keep moving too fast for the small models to shrink onto the device.
But those questions kind of didn't change for two, two and a half years. I think we now have a bunch more product strategy questions, as you see real consumer adoption, and OpenAI and Google building stuff in different directions, Amazon going in different directions, Apple trying and obviously failing and then trying again to do stuff. There's some sense that there is something more going on in the industry than just, well, let's build another model and spend more money. Yeah. There are more questions and more decisions now.
There are also more questions outside of tech, certainly on, like, the retail media side: how do you start thinking about what you would do with this? And again, the classic framing in my deck is: step one is you make it a feature, and you absorb it, and you do the obvious stuff. Step two is you do new stuff. Step three is maybe someone comes and pulls the whole industry inside out and completely redefines the question. And so you could kind of do an imagine-if here. Say you're a manager at a Walmart in the Bay Area or DC or wherever.
Step one is: find me that metric. Step two is: build me a dashboard. Step three is: it's Black Friday, and I'm managing a Walmart outside of DC; what should I be worried about? And that might be the wrong one, but it's like, you know, step one for Amazon is: you bought light bulbs.
So you bought bubble wrap; here's some packing tape. But what Amazon should actually be doing is saying: it looks like this person's moving home, so we'll show them a home insurance ad, which is something that Amazon's correlation system wouldn't get, because they wouldn't have that in their purchasing data. And we're still very much on step one of that, but thinking much more about what step two and step three would be. What would new revenue from this be, other than just simple, dumb automation? What new things would we build with this?
Where might this actually redefine or change what the market looks like? And that's obviously a big question for anyone in the content business. Yeah. You know, what does it mean if I can just go and ask an LLM this question? What kinds of content were predicated on Google routing that question to you?
And what kind of content isn't really that question? Like, do I want a Bolognese recipe, or do I want to hear Stanley Tucci talking about cooking in Italy? Do I just want the SKU, or do I want to work out which product I should buy? Amazon is great at getting you the SKU, terrible at telling you which SKU you want. Do I just want the slide deck, or do I want to spend a week talking to a bunch of partners from Bain about how I could think about doing this?
Do I just want money, or do I want to work with a16z's operating groups? Yeah. Like, what is it that I'm doing here? And I think the LLM thing is starting to crystallize that question in lots of different ways. Yeah.
Like, what am I actually trying to do here? Do I just want a thing that a computer can now answer for me, or do I want something else that isn't that? Because the LLMs can do a bunch of stuff that computers couldn't do before. Right. Is the thing that the computer couldn't do before my business?
Yeah. Or am I actually doing something else?
We're about to figure out, in a much more granular way, what the true job to be done is for many, many of these.
Yeah. And going back to the Internet, there was the observation about newspapers: that newspapers looked at the Internet and talked about expertise and curation and journalism and everything else, and didn't really say, well, we're a light manufacturing company and a local distribution and trucking company. Yep. And that was the bit that was the problem. And until the Internet arrived, that wasn't a conversation you thought about.
And then the Internet suddenly makes that clear and suddenly creates an unbundling that didn't exist before. And so there will be those kinds of moments: you didn't realize you were that, until someone comes along with an LLM and says, oh, I can use this to do this thing that you didn't really realize was the basis of your defensibility or the basis of your profitability. I mean, it's like the joke about US health insurance: that the basis of US health insurance profitability is making it really, really boring and difficult and time consuming. That's where the profits come from. Maybe it isn't.
I don't know. Disagree if you like. But for the sake of argument, say that's your defensibility. Well, an LLM removes boring, time-consuming, mind-numbing tasks.
Yeah. So what industries are protected by having that, and didn't realize it? And, you know, you could have asked these questions about the Internet in the mid-nineties, or about mobile a decade later. And, generally, half of the questions you'd have asked would have been the wrong questions in hindsight.
I mean, I remember, as a baby analyst in 2000, everyone kept saying, what's the killer use case for 3G? What's a good use case for 3G? And it turned out that having the Internet in your pocket everywhere was the use case for 3G. Yeah. But that wasn't the question that people were asking. And I'm sure that will be the thing now: there's so much that will happen and get built where you go and realize, oh, that's how you would do this.
You can turn it into that. Yeah. And I'm sure you've had this experience seeing entrepreneurs: every now and then, they come in, they pitch the thing, and you're like, oh, okay, you can turn it into that.
I didn't realize it was that.
Yeah. No, 100%. My last question to get you out of here: if we're talking two or three years from now, or you're doing a presentation, and you say, oh, this is actually bigger than the Internet, or maybe this is like computing, what would need to be true?
What what would need to happen? What what would would evolve our thinking?
I mean, I kind of come back to my point about, you know, Jews and Christians: the messiah came, and nothing happened. There's maybe two very brief ways to think about this. One of them is I think we forget how enormous the iPhone was and how enormous the Internet was.
And you can still find people in tech who claim that smartphones aren't a big deal. Yeah. And this was the basis of people complaining about me: this idiot thinks generative AI is as big as those silly phone things. Come on.
I think another answer would be, like, I don't wanna get into the argument about, you know, the growth rate in capability and benchmarks and all that. You know, you can see lots of five hour long podcasts of people talking about this stuff. But the stuff we have now is not a replacement for an actual person outside of some very narrow and very tightly constrained guardrails, which is why, you know, there's Demis's point that it's absurd to say that we have PhD level capabilities now. We would have to be seeing something that would really shift our perception of the capability of this stuff, yeah, so that it's actually a person, as opposed to something that can kind of do these person-like things really well sometimes, but not other times.
And it's, you know, a very tough conceptual kind of thing to think about. I'm conscious that I'm deliberately not giving you a falsifiable answer, but I'm not sure what a falsifiable answer would be to that. When would you know whether this was AGI? You know, it's the Larry Tesler line: AI is whatever doesn't work yet.
As soon as people say it works, people say, well, that's just not AI. That's just software. And, you know, it becomes a slightly drunk philosophy grad student kind of conversation as much as a technology conversation. Like, have you ever considered, Eric, that maybe we're not conscious either? Just as a thought.
All I can say, to give a tangible answer to this question, is that what we have right now isn't that. Will it grow to that? We don't know. You may believe it will. I can't tell you that you're wrong.
We'll just have to find out.
I think that's a good place to wrap. The presentation is AI Eats the World. We'll link to it. It's fantastic. Benedict, thanks so much for coming on the podcast to discuss it.
Sure. Thanks a lot.
Thanks for listening to this episode of the a16z podcast. If you liked this episode, be sure to like, comment, subscribe, leave us a rating or a review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X @a16z, and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
AI Eats the World: Benedict Evans on the Next Platform Shift