The 80-Year Bet: Why Naveen Rao Is Rebuilding the Computer from Scratch
Naveen Rao is cofounder and CEO of Unconventional AI, an AI chip startup building analog computing systems designed specifically for intelligence. Previously, Naveen led AI at Databricks and founded two successful companies, Mosaic and Nirvana.
I think AI is the next evolution of humanity. I think it takes us to a new level. It allows us to collaborate and understand the world in much deeper ways. Naveen Rao is an expert in AI. Naveen Rao, probably one of the smartest guys in this domain.
He sees things well before anybody else sees them.
You had a lot of success doing Nirvana Mosaic and Databricks. Why start a new chip company now?
First off, it's not a chip company per se. Most of what we're doing is really kind of looking at first principles of how learning works in a physical system.
NVIDIA, TSMC, Google, are these potential allies for Unconventional? Are these competitors?
Well, I think TSMC is absolutely gonna be a partner. Google kinda has everything internally, and NVIDIA, of course, they built the platform that everyone programs on today. So are we gonna be at odds with NVIDIA going forward? I don't know. We'll see what the world looks like, but there could be a world where we collaborate.
Has anyone called you crazy yet for doing this?
Oh, yeah. Plenty of people.
A squirrel brain runs on a tenth of a watt. Our AI data centers now consume 4% of the entire US power grid, and we need 400 more gigawatts in the next decade just to keep up. Naveen Rao thinks the problem isn't power generation, it's that we've been building the wrong kind of computer for eighty years. Naveen sold his last AI chip company to Intel, and now he's back with a bet most people call crazy: analog computing purpose built for intelligence. In this conversation, a16z's Matt Bornstein sits down with Naveen to discuss why now is the time for this unconventional bet.
Our guest today is Naveen Rao, cofounder and CEO of Unconventional AI, which is an AI chip startup. Prior to that, Naveen was at Databricks as head of AI and cofounder of two successful companies: Mosaic in the cloud computing world, and Nirvana, doing AI chip accelerators before it was cool. We're here reporting from NeurIPS, and it's great to have you on the podcast. Naveen, welcome.
Thanks. Thanks for having me.
So you were kind of at the vanguard thinking about what the proper hardware is for running AI workloads. Absolutely.
I mean, it's like when you have a hammer, everything's a nail, I suppose. But the early part of my career was really about how do I take certain algorithms and capabilities and shrink them, make them faster, put them into form factors that make those use cases proliferate, like wireless technology or video compression. But you couldn't do video compression in real time on a laptop back then because there wasn't enough computing power. So you actually needed to build hardware to do those kinds of things. So the early part of my career was all about that.
And then I went back to academia, did a PhD in neuroscience. And so you still kinda look at it like, hey, can I make something better that's more efficient?
So, you sold Nirvana to Intel, and then founded Mosaic, which is a cloud company. It's interesting to cross domains like that, I think, to be able to look at hardware and software. I would argue Mosaic was really a software company. How'd you make that decision, and why do you think you have these diverse interests?
Well, I think I was, I don't know, I guess you would call it an OG kind of full stack. Now full stack engineering means something different than it meant back then. Back then, it meant someone who understands potentially devices like silicon, how to do logic design, computer architecture, low level software, maybe OS level software, and then application. That was a full stack engineer.
I actually had touched all those topics. So to me, it's very natural to kinda think across these boundaries. To me, software and hardware is not really a natural boundary. It's just where we decide to draw the line and say, okay, this is something I make configurable or I don't.
And it's like, where is the world gonna consume something? Where is the problem? You then right size and figure out the solution to go and hit it.
Now full stack means I know JavaScript and Python. That's right. So you've had a lot of success doing both of those things and at Databricks. Why start a new chip company now?
It is kinda crazy. It's one of these things. Actually, first off, I'd say it's not a chip company per se. Most of what we're doing, at the beginning, is theory, and really kind of looking at first principles of how learning works in a physical system. And the reason I could go back and do this is just purely out of passion. I think we can change how a computer is built.
We've been building largely the same kind of computer for eighty years. We went digital back in the nineteen forties. And in undergrad, in the nineteen nineties, when I learned about the dynamics of the brain, like the brain runs on 20 watts of energy, and the kind of computations that can happen inside the brain and neural systems, I was just blown away then, and I'm still blown away by it. And I think we haven't really scratched the surface of how we can get close to that. Biology is exquisitely efficient.
It's very fast. It right sizes itself to the application at hand. When you're chilling out, you don't use much energy, but you're still aware of other threats and things like this. And once a threat happens, like, everything turns on. It's very dynamic.
And we really haven't built systems like this. And I've been in the industry long enough to know that we have to have an incentive to build things. You can't just say, hey. I wanna build this cool thing, and therefore, I go build it. Maybe in academia, you can do that.
But in sort of the real world, I can't. And now it's exciting because those concepts are super relevant. We're at a point in time where computing is bound by energy at the global level, which just was never true in all of humanity.
And so for those of us who aren't experts, can you describe the difference between digital and analog computing systems? And, like, why do you think the architecture has evolved the way it has sort of more digital focus over decades, as you said?
Yeah. I mean, very simply, digital computers implement numerics, and numerics with some sort of estimation. Right? I mean, in a digital computer, a number is represented by a fixed number of bits, and that has some precision error and things like this. It's just a way we implement the system.
If you make it enough bits, like 64 bits, you can largely say that maybe the error is small, you don't have to think about it. And so the digital computer is capable of simulating anything that you can express as numbers and arithmetic. So it became a very general machine. I can literally simulate any physical process. All of physics, we try to do computational physics.
Right? I have an equation. I can then write numeric solvers that sort of deal with those imprecisions in the number of bits. And so this became obviously computer science, the entire field now. And we went that direction actually very early on because we couldn't scale up computation.
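To make the fixed-number-of-bits point concrete, here's a minimal sketch in Python (mine, not from the conversation): the same value, 0.1, represented with 16 versus 64 bits, and the classic rounding artifact that falls out of finite precision.

```python
import numpy as np

# A digital computer stores numbers in a fixed number of bits, so every
# value is rounded to the nearest representable one. With enough bits the
# error gets small enough that you "don't have to think about it."
x64 = np.float64(0.1)   # 64-bit float: error exists but is tiny
x16 = np.float16(0.1)   # 16-bit float: the rounding is plainly visible

print(f"float64 0.1 -> {x64:.17f}")         # 0.10000000000000001
print(f"float16 0.1 -> {float(x16):.17f}")  # 0.09997558593750000

# The rounding leaks into arithmetic, which is why numeric solvers have to
# be written to tolerate these imprecisions.
print(0.1 + 0.2 == 0.3)  # False
```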
It was actually kind of an interesting conversation if you look back then, not that I was there, of course, but if you look at the papers and things, they actually looked very similar to today in terms of scaling up GPUs. Analog computers are actually some of the first computers, and they worked really well. They were very efficient, but they couldn't be scaled up because of manufacturing variability. So someone said, oh, okay. You know what?
I can actually say I can make a vacuum tube behave as a high or low very reliably. I can't characterize the in between very well, but I can say it's high or low. And so that was kinda where we went to digital abstraction, and then we could scale up. ENIAC, which was built in 1945, had 18,000 vacuum tubes. Wow.
So 18,000 is kinda similar to how many GPUs people use now, right, for large scale training. So scaling things up is always a hard problem. And once you figure out how to do it, it makes a paradigm happen. And I think that's why we went to digital. But analog still is inherently more efficient, because it's really analogous computing, that's the way to think about it.
Like, can I build a physical system that is similar to the quantity I'm trying to express or compute over? You're effectively using the physics of the underlying medium to do the computation.
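A minimal illustration of the analogous-computing idea, my own sketch rather than anything Naveen described: an RC circuit's output voltage physically obeys the same first-order differential equation that a digital machine has to step through numerically. The component values below are arbitrary and chosen only for illustration.

```python
import numpy as np

# An RC low-pass circuit physically satisfies  dV/dt = (V_in - V) / (R*C).
# The capacitor "computes" the answer just by obeying physics; a digital
# computer approximates the same trajectory with discrete time steps.
R, C = 1e3, 1e-6            # 1 kOhm, 1 uF  ->  time constant tau = 1 ms
tau = R * C
V_in = 1.0                  # step input applied at t = 0

# What the physical circuit does (closed-form step response)
t = np.linspace(0.0, 5 * tau, 501)
v_analog = V_in * (1.0 - np.exp(-t / tau))

# Digital emulation of the same dynamics: explicit Euler integration
dt = t[1] - t[0]
v_digital = np.zeros_like(t)
for k in range(1, len(t)):
    v_digital[k] = v_digital[k - 1] + dt * (V_in - v_digital[k - 1]) / tau

print(f"max |analog - digital| = {np.max(np.abs(v_analog - v_digital)):.4f} V")
```

The toy example only shows the division of labor: in the analog case the medium's physics is the computation, while the digital version spends energy on every arithmetic step used to simulate it.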
And so in digital computers, we have transistors. Just to make it sort of concrete, what kind of substrates are you talking about for analog computers? Yeah. I mean, analog computers can be lots
of different things. Wind tunnels are a great example of an analog computer, in a sense, where I have a race car on a track or an airplane, and I wanna understand how the wind moves around it. And you can, in theory, solve those things computationally. The problem is you're always gonna be off. It's very hard to know what the real system's gonna look like, and doing things with computational fluid dynamics accurately is pretty hard.
So people still build wind tunnels. That's actually modeling that. That's an analog computer. I think we still have lots of reasons to build these analogous type computers. Now in the situation we're talking about, we can actually build circuits in silicon to recapitulate behaviors of neural networks.
So what we're doing today is more specified than what we were doing eighty years ago, in the sense that back then we were trying to automate generic calculation, which was used to calculate artillery trajectories. It was used to calculate finances, maybe some physics problems like going into space, things like that. Those require determinism and specificity around these numbers and these computations. Intelligence is a different beast. You can build it out of numbers, but is it naturally built out of numbers?
I don't know. A neural network is actually a stochastic machine. And so why are we using a substrate that is highly precise and deterministic for something that's actually stochastic and distributed in nature? So we believe we can find the right isomorphism in electrical circuits that can subserve intelligence. That's a
pretty wild idea, isn't it? Maybe unpack it one level deeper, because I totally agree with you. Computers for decades have been sort of the complement to human intelligence. It's like, hey, my brain isn't really great at computing an orbital trajectory. That's right.
And I don't want to burn up on reentry. So, a computer can help us with this incredible degree of precision. We're now kind of going the opposite direction. Right? We're actually trying to encode more fuzziness into computer systems.
So, go maybe just a little bit deeper on this idea of an analog and why intelligence is a good fit for analog systems.
Well, I mean, the best examples we have of intelligent systems in nature are brains. And it's often been said, you know, human brains run on 20 watts of energy. That is true. But if you look at mammalian brains generally, they're actually extremely efficient. Like a squirrel or a cat, it's like a tenth of a watt.
And so there's something there that we're still missing. And not to say that we understand all of it, but part of what I think we're missing is we have lots of abstractions in a computer that are quite lossy. In a brain, the neural network dynamics are implemented physically. So there is no abstraction. Intelligence is the physics.
They're one and the same. There's no OS and some sort of API and this and that.
It's like... So there's some visual stimulus, for instance, that directly activates an actual neural network and produces some somatic response, that sort of thing.
Exactly. And those things are mediated by chemical diffusion and, you know, the physical properties of the neuron, the physics itself. So I think, absolutely, it's possible to build something that's much more efficient by using physics in an analogous way. That is 100% true. Can we do it and build a product out of it? That's really the question we're asking here at Unconventional.
And is part of the idea that now is the right time because AI is both a huge and a unique workload?
Yeah. Absolutely. You know, it's interesting. So just maybe some stats here. Like, the US is about 50% of the world's data center capacity.
And today, we put about 4% of the US energy grid into those data centers. And this past year, 2025, was the first time we started to see news articles about brownouts in the Southwest during the summer. And, you know, just imagine what happens when this goes to 8%, 10% of the energy grid. It's not gonna be a good place that we're in. So can we build more power?
Absolutely, we should. But building power generation is very hard, expensive, and it's infrastructure. Like, it takes time. You can only bring online so many gigawatts per year, and it's something on the order of four per year.
By some estimates, we need 400 gigawatts of additional capacity over the next ten years to power the demand for AI. Wow. So we have a huge shortfall. And so we really just need to rethink this.
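Just to spell out the gap implied by those figures, here's a back-of-the-envelope calculation. One assumption is mine: I'm reading "on the order of four per year" as roughly 4 gigawatts of new generation capacity per year.

```python
# Rough arithmetic on the numbers quoted in the conversation.
new_capacity_per_year_gw = 4      # assumed reading of "order of four per year"
years = 10
ai_demand_gw = 400                # "400 gigawatts over the next ten years"

buildable_gw = new_capacity_per_year_gw * years   # ~40 GW added at that pace
shortfall_gw = ai_demand_gw - buildable_gw        # ~360 GW short

print(f"Added over {years} years: ~{buildable_gw} GW")
print(f"Estimated AI demand:     ~{ai_demand_gw} GW")
print(f"Shortfall:               ~{shortfall_gw} GW ({ai_demand_gw // buildable_gw}x gap)")
```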
The, you know, 15 year old sci-fi nerd in me says, like, wow, we're mobilizing, you know, species-scale resources to, like, invent the future.
We are.
And then there's the practical side. It's like, even if we add 400 gigawatts of production capacity, our 1970s-era transmission grid is probably gonna melt under the load. So yeah, there's very serious sort of infrastructure hurdles to this, I think.
It's hard to get a lot of humans to act together. Right? It's just a reality. But that's what has to happen to solve these problems.
What trade offs do you think this entails? You know, sort of the path you're pursuing versus the mainstream digital path now?
Yeah. I actually don't see it as, you know, it's digital or analog. It doesn't work like that. I think there are certain types of workloads that are amenable to these analog approaches, especially the ones that can be expressed as a dynamical system. Dynamics meaning time.
They have time associated with them. In the real world, every physical process has time. And in the computing world, like a numeric computing world, we actually don't have that concept. You simulate time with numbers. Actually, simulating time is very useful in certain problems.
So I think we should still build those things, and we should still have those capabilities for the problems that we need to solve that way. But for these problems where, like you said, it's a bit fuzzier, I'm trying to retrieve and summarize across multiple inputs, that's actually what brains do really well. Right?
They can take in tons of data and sort of formulate a model of how those things interact. And sometimes those models can be actually extremely accurate. Like, look at an athlete. You know? Alex Honnold climbed El Capitan.
Right? Just think about the precision that's required. It still scares me
every time I see it. Right? Yeah.
And if he slips, like, if he's off by a millimeter in some places. Wild. He dies. Right? And that's true for, like, every top level athlete.
They're someone who's, you know, at the Olympic level. Yes.
Steph Curry, you know, the story is he set up a special tracking system so he could make sure the ball was hitting the middle of the rim, not just
going through. So the level of precision these guys hit with a neural network that's noisy is actually quite high. So neural systems can actually do a lot of precision under certain circumstances. But what's interesting about these situations is Steph Curry, when he shoots a ball, is never gonna shoot under ideal circumstances in a game.
Mhmm.
Always, it's a unique input, and there's a lot of different input varieties coming at you. Like, where the other players are, precisely where you're standing. Maybe your shoes are different. Maybe the surface is a little different. Like, maybe the ball is tackier or your hands are sweaty.
Like, there's so many inputs, and we kinda put them all together and integrate them. It's still a very accurate behavior. So brains are exceptionally good at this, and, you know, that's a set of problems that is actually very useful to solve. And now we're approaching those problems. But it doesn't mean we don't still use computational substrates to do actual computation.
This is kind of an intelligence substrate.
And so what types of AI models or data modalities do you expect your hardware will be well suited for?
Yeah. So we're obviously starting with the state of the art today, like transformers, diffusion models. They work. They do really good stuff, so we shouldn't throw that out. And diffusion models and flow models, which are really energy based models, are actually pretty interesting because they inherently have dynamics as part of them.
They're literally written as an ordinary differential equation. So that makes it such that, hey, can I map those dynamics onto the dynamics of a physical system in some way that's either fixed or has some principled way of evolving? And then can I basically use that physical system to implement that thing and do it very efficiently with physics? So that's kind of the nature of what we're doing.
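Here's a toy sketch of what 'written as an ordinary differential equation' means in this context. It's my own illustrative example, not Unconventional's method: samples of Gaussian noise are pushed toward a target distribution by integrating a velocity field through time, which is the basic shape of flow-matching and probability-flow formulations. The hope Naveen describes is that physical dynamics could evolve a trajectory like this directly, instead of a digital solver stepping through it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "flow model": transport N(0, 1) noise to N(mu, sigma^2) by integrating
# dx/dt = v(x, t). The velocity field is the straight-line (optimal transport)
# interpolation between the two Gaussians, so the result is easy to check.
mu, sigma = 3.0, 0.5

def velocity(x, t):
    scale = (1.0 - t) + t * sigma       # std of the interpolated law at time t
    x0 = (x - t * mu) / scale           # recover the source coordinate
    return (sigma - 1.0) * x0 + mu      # d/dt of ((1-t) + t*sigma)*x0 + t*mu

# Digital emulation of the dynamics: explicit Euler from t = 0 to t = 1.
x = rng.standard_normal(100_000)        # samples from the source N(0, 1)
steps = 200
dt = 1.0 / steps
for k in range(steps):
    x = x + dt * velocity(x, k * dt)

print(f"target mean/std: {mu:.3f} / {sigma:.3f}")
print(f"sample mean/std: {x.mean():.3f} / {x.std():.3f}")
```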
And we will be releasing some open source and things around this to let people play around. But, you know, transformers are really a big innovation because they made the constructs of a GPU work extremely well. And it doesn't mean it's wrong, but I don't think there's anything natural about it. There's no natural law about the parameters of a transformer. A transformer's parameters are a function of the nonlinearities and the way the whole thing is set up with attention.
There's gonna be some kind of mapping between transformer parameter spaces and these other parameter spaces. And transformers, I think, can use lots of parameters to accomplish what they do. I have to
ask just since you mentioned energy based models, and Yann LeCun has been, you know, writing quite a lot about this. Do you think pursuing these sorts of paths that you're talking about gets us closer on the path to AGI, whatever AGI means?
Honestly, I do. The reason I feel that way, and again, this is hand wavy. I'm gonna be really honest, I don't
That's why I'm putting quotes around it. Yeah. I think that the discussion is necessarily hand wavy.
It's gotta be, because we just don't know. But my intuition says that anything where the basis is dynamic, which has time and causality as part of it, will be a better basis than something that's not. So we've largely tried to remove that. And a lot of times you can write math down that's reversible in time and things like that, but the physical world tends not to be, at least the way we perceive it. And so can we build out of elements of the physical world that, you know, do have time evolution?
I think that's the right basis to build something that understands causation. So I do think we'll have something that is better, and it will give us something closer to what we really think is intelligence. Because, yes, we have intelligence in these machines. I don't think they're anywhere close to AGI, because, I mean, they still make stupid errors. They're very useful tools, but it's not like working with a person.
Right?
I think most people would agree. That's actually really interesting. So the sort of thing that's missing in AI behavior, which I think a lot of us see is missing but can't quite put a name to, it sounds like you're arguing part of that is sort of a real sense of causality. Yeah. And that training in a more dynamic sort of regime may impart this kind of, like, apparent understanding of causality better than what we have now.
Yeah. And again, hand wavy, but yes. I mean, look, you have kids, little kids, and you see them. I mean, children kind of innately understand causality in some ways. Like, this happened, then that happened.
And yes, I know you can say it's reinforcement learning or whatever. That's some part of it, but there's something innate about how we understand causality. In fact, that's how we move our limbs and all of that. I know if I send a certain command to my arm, it'll do something. So I think there's something innate about the way our brains are wired and built out of primitives that do understand causation.
Put Unconventional in the context of the broader industry for me. Like, NVIDIA, TSMC, Google, are these potential allies for Unconventional? Are these competitors? How do you think about it?
Yeah. I mean, a couple of things we set out to do when we were starting this company: see if we can find a paradigm that's analogous to intelligence within five years. And then at the five year mark, we should be able to build something that's scalable from a manufacturing standpoint. So, you know, you can think about building a computer out of many different things. But if it's not scalable from a manufacturing standpoint, we can't intercept this global energy problem.
So we need to have somebody say, okay, go build 10,000,000 of these things. Right? So I think TSMC is absolutely gonna be a partner going forward. You know, I met with them recently, and we wanna work closely with them to make sure we get what we need, get fast turnaround times to prototype, and all of that.
Google, NVIDIA, Microsoft, all these guys are, you know, at the forefront of where the application space is. Obviously, Google kinda has everything internally, and I think they're working on sort of lower risk but, you know, continual improvements for their hardware. With TPUs, you mean? With TPUs. Yeah.
That, from what I can see publicly, makes total sense. Right? They have a business to run. They're trying to make their margins better. And, you know, how can I do that with all the tools I have in front of me?
NVIDIA, of course, they've built the platform that everyone programs on today. So are we gonna be at odds with NVIDIA going forward? I don't know. We'll see what the world looks like. But, I mean, we are trying to build a better substrate than matrix multiply.
There could be a world where we collaborate on such solutions. And, you know, we're open to all of these things.
Where do you personally get the motivation to get up in the morning and build this company? I mean, you've had a lot of success in your career, and now a brand new startup. What, you know, what's exciting about this to you?
I don't know. It's a weird thing. If you haven't worked in hardware, it's hard. I've been fortunate to work in both hardware and software. And I love writing a bunch of software and then hitting compile and seeing it work.
That's a good dopamine hit. But, man, when you work on a piece of hardware
and you turn that thing on,
that's a big dopamine hit. That's like a celebration, you know, jumping up in the air, high fiving. It's a different thing. And I don't know, you sort of live for these moments.
Yeah? Like, when I was at Intel, I was one of the only execs who would go to the lab when the first chip would come back. And I'm like, I wanna see it
when we turn it on.
Let's see what happens. Sometimes you turn it on. It's like
You see the little puff of smoke
come out.
You're like, uh-oh.
That's not good, but you wanna be there. You wanna be part of the moment. But I think that's part of it. I think for me personally, I feel like we have this opportunity now where we can really change the world of computing and make AI ubiquitous. I'm the opposite of an AI doomer.
I think AI is the next evolution of humanity. I think it takes us to a new level, allows us to collaborate, understand each other, and understand the world in much deeper ways. Totally agree. So every technology has negatives, but the positives to me so far outweigh it. And the only way we're gonna get to ubiquity is we have to change the computer.
The current paradigm, as good as it is and as far as it's taken us, is not gonna take us to that level.
I think that's such a great way to say it. AI actually can help us understand each other better, help us understand ourselves better, understand the natural world better. Yeah. I don't think it's at all what some of the doomers think of replacing human experience.
That's a short term thing. There will be bumps along the way. Technology does that.
That's what happens when you've seen too many sci fi movies.
That's right. Go with Star Trek.
Yeah. Yeah. Totally. Totally. It's great.
This is a really big swing. Right? This is a very ambitious company. What gives you confidence that it's gonna work, or that it has a reasonable shot of working?
There's a number of data points. Of course, like I said, brains are an existence proof. But there's also forty plus years of academic research, which is showing a lot of promise here. People have built different devices, albeit not in the latest technology with professional engineering teams, but they have built proofs of concept that actually show some of these things work. We've also, from a theory standpoint, both from neuroscience and just pure dynamical systems and math theory, started to understand how these systems can work.
So I think we now have pieces at different parts of the stack that show, hey, if I can combine these things the right way, I can build this. And that's what great engineering is all about: exploiting this thing that someone else built for something else. Engineers are kind of like the opposite of theorists. It's like, well, alright, that thing doesn't quite fit.
Just sand it down
and take it. Right?
So it's like, we gotta do a little bit of that right now, and then we can build something and put it all together.
Yeah. That's awesome. Has anyone called you crazy yet for doing this?
Oh, yeah. Plenty of people. That's fine.
Is it, like, everybody?
Well, I'm used to this at this point. You know? By my family, I've been called crazy. I was called crazy for going back to grad school years ago when I had a very good career in tech. So it's fine.
I think that's fine. You need crazy people to go out and explore. I mean, if you think about humanity coming out of Africa, all that, you know,
the crazy people went out. We would be lost without crazy.
You need some crazy in there, so it's okay.
I'm fine with that. And so what kind of people are you looking to bring on to the team? You have a very ambitious goal. Who should be interested in joining you?
Yeah. I mean, I think some of the traditional-ish, and when I say traditional, over the last five years this field of AI systems has evolved. People who are really good at taking algorithms and mapping them very effectively to physical substrates. Folks who understand energy based models, flow models, gradient descent in different ways. You know, this kind of thing is what we need there.
We need theorists who can think about different ways of building coupled systems, how to characterize the richness of dynamical systems and relate that to neural networks. So there is a theory aspect of this. Then there's folks who are kind of at the system architecture level. It's like, alright, here's what the theory says.
This is what I can really build. How do I bridge that gap? And then there's the people actually physically building this stuff, like analog circuit people, and actually digital circuit people too. We're gonna have mixed signal here. So that's the whole stack.
The stack is hard because these are all things that no one's really pushed to that level. Like, when we build this chip, our first prototype, it's gonna be probably one of the larger, maybe the largest, analog chips people have ever built, which is kinda weird. The first time you do something, things don't usually work the way you think they
So you can get in on that Cerebras versus Jensen game where they were each pulling the biggest possible wafer out of an oven.
Something like that. Yeah. Yeah. Exactly. Right.
Put a few vacuum tubes on top for effect. Yeah.
We do need blinking lights. Need to cool it?
Yeah. Exactly. We're not gonna have cool heat sinks. It's gonna be super cool. It's gonna be cold. Like, you don't need big heat sinks. You know?
So I hope they make something that looks interesting here.
This is a funny time for top AI people, right, where you have sort of the option. If you wanna start a company, there's a lot of venture capitalists who probably would fund you. If you want to get a cushy job at a big company, you can get a very cushy job and kind of do some interesting things. Or people can join a startup like Unconventional that has a lot of the nice aspects people look for in AI careers, and is taking super big swings. I'm just curious, you've been on all sides of this.
Do you have any advice for younger people starting out in their careers, or how do you
think about this? I think you get such a breadth from working in a startup at the beginning of your career that it will pay dividends later on. Because like I said, the reason I can think across the stack is because I did all those things very early in my career. I built hardware. I built software.
I built applications. And in big companies, it's not anyone's fault, it's just the way it is. Like, you get hired to do a thing, and you do that thing over and over again. You're really good at doing that thing, and that's fine.
You need people who are really good at doing specific things. But if you wanna be prepared for change in the future, being really good at one thing is probably less valuable than being slightly good at a lot
of things. Yeah. That's interesting. Is it fair to say Unconventional is sort of a practical research lab? Is that kind of the culture you're going for?
Absolutely. Yeah. I mean, for the first few years, it really is open ended. I don't wanna close doors. Like, I'm really specific about this.
Like, I always try to bring the conversation back, because people are like, oh, that's gonna be hard to manufacture. And I'm like, stop. Don't think about that. Will it work? First, come up with existence proofs, then we go back and try to engineer it and, you know, all the trade offs therein.
But if you make those trade offs upfront, you don't end up in a good place. So, yes, we're really thinking wide open, but with an eye on the future of who we're building a product for.
And to your point, it takes not only people with diverse skill sets, but people with kind of high agency to try new things and learn new things and, yeah, kind of integrate across the stack.
I mean, I think what I've done really well across the companies I've built has been going after hard problems, which kinda lends itself to smart people wanting to come in and try to solve them. They see a challenge. It's like, here's the mountain, let's go climb it. But then giving them agency. And I sort of look at it like, what decisions can I make as a leader to increase the agency of the org overall?
Like, me making a top-down style decision may be globally better for the company in the short term. Mhmm. But I think long term, we'll do better if more people have agency and can try more things out. So personally, I like to find ways to get out of the way when I see people who are very passionate about trying something. It's like, okay.
You really wanna do this? That makes sense. Go for it. You know? And then you own it.
You own both the good and the bad. Right? And that's agency to me. It's like, you gotta be able to come back and say, okay, I fucked up.
This didn't work out. That's okay too, but you give people the room to do that.
You know? Anything else you wanna say before we wrap up?
I mean, I think this is an opportunity to do something that will be felt generationally. You know? To me, that's what gets me up in the morning. You can go work on a product and make a tweak, and people will use it. That's great. But, like, in five years, many times people forget those things.
But if we are successful here, the world will not forget this for a very long time. Right? This will be written in history books. And so I feel like those opportunities are rare.
Thanks for listening to this episode of the a16z podcast. If you like this episode, be sure to like, comment, subscribe, leave us a rating or a review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X at a16z, and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.