Pundits are screaming about the so-called “AI bubble.” But historically slow-to-adopt industries like medicine and law are actually embracing AI at an unprecedented speed. Sarah Guo and Elad Gil look ...
Hi, listeners. Welcome to No Priors. How can we even begin to wrap this year up? The AI field has grown, breaking out into the mainstream and taking center stage with policymakers. ChatGPT shipped massive numbers and asked for massive dollars.
Gemini and Google roared back strong. And on the application front, AI coding has shifted to agents and is eating up all of our inference capacity. Doctors are adopting clinical decision support en masse, and in law and customer support, enterprise adoption is accelerating. What's next? On the research front, the race has multiple live players with open source closing the gap too.
A handful of neo labs, new research labs, got funded this year, and the narrative is changing. Ilya is calling it the age of research. People are trying different ideas around diffusion, self improvement, data efficiency, EQ, large scale agent collaboration, continual learning, energy transformers. It's more open than it's ever been. Finally, we had a lot of attempts to make AI reach into the real world with renewed optimism around robotics.
Next year, those companies are going to start making contact with reality. From a prediction standpoint, personally, I think we're gonna see somebody make a lot of money, hundreds of millions of dollars, trading markets with LLMs next year. Inevitable. We're in the second or third inning. Markets are running a little hot and a little volatile.
It's hot in the hot tub. So get into it with me, Elad. Okay, Elad. It's been a year.
I know. How's it going? 2026, baby.
Are you feeling the AGI? Are you feeling an AI winter, in a good way?
I think I'm actually just feeling microplastics. I think I'm now 80% microplastics. I'm just increasing my microplastic consumption. A friend of mine actually launched a new water brand that has no microplastics, by the way. It's called Loop.
It's in, like, glass bottles, and also the cap isn't plastic.
Does it come with continual testing? No? That's continual testing for you.
They did actually try to take out all the microplastics, and so, I guess, bottled water in actual glass bottles has more microplastics than plastic bottles because of the cap.
Okay. We'll check back in with you in '27 to see if you feel...
Yeah. And I'm just completely ossified out of plastic. I'm actually really, really worried about microplastics. What about all the little glass particles? Aren't you worried about that?
People talk about microplastics but not microglass. Much more concerned about that.
I don't think those particles end up embedded for you permanently.
Silicon? You're not worried about silicon? I go to the beach. I'm like, oh, no. Microglastics everywhere.
I'm actually very willing to insert silicon in my body eventually in my
Wow. That was yeah. I'm not gonna say anything. We can keep going.
What's happening in AI, Elad? Where are we, and what are you most excited about?
Yeah. I guess for '26, here's some of what I think will be interesting that's coming. There's probably four or five things. One is I think people will proclaim yet again that AI is not doing much and it's overhyped, like that MIT report that people were pointing to that I thought really didn't matter. And the reality is technology waves take, like, ten years to propagate, and people are getting enormous value out of it already, and they're gonna get way more out of it in the future.
You know? So, undoubtedly, next year there'll be these overstated bubble claims as well as "hey, AI actually isn't working that well" kind of claims, and that happens every technology cycle; we'll just hear it again. Next year, there'll be pundits and discussions and just a bunch of waste of time on it. So I think that'll happen. I think another prediction for '26 is the next set of verticals will hit massive scale.
I think this year, we saw consolidation of coding into a handful of players, scribing into a handful of players, legal into a handful of players, like Harvey and others. And so I think we'll see that next set of consolidated verticals happening. So I think that'll be interesting. I can keep going, by the way. I have, like, a bunch of these.
Do you wanna go next? We can alternate. I just did two. Why don't you do two?
Maybe I'll react.
Yeah. Or react.
I'll react, and then I'll give you two predictions. I have to think of my predictions while I'm reacting, so I'm glad I have at least two threads. Yes. I think that the overall sentiment on AI in the investing landscape is a lot of people getting stressed about the amount of capital they have at work, and then just a level of uncertainty around the adoption cycle and technical bets that people are making, that they don't have full first-principles confidence on, coming to roost. So I think, like, any number of exogenous factors plus noise about the speed of adoption, which, by the way, seems blinding overall, and we can talk about what the constraints are.
Not so fast. I don't even know what people are talking about.
I just saw a report from this group called Off Call that talked about adoption of AI by doctors. And, look, there is just amazing adoption across, of course, you know, several different use cases, like documentation and clinical decision support with things like Abridge and OpenEvidence, and obviously the general models. But there's massive enthusiasm from most of the physician profession here. And I'm like, okay, of all of the domains that were professional and considered more conservative, the fact that there is this desire to have things that make work better seems like it'll obviously continue in the other professions.
I think this is, by the way, super under discussed. The people who have tended to be the slowest adopters of technology love AI. That's physicians. That's lawyers. That's certain accounting types.
It's, you know, actually kind of fascinating. It's compliance. You know? All the people who never adopt technology are now adopting this stuff fast. So I do think that's really notable and very under discussed.
It will keep happening. There are actually lots of professions where, like, being able to reason and interact with unstructured data is very useful. Like, I expect that there's gonna be some, like, negative market current. Like, you know, if NVIDIA doesn't overperform by some massive amount one quarter, everybody's gonna freak out. But I think that has very little to do with the fundamental secular change.
Yeah. It has to do with microplastics at NVIDIA. It's my 2¢.
That has to do with microglastics, as you said.
Yeah. That's true. Actually, the silicon there is in the air, I bet. I bet they have microglastics all over the place. It's messed up, Sarah.
It's part of the trade. If you make $20,000,000 as an average NVIDIA employee, then you also have to have microglastics in your blood.
I know. Listen to this, Jensen. Jensen's our next guest. He can't
hear that. One percent microglastics in the blood. I think, you know, a third area is the next set of foundation models are gonna come. And by that, I don't mean the NeoLabs and the next-gen LLMs, which, of course, will happen. I mean physics, materials science, and math progress by models. And I think what'll happen is there'll be one or two cases where it works really well for something.
They'll invent some new material, or there'll be some conjecture proved or something. And then it'll fall into this overstated hype cycle of it's gonna change everything about physical sciences or whatever. And that one off will be overstated, and in the long run, the trend will be understated and will be incredibly important. So that's that's another prediction for next year is there'll be a a couple anecdotal one offs in science that will make people say, look, science is solved, and they'll realize science isn't solved, and then later science will be solved.
Okay. Fine. Three quick predictions for you. One is there's gonna be, like, some collapse of sentiment around a set of robotics companies next year, not because the field, like, actually isn't going to progress, but because, you know, people are beginning to project timelines. Yeah.
And, you know, not everybody is going to deliver on those timelines.
What's your timeline?
I think that we will see humanoid and semi-humanoid robots get deployed at small scale in environments, be they consumer or industrial, next year, and not everything will work. And, like, because there's this, you know, hype cycle around humanoids overall, as soon as something doesn't perfectly work, which it will not, people are gonna freak out. Right? And then there's gonna be some bifurcation in how people invest.
Yeah. I mean, we're, what, fifteen, seventeen years into self driving, something around there. And it's really working now, but it took a long time. So it seems like robotics should have maybe a faster curve, but a similar curve. Right?
It's gonna take some time to figure all this stuff out. And then once it's figured out, it's gonna be really valuable. And the big question for me on robotics, you know, it's interesting. If you look at self driving, there were, like, two dozen, three dozen, whatever, legitimate self driving companies, really good teams and good approaches and all the rest. And then arguably the two biggest winners, at least now, are Waymo and Tesla, which were two incumbents.
Right? Waymo is Google. Tesla is Tesla. So I wonder what will happen in robotics. It feels to me like Optimus or some form of, like, Tesla robot will be one of the winners.
Most likely. Right? High probability. And then the question is, does Waymo just adapt what it's doing for cars to robots as well? Because there's some similar problems there.
Is it some other big industrial company? Is it startups? Like, who are the winners and why? And, structurally, when you have a lot of capital needs but also a lot of hardware and manufacturing needs, that's gonna favor incumbents, as in self driving. Right?
I guess, arguably, the other winners in self driving are Chinese companies. Right? Chinese car companies, which are banned from coming into The US market, and those will probably also be winners in robotics. Right? The most likely global winners in robotics will be some subset of China plus Tesla plus something else.
Right? Maybe maybe one of the startups.
I think that's right, but that's like saying, I think in most industries, like, you know, the incumbents are more likely to win than the startups if you're just looking at it as a numbers game. I don't know. Right?
I don't know. Yeah. I don't know. I don't think so. I think there's startup industries where startups should win, and there's incumbent industries where incumbents should win.
And they have different characteristics in terms of market structure, in terms of capital needs, in terms of certainties of expertise and supply chain. You know? So I do think there are markets where incumbents should definitionally do better. They don't always, but they typically do. And then I think there are markets where startups will do better.
Sure. And I won't argue that, like, in some markets the moats are structurally deeper. Right? But one way that you might look at autonomous vehicles is as one very complex single-use-case robot. And it mostly does locomotion.
It does lots of other necessary types of prediction, defensive driving, whatever else. But it's a single-use-case robot.
Yeah. And we forget there's a lot of good ones like that. Dishwasher is a great single use robot. Vacuum cleaners are great. You know?
Like, there's all these things that we actually have that are robots in the home that we pretend aren't; we forgot that they're robots. Elevators are robots. No, seriously. Escalators are robots.
I'm gonna use the language of, like, for a robot to be a robot, it has to be somewhat intelligent. Right? And so a dishwasher doesn't count; it's an appliance. A self driving car does count as a robot. Not just, like...
Where's that border of intelligence for you?
I I think, like, it's probably some level of generalization. Right? It can work in different environments. It can work on different tasks. It can work on different objects.
Otherwise, so, a car, you know?
Self driving car is okay. Yeah. I don't know. I didn't have that complex of a definition. I just had it as, like, something that will do certain preprogrammed types of labor for you.
Maybe that's... maybe I have a better definition. Let me look up what the definition of a robot is. "A machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer." But, you know, all these things have chips in them now.
Your dishwasher has a chip in it. Right? Has a computer in it.
Okay. Yes. But, like, I would argue that robotics is not an interesting area of innovation without intelligence. And so that's the relevant set for maybe you and me and many people that are looking for something that changes quickly.
Yeah. That's cool. I mean, I do think that, on the topic of robots, the biggest trend perhaps, or one of the biggest trends, of 2026, 100%, will be that self driving will really begin to matter. And that'll be both in terms of your own car and in terms of Waymo and Tesla cabs.
It's gonna be, I think, one of the big things that's talked about next year. I think on the robotics theme, that's the biggie.
I think if you look at all of the potential use cases for robots besides self driving, the Optimus team actually proves this. Like, if you take a model that is powering Tesla self driving and you put it in Optimus, it can do locomotion, but it can't do many other things, like manipulation, and you still have to do the hardware. Right? And so I think that the advantages here are not as strong as you believe they are.
And for some set of startups, the scarier competition is the Chinese, but I do think that there is opportunity here.
Oh, I totally think there's opportunity for startups. Don't misinterpret me. I just think that it's not just the fact that you have a model or a base model, or the expertise to build the model; you also have all the supply chain. And I think that's really important because a lot of the same sensors that you need to use are there, and, you know, how you think about actually procuring and scaling things is there. There's good overlap in terms of some of the other skill sets that are needed, that usually take a long time to build at a startup, or that are a little bit painful to build, and people do it.
It's fine. I mean, Anduril did it, and SpaceX did it. You know, all these companies have done it. It's extra stuff. So that makes sense.
I do think some startups will succeed here. I'm just trying to think through, you know, besides the startups, who's gonna be big. And then, also, I think there are one or two, like, incumbent slots that will just default happen unless something very strange happens. And one could have argued that should have happened in foundation models, where Google should have had a default slot, and in the end it did. It got there.
And I think that was very predictable, that the Google models would get there. I think I may even have written a post about this, like, two, three years ago, that Google would be relevant, right, because they just had all the assets that were needed for them to be a really important foundation model company. They obviously invented Transformers, but they had all the data. They had all the capital. They had TPUs and GPUs.
They had, like, the best people for all sorts of things, or some of the best people. So it felt inevitable, and I think this feels the same to me. That doesn't mean it's right. Do you wanna talk about IPOs and M&A next year? What do you think will happen there? I think that's another big one; that's theme number five.
I guess, you know, three was different types of models, four was robots and self driving, and then five would be IPOs and M and A. What do you think? More IPOs, less IPOs, more M and A, less M and A, different types of M and A?
It depends on whether or not the bottom falls out of the AI market at some point. Right?
But I think, regardless... What do you mean, the bottom falls out? Like, what does that translate into?
I think people just get skittish. You know, the cycle here is, like, what are people scared of? They are concerned
that Robots.
Demand isn't real. No, demand isn't real enough for AI to support the CapEx cycle; that there is systemic risk from people passing the ball around in terms of who is actually responsible for the CapEx build out and these credit agreements, right, or, you know, pay-on-delivery contracts for data centers and for chips. What else are they afraid of?
Afraid of microglastics.
Microglastics AKA, like, too much concentration in NVIDIA and a small number of other players if you're, like, a big public markets investor. You're just like, you know, you
Too much silicon. It's too much silicon.
It's too much silicon. You're damned if you do. You're damned if you don't. I was talking to a friend of mine who runs a large tech hedge fund, and they're already, like, a foundation model investor in, like, multiple significant labs that may or may not go public in the next couple years.
Yeah.
And they're like, okay. Well, the question is, do you buy the IPO? Their game theory on it was like, actually, no matter what I think about it, I have to do it because retail will want it. Mhmm. Because they, like, want to be part of the AI revolution.
And then
if you're a hedge fund, you get benchmarked on annual performance. And because of the retail pop and some set of investors wanting to buy into it as a pure play where you're like, oh, I can't miss it like I missed NVIDIA, then you have to buy it. And so his view was like, buy the IPO regardless of your fundamental view of the company. And I was like, wow. This is not the investing job I know how to do.
Yeah. What do you think happens?
I think there'll definitely be a lot more IPOs next year. I think if one of the main AI companies goes out, it'll probably do extremely well, depending on where they price. Obviously, if they're overly aggressive, it won't. But in general, I think there's so much retail appetite to actually participate in AI besides NVIDIA, and that'll just get a lot of other people to go public as followers. So I do expect there'll be a lot of them if even just one goes out.
And then, also, it's a great way to raise huge amounts of money for some of these labs eventually. So, it'll be interesting to watch what happens there. Any other predictions for '26?
Yeah. I think that I did not believe that we were gonna see that many, like, unique consumer experiences besides, like, ChatGPT. I think we are gonna see, like, a slate of consumer hardware that mostly fails, but I'm so open minded to it. And then, definitely, it remains to be seen if any of these scale, but I am seeing magical experiences of, like, really different consumer agent software that I, like, actually want and will use. And I think people are really beginning to... well, these companies are in stealth right now.
But I do think that, like, there's gonna be a lot more product people that experiment with this, and model companies that experiment with this, next year. And so I'm pretty optimistic about that.
Yeah. I agree with that 100%. And I think the big question is what will end up being a breakout startup, and there'll undoubtedly be some. And then what will be a startup that grows really fast, and then it gets copied by the main labs slash Google, and then it just gets incorporated into the core product. And the interesting thing is, unless a company truly hits escape velocity and builds out a network effect or something else that's really defensible, usually incumbents can launch two, three years later and catch up.
And so if they have the distribution and they have the core product... but, you know, to your point, I think it's very exciting, and I've been waiting for this for a while. Two years ago, three years ago, this guy David Song, who was on my team at the time, ran a two-quarter thing at Stanford where we had different teams apply from the engineering programs there. And it was, like, groups of people building consumer apps using AI. Because we said this wave of AI is so fascinating. Why isn't anybody building anything consumer?
So we basically just gave people free GPUs to go and try stuff. And there was no, like, obligation on their side to do anything with it, you know, in terms of us getting involved. It was just: go do cool stuff, because this is such a good playground. And there were really neat experiences being prototyped. And then I was just shocked that nothing happened for a couple years in terms of, you know, really interesting consumer products.
So I agree with you. There's so much room for that. And I always wonder, is it because there's a different generation of founders who don't wanna work on consumer or have forgotten how? Because, you know, the big consumer companies have kinda aged out. Is it that the incumbents are just too scary?
Is it like, why is there so little innovation actually on the consumer side of AI? I still don't quite understand what the issue is.
Okay. Let's, like, list the reasons. I do think that the incumbents are pretty scary. And anybody who was around for the last generation of interesting consumer ideas saw the ingestion of those ideas into the existing platforms, as you point out. Yeah.
So there's that. I also think, like, the first instinct that I've seen from companies and founders working on, like, new consumer experiences is essentially building better versions of last-generation experiences with this generation's technology. And it ends up, like, not being that interesting. And so I actually think you have to be either quite close to research or pretty creatively ambitious to build something very different that has any chance. And so I think there's just not that many people who have had that experience set or that creativity, and now we're gonna see it.
Yeah. I think it's pretty exciting. The other thing is, I was talking to a really well known consumer founder who's running, you know, a giant public company, and his view is that perhaps in the entire world there's a few hundred great product people for consumer, at least in terms of who are actually working on it. Obviously, there's enormous human potential, and there are people who aren't working in consumer products who could. But of the people working in consumer products, you could list maybe a few hundred people who are exceptional, who could actually come up with and launch their own product that would be interesting or good.
And so you could also just say that maybe there's a limitation on how many of these things can exist, just given human potential within the set of people who are already doing it, which I think is kind of an interesting argument. I don't know if I agree with it, but I thought it was an interesting argument that he made.
I would limit myself to that number if it's also the set of people who, like, have the context of, like, what is possible now.
Mhmm.
If you've got great consumer product instinct, but you're, like, grinding away on the, like, fiftieth iteration of an existing product. Like...
Yeah. Yeah. You're working on the little sub-button in Gmail or whatever instead of actually doing something new. 100%. Yeah.
Cool. Anything else we should talk about or any other big predictions for '26?
I feel like a very big, emergent thing that happened this year was the surprising funding of, like, Neolabs, like, three through eight. What do you think of that? What do you think about alternative architectures? Like, do you have any point of view on, all of the effort around, like, getting reinforcement learning to be more general continual learning, some of the research directions?
You know, I think there's enormous amounts of really interesting research being done. There's a lot of juice to be squeezed out of these models still in different ways, and I think that's really exciting. Ultimately, though, these things become capital games for certain types of approaches or models, because we know scale really matters, which means that eventually you have to collapse into a handful of players, because capital will aggregate to the things that are working the most. They're generating revenue. And so then the question is, what are those things?
At what point do things just get locked in from a usage perspective for whatever reason? And there's all sorts of ways you can imagine this being built over time against some of the models. So I think it's interesting. I think it's exciting. I think we'll see how it plays out.
I think to articulate what, like, the arguments could be for, you know, new research directions: Ilya, you know, did this interview recently where he describes it as the age of research. To paraphrase, he basically says that, yes, I believe in scaling, of course, but, you know, there's some floor of compute, which is not infinite, where we can test ideas at scale. And then if we have, let's say, secret ideas around, like, how to get to more rapid or more compute-efficient improvement, then it actually isn't just a straight resource battle
Mhmm.
Which, like, today does feel a little bit like a rat race. I think the other argument you could make is, actually, multiple architectures, and people have done some research on this, but multiple architectures are really relevant in big domains of usefulness. They just haven't been scaled. Right? And, like, there's enough capital out there to test them, be they, like, diffusion or SSMs or whatever, and that's gonna happen this next year.
And then I think there's, like, a resource-focus argument. Right? If, as Ilya is describing, some set of labs have an enormous amount of compute but have to spend a lot of that compute on inference today, then how much do you spend on your particular research direction, be it self improvement or post training or emotional intelligence or very large scale-out agent stuff?
Yeah. It depends on what you're doing, because the inference is what ends up raising you money to pay for everything else, because you're generating revenue. So, sure, it's effectively a weighted bootstrapping to more and more scale. So I always thought, perhaps incorrectly, I actually probably think it's incorrect, but I always thought that eventually you end up with evolutionary systems as really how you build AI. Because, and maybe I'm over extrapolating from biology, effectively your brain has a series of modules that have different functions or tasks. Right?
You have a visual system that's highly prewired to deal with vision really effectively. You have different areas for thought and learning. You have memory. You have mirror neurons that are involved with empathy. Your brain is actually very specialized in some ways.
Although, obviously, some people are born with, literally, like, half a brain hemisphere, and the brain rewires and sort of covers all the functionality. There are, like, a few famous cases like that. But, you know, fundamentally, you have a lot of stuff that evolves into very specialized tasks. It's almost like an MoE or something. You know?
And the question is the degree to which you recapitulate that as you're doing further development of AI. When do you start just spawning off a bunch of instances of something, have some utility function you're evolving against, and then have some selection and recombining and all the other stuff that you'd kind of do to try and make some of that work, versus how much of it is a more analytical approach, or a more experimental and iterative approach, or, you know, done in a directed way? So I think it's a really interesting question. Because if you look again at biology as a potential precedent, although maybe a very bad one, you look at protein design.
And for a long time, there were these, like, super analytically designed proteins, and then they came up with all these evolutionary systems that just abolished that, you know, like phage display and, like, mutagenic scans and all sorts of things that give you dramatically better results than if you just sat and thought about it. And now, of course, we kind of solved it with AI, where you have all these 3D structural predictions that are actually very good. That was AlphaFold and a few other things that really were breakthroughs there. So it feels like in the context of AI, maybe eventually we end up there as well, right, where you just evolve these systems.
And then that may be a very different type of approach and training. You know, that may be where I think things really have an interesting break. And that's one of the reasons, arguably, people are so focused on code, because code is arguably a bootstrap into moving faster on development of AGI. But I think it's kind of code plus self-evolution that's really the potential interesting approach to get to a really fast liftoff. But maybe not.
Right? We'll see.
What is, the one prediction you have for '26 that has nothing to do with AI?
Do you think about anything else, Sarah?
I do.
I'm joking. Really? I mean, the other thing, by the way, one other prediction that does have to do with AI: I do think defense will accelerate in terms of startups and defense tech and the shift to drone-based systems in general. It's a massive reworking of how you think about war and defense, and I think that's a shift we'll see go even faster this coming year. I think this is accelerating in part due to, you know, how the Trump administration has been approaching it and how the secretary of War and everybody there have been thinking about it.
I think in part you just have enough density now of startups doing interesting things. So I think that's the other thing that's, like, a huge shift that, you know, it's a hype cycle right now, and I actually think, again, it's a little bit underthought about because it's gonna be so big. Outside of AI, I mean, I think there's obviously really interesting things happening in space with SpaceX and Starlink, and in communications and telephony. So that's a big shift. There's really interesting things, in my opinion, happening in energy and mining.
And, you know, I I think there's a lot going on in the world.
I agree on defense, with some, like, concern that, you know, we have to wait for budget to actually shift from contracts to primes to some of these new companies at scale. But the demand, like, the need to be competitive in a world that's increasingly autonomy driven, is, like, so obvious. Right? And I think, you know, hype cycles and booms are good in that they bring a lot of people to the table, you know, capital, founders, people who wanna work in the industry. And so you can make a lot of progress in a short amount of time even if a lot of companies die.
Mhmm. Yeah.
And there's more enthusiasm over a short period of time. So I agree with that, and I also don't think that's necessarily bad. Right?
What's your non-AI prediction?
I'm not the only one, but I think the GLP-1 thing is, despite all of the enthusiasm, still underrated for how much impact it is having. Right? And so I think the continued adoption of these drugs is inexorable. I actually think it creates an interesting path for other peptide and hormone therapies. The fact that it has been so effective has lots of second-order effects, both directly, from people just being a lot less overweight, and from the willingness to look at other engineered peptides. Everybody understands now that delivery matters.
These are really incredible medicines, and I think the impact of that is going to fuel much more investment in anything that looks like that type of opportunity.
I actually think one thing that you mentioned is really interesting: if you look at the biohacking community, there's a lot of peptide use now, different peptides that will do different things. Somebody will have some chronic carpal tunnel thing, and they'll fly to Dubai to get peptides injected or whatever. Usually those are early indicators of potential larger-scale adoption societally. And so I think that's a really interesting trend right now in general, this whole world of peptides and their uses. And is there a Hims of peptides? Like, what's coming there?
So I think that's super interesting. Yeah.
I also think, as you said, the biohacking community, the set of people who were really, really early off-label GLP-1 adopters, interested in longevity, neuromodulation with ultrasound, stem cell injections, for example. That has been a fringe, small community, and I think it's going to get less fringe.
And a lot of these things, ten years ago, traditionally came out of the bodybuilding community. Right? The bodybuilding community was early to creatine and all these things that are more broadly used now, but also other things like sleep aids, magnesium, and all that stuff.
And to round out this year end episode, we've asked some of our friends for their predictions for 2026. I'm so curious.
My prediction for next year is that reasoning systems are going to translate directly to AIs that are much, much more versatile and much, much more robust. Reasoning is going to revolutionize not just language models; it's going to impact every single industry, from biology to self-driving cars to robotics. And so reasoning, I think, is the big, huge breakthrough that is going to transform a lot of different applications and industries.
In 2026, AI will stop being a reactive tool that waits for us to prompt it. Instead, it will become very proactive and get deeply integrated into our work life. It will go where we go, hear what we hear, know what tasks we need to work on, and, in fact, most of the time complete those for us before we even ask it to. It could be a coach that helps us improve our skills. It could be a manager who helps us prioritize our work and manage our time.
In short, it's going to be the best work companion we could wish for.
I think the main AI prediction I have for next year is that context is gonna be the most important part of every single product. And, honestly, one of the best experiences I've had with it so far is memory in ChatGPT. I think there are gonna be a lot more features whose basic goal is to extract the user's intent and put less of the onus on the user to give the model, the system, or the product more and more context. In other words, how do you put the onus on the product to actually extract that from the user, instead of the user having to do all of the work upfront?
My prediction for 2026 is there will be a whole new suite of product experiences that run on much faster inference.
My prediction for 2026 is that we'll finally stop copy-pasting stuff into chat boxes. Instead, I think we're going to have applications that make better use of screen sharing and context management across the sources that matter the most.
One prediction for 2026: there's so much talk of agents right now, and there has been for a while, but no one has truly created a mass-scale consumer agentic AI. I think the models are there today for this to be possible. And in 2026, we will see the group that figures out the right interface, system, and product that creates as big a step function in overall experience as chat did when it first came out. And I think this area is not nearly as ceded to the labs as people assume. It really is anyone's ballgame.
Hello. Aaron here. First of all, I get quite awkward doing selfie videos. This is my ninth take of this video, so I hope it goes okay. But my 2026 prediction would be that this is certainly going to be the continued year, number two, of AI agents, but in particular AI agents in the enterprise, in either deep vertical or domain-specific areas.
I think this is going to be the main way that we actually take all of the progress that we're seeing in AI models and actually deliver them into the enterprise. You have to be able to tie to the workflow of the organization. You have to get access to the data that they have. You have to have the right context engineering to make the agents actually work. And then you have to do the change management that makes the agents effective.
So this is going to be a year where we start to see this pattern emerge more and more, which equally means that we need to ensure we have a lot more happening on agent harnesses. So, shout out to Akhorma, Suhail, and Dex for that answer. But it's definitely going to be the year of the agent harness: seeing how you start to get an order of magnitude improvement on the models' capabilities by having all the right scaffolding around the model. And then, finally, it will be the year of economically useful evals, really starting to figure out how these models end up doing a lot more knowledge worker tasks in the economy.
We're gonna see a lot more of that in 2026. We saw some previews of it this year with APEX and GDPval and a handful of others. We're gonna see way more of that. So those are the predictions, and we'll see you in 2026.
I think 2026 is going to be a very interesting year for American open models. Over the last year, the frontier of open intelligence shifted from America to China, starting with the release of DeepSeek at the end of 2024. American institutions were slow to notice this erosion of American leadership in open intelligence, but I think they've noticed in a big way over the last half year, both at the government level and the enterprise level. There are some really interesting neo labs starting to come out with open intelligence as their directive, and there are a few of these, not just Reflection. These companies are starting to produce some very interesting small open models, and next year I think we'll see the US regaining leadership at the open-weight frontier at the largest scale.
And I'm really excited to see that.
Hey, folks. My prediction for 2026 is that we will see AI become much more politicized. I think we'll see it become a major point of discussion for the 2026 midterm elections. Some people will come out strongly against it, some people will come out strongly supportive of it, and I'm not sure which side is gonna win out.
2025 has marked an incredible year in AI drug discovery. In the past year alone, we've gone from being able to design simple molecules on the computer to designing simple antibodies, and now, most recently, full-length antibodies with drug-like properties, zero-shot, on the computer. If 2025 has been the year of research in AI drug discovery, 2026 will be the year of deployment. The models have finally entered an era where they're becoming really useful for drug discovery. Not only do they make things faster, but they're also allowing us to go after really challenging targets which have been very difficult to reach with traditional techniques.
I'm really excited to see what comes next, because the models show no signs of slowing down.
Okay. My prediction for 2026 is it will be the year that YOLO dies. We will begin transforming ourselves from a you only live once to don't die. I think right now we're kind of a suicidal species. We do very primitive things.
We poison ourselves with what we eat. We design our lives so that we slowly kill ourselves. Companies make profits by making us addicted and miserable. We destroy the only home we have and somehow we celebrate these things as virtue. I think it's all backwards.
And I think one day we'll look back and we'll be pretty astonished that we behaved like this. I think the shift coming is gonna be simple and radical that we say yes to life and no to death. It's simple, but I think it could be in response to AI's progress. And we do this defiantly as a form of unification. I think it does require a lot of courage for us, though, to say we recognize how sacred our existence is.
We don't wanna throw it away, and we want to defend it with every bit of courage and strength we have, because it is so precious. I think it's gonna be the year we end YOLO and the beginning of don't die.
The most striking thing about next year is that the other forms of knowledge work are gonna experience what software engineers are feeling right now, where they went from typing most of their lines of code at the beginning of the year to typing barely any of them at the end of the year. I think of this as the Claude Code experience for all forms of knowledge work. I also think that continual learning probably gets solved in a satisfying way, that we see the first test deployments of home robots, and that software engineering itself goes utterly wild next year.
My prediction for 2026 is that it's the year where everyone's perceptions are flipped. Currently, everyone believes that you can only use NVIDIA outside of Google, and it will become obvious that that's not the case. Currently, about a third of Americans hate AI and think it's really bad. That number will increase. Currently, most Americans think AI is not useful.
That will flip as well. And so everyone's priors will be flipped. That's because the transformative use of AI will be so prevalent, the obvious utility of it so high, that there's no way for anyone's priors to hold. The cognitive dissonance will be wiped away.
Hey. I'm Benjamin Spector. I'm Asher Spector. And our prediction is that 2026 is the year of energy efficient AI.
Data center buildouts are primarily constrained by energy: power availability, grid interconnects, high-voltage equipment, things like that, which is why xAI's Colossus was initially powered by on-site gas turbines.
The thing is, demand for compute is continuing to grow. Labs, neo labs like us, and startups like Cursor have a remarkably insatiable demand for both training and inference compute, and this demand is currently outstripping our ability to bring watts onto the grid. This means that in 2026, it will be really important to squeeze every available bit of intelligence out of every watt.
That said, in the long term, chips probably matter more than power because chips depreciate much more quickly than the underlying power infrastructure.
So, for example, with data center power priced at 10 cents per kilowatt-hour, the chips actually cost an order of magnitude more than the power over a five-year depreciation cycle.
So in 2026, we think intelligence per watt is really important: squeezing as much intelligence as you can out of every unit of energy. But in the long term, we think it's the chips that matter more. Happy holidays. Happy New Year.
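The chips-versus-power claim above is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch: the 10 cents/kWh figure comes from the episode, but the chip price, power draw, and PUE below are illustrative assumptions, not actual figures from any vendor.

```python
# Back-of-envelope check: does an accelerator cost more than the
# electricity it consumes over a five-year depreciation cycle?
# All inputs except PRICE_PER_KWH are assumed for illustration.

CHIP_COST_USD = 30_000   # assumed purchase price of one accelerator
POWER_DRAW_KW = 0.7      # assumed average draw per accelerator
PUE = 1.2                # assumed power usage effectiveness overhead
PRICE_PER_KWH = 0.10     # 10 cents/kWh, as quoted in the episode
HOURS_PER_YEAR = 24 * 365
YEARS = 5

# Total electricity bill attributable to this chip over the cycle
energy_cost = POWER_DRAW_KW * PUE * PRICE_PER_KWH * HOURS_PER_YEAR * YEARS
ratio = CHIP_COST_USD / energy_cost

print(f"5-year electricity cost: ${energy_cost:,.0f}")
print(f"chip cost vs. power cost: {ratio:.1f}x")
```

Under these assumptions the chip costs roughly 8x its five-year electricity bill, which is the shape of the argument: hardware capex, not power opex, dominates over a depreciation cycle, even though power is the binding constraint on how much hardware you can deploy.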
Thanks for the year.
Happy 2026.
Happy 2026, listeners. Thank you. Find us on Twitter at No Priors Pod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen.
That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
The 2026 AI Forecast: Foundation Models, IPOs, and Robotics with Sarah Guo and Elad Gil