BUILT 2 SCALE | AI NEWS | Episode 31 - November 28, 2025
Every week, Matty and Scotty cut through the noise to bring you the AI developments that actually matter: the moves reshaping markets, the strat...
Thought we'd get straight into the news. There's a lot going on. Last week, we were pretty hot on the release date of Gemini 3, which has really caused a ruckus online. It was hitting benchmarks before launch, because they pre-released it to a select few, and then it was topping the benchmarks on day one last week.
But since then, over the course of the week, we've seen multiple tech influencers talking it up. We've had a really good play with it internally, so we'll go through what use cases you and I have dug into. And we're seeing it play out in Google's share price, so once again, shout out to our stock picks. I mean, they've added, what, $2 trillion or something to their market cap in eighteen months? But we're also seeing this Google-versus-NVIDIA thematic come out of it, this TPU-versus-GPU discussion. I thought we could start wherever you want, because there are so many layers to this. There's the Google ecosystem. There's GPU versus TPU.
There's NVIDIA supporting the rest of the ecosystem versus Google. So just tell me where you're at with it, from the hardware, software, model strategy, and share price layers. How are you seeing this play out at the moment?
Yes. I think the interesting part, which you just touched on, is the ecosystem and the hardware. I haven't gone deep on Gemini 3. I've used it a little bit; I'm using it on the phone a bit now.
And it seems like a very capable model, but I haven't been blown away by it. We're going to talk about the potential diminishing returns in these models in a bit. But personally, I didn't use this model and say, wow, it's deserving of all this hype on the internet.
I think where the hype is probably coming from is exactly the ecosystem and the hardware. So just to tee them up at a high level, and I'd love you to add your thoughts: the ecosystem part is that the model is plugged into the whole Google ecosystem, so it's very easy to use other Google products with the model, whether that's on the phone, in Google Docs, or in their coding platform, starting from an idea in Gemini and going directly into coding a web app. They've also got their Nano Banana image generation model, so you see a bunch of infographics on Twitter at the moment that have been generated by Nano Banana Pro, which is tightly integrated into Gemini.
So that's the ecosystem play. And why I think people are starting to realize this is so incredible is that Google was always considered a bit behind on the models, but they had the ecosystem. So if they caught up, well, why would you use anything else if you're already using the Google ecosystem? And now that the model is at parity with, or better than, say, OpenAI or Claude, I think we're finally past the point where people doubted Google could compete here. That's the ecosystem part.
The interesting part about the hardware layer is that they didn't train this on NVIDIA GPUs. They trained it entirely on their own AI chips, called TPUs, which they've been working on for about ten years now. So they've cut off that reliance on NVIDIA, which means they can do their own thing. They're fully integrated, and they've proven they can train a world-class model on their own chips.
These TPUs are, I think, a bit more specialized for certain AI training than Jensen's GPUs, which are more versatile across different workloads. But the most interesting thing is people talking about what this means for the cost of compute, and how Google will essentially be able to outcompete on a cost basis, a cost per token, because they don't have the reliance on NVIDIA. So this is going to start eating into NVIDIA's margins if they want to keep playing. And Google, as such a profitable company in other areas, can afford to make no money on this part of their business if it means keeping people in their ecosystem. They're just going to keep driving the price down.
Great for the consumer, but it does start to look like a warning sign for someone like NVIDIA, and for the people who rely on NVIDIA's chips, if Google has its own vertically integrated ecosystem that it can train on and then deliver inference on. So those two parts, beyond just what the model is capable of, are the most interesting to me.
Yeah, it's really interesting, isn't it? I see this as very similar to the Google-versus-Apple battle on phones, Android versus Apple's OS. I think we're going to see something similar play out.
There's always room for two players in probably any market. We're seeing Jeff Bezos starting to roll out his own version of Starlink this week, now that Blue Origin can land a rocket, and I'm sure that business is going to be a success. But yeah, I think the different chip architecture is actually the most interesting element, because this race for tokens per watt is the only race that truly matters in AI at the moment. Everything else is just a layer on top.
However, I think the feedback loop and the iteration speed of NVIDIA, and how tightly wound they are on this stuff, means this will have spurred them on even more. And to leave NVIDIA is such a big play for all these other frontier models, businesses, and hyperscalers. Do you really want to go across to Google, who you're actually competing with as a hyperscaler? You're probably not going to leave NVIDIA that easily. I think they're pretty sticky.
So I think there's room for both. In the end, it's great for Google, but I don't think it harms NVIDIA, if that makes sense. Google is a rock-star business, one of the greatest businesses of all time, which has somehow managed to double its market cap in the last two years, which is unbelievable. In theory, AI was meant to kill search, and instead they've pivoted and done what they've done at the hardware and software layers. So that's my thought on NVIDIA versus Google.
One of the problems people are talking about with NVIDIA is that NVIDIA is offering to rent back GPUs from its own customers. Because of all this depreciation talk, and because they keep updating and iterating at such a pace, customers are thinking, well, we're going to have surplus GPUs, and NVIDIA has then got orders to rent back its own product. But I think there's a secondary market for compute, not led by the frontier models, where NVIDIA can still profit from leasing its own product back, because it's still so far ahead of every other product. So I don't see any problems with NVIDIA's strategy, and, you know, their quarterly earnings again beat everyone's expectations and kept the bubble going in the market. They're pretty much holding the world's financial markets together at the moment. So at that layer, there's room for both, and both are doing great.
In terms of Gemini, what we noticed is we've been mucking around vibe-coding some internal apps, almost that on-demand type of software built internally. And we still see it as a prototyping tool, something you take to a software developer when you want to deploy anything of substance.
However, with Gemini 3 Pro, we started from scratch and, on day one, made an app that was better than what we'd been building on Lovable and Replit for the last two months. And that made me think, you know how people always say one company's update is the death of some other company? If Google has got that UI right and the fine-tuning right, then how do Lovable and those other vibe-coding companies maintain their valuations, when I don't feel like I have a reason to go there anymore?
Yeah, it's going to be really hard for them. We've talked about this briefly before, but I think the best strategy, if you're built on top of these models and competing against the frontier model companies at the application layer, is to niche down. You've got to offer niche experiences, workflows you've created that make things slightly more convenient or slightly better than using the horizontal players like Google and OpenAI. They are building products to serve billions of users.
So they can't go too niche into each business vertical or each type of consumer. So, let's take Lovable. Maybe Lovable becomes the platform that makes it really easy for kids to start coding, and they go and get Lovable into all of the schools, and that's how you learn, because it's not as complicated as using the Google ecosystem. Something like that, I think, is what people need to do. Or, you know, maybe bolt.new.
That one is really good for marketers; it can really quickly spin up marketing websites. It's trained on how to do that specifically, and it gets a 10% edge on the generic products. And that 10% is enough for someone to say, yep, I'm going to use that product.
I think you've got to niche down. Otherwise, if your product is exactly the same as what Google is doing, and Google is offering theirs as part of their bundle, you get bundled out, just like Microsoft did to products like Zoom with their Teams offering.
Yeah. And I think you can have a regular company competing in this space, but once you're up at the Lovables and Cursors of the world, once you're up at their valuations in the billions, that's the worry for me. Do you know what I mean? You can have a company that keeps a hundred people happy or something like that. But once your valuation is 100 times revenue, I just don't understand how you make that work. What's stopping Google, if Gemini 3 is the best model, from making commercial use of that model super expensive and pricing you out at the model layer or the compute layer, because they want to attack that market themselves?
So that was my first takeaway: it's a real leap for the average Joe to just start playing around with whatever they want. For everyone out there who wants a really low barrier to entry to muck around trying to create an app, Gemini 3 Pro is great for that. The other thing, in my space, is that it's starting to really understand 3D context and geometry, and to level up its output: here's a sketch, make it a 3D render; here's a photo, give me a floor plan. I mean, imagine what it's being trained on.
Even for this podcast, it's the first model that's actually watching YouTube clips. It's not reading the transcript; it's watching. So if you and I wanted, we could say: go back and watch the last 30 episodes and give us feedback not just on our words, but on how we behave, how we move, the setting, the theme. All the other models up to this point have just been pulling the transcripts down from YouTube and summarizing those.
So I just think their training dataset is unbelievable. And from a use-case angle, we're starting to see more and more: you grab something off Google Street View, turn it into a 3D model, and start looking at what you could do with that site as a developer, in 3D. Then you export that straight into a Revit file, an Autodesk file, a CAD file, and now you're designing off a Google Street View image. They've got their tentacles everywhere.
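For anyone who wants to try that "watch the video, don't just read the transcript" workflow, here's a minimal sketch using Google's google-generativeai Python SDK. The file name and model ID are illustrative assumptions; video uploads go through the SDK's Files API and are processed server-side before you can prompt against them.

```python
# Minimal sketch: asking Gemini to "watch" a video instead of reading a
# transcript, via the google-generativeai SDK's Files API.
# Assumptions: episode_31.mp4 is a hypothetical local recording, and
# "gemini-1.5-pro" stands in for whichever video-capable model you use.
import time

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the raw video; the service must finish processing it first.
video = genai.upload_file(path="episode_31.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    video,
    "Give the hosts feedback not just on what they say, but on their "
    "body language, pacing, and the set design.",
])
print(response.text)
```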
So I think there are some genuine upgrades. And it probably leads to the next conversation, because this week we've also had Claude Opus 4.5 released, which is up at that level again benchmark-wise on coding. So maybe we should talk strategy: is there room for all these models? What are the different strategies of OpenAI, Google, Anthropic, and xAI? Is there room for everyone?
And as a consumer, do you need to have six models and use different ones for different things? Where are you sitting with all this at the moment?
Yeah. I think it depends on where you sit on that innovation curve. Are you an early adopter who loves using these things and wants the latest and greatest? That's probably the category you and I fit into, and we're happy to have six models sitting there on our phones. I've got a whole page of apps for that.
So there are always going to be those types of users. But I think what needs to happen, and this must exist, I'm surprised one hasn't taken over the market yet, is this: all of these models have APIs, right? You sign up, you get an API key, and you can plug into them. So why hasn't there been an app built that understands which models are good for which types of things and, based on your request, farms it out to different models? Either it chooses one, or it sends different parts of the request to different models. Or it says: I don't actually know which one to go to, so I'm going to go to all six, get a response from each, analyze the responses, and give that back to the user. It feels like there must be something in that for the average Joe who says, I want to have the best, and I'm happy to pay up for it, $100 or $200 a month.
I just want access to the latest models and the best responses, but I don't want to have to think about which one to use at which time. You have one app that gives you visibility of what it's doing, going off to this model for that, this model for this, but it can all be abstracted away, and the user just gets their response.
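As a rough illustration, here's a toy "fan out, review, distill" loop in Python. Everything in it is hypothetical scaffolding: call_model() stands in for whichever vendor SDKs (OpenAI, Anthropic, Google, xAI) you would actually wire up, and the model labels are placeholders.

```python
# Toy sketch of the multi-model "router" idea: send one question to
# several models, let each critique the anonymized answers, then have a
# "chair" model distill everything into a single response.

MODELS = ["gemini", "gpt", "claude", "grok"]  # hypothetical labels


def call_model(model: str, prompt: str) -> str:
    # Hypothetical stub; replace with the real SDK call for each vendor.
    return f"[{model}] answer to: {prompt[:50]}..."


def council(question: str, chair: str = "gemini") -> str:
    # Round 1: every model answers independently.
    answers = [call_model(m, question) for m in MODELS]
    bundle = "\n\n".join(f"Answer {i + 1}:\n{a}" for i, a in enumerate(answers))

    # Round 2: every model reviews the anonymized answers of the group.
    reviews = [
        call_model(m, f"Rank these answers to '{question}':\n\n{bundle}")
        for m in MODELS
    ]

    # Round 3: the chair distills answers plus reviews into one response.
    material = bundle + "\n\nReviews:\n\n" + "\n\n".join(reviews)
    return call_model(
        chair,
        f"Using the answers and reviews below, write the single best "
        f"answer to '{question}':\n\n{material}",
    )


if __name__ == "__main__":
    print(council("Which model should handle a floor-plan sketch?"))
```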
Yeah. I think Andrej Karpathy built something like that for himself last week, where he put them all together and called it a council of models. He had them all give their answers, then review each other's answers, and then distill it all into the best answer, which was really cool. And I think a bunch of other people tried to copy it. I don't know if he released a paper or a how-to so everyone could build it for themselves.
But it is a great idea, isn't it? So, yeah, I think he called it a council of models or something. And we are going to see this, because everyone loves to put a layer on top. It's almost like, what's it called? A platform, an aggregator? Like when everyone wanted to do a Flight Centre, because you've got all these different airline companies, so it's: how can we be the layer above that?
The layer above that, yeah. So I think that's a really interesting use case. For me, like you said, yes, you're going to have the early adopters who want the breadth. But I think ChatGPT and Google seem to have distinguished themselves as the two consumer choices that can do most things.
Anthropic is all-in on coding. And I suppose that's the opposite strategy: do one thing and do it the best in the world. Everyone believes the path to AGI is through coding, so having the flywheel of the best coding model in the world, picking up the best data, and focusing all your compute on this one problem, if you believe it's a race to AGI, is potentially an excellent strategy as well.
Yeah. And interestingly, I think in the pursuit of being good at coding, they've actually got really good at creative writing, at creative pursuits, even at writing emails. I find that Claude is still the best; I find it better than GPT and Grok. And I think those two things are actually very closely aligned.
As Andrej Karpathy says, English is the new programming language. You essentially just have to write good prompts, and a model that's very good at language in general does actually help with coding. But to your point, they have focused on training on the right datasets and building that intelligence into the coding, rather than coding just being one thing that sits off to the side of a really generic large model.
Yeah. And I suppose every fortnight it seems like we get "this is the step towards AGI" with the newest model, and how big a leap we've made. And then we get cold water poured on it by the world's best AI researchers. Usually it's Andrej Karpathy, and this week it's Ilya Sutskever, previously at OpenAI, now at Safe Superintelligence.
He's basically said the age of scaling is over: that just throwing more money, compute, and training data at the problem is probably done, and he believes we're now entering an age of research. Now, a guy who's basically running a research lab with a $10 billion valuation is potentially talking his own book there. But we do consistently hear the same thing from the people we talk about a lot here: Yann LeCun, another researcher; the Andrej Karpathys of the world; and even Demis Hassabis from DeepMind.
So you've got probably the leading AI minds of the last decade all saying the same thing. And then you've got the CEOs, Elon and Sam, saying, ah, you don't know what you're talking about, it's twelve months away. What do you think?
I think a couple of things.
The first thing is, I think we need to ask ourselves: does the definition of AGI really matter? Can we all just accept that AI is going to get better and better, that a lot of the typical human work we do today is going to be replaced, that things are going to change, but that we're still humans who need human interaction, who do human things, and who remain the predominant species alongside all these AIs? If we can accept that that's happening, does pinpointing when AGI arrives really matter? It's going to happen at some point. We're moving towards it.
So I just don't know if it's even the right question to be asking. And it seems that the smartest people in this space can't even agree on what AGI means or when it's coming. So, honestly, I don't think about that too much. So I think the...
What are your thoughts on the research element of this, though? You've got the smartest minds of the last ten years all agreeing that the next breakthrough is a research one, not just throwing compute and scaling laws at the problem.
Yep, I agree with that. And you don't have to be an AI genius to see it; intuition is a very powerful thing. If I asked you, hey, over the last couple of years of using AI, has the pace of innovation sped up or slowed down?
If you look at the jumps in the early days, from GPT-3 to 3.5 to 4, those were huge jumps. Just phenomenal. And it does feel like that's starting to slow down. The models are so good already that when a new one lands, it's like, okay, this model is a little bit better at this. Some of the image-gen stuff, yeah, that has gotten a lot better recently, but these are still incremental improvements. So it makes sense to say that the innovation from scaling is tapering off and that we are running out of data. I would agree that until we have a breakthrough in how we design LLMs, we're probably not going to see a huge jump into the next layer of intelligence. And if you define that as superintelligence or AGI, then sure, that's the question we can ask. But if I agree with that sentiment, and I hate to say it, Scotty, it sounds like you do too, then we're starting to have a little bit of respect for our man Yann.
Not so fast. No, no, absolutely, we've always respected Yann, but it was just comical that he was working for Mark Zuckerberg and a bunch of 15-year-old kids from Scale.
But no, look, I think there have always been layers to this. There's only so much you can get out of an LLM that's producing the next token, the next word.
But you've got the compute layer, which is, you know, every time Jensen puts his mind to it; the start of this revolution was the GPU. So there's the tokens-per-watt layer: how can you bring down the cost and energy required per token, per unit of intelligence? Then there's the data layer, and I think this is the key context.
All the data that's gone into everything so far has basically been 2D. It's off a screen. It's words, even when it's YouTube clips. I think the next frontier is real-world data, with IoT and sensors and spatial intelligence, because most of the real applications in life are not behind a screen.
We're seeing it with reusable rockets: once you have a real-world application, it bleeds into everyone's lives and has a higher impact, I believe, and that's what we're going to see with humanoid robots. So I think the next key factors are reducing the amount of energy required for compute, and then a heap of capital, intelligence, and research going into real-world spatial intelligence and physical applications. That's my thought on it, and I have absolutely no prediction on how long it will take.
I think it's probably a bit frothy at the moment. You can say some nice words and raise some capital, but it seems like a super hard problem, one people aren't going to be able to solve until maybe the compute, intelligence, and energy layers are resolved enough to really move forward.
Yeah. And I think the reason that's hard to predict is because we had a breakthrough with LLMs. OpenAI basically made the bet that if we keep pushing the limits of training, we might see some improvements, and, as Ilya says, the improvements were way more than any of us expected.
So it probably feels like the whole AI community has progressed light years, but the reality is that one technology has. If we're saying that technology has hit its limit, does it actually move us forward? Can we use that tech to get to the next step? Or are we going back to saying: okay, we tried that path, but now we're back where we were with the researchers, and we're actually no further ahead in getting to superintelligence, because that path wasn't it?
That's the question I would ask: how much can we use the current tech we've built to take us to the next step, versus saying, okay, we tried that, what's the next thing we need to try?
Yeah. And for OpenAI's standing, not many companies have 900 million monthly users or whatever it is, and yet you could say they're on shaky ground. I think they really need to lean into memory, taste, UI, and consumer applications. We've talked about payments and shopping and all that kind of stuff.
Because when you ask how far LLMs can go, they're making the bet that they can get into everything in your life. But that's a heap of product they have to build to stay at that level and to make their valuation make sense. So once again, I think we're all the beneficiaries of this, but I am hearing a convergence of ideas from the smartest people in the world in this space, saying it's time to get our heads back in the books, not just keep talking up billions and billions of dollars of spending on GPUs and pushing LLMs. An interesting pivot in the conversation.
Yeah. And look, the only catch to that is the don't-bet-against-Elon argument, which has proven to be very wise over the years. He's saying that Grok is going to become superintelligent, or get to AGI, in the next couple of years. That would be the counter. He obviously thinks there's something in what they're doing with Grok.
I don't know if that's just LLMs or if they're working on different architectures. Obviously Tesla has its own AI chips, and they work with real-world physical intelligence. Maybe there's, I don't know, a cat in the bag that they're going to unleash soon. But are you...
Well, we talked about the constraints. Real-world data: Tesla. Energy: Tesla. And compute: he's trying to build his own chips with Samsung now, isn't he?
So in fairness, if anyone's going to do it, he's solving all three of the layers we spoke about. Obviously in a pretty good spot. Now, the other conversation I wanted to have around this: Marc Andreessen was saying that every interesting technology company in the West is located in Silicon Valley. He's saying 100%.
I wanted to have a chat with you about this. You've obviously received funding from the US and other parts of the world, and I remember you saying your investors told you that eventually you'd have to come over. You then made the call to go. Maybe talk through your decision on where you landed, in Austin; your thoughts on trying to do this from somewhere like Australia versus somewhere in the US that's not Silicon Valley; how you made your bet; and what you think of Marc's statement, whether he's just talking his own book because he's in San Francisco and most of the companies he's invested in are there, or whether there's something to the physical ecosystem of Silicon Valley.
Yeah. So with Australia, there's enough talent here, well, sorry, I should say there, now that I'm in the US, to build a really amazing company. Australia is constrained by the capital markets.
There aren't enough investors willing to invest at the risk tolerance the US does. That's the main problem with Australia. You could argue that some of the political decisions made over the last few years are holding us back too. So it's the capital constraint, and it's the size of the market. If you're not global from day one in Australia, you need to get into other markets to become a global company.
Now, some companies can sell into other parts of the world from anywhere. If you're a product-led company or you're selling to consumers, you can sell anywhere; people just download your app from the App Store if you put it there. But if you're doing B2B and selling into real-world industries like construction or health care, you've basically got to have a presence where you're selling. So that's why we moved to the US.
It was more for the capital and the size of the market. Now, to the question of San Francisco versus New York versus Austin versus maybe Denver, those are the four people pick. If you want to hire the best AI engineers in the US, yes, you go to San Francisco. That is correct.
That's where they are, that's where they're hanging out, and that's where the cutting-edge technology is being created. But if you want to keep hiring the best engineers from wherever you've come from, whether that's Europe, Australia, or the Middle East, and you're not going to hire engineers in the US, there's no point being in San Francisco, in my opinion. You're just competing for talent, you're paying more, and all of your staff have a higher cost of living.
If you want to hire a go-to-market team, sales and marketing, you can do that from other hubs: New York, Denver, Austin. And to Marc's comment about the most interesting companies being in San Francisco: if you are on the cutting edge of AI, then I agree with that.
But what I don't agree with is that you have to be on the cutting edge of AI to be an interesting company, because some of the highest-valued companies are using AI, using the foundation models, in really smart ways to deliver value to customers, and are growing really fast and becoming really big companies that way. So I don't think you have to be in San Francisco to be successful. But if you're a certain type of company, if you want to build a new LLM from scratch or you're working on AI chips, yeah, you're going to have a much better time in SF.
When you have a look at the nationality of the talent coming through in AI, a large percentage of it is Chinese or Indian. Now, they might be going to American schools and then getting American jobs on H-1B visas and all that kind of stuff. But their cost of living in San Francisco is something like five times that of China or India. So what's to stop you having a small CEO, marketing, and sales office in Silicon Valley and running your tech out of Shenzhen or somewhere in India?
And then, similarly, a European office to enter that market, maybe with some salespeople there. What are your thoughts on distributed talent versus setting up in one spot? Obviously there are complications with time zones and the like that you're probably seeing and I haven't had to encounter. But do you think we're going to enter an era of distributed teams, with, say, engineers in one city in China, your CTO sent over there, and the CEO in Silicon Valley talking up capital and sales in the American market? What do you think?
I think what people in SF would argue is that AI is moving at such a pace that there's a very concentrated pool of talent being developed, and people are learning off each other. The talent there moves around between the companies, and the only way to spin up a competitor to, say, an OpenAI is to rip the talent out of there. So you've got this very concentrated density of very few people who really know where the cutting edge is. That's what they'd argue: that things are changing that fast. The counter is the open-source models coming out of China.
Someone's building those, and they're pretty cutting-edge too, so there are obviously talented people in China working at at least a similar level of innovation. You would think, then, that there's no reason you couldn't have distributed teams with epicenters of innovation in India, in China, in Eastern Europe, or wherever it might be. But running global teams is hard. The Australia-to-US time zone sucks, and Australia to the UK is even worse, by the way.
That's it.
Oh, Australia to the UK. I've had two long-distance relationships with the UK, because I lived there for a while, and you can't do anything. When you try to align calendars between Europe and Australia, there is actually no business-hours overlap. It's only after hours.
And then trying to get in the same mood as someone at the opposite end of the day from you, that twelve-hour offset, is really hard. You're energetic, they're tired; you're sleeping, they're sleeping. That's a really hard thing to deal with.
Yeah, totally. I think the only way to solve for that is espresso martinis. So coffees for the morning.
At both ends of the day. Yeah, don't mind that.
But what about, say, augmented reality? Can we bring in the office environment? A 2D screen probably doesn't solve for it, but as AR and things like that get better, can you build a culture in a distributed team, where you can really have awareness and be a bit more intimate with a team on the other side of the world? At the moment there's this return to the office: get a team together and move quickly.
But do you think that's still a bit head-in-the-sand about where the tech's going? And...
No, I just don't see it. The technology is pretty good; people don't want to use it. No one wants to put a headset on and live in a virtual world.
We kind of proved that with Meta's failure in the metaverse. When it comes to company building, I don't think the hybrid setup works that well. I think you either need to be fully remote from the start, with all of your systems and processes completely async, all of your rituals built around operating across many time zones and getting the best talent in all of them, and then come together as a team for real human interaction once a quarter or once a year, so you know the people you're working with. Or you need in-person offices where people come in most of the time and can work around each other.
The hybrid idea was sort of: we're going to have an office, but it's only going to be a tenth full because people come and go as they please. And we're not really that good at async communication, because sometimes we have in-person meetings and then have to record a video for the people working from home. I think that's too hard. On your point about augmented reality: even if you could put someone indistinguishable from a human on the other side of the table, I don't think we're built to interact with that person. I think you want to be able to rub shoulders with someone at the pub after work, have a beer with them, and have that human interaction.
I think that's really important. So, yeah, I don't see it happening even if the tech gets there, but that's maybe a bit of a controversial opinion.
No, I think that's fair. I think the counter to that isn't big teams. The counter is a truly AI-native company with a founder who says, I want the best marketing person in the world in my space, and you start seeing these one-person departments with agents working for them.
There's not that much cross-collaboration required on a daily basis. It's someone building one-person departments that are the best in the world in their space. They might be in Ukraine: all right, you're doing that.
You're the CTO, and they're running agents, not staff members. So potentially we get some really small, super-agentic, super-talented teams from around the globe that don't need what you're describing, where everyone learns at the same time and everyone interacts and works together. That isn't really working together; it's owning a department and delivering a result.
So that's probably the only counter I see to that at the moment.