Professor Luis Garicano isn’t your usual academic economist. Academically, his theories have heavily influenced how modern economists understand the structure of firms and the labor market. But his in...
As long as the AI needs your supervision because it makes lots of mistakes, then the bottleneck's the human. I think Daron Acemoglu has an excessive optimism about two aspects of this. We are in a game-theoretical situation between China and The US. I don't think the possibility of slowing things down exists. The second is the "we."
He says we can direct technology. But who is "we" here? Is "we" China, is it The US, is it firms, is it workers, is it lawyers, is it truck drivers? Who is "we"? You have the superstar effect.
A very good AI programmer with lots of AI can have enormous leverage and can reach a very large market size. Every single thing tells you the GDPR has been bad for EU business. And now we're adding the EU AI Act. Part of the risk is you try to control the technology, and you end up without technology.
Hi. I'm Anson. I'm a researcher at Epoch AI. Today, I'm joined by my cohost, Andre Falodria, who is an assistant professor at the University of Edinburgh. And I'm also joined by Luis Garicano, who is a professor at LSE studying economics.
Luis, thanks for coming on the podcast.
That's my pleasure. It's really great to be here.
So I'd like to start with explosive growth very briefly. One thing that we briefly discussed on Twitter was whether or not we're likely to see a massive acceleration in gross world product growth rates. And one point that I think is somewhat underrated by economists is that if we look at the last two hundred years, growth has been exponential, but if we look much longer throughout history, it seems like there has been an acceleration. So shouldn't we think that accelerations in growth aren't that implausible after
all? The probability that we get a very large acceleration of growth exists. I am not going to dismiss that, and you guys were arguing that that was potentially the case with your task-based model. My view was that there were several things that are likely to make that take a long time or slow it down. The first obstacle I was pointing out, and in R&D for example it's very clear: you can develop as many new ideas as you want for proteins, for biotech, for solutions to biological problems, but if you don't manage to get them approved by the FDA, you don't have a medicine.
And if you don't get doctors to use it and you don't get people to learn it, there are a lot of bottlenecks that slow things down. So, that was my first objection: people in Silicon Valley are only observing the very best application of the technology, which is coding, and by extrapolating simply from which tasks we have and how many tasks we are performing, they run the risk of overestimating how easy it is for organizations and institutions to accommodate these changes. So, just a question.
I think we're kind of on the same page that sustained explosive growth is perhaps not that plausible. But what about an explosive growth spurt, a shorter-run thing where you have, I don't know, five, ten years of much faster growth than we've recently experienced? Just because we start from an initial condition where AI seems to be good at exactly lots of things that humans are bad at, so you start with this high-productivity sector being initially relatively large. Could we have that?
I think so. I am an optimist about AI in spite of our disagreement on that. Unlike people like Daron Acemoglu, or others whose models don't predict large growth spurts even in the longer run, not just over ten years, I do believe we will have them. I think the good way to see it is through what I consider the key distinction, between autonomous and non-autonomous AI: as long as the AI needs your supervision because it makes lots of mistakes, then the bottleneck is the human.
And the human is not improving much. I mean, yes, the AI is helping the human do things a little bit faster, but the human is bottlenecked by their own time. And so with the AI it's: okay, I'm a better lawyer, I'm doing better at my tasks. But that's just an incremental difference.
The moment you get the AI lawyer, the moment the AI becomes autonomous, I think there you get a jump, a discrete jump. So, we could easily have a situation where we see very small steps where the AI is helping us and we're doing a little bit better. Think of the Brynjolfsson customer support chatbots. There, the chatbot is helping the juniors be better customer support agents: it suggests answers, the juniors use them, but it's still the junior doing it. We know, because the paper was published in '25 but the experiment is from a little bit before, that the chatbot is precisely one of the areas where it's likely, and in fact we are already seeing it in some of the data, that the humans can be removed earlier from the production function. Because at the end of the day, there is a set of questions that are relatively repeated and common, and then you can do a lot of the customer service fast, reliably, etcetera. And you could always have a layer, like in knowledge-hierarchies-type work, where basically you have the routine tasks done by some agents and the exceptions done by experts.
That's kind of how stuff is produced: the high-value tasks are done by the high-level consultant, and the entry-level analyst does the routine jobs. You could still have that layer of people who get big leverage if all of these more junior tasks get replaced, and you get that big spurt that you're expecting. So, it could easily be that we are all thinking, oh, nothing's happening, nothing's happening, nothing's happening, and then boom, something happens in one particular profession. Or something major, like the type of spurt that you're mentioning.
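A minimal sketch of the knowledge-hierarchy leverage being described here, in the spirit of Garicano-style hierarchy models; the notation below is supplied for illustration and is not from the episode:

```latex
% Sketch (supplied notation): problems arrive with difficulty z ~ F.
% The routine layer (junior humans or AI agents) solves any z <= z*;
% the fraction 1 - F(z*) is escalated to an expert, who spends h units
% of time per escalated problem. One expert can then support
n \;=\; \frac{1}{h\,\bigl(1 - F(z^{*})\bigr)}
% front-line agents. If AI raises z*, handling more of the routine
% cases, then 1 - F(z*) shrinks and n, the expert's leverage, grows,
% which is the "big leverage" for the remaining experts described above.
```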
Yeah, so I guess we're all working in this kind of long run macro way of thinking about the effects of AI, but what about the short run macro of it? So what would we expect to happen to things like unemployment, inflation?
I think the short-run macro is a problematic one. Let's hold this thought experiment in our minds: we have two sectors, say sectors A and B, and sector A basically gets produced for free. So, the price of sector A is zero. The short-run effects are that you need to reallocate the labor and the capital to sector B. Now, the first thing that is clear, that I think we will all agree on, is that welfare is going to improve. Say, for example, sector A is medical services and legal services. This is autonomous AI.
Medical and legal services get a zero price. Now, first, huge increase in consumer surplus. Fantastic, right? All my illnesses, I can diagnose myself; all my legal problems, you know, I need to buy a house, the AI does it, you sign it, it all goes on the chain, the crypto chain, all automatic, perfect, okay? So, fine, consumer surplus goes up, but what happens to GDP and what happens to employment? Let's talk about the short run.
Let's say that you need a neurosurgeon, in this crazy example, though sector A can be anything, to become somebody in sector B who is maybe a plumber, just to make the extreme example clear to our listeners. Then you have somebody with very specific human capital that has been completely depreciated, who is used to earning several hundred thousand dollars, and who now has to start working in a new sector where, I mean, I don't think any of their human capital is going to be very valuable. The capital, the machines, all the things that were complementary with the lawyer or the doctor are useless; we need to depreciate them, we need to redeploy them. In sector B we have an increase in demand, and eventually an increase in supply; in the short run, only the increase in demand, while the supply is reassigning itself.
It's really hard to get these machines to be useful there. So in the short run, I would imagine that prices in sector B are going to go up, but in the long run, I don't know. I wouldn't talk about this as inflation. This is a change in relative prices in sector B. I mean, we could have deflation if all of these people are unemployed, etcetera.
But when it's a price shock, I am kind of reluctant to talk about inflation. It's really just a price shock: all of those skills and all of that capital are worth nothing, and the new sector has to accommodate this extra demand and this extra labor and capital. That's how I would see this situation. Obviously, the problem with my scenario is that the very short run completely contradicts it: the lawyers will get the bar association to say it's illegal to sell your house without a lawyer signing, and the doctors will get the medical association to do the same.
But one of the intuitions I had, and I struggle to reconcile it in my head, is: you have this situation where in sector A productivity has gone nuts and the price is almost zero, but wouldn't we actually be worried that in the short run we'd have a recession? I mean, all of these people would be worried about their jobs and would stop spending, so there's this demand-side thing happening in the short run. How do we reconcile those?
That's why I said deflation, if you want to call that price shock that. Because in this first sector there is a lot of consumer surplus, but in terms of actual GDP, we have the price of sector one times the quantity in sector one, plus the price in sector two times the quantity in sector two. The price in sector one is zero by assumption. So, that part of GDP has fallen off a cliff, and that capital and labor are unemployed. So, yes, I think the short-run effect, until you get this reallocation, is a big increase in welfare, probably. A lot of people are very happy: you're in Ghana and you don't have access to good medical services in some rural village, and you suddenly can just get a doctor, an AI doctor; that's great.
But that increase in welfare doesn't necessarily translate into a GDP increase, and definitely a lot of those people who have to be reassigned could end up in long-term unemployment, because many of them, depending on what their old skills were, might find it very hard to adjust to the new world.
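For concreteness, the two-sector accounting behind this discussion can be written out as follows (notation supplied here, not from the episode):

```latex
\mathrm{GDP} \;=\; p_A x_A \;+\; p_B x_B
% If autonomous AI drives sector A's price to zero (p_A -> 0), the whole
% p_A x_A term drops out of measured GDP even though consumption of x_A
% and consumer surplus rise. Welfare can jump while measured GDP falls
% and sector-A labor and capital wait to be reallocated to sector B.
```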
So, one thing that I also wonder about: part of what we're getting at with this signing thing is the distributional consequences of these potential shocks, right? And here I sense a little bit of tension, both when I read the news about the entry-level job market and what's happening to it, and when I read papers worrying about de-skilling. On one level, we expect AI to be bad for entry-level workers and less skilled workers, at least less skilled workers within skilled professions. On another level, we're worrying about this de-skilling. So, will AI be good for less skilled workers within skilled professions, or bad for them? How do we think about that question?
So, it's a great question and one that is really being played out right now. I joked at a conference on AI at Stanford a few weeks back about Brynjolfsson versus Brynjolfsson. So there is a Stanford...
That sounds pretty problematic.

Yeah, but I think we can reconcile it. There is a Stanford economist who's had two really important papers. One is the one I was referring to before, in the Quarterly Journal of Economics earlier this year, on AI chatbot assistance to customer service support agents. And indeed, he finds big increases in the productivity of the most junior ones. Because basically, you get into the job and you already get a tool that allows you to solve most of the problems.
They actually get trained faster, too. They seem to learn faster: when you turn the tool off, they seem to have picked up stuff. So, on all dimensions they provide more quality, the clients are happier, etcetera. The most junior of them are helped.
So that is one field experiment. There's another interesting field experiment, with software developers, that goes in the same direction; it finds some gigantic increase in productivity, maybe twenty-something percent, and it's from August this year. So, it says: look, we gave these tools to three companies and we saw the software developers increase productivity a lot, particularly the junior ones. So, that's one side. That's like, okay, it's not de-skilling.
Then, when we look at the aggregate data, two very recent papers, one by Erik Brynjolfsson and co-authors, find something very different. Now, this is not in the big macro data; the Fed economists haven't found it there, etcetera. These are not the big shocks that we would have expected in '22. But we do see, let me tell you, two findings. One, from early September, is this paper by Lichtinger and a co-author, Hosseini.
This paper is called "Seniority-Biased Technological Change." Basically, using data on something like 62 million workers, so it's really very, very significant, it finds that in the AI-exposed occupations you don't see anything bad happen to senior employment; you see it growing. You see junior employment really dropping. And the way it's dropping is through hiring.
It seems like a lot of firms are not hiring junior employees. The logic behind it seems clear to me. If you talk to a McKinsey partner, which I have done on exactly this question, someone recruiting for them, he was telling me things like: the deep research, that's the job that the junior researcher could do. The PowerPoint slides, you can do them automatically quite well. A lot of the junior tasks can be done by the software.
And so, you get this replacement of juniors whom you don't hire anymore, and we'll probably talk later about some work I've done on this, on training, the missing training ladder. So, these junior jobs are gone and you're hiring less. You're not firing people; that's why I say this is subtle. This is the seniority-biased technological change.
The Erik Brynjolfsson paper from August is the "canaries in the coal mine" one. It finds something similar: for workers between 22 and 25 years old, so again, let's look narrowly, let's be careful, comparing AI-exposed versus non-AI-exposed professions, we again see pretty clear drops, pretty robust, in aggregate data. Now, how do we reconcile this? I will reconcile it with the following two ideas.
One is this idea that I was arguing before: you get, oh, I'm a better customer support agent, I'm a better customer support agent, oops, I don't have a job. Because the AI has been helping me become better until the moment the AI is sufficiently better that I am not needed anymore. That is the autonomy idea: we start with non-autonomous AI that enhances and complements our skills. Ide and Talamas have a recent Journal of Political Economy paper, I think it's actually in the issue of the JPE from this month, where they contrast autonomous and non-autonomous AI at different levels of the skill distribution. And basically part of the argument is that the autonomous AI is going to pin down the wage distribution.
It replaces people at that point and produces an enormous supply shock at that point. Everybody below that is going to have to compete with the AI, is going to have to earn less than what the AI charges or what the AI is worth. And so, the moment it becomes autonomous, things change, and that's one way to reconcile it: autonomous versus non-autonomous. The other way to reconcile it is of course the level of the AI, which is very related to autonomy.
As the AI advances, I think we're going to see the complementarity in some of these lower-end jobs become substitutability. Now, this does not necessarily yet affect the higher-end jobs. I think if you're on the higher end, your leverage increases. The knowledge hierarchy becomes more productive. You have this superstar effect: we see AI engineers being offered salaries of $100 million and things like that, like football players.
When Messi is watched in the World Cup final or the Champions League final, he's watched by 500 million, a billion people. So, being a little bit better a player gives you a huge market size, because many people are going to want to pay a little bit more, multiplied by 500 million people; whatever that little bit more is, it's big. That gives you superstar effects. And Sherwin Rosen, who was a very important labor economist, makes this point: it happens when there is limited substitution between quality and quantity.
I cannot substitute 20 players for Messi. I cannot substitute 100 players for Messi. There are eleven on the field, and only one of them is like that. No number of players is going to replace Messi. And when you have markets with joint consumption, where one person can reach a lot of people, where we can all consume the same football game, then you get the superstar effects.
And these superstar effects are affecting the top of the wage distribution. A very good AI developer, with lots of actual AIs, LLMs, being deployed by him or her, can have enormous leverage and can reach a very large market size. So, the extra skill they can add is really very, very valuable. I think at the top of the distribution we could see this bifurcation: at the bottom you get substitutability, at the top you get complementarity. And of course, as the supervisory threshold, the threshold below which the AI can do things on its own, goes up, this sector that is actually getting the superstar gains will become smaller.
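A stylized version of the Rosen superstar logic invoked above; the notation is supplied here and is neither Rosen's nor from the episode:

```latex
R(q) \;\approx\; m(q)\,\pi(q)
% R(q):  earnings of a performer (or AI developer) of quality q
% m(q):  market size reached; joint consumption (one game, one model,
%        watched or used by hundreds of millions) makes m enormous
% pi(q): small per-consumer premium paid for higher quality
% With limited quality-quantity substitution (no number of lesser
% players adds up to a Messi), demand concentrates on the top q, so a
% tiny edge in q, multiplied by m on the order of 5 x 10^8, yields
% convex, superstar-style pay at the top of the distribution.
```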
One thing I'm curious about: if I'm an entry-level worker trying to figure out how to get into this job and learn the skills I need to be valuable in it, there's sort of a strange situation. If I can get to the point where I can be valuable, where I become an expert, that's great. But there's a period in between where I would normally do these routine tasks, and right now I'm not able to do them as often, because the AIs are doing them for me. So how do I know when it's worth it for a company to hire me, if I'm an entry-level worker?
Yes, it's a question I've been thinking about with Luis Rayo, my co-author from Kellogg. I like to think of this as an AI Becker problem. Let me tell you: Gary Becker was a famous economist who developed the theory of human capital. And he made this distinction between general and specific training by companies. And he said, look, a company can always give you specific training, because they're going to appropriate it.
But how are they going to give you general training? General training can only be given if the company can recover its cost afterwards. But once you're trained, you can just walk out and get all the benefits from the training. So, he would say: how is this going to work? Either there's a market failure, because we don't get enough training in the economy, or somehow the workers pay for the training.
And with Luis Rayo, we wrote an analysis that appeared in the American Economic Review. We basically say: look, the way these contracts work is that there's a master and an apprentice, and the master is going to slow down the training so as to extract all the surplus from the apprentice while the master is giving out little nuggets of training. So, I'm giving you just enough that you want to stay, because you want to become an expert, but not so much that I train you very fast and you walk out. That's the solution we proposed. Now, in that solution, the AI, as you are hinting, is going to create a problem, which is that it devalues the currency with which the apprentice is paying.
The apprentice is not paying in dollars; it's paying in menial tasks. Like, okay, you're a lawyer and you're working for Cravath, and it really is not worth your time to spend all your time reviewing all these contracts. I mean, sorry, it's boring as hell, but okay, you're learning something in exchange; still, it's basically menial work. Or you're at McKinsey and you're the smartest person in your class, or at an investment bank and you're the smartest person in your generation, and there you are doing silly spreadsheets that many other people could do. But that menial task is the way you pay for getting this training.
Now, if the AI can do the basic research at McKinsey, can do the contract review at Cravath or whatever law firm this is, and can do the basic accounting at an accounting firm, or the basic programming, then how do you pay for your training? So our argument is that the AI devalues the currency with which you pay, and as a result makes the firm, or the expert, reluctant to take on the worker in the first place. Because they were thinking: okay, I get this worker, I'm going to get paid for training them through their work. Now, it's so cheap to do that work with an AI that the worker's currency is devalued. So, in the paper we built a very simple model in which this exchange is happening. And we show that there are two basic things happening, and the ratio between those two is what is crucial.
One is the substitution aspect of the AI, which is devaluing this currency with which the worker is paying: as the AI gets better, the worker has less to add to the production function of the partner, the more expert person. But at the same time, the fully trained worker is worth more. So, that means the traineeship may still be worth it. The basic result we have is that there is a key ratio: how much the AI complements the expert.
A fully trained expert with AI, how much has that value gone up, relative to how much the AI replaces the untrained person? If the expert-with-AI's value is going up a lot, then even though the untrained person's work is not worth a lot, you can extract so much from the value they're eventually going to be worth that the contract still exists. So, that ratio determines whether you are going to want to employ and train that worker or not. In the absence of that, the training ladder disappears and we have a big societal market failure. Imagine: all of this tacit knowledge, a lot of this training that happens on the job, is not in any manual, right? If it were in a manual, it would be taught in law school.
It's about how you deal with a client; it's about how you are really precise with the contract. It's hundreds of things that are hard to describe. Tacit knowledge is the idea that there is a lot that we know that we cannot describe. And if the worker is not acquiring this tacit knowledge, because all this training, this transfer of knowledge directly from the master, the one who has this knowledge, is not taking place, then the economy has a problem in the longer run. To the extent that the AI is not perfect, we won't have those experts who can supervise the AI in ten or fifteen years.
Then we have a hole in our growth model. Growth depends on human capital, and suddenly this whole pipeline of intermediate people acquiring skills has disappeared. And that's, I think, a potentially big consequence of AI, a problem AI could cause by eliminating those lower ranks of the training ladder. As I was arguing before with the canaries-in-the-coal-mine and the seniority-biased technological change papers, there is a lot of anecdotal evidence from companies that these very junior employees are not really being hired. And these two papers, from August and from September, start to provide systematic evidence that this could be happening.
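One stylized way to write the key ratio from the Garicano-Rayo argument as summarized above; this is a sketch with supplied notation, and the published model is richer:

```latex
\rho(a) \;=\; \frac{V_E(a)}{W_N(a)}
% a:      AI capability
% V_E(a): value of a fully trained expert working with the AI
%         (the complementarity channel, rising in a)
% W_N(a): value to the expert of the untrained novice's routine work,
%         once AI of capability a can do that work too
%         (the substitution channel, falling in a)
% Training contracts survive where rho(a) stays high enough: the expert
% can still recoup the cost of carrying the novice out of the surplus a
% future expert generates. Where V_E grows more slowly than W_N shrinks,
% apprenticeships stop paying and the training ladder disappears.
```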
What do we know about the value of this ratio? Do we have any empirical evidence on this?
No. It's a theory paper, and we are suggesting that people should look into this empirically; we are inviting people to analyze it. I think we are seeing both forces. We are seeing senior people really complemented and more productive: look at the $100 million checks we were referring to at these big AI companies, where the senior AI experts, the AI engineers, are getting big, big paychecks, which would be unimaginable without AI. So they're being complemented. I think that in our own jobs we can already see that productivity is increasing with AI.
We are also seeing substitution. So, the question is how big that ratio is in different professions. The larger the ratio, the more the training ladders will remain.
One thing I'm a little worried about when trying to estimate this: if we had tried to do this exercise of estimating the ratio three years ago, the models were so different and so much worse that the ratio might have been pretty different. And I worry that if we try to do it today, three years in the future it's going to be similarly irrelevant.
I think you're right, but this is true for all of AI, right? It's also true for all the macro models that are trying to estimate how compute translates into advances. I mean, we have some general patterns and some general scaling laws, but we don't really know how much we can extrapolate. We are in a period of massive technological change, and the good news is that it's massive, and the bad news is that we have to peek into the future really just in the dark, with a little bit of light. You guys at Epoch are trying to help people see further into the future, and we are all trying to use the best tools that we have.
But the truth of the matter is, if this is as revolutionary as we expect, the future could give us big surprises. Yes, I do agree with that.
How much does this model depend on the tasks that are hard for humans also being the tasks that are hard for the AIs, as opposed to some kind of different skill distribution for the AIs, which seems to be the case? It's kind of like Moravec's paradox in AI: the things that are easy for the humans are hard for the AIs.
I think Moravec's paradox is a huge discovery for all of us. I mean, we discover it every day, right? Things that we find impossible to do, the computer does perfectly, and then we end up spending time fixing some stupid mistake that the AI is unable to fix. So, it goes the opposite way, in some sense, as you're suggesting. We are indeed studying a situation where the AI is little by little replacing things that the lower skilled worker can do. So yes, your point is well taken.
I think the reason it makes sense in this context is because the AI makes mistakes. And I like to refer to this cutoff as the supervision threshold. You need to be smarter than the AI in order to be able to correct the AI. Think of a kid who's now going to school. They can make ChatGPT write the essay much better than they could.
So, ChatGPT just does the essay and they hand it in. They can't see where the mistakes are, or where things are actually not perfect. So, they are never going to arrive at the supervision threshold. They're never going to arrive at the point where they are able to read the essay and see the mistakes, because they basically spent all their years outsourcing it. And you have a young kid.
My kids are already past this age. But you have a young kid, and this is going to be an issue, right? I have a friend who is a high school English teacher, and he tells me: you know, how do I make these kids want to write and read? They read Hamlet quickly in the morning with ChatGPT, they take the key questions that have to be answered in class, they BS their way through their answers, and they don't read anything.
So, the reason we are thinking of it this way is that we are in a context where, in a law firm, in a consulting firm, etcetera, as you're acquiring seniority, you're acquiring the ability to add value above what the AI can do. To the extent that it's the opposite, to the extent that the AI is doing all the difficult tasks and anybody can do the correcting, then this will be a different world indeed. Companies will have to think of training in different ways. Maybe we hire fewer workers, but the ones we have, we train by going over the AI output and reviewing it, so that there is still a way you're improving, but you're not going through all these routine tasks that at the end of the day don't have any value anymore.
So, in response to this AI Becker problem, could there be more equity-type arrangements involving human capital, where firms have some sort of exposure to the human capital they help create?
I mean, human capital inherently stays with the person. And with the person, you have a big moral hazard problem, right? Once somebody has invested in you, you decide how much you work. You could decide not to work, because you are not getting the upside; the company is getting the upside. So, it has been historically very hard to find market solutions to this.
Similarly with loans. I mean, there are loans for MBAs, for certain high-end things. But with loans, again, it's hard to see how you secure the loan against the human capital. You can't secure it with the human being, because slavery is forbidden and you cannot pledge yourself as collateral. So, human capital transactions, I don't say they are impossible, because they exist.
But often these loans are government programs. In The US there are a lot of government guarantees; in The UK there are government guarantees. I think equity has proven really hard. Except with football players, right?
Maybe with football players you get the upside: you train a football player to sell him to another team, etcetera. It's an equity-like arrangement. But that's about the only context where the firm that trained the player, and I don't know if it happens in US professional sports, is able to get a fee, a transfer fee, for having trained that person. And it's a very unusual context. I would say equity is hard; debt is more promising, but even debt is tricky because of moral hazard and repossession and all that.
Going back to the bigger picture a bit on AI and training: do we have a sense at the moment of whether AI is making training (training of humans, I should say) easier or harder? Because on one level, you were mentioning that there are all these AI-powered learning tools that you could tailor to the student, assuming regulation allows you, and that could be helpful. But on the other hand, you know, I'm an instructor myself and I can't get my students to read anything; I can get them to read the AI summary of the AI summary of something. That seems bad. Is there any evidence you know of?
I haven't seen evidence. I think we're all observing exactly what you're observing; we're all observing that students have been using AI for, let's call it, cheating. Let me tell you what I do with AI. My view on education is summarized by what I do with AI in my two classes.
So, I teach the microeconomics class in the first year of the master's. And my view is that if you want to be thinking in the future, you need some basic models and some basic facts and some basic tools. And that is not going to change. Otherwise, you cannot think, right? When we are trying to triangulate: is $400 billion big or small?
Is that a big valuation? You need to have something in your brain just to think. So, at the basic level, I want them to use the old blue books: write the problem sets, write the exam, and the exam is going to be on paper. And there, honestly, I tell the students: these are the basics you need in order to operate in life.
So, there I think AI is an enemy of ours, because with AI it's: okay, I can do the problem set automatically, so why would I go through the problem set? And then you get to June and you have the exam and you're like, oh, what is this exam about? So, there it's an enemy. But there are also tools, and I try to tell the students: you can ask Claude for help, you can ask Claude to explain. If you don't understand what a Cobb-Douglas is, you work through it.
You do it two ways, you do it three ways, until you learn it, okay? On the other side, let me tell you what I do with my second-year class. My second-year class could be called "what I learned in politics that I didn't know before as an economist." They start from the policy. What is the policy that you're looking at?
So, a group is looking at Tegucigalpa: they have a huge water problem, the water runs out all the time, and there are only a few hours of water a week. Okay, that's the economic policy. But now you want to look at the politics. What is the political economy? Who are the interest groups?
Who is in favor? Who is against? Then you want to talk about the narratives. How do you discuss this in public? How do you give a speech?
What is the message that you give? What do you want people to hear? People don't hear what you say. They hear something else. What are the preconceptions?
And then you want to talk about implementation. How are you going to implement your solution? Well, in this class, I tell students AI use is obligatory. For all of these things, the analysis, the politics, the narrative, they need to build models, they need to understand the data, they need to actually figure out stuff that three years ago would have been unthinkable. They couldn't have done it.
So, my view on how AI is working in education is: we need to make sure that they are learning the basics, and that is going to be a struggle, I agree with you. But at the same time, we need to get our students to do enormously more than they could have done. If you're teaching a macro international class like you do, the students can actually build a trade model of the Ukraine sanctions. They could actually change the elasticity of substitution. They could do things that before would have required an amount of computing and programming that only a PhD could manage.
So, I think the way training works has to radically change: using the AI tools to learn, and using the AI tools to get much further. But at some basic level, we need to be able to persuade the students, and that's the difficulty, that they need to learn the basics. I mean, maybe your papers will be written by an AI. But if you don't learn to write, you're not going to learn to think. I know that argument's difficult to make.
But if I had a seven-year-old like you have, I would try to hammer that argument home somehow.
Maybe for this part what we're going to have to do is homework in the classroom, right? Maybe for this basics part, it's actually: okay, you're going to be writing.

Just notebooks and adults in rooms.
Maybe we have two hours, you know, in the school library from two to four, which is homework time. No phones, no computers, and you guys have to do homework for this basics part. And then we also need to use the AI. I believe in both. I don't think it's either-or.
So, this is fascinating. Should this make us a little pessimistic? In the sense that there was this more optimistic line of thinking that I would associate with Daron Acemoglu, which is: but we have options, there's this directed technical change, we can choose to develop technologies that keep them complements to human labor, and then we won't have so many problems. Whereas here, it sounds like something almost inherent is happening: as the AI gets more advanced, it becomes a substitute. So, we don't have a choice. We either accept advanced AI and accept substitution, or we don't accept advanced AI.
Advanced AI with no substitution might not be on the menu.
I think that's my view indeed. I think Daron Acemoglu has a bit of excessive optimism about two aspects of this. One is how much we can control this runaway train. We are in a game-theoretical situation between China and The US; there is a strategic interaction between them. If The US decides not to develop, then China's going to develop anyway.
So, I don't think the possibility of slowing things down exists. Second, I always wonder when he says, actually, it was two points, but I'm going to make three. One is the strategic interaction part. The second is the "we." He says we can direct technology.
Because who is "we" here? Is "we" China? Is it The US? Is it firms? Is it workers?
Is it lawyers? Is it truck drivers? Who is "we"? All of those people have very different interests. Is it the people in the AI industry, which is now generating a big part of the growth in The US?
Does The US not want to have this growth? So this "we" is always kind of hidden away a little bit, which I find surprising for somebody who's so sophisticated about political economy; he knows better than me, he's written whole books and lots of papers about institutions and how they mediate these things. The third point is that I think the risk of trying to interfere is many unintended consequences.
Let me tell you about Europe, because that's what I know well. Apart from being an economist, I spent a few years as a politician; I was in the European Parliament. And Europe has made a very Acemoglu-style effort. In fact, let me tell you about this letter that Acemoglu and Elon Musk and, I think, many others signed.
The Future of Life Institute letter. That was February or March of '23, something along those lines. This letter actually came in the middle of the elaboration of the EU AI Act.
The draft was finished in November 2022, the two drafts, but then the two drafts had to be reconciled, and the act was passed, I think, in '23. In between, just as they were finishing, ChatGPT arrived; if you remember, ChatGPT was November '22. And that moment was the moment of this existential-risk panic. I mean, everybody was like, oh, we're going to get turned into paper clips, and humans won't exist anymore.
And so they wrote this letter, and there was a moment of panic in Europe. The person who actually wrote the law at the Commission has given an interview to a Swiss newspaper; I wrote about it on my blog, the Silicon Continent blog, if somebody wants to see it. The post is called "Why Is the EU AI Act So Difficult to Kill." And he argues, and I quote him, that it was a bad moment for that letter, because Europe really decided: okay, this is too risky.
Let's put all these guardrails all over the place. And the consequence for Europe is that, as you were hinting, a lot of the productivity gains that we could be getting from AI are not possible to get. Let me give you an example. The AI Act is built on four risk categories. First, there are forbidden uses, which include detecting emotions, that's not allowed, and government-controlled surveillance and point systems, social scoring systems; that's forbidden.
So emotion detection is forbidden, and social scoring by governments. Second, high-risk uses, which involve energy, infrastructure, decisions that the legislature says shouldn't be taken by AI without a lot of steps. And those high-risk uses include education and health. Now, in education you would very much want, for example, your students in Edinburgh to take an AI quiz to help you see how they're doing; eventually it's going to be possible for courses to let them do the problem sets in a customized way, so they can skip a step, etcetera.
Under the AI Act, these things are high risk. And the fact that they're high risk means that when you train the system, you have to make sure that all the data are correct, that the data is free of errors to the extent possible, that it's unbiased, and that you have the relevant data. Now, error-free training data doesn't exist. The training corpus right now is the internet; errors must be all over the place.
Somehow, for some bizarre reason that I don't know if anybody understands, after all of this is aggregated, all the errors get washed out, like a law-of-large-numbers kind of effect, right? So it kind of works. But the training data has to be unbiased and free of errors. Then, you need to keep the data logs on these high-risk applications. You need to keep your records.
You need to keep documentation of everything about the system for ten years. You need to prove accuracy and security. You need the conformity assessment. And you need to register with the EU authorities. Now, there are 55 EU AI Act authorities that will do that.
And these authorities are supposed to have personnel that is highly qualified in AI, highly qualified in data protection, etcetera, etcetera. Now, you're an entrepreneur. You're starting your little education startup. You have to do all this plus the GDPR, the General Data Protection Regulation. Businesses know it because of the cookies.
It's a pain, right? I mean, you know, in economics people often disagree about things. I can tell you there have been something like 15 papers on the GDPR, and all of them find less venture capital investment, fewer startups, higher compliance costs. Every single thing tells you the GDPR has been bad for EU business. And now, for startups, we're adding the EU AI Act on top of that.
So, part of the risk is that you try to control the technology and you end up without technology, which is kind of the world Europe is at risk of finding itself in. We don't have foundation models. We have great researchers. We have a huge savings pool, right, for many reasons we could go into, if you guys care to.
We have the researchers. We have the ideas. But businesses don't scale in Europe, and we don't have foundation models. I mean, basically we don't have a competitive one; I think there are something like two foundation models in Europe compared to 50 in The US.
The numbers are really very disproportionate. And we have very little AI implementation, so that's a problem.
I'm curious what you would say to a person who says: no, I actually think these risks are really serious. Even if we don't go all the way to immediately turning all humans into paper clips, they think that if you have a ton of AI systems deployed throughout the economy, and they're not optimizing for the things that humans care most about, then you could slowly, gradually shift things off the rails. And so maybe they would say: well, the EU AI Act's most serious risk category, the systemic risk for GPAI systems, for general-purpose AI systems, covers models that require over 10^25 training FLOP, plus maybe a bunch of other requirements. So really, this is just applying to the most capital-intensive, most capital-rich companies.
And so maybe for most other people, this particular thing doesn't matter so much; it's just this particular group of actors that needs to be subject to additional scrutiny. What would you say to that?
The systemic risk category is maybe a different story. I was talking about systems more broadly. There is a systemic risk category indeed, as you are pointing out. I think it's 10^24 FLOP, but whether it's 24 or 25, we can check.
I think it's 25. It's based on GPT-4.
Okay, so indeed, GPT-4 is above, Llama is above; previous-generation systems are already above. And yes, they have to be subject to adversarial tests; you have to prove that they're secure, that they're safe. To me, these kinds of existential-risk issues in these very large systems probably do deserve additional scrutiny.
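For a sense of scale on that threshold: a common rule of thumb approximates dense-transformer training compute as roughly 6 FLOP per parameter per training token. A minimal sketch, with illustrative model sizes that are assumptions here rather than figures from the episode:

```python
# Rough check against the EU AI Act "systemic risk" presumption (1e25 FLOP),
# using the ~6 * parameters * tokens approximation for training compute.

THRESHOLD_FLOP = 1e25  # EU AI Act systemic-risk presumption for GPAI models

def training_flop(params: float, tokens: float) -> float:
    """Approximate training compute: ~6 FLOP per parameter per token."""
    return 6 * params * tokens

# Illustrative (hypothetical) model configurations:
examples = {
    "70B params on 15T tokens": training_flop(70e9, 15e12),    # ~6.3e24
    "400B params on 15T tokens": training_flop(400e9, 15e12),  # ~3.6e25
}

for name, flop in examples.items():
    side = "above" if flop > THRESHOLD_FLOP else "below"
    print(f"{name}: ~{flop:.1e} FLOP ({side} the 1e25 threshold)")
```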
So, in that context: when you think about AI, do you become more or less optimistic about the European Union?
I am desperately worried, desperately worried. I think we are in a situation where these kinds of effects we were discussing before, big productivity growth, big welfare gains for many citizens, who can do their driving and not get killed in a car crash, who can do their contracts, who can get legal advice and do smarter things than they would have done negotiating with their landlord, many, many of those things are potentially not going to happen, because they will not be allowed. I think productivity growth will suffer, growth will suffer, welfare gains will not happen. And Europe has a demographic problem and a high-debt problem.
So, Europe needs growth more than many other places to pay its bills. Europe is in a very tricky situation. Look at what's happening in France: on the one hand it's not growing, and on the other hand it has this big debt, explicit debt and implicit debt through its pension liabilities. So it desperately needs growth, and I fear that the European Union has over-regulated itself and is not going to get that growth.
I'm curious how much of what you say also applies to The UK. Because, for example, The UK does have a frontier lab, which is Google DeepMind. So how much of what you say also applies to The UK?
Let me just start by saying that I don't like Brexit and I don't think Brexit was a good idea. Brexit was bad for The UK, but it was also bad for Europe, because The UK was the force pushing Europe in a more free-market, open-minded direction. The UK was the motor for the single market project, which was about making sure that Europe had an integrated market. And so, once The UK left, we had this divergence. The UK has a very pro-AI posture.
It hasn't diverged in other areas, environmental areas, etcetera; it's still applying the EU rules. But in this area, it has taken a more positive posture. I mean, I'm a professor here, at the London School of Economics, because I actually believe in The UK. I do think The UK has a very bright future.
It's just that the governments are not making the decisions that are necessary to profit from it. But if you think of AI: you have capital, you have nuclei of talent, Oxford and Cambridge, which are at the cutting edge. You have DeepMind, you have all of these other labs around it. I think The UK could be a Silicon Valley. I don't see why that would be impossible.
Maybe the risk-taking mentality is the one thing that is missing. It's not quite there.
So, thinking a bit about the AI value chain. You were saying that there is this infrastructure layer, this lab layer, and this implementation layer. How do you think about where the value will go? How will the value be distributed across those layers, and how do the prospects of different parts of the world depend on which layer gets the value?
That's a great question. I've been arguing that Europe could try to get the value of AI not from the lower layers, where we're not going to get it, but from implementation. The idea for Europe would be: if we manage to keep the other layers competitive and interoperable, then we could capture value at the implementation layer.
So let me split it bit by bit. On the hardware layer, it's clear that China and The US are capturing the value. If the hardware layer is where the value is, that's clearly going to benefit The US. And it looks like the learning curves are very steep. Look at Intel: it had a competitive advantage in PC hardware for four or five decades, until just this last generation, in which they got hammered. But they had decades.
The learning curves are very steep: you need to keep it clean, you need to print it carefully, you need to design very complicated things. It's very hard to enter; I don't think there's really an entry possibility. So a lot of value capture will happen at the hardware layer. I think the evidence for that is pretty strong regardless of what happens upstream, right?
Profits will go there. Cloud computing: I think there could be big switching costs in moving your data from one cloud to another. Europe is really trying to avoid that; it's really trying to make sure that the data is yours and you can move it. But the cloud players can add features to make sure that you want to stay, and that if you move, you lose some value.
So, there could be quite a bit of switching cost. I think we need to make sure that in cloud computing the data is encrypted and remains on servers that are located geographically here in Europe, so that not all the value goes back to The US. But both geopolitically and economically, the risk is clear that on the cloud layer too, the Cloud Act of The US will have extraterritorial reach, because those are American companies. On the LLM layer, the foundation model layer, it seems to me that what we are observing is very strong competition, where it is very hard to obtain a competitive advantage. All the time, one company gets some feature, we all love it for three months, and then we start trying another one because it just got a slightly better feature.
I am basically fluctuating between Gemini, Claude, and OpenAI; I'm switching between all of those. It seems very hard to get an advantage. Also, there is the open-architecture possibility. OpenAI is not open, but that's what Llama was trying, and Mistral is building on that; all those weights are out in the open. So at least some of the applications, the ones that are more energy-efficient or smaller, can go to these open systems and enjoy the fact that these are open architectures.
So, I would think that that layer remains quite competitive, with one caveat, which is the introduction of switching costs through memory. If the system starts to remember you and starts to know how you are, then switching systems is going to be costly. We should do our best to make sure that that data is yours and that it's very easy to port. I think portability is crucial. Think of the example of social media.
In social media, there's no portability. The data, my graph and everything about me, belongs to Meta or to Twitter; I will never say X, it's just my one principled objection to Twitter. And if you're in disagreement with them, you start again from zero. Okay?
I have, what, a hundred-some thousand followers on Twitter. If I want to abandon them and start somewhere else, that's my problem. But if not, then I stay there. So, imagine a world where I send a message and everybody who likes me can follow me from any platform, where it's completely interoperable.
Market power would change radically, right? And I think regulation, you were talking about optimal regulation, should do its best to make sure that this interoperability exists and that we don't fall into the same trap we fell into with social media. That would allow Europe, on some of those verticals, to appropriate quite a bit of that value. Now, how do you do that and avoid extraction by all those upstream players we have been talking about, from hardware to infrastructure to the LLMs? Well, we have to move fast, which we're not doing, and we have to keep the markets competitive.
We have to do our best to keep those markets competitive through interoperability and all these other demands: that data can be moved, that the clouds are not proprietary, etcetera. I think it's possible, but it's tricky, because the truth of the matter is, if you don't have the hardware, everything else flows downstream.
So one of the key points here seems to be that Europe should be using the levers it has to move as much of the value as possible to the implementation layer, because that's the layer where Europe is strong?

Yes. I've been pushing a second-mover, a smart-second-mover strategy for Europe, which is a strategy that basically has Europe, let me say it for clarity, free-riding on this gigantic investment boom in LLM development and data center development that is already taking place. Okay, we take it as given, we're not going to try to imitate it because we're too far behind, and let's use all our scarce resources in securing autonomy, encrypting the data, having the data centers locally based, but mainly in developing a strong implementation layer indeed.
And in that context, would you worry that Europe would have some of the same problems it has had with regulating the tech giants in the past? Because I'm guessing this becomes a geopolitical game pretty quickly.
It has become a geopolitical game. That's the problem: the US government is really throwing its weight behind these big giants, and it's going to be very hard for Europe to insist on level playing fields and interoperability, etcetera. We are seeing it now. There was a digital tax; there was the OECD Pillar Two, with which we were going to harmonize aspects of corporate taxation. Trump has said that's off the table. It's going to be very difficult to do certain things that rely on mutual acceptance.
The US is going to throw its power around, and we're going to have to basically swallow it. I mean, the Turnberry agreement between Donald Trump and von der Leyen this summer was a trade dispute. In every trade dispute until now, the way they work is: okay, you put up tariffs, I reply with the same. Here, Europe comes out of the room saying: huge victory. He's putting on all these tariffs and we're not doing anything.
Sorry, what's the victory? "No, they're not going to do any other things." Where does it say that Donald Trump is not going to do any other thing? "No, no, they've promised not to do any other thing with our cars." Of course, there's no promise. So, we accept the tariffs.
We don't do anything in retaliation. And on top of that, we didn't really get any commitment from The US not to take any further actions. The truth of the matter is that geopolitically, we are very dependent. And the Ukraine war, which would take us in other directions, is part of the reason: we need The US defensive umbrella, and we are going to struggle a lot to get that defensive umbrella to continue.
Yeah. Given that, I'm curious how you think about economic security. Because I think a lot of the reason for this smart-second-mover strategy is that it's a lot harder, say, to build out huge amounts of energy infrastructure and data centers. But very common in these discussions about data centers is the idea that we want some kind of sovereign compute: if this is so important to the economy, then we want to make sure we have our own data centers in the EU, and if people need to use AI, we need data centers there. How do you think about that?
I don't think public investment in this is going to be the big solution. So the EU has two sets of programs, the AI Factories and the Gigafactories. The Gigafactories are five big facilities, data centers. But at the level of investment that is being put now into these one-gigawatt-plus centers, which are really, really, like, extremely costly, we are going to have one of those, I think, in Portugal.
It's private sector investment. It's a partnership between, well, one company that Nvidia has invested in. It's a UK company, Nscale maybe? So, this is going to be one data center that will be local.
We're going to have more local infrastructure in Spain. I mean, so basically it's Portugal, Spain, and the northern countries, because of energy issues, that are getting some big, big data center investments. In Spain, there will be two by the Ebro River, I don't know how the Ebro is said in English, the Ebro, the big river that runs through the north below the Pyrenees, taking all the Pyrenees water. There are going to be two big investments there. So, we will have kind of good sovereign data centers, but these are not truly sovereign, because in some sense the ones in Spain are basically Microsoft and Amazon.
They're Azure and Amazon Web Services investments. But if they are local, we get some control. I mean, eventually there will be some local European companies doing this. I don't think public investment is the solution, because the numbers that we're talking about are hundreds of billions of buildout per year, up to a trillion by 2030. These are numbers that are really very, very large.
And of course, public investment is not at that level. All of these companies are spending on R and D. Amazon, Microsoft, Apple, etcetera: each one of them is spending more on R and D than any government in Europe. Just one company. So, it's not going to be possible to keep up through public investment.
It's going to have to be the private sector; the private sector has to want to do it. And for the private sector to want to do it, regulation is crucial, both in terms of permitting and in terms of all these regulatory obstacles that we seem to be throwing all over the place.
So, on net, from this sort of geopolitical game, because I think a lot of people in Europe are upset by the relatively aggressive stance the US government is taking on a number of these issues: on net, is this good or bad for Europe? Because on one level, aggressive US government action means we're less likely to be able to move value to the implementation layer that's coming into Europe. But maybe this aggressive action also makes it less likely that we get too risk averse, right? Because our instinct is to stop a lot of things that the tech giants don't want us to stop, and that the US government might not want us to stop. So, will the US government save us from ourselves?
So, that consequence would be welcome, or at least to some extent welcome. We had a year of wake-up calls. Wake-up call number one: Trump gets elected. Wake-up call number two: the sofa scene where Vance and Trump ambushed Zelensky, the Ukrainian president.
And all the time it's like, this is a wake-up call for Europe: we cannot trust our old ally, the US; we in Europe need to act together. And then we go back to sleep. People have this wake-up call, they're like, okay, and every time we go back to sleep. The wake-up calls don't seem to be waking us up at all.
So, to some extent, what has happened this year should have unleashed a wave of: okay, we're going to invest in AI and in digitalization. And one post that I wrote on this, which I mentioned before, on the Silicon Continent blog, was exactly asking why it is so difficult to undo this thing. Europe doesn't have a very easy error correction mechanism. In Europe, the same European Commission that produced this explosion of legislation, the Green Deal and the digital legislation, over the five years between 2019 and 2024, is now tasked, under the same president, Ursula von der Leyen, with undoing it. Oh, we went too far, let's undo it.
Well, you know, the rapporteurs, the people who wrote the legislation in the Parliament and in the Council, the people in the Commission who pushed it, all three institutions, the governments, the Parliament, and the European executive, which is the Commission, are going to be tasked with undoing a lot of rules that they themselves pushed. They sold them as big victories when they passed them. So, now to say: oh, you know what, we thought the act was great, but now that we realize it's going to slow us down and put us at risk, let's undo it? That is very hard to make happen. The coalition that runs Europe involves the center right, the center, the center left, and the Greens.
All of those parties are basically the same ones that passed the original legislation, and they are the same ones that now have to undo it. And there are many differences inside that coalition as to what can happen. The very first piece of legislation that should have been removed had to do with excessive corporate reporting and paperwork. It was thought guaranteed to pass, everybody thought it was going to pass, and then the Parliament turned it down. Because, yeah, a lot of people were invested in the existence of that legislation.
So, I hope Trump partly saves us from ourselves in this, or the US partly saves us from ourselves in some of these aspects, but I am not very hopeful.
So, one direction I was also hoping to bring back into the discussion is a bit more of the macro-finance angle, right? There's been quite a bit of discussion about the potential impact of AI on things like interest rates and things like that. How do we think about that in the context of fiscal sustainability? Fiscal sustainability, macro-financial stability, you know, these are hot topics in general, and hot topics in the European Union in particular. Any thoughts on that?
Yes. So, I wrote a post that I titled "R without G", talking about how the European Union could get the high interest rates and not the growth. Let me unpack this a little bit, first not for Europe, and then apply it to Europe. So, there was a very recent paper by Auclert and some co-authors that was presented at the NBER this summer; people can get a link, maybe we can post the links to the papers that I mention. It provided a very simple demand and supply framework applied to AI. So basically, they talk about the price of assets as being the result of a demand and supply equation, and when there is a lot of demand, prices go up.
The tricky thing, which everybody in our audience has to remember, is that prices going up means interest rates going down; those two things go in opposite directions. So they argue that over the last forty years, demand has greatly outstripped supply, and so prices have gone up and interest rates have gone down. We have had a very long secular drop in interest rates. And they basically say that, in their calculation, asset demand has multiplied by four.
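For readers who want the mechanics: a stylized perpetuity (just a textbook illustration, not the model in the Auclert paper) makes the inversion concrete. An asset paying a fixed coupon $C$ forever is priced at

$$P = \frac{C}{r} \quad\Longleftrightarrow\quad r = \frac{C}{P},$$

so demand that bids the price $P$ up mechanically pushes the implied interest rate $r$ down, and a fall in price pushes $r$ up.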
So a big, big increase in asset demand. Because of all of these things that have to do with slow growth and with demographic change (people need assets for when they retire and are old, and they need safe assets in particular), all of that has led to a very big drop in interest rates. It's been a godsend for everybody who was in debt, particularly countries that were in trouble: they could issue debt for free. But they argue that AI is going to change this.
AI is going to increase interest rates, first because of the impact of the higher productivity we have been discussing. Higher productivity growth means asset supply is going to increase: firms are going to have to raise equity, and that is asset supply. They're going to have to raise equity to pay for the AI investments, for all the AI labs, for all of that.
And at the same time, demand might go down, because younger workers think, wow, the economy is growing a lot, so I don't really need assets, because the economy is going to grow so much. So, they say this is going to lead to a drop in asset prices and, as a result, an increase in interest rates. That's their argument. So, their view is that we will have bigger G, higher growth rates, and bigger R.
But the growth rate will be higher than the R, so no problems for fiscal sustainability. Remember that sustainability depends on R minus G: R is how much you have to pay when you issue debt, and G is how fast the pie with which you pay is growing. So, if R goes up a lot, oh my goodness, I have to pay 6% now, but my growth doesn't go up, then each time I have to pay more and more, and I am more and more squeezed.
If my growth rate goes up a lot, then I can afford to pay the debt. So, they say, well, the growth rate probably goes up a lot, and it goes up by more than R does, so overall it's sustainable. What I worry about for Europe is that you are going to get the bad part, having to pay higher rates, without the good part, the higher growth rates. If you're putting obstacles in the way of adopting AI, the taxi drivers oppose self-driving cars, the legal profession opposes AI in the law, the doctors don't want AI, and you get these human bottlenecks everywhere, then you're not going to get increases in growth rates. But you will still have to pay the higher global interest rates that everybody faces because of the AI revolution, the higher productivity of capital that comes with it, the investment boom, and all that. So, as a result, you could end up with much worse debt sustainability problems, which threaten the welfare state in the European Union.
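The accounting behind R minus G (standard textbook debt dynamics, not anything specific to this episode): with a debt-to-GDP ratio $b_t$, a primary surplus $s_t$ as a share of GDP, interest rate $r$, and growth rate $g$,

$$b_{t+1} = \frac{1+r}{1+g}\,b_t - s_t \;\approx\; b_t + (r-g)\,b_t - s_t,$$

so when $r > g$ the debt ratio snowballs unless primary surpluses offset it, while $g > r$ lets a country gradually grow out of its debt.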
So, for me, I mean, we have countries that have not only high explicit debt, almost 120% of GDP in France, 116% or so. They also have high implicit pension debt, three or four times GDP, probably more in some countries. And all of this has to be financed with a G, while you pay this increasing R on it. If you don't get the G and you do get the higher R, you're going to be in big trouble in terms of sustainability.
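A back-of-the-envelope illustration using those figures (rough arithmetic, not a forecast): with explicit debt at roughly $b = 1.2$ times GDP, a one-percentage-point rise in $r$ with no offsetting rise in $g$ eventually adds about

$$\Delta r \times b = 0.01 \times 1.2 = 1.2\%\ \text{of GDP}$$

in annual interest costs, before even counting the implicit pension liabilities of three to four times GDP.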
So, you asked before whether Trump would be somebody who would wake up Europe. This is another reason to wake up. I mean, we have a problem demographically, and this is not just Continental Europe, also the UK. And we need more growth, and we need to take a much more aggressive pro-growth stance. Much more aggressive.
So, in terms of this overall problem, and that piece in particular, I'm a little bit surprised that in the economics profession there seems to be, if not a consensus, a very strong majority view that AI will lead to an increase in interest rates. But couldn't you make an equally strong case that it could lead to a decline in interest rates? I mean, precautionary saving, you know. I get this vibe, because I'm a bit scared, so...
Right, I'm very scared, right. I think in the Valley, people are having this discussion about, you know, I need to do well in the next five years, otherwise I'll be a serf forever, or something like that. So couldn't people just save because they want some exposure to the companies that will own the economy? That's kind of a first order thing: they really want to buy assets now because their human capital will depreciate.
So, I think it's not impossible that we have other forces. I mean, you're right that the world now seems very uncertain. Also, it's possible that inequality grows very much, and that would push in the opposite direction as well, because rich people save more; they don't consume as much. I mean, at some point Elon Musk is not going to consume his $1 trillion package, if that happens. So, there are a couple of forces: precautionary saving is one, and the other is the increase in inequality, and both could push in the other direction.
I would kind of side with the consensus of the economics profession, but you're right that there is a question mark over it. As a first order approximation, the slowdown in growth over all of these years led to this drop in R; and an acceleration in growth, if that is what we think is going to happen, as a first order effect will, I think, increase the return on capital and lead to an increase in R.
So, empirically those things go together, as opposed to...
I would expect that, but as we said, we are peering into the unknown, and we have to be modest and humble.
And the other thing that surprised me a little bit was that you were tying this increase in R to problematic implications for Europe in particular. The reason I was a bit surprised by that is that I think of Europe as a continent of creditors, right? We run huge net surpluses with the rest of the world, so it would seem to me: okay, we're creditors, R will go up, so...
We are exposed to these gains.
We will be richer. So, in some sense, we will get richer; our governments will have more problems, but we'll get richer, so long as the government finds ways to tax.
So, let me unpack that. That's a great point. It's true that we are net savers, and that means that, as a continent, we should get some exposure to the good side of the AI. So, it's true that...
Even if it happens in the rest of the world, right?
Yes. And Enrico Letta was writing a report about how Europe is doing badly, and he came up with this image of European savers exporting their savings into American companies that are employing European workers, entrepreneurs who cannot make it in Europe. This is happening a lot: you go down to the West Coast and you see all these Europeans, and Indians, and all these other nationalities. So, it's true that those savings should capture some of this additional R. We should be benefiting; we should be on the good side. Now, the distributional impact is a bit tricky, right?
Because who is going to benefit from those higher returns? For example, Holland has big pension funds with big exposure to interest rates, but in places like Spain or France, the state essentially provides the whole pension through a pay-as-you-go system. So, the overwhelming majority of the population has zero financial wealth. They have housing, which could also go up. So, if you're long on housing, you're probably going to benefit from this run-up, but most people are not exposed to financial assets. Only the very, very top, I would say three, four, five percent of the population, will have significant exposure to these financial assets. So, the distributional issues are not obvious, but you're right that there is a net saver income effect that is positive. Income effect meaning Europeans are wealthier, as you put it, when R goes up.
One thing you were hinting at when discussing the macro-finance question was demographics. And I must say that I worry about demographics quite a lot, in the European context and in the global context as well. So, should demographic change alter our view of the trade-off between the benefits and risks of AI? I think a lot of people are in the mindset that, at least in the rich West, things are pretty good as they are, so we can afford to continue with things as they are and be quite risk averse when we write AI regulation. But aren't we actually on a burning platform? Aren't things going to get worse unless something like AI turns up?
So, you're very right, and my colleague, co-author, and friend Jesús Fernández-Villaverde has been sounding the alarm about the fact that total fertility rates are plummeting not just in the developed world, as we expected, but actually in the developing world. Colombia, Tunisia, Turkey, they're seeing collapsing rates, which is very strange. The demographic collapse is really, really problematic, and it's true that you will have a need for a care economy. In the positive scenario that you were describing, where AI does many of the tasks that we think of as human, care is something that maybe can be helped by AI. Replacing some humans in those professions, which are going to be very hard to staff given the enormous share of the population that will be old, will be useful. And I remember discussing this with Joshua Gans.
I said, like, oh, people are not going to want a robot to care for them. He said, are you kidding? I would much rather have a robot take care of my needs, like cleaning or showering me or whatever it is, than a human. And I thought, oh, actually maybe that makes sense: if the robot is gentle, maybe it can do it. So, he was arguing that robots will potentially have big value as carers, and that people will want them.
I don't know, we'll have to see. But if not in caring, then in many other things; and again, there is evidence that people are doing therapy with AIs, so maybe the range is wider. But if not in therapy, then in many other things. We need the growth, and we need the labor that we're not going to get because of this lack of fertility. And so, that says: hey, let's have a more AI-positive posture, absolutely.
Of course, it could be that AI leads people to want to have AI companions, and I don't know if...
That would make the fertility crisis worse.
But okay, that's a consumption choice; we cannot predict how that will play out, but it does seem like people like to have AI friends. I mean, I think that is happening.
So, one of the things I had in mind when I asked the question was stuff like R and D, right? You know, I think of R and D as being done by relatively young people, although I appreciate that doesn't change the point. It's not so much AI helping us with the care economy, although that's important as well. In semi-endogenous growth models, we end up needing population growth to get any growth at all, and with fertility declining, that's problematic. So, at the very least, you need to be able to shift more humans into R and D, and...
Not only humans, AI as well. Some of the work by Jones, the paper by Philippe Aghion, the recent Nobel Prize winner, with Ben Jones and Chad Jones, argues that to the extent that AI is just capital, it's not going to make a big difference; where it really makes a big impact is in R and D. To the extent that AI can accelerate the production of ideas, AI can really accelerate growth. That, I think, is the scenario where you will see the big growth acceleration, having taken into account all the caveats I raised about needing regulatory approval and so on. But I agree with you: in terms of generating ideas, that is really the driver of growth. If we don't have the scientists, we had better have AI generating ideas, or we need to move more people into scientific production.
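The semi-endogenous logic referenced here, in its standard Jones-style form (a textbook sketch, not the exact specification of the Aghion, Jones, and Jones paper): ideas $A_t$ are produced by researchers $L_{A,t}$ according to

$$\dot{A}_t = \delta\, L_{A,t}^{\lambda}\, A_t^{\phi}, \qquad \phi < 1,$$

which ties long-run idea growth to population growth $n$ through $g_A = \lambda n / (1-\phi)$. If fertility collapse pushes $n$ toward zero, growth stalls unless AI effectively augments the research input $L_A$ or raises $\phi$, which is exactly the channel being discussed.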
I am pretty optimistic about how AI will help in the production of ideas. I mean, somebody like Terence Tao already writes, okay, I could solve a problem thanks to AI. It was actually an interesting argument: he was arguing that AI was helping him work with many more collaborators. Basically, he says, look, you have always had small teams of mathematicians, because you need to trust each other; you don't know whether one key step in the proof was done well. Now, with AI, we can lean on it to do the little bits of the proofs.
We kind of decentralize it, and then we can check each other's work, and somehow we have bigger teams. Other mathematicians are saying that AI is helping them prove propositions, or not quite; I don't think there's an AI theorem yet, but I think there are some results already. So, I mean, in combinatorics, in protein folding with the Nobel Prize, we do see some impact of AI in accelerating research, which could be crucial indeed given our demographics. We need that research output to be produced somehow.
I think that's a good place to end.
All right.
Thank you, Luis and Andre, for coming on to the podcast.
Thank you very much. Thanks, Andre. It was a lot of fun. I appreciate it.
The EU and the not-so-simple macroeconomics of AI - Luis Garicano