Max Tegmark and Dean Ball debate whether we should ban the development of superintelligence in a crossover episode from Doom Debates hosted by Liron Shapira.
Hello, and welcome back to The Cognitive Revolution. A couple quick notes before getting started today. First, if you're interested in a career in AI alignment and security, you should know that MATS will soon be opening applications for their summer 2026 program. MATS is a twelve week research program focused on AI safety, featuring world class mentors from Anthropic, DeepMind, OpenAI, the UK's AI Safety Institute, and more. 80% of MATS alumni now work in AI safety, and I've heard so many great reviews of the program that I personally donated to MATS as part of my year end donations last year.
Applications open December 16 and close January 18, so you've got some time, but don't delay. Visit matsprogram.org/s26-tcr for more information. That's matsprogram.org or see our link in the show notes. Second, to end 2025 or perhaps to begin 2026, I'm planning another AMA episode. Last year, we got a ton of great listener questions.
This year, I will again invite your questions, but I'll also be asking ChatGPT and Claude to mine their memories of our interactions to come up with some questions of their own. Submit questions at the link in the show notes or feel free to DM me, and let's see if humans can ask better questions than AIs for a little while longer at least. With that, today, I'm excited to share a special crossover episode from the Doom Debates podcast hosted by Liron Shapira. The occasion for this debate is a recent statement organized by the Future of Life Institute, which calls for a ban on the development of superintelligence, described as an AI system that, quote, can significantly outperform all humans on essentially all cognitive tasks, end quote, unless and until there is broad scientific consensus that it can be developed safely and there is strong public buy in for doing so. On one side, we have Max Tegmark, president of the Future of Life Institute, which organized the statement, and MIT professor who's pivoted his research group to focus on AI with some outstanding results, including a paper on training models for mechanistic interpretability called Seeing is Believing, which we featured in 2023.
On the other side, we have Dean Ball, frequent guest on the show, previously AI adviser at the White House, and famously the primary author of America's AI action plan. To put my own cards on the table, I did sign the statement, because I do oppose a rush to superintelligence, especially via recursive self improvement, as seems to be roughly the current plan at multiple top AI labs. Of course, at the same time, I am more passionate than ever about AI doctors, and you'll hear a similar form of techno enthusiasm from Max. He's excited about AlphaFold, self driving cars, and all sorts of controllable AI tools, just not a fully general and autonomous digital species quite literally designed to replace us, which might lead to a potentially irrecoverable disaster. It's that sense of impending disaster that motivates Liron to make Doom Debates.
And in my opinion, one of the most interesting parts of this debate was when he asked Dean for his p(doom). Dean said 0.01%, an answer he later wrote on Twitter was made up on the spot, because he considers the concept of p(doom) to be fundamentally unserious. On this point, I have to say, much like the concept of pornography, which Max briefly invokes in the conversation, I feel strongly that the concept of p(doom) is meaningful. And given what we're hearing from the Turing Award winning fathers of deep learning who also signed this petition, I can't see any argument getting me to reduce my own p(doom) below 1%.
At the same time, Dean does make many strong points in this conversation as well. As someone who values bottom up experimentation and innovation, and who believes that patients should have a right to try potentially life saving medicines, I definitely don't love the idea of an FDA-like regulator for AI. If that were the best we could do, I might hold my nose and go for it. But the fact that this is the go-to analogy does, from my perspective, support Dean's point that it won't be easy to convert the high level petition statement to actual effective policy. And similarly, while I, of course, reject the idea that basic AI regulation means conceding the AI race to China, and it should be noted that the CEO of Chinese AI lab Zhipu AI, the subject of our last crosspost, also signed the petition,
I still have to admit that a real unilateral ban does pose competitive risks that aren't easily hand waved away. The bottom line for me is that while no one has the clarity they'd need on what superintelligence will look like or how it will be created to craft perfect policy language today, I do believe that a race to superintelligence is a bad idea, and I'm glad to have done my part to help create common knowledge that many well informed people do, in fact, feel this way. Finally, before getting started, I wanna give credit to Liron on Doom Debates. I generally don't like debates as a format, but by getting such outstanding guests as Max and Dean to focus in on what is very plausibly the most important question of our time, that is, how likely is it that advanced AI will in fact go catastrophically wrong, Liron is making the format work, and I definitely encourage everyone to subscribe.
For now, I hope you enjoy this debate on the wisdom of building or banning superintelligence, with Max Tegmark and Dean Ball, from Doom Debates, hosted by Liron Shapira.
I would argue that artificial superintelligence is vastly more powerful in terms of the downside than hydrogen bombs would ever be. So if you think of it as actually a new species, which is in every way more capable than us, there's absolutely no guarantee that it's gonna work out great for us. If we treat AI like we treat any other industry, we would then have safety standards. Here are the things you have to demonstrate to the FDA for AI, or whatever.
I think the fundamental thing to think about here is really assumptions. There are many worlds in which humans can thrive amid things that are better than them at various kinds of intellectual tasks. And I just have very serious issues with the idea that we're just gonna be able to pass a new regulatory regime, and everything's gonna go fine, and there will be no side effects. And these analogies of FDA to AI are not really very good.
It's not to say that I don't think we need something like an FDA.
But then I'm confused by why you don't think we should have the same for AI. So what's the difference between AI and what happened?
Let me just...
Yeah, let me make an uninterrupted point for a few minutes, if you don't mind.
Yeah. Yeah.
Okay. I think that there will be tons of side effects, and I think that we will stave off a lot of wonderful possibilities for the future.
Maybe the real crux of disagreement is your mainline scenario. And so let me ask both of you this question. What is your p doom?
If we go ahead and continue having nothing like the FDA for AI, yeah, I would think it's definitely...
I just kind of have this sneaking suspicion that, like, if the models seemed like they were gonna pose the risk of overthrowing the US government or anything in that vicinity that, like, I don't think OpenAI would release that model or Anthropic or Meta or XAI or Google. Like, I just don't think they would.
Welcome to Doom Debates. Today, I'm excited to bring you a debate between two of the world's leading voices on AI policy. The question at hand, should we ban the development of artificial superintelligence? The stakes are high. Advances in AI have become the key driver of our economic engine.
Artificial intelligence is increasingly facilitating breakthroughs in manufacturing, health care, education, even basic science. The prediction market Metaculus estimates that the first true AGI, the first fully general human level AI systems, will be achieved by 2033, less than ten years from now. Many experts believe that that milestone will soon be followed by the creation of artificial superintelligence, a system that surpasses the capabilities of the entire human species. That brings us to our debaters, two of the clearest voices who disagree about how society should approach these developments. On one side of the debate, we have Max Tegmark, an MIT professor who believes we should ban superintelligence development until there's a consensus that it'll be done safely and controllably and strong public buy in.
His research has focused on artificial intelligence for the past eight years. He is also the cofounder of the Future of Life Institute, a leading organization dedicated to addressing existential risks from AI and other transformative technologies. Max, welcome to Doom Debates.
Thank you.
On the other side of the debate, we have Dean Ball, who completely disagrees with banning superintelligence. Dean is a senior fellow at the Foundation for American Innovation, has served as a senior policy adviser at the White House Office of Science and Technology Policy under president Trump, where he helped craft America's AI action plan, the central document for US federal AI strategy. Dean, welcome to Doom Debates.
Thank you so much for having me.
Okay. Let's do opening statements. Max, the starting point of our debate today was a dispute between you and Dean over your statement on superintelligence, the Future of Life Institute's statement on superintelligence that was published on October 23. And the statement says: we call for a prohibition on the development of superintelligence, not lifted before there is, one, broad scientific consensus that it will be done safely and controllably, and, two, strong public buy in. So why should we ban superintelligence?
Well, if you negate that statement, then you're saying that we should be allowed to go ahead and build artificial superintelligence even if there's no meaningful consensus at all that it can be kept under control or that people even want it. Right? And if we were to say that, then we would basically be doing the most spectacular corporate welfare, 'cause we don't do that in any other industries. Yet, right now, there are more regulations on sandwiches than superintelligence in the US. If you wanna sell drugs, medicines, cars, airplanes, you always have to demonstrate to the satisfaction of some independent scientists who don't have a conflict of interest that this is safe enough.
The benefits outweigh the harms. I'm just saying we should treat superintelligence the same way. And right now, 95% of all Americans in a new poll don't actually want this race to superintelligence, and most scientists who work on this agree that we have no clue at the moment how to keep something which is so vastly smarter than us under control.
Okay. And, Dean, you oppose the public statement, and you don't share Max's views on prohibiting superintelligence. Give us your opening statement. Why do you think we shouldn't ban superintelligence?
So I think that the concept of a ban, and of superintelligence in general, is just quite nebulous, and that is the fundamental issue that I have. AI systems that could pose substantial danger to humans are, you know, they're not disallowed by the laws of physics, at the very least. I think there are really serious questions about how close those things are and how likely we are to build those things in the near future. My guess is, five years ago, if you were to try to describe general superintelligence in a law that a lot of people could agree to, you know, which would be the way that you would effect something like a ban (all of the things Max referenced are requirements that we impose through laws, right, on airplanes and drugs and whatnot), so if you're gonna have a law, you're gonna have to define superintelligence in a statute.
And I think that the problem you will run into there is that you'll define it in such a way that you actually end up banning many things that we would want. There are many ways that you could plausibly define superintelligence that would negate technologies that I think would be quite beneficial to humanity. I mean, imagine an AI system that has largely solved mathematics. Right? It solved all the outstanding problems that we have in mathematics.
It has advanced certain domains of science, maybe many domains of science, by, you know, the famous century compressed into a decade, right, or compressed into five years, let's say. It's accelerating the AI research itself that's doing that in meaningful ways, because one of the areas of science that it knows how to do experiments in is, you know, computer science and AI research. It's a better legal reasoner than you or me or anybody else. It's better at coding than you or me or anybody else. I can imagine such a system like that existing.
In fact, my guess is that such a system will exist by roughly 2030 without posing the kinds of risks that, you know, Max is worried about, which, again, I don't think are impossible. I just place a lower probability on them. And so I worry that what you would end up with in practice, if you tried to effect such a ban, would be, you know, we're gonna ban GPT n plus two. Right? That's in practice what it would mean.
So there's GPT five, and that's n, and there's GPT six, which would be allowed, and then GPT seven would be the thing where we say no. That's just, we've decided that's too scary. And so we're gonna basically ban that. And then, you know, what happens after that? Well, in order to figure out anything about whether superintelligence is safe or not, you can't just do that research speculatively.
Right? You have to actually build the thing to some extent, and put it in a constrained setting to figure out if it's safe. You have to build at least big parts of it. And once you've done that, it's like, well, okay. But there's a ban.
So only the specially sanctioned group is allowed to conduct this research. And at that point, you have a monopoly, perhaps a global governmental cartel of some kind that is developing this. And this, I also think, could potentially be dangerous. And that is, of course, assuming that you were able to get the international cooperation you would need to effect such a ban, which I also doubt. So that would be my comprehensive statement.
Hey. We'll continue our interview in a moment after a word from our sponsors. Are you still jumping between multiple tools just to update your website? Framer unifies design, content management, and publishing on one canvas. No handoffs, no hassle, just everything you need to design and publish in one place.
Framer already built the fastest way to publish beautiful, production ready websites, and it's now redefining how we design for the web. With the recent launch of Design Pages, a free canvas based design tool, Framer is more than a site builder. It's a true all in one design platform. From social assets to campaign visuals to vectors and icons all the way to a live site, Framer is where ideas go live, start to finish. And now they've added a Framer AI layer to make it all faster and easier than ever.
With Wireframer, you can skip the blank canvas and get a responsive page with structure and starter content ready to edit. With Workshop, you can create new visual effects, cookie banners, tabs, and more. No coding needed. And with AI plug ins, you can connect top models from OpenAI, Anthropic, and Google to generate images, rewrite text, generate alt text, and more. Ready to design, iterate, and publish all in one tool?
Start creating for free at framer.com/design and use code cognitive for a free month of Framer Pro. That's framer.com/design. Use promo code cognitive. Framer.com/design. Promo code cognitive.
Rules and restrictions may apply. If you're finding value in The Cognitive Revolution, I think you'd also enjoy Agents of Scale, a new podcast about AI transformation hosted by Zapier CEO Wade Foster. Each episode features a candid conversation with a C-suite leader from companies including Intercom, Replit, Superhuman, Airtable, and Box, who's leading AI across their organization, turning early experiments into lasting change. We recently cross posted an episode that Wade did with OneMind founder and CEO Amanda Kahlow about AI led sales. And I also particularly enjoyed his conversation with Jon Noronha, chief product officer of AI product pioneer and recently minted double unicorn, Gamma.
From mindset shifts to automation breakthroughs, Agents of Scale tells the stories behind the enterprise AI wave. Subscribe to Agents of Scale wherever you get your podcasts.
Okay. Max, Dean raised a few points about maybe the practical difficulties of doing this kind of superintelligence regulation, even going so far as to define what superintelligence is for the purpose of this ban. How would you respond to that?
So I'm afraid that we might disappoint you, Liron, here by agreeing more than you want, because you want the fierce debate to sort of clobber each other. I think it's actually quite easy to write this law, and I don't think it requires defining superintelligence at all. Let me explain a little bit what I mean by that. Now if we treat AI like we treat any other industry that makes powerful tech, we would then have safety standards. Right?
There are safety standards for restaurants, you know, before they can open, they have to have someone check the kitchen. So if you had safety standards for AI, they wouldn't need to define superintelligence. They would just say that, you know, if there's a system that some plausible experts think maybe could cause harm, here are the things you have to demonstrate to the FDA for AI, or whatever, that this is not gonna do. You might wanna demonstrate that it's not gonna teach terrorists how to make bioweapons. If it's a very powerful system, one of the safety standards would probably be that you have to demonstrate that you can keep this under control.
If the company selling it can't convincingly make the case that this thing is not going to cause the overthrow of the US government, then reject. You know? Come back when you can. Right? So I didn't mention superintelligence here at all.
It's the company's obligation to demonstrate that they meet the standards. And to take an analogy that might help clarify what I'm talking about here, let's talk about thalidomide for a little bit. This was this medicine that was given to women in the US to reduce morning sickness, nausea during pregnancy, and it caused over a hundred thousand American babies to be born without arms or legs. Right? So the dumb way to prevent such harm would have been if the FDA had a special rule that we have a ban on medicines that cause babies to be born without arms or legs.
What if someone comes out with a new medicine now, and the arms and legs are fine, but the baby has no kidneys or no brain? You know? That's not the way to go about it. The way you instead go about it is you ask the companies to do a clinical trial and provide quantitative evidence of what are all the different side effects that people might not want, quantify them, how many percent get each, and then quantify the benefits. You give this to some independent experts who don't have money on the line, so they can't work for the companies, for example, who look at the benefits and the harms, and they decide, is this a net positive for the American people?
And, you know, then they approve it. This is how we do regulation in all other areas, and this is why I think it's quite easy to do also for AI. In summary, you don't define superintelligence. You just define the harms that society is not okay with.
Very broadly, it boils down to demonstrating that the harms are small enough to be acceptable. And then it's the company's job to make all the definitions they want, quantify things, and persuade them, these independent experts. Does that make sense?
Yeah. I'm happy to let you guys cross examine each other pretty freely, and I'll just step in once in a while.
Okay. Cool. So, yeah. I mean, basically then, instead of saying we should ban superintelligence, what you're saying instead is we should have a kind of licensing regime, a regulatory regime of some kind, you know, with respect to frontier AI systems.
Yeah. Very much inspired by how we do it for other tech.
Yeah. Yeah. Yeah. So I'd say a couple of things about that. First of all, most preemptive sort of regulatory regimes that I'm aware of, you know, they don't generally require you to prove... I mean, you know, you can't prove a negative.
Right? So you couldn't prove that. The FAA, the Federal Aviation Administration, doesn't require you to prove that your plane won't crash. It requires you to make affirmative statements, really not about the plane itself, really, but about many subsystems of the plane. Right? So, like, the turbines of this jet engine have x y z chemistry, which conforms to x y z technical standard, which, you know, blah blah blah blah blah.
Right? And in fact, the way that a lot of times that ends up working is, like, there are layers and layers and layers of regulations. So, you know, the plane maker has to buy jet engines that are only from people that conform to certain standards, and those standards have to do oftentimes not just with, like, the object level properties of the component in question, but also with things like how information flows through the business, you know, through this company. Right?
There's all sorts of things like that. Right? In other words, if you make turbine blades for jet engines, you are probably subject to implicit and explicit regulations that have to do with, like, risk management inside of your company. And, like, who is the designated risk officer and all these sorts of things. Right?
But the point is that you have to make...
If I can just jump in, I agree with everything you said here. What the companies need to demonstrate in the safety case is the high level thing. The government wants to know how many flight hours on average do you have until a failure and so on. So it's like the companies can solve that whatever way they want.
Right? It's in the interest of the companies to not use flaky manufacturers, to have good procedures, and have people study crack formation, the physics of it, and so on, and then they'll switch to another alloy if that works better. It's the same for medicines. The government doesn't come in and micromanage, oh, this chemical is allowed. This ingredient is not allowed.
Rather, if the company has, in effect, some medicine that, you know, seems to be pretty good at something. Suppose there's a new antibiotic that seems really good against bronchitis. You know? But it contains lead and aluminum and cyanide in some small doses. People in the company will be like, you know, we're having a hard time demonstrating the safety of the lead. Maybe this works even without the lead.
You know, maybe we can swap out this thing. So all the innovation is driven by capitalism, by market forces, to come up with the quantitative risk bounds that they wanna meet. Nuclear reactors are a great example, because what the law actually says there is the company has to make a real quantitative calculation and demonstrate that the risk of a meltdown is less than one in ten thousand years to even get permission to start building it. Right? So the company has free rein to come up with whatever reactor design they want, and then they will innovate.
And whoever meets it first, though, is gonna get the big bucks.
I think it's considerably more comp... I mean, in principle, that's true. But in practice it is considerably more complicated than that, because, like, you know, there are all sorts of things, there's what's called soft law, right, which is, like, guidelines and all these other things that push people in certain technological directions and away from others. But that's actually not even my point. My point is that at the end of the day, in order to have a regulatory system like this, you have to be able to make affirmative statements about safety. And the problem, I think, would be, you know, what are the affirmative statements about safety when you consider that the systems we are talking about are, by their very nature, extremely general and already, you know...
So just as an example, like, obviously, AI systems today are being used in areas that already have regulatory, you know...
Yeah.
Structures like the kind you're describing that affect them. So this regulator would either have to be so general and have such a broad projection of authority, or it would have to be really, really narrow. And I kind of doubt that it would end up being really narrow in the context of democratic politics. Because the issue that you'll have is, like, there's going to be more than just, you know, x risk type issues. So even if you could formulate some statement about, you know, existential risk, and like, okay, you have to prove that the model will not do x y z that demonstrates catastrophic misalignment.
Okay, fine. But I would say in practice, you're likely to end up with a situation where, for example, the model cannot be allowed to result in job loss. That would be a really good example of this. And then you have, like, you know, this gets back to an article that I wrote more than a year ago called The Political Economy of AI Regulation, which is to say, because this is so general, and because the technology in its positive adoption, not, like, existential risk, not anything like that, its positive adoption is going to end up challenging many entrenched economic actors and aspects of the status quo.
And if a regulatory regime of the kind you are describing exists, then those people are going to be able to use it as a cudgel to prevent technological change that I think we would all agree (well, not all as in all people, but probably the three people in this discussion would agree) is, like, good for the world.
So I will push back in a bit on this idea that it's so hard to get started on this, but I'd love to just give you a chance first to answer a very simple question. Do you think it's reasonable to have zero safety standards on AI right now? Do you feel it's reasonable that there should be fewer regulations on superintelligence than on sandwiches now, in 2025?
Well, I mean, I certainly think we overregulate sandwiches. And just for the listener who doesn't have context, I think what Max is probably referring to is sandwiches served in restaurants, public health regulations, local public health, all sorts of things like this. Right? And it's true. Like, there are probably ways in which we do overregulate those things and probably many other ways in which we don't. I would say that generally speaking, in America, we succeed when we regulate at the right level of abstraction. Like, the restaurant that serves the sandwich has many computers in it, probably.
It probably uses computers in many different ways, including to get the ham and the bread that brought the sandwich to us. And we don't, like, regulate those computers with respect to their conveyance of ham to the restaurant. Right? Like, we just treat them as general purpose technologies that can do lots of different things.
Right. But if you go to that sandwich shop and you notice that across the street from it is OpenAI or Anthropic or Google DeepMind or xAI, if they developed superintelligence this year, which I think is highly unlikely, but suppose they did
Mhmm.
Then they would be legally allowed to just release it into the world without breaking any law, because there are no safety parameters they have to meet. Do you feel that that's at all reasonable?
Well, so, I mean, again, I wouldn't quite say... first of all, yeah, fundamentally, I think, like, you should be able to develop new technology and release it so long as you're not behaving with reckless disregard, you know, reckless conduct or gross negligence.
According to whom?
Well, so this is the thing. To say it's illegal would imply it's a violation of criminal law, which may or may not be true. But certainly, it's a violation of civil law. Right? So if release of that system were to result in physical harm, loss of property, death of any person...
Yeah. Well, human extinction, sure. But, like, again, I'm sort of skeptical that that's what we're gonna have on, you know, day one. That company is subject to common law liability, and there are common law...
Hard to do that if we're all extinct.
So, yes, in the tail risk case that we all die, then, yes, common law liability does not help you. And in general, it's true that common law liability is not a great solution for most tail risks, to the extent that the damages incurred sort of dwarf the balance sheet of even the largest companies. Right? So it's like, you killed... let's not even say killed all people. Let's say you created a pandemic.
Someone made a pandemic with your model, and we've decided that that was reckless misconduct for which OpenAI, the creator of the AI model, bears some form of liability. Well, that's a lawsuit you can bring against them. But if the damages are, you know, a $100,000,000,000,000 or something, then, you know, it's very unlikely that you're gonna be able to recoup that amount of money from OpenAI even with all the money that they have. You'll bankrupt OpenAI and still be, you know, not fully compensated for the harms that you suffered. So it is true that, as a general matter, tail risks are one of the classic examples of where government, you know, where public policy outside of, like, reactive liability makes sense.
So I don't dispute that. Now I think when it comes to the sort of foreseeable tail risks that AI models might pose, the current ones that people talk about are things like catastrophic cyber and bio. I think there's a lot of things that you can do downstream of that that avoid creating this large scale regulatory regime.
I mean, a lot of people talk about extinction too, wouldn't you say?
I mean, a lot of people do, but there still is not the kind of persuasive evidence for extinction in terms of, like, not just theoretically, but, like, mechanically. How would that work? I just don't think we've seen that to nearly the same extent.
We haven't seen any extinction yet, of course, by definition. Otherwise, we wouldn't be talking here today. But, I mean, I can very clearly, frankly, put it on the risk list. And I'm thinking it's interesting what you mentioned there about the pandemic example, because I think it's quite relevant. As you know, it's very controversial right now whether COVID-19 was actually the result of gain of function research funded by the US government and Peter Daszak or not. But if you just consider that there's some probability p that Peter Daszak and his research group did create it with help from others, then, you know, if someone were to sue them for these millions of deaths that it cost, it would be pretty meaningless, because Peter Daszak doesn't have that kind of money. The university where he worked doesn't have that kind of money. And for that reason, the US government has now kinda clamped down on gain of function research again and said no more of this gain of function research until we better understand what you're doing. And we also have biosafety labs: level one, level two, level three, level four.
So if you're doing something even that seems less scary than what they do, or did, you know, you have to do it in a special facility. You have to get some pre approvals. And then you contrast that now with digital gain of function research. We had Sam Altman in a press conference the other week being so excited about building automated AI researchers. Right?
And ultimately, a lot of people are excited about recursive self improvement, which is very analogous to biological gain of function research. Why should we have regulations on biological gain of function research and still be content with having no binding regulations at all on digital gain of function research? That makes no sense.
So, so...
Hey. We'll continue our interview in a moment after a word from our sponsors.
The worst thing about automation is how often it breaks. You build a structured workflow, carefully map every field from step to step, and it works in testing. But when real data hits or something unexpected happens, the whole thing fails. What started as a time saver is now a fire you have to put out. Tasklet is different.
It's an AI agent that runs twenty four seven. Just describe what you want in plain English, send a daily briefing, triage support emails, or update your CRM. And whatever it is, Tasklet figures out how to make it happen. Tasklet connects to more than 3,000 business tools out of the box, plus any API or MCP server. It can even use a computer to handle anything that can't be done programmatically.
Unlike ChatGPT, Tasklet actually does the work for you. And unlike traditional automation software, it just works. No flowcharts, no tedious setup, no knowledge silos where only one person understands how it works. Listen to my full interview with Tasklet founder and CEO, Andrew Lee. Try Tasklet for free at tasklet.ai, and use code cog rev to get 50% off your first month of any paid plan.
That's code cog rev at tasklet.ai. Being an entrepreneur, I can say from personal experience, can be an intimidating and at times lonely experience. There are so many jobs to be done and often nobody to turn to when things go wrong. That's just one of many reasons that founders absolutely must choose their technology platforms carefully. Pick the right one, and the technology can play important roles for you.
Pick the wrong one, and you might find yourself fighting fires alone. In the ecommerce space, of course, there's never been a better platform than Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all ecommerce in The United States, from household names like Mattel and Gymshark to brands just getting started. With hundreds of ready to use templates, Shopify helps you build a beautiful online store to match your brand's style, just as if you had your own design studio. With helpful AI tools that write product descriptions, page headlines, and even enhance your product photography, it's like you have your own content team.
And with the ability to easily create email and social media campaigns, you can reach your customers wherever they're scrolling or strolling, just as if you had a full marketing department behind you. Best yet, Shopify is your commerce expert with world class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you're ready for Shopify. Turn your big business idea into cha ching with Shopify on your side. Sign up for your $1 per month trial and start selling today at shopify.com/cognitive.
Visit shopify dot com slash cognitive. Once more, that's shopify.com/cognitive.
I'm trying to answer... I'm trying to actually go back to the first question you asked me, which has to do with safety standards. But first of all, let me just say, yeah, what you said is completely consonant with my assertion that tail risks are not typically contemplated very well by the common law liability system. But with that being said, you know, I think, like, when you look at, for example... actually, you can do essentially automated gain of function research with a nucleic acid language model today. Right? You can basically simulate the evolutionary process that allows for more virulent viruses or whatever else.
And we've seen, you know, the early stages of this from people like the Arc Institute in California. Those are not, like, ChatGPT style models, but it's the same architecture trained on nucleic acid sequences. Right? So, like, we know that's a thing. I think as a practical matter, though, the issue that you have is that those things are bits, and it's very, very hard to just purely regulate bits.
So what do we do? Well, instead of imposing regulations at the layer of the model, which is a really difficult layer of abstraction on which to do it, in the same way that, like, we don't tend to place regulations at the layer of, like, computers or of software or of transistors. Because these things are really important general purpose technologies, and undoubtedly all three of those things have killed lots of people at this point. We don't regulate at that layer of abstraction because it's not very practical. It's not a good unit of abstraction.
It's not a good conceptual unit of account for regulation. So what do we do? Well, there are all these choke points in the physical world. Some of them are, you know, labs of certain biosafety level categories, BSL three and four, as you've said.
Some of it is at the layer of nucleic acid synthesis screening, which, you know, basically, you have to say, well, if you're gonna order the creation of a certain kind of nucleic acid, we're going to, as a matter of policy, require that you screen that against some sort of methodology that allows us to test for whether or not you're trying to make a pathogen. And again, I worked on some of those policies when I was in the Trump administration. So these are all things that we do. Again, those are safety standards that exist, that are emerging, that are kind of downstream in many ways of advancements in AI. I think the urgency of policies like nucleic acid synthesis screening goes up because of AI.
So I think when it comes to safety standards for something at the very general level of a sort of generalist artificial intelligence model, I think in the long run, we will build those standards. Right? Like, I don't think anybody across the spectrum is saying that there won't be standards for safety, security, etcetera, of large language models.
What do you mean by long run? Because Sam Altman talked about a thousand days to superintelligence, and he might be wrong. But I'm curious if you're thinking less than three years or more than three years.
I'm thinking that it will happen gradually over the course of the next decade or maybe a...
A year after superintelligence, maybe.
After superintelligence, maybe. Yes. But I think the broader point here is that this is traditionally the way that we kind of do things in the United States: you build a technology. Right? You gain experience in practice with its utility, and you sort of diffuse it throughout the economy in this very complicated way.
Sometimes there are demonstrated harms, and when there are demonstrated harms, the first thing we do is we deal with that through the liability system. And, again, I would point out that OpenAI, Google, and other companies have, not copyright cases, but common law liability, you caused physical harm to me type of liability cases against them for chatbots. Right? Yeah.
And I think that at least some of those, the companies are likely to lose. I mean, you know, they'll be determined by courts. And then gradually, over time, we coalesce around a set of standards that are shaped by experience, that are broadly agreed to by many different actors. And then eventually, we codify those in the form of government standards, and eventually that becomes part of an international standards body. Right?
Nobody is disagreeing that that is a process in which we need to invest substantial time and money and energy. And in fact, I would say the Trump administration should get points in your book, because the administration renamed what was called the AI Safety Institute by the Biden administration to the Center for AI Standards and Innovation, to reflect this reality that the ultimate goal of an organization like that is to produce technical standards. So you have to produce these things, and it takes time to do. But when you actually have these things, they are coherent, because they're formulated through experience.
I think the problem is when you try to change the sequencing of that and try to come up with standards sort of without any experience, sitting in, you know, the ivory tower or the regulator's conference room. You, I think, have a tendency to create standards that are unrealistic and burdensome.
Mhmm. I completely agree with you, Dean. It's great that the current administration is taking biosecurity more seriously, and I get a sense that they're also taking AI assisted hacking more seriously. I completely agree with you also that this is how things have been done in the past. Let technologies come up, let people invent the car, a bunch of people die, and then gradually, you just mandate the seat belt and traffic lights and speed limits and other things to make the product safer.
But I think it's important to remember that science has been getting progressively more powerful from ancient times until now. And as a result, technology also keeps growing exponentially in its power. Right? So at some point, the technology gets powerful enough that this old traditional strategy of just learning from mistakes goes from being a good strategy to being a bad one. I think it worked.
It served us well for cars. It served us well for things like fire. We invented fire first. We didn't regulate it to death. It was only later that we decided to put fire codes in, have fire extinguishers and fire trucks and stuff like that.
I would argue that nuclear weapons are already above this threshold. We don't wanna just let everybody who wants to buy hydrogen bombs in supermarkets, and then, oopsie, you know, that didn't go so well, we had a nuclear winter now, and ninety nine percent of Americans starved to death, let's regulate. For those things, it was very obvious to people that one mistake was one too many, and we already have a bunch of proactive laws about how to deal with hydrogen bombs.
In fact, even despite all your work in the government, you are not allowed to buy your own hydrogen bomb, even though I would trust you with it. You know? I know you're a nice guy. I'm not allowed to start doing plutonium research in my lab at MIT even if I pinky promise that I'm gonna be careful, you know, just because one mistake there is just viewed by society as one too many, and they know I don't have enough cash to pay the liability if I get sued afterwards. And I would argue that artificial superintelligence is vastly more powerful, in terms of the downside, than hydrogen bombs would ever be.
Though there have been some pretty careful calculations recently showing that, in the worst case scenario, only about ninety nine percent of all Americans would die and starve to death if there's a global nuclear war with Russia, so there are still, you know, three million who have survived. Whereas if we lose control of artificial superintelligence, because somebody sloppily built a new robot species that just kinda took over, you know.
It really is game over in a way that nuclear war wouldn't be. So the way I see this is not that there's anything wrong with the traditional wisdom for how to regulate things. I think that's very appropriate for all tech sort of below a certain risk threshold. And we're very lucky with AI that so many of the great benefits we have are not particularly risky. You know?
AlphaFold, an absolutely superhuman tool for folding proteins, great for drug discovery. You know? Autonomous vehicles, they can soon save, I believe, over a million lives every year from these pointless road deaths. You know? There's so much productivity that can be gained from building controllable AI tools.
And for those, it's, I think, very feasible to continue having the sort of liability system you're describing, the traditional way. You know? Learn from mistakes and then fix. It's only the fringe stuff, in particular artificial superintelligence, which is on the wrong side, I think, of that threshold, where right now there's not much upside, frankly, in my opinion, to sprinting to build superintelligence in three years.
If we could do it safely in twenty years instead, we would be much better off just doing controllable tools until then. And that's why it irks me so much that I think people conflate these two things a lot. I'm not saying you do, but a lot of folks I've spoken to on the Hill do, I think, and think that the only choice we have is more AI or less AI, go forward or stop. Whereas I see the development instead as branching into two paths. You know, either we continue going very aggressively forward to build all these great tools, but just insisting that companies demonstrate to us that they are controllable tools, versus going all in on building superintelligence.
I have to give you a compliment, Dean, also. I was so pleasantly surprised when I read the action plan that it didn't mention the word superintelligence a single time, and not even AGI. And I don't know if you get a hundred percent of the credit for it or fifty percent, whatever, but I think that was really wise, because it highlights that there's so much great stuff we can do with AI tools without having to even get into the whole question of superintelligence. I don't know if there's anything you're allowed to share with us about that.
Well, you know, a lot of people contributed to the action plan, but thank you very much. I appreciate it. You know? But actually, I'd say the reason we didn't use terms like AGI and superintelligence in the action plan, at least from my perspective, is because it's really hard to know that we're talking about the same thing. Yeah.
You know? And so this is where I think, you know, maybe we ought to spend some time, is this question of, like, what exactly are we talking about. Because I'll give you an example of an area where I've had an evolution in my thinking. About a year and a half, two years ago, you know, I was opposed to this bill called SB 1047, which was a California state bill that had to do with, you know, it was regulating models with respect to potential risks relating to extreme cyber events and bioterrorism and other sorts of bioweapon events that could cause more than five hundred million dollars in damage. Yeah. And at the time, you know, the frontier models were things like GPT-4o, things like, you know, Gemini 1.5, Claude 3.
And it wasn't obvious to me, sitting at that time, you know, say, in 2024. Everyone was like, well, the next time we crank the pretraining wheel, the next time we go up another order of magnitude in terms of pretraining compute, we'll get to models that do pose these very serious bio and cyber capabilities. And that wasn't quite clear to me, because, you know, I was sort of thinking, like, well, I don't know. Like, you're talking about minimizing cross entropy loss here on the broad, you know, Internet corpus.
Like, is that really gonna, like, create something that can cause, like, a bioweapon? And I said something, though. I said, you know, if you showed me a model that had demonstrable system two reasoning, system two being deliberative, reflective reasoning, and that sort of led to the performance that I think it would, then I would change my mind about some of this stuff. And then, you know, right around the time SB 1047 was vetoed, OpenAI released a model called o1, which did exactly this. It had this system two reasoning.
And the performance on cyber, on mathematics, and on a lot of different areas of science, including biology, went way up. And at that point, shortly after that model, I said, this changes my risk calculus with respect to catastrophic events like bio and cyber. Because it's clear that this reinforcement learning and inference time compute based paradigm is going to rapidly lead to capability increases in some specific areas that we're worried about. And I can paint a really clear picture, because I can go from this model can reason about biology, it can also, by the way, use tools like AlphaFold. Right?
It can also itself use other biology, you know, machine learning tools. I can draw a very clear picture from that to, you know, a virus being synthesized in a lab somewhere. The virus self replicates. It infects one human host. Now there's a lot of complexity there.
Right? It doesn't mean that that kind of thing is guaranteed to happen. But it means that if you're doing the expected value calculation and you're thinking about, like, okay, plausibly the chances of this maybe causing a pandemic, even if the chances are still low, the chances just went up a big fraction.
And so we're going to need some degree of targeted regulation to deal with this topic. Yeah. And that is why I was supportive of SB 53, which in many ways was a somewhat more tailored version of SB 1047 that came a year later. And that wasn't just because the bill changed to become a little bit more favorable to things I care about. It was also because the facts on the ground changed.
So why does this matter? Because there is a clear link between emergent model capabilities and an actual harm that is cognizable to me. I think the issue with the sort of human extinction thing is that it's very hard to demonstrate in concrete ways, like, what this looks like. So to this point, like, if you could formulate what you wanted, like, if you could formulate things that would make you feel better, in affirmative, technical, like, empirical things we could say about models.
Right? Like, okay. We've stress tested the model in this way. We've run this eval or we've done this thing, and we have shown that the model passes what we view as an acceptable threshold. Then I would totally be willing to say, like, yeah, let's make that an eval. You don't even need to pass a law.
Make it an eval. I kinda promise you, if you made that an eval and if you got, like, enough of the credible people around to, like, support it, I kinda think that, like, labs would probably just run it, like, without a law having to pass. Yeah. So, like, why not just do that?
Yeah. So you're raising a number of successful policy approaches from the past here. So let me just summarize some good things I think you said there and add a little bit to it. You know? More broadly, with how we regulate things...
Let's take drugs again. You know? To not drown companies in red tape and smother innovation, one tends to look at just rough plausibility, and then divide all the products into classes. We have class one drugs, class two, class three, class four. Right?
So there are much higher safety standards for fentanyl or other new opioid drugs than there are for new cough medicines for adults, you know, or for new vitamins. You know? So if you take the same approach to AI, well, then what you would say is, if there's some new software that translates English into Chinese or Japanese, the most embarrassing thing that could probably happen is that it's a sort of repeat of the Monty Python skit with the fake Hungarian dictionary.
You're seeing that some people get a red face. Whereas if you have an AI that is really state of the art in protein synthesis or DNA synthesis, you know, it's pretty obvious that that should be subject to higher levels of scrutiny. And then what we do in industry now is we let the companies make the safety case, rather than the government. So you said there that it's hard to foresee exactly how superintelligence, if it's just smarter than all humans combined, would kill us all. But, you know, being a scientist for so many years has made me really humble about these things.
It would have been really hard also for the makers of thalidomide to predict that that would cause babies to be born without arms or legs when all it ostensibly did was reduce nausea in their moms. Right? We didn't actually understand how that would happen, but it did happen. And because of that, it would have been pretty reasonable to just say, well, okay. You know, we've noticed that there are a lot of things that can cause birth defects.
We don't understand exactly how it works. So before we try it on all American mothers with no prescription, you know, let's try it on a small number of mothers, see what happens to their babies, and then kind of go from there. So you shift the burden of proof away from politicians having to articulate why this is going to be dangerous. And, two, the companies would just have to do some basic research to make the safety case. Right?
And I think, again, if we did this with AI today, if I had a magic wand and we created an FDA for AI, you would have class one, class two, class three, class four, or AI safety level one, two, three, four systems, a little bit along the lines of what many AI companies already have in their voluntary commitments. Right? And there would be very, very easy requirements for ASL one and so on. But for the higher level systems, the companies would have to do a lot more to quantify the safety case. And I think what would happen then is we would end up in a golden age of AI progress, where we would soon get flooded with all sorts of new medical treatments and amazing autonomous vehicles, great increases in productivity.
And the one area that would get slowed down noticeably is precisely the race to build actual superintelligence, where I think nobody would be able to make a safety case yet. And I think that will be just fine. You know? If we have to wait twenty years to get that done properly, it's way better than racing to it and bungling it and squandering everything.
Liron, it looks like you wanted to jump in.
Honestly, you guys are doing such a great job. So I don't know how much value I could add, but I'll give it a shot to orient the viewers. So, yeah, you guys were talking about how to regulate these new AIs. Dean, in your case, like Max pointed out, when I read your AI policy document, you know, America's AI action plan, it doesn't really mention superintelligence. Do you think that is a wise way to go, to basically just not look at the possibility of superintelligence currently when making policy, or do you think we should do anything to prepare for the possibility of superintelligence?
Well, so, I mean, I should say, you know, the action plan, in the sort of AI policy public world, is very heavily associated with me. But, you know, of course, the action plan was written by many people within the government. I played a big role in it for sure, but, you know, by no means the only one. It was not my unilateral product for sure. And I think one thing I would say there is, like, I think that's part of the reason the action plan doesn't talk about superintelligence, whereas, like, my Substack does, you know, from time to time talk about superintelligence. Because it's very hard to build consensus in a document that has so many authors as to what we really mean.
And this maybe gets into you know, again, it goes back to where my concerns are with, like, laws and drafting and exactly what you what you mean and what you don't mean. I think about a model like GPT seven, this sort of ostensible GPT seven. And I think to myself, man, like, if this is a model that, like, advances the frontiers of science in many different domains and solves a lot of different math, you know, math problems that have flummoxed humans in some cases for centuries, is better at sort of legal reasoning and all these other things than than than encoding than any human, doesn't seem inherently dangerous to me. And also seems like like, how is that not super intelligent? Right?
It's not Bostromian superintelligence. Right? It's not that specific definition. But I guess my view is that that concept of superintelligence was created quite a long time ago in the grand scheme of things, with respect to how fast AI advances. And it's not obvious to me.
I think that concept of superintelligence was a really useful way of thinking about advanced AI systems. You know, Bostrom wrote the book Superintelligence in 2014, I wanna say. Like, eleven years ago. Yeah.
So, you know, Dario talks about this sometimes with respect to AGI, Dario Amodei, CEO of Anthropic, for the listener, where he's like, you know, AGI ten years ago was like, we're driving to Chicago. But then, once you actually get closer to Chicago, it's like, okay. Well, what neighborhood are we going to? What street? What's the house number?
Etcetera, etcetera. And I think that as we get closer, we actually need to develop new and more specific abstractions for what we are talking about. Because there are all sorts of things where I think we will probably, in the fullness of time, have really specific kinds of technical standards and maybe even statutory requirements for what you can and can't build with AI. So one thing I wanna be very clear about is, I'm not saying this needs to be unregulated for all time. In fact, I would say, you know, you made the point earlier about how we regulate different medicines with different levels of
Rigor.
Yeah. Rigor, based on their potential risks. I think we already do that with the frontier language models. Right? Because, like—
There are no binding regulations right now in America for anything.
There are tons of binding regulations on frontier—
There's no binding regulation preventing people from launching things, and there's no label afterwards. Right? As opposed to drugs. Right?
And I think that's an interesting distinction for the listener. You can't release any drug in The US until you've talked to the FDA about it.
Well, not quite. Not quite. Actually, this gets into technical definitions, and these things matter. You can release, for example, CRISPR engineered bacteria without consulting the FDA, because those are probiotics according to the statute. So a company called Lumina released a CRISPR engineered bacteria that you're supposed to brush your teeth with, which will ostensibly eliminate tooth decay.
You're infecting yourself with a bacteria that you'll be infected with for the rest of your life, and every person you ever kiss will also be infected with it.
I'm just saying, though, that, like—
Yeah.
You know, there's a lab in Wisconsin that's been taking this bird flu strain that kills ninety five percent of humans but is pretty harmless because it's not airborne, and they've been working on trying to make it airborne. So there's some room for improvement there. But I think we agree on the basic situation here: you can't open your restaurant or release a new type of opioid before you've been FDA approved. Obviously, we can have some differences of opinion about things, but there are some things which I think are more in the confusion category, which are just really helpful to clear up. And one of them is around definitions. Whenever you have any term historically that starts to catch on, every hype stirrer is gonna try to latch onto it and have it mean something else. Right?
Mhmm. So when Alan Turing said in 1951 that if we build machines that are way smarter than us, the default outcome is that they take control, and when Irving J. Good talked about superintelligent machines and recursive self improvement in the sixties, the definition of superintelligence implicit in that was obviously that they could do everything way better than us, which meant that they could also do better AI research than we could. They could build their own robot factories, make more robots that didn't need us anymore, and therein lies the risk. After that, I agree with you.
Right now, there's so much hype and BS about this. Mark Zuckerberg talked about superintelligence in a way that almost made it sound like it has something to do with headsets and glasses, and we have so many different ways that people have redefined AGI from the original definition. I don't know if you saw the paper that I was involved in, that we did with Dan Hendrycks and Yoshua Bengio and many others, on defining AGI and superintelligence. We welcome people to come up with other actually empirically useful definitions. But we found with this definition that we're absolutely not even at AGI.
GPT four was 27% of the way to AGI. GPT five was 57% of the way there. So there are still a lot of areas where today's best AI systems really suck. Long term memory, for example. But we're getting closer, and I think that if we're thinking about only putting the first FDA style safety standards on AI in three or four or five years, there's some reasonable chance that that'll happen only after AGI and maybe even superintelligence have been created.
Right? And that would be, I think, a pretty big oopsie for humanity. So I think there are very useful, clear definitions of what we mean. And as I said in the beginning, the way to write a law is not to define superintelligence and ban it, but instead to ban the outcomes that you don't want.
You know, something overthrowing the US government, something making bioweapons for terrorists, which are very easy to define. And as soon as that law is in place, it's gonna spur just massive innovation in the companies. I love comparing pharma companies' budgets with AI companies' budgets. The leading AI companies now spend maybe 1%, give or take, on safety. Whereas if you go to Novartis or Pfizer or Moderna, they spend way more than that on their clinical trials and on safety, because that's the financial incentive.
Right? They're in a race to the top. Whoever can be the first to come out with a new drug that meets the safety standards makes a ton of money. People really respect the safety researchers in those companies. They don't think of them as whiners who slow down progress.
They think of them as people who help them win the race against the other companies to make the big bucks. Right? So I think as soon as we start treating AI companies like we treat companies in other industries, we will incentivize amazing innovation.
Can I also throw out a question? I wanna clarify Max's nightmare scenario here, because I think that's important to frame the discussion. Max, I think you're not even just concerned about something like thalidomide, where a bunch of people die, hundreds of thousands or whatever it was. You're concerned about this runaway process where it just becomes too late to regulate forever. Is that fair to say?
Yeah. I can take a minute and just clarify a little bit for listeners who haven't thought so much about this. Many, many times, humanity has been thinking a little too small. Like, people thought nuclear weapons were science fiction until they suddenly existed. People thought going to the moon was science fiction until we did it.
And from my perspective as a scientist, as an AI researcher, if you think of the brain as a biological computer, then there's no law of physics saying you can't make computers that are better at all tasks than we are. And a lot of people used to say, yeah, but that sounds so hard. It's probably decades away. In fact, most professors I know thought, even six years ago, that we were probably decades away from making AI that could even master language and basic knowledge at human level, and it turned out they were all wrong, because we already have it now in systems like ChatGPT and Claude 4.5. So if we consider what would happen if we actually built huge numbers of humanoid robots that were better than us at all jobs, including research, including mining, including building robot factories, and so on, we would have built something which is not just a new technology like the printing press, but really a new species, because these robots can build new robots in robot factories, and they don't need us anymore.
It could be great. It could mean that we don't have to do the dishes anymore and we get to live in abundance with them taking care of us, but it's not guaranteed. And Alan Turing, as I mentioned, really the godfather of our field, said in '51 that the default outcome, he thought, was them taking control. We have the two most cited AI professors on the planet, Geoff Hinton and Yoshua Bengio, saying similar things today. And so if you let go of this idea that AI is like the new Internet or whatever, and you just think of it as actually a new species, which is in every way more capable than us, there's absolutely no guarantee that it's gonna work out great for us.
And I'm not just talking about how we obviously couldn't get paid for doing work, because they could do it all cheaper. I'm talking about how we wouldn't necessarily have any say after that on what happens on the planet. A lot of people are working on this. It's called the control problem, or the alignment problem. There's broad scientific consensus that they're not solved yet.
You know? So this is the scenario that I think we will end up in if we just race as fast as possible to build these superintelligent machines, rather than instead focusing on the controllable tools that can cure cancer and do all the other great stuff, and taking it nice and slow with the things that we don't yet know how to control.
So okay. There's a lot there that I think I can respond to. First of all, I would start just by pointing out that, you know, you correctly observed that lots of people will take terms like superintelligence and redeploy them to mean completely different things. I would submit to you that maybe Sam Altman, when he talks about this existing in three years, is maybe doing a bit of the same thing. And so,
Plenty of hype, full stop.
You can't talk out of both sides of your mouth. You can't say, well, this is all hype, but also these people say they're gonna build it tomorrow. You know? You have to pick one. But the other thing I would say that's more serious is—
Although, I'm concerned about this regardless of whether it happens in three years or in ten years.
Me too. Sure.
The key thing is just, I mean, I think right now we're closer to figuring out how to build superintelligence than we are to figuring out how to control it.
Okay.
I think the only way we fix that is simply by making sure no one is allowed to build it before it can be controlled.
Okay. So let me just respond now. The fundamental difference here is that I am saying the technology will be regulated in a wide variety of different ways, which are fundamentally and mostly reactive. Right? That doesn't mean we won't pass laws.
There are already laws that I've supported which have to do with AI regulation and which I don't think impose substantial burdens on development. I would also say the development of this technology is a national security priority.
Great.
It seems like a really big cost to impose on ourselves, to self consciously slow ourselves down when others are not doing that. But I'm not even gonna, I think that's a valid point, but that's not even where I wanna go. Okay. Instead, I feel like our crux is you kinda want this precautionary-principle-based preemptive regulatory regime that would require some group of people to say affirmatively yes before you're allowed to do something. Just, like, build—
Just like we do it with pharma and every other industry.
Yeah, which is what those are. I think there are huge costs associated with a regulatory regime of that kind.
I don't think the government could do it very well. As someone who has observed lots and lots of these regimes play out, I think it's very possible that by doing that in practice, you would actually end up not just being worse for innovation, but being worse for safety.
Are you arguing we should close the FDA, or did I misunderstand you?
I mean, I would say the FDA is an organization that is in need of deep and profound reform. Because one of the things that happens when you impose a top down regulatory regime like this is you lock in all sorts of assumptions that you have about the world. So let's take the FDA as an example. Mhmm. The FDA—
Before you give this answer, which I'm super interested in: it's very different to say the FDA needs to be reformed, less regulatory capture, etcetera, than to say it should be shut down. So are you saying that we're worse off having it than we would be if we didn't have it at all, or are you arguing simply for a better FDA?
Well, let me just explain where I'm coming from here, because I think these analogies of the FDA to AI are not really very good. It's not to say that I don't think we need something like an FDA, or that I don't think we need to test drugs before they go into people.
But then I'm confused about why you don't think we should have the same for AI. So what's the difference between AI and that?
Let me just, let me make an uninterrupted point for a few minutes, if you don't mind. Okay. Okay. So first of all, when it comes to the FDA, we have this huge problem right now with a lot of drugs, which is that what we have realized after several decades of modern science, as opposed to when the FDA was made a hundred years ago, is that diseases are way more complicated than we thought they were.
They're not really discrete things. There's kind of no such thing as cancer, and there's kind of no such thing as Alzheimer's. They're much more complicated, broader failures of very complex biological systems and circuits. And the issue you run into with that is that you need highly personalized treatments in order to solve stuff, because your cancer is different from my cancer, is different from someone else's cancer. And the FDA's regulatory regime turns out to be entirely unsuited to deal with that, because it was based on this sort of industrial-era assumption that diseases manifest themselves the same way over large populations of people.
So what you wanna do is test it over a big population and get average statistical results, as opposed to safety results for one person. And what that means is that what we have locked into place is an entire economic structure for the way we treat disease that is wrong for modern science. And it's really non obvious how we change it, because there are a lot of entrenched interests associated with the current system, including the people who run the clinical trials, which we operate at great expense, hugely more expensive than they should be. Right? So that would be one of a huge number of examples of problems that can manifest themselves with top down regulatory regimes of this kind.
And so the idea that we need such a thing for AI: I'm not necessarily saying we don't, I'm saying that doing that carries an enormous cost with it, and I don't think we really have the evidence that that cost is worth paying with respect to AI, compared to the many benefits we get from not doing that and instead regulating it much more like we have regulated things like the computer and the Internet and software and many of the other general purpose technologies, which have actually worked and grown and made our lives so much better. Not to say that medicines haven't, but those have been the real growth areas. And, really, a lot of why medicine has done so well has to do with software and computation and the Internet and things like this, as opposed to pure object-level advances in biology, which is why we're gonna cure cancer on chips that were originally designed to play video games. So I guess I'm saying there's a high burden of proof.
Not to say it will never be met, but it hasn't been met. Every single top down regulatory system we have carried with it a similarly high burden of proof. And I think that, to go back to the ban superintelligence statement, if you made a statement that said we need to investigate such and such, or we need to come up with the guarantees we want AI labs to be able to make in terms of empirical evaluations about their models, I don't know if I'd sign on to it.
It would depend on the specifics, but I certainly would not have had the kind of visceral negative reaction that I did to the ban superintelligence statement.
Cool. A lot of good stuff in there. Let me pick out three things I'd like to respond to: one about regulation, one about perceived vagueness, and then one about national security that you brought up. Sure.
On the first one, on regulation, it sounds like we're actually in agreement that even though you would like to see reforms of the FDA, you would not like restrictions on biotech to be completely eliminated. You would not want people to be able to do biosafety level four research to make that ninety five percent lethal bird flu airborne, for example, just because it's cool, and people can sue them later. You would like there to be something. Whereas for AI, you feel there should still be nothing to prevent companies from deploying things yet. But you're open to it.
Maybe in three years, four years, you want us to think more about it. And I guess my position there is that if someone releases actual true superintelligence that takes over the world, it's gonna be too late to regulate it then. And then on the second point, on vagueness, this is really important. Many people have said to me about this statement we put out on superintelligence: why isn't it written much more concretely, so you can make a law out of it or something like that? And that was very deliberate, because if you look historically in The US at when we've had new laws passed, like, for example, a law against child pornography—
You could have pushed back and said, if someone says they're against child pornography, well, that's too vague. How do you define child? Is it 16 or 18? And how do you define pornography? That's pretty complicated.
Right? And in the law, you can't just say, oh, you know, you know it when you see it. But I think that would have been—
Interestingly, you can.
Because—
We can do that in law.
But once there started to be a broad consensus that we need some kind of ban on child pornography, that created the political will for experts to sit down and hash out all these details. And this is something you are very, very good at. Right? Looking at how you would actually draft the laws.
And the idea with our statement was very analogous. We see that 95% of Americans don't want a race to superintelligence. A lot of people are super excited about AI tools but view the idea of losing control of Earth to a new robot species as kinda dystopian, including David Sacks, no less. Right? So if we can establish public knowledge that most people actually don't want an uncontrolled race to superintelligence, just like most people want some kind of ban on child pornography—
Then that can create political will, where brilliant policymakers like you sit down and talk to all the stakeholders and come up with carefully crafted language for how this would actually work, and whether there should be a new agency. So, in summary, the vagueness was not something I view as a bug; I think it was a feature. What we're going for here is just some moral leadership, basically: we would like there to be some kind of restrictions on a race to superintelligence.
Let me jump in for a sec, because I think you guys may actually dovetail more on policy itself than it's sounding like. Maybe the real crux of disagreement is your mainline scenario of what things would look like if we just kinda went on cruise control and didn't do much more than we've already done in the way of policy. And so, Dean, let me ask you this question. What is your p(doom), defined as: if we just kinda let AI play out, not layering on additional regulation, waiting ten years, what's the probability that it goes wrong and we get this runaway superintelligence that's now too late to control or regulate? What's your p(doom)?
Doom being defined as human extinction?
Yeah. Like, it's a catastrophe of extinction scale. So maybe half the human population dies and we go back to being cavemen, or something extremely catastrophic, or even extinction.
Would a permanent 1984 also count?
I mean, if what we're talking about is AI systems taking control over the world and killing large numbers of people, yeah, my p(doom) is very low. It's sub 1%. It's 0.01% or something like that. It's very low.
That's not to say there aren't all sorts of other outcomes from AI that seem very bad, that seem way more plausible to me, that are things I work on a lot. But the specific doom scenario just doesn't really seem all that likely when you think about many different things. One area would be, look, I think that if you passed a law that said a group of people will have to take a straight up or down vote, and the results of the vote will be public, you know, a group of people, the Supreme Court, let's just say. We'd send it to the Supreme Court of the United States.
And the Supreme Court has to look at every frontier language model release, and they have to take a vote on: do we think this model is likely to take over the world? I wouldn't really support that law, for a lot of different reasons.
Me neither.
I would not be particularly concerned about that if that were literally the law. The problem is that it's not. And I think this gets back to why the superintelligence ban statement was written in the way that it was, which is that a lot of people who signed that statement, I would predict, have a much more nebulous set of concerns about AI than the very specific ones, Max, that you have. And it's not to say, by the way, that you don't have other concerns. Right?
I'm not saying you're not worried about misinformation or deepfakes or job loss or whatever else. But I think we would both agree that the job loss thing is really complicated. Right? Matt Walsh, the conservative influencer, had a tweet a couple of days ago about how AI is gonna cause, like, 10 million job losses, or 25, I forget what it was.
Maybe he said 5 million jobs over the next ten years or something like that. And I was like, you know, that's an extremely optimistic scenario in the grand scheme of things. Because if you just focus on what gets eliminated, sure, eliminating 5 million jobs is not that much, but you assume some jobs are created too; the economy creates millions of jobs and destroys millions of jobs every year. Right?
That would be a very slow rate of change. Actually, if it only destroyed 5 million jobs over the next ten years, that would be low compared to literally just the normal churning of the economy. Right? So the job thing is certainly complicated, but you can imagine a regulatory regime that was much more like, well, we have to check what the socioeconomic impact of this is gonna be. We have to make sure it doesn't harm blah blah blah.
We have to make sure it doesn't, you know, all these other things, which is not to say those are unserious issues. But can we agree, maybe, that there's a plausible version of GPT seven that's really good and prosocial and that might also displace a lot of jobs? And that if we had a regulatory regime that said we need to take a vote on whether this will take over the world, it would be nine to zero in favor of this is not gonna take over the world. But if we had a regulatory regime that was staffed with, say, union representatives and various other representative stakeholders, and that group of people's task was: do you think this will be good for the economy? Do you think it could create job loss? Do you think it could be dangerous, more nebulously?
That group of people might vote against the release of that thing, and that might actually end up being a bad outcome. Do you see that failure mode? Do you believe that that failure mode is a real one?
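For readers who want to see the arithmetic Dean is gesturing at with the job-churn comparison, here is a rough back-of-the-envelope sketch in Python. The five-million figure is just the hypothetical from the conversation, and the baseline churn number is an approximate, hedged assumption on the order of annual US layoffs and discharges; neither is a forecast.

```python
# Rough back-of-the-envelope version of the churn comparison above.
# All figures are assumptions for illustration: the "5 million jobs over
# ten years" is the hypothetical from the conversation, and the baseline
# is a ballpark on the order of 20 million US layoffs and discharges per year.

ai_job_losses_total = 5_000_000          # hypothetical AI-attributed losses over a decade
years = 10
ai_losses_per_year = ai_job_losses_total / years     # 500,000 per year

baseline_layoffs_per_year = 20_000_000   # rough annual layoffs/discharges (ballpark)

share_of_churn = ai_losses_per_year / baseline_layoffs_per_year
print(f"Hypothetical AI-attributed losses: {ai_losses_per_year:,.0f} per year")
print(f"Baseline layoffs/discharges: ~{baseline_layoffs_per_year:,} per year")
print(f"Roughly {share_of_churn:.1%} of normal churn (ballpark only)")
```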
I totally see things like this getting very political. Yeah. It was very interesting for me, because I spent a lot of time talking to a lot of the initial signatories of this statement. For the ones who did sign, I was very interested to hear why. And there were indeed, as you say, many different reasons.
The NatSec people, like former Chairman of the Joint Chiefs of Staff Mike Mullen, for example. For him, I think loss of control was very central, because he views that as a national security threat, of course. Whether the US government gets overthrown by a foreign power or by a superintelligence, it's a NatSec threat. On the other hand, there were people from Steve Bannon to Bernie Sanders who felt that if we end up in a system where we actually have superintelligence that, by definition, makes all humans economically obsolete, then American workers would be dependent on handouts. Either those come from the government in the form of some kind of UBI, which the conservatives who signed view as socialism and don't like, or the handout comes from the companies, Sam Altman's Worldcoin or whatever, which would be viewed by people like Bernie Sanders as incredibly dystopian: the most massive power concentration in human history, to some tiny clique of people from San Francisco who don't necessarily even share their moral values. So that thing, I think you're right.
It's something that bothered a lot of people. And then we had a lot of faith leaders who signed this for fairly different reasons, where they just felt this is really gonna be harmful for human dignity. And, you know, a lot of people we both know in San Francisco like the joke about superintelligence being the sand god and so on. And a lot of these people are like, wait a minute. I already believe in a god.
Should I support some atheists in San Francisco building a new one that's somehow gonna run the show? That sounds very undignified. So people have many different reasons. But in short, there are two separate questions.
One is: should there be any kind of safety standards, like we have for biotech or restaurants? And the second question, which is much harder, is what exactly should be on the list? I'd be very happy if we could start with just one very light requirement, which is that companies have to make a good quantitative case that their system is not gonna overthrow the US government before they launch it. And then we can have a broader political discussion.
Before Dean responds: Max, speaking of reasons to sign the statement and these nightmare scenarios, what is your p(doom)?
Oh, yeah. We actually wrote a paper, me and three grad students from MIT, where we took the most popular approach for how humans can control superintelligence, known as recursive scalable oversight. We got very nerdy on it and tried to calculate the probability that the control fails. And we found that in our most optimistic scenario, it fails ninety two percent of the time. And I would love it if people who think they have a better idea for controlling superintelligence would actually publish it openly, so that it can be subject to scrutiny from others.
But until that time, yeah, if we go ahead and continue having nothing like the FDA for AI, so that people can legally just launch superintelligence and only worry about getting sued afterward, I would think it's definitely over 90% that we lose control over this.
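For readers who want to see the shape of that argument, here is a deliberately toy Python sketch. It is not the model from the paper Max mentions; it only illustrates how requiring a chain of oversight steps to all succeed pushes the overall probability of keeping control down quickly. The per-step probabilities and chain lengths are made-up illustration values.

```python
# Deliberately toy sketch, NOT the model from the paper referenced above.
# It only shows how a chain of oversight checks, each of which must succeed,
# drives the overall probability of keeping control down quickly.
# Per-step probabilities and chain lengths are made-up illustration values.

def chain_control_holds(p_step: float, n_steps: int) -> float:
    """Probability that every oversight step succeeds, assuming
    (unrealistically) independent steps with equal success probability."""
    return p_step ** n_steps

for p_step in (0.99, 0.95, 0.90):
    for n_steps in (5, 10, 20):
        p_ok = chain_control_holds(p_step, n_steps)
        print(f"p_step={p_step:.2f}, steps={n_steps:2d} -> "
              f"control holds {p_ok:6.1%}, fails {1 - p_ok:6.1%}")
```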
Wow. 0.01% versus over 90%. That's a pretty big crux.
I've heard it. Yeah.
I just kind of have this sneaking suspicion that if the models seemed like they were gonna pose the risk of overthrowing the US government, or anything in that vicinity, I don't think OpenAI would release that model, or Anthropic or Meta or xAI or Google. I just don't think they would. You know? I think they probably wouldn't do that, and would be quite concerned, and would probably call the US government and tell them. It just doesn't seem like a realistic scenario to me.
Let me say another thing.
There, I think the makers of thalidomide, and I don't wanna shame any particular company, they would not have released it either if they had known that it was gonna cause a hundred thousand American babies to be born without arms or legs. But it was complicated, and they just didn't realize it. And it's similarly very complicated here for the people at these companies to know.
And Dario Amodei has himself talked about, you know, 15 to 25%. Sam Altman has also talked about how it could be lights out for everybody. So they're clearly comfortable with 5% or 10%.
You can't deny the possibility that something, you know... right, you can't prove a negative. I can't prove a negative. That's why you can't say it's zero. Right? If you're being intellectually honest, you can't say it's zero.
Here's the thing. I'd say there are two observations I would have about this. The first is that some of the negative effects you are describing, including, by the way, a lot of the labor market stuff—
Mhmm.
A lot of that is going to be an emergent outcome of a general purpose technology interacting with society, and that is very hard to model in advance in the way you're sort of describing with drug testing. Not impossible, just very hard. Right? And so these emergent phenomena, I think it's really easy, when you're thinking about stuff like labor, and this is why a preemptive regulatory regime scares me: if a group of people are sitting around and thinking, what are the potential risks of this thing?
Mhmm.
A, you tend to overstate them. And, b, the reason you tend to overstate them, and I hear this all the time when people talk about the impact of AI on society, is that, as an economist would say, they don't endogenize the impact, which is to say they model AI as an exogenous shock, like a meteor that's coming to hit society. And that we will just remain completely in place and be like, oh my god, there's a meteor, and not do anything.
The reality is that if, five years ago, I had shown you all the generative AI tools that exist today—
Mhmm.
Just without telling you anything about the society, I just said this exists in 2025, you would be like, wow, I bet you blah blah blah. And you would say a bunch of stuff, and I would too, that would probably be wrong about the downsides of the technology as manifested.
You would guess, my god, their elections must be completely over. Their media environment, their this and that. There must be huge labor market dislocation. There must be no software engineers.
Right? There must be none. In reality, society is an adaptive complex system itself that, just like the human body, has the ability to internalize many different things and is incredibly adaptive, and humans are quite ingenious. And so I think that's an easy thing to discount. The other point I wanna make is about this recursive self improvement thing, because this is another thing that gets to me sometimes.
Every general purpose technology in human history exhibits what you would call, in the context of AI, recursive self improvement.
But with a human in the loop?
I mean, like, kind of. I don't know. All I mean by that is, you know, you come into the iron age, and then you use iron to make more iron and better iron. You turn a mill that's got an iron hammer that you're using to manufacture more iron.
You use computers to make better computers. You use oil to get more oil, electricity to get more electricity. Right? Every general purpose technology exhibits these kinds of recursive loops, because it's a general purpose technology.
So one of the things that the general purpose technology does is make the general purpose technology better. That's very common throughout the history of technology. So I think you can't just cite this fact that AI is likely to have recursive loops of self improvement, that AI will be useful for AI, because every general purpose technology is like that.
And so—
We don't disagree on anything here.
Yeah. It's just like
I wrote about this even
But there's a reason that, in the case of every other general purpose technology, the recursive feedback loop tends to be autocatalytic. Oftentimes it produces nonlinear improvement. But it never results in these runaway processes. Right? We've never seen that.
It's not like we used energy to get more energy, and then all of a sudden the energy, you know, blew up the entire universe. Right? That didn't happen.
Okay, Max. Maybe you can explain why you still think there's a doom scenario despite Dean's point. And then I've got one more question for Dean.
Yeah. And I would also love to comment on the NatSec angle, which I think is super important, because it's always the main reason given in Washington for why we should not regulate. So I completely agree, of course, that technological progress itself has always involved these self improvement loops. I have written about that extensively, and that is fundamentally why GDP has grown exponentially over time: because we use today's technology to build tomorrow's technology, and so on.
But there have always been humans in the loop. And when there are not humans in the loop, things can go quite fast. If you look at a slow motion film of a nuclear bomb exploding, there is no human in the loop. You get one uranium atom splitting, and then you get two, and then four, and then eight, and so on. The reason we've never seen anything of ours blow up that fast more wholesale is because we've always had humans in the loop as a moderator.
Right?
Mhmm.
It's pretty obvious. And unless you think that there's some secret sauce in human brains that you can't build into machines, it is possible to build machines that really don't need us. Now, if those machines think a hundred times faster than us, and if they can instantly copy all the knowledge that other robots have learned into themselves, etcetera, then we could see more progress in a month than we had seen in a thousand years when there were humans in the loop.
This is not my idea at all. Right? Irving J. Good articulated this very nicely in the sixties. I think this is just a pretty basic, simple argument. We can't say, oh, it never happened before, so it won't happen in the future, because we've never built superintelligence before.
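As a rough illustration of the arithmetic behind the chain-reaction analogy, here is a small Python sketch. The doubling count, the century of human-paced R&D, and the 100x serial speed-up are all made-up assumptions for illustration, and the sketch ignores the parallel copying of knowledge across many machines that Max also invokes.

```python
# Toy arithmetic behind the chain-reaction analogy. The numbers below are
# illustrative assumptions, not forecasts, and this is not Max's calculation.

# (a) unmoderated doubling, as in a chain reaction:
generations = 20
print(f"After {generations} doublings: 1 -> {2 ** generations:,}")

# (b) how a serial speed-up compresses human-paced timelines; this ignores
# the parallel copying of knowledge across many machines.
human_paced_years = 100    # assume some body of R&D takes a century at human pace
serial_speedup = 100       # assume machines iterate 100x faster on each step
machine_paced_days = human_paced_years * 365 / serial_speedup
print(f"{human_paced_years} human-paced years at {serial_speedup}x speed "
      f"~= {machine_paced_days:.0f} days (about {machine_paced_days / 30:.0f} months)")
```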
All the other tech, like the industrial revolution, just replaced some aspects of human work, like our muscles. We made machines stronger, and we made machines faster. We've never had machines that could entirely replace our cognitive abilities. So, can I just comment on the NatSec angle a little bit here? Because when I talk to politicians on the Hill, and especially when I listen to AI lobbyists, of whom there are now more in Washington DC than there are pharma lobbyists and fossil fuel lobbyists combined, the main talking point they use to explain why we should not have any binding regulations is, in one word, China.
But China, they say. Right? If we don't race to superintelligence, China is gonna do it first. And I think that is just complete baloney. You know?
I think it's not one race against China. There are two separate races, which people really need to stop conflating with each other. One is a race for dominance, which was very eloquently articulated, Dean, to your credit, in the AI action plan: a race for dominance economically, technologically, militarily. And the way to win that kind of race is by building controllable tools, which I'm all for.
Right? And, yes, you need big data centers and all the other stuff to build these powerful tools we can control. Then there's a second race, which is who can be the first to release superintelligence that they don't yet know how to control. That's the one I've been arguing is a suicide race, because we're closer to figuring out how to build that than we are to figuring out how to control it. And the Chinese Communist Party, Xi Jinping as well, clearly really likes control.
And I think it's quite obvious that they would never permit a Chinese company to build technology if there were some significant risk of a superintelligence that could just overthrow them and take over China. I even got a firsthand anecdote on that from Elon Musk. He told me that in 2023 he had a meeting with some quite high up people in the CCP, where he said to them: look, if someone in China builds superintelligence, after that, China is not gonna be run by the CCP. It's gonna be run by the superintelligence.
The reaction in the room was hilarious, Elon said. A lot of long faces. They really hadn't thought that through. And within a month or so after that, China rolled out their first ever AI regulations. I'm also quite confident that the Chinese have much more surveillance on DeepSeek and their other AI companies than the US government has on our companies, and have both the ability and the willingness to stop something that they think could cause them to lose control.
So the way I see this going, and when I said a p(doom) of over 90%, that was, Liron, to be clear, if we just don't do any regulation. Right? I'm actually quite optimistic that things are gonna go well instead, because I think there's no way China is gonna allow a race to superintelligence that we don't know how to control right now. I think China is gonna continue steaming ahead, trying to build all these powerful AI tools for the race that Dean was describing in the AI action plan, but absolutely not let anyone build superintelligence.
And I think that's what's gonna happen in The US also. I already know a growing number of people in US NatSec who are beginning to view this as a NatSec threat. Maybe they listen to Dario Amodei talk about a country of geniuses in the data center in 2027, and they're like, wait a minute. I have here a list of countries that I'm keeping track of as NatSec threats. Did Dario just say the word country?
Maybe I should add that country to my watch list also.
Mhmm.
And then, certainly, we end up in this really great situation where The US will also prevent anyone from building stuff they don't know how to control. We'll have this race over who can build the best and most powerful and helpful tools. And that's the future I'm really excited about living in.
Okay. Dean, let me just ask you the last question, and then you guys can make your closing statements. So Max has laid out his nightmare scenario, which is we don't regulate AI, and then it can go uncontrollable. It will have, you know, recursive self improvement or just more power than the human species, and it just doesn't go well. He's even said the probability could be up to 90% if we let this happen.
At least 90%.
At least 90%. Okay. And then, Dean, you see that as a very low probability scenario, but you do have your own nightmare scenarios, which are on the side of regulating AI too much. Right? If I understand correctly, your two nightmare scenarios are losing the AI race because your AI action plan focuses so much on winning the AI race.
And you have another nightmare scenario, which is that some kind of overregulation could lead to, like, a tyranny situation. Right? Just undermine democratic governance. So explain your nightmare scenarios.
Well, I mean, certainly, I think all manner of tyranny is possible with AI, and I worry quite a bit about that. I'd say fundamentally that what I worry about is, well, I'll start by saying that AI is going to challenge the structure of the nation state kind of no matter what, in the good scenario and the bad scenario. It challenges that in various ways, and it requires institutional evolution, conceivably revolution, in certain places in the world. And so, you know, buckle up, because that's coming no matter what. But there's a version of that institutional evolution that basically looks like we get a rentier state.
Think of the Middle Ages. Right? We get a state that is run by a small number of people who control something, and that thing is certainly a tool of violence, but they're not quite legitimate in the way that we think of democratic legitimacy. And there's some sort of middle class of rent seeking humans who have legal protections from that upper class. And then there is this large underclass of people who have very low practical agency, very low ability to really meaningfully contribute, and they're kind of stuck in some sort of dystopia.
That seems very likely to me, and I think there are many regulatory regimes, including a licensing one, that make that outcome substantially likelier. And so, you know, we face these kinds of trade offs no matter what.
Should we hit on losing the AI race, or should we hit on the tyranny scenario, or do you think what you described is kind of the main nightmare scenario?
I mean, basically, yeah. I don't really even know what losing the AI race to China means. It's hard to know. Certainly, there's a world in which China becomes the dominant technological power and sets the standards and all that stuff, and that's a really bad world too in many ways. But I'd say that's a bad scenario. It's not my nightmare scenario.
It's not the worst possible thing I think could happen, but it's definitely not a good thing. Okay. Alright. We've covered a lot.
So let's go to closing statements starting with Max.
Alright. So, we've talked a lot about doom here, and you've kept nudging us in that direction because that's your brand here, Doom Debates. But I'd like to end on an optimistic note. You know, the real reason I'm so engaged with this topic is because I'm fundamentally quite an optimistic person. You know, I spend so much time playing with this guy, and I'm very excited about the potential for him to have an amazing future where he doesn't have to worry about dying of cancer, and where we can prosper like never before on Earth, not just for an election cycle, but where life could prosper in principle for billions of years and even spread out into the cosmos.
We've completely underestimated as a species, right, how much opportunity we have. And that's why I think it's so important that we don't squander all this greatness by making some hasty chess moves here and blowing it all. I think there are two very clearly different paths that we're choosing between right now, and we have to make up our minds, I think, within the next year or two probably. One of them is the pro-human future. America was founded to be a country run by the people, for the people.
So there was a real emphasis that America was supposed to be good for the people living in America. It was not founded to be good for the machines of America. That was not the idea. And the way to get there is to stop the corporate welfare toward AI companies, which is hard because they have so many lobbyists. But hey, so did the other industries that we eventually ended up putting safety standards on.
And then steer technology to really be pro-human, to make life better for humans: cure diseases, make us more productive, etcetera, but make sure it's always us in charge. So that first path is a very pro-tech scenario. Notice we go full steam ahead with ever better AI tools. The other scenario is we race to build superintelligence, which by its very definition is super dystopian.
Right? By the very definition of it, none of you can earn any money doing anything after that's been built. Right? So you're gonna be dependent on handouts from the government or some tech CEO, or you're not gonna have any money and life will be very bad. That to me is an incredibly unambitious vision for the future.
After hundreds of thousands of years as a species on this planet, working so hard to build ever better technologies so that we could finally become the captains of our own ship and not have to worry about getting eaten by a tiger or starving to death, why should we throw away all this empowerment we've gained through all that hard work by deliberately building something that's gonna take over from us? It's ridiculously unambitious. I want us to take charge of this, and I think 95% of all Americans, you know, in these polls, clearly agree with this. Right? To deliberately say, okay.
We're in charge now. Let's keep it that way. A journalist asked me, what on earth do these different people who signed the statement have in common? I don't even understand, she said. I thought about it for a while.
What does Bannon have in common with faith leaders and Susan Rice and Chinese researchers, etcetera? Well, then it hit me. They're all human. So, of course, they want the pro-human future where humans are in charge. Right?
If we found out that there was an alien invasion fleet heading towards Earth, obviously we would all work together to fight the aliens to make sure it's us in charge. And now you have a quite small fringe group from Silicon Valley with very good lobbyists basically saying, yeah, we should build all these aliens, and they're probably gonna take over. Elon even said that openly the other week. Right? That is the most unambitious ending to this beautiful journey of empowerment that I can imagine.
And to me, the inspiring future that I'm excited about and that I think we will actually have once people understand more what this is all about is where we remain in charge, and we keep AI a tool and create a future that's even cooler than the sci fi authors could imagine.
Alright. Great. Let's go to Dean.
I think the fundamental thing to think about here is really assumptions. Most AI doom debates, to use the name of the podcast, revolve around at least one of the interlocutors, usually the one who believes in doom, assuming their conclusions. And here we've had a lot of conversation about how superintelligence has many different definitions, how it could mean many different things, and how we don't quite know what it means. And there's one version of it that you can articulate that means all sorts of bad things. That doesn't mean that thing is likely to be built, doesn't mean that thing is going to be built, doesn't mean that thing is possible to build in quite the way that we imagine.
It just means that it's a thing you can say. It is a valid sentence in the English language. And then it takes a big leap to assume that that's what we're actually going to build. I think we shouldn't assume that. I think that, as Max said at the end of his statement, the future is often profoundly stranger than we can possibly imagine.
The future that we live in today would have been unbelievably alien to someone fifty or a hundred, and certainly two hundred, years ago. In many ways it would have been incredibly alien. Many of the jobs we do would have seemed quite odd, and the relationships we have with one another and our institutions and all of it would be deeply alien. And my guess is that that continues. My guess is that the things we assume today about the technology of the future are probably wrong, and we don't wanna embed too many of those assumptions into the law, into regulation.
I think right now, more than at any other time, given the speed with which AI evolves, we want to maintain adaptability and flexibility. So I just wouldn't assume that superintelligence means the bad thing. I would instead at least consider that there are many worlds in which humans can thrive amid things that are better than them at various kinds of intellectual tasks. And that humans can still have a role, because there are certain things that are not inherently replaceable by machines, and that we can gain a tremendous amount of wealth, live much better lives, and find all sorts of new things to do that are economically and practically useful. That's been true so far throughout human history, and it wouldn't have seemed that way.
It didn't seem that way to people at the time. Since we have a written record, we know what people's reaction has been to new technology, and it's always been like this. And you can say this time is different, and that's fine. But I think we should demand a higher standard of evidence. Max talked about how America is by the people and for the people.
True. But we also have a system that makes it quite hard to pass new laws, and there's a reason we made it quite hard to pass new laws, which is that our founding fathers, and this is not a statement of opinion, this is a statement of fact about American history, our founding fathers were deeply distrustful of raw democratic impulse. The word democracy was a pejorative to the people who wrote our constitution.
It was an insult to say that a proposal seemed too democratic, because they believed that you had to balance raw democratic will, people's raw intuitions about things, with deliberative bodies that make it hard to pass laws. Because laws ultimately are rules passed by people who have the monopoly on legitimate violence, and that's a very sacred and important thing. We don't want to just give them, willy nilly, all these new powers. We've done that a lot. And I just have very serious issues, logically and at a more philosophical and even moral level, with the idea that we're just gonna be able to pass a new regulatory regime, and everything is gonna go fine, and there will be no side effects.
I think that there will be tons of side effects, and I think that we will ban tons of technological progress and stave off a lot of wonderful possibilities for the future. I would say there are many ways to investigate and interrogate the concept of superintelligence and to advance the safety and controllability of that thing. There are many ways to do it that don't involve banning it, which was the original topic of this debate. I note that Max did not spend that much time defending the thing that actually was the subject of the statement that FLI put out. But I think you can build something much better than just a regulatory regime.
You can build a society that is capable of grappling with this technology and institutions that are capable of evolving with it. And I think that's ultimately going to be a much healthier, better outcome for the world. That's the one that I work on every single day. It involves taking the risks seriously. It involves taking the technology very seriously.
You shouldn't be a radical in either direction when it comes to this technology. You should be willing to update your beliefs frequently. But at the same time, details matter. Getting this right is not gonna be a matter of taking regulatory concepts that we've developed for other things off the shelf and applying them to this. It's gonna be much more difficult than that.
And so, I guess I'll close with that.
Thank you. I'm thankful to both of you for stepping up to debate the difficult policy questions around superintelligent AI. It's such a complex issue, and there are so many different positions. It's not black and white. It won't work to do it in an echo chamber.
It won't work to reduce AI policy to left versus right politics. Respectful debate between smart people with different views is what we need right now as a country, as a species. That's how we can stress test different ideas and bring out important nuance. I'd go so far as to say debate is a key piece of social infrastructure. So thank you again, Max and Dean.
Thank you, Dean, for a really great conversation.
Thanks to you, Max. Thanks to you, Liron. This was great.
Wow. What an illuminating debate from two people who are actually in the room for these kinds of policy discussions. So regarding America's AI action plan, the document that Dean Ball helped draft, both Max and Dean were happy that it doesn't mention superintelligence, but for very different reasons. Max was saying we need a whole other statement about superintelligence, and he even proposed a statement saying it should be banned until there's broad scientific consensus that it will be done safely and controllably and until there's strong public buy in. That's what Max thinks we should do regarding a superintelligence statement.
And Dean is saying, yeah, it's good that we didn't mention superintelligence because it's too vague right now. Dean is saying that from our current perspective today, we don't know what superintelligence will look like. Maybe it'll just work out really great and won't need that much regulation. So they're diametrically opposed perspectives: pushing to ban it until there's consensus versus, well, you know, we'll deal with it later.
Like, it's fine for now. The crux of disagreement between Max and Dean, as we uncovered during the debate, really does come down to their p doom. If you remember, Max was saying his p doom is greater than 90%, assuming that we don't have these tough regulations on AI. But Dean's p doom is only about 0.1%. It's a much, much lower p doom.
Dean is basically not worried about plowing forward and dealing with issues as we get to them, whereas Max says we'd better be preemptive because it might be too late to regulate if we don't start right now. The reason I say p doom is the crux of their disagreement is because that's the thing that I think would really change their minds about the other stuff. I think their policy recommendations are totally downstream of what they see as the probability of doom. So for example, if they were to meet halfway, if Dean were to go up from 0.1% to 25% and Max were to go down from 90% to 25%, or just anywhere in that middle range, 40%, whatever, they'd start coming up with very similar policy ideas. I think in that case, Dean would naturally say, okay.
Well, we need very tight security on this development. You can't just retroactively do it because there is a high risk of total destruction, right, of runaway AI, and they would just naturally be thinking along the same lines. It's just gonna be downstream of how much doom they expect.
So they went on to talk
about the FDA analogy, because they asked, what does good regulation look like? And Max was saying, isn't the FDA a success story? Don't you like the FDA? Dean's response was that he didn't go full libertarian. He wasn't like, oh, the FDA is evil.
We shouldn't regulate things like that. He said, yeah, the FDA is helpful, but you can see it has this baggage. It has this legacy idea that one medicine has to treat one disease, and, really, science is a lot more complex than that. So even the FDA is kind of putting on this straitjacket where you have to jump through all these hoops, but you're not necessarily getting a lot of productivity.
Like, you're paying a high cost in terms of drag. And so Dean's analogy is that when we get to artificial general intelligence or artificial superintelligence, this kind of straitjacket regulation could become an extreme version of what Dean doesn't like about the FDA, even though he admits that some amount of regulation is good. So that's kinda where they landed with the FDA analogy. Like, the idea makes sense, but there's this failure mode that's gonna bite. But once again, from my perspective, the reason they're not on the same page about the FDA analogy is just that Max is seeing this runaway risk, this doom risk.
And Dean is like, no, it's really just yet another technology. Right? And this comes up a lot in other debates. Is AI just another technology?
And my take is that Dean is very much leaning on the yes side of that and Max on the no side. In conclusion, what a stark divide. We have a scientist saying there's a high risk of a catastrophic outcome from superintelligent AI in potentially less than ten years and a policymaker saying, you haven't made a strong enough case for why the risk is high, so we shouldn't ban a hugely valuable line of research. And regulation by default is burdensome, so we should be constantly worried about overregulation. It's quite a difference of opinion to reconcile, and what's crazy is that the stakes are so high and the timelines are so short.
My prediction is that we're going to keep seeing policy that's downstream of the policymakers' p doom. One of the benefits of having Dean Ball in this conversation is that we heard his perspective on p doom explicitly, because it isn't mentioned explicitly in America's AI action plan. More generally, I think debates about p doom, or doom debates,
if you will, are extremely productive for discourse.
It's not just about this one disagreement between America's AI action plan and the Future of Life Institute statement to conditionally ban superintelligence. It's a bigger picture. It's about building the social infrastructure for high quality debate. The world is complex, and debate is one of the most powerful tools that we have as a society to navigate our way to appropriate policy decisions. But it has to be high quality debate.
The people have to be informed. It has to be respectful. It has to stay focused on the issue, not on finger pointing or character assassination or scoring political points. It has to be nuanced. It has to involve conceding points, with the two sides actually trying to find common ground if at all possible. And it has to be productive for policymaking.
If you think this debate was productive, you can support me and my team at Doom Debates in building the social infrastructure for having more of these kinds of debates at the highest levels by donating to the show. Go to doomdebates.com/donate to learn more. Thanks to viewer donations, we're currently in the process of building out a professional recording studio to elevate the show. Production is an ongoing cost funded by donations from viewers like you, and I'm happy to say that this will always be independent media managed solely by me, based on my personal perspective, not doing anyone else's bidding. You can make a 501(c)(3) charitable donation, and every dollar you donate goes directly to production and marketing of the show.
Again, go to doomdebates.com/donate for more details. If you're new to the show, check out the Doom Debates YouTube channel. I've been having debates with some of the top thinkers in the space, like Gary Marcus, whose p doom is apparently 1 to 2%, and Vitalik Buterin, who's more in the 8 to 12% range. And in each case, I ask them the question, why isn't it 50% or 90% like Max Tegmark over here? What is going on?
How do we reconcile this huge gap, and how do we do it fast so that we can make productive policy decisions? While you're on that YouTube channel, smack the subscribe button so you'll conveniently get new episodes in your feed. And I look forward to bringing you the next episode of doom debates.
If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network, a network of podcasts which is now part of a16z, where experts talk technology, business, economics, geopolitics, culture, and more. We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcast.ing.
And thank you to everyone who listens for being part of the cognitive revolution.
Superintelligence: To Ban or Not to Ban? Max Tegmark & Dean Ball join Liron Shapira on Doom Debates