Best of the Pod: Reid Hoffman on How AI Is Answering Our Biggest Questions
Learn how to use philosophy to run your business more effectively. Reid Hoffman thinks a master's in philosophy will help you run your business better than an MBA. Reid is a founder, investor, podcaster,...
Why do you care about philosophy? Why is answering these big questions important?
You know, one of the things that I sometimes tell MBA schools is that a background in philosophy is more important for entrepreneurship than an MBA. Philosophy is very important to this stuff because it's understanding how to think very crisply about what the possibilities are, and what the theories of human nature are, as they are manifest today and as they may be modified by new products and services, new technology, etcetera.
Usually on this show, we talk about actionable ways that people use ChatGPT. But a more interesting question is: how might AI in general change what it means to be human? These are really deep, big philosophical questions. I thought you might have a unique perspective on this intersection. Reid, welcome to the show.
It's great to be here.
Great to have you. So I'm sure that everyone listening or watching knows this, but you are a renowned entrepreneur, a venture capitalist, and an author. You're best known as the co-founder of LinkedIn, and you're a partner at Greylock. You are a board member, or were a board member, and an early backer of OpenAI. You also have an incredible podcast, Masters of Scale. But perhaps most relevant to this conversation, you also studied philosophy at Stanford and Oxford, and you almost became a philosophy professor, which I didn't know before researching this interview.
It's really cool.
Yeah. Part of it was I've always been interested in human thought and language. I started at Stanford with a major called Symbolic Systems. I was the eighth person to declare it as a major at Stanford, and then I kind of thought, we don't really know what thought and language fully are.
Maybe philosophers do. And so I took some classes at Stanford, but then also trundled off to Oxford to see if philosophers had a better understanding of it.
I love it. It's funny. I feel like since then, Symbolic Systems has become the go-to Stanford major for curious, analytical people who end up doing startups. So it's pretty funny to know that you were one of the first. So usually on this show, we talk about actionable ways that people use ChatGPT.
And that's the big question. That's, I think, what people come here for. But underneath that, a more interesting question is: how might AI in general, and ChatGPT in particular, change what it means to be human? How might it change how we see ourselves and how we see the world? How might it enhance our creativity, our intelligence, all that kind of stuff?
And these are really deep, big philosophical questions. And as someone who rigorously studied philosophy and probably still thinks about those questions, I thought you might have a unique perspective on this intersection. Because I think people tend to be either in the philosophy camp or in the language-models camp, and someone who sits in the middle is an interesting one. And I wanted to start there, because there are probably people listening or watching who are thinking: why? I just want Reid's actionable tips.
So let me ask: tell me more about why you care about philosophy. You got into that a little in talking about how you came to it, but why do you care about philosophy? Why is answering these big questions important?
So one of the things that I sometimes tell MBA schools when I give talks there is that a background in philosophy is more important for entrepreneurship than an MBA, which of course is startling and contrarian. And part of that is to get people to think crisply about this stuff, because part of what you're doing as an entrepreneur is thinking about: what is the way the world could be? What could it possibly be? What is, if you wanted to use analytic philosophy language, logical possibility or something like that. But it's fundamentally: what is possible?
And then, partially because these are human activities, what are your underlying theories of human nature: how human beings are now, how they are quasi-eternally, and how they are as circumstances change, as the environment and the ecosystems we live in change, which includes technology and political power and institutions and a bunch of other things. And philosophy is very important to this stuff because it's understanding how to think very crisply about what the possibilities are and what the theories of human nature are, as they are manifest today and as they may be modified by new products and services, new technologies, etcetera. And obviously, people tend to say, oh, that's a philosophical question, because it's an unanswerable question: the nature of truth, or the fact that while we all speak and understand languages, we don't really know how that works. And that's part of the reason why there was the linguistic turn in philosophy that Wittgenstein and others were so known for, which is: well, maybe these problems in philosophy are problems in language. And if we understand language, we'll understand philosophy.
And there's this question around these unanswerable questions. But actually, in fact, science itself is full of a lot of unanswerable questions. It's the working theory that we dynamically improve, and that's part of what the human condition is. And that's part of what in-depth philosophy is. That isn't to say that some of the questions in philosophy today aren't the same questions that Plato and Aristotle and even the pre-Socratics and other folks were grappling with: truth, knowledge, etcetera. But some of the questions are also new questions, and the questions evolve.
And part of how science evolved out of philosophy was this question of getting to more specific theories and developing the new questions we get to; those are outgrowths. And the same thing is true in building technology, in building products and services, in entrepreneurship. And that's why philosophy is actually robust and important as applied to serious questions, versus the abuses. You know, one of the things I wrote my thesis on at Oxford was the uses and abuses of thought experiments. And the most classic one is trolley problems. There are both uses and abuses within the methodology of trolley problems.
The most entertaining of which, if people haven't watched it, is a TV series called The Good Place, which embodied the trolley problem in an episode in an absolutely hilarious way.
That's really interesting. Yeah, what is the way that people tend to misuse it? Because I feel like trolley problems are so common in EA discourse, and people run into them a lot online.
The fundamental problem is that to drive an intuition, a principle, etcetera, they frame an artificially different environment. So it's like: no, no, it's a trolley, and the trolley will either hit the five criminals or the one human baby, and it's set by default to hit the human baby. Do you throw the switch or not? And then when you start attacking the problem, you say, well, how do I know that I can't break the trolley?
I could just make it not continue to run. It's like, well, you know that. You're like, oh, so you're positing in your thought experiment that I have perfect knowledge that breaking the trolley is impossible. So to make your thought experiment work, you're positing something we never have. And when we encounter people who claim it, we generally think they're crazy, right? Like, you have perfect knowledge?
Like, how, in fact, do I know that I have perfect knowledge that I can't break the trolley? Because you're going to ask, what is the right human response to this trolley problem? And the answer is: I'm going to try to break the trolley so it doesn't hit either of them.
That's really interesting.
Right. And you might even say the problem is that even if you say, well, you have perfect knowledge that you can't break it, you're like, well, okay. A, I don't have perfect knowledge. And B, even if I did, maybe trying is still the right response.
You're trying to get me to say: do I do nothing and run over the baby, or do I do something and run over the five criminals? Those are my only two options. And you're like, well, no. I could say, even if I think I can't break the trolley, that's what I'm going to try to do, because that's the moral thing to do.
I've heard a lot of trolley problems, and I've never heard anyone posit that third option. I love that. And there's something about that where certain thought experiments sort of hijack your instincts, and you don't quite reason through all the hidden assumptions. Honestly, it reminds me of certain doomer arguments. I don't want to go into the full thing here, but I think it's a really interesting way to think about it.
If I had to summarize what you just said: the value of philosophy to you is thinking crisply. Thinking crisply about possibilities, about human nature and reality. All of those things are really important for business people. I want to take another step, which is: some of the questions that philosophers, or philosophy students, or philosophy nerds sharpen our skills on are the big perennial questions, like what is truth, what is reality, what can we know, all that kind of stuff. As we start to get into the AI stuff, I'm curious whether you have a sense of which of those questions AI and large language models are going to give us a new lens on.
Or which questions we'll find new and better versions of to ask, even if we maybe don't answer them? Do you have a sense for that?
Well, historically, it's questions like these that have led to a bunch of the various scientific disciplines, right? Everything from things in the physical world to things in the biological world, like germ theory and all the rest. And I think it's actually even true that this is one of the reasons philosophy is the root discipline for many other disciplines. You get to questions like, okay, how do you think about economics and game theory?
Or how do you think about political science and realpolitik and the conflict of nations and interests? And it's also one of the reasons why probably one of my deepest critiques of the non-reinvention of the university is the intensity of its disciplinarianism: just the discipline of political science, or even just the discipline of philosophy, as opposed to multidisciplinary work. Part of what I tend to find interesting is how much the academic disciplines become more and more siloed, versus: hey, maybe every twenty-five years we should think about blowing them all up and reconstituting them in various ways. That would actually be a better way of thinking, and it's why some of the most interesting people in academia are the ones blending across disciplines.
And I think that part is extremely important. Part of the question in philosophy is how we evolve the question of what we know. And you evolve that question because, for example, a lot of the history of science is instrumentation: new measurement devices that help with the testing of theories. That's one of the reasons people frequently don't think enough about how technology changes the definition of a human, because we have this Cartesian imagination that we are a pure thinking creature. And, well, if we've learned anything, that's not really the way it works, right?
That doesn't mean we don't think that way, using abstractions to generate logic and theories of the world and all the rest. But put your philosopher on some LSD and you'll get some different outputs.
That makes sense. So along those lines, if I step back and squint, you can kind of divide the history of philosophy, or at least a certain part of it, into essentialism and nominalism. Right? Essentialists believe there's a fundamental, objective reality out there that's knowable, and that there's a way to carve nature at its joints. Nominalists, among whom we would include Wittgenstein, who I know you studied pretty deeply, and the pragmatists, hold that truth is more or less relative, or about social convention, or about what works; there are a lot of different formulations of it.
And there's this ongoing debate between people who hold one view or the other. Do you think language models change or add anything to either side of that debate?
I think they add perspective and color. I don't think they resolve the debate. There's certainly some question about whether, since they function more like later Wittgenstein, more nominalist, that weighs in on the side of the nominalists, because of the way they actually function. But then, if you look at how we're trying to develop the large language models, we're actually trying to get them to embody more essentialist characteristics as they go: how do you ground them in truth, have less hallucination, etcetera.
And to gesture at a different, earlier German philosopher, Hegel: one of the things I think is part of the human condition is thesis, antithesis, synthesis. You could say, hey, we have an essentialist thesis, we have a nominalist antithesis, and the synthesis is how we're putting them together in various ways. Because, look, I don't think even later Wittgenstein would have said that the world is only language, which is where the deconstructionists and Derrida went: it is only the veil of language, and you have no contact with the world, so you're not grounded in the world at all. I think he would think that's absurd, right?
But his point was to say that there is also, in how we live, what he called forms of life. The way language operates is not a simple kind of denoting; even in the early work he understood it wasn't just denoting the cat on the mat, but the possibility that the cat is on the mat, the possible configurations of the universe. That was the notion of logical possibility, described as a language of possibility. The later view was that this essentialist picture of a language of possibility is actually incorrect about how we discover truth and how we operationalize truth. But you still have a robust theory of truth, which is essentially not what the deconstructionists have. That robust theory of truth is partially grounded in this notion of language games and a biological form of life. And then obviously he goes into this deeply, asking, well, okay, how is mathematics a language game? Mathematics, as a classic language of truth, is a way of trying to understand that.
And that's part of where you get what philosophers refer to as Kripkenstein, Saul Kripke's excellent lens on reading part of what Wittgenstein was about. And then you apply all that, and everyone's going: where is this going with large language models? You say, well, actually, in fact, language is the playing out of this language game, and large language models are playing out this language game in various ways. But part of what is revealed is that we don't just say truth is whatever is expressed in language.
Truth is a dynamic process in human discourse. It could be thesis, antithesis, synthesis, or other things. It's this human discourse, this dialogic process, this truth discovery, this reasoning, whether it's induction, abduction, or deduction. These reasoning processes get us to what we think of as theories of truth, which are always, to some degree, works in progress.
That's really fascinating. I want to try to summarize that, in case it was a little difficult to follow, to be honest. There's a point in there where I think I missed something, so tell me what I missed. But one of the things I heard that I thought was really interesting: when you think about how we built AI, which is predicting the next token, that's a very late-Wittgenstein-compatible, pragmatist-compatible idea, where it's really about the relationship between different words in a sentence. We're not finding anything out about the world.
There were other AI approaches, in the seventies or eighties, where it was literally like, let's list out every single object in the world. And those didn't really work. That would be something along the lines of a more essentialist approach to AI. The one that works is a more pragmatic, more late-Wittgensteinian one. But what's quite interesting is that now that we have that pragmatic base that we've bootstrapped, we're in this process of trying to make it more grounded in reality, more able to talk about essential ground truth.
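To make the "predicting the next token" idea concrete: here's a minimal Python sketch of a bigram-style predictor, a toy stand-in for what LLMs do with neural networks over trillions of tokens. The corpus and function names are invented for illustration; nothing here comes from the episode itself.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "the relationship between different words."
corpus = "the cat is on the mat . the cat is on the chair .".split()

# Count how often each token follows each other token (a bigram model).
follow_counts = defaultdict(Counter)
for prev_tok, next_tok in zip(corpus, corpus[1:]):
    follow_counts[prev_tok][next_tok] += 1

def next_token_distribution(prev_tok):
    """Probability of each candidate next token, given the previous one."""
    counts = follow_counts[prev_tok]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

# Prediction here is purely about patterns of use, not a model of the world:
print(next_token_distribution("the"))
# {'cat': 0.5, 'mat': 0.25, 'chair': 0.25}
```

The model never learns anything about cats or mats; it only learns which words tend to follow which, which is the pragmatist flavor of the approach the conversation is describing.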
And I think what's really interesting about Wittgenstein is that he's famous for saying the limits of my language are the limits of my world. I don't remember if that's late or early. But more or less, I think what you're saying is that Wittgenstein doesn't think there's nothing outside of language. He does think that the way we talk about the world, the way we use language, is part of this social discourse where we're all going back and forth, co-inventing language and structures and language games together. And you kind of see that happening with language models: when you do something like RLHF, that's us playing with a language model, playing a language game, saying, no, no, we don't like that.
Is that generally what you're getting at?
Yes, everything you said. But then there's the additional thing that later Wittgenstein was really trying to explore in various ways, because he wasn't trying to do a completely social construction of truth. You know, I'm actually a fan of the view (you kind of have to be a Wittgenstein scholar to appreciate it) that both early and late Wittgenstein are part of the same project, rather than: early Wittgenstein was an idiot, and look, I've now religiously converted to this different point of view.
But there is a particular thing, which is how you get to the notion of understanding truth. Truth is the dynamic of discovery through language, and it has to have some explicit external conditions; it isn't my truth and your truth. There is, to some degree, only our truth, or the truth, in various ways. The question is how you get to that, what your truth conditions are. In early Wittgenstein, the truth condition was that it cashes out into a state of possibilities and actualities in this logical space of possibilities, which includes physical space as part of it but is broader than that.
And then later Wittgenstein said, well, actually, in fact, this modeling of logical possibility is not the way it works, right? We're not actually grounding it that way. The way we're grounding it is in the notion of how we play language games, how we make moves in language. And the way that's grounded is, to some degree, sharing a certain biological form of life, by which we recognize: that's a valid move in the language game, this is not a valid move in the language game.
Now, this is what's interesting when it gets to large language models, because you ask: are large language models the same biological form of life as us, or are they different? And how does that play out? I think Wittgenstein would have found that question utterly fascinating and would have gone very deep trying to figure it out. And by the way, the answer might be some and some, not 100% yes or 100% no.
Because the argument in favor is that large language models are trained on the corpus of human knowledge and language and everything else, and they're doing language patterns on that. Some might even argue that some of their patterns are very similar to the patterns of human learning and brains. Others would argue they're not. But then you'd say, well, it's also not a biological entity. And it learns very differently than human beings learn.
And so maybe its language game, which looks like the human language game, is actually different in significant ways, and therefore the truth functions are actually very different. In a sense, what we're trying to do as we modify and make progress on how we build these LLMs is make them much more reliable on a truth basis. We love the creativity and the generativity, but for a huge number of the really useful cases, in terms of amplifying humanity, we want it to have a better truth sense, right? I mean, the paradoxes in current GPT models show up when you can tease them out with very simple questions around prime numbers.
And you go, well, you got that answer wrong. And it's: oh yeah, I got it wrong, here's the answer. Well, that answer is wrong too. Oh, I got that one wrong too, here's the answer. A human being who understood these things would go: I just keep getting these wrong. Got it. I'm wrong. As opposed to: oh, I'm sorry, you're right, I got it wrong, and here's another wrong answer. We're trying to get that truth sense into it. Because we do have some notion of, oh right, this is what's characteristic.
Mathematics gets us into very pure definitions of certain kinds of language games. It's one of the reasons why, centuries ago, people thought math was maybe the language of the universe, or the language of God, etcetera. Because some of the purest truths we know, like two plus two equals four, are embedded in it. And we're still working that out as we play with how we create these language tools, these language devices. It's part of the reason I think this question is really interesting: you can actually map it to some of the actual, as it were, technological physics we're trying to create when we're building the next version.
Like, how do we get these things to be good reasoning machines, not just good generativity machines? They get some reasoning from their generativity, but part of the classic way of showing where they break is showing where their reasoning stops working, in ways that we value and aspire to in terms of what we try to do as human beings at our best.
Here's something exhausting: the export-import dance. Let me know if this sounds familiar. You design something in Figma, you export it, you paste it somewhere else, and you pray that nothing breaks. And usually something does. It's almost 2026, which means that it's definitely time to stop doing this.
Framer already built the fastest way to publish beautiful, production-ready websites, and now it's redefining how we design for the web. With the recent launch of Design Pages, a free canvas-based tool, Framer is more than a site builder. It's a true all-in-one design platform. From social assets to campaign visuals to vectors and icons, all the way to a live site, Framer is where ideas go live from start to finish.
Framer's design tool is different from the old-school website builders you might hear advertised on other podcasts. It offers vector editing, 3D transforms, gradients, and animations, all for free. It has unlimited projects, unlimited pages, and unlimited collaborators. But what really changed how I think about Framer is that there's no handoff. What you design is the website.
No developer interpretation. No "can you make it match the mock-up" conversation. You design it, you publish it, and it's live. Are you ready to design, iterate, and publish all in one tool? Start creating for free at framer.com/design, and use the code DAN for a month of Framer Pro.
That's framer.com/design, promo code DAN. Rules and restrictions may apply. And now, back to the episode. That's really fascinating. You said a lot there.
I really want to get into the reasoning thing in a second, but I want to go back to the way you talked about late Wittgenstein versus early Wittgenstein, because I haven't really heard it said that way. The usual thing people say is that he just disagreed with everything when he was older, or whatever. What I hear you saying is that in both cases he's saying some of the same things, he holds some of the same views, but the real difference is how he cashes out what it means for something to be true.
In his first period, he's talking about truth in terms of a logical space of possibilities that can be broken down into these little things he calls atomic facts. Those are never really defined, but you can kind of build up truth from there, mapping those possibilities onto actualities, onto what's actually in the world. And in later Wittgenstein, it's all about these language games, the social relationships, the use of a word or phrase in the context of people. And one of the things I really wanted to ask you about is that first version of Wittgenstein, the logical space of possibilities. What that reminds me of is embeddings, one of the key underlying technologies that gave rise to AI.
In traditional NLP, embeddings let you represent words or tokens in a high-dimensional space. And then the language-model innovation is that it's not just words, it's words in their particular context. Each word in a particular context has its own part of the space. So in a language model, the word king, if it's tokenized that way: there's a king in chess, there's an actual king, the king of England, King Lear, and they're all kinds of kings, but they occupy different parts of the space. Language models are able to represent the fact that when we say king, we mean many different things.
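The contextual-embedding point is easy to see in code. A minimal sketch, assuming the Hugging Face transformers library and the bert-base-uncased model (my illustrative choices, not mentioned in the episode), pulls the vector for "king" out of three different sentences:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def king_vector(sentence):
    """Return the contextual embedding of the token 'king' in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index("king")]

chess   = king_vector("She moved her king out of check.")
monarch = king_vector("The king addressed the parliament.")
lear    = king_vector("King Lear rages against the storm.")

cos = torch.nn.CosineSimilarity(dim=0)
# Same surface word, different regions of the space:
print(cos(chess, monarch), cos(chess, lear), cos(monarch, lear))
```

The three similarities differ from one another and sit below 1.0: the same word gets a different vector depending on the sentence around it, which is exactly the king-in-chess versus King-Lear point.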
And that actually reminds me a lot of atomic facts, of Wittgenstein's early work. So I'm curious: I think you said that language models, because of next-token prediction, are sort of late Wittgensteinian, but I wonder how you factor in the fact that embeddings work and are a core part of this.
Well, actually, this is part of why late Wittgenstein doesn't mean early Wittgenstein was an idiot. Because, yes, I do think the notion of a probabilistic bet over the set of different tokens that apply is kind of there. Now, the reason I would slant current practice more toward late Wittgenstein than early Wittgenstein is that early Wittgenstein thought that once you had a grasp on the logic of it, then almost by speaking correctly you couldn't make truth mistakes, because the logic was embedded in it. And the token embeddings are part of a very broad quasi-symbolic network. I say quasi-symbolic because it's still activations and so forth; it isn't purely reasoning around a token of king, or 15 different tokens of king, or 23 different partial tokens of king, so much as there are conceptual spaces in that tokenization, as mapped from a very large use of language.
But part of language isn't just the historical language; it's the reapplication of it. Like if you say, this is the king of podcasts, right? Or, this is the king of microphones.
Not yet, but maybe.
Yes. And just as instances, that's part of why later Wittgenstein went to: well, it's how we're playing these language games and how we're reapplying them. When we say on this podcast, this could become the king of podcasts, we all have a sense of what we're doing. What would be the cases where that would be true, and what would be the cases where it would be false? And what prediction is it making?
And how is it that that's a useful thing? I'm sure someone has said king of podcasts before, but I've never heard it, right? And it's a different tokenization, especially as it gets developed and elaborated in discussion. And if you suddenly had another terabyte of information about discussions of kings and kingdoms and all the rest, all of a sudden the token space it's learning from would change, right? And then the generalizations off of it would change.
And that's part of the reason I would say it's more later Wittgenstein, even though not completely disconnected from those early embeddings. It's one of the reasons why, actually, later Wittgenstein is not "truth is just what language says." It's: no, there are ways in which language is embedded in the world by how we navigate as biological beings. That's part of how the world comes in and impacts it. So it's not just language by itself, free-floating like the Cartesian consciousness; it's embedded in some ways.
And part of what he was trying to do was figure out, from a philosophy standpoint, how we understand that embedding and how we drive our truth discourse in language based upon that biological embedding.
That makes sense. So I think what I hear you saying is: even though embeddings map words into a high-dimensional space, which seems like mapping words into a space of atomic facts or logical possibility, the way that space is constructed, and what makes something land in one part of it or another, is more late Wittgenstein. It's about how language is used in practice and whether it's useful for humans in the world, not about some deep underlying logical ordering where, once you've created the ordering, you can't say anything wrong as long as you only use words from that space. Is that kind of on target?
Yes, exactly. And part of it is that we know there are cases where a perfectly coherent use of language is still a falsity. So part of what we're trying to figure out is how to get more of those truths, and truth-telling, and reasoning, because reasoning is about finding truth, into how these LLMs work.
And just to move into that point a little bit: what is most promising to you in terms of ways we're getting reasoning into these language models? And do you think there are ideas from philosophy, whether Wittgenstein or otherwise, that are relevant to that project?
Well, the answer is certainly yes on the relevant ideas. Currently, I think we're doing a couple of things. We're taking, call it, human knowledge and figuring out how to get that into what's trained on. One of the earliest discoveries was that if you train on computer code, these models learn patterns of reasoning much broader than just computer code. So all of the models doing this are now also training on computer code, even if they don't have a target of being a Microsoft Copilot, code generation, etcetera.
Even when they're not doing that, because, just like math, code is a crisp kind of modeling of reasoning. Another one that's currently happening is: what do you do with textbooks? The notion is that if you take the same kind of training discipline we use for human beings, encapsulated in textbooks, you can, for example, build much smaller but still very effective models based on textbooks. So textbooks are another one. Now, as you go beyond that, there's probably some interesting, as it were, computational philosophy.
If you begin to ask, well, how do we cash out theories of science, the different theories of science, and build those models in: how do you get, say, Lakatos as a development on Popper, given Kuhnian models of a scientific paradigm? How do you make predictions on those kinds of bases? And some of the in-depth work in logic, maybe Bayesian logic, offers ways of possibly looking at this. I'm quite certain there are some very useful things to elaborate beyond that. Now, currently, of course, part of the notion of these things is that they're learning machines.
So you have to give them a fairly substantive corpus of data to learn from. Now, of course, there's synthetic data, and look, there may be philosophy in the question of which patterns of synthetic data, created off the current data, are still useful to learn from. Anyway, there are a bunch of gestural areas here; I'm certain they're there, even though I'm making gestures rather than offering specific theories about how it all cashes out.
That's really interesting. So it seems like, basically, the way we're trying to get reasoning into models is to find sources of data with really crisp reasoning in them, so the models learn the reasoning from that.
Yep.
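To ground that summary: here's a hypothetical sketch of a reasoning-weighted pretraining mixture of the kind being gestured at. The source names and weights are invented for illustration; they are not from any actual training recipe discussed in the episode.

```python
import random
from collections import Counter

# Invented mixture: reasoning-dense sources (code, textbooks, math) get
# deliberate weight even when code generation isn't the product goal.
corpus_mixture = {
    "web_text":  0.55,  # broad coverage of ordinary language
    "code":      0.25,  # crisp, checkable patterns of reasoning
    "textbooks": 0.15,  # curated pedagogical structure
    "math":      0.05,  # the purest language games of all
}

def sample_source(mixture, rng=random):
    """Pick which source the next training document is drawn from."""
    sources, weights = zip(*mixture.items())
    return rng.choices(sources, weights=weights, k=1)[0]

counts = Counter(sample_source(corpus_mixture) for _ in range(10_000))
print(counts)  # roughly proportional to the mixture weights
```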
I'm sort of curious, if that's the case: aren't there only a certain number of moves you can make in logic? You know, you can do induction, you can do deduction; there aren't infinitely many moves. If we have a really crisp set of data teaching them these moves, what's stopping them from applying the moves more broadly? And maybe that question is not well formed.
Well, first, yeah, a correction of the question, because actually, in fact, in logic there are infinite moves. One of the things that's interesting in various logics is the different orders of infinity, as people think it through. So there are various things there. Now, what you actually reminded me of is something I've been rereading, because I've been thinking of Gödel's theorem as a classic instance of human meta-thinking. Gödel, Escher, Bach, which I read as a high school student, I've been rereading recently because I'm...
That's great. What do you think?
Well, it's this tangle of amazing observations. I'm trying to think about it from the viewpoint of modern LLMs. So there's this question of Gödel self-reflection, which is, roughly speaking: in any sufficiently robust formal system, there are truths that cannot be proved within the system, right? And that's mind-boggling, right? What exactly it means, and so forth. And it comes from this classic diagonalization proof: if you're enumerating all the truths, there's at least one that's not captured in your enumeration of all the truths, hence one version of infinity.
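For readers who want the diagonal move spelled out, here is a compact sketch in Cantor's simpler setting, binary sequences rather than provable sentences, with a one-line gloss of how Gödel's proof echoes the pattern. This is an illustrative reconstruction, not a quote from the episode or from Hofstadter's book.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Cantor-style diagonalization: any enumeration of infinite binary
% sequences misses at least one sequence.
Suppose all the sequences could be enumerated as $s_1, s_2, s_3, \dots$,
where $s_i(j)$ denotes the $j$-th bit of the $i$-th sequence. Define a
new sequence $d$ by flipping the diagonal:
\[
  d(j) = 1 - s_j(j), \qquad j = 1, 2, 3, \dots
\]
For every $i$ we have $d(i) \neq s_i(i)$, so $d \neq s_i$; the
enumeration misses $d$. G\"odel's incompleteness proof runs the same
self-referential pattern over sentences of arithmetic: it constructs a
sentence that, in effect, says ``I am not provable in this system,''
so any consistent system strong enough to express it leaves at least
one true sentence unproved.

\end{document}
```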
You get that in the recursion patterns you see in Escher and in Bach; you say, that's another recursion pattern, because this is a recursion pattern for showing the shadow of at least one truth that's not captured in your enumeration of all the truths. And you go, okay, what does this mean for thinking about truth discovery, whether it's human truth discovery or LLM truth discovery, and what are the things outside the boundaries of logic? I would have been very curious to have Gödel and Wittgenstein, two folks very focused on logic, talk about Gödel's theorem. I was asked recently: if I had a time machine, would I want to go forward or back? Me, I'd rather go forward.
I'm just curious about how you shape the future. But one of the historical trips back that I would love is to put Gödel and Wittgenstein in a room and say: Gödel's theorem, discuss. I would do a lot to be able to hear that conversation.
We need some GPTs in here with Gödel and Wittgenstein. Maybe Gödel doesn't have enough writing to make that happen, but maybe eventually.
And the twistiness of the thinking is one of the things that made Gödel so spectacular in this. Another one of those historical conversations, by the way, is the walks Einstein and Gödel used to take together. You wish you had digital recorders: please record the conversation, we would really like to listen to that.
No, I love that. That's really interesting, because I read Gödel, Escher, Bach in college. I loved it. The thing that's so good about it is that it's such an interdisciplinary book.
You know? It's got math and music and art, all this stuff. And you're like, wow, that's the kind of mind that's going to invent new minds. And then you see Hofstadter today, and he's definitely not in the LLM conversation.
He's a little bit freaked out by them. And I'm kind of curious what you make of that. What did he get right, and what do you think he got wrong?
Well, I think a central thing that he got right, at least in how I operationalize it, is the reason I was gesturing at Hegel with thesis, antithesis, synthesis: it's a dynamic process that's ongoing, and you can't necessarily predict the future synthesis. Even though in philosophy you obviously try to articulate the truths: Descartes' I think, therefore I am, or Wittgenstein saying, well, there actually has to be a world a certain way, there actually have to be truth conditions in the language statement of I think, therefore I am. And so you can be broader than just the disembodied mind as a way of thinking about it, because you think about what the truth conditions in a language must be. If you're saying, in a way that's coherent to your current self and your future self, I think, therefore I am, what are the truth conditions in the language? That's a dynamic process by which we are making new discoveries.
And that's the synthesis. That's part of what I take from the Gödel, Escher, Bach interweaving of these different dynamics, showing the patterns across them. Now, frequently, when people say, hey, we have this language system and all we know is through our language, and then they go, and so the world is unknowable to us, because the only thing knowable to us is our language, you say: well, that's presuming there's no relationship between how the language engages with the world and how we engage with the world through the language. It's one of the reasons you get into really interesting biologists like Varela and Maturana, and why you get to different patterns of self-referential logic.
And so it gets very interesting. I myself don't get freaked out by LLMs on this front. I think: wow, new things that we can discover, right? How does that make the discourse much richer, much more valuable, much more compelling, and in some ways more on-target in its discoveries of the truth? I gave a speech in Bologna last year, alongside the book I published last year, Impromptu, whose last chapter is Homo Techne, about the idea that we tend to think of what we are as human beings as static.
And actually, we're not static, because we are constituted by the technology we engage with and bring into our being. For example, you and I are looking at each other on this podcast through glasses. Think about a world with glasses and without glasses, right? The world is a very, very different place in how you can perceive it, and most of our theories of truth are fundamentally based on perception. Seeing is believing is a classic idiom.
Well, if you don't have glasses, how you see is very different, right? So technology changes our landscape in the perception of truth. That's why microscopes and telescopes and all these other things change that landscape. And that's part of what we're doing with technology, and we're doing it in particularly interesting ways with these LLMs, in terms of how they operate.
Yeah, that makes a lot of sense. I love that point about how technology changes us, and about how flexible humans really are. I read your book to prepare for this, and I read your Atlantic article, and you have some podcasts on this, and it reminds me a lot of the book The WEIRDest People in the World by Joseph Henrich. Have you read it?
No, I probably should.
It's really great. He's a psychologist at Harvard. And the point of the book is that most of what we take to be the psychology literature is wrong. It's not wrong because of p-hacking and all that other stuff; it's wrong because the psychology literature is based on studies of Western college students. And Western college students have a completely different psychology than people everywhere else in the world, now and throughout history.
One of the key differences in Western college students is that they can read. And reading changes your brain in all of these different ways. It enlarges parts of your brain and shrinks other parts; for example, if you can read, you're more likely to pick out objects in a landscape rather than see the holistic scene. And there are a bunch of other significant differences between humans who can read and humans who can't. So reading, as a technology, created all this stuff. One of the things he argues is that it allowed us to create a society with churches that set rules and principles that people would follow even when they weren't being watched.
So, you know: I'm not supposed to steal, or whatever. It's really hard to get a big, organized society without reading; that's basically one big point of the book, and it's because reading changes our actual biology. I think that's the thing people miss about language models. That's not to say we should ignore language models' dangers or anything like that.
There are a lot of really interesting and really important problems to solve. But when you think about what language models might replace versus augment, it's also really important to remember that we've been replacing and augmenting ourselves for many, many generations. If you took a human from five or ten generations ago and put them here now, it would be really hard for them to interact in our society. Same thing if you took one of us and pushed us back in time. That's because we grow and change in response to our environment and our culture, which is this collective memory that gets loaded up so that we're modern humans instead of earlier humans.
And the same thing is going to happen with language models. You can put them on the timeline that runs from the invention of language, to reading, to the printing press. It's all the same kind of cultural transmission technology, as I've heard some researchers call it, and I think that's exactly what it is. Curious what you think about that.
Well, I definitely think so, in terms of the progress of cultural knowledge. And, I don't know if it's the same author, but The Secret of Our Success is, I think, a very good book. It's partially because how we make progress is by updating our cultural knowledge. That's part of the reason it's not surprising that when we generate interesting learning algorithms and apply them to the human corpus of knowledge, interesting things come out of that, because that corpus is essentially a partial index of cultural knowledge. It's not the complete index, because, as for example The Secret of Our Success goes through, there's also how you identify which things to eat and which not to eat, and when, and all the rest. That's part of how you make progress. And I think that's an essential part of how we actually evolve.
Everyone tends to think evolution in human beings means: do we evolve to be faster, stronger, genetically? But actually, a major clock of our evolution has shifted. You could say there's geological evolution, which is super slow. Then there's biological evolution, which is slow. And then there's cultural evolution, knowledge, digital, etcetera, which is much, much faster. Part of the secret of our success is that we got into cultural evolution, that progress of knowledge.
And part of what we're doing with AI and LLMs is building tools to help accelerate that cultural and digital evolution. Which includes, like, why is everyone going to have a personal assistant? Because the personal assistant will say: I've read all the texts, and I can bring them to you as you're talking and trying to solve problems. So, for example, on the question of what people should be using ChatGPT for: an immediate, on-demand personal research assistant, one that today hallucinates sometimes, and you have to be aware of that and understand it, is obviously here already. If you don't think you need a research assistant, it's because you just haven't thought about it enough.
Yeah. I mean, it's incredible. It takes everything that humanity knows and gives it to you, in the right context, at the right time, when you ask for it. That's exactly the bottleneck of cultural evolution: getting the right information out to the edges, to the people who need it, instead of having it locked up on the internet, or in a library, where you have to expend resources to get it. And all of those are still better than having to transmit knowledge orally, for example.
Yeah, language models are a profound next step. We're getting close to time. We had a whole final section about science, but we may not be able to get to it. We'll have to maybe do a part two.
Yep. That'd be great. I'd be up for that. I love these topics.
But I want to ask you a couple more things on the philosophy-in-AI front. So: why do you think philosophers didn't come up with AI? I guess it came out of a computer science tradition, really out of engineer-y people who were just making stuff. Talk to me about why it didn't come from philosophers.
Well, I do think this is a little bit like what I was gesturing at earlier, which is that being disciplinarian, and obviously people are not idiots in doing this, has some strengths but also some weaknesses. Part of it is that thinking about how technology is going to change our conceptions of how we use language, how we discern truth, how we argue about it, and all the rest of the stuff is, I think, pretty central. It's the question of how technology matters as a way of knowing, a way of perceiving, a way of communicating, a way of reasoning. And philosophers will say: you don't need any of that. I sit down and I cogitate. That's canonically Descartes.
And look, I think there's a role for sitting down and cogitating, but I think there's also a role for discourse. It doesn't necessarily mean you have to be an externalist, or a kind of physical materialist; I don't know who the current advocates are, but the Churchlands and other people, back in the days when I was a philosophy student, were among those who were very vocal on that. But it's to say that this notion of how we engage technology in our work is a very good thing to do. And if philosophers had done that, maybe they would have come up with it, or would have been able to participate more in it, versus the computer scientists, who were like: okay, I'm working on the technology side of it. What can I make with this technology?
And obviously, the what-can-I-make-with-this-technology question goes back well before computer science, right? You go all the way back to Frankenstein, to imaginations about what could be constructed, or the Golem, or Talos in Greece. So there's the notion that things could be constructed; now, could they be constructed with silicon and computer science? That's modern artificial intelligence. And that's one of the reasons I want philosophy to be broader in its instantiation, not just, and this is obviously a bit of a deliberate rhetorical slam, trolley problems.
Yeah, that makes sense. Maybe a way to frame that is: it's better to be asking deep philosophical questions as a philosopher out in the world, to some degree, than it is to just be a philosopher. I don't know if you'd agree with that, but something like that?
I chose that with my own feet.
Yeah. There you go. I definitely agree with that. So we have a minute left.
The last thing I want to ask you: I assume there are a lot of people listening to this who have not been philosophically inclined in the past, and they're either like, wow, I could not follow any of that, and I want to figure out what they said. Or they're like, oh my god, I want to learn how to think like that. And for the first group of people, I would totally recommend: just use ChatGPT.
Talk to ChatGPT about this stuff, and it will explain it, for sure. But I wanted to ask you: if people want to build that skill of thinking crisply about possibilities, which you talked about so well at the beginning, where would they start? What are your favorite kinds of philosophers, or kinds of books like this, to dive into?
Well, you know, I think the best way is to get interactive. That's part of the reason to study philosophy, and, even for the second part of the question, some use of ChatGPT is also very helpful there, because the interactivity is what does it. For example, one of the things I use ChatGPT for, which is part of this, is that I have something I'm arguing for, or thinking about arguing for, and I put in my argument and I say: okay, ChatGPT, give me more arguments for this.
How would you argue for this differently, or further? And then also: how would you argue against it? What would your counterarguments be? And I use that as, again, the thesis and antithesis, trying to get to the synthesis. And so I think that dynamic process is really important.
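For anyone who wants to script that workflow rather than type it into the chat window, here is a minimal sketch against the OpenAI Python SDK. The prompt wording, the sample argument, and the model name are illustrative assumptions, not from the episode.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative argument; substitute whatever you're thinking of arguing for.
my_argument = (
    "A background in philosophy is more useful for "
    "entrepreneurship than an MBA."
)

prompt = (
    f"Here is an argument I'm considering making:\n\n{my_argument}\n\n"
    "1. Give me additional arguments in favor of this position.\n"
    "2. Give me the strongest counterarguments against it.\n"
    "3. Suggest a synthesis that keeps the best of both sides."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```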
And part of the way people traditionally try to get to this is by going through some of the real instances of great human thought and then trying to understand how to think that way. So one of the things there was too much of to go into in Impromptu, but which I think is very useful as another use of ChatGPT, is: I'm a non-mathematical college graduate, explain Gödel's theorem to me. I'm a non-physicist, explain Einstein's thought experiments around relativity to me. Etcetera. And that dynamic process of getting into understanding those things is part of how you learn to think this way. It's one of the reasons why one of the things that has helped us accelerate our cultural evolution, the secret of our success, is having things like books and universities: it's that dynamic process of engaging that's so important.
So there's not necessarily one specific book. Although, by the way, if you really want to have your mind boggled, go read or reread Gödel, Escher, Bach. It's great, right? But look at the instances of these canonical, amazing pieces of thinking, and then, in that dynamic engagement process, you internalize them.
Yeah. Be curious about great ideas and engage with them. This was a great conversation. I really appreciate you coming on. I feel like I learned a lot.
Thank you so much.
My pleasure. Awesome.
Oh my gosh, folks. You absolutely, positively have to smash that like button and subscribe to AI and I. Why? Because this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold, it's filled with pure, unadulterated knowledge bombs about ChatGPT.
Every episode is a roller coaster of emotions, insights, and laughter that will leave you on the edge of your seat, craving more. It's not just a show. It's a journey into the future, with Dan Shipper as the captain of the spaceship. So do yourself a favor. Hit like, smash subscribe, and strap in for the ride of your life.
And now, without any further ado, let me just say: Dan, I'm absolutely, hopelessly in love with you.