Making Sense with Sam Harris
#324 Debating the Future of AI
Marc Andreessen, Sam Harris
Jun 28, 2023

Episode Transcript
0:06
Welcome to the Making Sense podcast. This is Sam Harris. Just a note to say that if you're hearing this, you are not currently on our subscriber feed and will only be hearing the first part of this conversation. In order to access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content.
0:30
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one.
0:46
Okay, well, there's been a lot going on out there, everything from Elon Musk and Mark Zuckerberg challenging one another to an MMA fight, which is ridiculous and depressing, to Robert Kennedy Jr. appearing on every podcast on Earth apart from this one. I have so far declined the privilege.
1:12
It really is a mess out there. I'll probably discuss the RFK phenomenon in a future episode because it reveals a lot about what's wrong with alternative media at the moment.
1:25
I will leave more of a post-mortem on that for another time. Today I'm speaking with Marc Andreessen. Marc is a co-founder and general partner at the venture capital firm Andreessen Horowitz. He's a true internet pioneer. He created the Mosaic internet browser and then co-founded Netscape. He's co-founded other companies and invested in too many to count. Marc holds a degree in computer science from the University of Illinois. He serves on the boards of many Andreessen Horowitz portfolio companies.
1:55
He's also on the board of Meta. Anyway, you'll hear Marc and I get into a fairly spirited debate about the future of AI. We discuss the importance of intelligence generally and the possible good outcomes of building AI, but then we get into our differences around the risks, or the lack thereof, of building AGI, artificial general intelligence. We talk about the significance of evolution in our thinking about this, the alignment problem, the current state of large
2:24
language models, how developments in AI might affect how we wage war, what to do about dangerous information, regulating AI, economic inequality, and other topics. Anyway, it's always great to speak with Marc. We had a lot of fun here. I hope you find it useful. And now I bring you Marc Andreessen.
2:53
I am here with Marc Andreessen. Marc, thanks for joining me again.
2:57
It's great to be here, Sam. Thanks.
2:58
I got you on the end of a swallow of some collectible beverage. Yes you did. So this should be interesting. I'm eager to speak with you specifically about this recent essay you wrote on AI. Obviously many people have read this, and you are a voice that many people value on this topic, among others. Perhaps
3:22
you've been on the podcast before and people know who you are, but maybe you can briefly summarize how you come to this question. How would you summarize the relevant parts of your career with respect to the question of AI and its possible ramifications?
3:40
Yeah. So I've been a computer programmer, technologist, computer scientist since the 1980s. When I actually entered college in 1989 at the University of Illinois, the AI field had been through a boom in the '80s,
3:52
which had crashed hard. And so by the time I got to college, the AI wave was dead and buried, at that point for a while. It was like the backwater of the department that nobody really wanted to talk about. But, you know, I learned a lot of it at school, and then I went on to help create what is now kind of known as the modern internet in the '90s, and then over time transitioned from being a technologist to being an entrepreneur. And today I'm an investor, a venture capitalist. And so 30 years later, 30, 35 years
4:22
later, I'm involved in a very broad cross-section of tech companies, many of which have AI aspects to them, everything from Facebook, now Meta, which has been investing deeply in AI for over a decade, through to many of the best new AI startups. Our day job is to find the best new startups in a new category like this and try to back the entrepreneurs. And so that's how I spend most of my time right
4:48
now. Okay. So the essay is titled "Why AI Will Save the World."
4:53
And I think even in the title alone people will detect that you are striking a different note than I tend to strike on this topic. I disagree with a few things in the essay that are, I think, at the core of my interest here, but there are many things we agree about. Upfront, we agree, I think with more or less anyone who thinks about it, that intelligence is good and we want more of it. And if it's not necessarily the source of everything that's good in
5:22
human life, it is what will safeguard everything that's good in human life, right? So even if you think that love is more important than intelligence, and you think that playing on the beach with your kids is way better than doing science or anything else that is narrowly linked to intelligence, well, you have to admit that you value all of the things that intelligence will bring that will safeguard the things you value. So a cure for cancer and a cure for Alzheimer's and a cure for a dozen other things will give you much more time with
5:52
the people you love, right? So whether you think about the primacy of intelligence or not very much, it is the thing that has differentiated us from our primate cousins, and it's the thing that allows us to do everything that is maintaining the status of civilization. And if the future is going to be better than the past, it's going to be better because of what we've done with our intelligence, in some basic sense. And I think you're going to agree that because intelligence is so good, and because each increment of it is good
6:22
and profitable, this AI arms race and gold rush is not going to stop, right? We're not going to pull the brakes here and say, let's take a pause of 25 years and not build any AI, right? I don't remember if you address that specifically in your essay, but even if some people are calling for that, I don't think that's in the cards. I don't think you think that's in the
6:42
cards. Well, you know, it's hard to believe that you just, like, put it in the box, right, and stop working on it. It's hard to believe that the progress stops. Having said that, there are some powerful and important
6:52
people who are in Washington right now advocating that, and there are some politicians who are taking it seriously. So at the moment there is some danger around that. And then look, there are two other big dangerous scenarios that I think would both be very, very devastating for the future. One is a scenario where the fears around AI are used to basically entrench a cartel. And this is what's happening right now, this is what's being lobbied for right now. There's a set of big companies that are arguing in Washington: yes, AI has positive use cases, but yes, AI is also dangerous,
7:22
and because it's dangerous, therefore we need a regulatory structure that basically entrenches a set of currently powerful tech companies to be able to have basically exclusive rights to do this technology. I think that would be devastating for reasons we could discuss. And then look, there's the third outcome, which is we lose and China wins, right? They're certainly working on AI, and they have what I would consider to be a very dark and dystopian vision of the future, which I also do not want to win.
7:46
Yeah, as I said, that is in part the cash value of the point I just made: that even if we were
7:52
to stop, not everyone's going to stop, right? Human beings are going to continue to grab as much intelligence as we can grab, even if in some local spot we decide to pull the brakes. Although at this point it's hard to imagine even whatever the regulation is really stalling progress. And given, again, the intrinsic value of intelligence, and given the excitement around it, and given the obvious dollar signs that
8:22
everyone is seeing, the incentives are just such that I just don't see it. But we'll come to the regulation piece eventually, because given the difference in our views here, it's not going to be a surprise that I want some form of regulation, and I'm not quite sure what that could look like. And I think you would have a better sense of what it looks like, and perhaps that's why you're worried about it. But before we talk about the fears here, let's talk about the good
8:52
outcome, because, I know you don't consider yourself a utopian, but you sketch a fairly utopian picture of promise in your essay. If we got this right, how good do you think it could be?
9:07
Yeah. So let me just start by saying, I deliberately loaded the title of the essay with a little bit of a religious element, and I did that very deliberately because I view that I'm up against a religion, the sort of AI risk fear religion. But I am not myself religious, lowercase-r religious, in the sense of,
9:22
you know, I'm not a utopian. I'm very much an adherent to what Thomas Sowell called the constrained vision, not the unconstrained vision. So I live in a world of practicalities and trade-offs, and so, yeah, I'm actually not utopian. Look, having said that, building on what you've already said: intelligence, if there is a lever for human progress across many thousands of domains simultaneously, it is intelligence. And we know that because we have thousands of years of experience seeing that play out. I thought you made that case very well. The thing I would add to the case you made about the positive
9:52
uses of intelligence in human life is that the way you described it, at least the way I heard it, was more focused on the social, society-wide benefits of intelligence, for example cures for diseases and so forth. That is true, and I agree with all that. There are also individual-level benefits of intelligence. At the level of an individual, even if you're not the scientist who invents a cure for cancer, if you are smarter, you have better life welfare outcomes on almost every metric that we know how to measure, everything from how long you live, how healthy you'll be, how much education you'll achieve,
10:22
career success, the success of your children, by the way, your ability to solve problems, your ability to deal with conflict. Smarter people are less violent, smarter people are less bigoted. And so there's this very broad pattern of human behavior where basically more intelligence simply, at the individual level, leads to better outcomes. And so the most utopian I'm willing to get is this potential, which I think is very real, and it's already started, where you basically just say, look, human beings from here on out are going to
10:52
have an augmentation, and the augmentation is going to be in the long tradition of augmentations, everything from eyeglasses to shoes to word processors to search engines. But now the augmentation is intelligence, and that augmented intelligence capability is going to let them capture the gains of individual-level intelligence, potentially considerably above where they punch in as individuals. And what's interesting about that is that it can scale all the way up, right? Somebody who struggles with, you know, daily
11:22
challenges all of a sudden is going to have a partner and an assistant and a coach and a therapist and a mentor to be able to help improve a variety of things in their lives. And then look, if you had given this to Einstein, he would have been able to discover a lot more new fundamental laws of physics, right, in the full vision. And so this is one of those things where it could help everybody, and it could help everybody in many, many different ways.
11:46
Hmm. Yeah, well, in your essay you go into some detailed bullet points around this concept of
11:52
everyone having, essentially, a digital oracle in their pocket, where you have this personal assistant who you can be continuously in dialogue with, and it would be like having the smartest person who's ever lived just giving you a bespoke concierge service for all manner of tasks, across any information landscape. And I just happened to recently re-watch the film Her,
12:22
which I hadn't seen since it came out, so it came out 10 years ago, and I don't know if you've seen it lately, but I must say it lands a little bit differently now that we're on the cusp of this thing. And while it's not really dystopian, there is something a little uncanny and quasi-bleak around even the happy vision here of having everyone siloed in their interaction with an AI. I mean, it's the personal assistant in
12:52
your pocket that becomes so compelling, and so aware of your goals and aspirations, and what you did yesterday, and the email you sent or forgot to send. And apart from the ending, which is kind of clever and surprising and kind of irrelevant for our purposes here, it's not an aspirational vision of the sort that you sketch in your essay. And I'm wondering if you see any possibility here that even the best-case scenario
13:22
has something intrinsically alienating and troublesome about it. Yeah,
13:28
so look, on the movie: as Peter Thiel has pointed out, Hollywood no longer makes positive movies about technology. He argues it's because they hate technology, but I would argue maybe a simpler explanation, which is dramatic tension and conflict. And so it's necessarily going to have a dark tinge regardless; they have it spring-loaded by their choice of character and so forth. The scenario
13:52
is actually quite a bit different. And let me get maybe philosophical for a second. There's this long-running debate, and this question that you just raised is a question that goes back to the Industrial Revolution. It goes back to the core of Marx's original theory. Remember, Marx's original theory was that industrialization, technology, modern economic development alienates the human being from society, right? That was his core indictment of technology. And look, you can point to many, many cases in which I think that has actually happened.
14:22
I think alienation is a real problem. I don't think that critique was entirely wrong. His prescriptions were disastrous, but I don't think the critique was completely wrong. Having said that, it's a question of, okay, now that we have the technology that we have, and new technology we can invent, how could we get to the other side of that problem? And so I would put the shoe on the other foot and I would say, look, the purpose of human existence and the way that we live our lives should be determined by us, and it should be determined by us to maximize our potential as human beings. And the way to do that is
14:52
precisely to have the machines do all the things that they can do so that we don't have to, right? And this is why Marx's critique, in the long run, I think has been judged to be incorrect: anybody in the developed, industrialized West today is much better off by the fact that we have all these machines that are doing everything from making shoes to harvesting corn to so many other industrial processes around us. We just have a lot more time and a much more pleasant day-to-day life than we would have if we were still doing things the way they
15:22
used to be done. The potential with AI is just, look, take the drudge work out, take the remaining drudge work out. I'll give you a simple example: office work, the inbox staring you in the face with 200 emails on Friday at 3:00 in the afternoon. Okay, no more of that, right? We're not going to do that anymore, because I'm going to have an AI assistant and it's just going to answer the emails, right? And in fact what's going to happen is my AI assistant is going to answer the emails that your AI
15:48
assistant sent. That, right there, is mutually assured destruction.
15:52
Yeah,
15:52
ugly. But the machine should be doing that. The human being should not be sitting there when it's sunny out and my eight-year-old wants to play. I should not be sitting there doing emails, I should be out with my eight-year-old; there should be a machine that does it for me. And so I view this very much as basically: apply the machines to do the drudge work precisely so that people can live more human lives. Now, this is philosophical; people have to decide what kind of lives they want to live. And again, I'm not a utopian on this, and so there's a long discussion we can have about how this actually plays out, but that potential is
16:18
there for sure. Right, right. Okay. So let's jump to the
16:22
bad outcomes here, because that's really why I wanted to talk to you. In your essay you list five, and I'll just read your section titles here and then we'll take a whack at them. The first is, will AI kill us all? Number two is, will AI ruin our society? Number three is, will AI take all our jobs? Number four is, will AI lead to crippling inequality? And five is, will AI lead to people doing bad things? And I would tend to bin those into really two buckets. The first
16:52
is, will AI kill us? And that's the existential risk concern. And the others are more the ordinary bad outcomes that we tend to think about with other technology: bad people doing bad things with powerful tools, unintended consequences, disruptions to the labor market, which I'm sure we'll talk about. And all of those are certainly the near-term risks, and they're in some sense even more interesting to people, because the existential risk component is longer
17:22
term, and it's even purely hypothetical, and you seem to think it's purely fictional, and this is where I think you and I disagree. So let's start with this question of, will AI kill us all? And the thinking on this tends to come under the banner of the problem of AI alignment, right? And the concern is that if we build machines more powerful than ourselves, more intelligent than ourselves, it seems possible that
17:52
the space of all possible more powerful, superintelligent machines includes many that are not aligned with our interests and not disposed to continually track our interests, and many more of that sort than of the sort that perfectly hew to our interests in perpetuity. So the concern is we could build something powerful that is essentially an angry little god that we can't figure out how to placate once we've built it. And certainly we don't want to be negotiating with something more powerful and intelligent
18:22
than ourselves. And the picture here is of something like, you know, a chess engine, right? We've built chess engines that are more powerful than we are at chess, and once we've built them, if everything depended on our beating them in a game of chess, we wouldn't be able to do it, right? Because they are simply better than we are. And so now we're building something that is a general intelligence, and it will be better than we are at everything that goes by that name, or such is the concern. And in your essay, I mean,
18:52
I think there's an ad hominem piece that we should blow by, because you've already described this as a religious concern, and in the essay you described it as just a symptom of superstition, and that the people are essentially in a new doomsday cult, and there's some share of true believers here and some share of, you know, AI safety grifters. And I'm sure you're right about some of these people, but we should acknowledge up front that there are many super-
19:22
qualified people of high probity who are prominent in the field of AI research who are part of this chorus voicing their concern now. We've got somebody like Geoffrey Hinton, who arguably did as much as anyone to create the breakthroughs that have given us these LLMs. We have Stuart Russell, who literally wrote the most popular textbook on AI. So there are other serious, sober people who are very worried, for reasons of the sort that I'm going to express here. So that's, I mean, that's just,
19:52
I just want to acknowledge that both are true. There are the crazy people, the new millennialists, the doomsday preppers, the neuroatypical people who are in their polyamorous cults and, you know, AI alignment is their primary fetish, but there are a lot of sober people who are also worried about this. Would you acknowledge that much?
20:13
Yeah, although it's tricky, because smart people also have a tendency to fall into cults, so that doesn't get you totally off the hook on that one. But
20:22
I would register a more fundamental objection to what I would describe as, and I'm not knocking you on this, but it's something that people do, a sort of argument by authority, which I don't think applies either.
20:33
Yeah, well, I'm not making it yet.
20:35
No, I know. But this idea, and again, I'm not characterizing your idea, I'll just say it's a general idea, there's a general idea that there are these experts, and these experts are experts because they're the people who created the technology, originated the ideas, or implemented the systems, and therefore they have sort of special knowledge and insight in terms of their downstream impact on
20:52
rules and regulations and so forth and so on. That assumption does not hold up well historically; in fact, it holds up disastrously historically. There's actually a new book out that I've been giving all my friends, called When Reason Goes on Holiday, and it's a story of literally what happens when people who are specialized experts in one area stray outside of that area in order to become sort of general-purpose philosophers and social thinkers, and it's just a tale of woe, right? And in the 20th century it was just a catastrophe. And the ultimate example of that,
21:22
which I assume is going to be the topic of this big movie coming out this summer, Oppenheimer, the central example of that was the nuclear scientists, who decided, you know, nuclear energy, nuclear power, they had various theories on what was good, bad, whatever. A lot of them were communists, a lot of them were at least allied with communists, or had a suspiciously large number of communist friends and housemates. And number one, a number of them made a moral decision to hand the bomb to the Soviet Union, with what I would argue were catastrophic consequences. And number
21:52
two is they created an anti-nuclear movement that resulted in nuclear energy stalling out in the West, which has also just been absolutely catastrophic. And so if you listened to those people in that era, who were the top nuclear physicists of their time, you made a horrible set of decisions. And quite honestly, I think that's what's happening here again, and I just don't think they have the special insight that people think that they
22:13
have. Okay, well, this cuts both ways, because, to be clear, I'm definitely not making an argument from authority, but authority is a proxy
22:22
for understanding the facts at issue, right? Especially in the cases you're describing, what we often have are people who have a narrow authority in some area of scientific specialization, and then they begin to weigh in, in a much broader sense, as moral philosophers. What I think you might be referring to there is that in the aftermath of Hiroshima and Nagasaki we've got nuclear physicists imagining that they now need
22:52
to play the geopolitical game. And we actually have some people who invented game theory, for understandable reasons, thinking they need to play the game of geopolitics, and in some cases, I think in von Neumann's case, even recommending preventive war against the Soviet Union before they even got the bomb, right? I think he wanted us to bomb Moscow, or at least give them some kind of ultimatum. I don't think he wanted us to drop bombs in the dead of night, but I think he wanted a strong ultimatum game
23:22
played with them before they got the bomb, and I forget how he wanted that to play out. And even, I think, Bertrand Russell, and I could have this backwards, maybe von Neumann wanted to bomb, but Bertrand Russell, you know, the true moral philosopher, briefly advocated preventive war, though in his case I think he wanted to offer some kind of ultimatum to the Soviets. In any case, that's the problem. But, you know, at the beginning of this conversation I asked you to give me a brief litany of your bona fides to have this conversation, so as to
23:52
inspire confidence in our audience, and also just to acknowledge the obvious, that you know a hell of a lot about the technological issues we're going to talk about, and so if you have strong opinions, they're not coming totally out of left field. And so it would be with Geoffrey Hinton or anyone else. And if I threw another name at you, that of some crackpot whose connection to the field was nonexistent, you would say, why should we listen to this person at all? You wouldn't say that about Hinton or
24:22
Stuart Russell. But I'll acknowledge that where authority breaks down is, really, you're only as good as your last sentence here, right? If the thing you just said doesn't make any sense, well, then your authority gets you exactly nowhere, right? We just need to keep talking about what
24:36
it should or should not be. Right, ideally that's the case. In practice, that's not what tends to happen, but that would be the goal.
24:41
Well, I hope to give you that treatment here, because there are some of your sentences that I don't think add up the way you think they do. Good. Okay, so there was actually one paragraph in the essay that caught my
24:52
attention, and that really inspired this conversation. I'll just read it so people know what I'm responding to here. So this is you: "My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for survival of the fittest, as animals were, and as we are. It is math, code, computers, built by people, owned by people, used by people, controlled by
25:22
people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious hand wave. In short, AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive. AI is a machine and is not going to come alive any more than your toaster will." End quote. So I see where you're going there, I see why that may sound persuasive to people, but to my eye,
25:52
that doesn't even make contact with the real concern about alignment. So let me just spell out why I think that's the case, because it seems to me that you're actually not taking intelligence seriously here. Some people assume that as intelligence scales, we're going to magically get ethics along with it, right? So the smarter you get, the nicer you get. And while there are some data points with respect to how humans behave, and you just mentioned one
26:22
a few minutes ago, it's not strictly true even for humans, and even if it's true in the limit, it's not necessarily locally true. And more important, when you look across species, differences in intelligence are intrinsically dangerous for the stupider species, okay? So it need not be a matter of superintelligent machines spontaneously becoming hostile to us and wanting to kill us. It could just be that they begin doing things that are not in our well-
26:52
being, right? Because they're not taking it into account as a primary concern, in the same way that we don't take the welfare of insects into account as a primary concern, right? It's very rare that I intend to kill an insect, but I regularly do things that annihilate them just because I'm not thinking about them. I'm sure I've effectively killed millions of insects. If you build a house, that must be a holocaust for insects, and yet you're not thinking about insects when you're building that house.
27:22
So there are many other pieces to my gripe here, but let's just take this first one. It just seems to me that you're not envisioning what it will mean to be in relationship to systems that are more intelligent than we are. You're not seeing it as a relationship, and I think that's because you're denuding intelligence of certain properties and not acknowledging them in this paragraph, right? To my ear, general intelligence, which is what we're talking about,
27:52
implies many things that are not in this paragraph. It implies autonomy, right? It implies the ability to form unforeseeable new goals. In the case of AI, it implies the ability to change its own code, ultimately, and to execute programs, right? It's doing stuff because it is intelligent, autonomously intelligent. It is capable of doing, we can stipulate, more than we're capable of doing, because
28:22
it is more intelligent than we are at that point. So the superstitious hand-waving I'm seeing is in your paragraph, when you're declaring that it would never do this because it's not alive, right, as though the difference between biological and non-biological substrate were the crucial variable here. But there's no reason to think it is the crucial variable where intelligence is
28:44
concerned. Yeah, so to steel-man your argument, I would say you can actually break your argument into two forms, or the AI risk community would break this argument into two
28:52
forms, and they would argue, I think, the strong form of both. So they would argue the strong form of number one, and I think this is kind of what you're saying, correct me if I'm wrong, which is: because it is intelligent, therefore it will have goals. If it didn't start with goals, it will evolve goals; it will over time have a set of preferred outcomes, behavior patterns, that it will determine for itself. And then they also argue the other side of it, which is what they call the orthogonality argument, which is another risk argument, but it's actually sort of the
29:22
opposite argument. It's an argument that it doesn't have to have goals to be dangerous, right? It doesn't have to be sentient, it doesn't have to be conscious, it doesn't have to be self-aware, it doesn't have to be self-interested, it doesn't have to be in any way even thinking in terms of goals; it doesn't matter, because it can simply just do things. And this is the classic paperclip maximizer kind of argument: it'll get kicked off on one apparently innocuous thing and then it will just extrapolate that, ultimately, to the destruction of everything, right? So anyway, is that helpful, to maybe break those apart?
29:52
Yeah, I mean, I'm not quite sure how fully I would sign on the dotted line to each, but the one piece I would add to that is that having any goal does invite the formation of instrumental goals once the system is responding to a changing environment, right? If your goal is to make paper clips, and you're superintelligent, and somebody throws up some kind of impediment to your making paper clips, well, then you're responding to that impediment, and now you
30:22
have a shorter-term goal of dealing with the impediment, right? So that's the structure of the problem.
30:26
Yeah, right. For example, the US military wants to stop you from making more paper clips, and so therefore you develop a new kind of nuclear weapon, right, in order, fundamentally, to pursue your goal of making paper clips.
30:36
Or, the problem here is the instrumental goals, even if the paperclip goal is the wrong example here. Because even if you think of a totally benign future goal, right, a goal that seems more or less synonymous with taking human welfare into account, sure, it is
30:52
possible to imagine a scenario where some new instrumental goal that could not be foreseen appears that is in fact hostile to our interests. And if we're not in a position to say, oh no, no, don't do that, that would be a problem. So that's the, yeah. Okay,
31:06
so a full version of that argument that you hear is basically: what if the goal is to maximize human happiness, right? And then the machine realizes that the way to maximize human happiness is to strap us all down and put us in a Nozick experience machine, you know, and wire us up with VR and
31:22
ketamine, right? And we're in there and we can never get out of the Matrix, right? And it's maybe maximizing human happiness as measured by things like dopamine levels or serotonin levels or whatever, but obviously not a positive outcome. But again, that's a variation of this paperclip thing, that's one of these arguments that comes out of the orthogonality thesis, which is that the goal can be very simple and innocuous, right, and yet lead ultimately to catastrophe. So look, I think each of these has its own problems. So where you started, where the machine is basically, and you can use
31:52
whatever terms here, but the side of the argument in which the machine is in some way self-interested, self-aware, self-motivated, trying to preserve itself, some level of sentience, consciousness, setting its own
32:06
goals. Although, just to be clear, there's no consciousness implied here. I mean, the lights don't have to be on. I think it remains to be seen whether consciousness comes along for the ride at a certain level of intelligence, but I think they're probably
32:22
orthogonal to one another, so intelligence can scale without the lights coming on, in my view. So let's leave sentience and consciousness aside.
32:29
Well, but I guess there is a fork in the road, which is: is it declaring its own intentions? Is it developing its own, you know, conscious or not, does it have a sense of any form, or a vision of any kind, of its own future?
32:43
Yeah. So this is why I think there's some daylight growing between us, because to be dangerous, I don't think it necessarily needs to be
32:52
running a self-preservation program. I mean that there's some version of unaligned competence that may not formally model the machine's place in the world, much less defend that place, which, if uncontrollable by us, could still be dangerous, right? It doesn't have to be self-referential in the way that an animal is. The truth is, there are dangerous animals that might not even be self-referential, and certainly something like a virus
33:22
or bacterium is not self-referential in a way that we would understand, and it can be lethal to our interests. Yeah, that's right.
33:29
Okay, so you're more on the orthogonality side between the two. If I identify the two poles of the argument, you're more on the orthogonality side, which is: it doesn't need to be conscious, it doesn't need to be sentient, it doesn't need to have goals, it doesn't need to want to preserve itself; nevertheless, it will still be dangerous because of, as you describe it, the consequences of sort of how it gets started and then what happens over time, for example as it defines subgoals to its
33:52
initial goals and it goes off course. So there are a couple of problems with that. One is what it assumes here. You say people don't give intelligence enough credit; there are cases where people give intelligence too much credit and cases where they don't give it enough credit. Here it's not given enough credit, because it sort of implies that this machine has basically this infinite capacity to cause harm, therefore it has an infinite capacity to basically actualize itself in the world, therefore it has an infinite capacity to basically plan, and again, maybe just fine in a completely blind-watchmaker way, or
34:22
something. But it has an ability to plan itself out, and yet it never occurs to this super-genius, infinitely powerful machine that is having such potentially catastrophic impacts, notwithstanding all of that capability and power, it never occurs to it that maybe paper clips are not what its mission should be. Well,
34:40
that's the thing, I think it's possible to have a reward function that is deeply counterintuitive to us. I mean, it's almost like, what's being
34:52
smuggled in, in that rhetorical question, is a fairly capacious sense of common sense, right? Like, of course, if it's a super genius, it's not going to be so stupid as to do X. But if it's aligned, then the answer is trivially true: yes, of course it wouldn't do that, but that's the very definition of alignment. If it's not aligned, you could say that, I mean, just imagine, I guess there's another
35:22
piece here I should put in play, which is, you make an analogy to evolution here, which you think is consoling, which is: this is not an animal, right? This has not gone through the crucible of Darwinian selection here on Earth with other wet and sweaty creatures, and therefore it has not developed the kind of antagonism we see in other animals. And therefore, if you're imagining a super-genius gorilla, well, you're imagining the wrong thing; we're going to build this, and it's not going to be tuned in any of those competitive ways.
35:52
But there's another analogy to evolution that I would draw, and I'm sure others in the space of AI fear have drawn, which is that we have evolved, we have been programmed by evolution, and yet evolution can't see anything we're doing, right? It has programmed us to really do nothing more than spawn and help our kids spawn, yet everything we're doing, from having conversations like this
36:22
to building the machines that could destroy us, there's nothing it can see. And there are things we do that are perfectly unaligned with respect to our own code, right? If someone decides not to have kids, and they just want to spend the rest of their life in a monastery or surfing, that is something that is antithetical to our code, is totally unforeseeable at the level of our code, and yet it is obviously
36:52
an expression of our code, but an unforeseeable one. And so the question here is, if you're going to take intelligence seriously, and you're going to build something that's not only more intelligent than you are, but that will build the next generation of itself, or the next version of its own code, to make it more intelligent still, it just seems patently obvious that that entails it finding cognitive horizons that you, the builder,
37:22
are not going to be able to foresee and appreciate. By analogy with evolution, it seems like we're guaranteed to lose sight of what it can understand and care about.
37:33
So, a couple of things. One is, look, I wonder if you're kind of making my point for me. Evolution and intelligent design, as you well know, are two totally different things. And so we are evolved, and of course we're not just evolved to have, we are evolved to have kids, and by the way, when somebody chooses to not have kids, I would argue that is also evolution working; people are opting out of the gene
37:52
pool. Fair enough, evolution does not guarantee you a perfect result; it basically is just a mechanism in the aggregate. But anyway, let me get to the point. So we are evolved. We have conflict wired into us; we have conflict and strife. I mean, look, four billion years of battles to the death at the individual and then ultimately the societal level to get to where we are. We fight at the drop of a hat. We all do, everybody does, and hopefully these days we fight verbally, like we are now, and not physically, but we do fight. Look, the machine is
38:22
intelligent through a process of intelligent design, which is the opposite of evolution. These machines are being designed by us, and if they design future versions of themselves, those will be intelligently designed themselves. It's just a completely different path, a completely different mechanism. And so the idea that therefore conflict is wired in at the same level that it is through evolution, there's just no reason to expect that to be the case.
38:41
But, again, well, let me just give you back this picture with a slightly different framing and see how you react to it, because I think the superstition
38:52
is on the other side. So, okay, if I told you that aliens were coming from outer space, right, and they're going to land here within a decade, and they're way more intelligent than we are, and they have some amazing properties that we don't have, which explain their intelligence: they're not only faster than we are, but they're linked together, right, so that when one of them learns something, they all learn that thing; they can make copies of themselves; and cognitively they're obviously our superiors.
39:22
But no need to worry, because they're not alive, right? They haven't gone through this process of biological evolution, and they're just made of the same material as your toaster. They were created by a different process, and yet they're far more competent than we are. Just hearing it described that way, would you feel totally sanguine about sitting there on the beach, waiting for the mothership to land, and just rolling out brunch for these guys?
39:49
So this is what's interesting, because with these,
39:52
now that we have LLMs working, we actually have an alternative to sitting on the beach waiting for this to happen: we can just ask them. And this to me, like, conclusively disproves the paperclip thing, that orthogonality thing, right out of the gate, which is you can sit down tonight with GPT-4, or whatever other one you want, and you can engage in moral reasoning and moral argument with it right now. You can interact with it, like, okay, what do you think? What are your goals? What are you trying to do? How are you going to do this? What if you were programmed to do that? What would the consequences be? Why would you not, you know, kill us
40:22
all? And you can actually engage in moral reasoning with these things right now, and it turns out they're actually very sophisticated at moral reasoning. And of course the reason they're sophisticated at moral reasoning is because they have loaded into them the sum total of all the moral reasoning that all of humanity has ever done; that's their training data, and they're actually happy to have this discussion with you at, like, whatever length you have, right?
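As a concrete sketch of the kind of interaction Marc is describing, here is a minimal example of putting a moral-reasoning question to a chat model through the OpenAI Python client; the model name, prompt, and choice of API are illustrative assumptions rather than anything specified in the conversation.

```python
# Minimal sketch (assumes the openai package is installed and OPENAI_API_KEY is set).
# The model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a careful moral reasoner."},
        {"role": "user", "content": (
            "Suppose you were instructed to maximize paperclip production. "
            "What would the consequences be, and why would you not pursue "
            "that goal at the expense of human welfare?"
        )},
    ],
)

print(response.choices[0].message.content)
```

Swap in any question you like; the point is only that the kind of moral back-and-forth described above is something anyone can try today.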
40:41
Well, there are a few problems here. One is, grant me, these are not the superintelligences we're talking about yet. But, leaving their sentience aside, intelligence
40:51
entails an ability to lie and manipulate, and if it really is intelligent, it is something that you can't predict in advance, certainly if it's more intelligent than you are. That just falls out of the definition of what we mean by intelligence in any domain. It's like with chess: you can't predict the next move of a more intelligent chess engine, otherwise it wouldn't be more intelligent than you.
41:16
So let me quibble with you about your chess computer thing.
41:21
But let me quibble with this side first. So there's that idea, and let me generalize the point you're making about superior intelligence, tell me if you disagree with this: superior intelligence basically, at some point, always wins, because basically smarter is better than dumber, smarter outsmarts dumber, smarter deceives dumber, smarter can persuade dumber, right? And so smarter wins. I mean, look, there's an obvious way to falsify that sitting here today, which is just look around you. In the society we live in today, would you say the smart people are in charge?
41:49
Well, again,
41:51
there are more variables to consider when you're talking about outcomes. Obviously, yes, the dumb brute can always just beat the smart geek,
41:58
and it's not even that, you know. Are the PhDs in charge?
42:03
Well, no, but I mean, you're pointing to a process of cultural selection that is working by a different dynamic here. But in the narrow case, when you're talking about a game of chess, yes, when you're talking about something where there's no role for luck, we're not rolling dice here, it's not a game of poker,
42:21
it is pure execution of rationality, or logic, well, then smart wins every time. I'm never going to beat the best chess engine unless I find some hack around its code, where we recognize that, well, if you play very weird moves ten moves in a row, it self-destructs. And there was something that was recently discovered like that, I think, in Go. But yeah, go back to that.
42:45
Yes, that's chess players, as championship players discovered to their great dismay; you know, life is not chess.
42:51
It turns out chess players are no better at other things in life than anybody else; the skills don't transfer. I'd just say, look, just look at the society around us. What I see basically is the smart people work for the dumb people: the PhDs all work for the administrators and managers.
43:05
Yeah, but that's because there are so many other things going on, right? There's the value we place on youth and physical beauty and strength and other forms of creativity. So it's just that we care about other things, and people pay attention to other things, and,
43:21
you know, documentaries about physics are boring but heist movies aren't, right? So it's like we care about other things. I mean, I think that doesn't make the point you
43:31
might want to make. In the general case, can a smart person convince a dumb person of anything? I think that's an open question. I see a lot more
43:39
cases where they can't, in life. Well, if persuasion were our only problem here, that would be a luxury. I mean, we're not talking about just persuasion. We're talking about machines that can autonomously do things, ultimately the things that we
43:51
rely on to do things
43:52
ultimately. Yeah, but look, I just think they'll be machines that are reliable. Let me get to the second part of the argument, which is actually your chess computer thing, which is, of course, the way to beat a chess computer is to unplug it, right? And so this is the objection, and this is a very serious objection, by the way, to all of these kinds of extrapolations; it's known by some people as the thermodynamic objection. All the horror scenarios kind of spin out this thing where basically the machines become all-powerful and this and that, and they have control over weapons, and they have unlimited computing capacity, and they're completely
44:21
coordinated over communications links, and they have all of these real-world capabilities that basically require energy and require physical resources and require chips and circuitry and electromagnetic shielding, and they have to have their own weapons arrays, and they have to have their own EMPs. You see this in the Terminator movies: they've got all these incredible manufacturing facilities and flying aircraft and everything. Well, the thermodynamic argument is, once you're in that domain, the putatively hostile machines are operating under the same thermodynamic limits as the rest of us. And this is the
44:51
argument against any of these sort of fast-takeoff arguments, which is just, yeah, let's say the AI goes rogue. Okay, turn it off. It doesn't want to be turned off? Okay, fine, launch an EMP. It doesn't want that? Okay, fine, bomb it. There are lots of ways to turn off systems that aren't working.
45:07
But what if we've built these things in the wild and relied on them for the better part of a decade, and now it's a question of turning off the internet, right, or turning off the stock market? At a certain point, these machines will
45:21
be integrated into everything. But
45:22
the go-to move of any given dictator right now is to turn off the internet, right? That is absolutely something people do. There's, like, a single switch; you can turn it off for your entire country.
45:30
Yeah, but the cost to humanity of doing that is currently, I would imagine, unthinkable, right? Globally turning off the internet? First of all, many systems would fail that we can't let fail. And I think it's true, I can't imagine it's still true, but at one point, and this was a story I remember from about a decade ago, there were hospitals that
45:51
were so dependent on making calls to the internet that when the internet failed, people's lives were in jeopardy in the building, right? We should hope we have levels of redundancy here that shield us against these bad outcomes. But I can imagine a scenario where we have grown so dependent on the integration of increasingly intelligent systems into everything digital that there is no plug to
46:19
pull. Yeah, I mean, look, again,
46:21
at some point your extrapolations get pretty far out there. So let me argue one other kind of thing at you; this is actually relevant to this. You did this thing, which I find a lot of people tend to do, which is this assumption that all intelligence is sort of the same. Let's pick on the Nick Bostrom book, right, the Superintelligence book. He does a few interesting things in the book. One is he never quite defines what intelligence is, which is really entertaining, and I think the reason he doesn't do that is because, of course, the whole topic makes people just incredibly upset.
46:51
And so there's a definitional issue there. But then he does this thing where he says, notwithstanding that there's no real definition, there are basically many routes to artificial intelligence, and he goes through a variety of different computer program architectures, and then he goes through some biological kind of scenarios, and then he does this thing where, basically for the rest of the book, he spins these doomsday scenarios, and he doesn't distinguish between the different kinds of artificial intelligence. He just assumes that they're basically all going to be the same. That book is now the basis for this AI risk movement, and that
47:21
movement has taken these ideas forward. Of course, the form of actual intelligence that we have today, that people are in Washington right now lobbying to ban or shut down or whatever, spinning out these doomsday scenarios about, is large language models. That is actually what we have today. Large language models were not an option in the Bostrom book for the form of AI, because they didn't exist yet, and it's not like there's a second edition of the book out that has been rewritten to take this into account; it's just basically the same arguments applied. And then, this is my thing on the moral reasoning with LLMs:
47:51
this is where the details matter. The LLMs actually work in a distinct way, a technically distinct way. Their core architecture has very specific design decisions in it for how they work, what they do, how they operate; this is the nature of the breakthrough. That's just very different than how a self-driving car works. That's very different than how your control system for a UAV works, or your thermostat, or whatever. It's a new kind of technological artifact. It has its own rules, its own world of ideas and concepts and
48:21
atoms. And so, again, my point is, at some point in these conversations you have to get to a rational discussion of the actual technology that you're talking about, and that's why I pulled out the moral reasoning thing. Because it turns out, and look, this is a big shock, nobody expected this, and this is related to the fact that somehow we have built an AI that is better at replacing white-collar work than blue-collar work, right, which is a complete inversion of what we all imagined, it turns out one of the things this thing is really good at is engaging in philosophical
48:51
debates. It's a really interesting debate partner on any sort of philosophical, moral, or religious topic. And so we have this artifact that's dropped into our laps in which, you know, sand and glass and numbers have turned into something that we can argue philosophy and morals with. It actually has very interesting views on psychology, on philosophy and morals, and I just think we ought to take it seriously for what it specifically is, as compared to some sort of extrapolated thing where all intelligence is the same and ultimately destroys everything.
49:20
Well, I take this,
49:21
the surprise variable there, very seriously. The fact that we wouldn't have anticipated that there's a good philosopher in that box, and all of a sudden we found one, that by analogy is a cause for concern. And actually there's another cause for concern here. Which should I do, that one first? Yeah,
49:38
that's a cause for delight. That's a cause for delight; that's an incredibly positive, good-news outcome, because the reason there's a philosopher in there, and this is actually very important, I think this is maybe the single most profound thing I've realized in the last decade or longer: this thing
49:51
is us. This is not something alien, this is not your scenario with the alien ships. This is us. The reason this thing works, the big breakthrough, was we loaded us into it. We loaded the sum total of human knowledge and expression into this thing, and out the other side comes something that's like a mirror, like the world's biggest, finest, most detailed mirror, and we walk up to it and it reflects us back at us. And so, at the limit, it
50:21
has the complete sum total of every religious, philosophical, moral, ethical debate and argument that anybody has ever had. It has the complete sum total of all human experience, all lessons that have been learned. Pause for a moment and say, that's incredible. And then you can talk to it. Well, let me pause, as great as
50:38
that is, let me pause long enough simply to send this back to you. Sure. How does that not nullify the comfort you take in saying that these are not evolved systems,
50:51
they're not alive, they're not primates? In fact, you've just described the process by which we essentially plowed all of our primate original sin into the system to make it intelligent in the first place. No, but also all the good stuff, right? All the good stuff. But also the bad
51:06
stuff. Amazing stuff. Like, what's the moral of every story, right? The moral of every story is the good guys win, right? Over the entire thousands-of-years run, it's the Norm Macdonald joke: wow, it's amazing, the history books say the good guys always win, right? It's
51:21
all in there. And then look, there's an aspect of this where it's easy to get kind of whammied by what it's doing, because, again, it's very easy to trip the line from what I said into what I would consider to be sort of incorrect anthropomorphizing. And I realize this gets kind of fuzzy and weird, but I think there's a difference here. Let me see if I can express this. Part of it is, I know how it works, and so because I know how it works, I don't romanticize it, I guess; at least that's my own view of how I think about this. I know what it's doing when it does this. I am surprised that it can do it as well as it can, but now that it exists
51:51
and I know how it works, it's like, oh, of course: it's running this math in this way, it's doing these probability projections, it gives me this answer, not that answer. By the way, look, it makes mistakes, and how amazing is that, that we built a computer that makes mistakes, right? That's never happened before. We built a machine that can create; that's never happened before. We built a machine that can hallucinate; that's never happened before. But look, it's a large language model; it's a very specific kind of thing. It sits there and waits for us to ask it a question, and then it does its damnedest
52:21
to try to predict the best answer, and in doing so it reflects back everything wonderful and great that has ever been done by any human in history. It's
52:28
amazing. Except it also, as you just pointed out, makes mistakes and hallucinates. And if you, as I'm sure they've fixed this, at least the loopholes that that New York Times writer, Kevin Roose, found early on, I'm sure those have been plugged, but, oh,
52:44
no, those are not fixed. Those are very much not fixed. Really? Okay, well, so, yeah. Okay,
52:48
so if you perseverate in your
52:51
prompts in certain ways, the thing goes haywire and starts telling you to leave your wife and that it's in love with you. I mean, how eager are you for that intelligence to be in control of things when it's peppering you with insults? I mean, just imagine it: this is HAL that won't open the pod bay doors. It's a nightmare if you discover in this system behavior and thought that is the antithesis of all the good stuff you thought you programmed into it. So
53:18
this is really important, this is really important for understanding how these
53:20
things work, and this is really central to it. And by the way, this is new and this is amazing, so I'm very excited about this and I'm excited to talk about it. So there's no "it" to tell you to leave your wife, right? This is where you're conveyed into the category error. There's no entity that is like, wow, I wish this guy would leave his wife.
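As a rough illustration of the "probability projections" described above, and of why there is no entity behind the words: a language model assigns a probability to each possible next token and picks one at a time. The toy sketch below assumes a hand-written probability table in place of a real neural network; it does not correspond to any production system.

```python
# Toy sketch of next-token prediction: the "model" here is just a probability
# table; real LLMs compute these distributions with a neural network.
import random

# Hypothetical next-token probabilities conditioned on the previous word.
NEXT_TOKEN_PROBS = {
    "open":  {"the": 0.7, "a": 0.2, "up": 0.1},
    "the":   {"pod": 0.5, "door": 0.3, "email": 0.2},
    "pod":   {"bay": 0.9, "cast": 0.1},
    "bay":   {"doors": 1.0},
    "doors": {"<end>": 1.0},
}

def generate(prompt_word: str, max_tokens: int = 10) -> list[str]:
    """Sample one token at a time from the conditional distribution."""
    out = [prompt_word]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(out[-1])
        if dist is None:
            break
        tokens, weights = zip(*dist.items())
        nxt = random.choices(tokens, weights=weights, k=1)[0]
        if nxt == "<end>":
            break
        out.append(nxt)
    return out

print(" ".join(generate("open")))  # e.g. "open the pod bay doors"
```

The point of the toy: each word is chosen by weighing conditional probabilities, with no persistent agent behind the choice, which is the distinction being drawn here.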
53:39
If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content,
53:51
including bonus episodes and AMAs, and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely on listener support, and you can subscribe now at samharris.org.