The Ezra Klein Show: The Culture Creating A.I. Is Weird. Here’s Why That Matters.

New York Times Opinion 5/2/23 - Episode Page - 1h 6m - PDF Transcript

So this is the second in a little California two-fer series. The first, with Scott Wiener, came out a few days ago, and it's all about the politics of California and the politics of the Bay Area. And this is more about the culture of it and the culture of its weirdness and what

emerges if you're willing to take that seriously. Something about Northern California culture in

particular is it is this very strange braiding of technology and engineering and capitalism and

mysticism and openness, a radical kind of openness. And all that is created, the technology industry,

it is creating AI now, which is obviously going to be a major topic of this conversation with

Erik Davis. And to take California seriously and to understand what makes it special and what makes it frustrating and why what happens here happens here, I think you have to take the weird quite seriously. And Erik Davis is a guy who takes the weird quite seriously. He is a historian of California counterculture. He is trained as a religious historian. He's written books like TechGnosis and High Weirdness. And he's tried to make weirdness into an interpretive framework

and to understand the role it plays in this place that he loves and chronicles. And I found his work,

it is very weird, but I find his work very, very helpful. And trying to understand how to maintain

an openness without losing a skepticism is a really important talent, a really important discipline,

a really important practice. But I think this is actually pretty helpful for understanding

why so much strange and so much powerful technology comes out of such a small area

of the globe. As always, my email is ezrakleinshow@nytimes.com. Erik Davis, welcome to the show.

It's great to be here. For a lot of your career, in a lot of your older books, I understand part of what you've been doing as being a theorist of California culture. And then the last book,

and I think threaded through a lot of your more recent work, is becoming a theorist of this idea

of weird or weirdness. And they seem very connected. So what is the word weird? What is the concept

weird in your understanding? And what makes California weird? Yeah, that's a really good one.

One of the things I was doing with my decision to make the word weird mean more, which some people had already been doing for a while, but there's sort of a sense that it has something for us now that

it didn't have before. And one dimension of that is simply that there's more substance to it than

we allow. And one way of looking at that is that if you pay attention to how people use the term

colloquially, what kinds of things they put in that category, you start to realize that it does a

lot of work, but sort of off to the side. It's the place you put things that are uncomfortable,

awkward, strange, maybe a little gross, kind of fascinating, spooky. It covers a strange range

of things. And I realized that there was a lot hidden there. So I said, well, let's actually

kind of look at this word. Where does it come from? How does it evolve? What are the concepts

associated with it? And, you know, you go back to Shakespeare with the weird sisters, you find

there's this whole marvelous sort of underground history of how the ideas of fate merge with ideas

of the uncanny and the spooky, and then increasingly the bizarre and the pulp and the perverse and the

macabre. And in that current, there's something about Bohemia, there's something about the night

sides of consciousness, the edges of our culture that gets articulated and expressed in a way

that because it's been kind of hidden, unlike other terms that we might pay attention to,

you can contrast it very interestingly with the idea of the uncanny, for example.

But because it's been kind of hidden, it has a lot for us now. So that's something about the

general term. I've been trying to think about why I find your work very helpful for being a

Californian and for understanding California. And one thing I've come to is the idea that you

can have two relationships to that bucket of the weird, which is one relationship is dismissal.

To say something is weird is to put it out of sight, to sort of brush it off of your reality.

And the other is an attraction to it, right? An orientation towards it. To say something is

weird is to say it's alluring, that it's seductive. And when I think about what is different in

particularly San Francisco, though, I think more broadly, quite a bit of California,

is this interest in the weird, this kind of openness to things that other people dismiss.

If you think that is right, I'm curious why you think it is right.

Oh yeah, I do think that's very true. And I like that the ambivalence of it is really part of the

power. I mean, California is a really, it's an unusual place. It was unusual kind of almost

from the get go. And it has a lot of really intense polarities in it culturally, how it

defined itself. I mean, even today, when we talk about it, we sort of talk about it from a coastal

point of view, when it actually has all of this, you know, some of the most righteous racists and reactionaries, you know, were also nurtured in its climes. So it's a place of polarity,

but particularly a place of what I would call mutation, meaning that when it was recognized

as a site of potentially great transformation and great exploitation, it was from the get go

seen as a place to work, to process, to develop, to transform. I mean, we think about like 19th

century images of California and we might, you know, the sort of pop story is Yosemite and the

great natural wonders and it's just gorgeous and the weather is like Greece and it's like the sort

of beautiful bucolic place, but it was one of the most rapidly industrialized places in the history

of the United States. So it was always seen as like material to work. And at some point, it became

clear that part of the material to work was us. And so the development of media, the development

of Hollywood. Photography developed here, aspects of television. Like, there's a lot of technological

development that happens here. That's also sort of about mediation and how we understand ourselves

and how culture transforms. And in a lot of ways, we're kind of living globally in a construct that

was filtered through this peculiar space, which has, again, a sort of popular story about exploration

and novelty and courage and innovation. And then another story that's more desperate and strange.

It's like, if you take the almost mythological idea that the West expands west, doing its settler colonialist thing in all its glory and horror, until it slams into the Pacific, the largest body of water on the planet, and goes, well, now what? What do we do

now? Where do we go now? Do we go into aerospace? Go into the heavens? Do you go into media? So

you build like a virtual kind of reality? We move into the sort of computer spaces. These are all

places that end up being inflected with a certain kind of exuberance and a certain kind of anxiety

and even desperation about what is the human now? And do you go internally into consciousness and

meditation and psychedelics? And, but I want to pick up on something you said at the beginning of

that answer, because I think it's important. I think the stereotype of California is that what

is getting mutated, evolved, created here is some form of cultural or even political liberalism.

And I think what is often missed is how much of the modern right is birthed here.

Ronald Reagan, Richard Nixon, if you go into the modern right, Breitbart, Ben Shapiro and the Daily

Wire, I don't think it's entirely fair to say he's on the right, but Joe Rogan was taping out of here

for a long time. That whole thing of the intellectual dark web was based here to the

extent it was anywhere; the Claremont Institute, which is home to the central theorists of the Trump era, to the extent there are any, is there in California. And there's this really, I think, interesting way

in which the boundary pushing cultural liberalism of California has also created its own reaction

on the right. And both of those have become very important in national and international politics.

Yeah, I'm really glad you pointed that out; that is totally true. You have liberalism, you got the

sort of various forms of intense conservatism. And then there's just the question of what do

you do with libertarianism? And that's in a way part of the secret key. And that's also part of the way that the hippies become the cyber capitalists. It has to do with, like, aspects of

libertarianism that I still think we don't really wrestle with adequately in American history. And

nobody else on the planet even understands it because they're like, what is libertarianism? What is anarchism? Is it right wing extremism? What do I do with this stuff? But there's something

about the way that the social logic of libertarianism overlaps both liberalism and a certain kind of

conservative intensity that I think also gets us something very specific about the state.

So I grew up in Irvine. When you talk about California, Irvine and Orange County, particularly in the era in which I grew up there, were much more right wing. We did not elect our first Democrat to Congress until Katie Porter, which I think was in 2018, if I'm not wrong.

And that's a very different culture. I mean, the aerospace industries, defense industries down

there. There are a lot of Californias. But one thing that I think a lot of people mean when

they talk about California culture is really Northern California. And this sort of strange

braiding of what some people call consciousness culture, then with technology and money. And

that's a really potent combination now. But one of the things that I think is interesting about it,

and it'll bridge us to maybe some of our main topics here, is a way in which that openness to

strange things has been an accelerant to the technological industry here, that openness to

ideas that sound weird, people that seem weird, has been a kind of secret sauce in attracting

the folks in the industries that have continued sort of inventing some of the biggest companies

and technologies in the world. So how do you understand the interaction of weirdness and

Silicon Valley in particular? Yeah, well, I think it's, you probably have to get into the nitty gritty

of engineering culture, nerd culture, and then just the fact that a lot of the psychedelic

movement and the consciousness movement that was related to it in so many ways, much more deeply than

we often kind of remember or imagine, that those also had, if you will, a technical dimension to

them, that there's a protocol logic to the weird. It's not about necessarily having a stance or

having a concept. This is how the world is. It's more like, hey, we can play with this. We can

manipulate this. We can hack this. There's a sort of relationship to the possibilities of reality

that has an open-ended experimental quality that almost inevitably invokes bohemian traditions,

ideas of anarchy, of play, of the unknown, and all of those, you know, you can romanticize them,

or you can just look at them more pragmatically, where they're just, there's a material, we're not

satisfied. Let's work with the material. Let's see where it goes. So if you trace these lines back,

like if you look at the history of the personal computer, and what are all the elements that

are going on in the 1960s, you're going to find these sort of zones where there's an engineering

mindset overlapping with a like, let's change reality mindset, and they're both practical

and sort of visionary and also playful, if you will. And part of the weird is a kind of

playfulness, like you just don't know, and it might go south or be strange, but it's part of a kind

of experimental ethos. And over time, then, that gets sort of like, becomes more and more coherent

and more and more visible, more and more obvious that this is a thing to do, that I'm not just

going to go to Burning Man. I'm going to get my whole staff to go to Burning Man because it just

opens your mind. And whether you see that as a petri dish of future products and of future consumers,

or you see it as a kind of edge condition of capitalism or technology or culture,

where inventions happen on the fly and a lot of them go south, but that's part of what the game

is, is that there's a lot of oddities along the way if you're going to find some kind of

novelty. So it gets selected in a way that, you know, from someone like me who is more interested

in the Bohemian cultural consciousness side of things, it also can look sort of insidious,

like there's sort of like, oh, we've gotten really good at learning how to capture and exploit these

sort of elements. But that's also not an entirely accurate way to look at what's happening.

So let's talk about AI, which is a place where I've reached for the metaphors of the weird

working off of some of your work. And you've done a lot of thinking there now about AI and

weirdness. So what makes AI weird? That's such a good question. I've really been thinking about that a lot. I think part of it, if you have to go, what is the weird? And one of the ways of thinking

about it is that there's something here that challenges my set assumptions about how things

work, that has an additional quality, let's say, of some kind of uncanniness, something that is not

simply confusing, or alien, but has a familiar unfamiliarity to it. And I think the most obvious

place to look at it is, and the first place, well, let me explain personally. So I had kind of ignored a lot of the AI stuff. I've known about AI safety issues for a very long time. And I know people

are really into it, or just, you know, I knew a little bit about it, but it just didn't hit me,

really, I almost kind of consciously avoided it, because I sort of felt that it was going to be

something that was going to take over my imagination and mind, and I was going to have to pay a lot

of attention to it. And so when I finally read this book, Pharmako-AI by K Allado-McDowell, which was co-written with GPT-3, and K, they are interested in shamanism and

ayahuasca and the future of humanity, and all these kind of very, very Bay Area topics,

all woven together in this bizarre braid. And I'm reading the book, and then I'm reading

GPT-3, and I can see the way it's kind of a collage, and then there's a statement that hits me.

And I slip into projecting, constructing an author or a sense of an author that is almost

immediately, the rug's pulled out from under it, and I'm left in this space of ambivalence,

but particularly about agency. And there's this sense of like an almost animist sense that there's

something going on here that's more than just pattern recognition and an algorithm choosing

the next best word. And you can intellectually layer that back on and go, okay, this is just a

machine, it's just operating, it's read the whole internet, it's just making a really good guess,

it just has that feel, and you're like, okay, but that's not at all what's happening kind of

emotionally or even spiritually in that response. And that's just one example. I think it's a

particularly concrete one of where do we locate the agency if we're really trying to stay in a

critical mindset. I mean, some people are just like, sure, I'm just talking to the machine,

no problem, I'm just talking to a chat machine, no big deal. Yeah, but if you're like trying to

deconstruct it, and at the same time recognizing its interactive dimension, well,

then we're in this kind of animist space where I'm not so sure if that doll in the corner is

actually animated or not. And that's a very classic site of the uncanny. So suddenly, there's an uncanniness in the midst of this, you know, highly commoditized, major, major

world changing machine that is, well, that's pretty weird. It's why I like a quote from you,

which is that weird things are quote anomalous, they deviate from the norms of informed expectation

and challenge established explanation, sometimes quite radically. And that felt very true to me

here on two levels. One is one you're getting at here, which is when you talk about the norms of

informed expectation, when we interact with anything that has a facility with language

and the ability to work in context that these chatbots do, we assume agency, our informed

expectation is there is something we would call a mind on the other side of that. And you can go

way too far with that and, you know, assume sentience and consciousness, I think you can go

not nearly far enough and just say, oh, this is an autocomplete and you're an idiot for getting fooled by it. But it's why I think trying to exist in a space of this is challenging, this is strange,

is helpful. But then the other, when you talk about weird things, challenge established explanations,

we don't have good explanations of what's going on in these systems. And so this world

where more and more might get turned over to them is a world where we might lose. And I think this

is actually one of the possible coming traumas that people are not quite paying good enough attention

to. We might lose a lot of legibility of our own societies. And you can say in certain areas of

science, we already have, right, we don't really understand quantum physics, that kind of thing.

But just our kids will have friends who they understand to be friends operating on their

phones. But we don't know why those friends, those inorganic, whatever they are, intelligences

operate the way they do. That loss of being able to explain the world around us at any level of

granularity, that's more profound than I think people are giving it credit for. And it deserves more reflection than I think people are giving it.

No, absolutely. I mean, I couldn't agree more. I remember the article I read 12 years ago,

where that shift became clear to me. Oh, now we're getting to the place where you can't explain

the outcome because of the complexity, because of the alienness of the operation, because of the

density of the data. I mean, I almost felt it like a kind of nausea, because it's really significant.

And most of us are not scientists. We are all used to living in a world where we don't understand

how our phone works. We trust that the guy who makes the phone knows how the phone works,

so I don't worry about it, or I trust that the scientists that I'm reading about know something.

And that kind of trust has obviously shifted more than we might have imagined, but we still kind

of operate in that zone so that such a radical shift in scientific production, technological

production, wouldn't hit us personally. I may not know how the phone works, but Bob knows how

the phone works. And then when you know that Bob doesn't know, and when Bob's like, I don't know,

like, we're just going to ride this thing, one of the things, and this is where we get back to

the weird, is that that is such a significant shift away from a kind of deep, modern archetype

of knowledge and power, because one of the things that I've been tracking is how, when people try

to talk about or articulate all these very complicated, unnerving and urgent issues that

we're facing now, when and where they grab for myth, when they look for words like summoning,

or the golem, or these sort of, those things are not insignificant. They might just be, oh,

well, we're just trying to illustrate or have a common cultural signifier for these processes.

And I'm like, well, yes and no. I mean, in a way, my whole work, my whole attitude towards

technology has always been about finding those mythic dimensions and then taking them seriously,

but not literally, but to see the way that they operate and what stories they tell.

And so I do think that we are in a situation where that zone of the weird, the Sorcerer's

Apprentice moment, where there's a shift in the power and the thing that we have created

moves outside of our direct control or even understanding, that has really profound implications that signify, to my mind, the beginning of the end of a certain kind of arc of human production and experience. We can think about it in terms of Enlightenment values. We can think about it in terms of the emergence of modern science. There's some way in which we're closing something.

And the consequences of that, imaginably, politically, they're going to be very,

very significant. And part of that has to do with this loss of Bob knows what's going on.

I don't think everybody listening is going to love this area because I've over time cultivated

an audience that likes things to be concrete. But I am always struck by the dissonance between the

technical illegibility and the mythical legibility of these systems. And in particular, it is the most

mythed and storied up area I have ever seen or covered. I mean, we have however many decades

of sci-fi, we have Ultron and HAL and Skynet and The Matrix and Asimov's Laws of Robotics and then

going backwards. We have fantasy and summoning and then people talk about the Golems and the

Sorcerer's Apprentice. And I've recommended it before, but Meghan O'Gieblyn's great book, God, Human, Animal, Machine, is sort of all about a lot of the Christian mythology operating in a sub rosa way here. The singularity is a very mythic concept, very eschatological, very similar

to things you'll see in raptures and so on. And there's this way in which we don't understand

these systems that well, and then we perhaps understand them all too well. I mean, you could

argue we're getting trapped in stories that maybe it doesn't net out that way at all. Maybe this stuff

tops out at a fairly low level, right? It's a pretty good chatbot. And we're not able to get to

these super intelligences, and we've led ourselves astray with however many years of imagining what

we could create. But it's something I've appreciated about your work because I noticed it just

traveling through this world, how mythed up it is, how much people are operating with stories

running in the back of their minds, both consciously and unconsciously. And those stories

are creating a lot of interpretive framework because they're standing in for things we actually

don't yet know how to interpret. Yeah, or might not be able to. There's always a place where we're

dealing with these changing human models and cognitions. And now we know that as the technologies,

not just in AI, but technologies get more and more powerful, we were like, what? How do we wrap

our heads around this? Well, we got all the science fiction lying around. Well, that makes sense. So

there is this problem about self-fulfilling prophecies, about getting caught by narratives

that then shape your view so much that you're not able to see other developments. So we absolutely

have to be aware of these things. And a lot of my work has been kind of like a two-step process

where on the one hand, I'm even more open to the mythological potentials, the speculative

possibilities, the wild dreamings than your standard sort of culture critic. And at the same time,

it's like, yes, and we must deconstruct and see what that story is kind of telling us because

we are in a place of kind of self-fulfilling prophecies. Well, that's particularly true for

this technology. I largely agree with people who say that no technology is truly neutral.

But compared to a semiconductor or a bandsaw, AI is trained on human language,

where all these myths and all these stories are in the training set. So when we ask it to basically

act like an AI, what it understands, again, these verbs are tricky here, but what it is able to

reflect back at us because of the way it is pattern matched across our language is the stories we

have written about how AIs interact and act in relationship to human beings. There is this funny

sense, particularly with large language models, where the more we have stories told about them, the more they are trained on those stories, the more we have created a thing that is our own

imagining of the thing that we have created. It's why I've always had a slightly different

take on the Kevin Roose Bing Sydney conversation that went very viral at The New York Times. I mean, it was very clear to me that whatever was powering Microsoft's chatbot,

it had read enough. It had been trained on enough data about rogue AI slipping the reins that it

knew how to answer the question that he was beginning to provoke it towards.

And that's, again, just very weird. This thing reflecting ourselves back at us and our stories

about it, it's a very non-neutral technology. Absolutely. No, that's a wonderful example.

I mean, one of the things that my work is motivated by is that there's better or worse myths

that you can bring to trying to understand things that have some kind of mythological dimension.

And in this case, I would say that the AI is, it doesn't have agency in the way that we keep

assuming that it does so that when people try to find its real motivation, it's such an easy

model to get into. And like you say, it's just simulating a character that is responding to

what it perceives as your question. So what you have then is this immense series of simulations

of characters based on stories. It's a Proteus. It's not an evil genius. That's what it's doing,

is that it's responding and reflecting and circulating. And indeed, one of the, I think

the greatest things about the technology from a humanist point of view is that it forces us to

think about all this stuff really seriously, really strongly. Like what does it mean to

have our concepts about reality fed back to us in this way? How do we trust stories? Are we made

of stories? You know, all of these kinds of chin stroking questions become a lot more pertinent

right now. I guess one part of it, I'd want to open up a little bit. And you can tell me if this

is what you're saying in a way or it's actually a totally different point. But I think of one of the central, I can't decide if I want to say blind spots or divides, in the way California, in the way Silicon Valley, thinks about technology as being about whether we create technology and then it is ours to control, or we create technology and then it acts back upon us. I mean,

you can say, look, it's just trained on our language. And so it is under our control, simply

just parroting us back at us. But the idea that it will stop there, right? The idea that that is somehow a way of putting it in a box, and then you can be like, okay, it's safely in the box. That's a pretty profound mistake. Twitter changes people, Facebook changes people, the

internet changes people. I mean, I've become a big Marshall McLuhanite in my middle age.

And this idea that these mediums and technologies always change the people using them, in some cases more than anybody realizes, is, I always feel, the single biggest misunderstanding among technologists, who you would think would know better.

Yeah, I'm surprised at that as well. I mean, I was a Marshall McLuhanite from the get go and

always very interested in precisely the ways that there were unexpected affordances to media

technologies, new technologies that shifted not just what we did or how we even imagine culture,

but who we are, how our brains are constructed, how our senses of the world are constructed,

and to have respect for the unknown outcome. And that's something that's always sort of terrified

me about contemporary technological development is the lack, whether you want to think about it

as hubris or a certain narrow minded belief that we can control the meanings and effects of these

technologies and the willingness to just throw really powerful things just into society in general,

just treat the whole thing as a Petri dish, as a competitive Petri dish too. And in some ways,

it's just inherent in the way in which we ended up expressing capitalism in terms of how the

competition operates. But it has often surprised me how technologists themselves don't necessarily

acknowledge it. The people that I've always known were always most interested in that.

Like they had that kind of McLuhan-esque view, which in a way is kind of an animist view. It's like

there's something in the object that has its own, it's got its own story to tell. It's going to

make a move and we're in a relationship. There's an interactive relationship with these things

that have effects that we cannot predict and we have to work it out over time and work it out

in a way together. But there's so little time for the degree of transformations that we're making

with these new technologies that it almost seems like the blinkered view that we can control

something as powerful as the large language models released into society at large,

it's like one of those delusions that people have that enable them to continue to function

in their job properly. But it's very hard for me to appreciate that.

So to be McLuhanites for a minute, to take his famous saying, the medium is the message: if the medium is the message, if the medium encodes certain ways of being and thinking that change

the people who use it, what do you think the message of the AI chatbot medium is? Which it's

worth noting is a medium being built on top of a technology. Chatbotting is just one of many,

many applications. And the fact that that's one taking off is going to also shape the

technology differently than it might otherwise be shaped. But yeah, what's the message of the medium?

Wow, that is an extraordinary, extraordinary question. I must admit, I'm spinning a bit here

because there's so much going on. Let me try one on you, which is that something

that I think is very present in the way people are thinking about AI is the idea that the output

is what matters, and that the sort of work of knowledge, of creation, is this kind of: you run a search on the information in your head, and then you, or maybe now, to make your life easier and quicker, the AI, spits out the output, kind of condenses it down and more or less predicts what you need. And there's much more mystery in that process, I think, if you pay attention to yourself as a human being. So one thing I am skeptical of is whether AI is actually going to make people better at things as much as they think it will. For instance, the work of writing. Ted Chiang, the sci-fi writer, has made this point: the work of writing a first draft is not just a waste

of time on your way to a good fourth draft. It is often an intellectual space in which you realize

you shouldn't be writing that draft at all, in which you realize actually you should be doing a

totally different piece, in which you realize something that you had never thought of and

that isn't within the training set for that draft is actually relevant here. And it's that

kind of mysterious intuitive sense that leads to the creation of great work, also often just

decent work. The number of times that I've been driving around in the car and come up with the

column idea that turns out to be exactly what I needed, but very different than what I had when

I got in the car is many. And that idea that we can just outsource that, I think it's a way of

thinking of ourselves as computers, as opposed to the more slightly mysterious creatures we are.

But by applying that analogy then onto the computer and kind of suggesting that people

can be so much better off if they have the AI write their draft for them. I mean, that to me is a

way we could then change in that direction. If people actually begin doing that and stop doing

that work themselves, some things will be gained, it will be quicker to summarize a bunch of data

into an output. But it's also making yourself more like an AI, and less like a human being.

Yeah, it's remarkable the way in which there's an invitation to let go of a certain space of

the unknown, the mysterious, the novel, the unpredictable in our own minds. And perhaps

one scenario is that it becomes clear that these things are insufficient, and so we become even

more aware and we honor that aspect of ourselves even more. But there's also the possibility that

it wasn't necessary all along. And it is easy to imagine a situation where we become used to

offloading more and more and more decisions, and thereby accepting about ourselves that we, too, are predictable machines. So I think maybe part of that message has to do with prediction

and pattern. And what is in us that is not predictable, that is not pattern. And then can

we isolate that? Can we put our finger on it? But isn't that very putting the finger on it

part of the loop? Where do we put it? And where do we use that as a way of saying enough? Or like,

no, I'm not going to do this. No, I'm not going to turn that over. Even now, I'm very aware of, you know, what is it, like 80 percent of things people watch on Netflix are based on the recommendation engines? Like, oh, that's a lot. You know, I grew up, my whole world was like cultural recommendations,

opinions, how people constitute themselves through their taste, through turning people on,

all these forms of sociality, like this whole world is already kind of passing as we just feed

ourselves. And we give up that kind of human negotiation of decision of taste of options.

So we can already see that happening. And my again, my hope is that it just makes it more

obvious, let's say in things like writing or poetry, where we have a revenge of the humanities,

in the sense that those things that humanities are pointing to, that's precisely what eludes

the repetition, at least, hopefully. You quote a 2017 blog post by Sam Altman, who's the CEO of OpenAI. And I want to read a bit of it. It begins, quote, a popular topic in Silicon Valley is talking about what year humans and machines will merge. Or, if not, what year humans will get surpassed by rapidly improving AI or a genetically enhanced species. Most guesses seem to be between 2025 and 2075.

And then he goes on to say quote, although the merge has already begun, it's going to get a lot

weirder. There's that word. We will be the first species ever to design our own descendants. My

guess is that we can either be the biological bootloader for digital intelligence, and then

fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.

So one distinctive thing about this culture, and you hear it in that Sam Altman post, is it's

a bunch of people building something that they think has a non trivial chance of wiping out or

otherwise displacing humanity. And you might say, that's a weird thing to build. I probably would

not build something that I thought had a somewhere between 10 and 30% chance of upending the species.

So you've done a lot of writing. One of your early books was called TechGnosis, about the Gnostic mindset. And I think that's one way of understanding what is happening here. So can you talk briefly about the Gnostic quest for knowledge, like what that was and how it might be a helpful interpretive framework here? Yeah, absolutely. I mean, again, it's important to emphasize that I'm

using this as a model for thinking, a pattern, an archetype, if you will, rather than something

specific about second century curious Christians. But the idea boiled down to its sort of mythic core,

the idea of Gnosis is that there's an order of knowledge that is transcendent to the world that

we live in. And that often the world that we live in is seen as a mistake, or a trap even,

perhaps even constructed by a lower or evil deity. So the idea shifts, if you imagine the

Christian story is sin and redemption. The Gnostic story is ignorance and awakening or

ignorance and knowledge, kind of higher knowledge. We got to get out of here. And the only way out

is up. And one of the points that I made in TechGnosis is, you know, not unlike O'Gieblyn's book

where there are these religious structures and metaphors that recur with such intensity

that we have to look at them as such, we can't just say, oh, it's just sort of, oh, it kind of

resembles this thing. Who cares that was then, this is now. And the Gnostic flavor is a kind of

denigration of matter, of the conventional reality of our bodies, and a willingness to put all your

cards in some higher order. So the exuberant embrace of the idea of uploading ourselves into

computers, which starts becoming a kind of a cultural point of Singularitarian ideas in the

1990s or 80s, even though you can trace the idea back farther. That's a good sign of that kind of

attitude and that sort of willingness to disidentify with our material conditions

and a kind of transcendent mode. So, and then what you can imagine happening is rather than having

it be some kind of spiritual transcendence is that it gets mutated, if you will, into a technological

possibility on the forward timeline. So rather than having it be something that I can transcend now

through various esoteric practices, instead it's something inhering in the technological development that is going to produce a moment in the future of something like transcendence.

I think it's really important to acknowledge the similarities because I think it's really

important at this stage in the game to intensify and deepen our sense of what the human is by embracing

the whole course of what we've come through. Whereas the desire you hear in a lot of these

voices is all that is just junk. It's all BS. We're at this one inflection point of evolution

and we either jump on it or we don't. And I'm like, I don't know, maybe we have unleashed an

apocalyptic situation, but the very least, we got to take it all on board. It's the whole story.

All of the human experience is sort of demanded at these kinds of points. And again, it might not

work out. It might be in 10 years we look back at this and go, oh boy, we were huffing the glue.

I don't think so, but it's possible. But that still is not, it doesn't undermine what I'm saying,

because it's about in a way embodying the density and resonances of human beings and all of our

relations at a point when the fundamental question of the human is raised in our face,

culturally wide in a big way. Let me get to the huffing glue question. I think there's one version where the systems simply top out, the technology ends up fine. I think there's another one, though, which you're getting at in one of your recent pieces on the potential banality of all this,

which is you're dealing with models trained on what we've already said and thought and done,

that is, in a protean, genre-imitating way, mimicking it back at us. And so very far from making the

future unbelievably different than the past, what it will do is make the future more like the past.

It will be a boundary on human creativity and change and transformation. You write, quote, far from swerving away from a norm, these systems make the future by conservatively iterating the past.

Even the apparent creativity of large language models relies on the novel shuffling of a

gargantuan deck of cards that already exists. So I think that's another way of thinking about

what might fail here, that instead of being an opening to something totally different,

completely unpredictable, it's actually a narrowing to the completely predictable,

literally built on prediction engines.

Yeah, absolutely. And is there something else going on in our own experience? It's so clear

that we are not just predicting, that we're not just looping, that there is a space of novelty

and of potential creativity, of wrestling with possibilities, that is so intrinsic to how we

operate, that it's very difficult to imagine collapsing that and leading towards something

productive and something interesting. And what does that look like on a culture-wide basis as we

get used to enjoying cultural products that are produced by large language models

and the way in which they recirculate? Because you can cynically say, well, that's already

kind of happening. You look at popular music. What is popular music? Is it like incredible

acts of generative novelty? No, it's more like mixes and matches of things and a little bit of

action thrown in, a little bit of shifting here. So it's possible that reshuffling the

deck over and over again, the deck is big enough, there's still going to be enough novelty to

entertain us, let's say. But it's hard to sort of square that with any more expanded view of what

cultural products do for us or cultural works, great literature, great movies, whatever, that

it's hard to see it simply as an iterative process, that there's some other dimension to

those products. And are we actually getting to a place where we start to recognize that less and

less, that it's sufficient to simply be entertained by the reshuffled deck? Or is it just going to

be clear that there is this kind of difference that we're losing? Well, this gets to a third layer of weirdness that I think you get at in this piece, which I found really beautiful, which is, I would call it, the turning around of the question, right? This idea that maybe it's not AI that's weird at all, maybe AI is banal, or if not predictable, a thin training on human data, and that what's weird is human beings. And that the real thing that's going to

get highlighted here, ultimately, is not the weirdness of the chatbot, but the weirdness of

the person on the other end. And that as AI colonizes some of the thin ways we've come to value

ourselves, I think, particularly through kind of productivity, that it's going to open space for

more appreciation of the strangeness of human beings. Yeah, I hope that that's going to happen.

And I think that a lot of the work to make that case is already happening, that the stakes are

high enough that everybody's playing their big game, which means that if you are on team human,

as Douglas Rushkoff puts it, you've got to play the full hand. Now is the time to make the case

and also to recognize inside yourself who you are, what you are, where does efficiency stop as a

value in your life and something else take off? Can we articulate those values, whether they're

interpersonal, whether they have to do with nature, whether they have to do with how we relate with

our own selves, with our higher potential, with death, all of these sort of elements that are

clearly kind of weird from a machine point of view. Look how these guys are behaving.

Those things will become more visible. And I think even the discourse around it is starting to raise these questions in really interesting ways, which is itself indicating something.

There was a piece by Jaron Lanier, who's, I would call him, a techno-humanist philosopher, but he was one of the originators of the term virtual reality, and a brilliant guy, and I love his work.

But he just wrote this piece in The New Yorker. And one of the things he says right at the beginning

is that he wishes we would stop calling it artificial intelligence. What it is is not

intelligence. It's this kind of social layer of human knowledge working through a technology,

and that's paraphrasing his argument somewhat. But this idea, and I've heard it from a lot of people

that I wish we wouldn't call this intelligence. I think I might write a piece about this, but

I think this is getting at something deeper, but that is a little backwards. I think these

things are clearly, whatever they are, intelligent. I mean, they're working with information in a

problem-solving way. It might not be conscious, it might not be sentient. And so I've been thinking,

why are we so scared of giving up the term intelligent? Why are we so afraid that something

else might get called intelligent? And I think it has to do with how much we have made that

the dominant way we value humanity, particularly in a secular dimension. I mean, why is it okay

that we treat cows and chickens and the natural world and other creatures the way we do? Well,

we're smarter, I guess. We're smarter. That's got to be it, right? Unless you have some kind of

version of the soul, and it has to be that we're intelligent. And so then if you give that up,

if you believe these things are intelligent, and maybe they're going to be more intelligent on

certain dimensions than we are, then you've lost something really profound. But if that's not how

you value humanity, if you don't think the worth of a human being is their intelligence,

which on some level, we obviously clearly don't. I mean, we think children are wonderful,

not just because they might become smart one day, but because they're wonderful, like the way

they experience a world is delightful. There is something about how much we have dehumanized

ourselves that I think is getting laid very bare in AI discourse. If we have such a thin

ranking of our own virtues and values, that these programs can destabilize it so easily,

given how limited they are, and probably will be for some time, I think it's getting at something

that is a little bit more discomfiting, which is that we have valued human beings very poorly,

and it would take a lot culturally and maybe call a lot that we have done into question to value

ourselves and other creatures in the world differently. But if we don't, then we actually

have no defense against at least a psychic trauma of this thing we're creating, which is sort of

aimed right at our own definition of intelligence. Yeah, my reaction to that is to immediately think

about animals because it's not coincidental, perhaps. It's sort of very interesting that just

at the point where we're wrestling with this question of machine intelligence and whether we

can call it intelligence or not, we are just getting more and more proof that our definitions

of human difference don't stand up to the realities that animals live in, to what they are.

And the different reactions that that brings up, and I'm thinking in terms of animals, is that

exciting? Are we happy to welcome a much wider sense of cognitive potential and to

willingly step down from the throne? Is it threatening because of all of the moral issues

that it raises, particularly in terms of how we treat animals and the horrific extinction rate

that we face on the planet? So in a way, that's already playing with this issue.

And in a world where AI was driven by a different value set than five corporations battling it out,

almost like archons in Gnostic myths completely unconnected to individual human value in some

way, then I can imagine a sort of relationship with machines where we sort of play with these

edges of like, well, how intelligent are you? Maybe it is a kind of agent. Maybe there is a

reason to honor its decision-making possibility. Maybe it even has rights. How do we start thinking

about rights, the rights of robots, et cetera, et cetera. All of that kind of stuff makes sense

because we actually are at the limits of the human in a weird way. It's like we're just immersed

in this set of transformations, climate change, the shift of the human definition,

intense hypermediation, which is digitally intensified. And we're at this limit of like,

how do we define ourselves? And in a way, I think it's, if you have the time and the willingness,

it forces a kind of existential reckoning that again, my hope is that we will see more and more

engagement with this problem and hopefully some kind of revaluation of what it is that we do do,

which has more to do with children, with play, with wonder, with exuberant celebration, and with

existential reckoning with the conditions that we're in, to really, there's not much we can do

about a lot of the things that we face right now. We want to be able to change that perhaps,

but how do we reckon with our own kind of limitations? How do we honor ourselves in

relationship to these? These kinds of questions, I think, are being forced in a way. So if there's

enough time and if there's enough space, there's a potential to revalue. But a lot of my fears come

in with just the sheer speed and the onslaught and the fact that everyone is profoundly anxious.

Well, there's a potential to revalue, but there's also the, I think there's a lot of

barely submerged guilt and shame and a truly vicious judgment humans are making on ourselves

in a lot of this conversation. I mean, I think some of the fear is that if you created something

smarter than we are, it would treat us the way we treated everything else.

But how remarkable is the intelligence of an octopus? And how often do we eat it in pasta sauce or sushi, to say nothing of a cow? And we've been better and worse

about other human beings. I mean, we have done terrible things to people we think are less

smart or capable than we are. And even now, you don't have to go, I think, far back, but

the kind of life we will leave someone to in modern kind of capitalist society and other

societies, if they are not analytically sharp and not hardworking and not enmeshed in some other

kind of human network that will save them, we're pretty brutal and have been for a very

long time. And so I always think that that's one of the hard things to face up to in this

conversation that I think if we weren't that way, we might not worry that anything we would create

would be that way too. But we know what we've done. And we wouldn't want to be on the other side of

it. And we're still doing it. And I don't think that's all that's going on here, but I don't

think it's an irrelevancy. I think the shadow of the life we lead, I think we know the cost.

Yeah, yeah. No, that would make sense. The chickens coming home to roost, to use an animal metaphor.

Yeah, I think it's very easy to be at a point where you look at the whole course of what we've

done and feel finished in a way. My mom said once, she goes, yeah, human beings, we had our run. And it was this weird kind of defeatism, not like Sam's there,

but something about the inability to reckon with the consequences of everything that we've done.

And that part of the frozenness I think we feel sometimes in terms of our own agency

is just that we're so aware of the consequences and even aware of the limits of what we know

in our own communities. And that inevitably affects how we imagine other forms of intelligence. I want to end on a quote you end the piece on, which is that as machines colonize the human, in other words, a more fundamental mystery may leak through the weirdness: that sentient beings are, and have always been, luminous cracks in an order of things no longer ordered. Tell me

about that. Well, no, earlier, we were talking about this question of intelligence. Can we call

them intelligent or not? I don't have trouble calling machines intelligent, because I have a model of the human being, of human consciousness, that's multi-layered. So intelligence, I can even imagine

is kind of a rational process. And I can imagine how a machine would do a rational process.

But I also believe or have experienced or have faith that there are other dimensions that don't

work along those lines and that we're kind of poly creatures in a way. And that that manifests

in writing about difficult things. I mean, this stuff is, I find very difficult to write about

and difficult to speak about, because there's so much on the table, so many different dimensions.

So one of the great things that writing provides, and in a way that I don't think

that the LLMs are going to get to anytime soon, is a way to sort of answer its own

question or gesture towards the space of an answer without filling it up with reasons.

And so that is the crack. It's like a crack in the machine. It's like a glitch in the matrix,

but not just a technological product, but more like the Leonard Cohen line that everybody quotes, about the crack in everything, that's how the light gets in. Well, yeah, actually, there's

a limit to all of our systems. All the systems fail. There's noise on the line. But that noise

isn't just a technical effect or an obstruction or entropy. It's actually an opening to something

else. And that's just a kind of constitutional gesture I have towards the open, towards the

beyond, towards what's outside of the known. But I think in this particular case, it's really,

really important to underscore it and recall it and gesture towards it again, because I think

that we have a chance for it to become more apparent now in whatever way that that sort of manifests.

I think that that brings up a big question of your work to me, which is an important framework

and a dangerous one, which is how do you keep yourself open to the weird, open to strangeness

without tumbling off the cliff into what you might call the woo?

How do you... I think to me, the classic version of where this goes bad is quantum physics is

weird and we don't understand it. And therefore, you get a lot of what some people call quantum woo, which is this, well, because quantum physics is weird, the ultimate nature of reality is kind

of whatever I want it to be anyway, right? This sort of detachment. When you know that what we can empirically prove and know is not the whole of reality, I think it'd be easy to lose any sense of skepticism and just get tugged around. I mean, as I think the dark sides of

California do to weirdness untethered in a way that does not move anything forward. So how do

you balance that? Yeah, I don't know if my answer is very interesting because it's really just the

way that I've been constituted. My father and stepfather were both engineers. A lot of my friends

are engineers. I've always just respected and understood science in a certain way, even though

I'm kind of outside of it. I didn't study that much of it directly in school, but I've spent my

life reading it and understanding rational ways of understanding the world. And I was always then

curious about how I, and this goes back to California, growing up in Southern California,

the late 70s and early 1980s, sort of at the tail end of the counterculture surrounded by the spent

fuel rockets of that whole experience. I was introduced at a young age to experiences, unusual

experiences, altered states, different practices, different ideas that came to me not as fantasy

novels or as delusions, but as interesting ways that reality can manifest itself. And so I've

always kind of kept both of those tugs going on in my mind. And I'm very interested in, I think, any kind of alternate view you have, any sort of woo call you hear; it's a legitimate call. Let's see, let's go: what is it like to inhabit that? What's it like to experience that? But please,

at some point in the game, you have to take that construct and kind of dip it in an acid bath

of skepticism and see what remains. It might not be much, but that's okay too, because you're just

part of the iteration. So it's like you're just moving forward in this kind of openness. And

in a way, it's just provided, it's just always the way that I've been functioning in these realms.

And I think it's a good one, because I think it's really important to be open to the possibilities

of experience beyond your knowledge and the possibilities that things are operating very

differently than you can imagine. And at the same time to respect the understanding that we have

as a species, as knowledge holders, as members of a scientific society, as well as people who have

inherited a kind of skeptical operation that creates a space of freedom. I mean, that's the

thing we forget. It's like skepticism can seem like it's just deconstructing or saying no, like,

no, no, that's bullshit. No, no, no, that's not it. No, no, no, that's not it. When actually,

it's a gesture of freedom, of emancipation from delusion, from limited thinking. And so to keep

that play of freedom going in both sides, both in the exploration and then also in the conception

of what's actually going on. I think that is a good place to end. Always our final question.

What are three books you'd recommend to the audience? Yeah, absolutely. You know, I'm just

going to repeat your suggestion, because I just finished Meghan O'Gieblyn's book, God, Human, Animal, Machine. And I just loved it. It just felt like a new friend, you know, and there's so many good things to say about it. But I will focus on her reading of Calvinism in American history, particularly in relationship to technology, which was absolutely brilliant. And to me,

that's the secret. People always talk about America in terms of Puritans. And then they get involved

in the kind of moral dimension of Puritanism and the kind of city on the hill and all that kind of

stuff. It's actually Calvinism and predestination and the preterite and the saved. That's really the

kind of like weird Christian programming that I think we have to reckon with as Americans.

Thomas Pynchon saw this and she just, you know, ran with it really well. But even more than that

was the way in which she wrote as a model for how to think about all these big issues. It's just

this kind of thing. I mean, that's an example of what we were talking about, that AI and these new

concerns bring up all of these questions about philosophy and religion and who we are and our

own experience and the way she wove those things together, invited us in, didn't beat us over the

head. But then it was very, very smart, very accurate. I thought it was a great model for the

kinds of conversations that we need to be having, that we are having and an affirmation of that.

My second book is the new book by Mike Jay, who's kind of our best drug historian. And it's called

Psychonauts: Drugs and the Making of the Modern Mind. And Jay's been writing for years and years, and in a way, this book is kind of like a medley, where he's taken a lot of the earlier works that he did looking at the history of drug taking in modernity and sort of woven them all together into this

remarkable story because that's one of the features of what's going on now that we didn't talk about

that I spent a lot of time thinking and writing about, which is the psychedelic renaissance,

so to speak, and the radical transformation of the possibility of drug taking as part of our

modern condition. I mean, it's really remarkable and he just does a marvelous job of showing

from artists to scientists to seekers how our relationship to psychoactive drugs, that we can take a material that then produces shifts in consciousness, what do we do with these experiences, how do we manage it, how do we represent ourselves, how do we write ourselves through it, is really, really at the core of modernity in a way that we don't often acknowledge, because we

keep pushing it to the side of drug abuse or crazy people or whatever we do.

And then my third is I've heard guests pull this move before, so I'm going to offer a podcast

instead of a book and that would be appropriately Weird Studies. Weird Studies is the work of two

very smart, very playful, very wonderful spirits, J. F. Martel and Phil Ford.

And they've just done a remarkable job. It's about 150 episodes of just looking, mostly at the literature and the cultural artifacts and the art associated with the weird, broadly understood, but their rapport, their range of, again, critical understandings,

philosophical influences, but also an openness of heart and mind and spirit to possibility.

They also model a way of moving through this territory, which in a way is kind of what interests me: not just, here's this territory, but now the job is how do you model how you move through this territory. And they do a remarkable job of reminding us of how

deep and dense and rich these currents are in modern culture, but also how much

meaning and insight can be had from thinking about them seriously.

Erik Davis, thank you very much. Thank you.

This episode of “The Ezra Klein Show” is produced by Annie Galvin. Our show is also made by Emefa Agawu, Roge Karma, Jeff Geld and Kristin Lin. Fact-checking by Michelle Harris, mixing by Efim Shapiro, original music by Isaac Jones, audience strategy by Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser, and special thanks to Sonia Herrero and Kristina Samulewski.

Machine-generated transcript that may contain inaccuracies.

In recent months, we’ve witnessed the rise of chatbots that can pass law and business school exams, artificial companions who’ve become best friends and lovers and music generators that produce remarkably humanlike songs. It’s hard to know how to process it all. But if there’s one thing that’s certain, it’s this: The future — shaped by technologies like artificial intelligence — is going to be profoundly weird. It’s going to look, feel and function differently from the world we have grown to recognize.

How do we learn to navigate — even embrace — the weirdness of the world we’re entering into?

Erik Davis is the author of the books “High Weirdness: Drugs, Esoterica and Visionary Experience in the Seventies” and “TechGnosis: Myth, Magic and Mysticism in the Age of Information” and writes the newsletter “Burning Shore.” For Davis, “weirdness” isn’t just a quality of things that don’t make sense to us, it’s an interpretive framework that helps us better understand the cultures and technologies that will shape our wondrous, wild future.

We discuss how Silicon Valley’s particularly weird culture has altered the trajectory of A.I. development, why programs like ChatGPT can profoundly unsettle our sense of reality and our own humanity, how the behaviors of A.I. systems reveal far more about humanity than we like to admit, why we might be in a “sorcerer’s apprentice moment” for artificial intelligence, why we often turn to myth and science fiction to explain technologies whose implications we don’t yet grasp, why A.I. developers are willing to keep designing technologies that they think may destroy humanity and more.

This episode contains strong language.

Mentioned:
Pharmako-AI by K Allado-McDowell

“AI EEEEEEE!!!” by Erik Davis

“The Merge” by Sam Altman

“The Weird and the Banal” by Erik Davis

“There Is No A.I.” by Jaron Lanier

Book Recommendations:

God, Human, Animal, Machine by Meghan O’Gieblyn

Psychonauts by Mike Jay

Weird Studies (podcast)

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at .

This episode of “The Ezra Klein Show” is produced by Annie Galvin. Fact-checking by Michelle Harris. Mixing by Efim Shapiro. The show’s production team is Emefa Agawu, Annie Galvin, Jeff Geld, Roge Karma and Kristin Lin. Original music by Isaac Jones. Audience strategy by Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero and Kristina Samulewski.