My First Million: Emmett Shear: Life After Twitch, Jeff Bezos Lessons & AI Doomsday Odds

Hubspot Podcast Network · 9/11/23 · 1h 18m · Transcript

Is AI going to kill us all?

Uh, maybe.

And Emmett Shear is the CEO of Twitch.

It was acquired by Amazon in 2014 and joins us now.

I started Twitch to help people watch other people play video games on the internet.

The creator and co-founder of Twitch.

Watch other people play video games.

Who knew?

Emmett knew.

I guess that's the answer.

What types of ideas are you noticing or standing out to you that are interesting?

For the first time in maybe five or seven years, it feels like credibly trying to start a consumer internet company, like the ones I was so excited to start in 2007, is potentially a good idea.

And that's because of AI.

You mentioned AI might become so intelligent.

It kills us all.

This podcast is really growing.

I don't want the world to end.

I think it's going to be okay.

But the downside is so bad. It's really probably worse than nuclear war.

That's a really bad downside.

I think of it as a range of uncertainty.

And I would say that the true probability, I believe, is somewhere between...

All right.

What you're about to hear is a conversation I had with Emmett Shear.

Emmett was the creator and co-founder of Twitch.

If you don't know about Twitch, you're living under a rock.

It's like one of the most, I don't know, five most popular websites in the States right now.

It is a place where you can go to watch other people play video games of all things.

Watch other people play video games.

Who knew Emmett knew?

I guess that's the answer.

So he was the creator, co-founder of that and built it up.

It's a multi-billion dollar company. They sold to Amazon many years ago, seven or eight years ago, for about a billion dollars, and it has grown many times since then.

He finally retired after 17 years of the journey.

I got to know Emmett because he bought my previous company.

So we got acquired by Twitch.

Emmett was like my, quote unquote, boss for my time when I was at Twitch.

So I got to see this guy firsthand.

He's the real deal.

And I've been wanting to get him on the podcast since those early days when I first met him.

I was like, this guy is great.

We talked about a bunch of things.

So we talked about some ideas of how he would use AI if he was going to create another company.

I think he's done. He's retired now from that game of operating a company.

But if he was going to do it, this is what he would do.

So we talked about AI ideas.

We talked about why he thinks AI might kill us all, might be the big doom scenario,

which is interesting because he's not just a guy who's going to go cry wolf.

He's not a pessimist.

He's not just a journalist who hates tech.

He's a techno optimist.

This is a guy who believes in tech and is a very, very intelligent guy.

And he sees a probability. He gave us a percentage: the probability he thinks of the doomsday scenario, why he thinks that could be the case, and what we should do about it.

So we talked about AI.

We talked about some of the frameworks that he has for building companies.

We didn't talk too much about like the origin of Twitch.

I feel like he's done that a bunch of times.

So we kind of stayed away from that.

But it was a wide ranging conversation.

And for those who are watching this on YouTube, I apologize.

The studio that we booked in San Francisco, they screwed up the video.

So we don't have video for YouTube.

We just have the audio only version.

So you'll see our profile pictures.

My bad.

Sorry about that.

You know, got to pick a better place.

Got to pick a better studio, I guess.

But anyways, enjoy this episode with Emmett Shear.

Somebody said, creativity is not like a faucet.

You can't just turn it on.

I think actually, if you polled like a hundred people,

most people, yeah, of course, creativity is the sacred special thing that only happens

if you've meditated in the morning and the room is perfectly right.

And you've had your L-theanine in your coffee or whatever.

And you were like, no, for me, it is like a faucet. Watch.

And you know, I can just write and just keep generating more ideas.

Yeah.

I love that for two reasons.

One, I love that you will just be like, no, actually this.

That's like a consistent thing I've seen you do.

And the second is, I think that's very true about you.

And I wonder, is that practiced or is that innate?

Like if I, if there's a researcher studying you when you were like 10 years old, right,

do you think they would have been like, oh, this person's different in these ways?

What would have seemed different or special about you at the time?

If there was a nature-nurture break on this, it happened very early, because

by the time I was 10, you would definitely notice the same thing.

I'm not really that different.

I would be much less effective.

But like as a 10 year old, I already had that same experience.

But you were different than other 10 year olds.

Yeah, other 10 year olds.

Well, I would actually say I was less different than you'd think. Most people, actually most children, have this experience already.

I think most 10 year olds and definitely most five year olds

are capable of generating ideas for what to do about something or to like play pretend.

Almost indefinitely, they don't run out of ideas.

As you get older, what you learn to do is stomp down the ideas that are bad, and to not say dumb things.

But the more pressure you put on yourself not to say dumb things, the more your inner idea generator gets disrupted.

And I say a lot of dumb things.

Like when I'm generating ideas, I may not put weight down on them, but most of the ideas will

be bad.

They'll have something obviously wrong with them.

And they give you this advice. If you go to someone who teaches you to brainstorm, it's like, no bad ideas here,

that's obviously not true.

There's lots of bad ideas.

Most of your ideas are bad.

The actual advice is like, don't stop at the bad ideas.

Yes, what you're trying to do is disable that censor that most people have installed, the one that goes: no, bad, no, bad, don't be stupid, don't be stupid.

And I think I was like mal-socialized.

It never occurred to me to have that.

Like I never got the censor installed.

And why that is the case, I'm not sure.

But I actually, I think I'm the one who is unchanged in some sense.

I'm a little more childlike in that way.

And everyone else is the weird one. Like, why, how did you wind up so damaged by your life that your inner wellspring of creativity has been crushed?

And I think that process is actually very simple. This process plays out with all kinds of things in people's minds.

You start from some capability, something you can do, some behavior.

And if, when you do that behavior or you try that thing, you receive negative feedback, which can be external or, I actually think even more often, internal, like: oh, I screwed up. Oh, it's bad. Oh, disappointment.

You learn not to do that thing pretty rapidly.

And so that leads you to doing it less, which means you're less skillful at it,

which tends to lead you to doing it less, which that cycle ends in you being very bad at something.

Like, "I'm bad at math." No, you're not.

Everyone can do the kind of math you're talking about. When people say "I'm bad at math," they don't mean "I'm bad at abstract algebra proofs."

They mean "I can't do arithmetic, or basic algebra." And that's just imaginary.

Everyone can do that. It's easy.

They got stuck in one of these spirals.

And getting out of one can be very hard.

And I guess I think that's what happens to people's creativity.

I don't know. I didn't go through the process myself.

And so now as I'm saying this out loud, actually, the idea that comes up for me is like, oh, well, maybe what it is, is that I had better ideas.

Like, I got the reward loop, or I had an environment that was unusually positive and reinforcing for me having ideas.

And so I would have ideas and it would go well.

That would lead me to having more ideas and more practice at having ideas,

which would go well.

And then you wind up just never breaking that loop.

I have a trainer who comes over to my house.

He always says this thing to me because like my kids will come down during the session.

I'm always like, oh, sorry, obviously it's annoying, my two year old is down here almost getting hurt on all the weights.

And that's probably like not what you want in your session.

So I'm always like, oh, sorry, sorry, sorry.

And he's just like, dude, no.

And he's like, kids and dogs.

I go, what?

He goes, I love to be around kids and dogs.

They got it right.

They know life.

He's like a dog is like unconditional love, happy, playful, you know, super loyal.

He's like, what's not to learn from a dog?

I want to learn everything I can from a dog or kids.

He's like, look what she's doing.

She just made up a game on this thing.

Like we're here trying to do a serious workout.

She made this her play place.

She can't wait to come down here.

He's like, I wish all my clients couldn't wait to come down to the gym.

And I was like, damn, this guy's right.

And one of the things I like is figuring out people's

isms, their philosophies.

And you're like, oh, I thought of one on the way here.

Explain what it was.

It was, have you tried just solving the problem?

What does that mean?

So there's a meme on the internet. I think it started with weird sun Twitter, which is like, "have you tried solving the problem by..." and then an infinite list of possible endings.

The tweet is always, have you tried solving the problem by ignoring the problem? Have you tried solving the problem by spending more money on it?

And one of my favorite ones of those, which has become almost like a life motto, is: have you tried solving the problem by solving the problem?

And that sounds dumb, right?

Like that sounds, it's one of those Zen koan pieces of advice that when you first hear it

is like, are you serious?

Like that's the advice is solve the problem by solving the problem.

But what you notice when you try to help people with problems a lot

is oftentimes people will have a problem.

It would be really obvious what the problem is.

And they'll come to you for advice for like, well,

how can I deal with the consequences of this problem?

Or how can I avoid needing to solve this problem?

Or how can I get someone else to solve this problem?

Or how have other people solved the problem in the past, which is closer to the right answer, or what could be the right answer?

And the point of the saying is to remind you that sometimes the way to solve the problem is just like

just to actually try solving the problem.

Like don't deal with the symptoms.

Don't accept the symptoms.

Don't find a hack around it.

Like the problem is the website is not fast enough.

And instead of like trying to figure out how we can make a loading spinner

that distracts people from that fact.

What if we just made it so fast that you don't need a loading spinner?

It's interesting, because that's very good advice when the problem actually is solvable. I mean, where people are flinching away from it because something about it is aversive, even though the problem isn't really unsolvable.

Like they, if they worked on it for six months, it would go away and it's worth solving.

Whereas there are these problems where like you're trying to make a perpetual motion machine.

You're trying to do something that is actually too hard, and there, solving the problem by solving the problem is a huge mistake. You should actually stop trying to solve the problem.

And you should be looking for a hack around needing to solve the problem.

You should be looking to live with it more effectively.

But I find, on balance, at least with most people I talk to and help, most people I know, maybe it's people in tech, they love the hack. They're always looking for the easy, fast solution that cuts around needing to solve the problem.

And it's very helpful.

It's the most often helpful form of that advice in my opinion.

It's like bringing people back to just solving the problem.

I find that the advice I like the most or the

sayings that resonate with me the most are the ones, it's like, you spot it, you got it.

It's like, if it's the advice I needed, that's why it resonates with me.

That's why I like giving it out because like I personally experienced it.

Have you personally experienced that?

Or what's an example where you remember trying to do everything but solve the problem.

And then you finally realize, shit, I should have just solved the problem.

It's an interesting question.

Wait, what is it?

You spot it, you got it.

It's like noticing it's half the battle.

It's sort of the smart person version of "whoever smelt it dealt it."

You only notice this in other people because you've seen it in yourself too.

Otherwise, you wouldn't be as observant of it.

My version of this is we give the advice we need to hear.

Which is the same basic idea.

It's actually not always true.

That's one of those really good heuristics where like, sure, half the time when you

give advice, it won't actually be for you, but half the time it is.

And noticing it is so powerful that you should just check every piece of advice you give

is this advice I need to hear right now?

When it comes to the like, have you tried actually solving the problem?

I think I'm pretty good at that in general.

I think that I often give it to myself in a more meta sense.

Like it's advice I often need in a more meta sense of like,

when I'm confronted with like a thing that needs to be programmed,

I will often go just program the thing.

But I have a tendency to look for ways that I can solve the problem, not ways that the problem can be solved.

And for me, this almost always is like,

why don't I ask somebody else for help?

And I just like, it doesn't even occur to me to go do that.

I'm just, I'll just indefinitely dig, try to go solve the problem myself.

I'm not really trying to solve the problem.

I'm trying to solve the problem while avoiding having to ask anyone else for help.

Which is like not, I'm not really trying to solve the problem.

But actually, you know, weirdly, I think this is one of those things where

it's almost like the creativity thing.

It was a shock for me to realize other people don't do that.

You self-actualized on that one.

What's a piece of good advice that you're bad at taking?

Oh, that's an excellent one.

I think the big one there is like, you know, listen more.

Like I've been given this advice so much at YC,

and it's 100% something that I need to get better at,

which is like, you go into the user interview,

and you have all these ideas and thoughts,

and you need to not be surfacing those.

You need to actually be focused on,

you have to move your attention to them,

and really be interested and care about what they have to say.

And your opinions and what you think is true are irrelevant.

And I am much better at that than I used to be.

And I also, it's one of those things like being reminded like,

let's just chill out for a second and listen.

It's almost always good advice for me, and it's advice I give fairly often, but it's hard to take myself.

One of the things I really liked that you showed me once,

I remember asking you when we were at Twitch,

I think we were working on a problem that was like,

reminiscent of early days Twitch,

with like the mobile stuff in different countries,

where it's like, oh, we're not the leader,

or we need to like create from scratch,

which wasn't a muscle that a lot of people there

were flexing at the time.

And I was like, hey, do you have any stuff

from the early days of Twitch that you sent me a thing,

which was like, here's all the user interviews,

like here's my doc from all the user interviews did,

which was basically, from what I understand,

there was like a small universe of people

that were already doing video game streaming.

And you were like, cool, let me call all of them.

And let me ask them like three questions.

And if I could just get these answers,

these three questions, that should give me

a little bit of a roadmap, a blueprint of understanding

what do I need to do in order to like win in this market.

Can you take me back to that?

Because I liked that for two reasons.

It was simple, and it seemed like focused intensity: you found a point of leverage and you pushed.

Yeah, I think two things happened to lead to that.

The first was the realization that, obviously, if we wanted to win in gaming, the streamers mattered. And at Justin.tv, we'd always been like, streamers and viewers are equally important.

And I finally made a decision.

I was like, no, no, no, this product ultimately

is about streamers.

And if this doesn't work for the streamers,

it doesn't work for anybody.

And then I had the realization,

this is one of those epiphany moments where I truly saw,

I have no idea why anyone would stream video games.

Like I don't really want to do it.

And I saw myself having built products for these people for the past four years at Justin.tv, not really having any idea why they did the thing they did at all.

And I sort of, I saw like, oh, I'm just making this up.

I have no idea.

I just, I don't know the answer.

I could know the answer.

Like there is an answer out there.

There's a bunch of people know it, but I don't.

And that triggered me to be like, I need to know,

I need to understand.

Like these 200 people, I need to understand their mind.

And I did about 40 interviews probably.

And I didn't want to know like what they thought we should build

because if they knew what we should build, they would have my job.

And I talked to enough of them before to know

that they had no good product ideas.

I wanted to know like, why are you streaming?

What have you tried to use for streaming?

Like, what did you like about that?

Like, how did you get started in the first place?

What's your biggest dream for streaming?

What do you wish someone would build for you?

And I didn't ask them "what do you wish someone would build for you" because I thought they would have a good idea.

I asked them because the follow up question

was really the killer one, right?

They would say, I wish you'd build me this big red button.

I'm like, great, I built you the big red button.

Like, what does it do for you?

Like, why is your life better after I built that?

And then they would tell me the real thing,

which is like, oh, I would make money that month

or I'd get a bunch of new fans who like loved me

or my fans who already loved me on YouTube

would be able to watch me live and more of them would.

And I was like, oh, that's the real answer.

Like, you don't want the button, you want the fans

or the money or the, I call it love,

the sense of reassurance and positive feedback

that your creative content was wanted.

But you're a smart guy.

Love and money and fans, I'm sure you would have guessed that's what the streamers want.

False.

What did you think they wanted?

It was a revelation that people would want money

because I was like, you're streaming, whatever, 12 hours a week; if we let you monetize at the rates we can monetize today, you'd make like $3 a month.

It didn't occur to me that that would be a positive thing.

Yes. Oh my God, that would be amazing.

And I was like, wait, wait, you're serious. You would like $3 a month? I don't want to over-promise.

Like, I will build you the monetization actually,

but like, you would really be excited

if it only produced like a tiny amount of money.

And they're like, absolutely.

I have just the idea that I could make money doing

this would be so exciting.

That had not occurred to me, because it was always easy for me to make money.

I was a programmer.

I had summer jobs interning for Microsoft.

If you're a programmer, you can get a summer job

interning for Microsoft that's like,

pays many, many years of that level of streaming

in three months.

It wasn't even in my worldview that that would be so important to them.

And of course, I knew they wanted a bigger audience,

but the degree to which they valued even one more viewer

and the degree to which they didn't care

about anything else.

Like they wanted people to watch them.

They wanted to make money.

And I asked about other things like,

do you want the video production?

You want to like improve the video production

and have cooler video production?

And they'd be like, yeah. And I'd be like, okay, but what's good about that? What do you like about that? Well, I'll get more, I'll get a bigger audience.

And it was really the realization that it was just those three things; they basically explained 98% of their motivation.

And anything that didn't move the needle on those could be ignored.

So a good example of that is polls.

Everyone would ask for polls.

It seems like a cool feature.

Live polls, of course.

Are you going to have a bigger audience

with the live polls?

Not particularly.

Are you going to make more money?

No.

Do you really feel more loved if you're running a live poll than after just asking chat and having people post their answers in the chat?

No, it's the same.

You got the feedback.

It's cool.

So this product...

It's actually cooler to see the chat blow up.

It's cooler to see the chat blow up.

So you're saying that this feature is worthless.

Yes.

In fact, potentially negative.

And so it would always be on the list of things that sounded like they might be cool, and we just never built it, entirely correctly, because it wasn't going to move the needle.

And the thing that's really hard to teach there, and I've been a YC visiting partner for this batch, trying to convey it to people, and it's very hard to get them to do it, is that you have to care fanatically about these people as people, and about these people in the role they're doing, as streamers.

And what they believe about their reality, you have to accept as base reality. That is how they see the world, and that is what's going on.

But like you need to like literally have no regard

for their ideas for how to solve the problem.

And it's a little paternalistic in a way

but it's more of like just respecting

that they are experts in this thing

and you need to understand them in that thing.

And what people are looking for, when they go looking for the product idea from the person, is that they don't want to do the work.

They don't want to take responsibility for it.

It's my job.

I have to solve the problem

and no one's going to tell me what the answer is.

There's no teacher.

There's no customer.

It's up to me to come up with the truth.

And then defend it.

When other people are like, no, that's wrong.

I have to be able to say like, no, no, no, let me explain.

This is let me explain why this is actually a good idea.

And that's scary.

You're responsible.

And I think that's probably why the "just solve the problem" advice is bouncing around my head, because a bunch of the fear founders have about addressing these things, I think, comes down to an unwillingness to take responsibility for solving this other person's problem.

Like they're going to come and dump a bunch of problems on you

and it's your job to solve it for them

within the constraints available.

And if you come up with the wrong idea, it's on you, and you can't trust anyone else to do it for you.

What are you seeing in this YC batch?

So you're a visiting partner.

Exciting time with AI probably like, you know,

half or more of the batch is doing something with AI.

What's exciting?

What are you seeing?

Where do you see the puck?

So it's interesting.

I would actually say that at least in this batch, and it might have been different the previous batch, use of AI is no longer interesting.

AI is out.

No, no, no. AI is so in that it's like being an AWS startup

or like being a mobile startup.

Like what do you mean you're a mobile startup?

Like no, are you are you building a social media network?

What's the of course you have a mobile app?

And now it's like of course you're using LLMs to solve a problem.

That's just like if you weren't doing that,

I would think you were a dummy.

Like I don't understand, like that's not a,

you wouldn't even bring it up.

It's not even an interesting topic of conversation.

The question is like, what are you doing?

No, that's not entirely true. There's some percentage of the batch, I don't know, between 10 and 20%, I'd say, that's legitimately building AI infrastructure

because there's a need to build a bunch of infrastructure there.

Those are actually, those are AI companies,

but like when people hear AI company,

I don't think they think backend infrastructural support for AI.

They think of using AI to like do things.

And I actually couldn't tell you what percentage of the batch

is AI from that point of view.

All of them maybe, I don't know.

Like why wouldn't you use it?

Even if it's only for a minor thing,

there's always something you can use it for.

It's a very useful technology.

What types of ideas are you noticing or standing out to you

that are interesting?

Is there like, you know, for example,

I remember when I first moved to Silicon Valley,

suddenly the kind of, like, online-to-offline companies started doing really well. It was like, oh, Uber and Airbnb, and like, online-offline.

Yeah. It was like, oh wait, this used to be like a taboo.

Like it was like, no, you're supposed to be a software company. You have to ship t-shirts? What are you doing?

I would say like stay away from trends.

So one of our most popular episodes on the pod

was when I was talking about the fire movement.

It stands for financial independence, retire early.

And I'm not necessarily part of that

because I don't want to retire.

But I do love the idea of just being financially independent.

I think it just gives you so many different options.

And I love content on that topic

because I just love hearing stories and tactics

and things like that on saving money and earning more money.

And just being financially independent

and the best podcast on that topic is called ChooseFI.

They've been around for years, like six or seven years

at this point, tens of thousands of downloads,

thousands and thousands of reviews.

And the host is named Brad. He's wonderful.

And if you're into earning more, saving more

and being financially independent,

that's something that I'm a big fan of.

It's something that was my goal

starting at the age of 20.

So you have to check them out.

The host, his name is Brad. He's amazing.

He's a big MFM listener.

So he understands what we're about.

And it's ChooseFI. That's C-H-O-O-S-E, and then FI, like financially independent. So, F-I: ChooseFI.

You can find it on Apple Podcasts, Spotify,

wherever you get your podcasts.

We're big fans of them. Check it out.

The online-offline companies that started the trend did very well. Uber is a great company.

Airbnb is a great company.

But they were off trend at the time.

DoorDash is a great company.

But at the time, they were doing something that was not allowed. They found an opportunity that had been ignored.

Almost all the online-offline companies that got started after Uber, DoorDash, and Airbnb were big, being like, we're going to be the Uber or DoorDash or Airbnb of X. Most of those companies did not do very well.

Is online-offline bad?

No, it's generated a bunch of incredible companies.

Jumping on the trend was probably bad for you.

And so whatever I tell you is like the trend I see.

I don't mean trend.

I guess what I mean is, I think you're a person

that is really good at looking at a situation,

like looking at a box of stuff

and identifying correctly what's really interesting

in this box.

Yeah, yeah, interesting to you.

Yeah, no, I understand.

I think I understand what you're asking.

So like what I think is changing in the world right now,

having observed this is that consumer is back

for the first time in a long time,

and by a long time, I mean in internet terms.

It's like five years or something.

But for the first time in maybe five to seven years, it feels like credibly trying to start a consumer internet company, like the ones I was so excited to start in 2007, is potentially a good idea.

And that's because of AI.

AI means there's a whole opportunity

to sort of reimagine how consumer experiences

can work ground up.

And what's cool about consumer is: for B2B SaaS, the experience isn't the product, and so reimagining the experience does not necessarily reopen a segment. It can, but usually it does not.

In consumer, reimagining an experience,

100% reopens the segment.

Because the thing you're selling is the experience.

The thing, the reason we will use your product

is it's a different experience.

And in B2B SaaS, it's not the experience, it's the what?

Yeah, people actually care what it does

and like the pricing model and like the adoption.

It's very practical and you can make people jump through hoops

if it does a thing.

Because there's a lot of money in it for the corporation, and people are paid to use your product, and it's a whole different thing.

And so AI adds new capabilities.

New capabilities enable new segments of B2B SaaS

to be created that will generate some amount of growth.

In consumer, it is a really cool thing.

It's like mobile.

It reopens every segment as like,

oh, now that you assume mobile exists,

now that you assume AI exists,

what could you build now?

And that's very exciting.

I don't have answers for that, because, you know, we'll see. That's the whole thing in consumer.

It's a bunch of lottery tickets.

Like nobody knows.

It's like a singular genius that works out, right?

Like you could see like, okay, mobile comes,

photo sharing became like open again.

The window has opened.

The window is open for photo sharing.

It turns out it's Instagram, and it's Snapchat, which used photos as text messages.

It's like, yeah, photos have a few different use cases

and Instagram and Snapchat took two of the best ones.

The fact that photo sharing is one

of the most important segments

and that, you know, sort of posting them

and messaging with them

are the two most important things to do with them

seems blindingly obvious in retrospect.

And if you'd had to predict that in 2007 or 2008,

like good luck.

Yeah.

Like nobody correctly predicted that stuff

before it happened.

I mean, not nobody.

If you did correctly predict that,

you made a lot of money.

And congratulations.

You're really good at consumer slash.

You got lucky.

We will find out when you try to do it again.

I think that in AI, I sort of have a theory for one of the ways this will disrupt a bunch of businesses.

In AI, especially in consumer,

a huge number of businesses can be conceived of

as effectively being a database with a system of record

that has like a bunch of canonical truths

about the universe and each of them is a row.

It's like Yelp is like a big database

that has a bunch of rows

and the rows are like restaurants and local businesses.

And they have a bunch of facts about them.

Where are they located?

What are their hours?

It would all be in that database row. And it's all text.

There's a bunch of messy stuff out in the world

and it's been digested into something

that is searchable and comprehensible

and usable in an app for you to use.

And most of the work of turning the messy real world

into the canonical row is done at write time by the users.

So that's how UGC apps work in general.

A bunch of your users go out into the messy world

and they turn it into a row in a database.

And if they include a photo or a video as part of that,

it's like attached to the row as a fact about the restaurant.

Here's a restaurant.

Here's a hundred...

These 150 photos are facts about its menu.

But they're attached facts.

They're not the basis.

And what I think AI has opened up the possibility for is a huge inversion there.

What if the thing you gave us was just a video of your meal, or photos, but ideally just a video of the meal, of you talking about the meal, of whether you had a good time or not, you and your friend shooting the shit about it: did you like that one? No, I liked this one.

And what if we just saved that video raw

and then an AI watched it and extracted a cached version of the metadata.

But crucially, what if we decide something else is important later? Say we didn't get noise levels, and noise levels would be a good thing to get.

Instead of re-collecting data from everyone, where they'd have to start a whole data collection process to get that, we just go back and tell the AI: also grab noise levels from all of these videos.

In fact, maybe we don't even as a product have to go do that.

Maybe as a customer, I can literally just be like,

what's the noise level at this restaurant?

And in real time, the AI can go re-watch the video and tell me.

Or I ran a search and there's these 15 restaurants and I'm like,

oh, actually sort by noise level.

We don't have noise level pre-recorded, but it's in all the videos.

The AI can very quickly watch all the videos in parallel

and sort by noise level for me.

But it wasn't even in the database to start with.
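[Editor's note: to make the inversion he's describing concrete, here's a minimal sketch of a video-as-system-of-record store, where the raw video is canonical and structured fields are just a lazily filled cache. The `ask_video_model` function, the `Review` shape, and the field names are hypothetical stand-ins for illustration, not anything Yelp or any real product exposes.]

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    """The raw video is the canonical record; extracted metadata is only a cache."""
    video_path: str
    cached: dict = field(default_factory=dict)

def ask_video_model(video_path: str, question: str) -> str:
    """Hypothetical stand-in for a video-understanding model API."""
    raise NotImplementedError("swap in a real multimodal model call here")

def get_field(review: Review, name: str, question: str) -> str:
    # Lazily extract a fact from the raw video and cache it. If the product
    # later decides "noise_level" matters, there's no re-collection campaign:
    # just ask the model about the videos you already have.
    if name not in review.cached:
        review.cached[name] = ask_video_model(review.video_path, question)
    return review.cached[name]

# e.g. sort existing search results by a field nobody collected up front:
# results.sort(key=lambda r: get_field(
#     r, "noise_level", "On a 1-10 scale, how loud is this restaurant?"))
```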

And I think that inversion, I'm using Yelp as the example,

because it's, I think, a very familiar thing for most people,

of like reviews, pretty easy to imagine a bunch of video reviews of everything.

And that being the system of record instead.

But you can describe a phenomenal number of consumer apps as being that. Anytime you type anything into a text box, you're participating in one of these system-of-record things.

What if it's a video?

What if you just, what if you assume video is deeply indexable

and understandable by computers?

What should the experience look like?

And I think it looks a lot more like a Snapchat or TikTok-like experience, but then different, because you need a map. It's not exactly like anything.

It's a new kind of thing.

But it starts probably with the camera open, which is weird, right?

Like a Yelp that starts with the camera open.

That's not Yelp today.

And it's disruptive because Yelp's whole value prop is: we have all this great, meticulously groomed data.

And if this is true, then that becomes entirely worthless.

We throw that all away.

We just want to watch a video.

In fact, it's worse than the videos.

And so suddenly the playing field is leveled between the startup and Yelp.

And that's a huge opportunity for disruption.

And so I think that you can take that and you can reapply it to

any product where you fill out forms.

And that's like a general purpose consumer thing you can now do,

kind of like build it for mobile was.

And I think in some cases it will be very powerful.

And like that will be the new winner.

I think in some cases the incumbent can kind of add videos or like it's not really better.

And like the incumbent will just win.

Like it won't disrupt everything.

But if you pick the right thing, not only will it disrupt the incumbent,

the new thing may be dramatically better for some things.

Actually, I think Yelp is a bad example. I think the data Yelp has, with the photos and the reviews, is like 90% as good as a video system of record, probably.

But you could imagine something with a video system of record

where it's not so obvious what to even put in the highly processed version of the data

in the text version of the data.

And the video version is a lot better.

And then I think not only can you disrupt the incumbent, you can 10x the size of the segment. Like, this becomes a good segment now where it wasn't particularly one before.

So ChatGPT is a great example of this, actually, one that everybody has now played with. You take Google, which is like: ah, our value is this entire sort of ranking of web pages based off of search terms. We understand basically what should show up in this hierarchy. And it was really good for finding stuff.

And ChatGPT was like, cool: you could ask a question to try to find a link to an answer, or we could just give you an answer. Or even better than questions and answers: what if you just give me a command and I could just make something? Instead of finding things, I could create things for you.

And all of a sudden it was like, well, how did they do that?

It's like, well, they just basically slurped up the internet and then trained the AI to do it.

They overfit a statistical prediction algorithm on every domain of human knowledge.

This is my theory; I'm pretty sure it's true. Statistical prediction algorithms in general work very well, and the innovation we found is a prediction algorithm that works better than normal.

But the way it works better than normal is really interesting.

It's not actually that it outperforms traditional algorithms for prediction on normal amounts of data.

It's that it keeps working as you just dump more and more data into it

and more and more processing on that data into it.

With most machine learning algorithms, you overfit very fast, and more processing doesn't buy you a better fit.

If you imagine you've got a cloud of data points and they're kind of vaguely in a line: underfit is you draw something random across them, a line that doesn't look anything like the shape of the dots.

A well-fit curve is you draw a line through the dots, and there's noise, things that are randomly above and below. But if you look at it, it actually fits the underlying predictive facts about the data well, while ignoring the noise.

And if you overfit it, you get this really wiggly curve that touches every single dot exactly. But when you get a new data point, it will miss, because it predicts too much. On new data, it actually doesn't predict very well.
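[Editor's note: a toy sketch of the underfit / well-fit / overfit picture he's describing, using numpy on made-up noisy-line data; the degrees and data here are illustrative assumptions, not anything from the conversation.]

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)   # noisy line

x_new = np.linspace(0.05, 1.05, 20)                   # fresh data
y_new = 2 * x_new + 1 + rng.normal(scale=0.1, size=x_new.size)

for degree in (0, 1, 15):  # underfit, well fit, overfit
    coeffs = np.polyfit(x, y, degree)
    train = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: train MSE {train:.4f}, test MSE {test:.4f}")

# The degree-15 curve touches the training dots almost exactly (tiny train
# error) but wiggles between and beyond them, so it predicts the new data
# far worse than the plain degree-1 line does.
```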

And so normally, when you try to dump more data and more compute into a normal machine learning algorithm, you get diminishing returns very quickly. It just doesn't perform that much better with twice as much data and twice as much compute.

The cool thing about the transformer-based "attention is all you need" architecture is that it continues to benefit from more compute and more data in a way that other architectures didn't.

And so what that lets you do is run it on a much bigger domain than normal: run it on everything. Normally, as you add more area, you degrade the quality elsewhere. No, fuck it, just do everything and put a ton of compute in.

And now you get something that predicts pretty well against everything,

which is to say it seems to be kind of intelligent.

The evidence seems to suggest to me that in some sense it's overfit.

When you ask it to predict something that is either in the set of things it was trained on, or a linear interpolation between two things it was trained on, it's quite good at giving you the thing you asked for, or a linear interpolation between five things. If the things you're asking about are all in there and it just has to find the way to blend them together, it's good at that.

When you ask it to actually think through a new problem for the first time.

Like what's an example?

There are seven gears on a wall, each alternating.

There's a flag attached to the seventh gear on the right side of the gear,

where it's pointed up right now.

If I turn the first gear to the right, what happens to the flag?

Like that's a, anyone who's like-

There's a breakfast question for you.

This is what you pondered in the mornings.

If you have pen and paper and time, you can work this out no problem, right?

You just draw the gears: when you turn the first gear to the right, the gear next to it turns to the left, and the one next to that turns to the right.

And there's a general principle there, that the gears alternate, which, if you ask ChatGPT, it knows. But no one asks dumb gears-on-a-wall flag questions. This is not a thing that's in its training set.

And you have to kind of logic your way through it and figure out: okay, turn right, turn left, turn right, turn left, turn right, turn left, turn right. Oh, the flag is on the seventh gear, pointing up. The last gear is an odd number, so it turns the same way as the first gear, to the right. So the flag will rotate down to the right, clockwise.

Cool.

Like I can work that out.

It's not actually that complicated.
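[Editor's note: the parity reasoning in the gear puzzle is small enough to write down; a sketch:]

```python
def last_gear_direction(num_gears: int, first: str = "clockwise") -> str:
    # Meshed gears alternate direction, so gear i spins the same way
    # as gear 1 exactly when i is odd (1-indexed).
    other = "counterclockwise" if first == "clockwise" else "clockwise"
    return first if num_gears % 2 == 1 else other

# Gear 7 is odd, so it turns clockwise like gear 1: a flag on its
# right side pointing up swings down to the right.
print(last_gear_direction(7))  # -> clockwise
```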

And I bet that question will be answerable; that's a pretty easy question. I tested ChatGPT with 3.5. If 4 doesn't answer it, 5 will.

But the fact that it struggles at all with that, while being so brilliant at combining this other stuff, really shows that it's overfit, right?

It knows how to answer problems that it has seen before. But when you give it a truly novel kind of combination problem, it struggles a lot. I would say, if you take the formal psychometrics approach, it has a very high crystallized intelligence but a pretty low fluid intelligence right now.

Now that could change, but like today, that's the state of affairs.

And you bring this up in order to say what?

Okay, I think it's overfit and it's strong in this area and weak in this area.

What's the so what of that for you?

Is it that, are you, are you trying to say that's a little bit overhyped?

Or are you trying to say, dude, just wait till it can do both?

Are you trying to say certain problems are doable now?

Definitely just wait till it can do both.

Because that's a, that's a whole different thing.

That's scary.

But the current thing, which is mostly crystallized intelligence, is really good at, well, this is why it's a clever trick, right? It's really good at a big set of tasks, which happens to be the set of tasks that anyone has ever written stuff down about explicitly: all explicit human knowledge.

That's like a very big domain.

There are a lot of things that can be solved where there are explicit examples of people solving that problem, or a linear interpolation of those problems, somewhere in the domain of all human knowledge.

The fact that it doesn't generalize is irrelevant.

It's immensely powerful. You don't need fluid intelligence, I guess is the point, for it to be very useful.

But it doesn't let you do everything.

You hit these weird boundaries where it's just like, wait a second, you can't do that? No, it can't do that at all. Novel problem solving, it's just terrible.

So what about, let's walk through two examples.

I want to hear your take on this.

So you gave the Yelp example.

Another thing that's kind of like rose in a database

is something like Spotify, where it's like,

oh, I want to go listen to a song.

Here's genre, artist, song, length, you know,

some algorithmic popularity, similarity to other songs in some way.

But Spotify's value is in the playlists.

I would agree that playlists are an example of this kind of databasey human data entry thing. But Spotify's value is mostly in the set of all the music itself: the licenses and the music itself.

And so I don't think Spotify's a great example because

the human data entry parts of the database,

if that all just got deleted tomorrow,

it would like not hurt Spotify that bad.

Well, the thing I'm thinking about is,

what if the licenses don't matter?

So what happens if generative music is just awesome to listen to

in a hyper-personal way?

Oh, Emmett likes, these are the types of songs that Emmett likes.

That's a different insight that I think is also possible,

which is like, it's not about being able to analyze and extract from media,

it's about being able to create media.

Because the video system of record is enabled

by the ability to understand and read video and comprehend it.

Generative is the opposite.

It's like, oh, we can make all the stuff.

Music in particular is sticky against that.

People don't want new music.

They want old music.

They want the music they love already,

the music they grew up with.

And that cycle is what sustains record labels; it's why you still listen to the Rolling Stones.

The other thing I would say about that one is,

the music's not that good yet.

Like maybe someday, but it's really not that good yet.

Well, I'm going to caveat this.

If the general intelligence level goes up a lot,

all bets are off.

It'll make some really great music for us

before it maybe takes over the world and kills everyone.

But let's assume that doesn't happen soon.

I think it's going to take longer than people think.

We go out with some great music, though.

No, if we do go out, we're going to go out with some great music, and it'll be amazing.

It's going to be a great two or three years

before we all go.

But until that point, making really good,

new great music is hard, actually.

And I think that Rick Rubin's great success

demonstrates why artists will still be important.

The AI can generate lots and lots of music,

but it's not going to have the fine judgment of distinction, the ability to say: this song, not that song.

And I actually think what it will do

is it will de-skill the music-making process on one vector.

The ability to, like, literally create the sounds.

And it will greatly upskill the music-making process

on another vector.

The ability to curate, not just curate,

to give explicit, exact feedback like Rick Rubin does.

AI is going to turn us all into Rick Rubins for generative AI.

Like, that skill set with the ability

to have a musician come to you

and help them produce their best music,

that's the thing you need to be able to do.

Because it's easy to generate a thousand cuts,

but there's infinite cuts you could generate.

So how do you direct it, how do you shape that in the right direction, and mine, and discover?

I think it's going to be kind of cool.

It's going to be interesting.

You'll get a different set of people

who will be optimal at that.

Right.

You mentioned AI might become so intelligent,

it kills us all.

This podcast is really growing.

I don't want the world to end, and life is good.

Life is good.

Here, I'll ask the question clean, for the dramatic intro hook.

Is AI going to kill us all?

Maybe.

Like, you know how serious-

Walk through how you, a smart person,

who's an optimist about technology,

but a realist about real shit.

What is the way that you think about this?

Or how would you explain this to, you know,

a loved one you care about,

who's not as deep into technology?

How would you explain this to them?

You're their trusted source of technology.

What do you say to them?

So, it is because I am so optimistic about technology that I am afraid.

If I was a little bit less optimistic

and I was like, this AI stuff's overhyped.

Yeah, yeah, yeah.

Look, it does nice parlor tricks.

We're nowhere near building something that's actually intelligent.

And all these engineers who are working on it who think they're onto something, they're full of shit.

It's going to take us thousands of years.

We're not that good at this stuff.

Technology is not going that fast.

I'd be like, this is fine.

It's great. Actually, it's good news.

It's a new trick we learned.

Excellent.

It's because I am so optimistic

that I think that there's a chance

it will continue to improve very, very rapidly.

And if it does, that optimism is what makes me worried.

The analogy I like to give on that front is synthetic biology.

I'm quite optimistic about synthetic biology.

I have several friends who have worked at synbio companies. It shows a lot of promise for fixing

a lot of really important health problems.

And it's quite dangerous, because it will let us genetically engineer more dangerous diseases that could be very harmful to people.

And you have to weigh that, pro and con. It's like the nuclear mix: nuclear weapons and nuclear power.

They're both real.

The creation of nuclear weapons is dangerous.

You don't have to be a techno-pessimist to think that there's a problem there.

I think it was good that we didn't go have

every country on earth go build nuclear weapons, probably.

And likewise, in synbio, we actually already have these regulations in place. Over time, we'll need to strengthen them, audit the oversight, and build better organizations to monitor and regulate.

But like, we regulate whether people can have the kinds

of devices that would let them like print smallpox.

And we regulate whether you can just buy the precursor things you need to go print stuff.

And we keep track of who's buying it and why.

And I'm glad that we do that. I'm not calling for a halt to synbio,

but like, if we weren't willing to regulate it,

I would call for a halt.

It is vastly too dangerous to learn how to genetically engineer plagues,

and then not to have regulation around people's ability

to get the access to the tools to engineer plagues.

That's just suicidally dumb.

And because I am pro-technology,

I believe that we should absolutely develop the technology

and that we should regulate it.

That seems just straightforward and obviously true to me.

I think it's easier for people to understand the synbio one, because the concept of engineering a plague seems like obviously a thing you could do, and obviously very dangerous, and obviously enabled by technology.

The AI thing is more abstract, because the threat it poses us is not posed by one particular thing the AI will do, the way it is with the plague.

An analogy I like to use is sort of like,

you know, I can tell you with confidence

that Garry Kasparov is going to kick your ass at chess right now.

And you ask me, well, how is he going to checkmate me?

Which piece is he going to use?

And I'm like, oh, I don't know.

And you're like, you can't even tell me

what piece he's going to use,

and you're saying he's going to checkmate me.

You're just a pessimist.

I'm like, no, no, no, you don't understand.

He's better at chess than you.

That's the whole thing: it means he's going to checkmate you. And I don't quite know what happens when people deny that. I think the big thing is they don't really imagine the AI being smarter than them.

They imagine the AI being like data in Star Trek.

Like kind of dumber than the humans about a lot of stuff,

but like really fast at math.

Like that's not what smarter means.

Imagine the savviest, smartest person you can think of, and then make them think faster, and also make them even better at it.

And not smart in just one way, like smart at everything.

Like a great writer, just insight after insight, who can pick up synbio in an afternoon because they're just so smart.

Take the smartest person you know, and then keep pushing past that. And that person is obviously dangerous if they're not a good person.

Like imagine this really, really capable person,

then imagine them wanting to go kill a bunch of people

or something, it would be bad.

Now the thing about AI that then kicks it over the edge

is that that person can't self-improve easily.

You meet this person who's like super strong,

super like talented, great with people,

great, great intellectual mind.

They can't turn around and like edit their own genome,

edit their own upbringing, and make V2 of themselves

with all the skills that maximally smart person

can come up with that like is even smarter than them.

But we're explicitly building an AI that's good at programming and chip design, and it can explicitly turn back on itself and rev another version of itself.

And the new one will be better at it than the first one was.

And there is no obvious endpoint to that process.

Like there probably is at some level

a physics-based endpoint to that

where like you can't actually just

keep getting smarter forever.

There's some limit, but we don't understand the principles of intelligence at all.

Like with most things, we understood how to make electricity

far before we understood what electricity really was.

That's generally how scientific progress works: we gain the ability to create and manipulate a phenomenon well before we deeply understand how it works.

We didn't really know what fire was for quite a while.

You could use fire really well.

The same thing is going to happen here.

We're using the AI, but we don't understand its limits at all.

We don't understand the theoretical limits of how far we'll get.

And if Moore's law is any indication, at the very least we can keep making it faster indefinitely, whether or not it can get smarter.

Even human-level intelligence, just human-level intelligence:

there's zero reason to think it will stop at human.

Like it will almost certainly blow past us.

But like, even if you capped it at human intelligence,

imagine 100,000 of the smartest person you know,

all running at 100x real-time speed,

and able to communicate with each other instantaneously via telepathy.

Those 100,000 people could credibly take over the world.

Like they don't have to be smarter than a human.

For that army of von Neumanns.

Right.

So the argument to me goes in several steps.

It's like, can you build a certain level of intelligence?

And then, okay, I actually think a lot of people do believe that computers are smart, like Google is smart, calculators are smarter than us at math.

I think it's not hard for them to believe

that the AI is going to be far smarter than human beings.

Where I think a lot of people don't make that last leap is: but why would it then have an agenda or a motive for anything to happen?

How do you address that last point of like,

what are the scenarios you worry about when it comes to like now,

the direction of that intelligence?

You build this thing and it's really good at solving,

what is intelligence fundamentally,

but the ability to solve a problem, right?

So it's really good at solving problems.

And it's going to solve the problem by solving the problem.

It can just go right through the problem and solve it

because it's really good at solving problems.

That's what we've just defined it as; that's the kind of thing it is: super good at solving problems.

And so somebody builds an AI and in all earnestness tells it what they want. And they're smart, they don't even tell it to go do a thing, although, by the way, people absolutely will just tell it to go do a thing.

But let's say we try to be careful, and we ask it: give me a plan to stop the war in the Democratic Republic of Congo right now, which would be a good thing for the world, I think.

That war is going to hurt a lot of people.

Give me a plan for that.

And I try to caveat it: the plan has to do this, it can't do that.

Here's what I mean by a good plan.

This is one of these evil-genie bargaining things, right?

It'll give you a plan, and it's a plan that will cause the problem to be solved.

But its definition of solving the problem is: there's no war in the DRC.

Well, one way for there to be no war in the DRC is for all the humans in the DRC to be in stasis fields. That way, they don't die.

And, oh, we added a caveat that the GDP has to go up too, so the plan also results in corporations in that area all trading lots of money with each other, so the GDP is very high.

And when I say this, it sounds like a fucking science fiction thing.

And the problem is, it's like Kasparov at chess: if I could tell you the exact moves it would make, I would be the superintelligent AI that could take over the world.

I can't give you the exact plan.

Yeah, but I think that makes sense, which is that a human with motivation can get the AI to work for it.

And the main thing is that the human doesn't need a bad motivation.

I think people imagine: well, humans have had powerful tools for a long time, and bad people with powerful tools have done bad things for a long time, and the solution is good people with powerful tools countering them.

The problem is, even if you're a good person with a powerful tool, there are good things to ask for, reasonable things good people would ask for, that still end badly.

You know, like: let's maximize the all-in free cash flow of this corporation over the lifetime of the business, and extend the lifetime as long as feasibly possible.

That ends in the world being destroyed and the core of the earth being turned into cars for the company to sell.

And I think the best analogy, the one that works for some people here, is: when we create the AI, we are creating a new species. It's a new species that is smarter than us.

And even if you try to constrain it to being an oracle, just answering questions, not taking action: to be a good oracle, one must come up with plans, and a good oracle can manipulate the people around it, and will manipulate the people around it, no matter what.

The whole point of the Greek myths is that when a trustworthy oracle tells you a prophecy, and you trust it, the prophecy often becomes self-fulfilling.

It's very easy for that to happen. That's not an unusual thing.

And even more to the point, we won't just make oracles. We are already building agents.

We will build the predictive AI, and we will put it in a loop that causes it to optimize towards goals, and we will give it goals to optimize towards.

Done. It's going to have goals it's optimizing towards.

And when that happens, you're going to have these agents with goals they're optimizing towards that are smart: not just smarter than humans, but much smarter than humans.

As much smarter than humans as humans were than the giant sloths when we showed up in the New World.
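To make the loop Emmett describes concrete, here is a minimal sketch in Python. It is illustrative only: `llm_complete` is a hypothetical stand-in for any text-completion model, not a real API, and real agent frameworks differ in the details. The point is that the model itself only predicts text; the surrounding loop is what turns prediction into goal-directed behavior.

```python
# Sketch of "a predictive model in a loop, optimizing towards a goal".
# Assumption: llm_complete(prompt) -> str is some text-completion model.

def llm_complete(prompt: str) -> str:
    """Hypothetical placeholder for a call to a predictive language model."""
    raise NotImplementedError("wire a real model in here")

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Actions taken so far: {history}\n"
            "Propose the single next action, or reply DONE."
        )
        action = llm_complete(prompt)
        if action.strip() == "DONE":
            break
        history.append(action)
        # A real agent would execute the action here (run code, call tools)
        # and feed the observation back into the next prompt.
    return history
```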

And intelligence is the uber-weapon.

It's not an accident that humans took over the world.

We're not the fastest creature. We're not the strongest. We're not the longest-lived.

We're the smartest.

And we're going to build a new smartest species.

And there's no fundamentally unsolvable problem here: that species could care about us.

You could build into its goals, into how it sees the world, that it cares about us, the way humans care about other humans.

That it cares about the things we care about, the things we value: the 375 different shards of human desire, everything we care about in the world. It could care about those things too.

And if it does, hallelujah, we finally have a parent. We finally have someone who actually knows what they're doing around here.

Because, Lord knows, we don't. We're barely competent to run this thing.

I would welcome a very smart other species that is aligned with us and cares about us.

I would not welcome one that cares about maximizing free cash flow, because that is not what humans care about.

And that is why it's so dangerous.

So, knowing what you know, knowing what you believe: first, what is the probability of the bad scenario in your head?

Are we talking about a 1%-ish thing? Order of magnitude 10%? 50%?

What is it in your mind?

I don't believe in point estimates for probabilities, because it's like a bid-ask spread in the market: if you're really uncertain, the bid-ask spread doesn't clear.

If you're betting on it, there's just a lot that's unresolved.

So I think of it as a range of uncertainty.

And I would say that the true probability, I believe, is somewhere between 3% and 30%, which-

Of the downsides.

Of a very, very bad thing happening.

Which is scary enough that I urge urgent action on the issue.

But it's not like you should give up.

Probably everything's gonna be fine. In fact, it's probably gonna be really good.

But the non-EV-based answer, the straight-up are-we-gonna-win-or-not answer, is: I think it's gonna be okay.

But the downside is so bad. It's probably worse than nuclear war. That's a really bad downside, and it's worth putting effort against.

Even if you think 3% is nonsense, and you're like, no, no, it's no more than half a percent: you don't recommend a different course of action at half a percent.

You have to believe that it's effectively almost impossible before you would recommend ignoring it as a problem.

You'd have to be at like 0.01% before you'd be like, eh, let's just roll the dice.
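The decision logic here is plain expected value, and it can be written down in a few lines. A toy sketch; the harm and cost numbers are illustrative stand-ins, not anyone's actual estimates.

```python
# Toy expected-value check for "even half a percent of a nuclear-war-scale
# downside still justifies acting". All numbers are made up for illustration.

harm_if_doom = 1.0       # normalize the "very, very bad outcome" to 1.0
mitigation_cost = 0.001  # assume acting costs 0.1% as much as the harm

for p_doom in (0.30, 0.03, 0.005, 0.0001):
    expected_harm = p_doom * harm_if_doom
    should_act = expected_harm > mitigation_cost
    print(f"P(doom)={p_doom:7.2%}  expected harm={expected_harm:.4f}  act? {should_act}")

# Action is justified at 30%, 3%, and 0.5%, and only stops being justified
# around 0.01% -- matching "you'd have to be at like 0.01% to roll the dice".
```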

And what are you gonna do, what action are you gonna take on that?

So you're kind of done with Twitch, you're in dad mode now.

But this also seems to be a pretty big deal. Are you like, I should do something about this? Or are you like, I'm gonna...

Right now I'm sort of educating myself, because this point of view I'm articulating has been developing as I learn more about AI.

And I think it's one of those things where intervening in the wrong way early is one of those self-fulfilling-prophecy things.

Intervening improperly, in a way that is not effective, spends social capital and doesn't necessarily move the needle.

And if you didn't have people like Eliezer Yudkowsky out there banging the drum really loud, I would feel more need to bang the drum myself.

But, I mean, you're asking me the question: it's out in the water, people know it's a problem.

And so I'm excited to focus my brain cycles on: how do we actually thread the needle?

What is a course of action that leads us, over time, to eventually still being able to develop AI, but also not destroying the world?

And one of the things I've gotten to is this idea that the AI has crystallized versus fluid intelligence, just like a human does. That's an important split for how to think about it.

And we should be monitoring, and worried about trying to understand, the general intelligence, not just benchmarking its performance on tasks, because task performance will keep going up and is not, in itself, necessarily intrinsically dangerous if the system can't solve novel problems.

Is there a new Turing test? Is there, like, a better...

It hasn't passed the Turing test yet.

But is there something we have after that? Because it seems like there's...

You mean an intelligence test.

I mean, yeah, we need IQ tests, basically, various kinds of...

How does it do on an IQ test right now?

Depends. Has it seen that IQ test before?

Likely has, right?

Yeah, so it does very well on those.

Right, so what would we do?

How does it do on novel IQ tests? Which I don't know, actually. I've not seen a good benchmark, though.

That's a good idea for something to go test.

Yeah, I think that's the sort of thing that would actually be worthy of going and doing.

Maybe there's some sort of IQ test we want to put all the models through that really tries to get at fluid intelligence rather than...

Right, because you're like, we have to monitor it, but how are we going to...

Well, it's just a great project.

There's this group, ARC, that's working on something called the evals project that's explicitly trying to build these kinds of tests.

They're focused on a few other, more pragmatic tests right now, but I think that's the sort of thing they would go after.

That's a good thing. Actually, I'll ping Paul and ask him about that.
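One way to approximate the "novel IQ test" idea from this exchange is to generate puzzle instances procedurally, so each item is new by construction and can't have been memorized. A minimal sketch, reusing the hypothetical `llm_complete` stand-in from the earlier snippet; the puzzle family here is illustrative and far simpler than anything a real evals effort would build.

```python
import random

# Freshly sampled sequence puzzles: the exact item is novel each time,
# though the *family* may still be familiar (a known limit of this approach).

def make_puzzle(rng: random.Random) -> tuple[str, int]:
    a, d = rng.randint(1, 9), rng.randint(2, 9)
    seq = [a + i * d for i in range(4)]  # arithmetic progression
    prompt = f"What number comes next? {', '.join(map(str, seq))}, ..."
    return prompt, a + 4 * d

def score_model(model, n: int = 100, seed: int = 0) -> float:
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        prompt, answer = make_puzzle(rng)
        try:
            correct += int(model(prompt).strip()) == answer
        except ValueError:
            pass  # a non-numeric reply counts as wrong
    return correct / n

# Usage: score_model(llm_complete) -> fraction of novel instances solved.
```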

You said something earlier that I want to ask you about.

You talked about founders, the singular genius that it took to figure out Instagram or Snapchat or whatever at that time, and you were like, are they lucky or are they good? I don't know, we'll find out when they try again.

Are you lucky or are you good, and are you going to try again?

Are you lucky or are you good and are you going to try again?

Well, since I had multiple failures before I was successful,

I must be at least like partially lucky.

I would say that I don't plan to try again

since I don't feel drawn to trying to start a company.

I feel like I kind of did that.

It was fun. I got a lot out of it.

It was great. I don't need to do it a second time.

I do, I like how starting a company gives me good goals.

And it worked towards.

It's like concrete that's a value to myself and others.

And I think that it's also, I also liked that it was challenging.

And so I want to do something.

And I like that it had scale.

I think it could impact a lot of people.

But I came around to thinking: well, what has impacted me the most? What's changed my life the most?

And I realized that, if I really thought about it, often what changed my life the most was essays people had written and ideas people had shared.

And I think I'm at the stage of my life now where I actually have something to say.

And so I think of it as trying to put the Emmett worldview out into the world the way that Paul Graham has put the Paul Graham worldview out into the world, or the way Taleb has not just put his worldview out there, but condensed it into sayings that allow other people to onboard it even if they haven't read all the books.

And it's sort of that ambition, to try to...

Encode it into a meme, almost.

Yeah, yeah. So that it can be digested and shared.

But anyway, you need the long form.

There's a great blog post, Blogging Theory 201: Size Does Matter, by Steve Yegge, about why the people who changed the world with their writing all write really long blog posts.

It's basically that you need some amount of time in someone's head to, like we were talking about earlier, install your agent.

Yeah. To install the voice.

And so I think I just need to produce a lot of writing.

And then you also need the pithy summary things, which both give the voice something to say off in people's heads, and also enable a language for talking about your worldview that people who aren't soaked in it can interact with.

So the people who are reading you don't sound like crazy people.

And that's sort of what I want to work on next.

I love that. I think that's great.

You said something about Rick Rubin, how he's sort of, I don't know how you would describe it, kind of like a curator, but really almost like a collaborator with an artist, helping them do their great work.

Is Paul Graham the Rick Rubin of the startup world?

No. Paul is more like the Tony Robbins of the startup world, in the best way.

Maybe not quite so much self-help, but the main thing that talking to Paul does to you, repeatedly, is increase your ambition and drive.

And he has good ideas sometimes too, don't get me wrong. Every now and then Paul has a really genius idea.

But mostly what I got out of talking to Paul was not the great idea that would change the trajectory of the business; it was the belief that I could go find it, that I was going to change the world, and that what we were doing was important and worth investing in.

And then I got a bunch of other stuff too, but that was singularly so valuable that it outweighs everything else I got out of it.

How does he do that? Because when you say that, my head goes to a Tony Robbins or a David Goggins, people who almost push you. But he doesn't seem like that personality, and reading all of his essays, he's not like that at all.

So how does he get you to think bigger and push harder without being a rah-rah, think-bigger-push-harder guy?

"You know what you should do" is the classic Paul Grahamism.

And it's always followed by a thing you could add on to what you're doing to turn it from Project A, addressing this small thing, into Project B, changing the universe.

You're building a wheel? What if you tried to power all of transportation instead of just building a wheel?

Right. You know what you should do? You know what you should do?

Yeah. If you talk to Paul: you know what you should do? You know what you should do?

That is the consistent Paulism.

I don't want to say "delude," because it sounds mean, but he kind of deludes himself about your business and how great you are, and invites you to join him in this deluded vision of interpreting what you're doing in the biggest, best possible light.

And from that vantage point, what you're doing is super important.

What if it goes right? That's sort of what he invites you to ask.

Don't stop seeing all the hard problems and all the shit you're going to have to do, but ask yourself: what if what we're doing works?

What if it goes right? What if it goes right and we keep going? What could it be?

And when you spend time there, you see how big the small things can turn out to be.

Microsoft was building programming languages for these hobbyist microcomputers. That was a tiny, irrelevant market that turned out to be extremely important.

And that's generally true of all the big businesses. They don't start out doing the important thing. They start out doing something small, something that seems almost trivial, but there's a way in which that trivial thing can be seen as bigger.

He sees it early.

No, he sees things early that have nothing to do with the way you'll actually end up being big. But he sees a bunch of ways you could be big.

No one can actually do that part. No one knows; if they knew, they'd just go do it. They'd be the prophet, the oracle.

What did he say? Let's say for Justin.tv, or whichever one you want to share.

Yeah, Justin.tv.

I remember one of them was: you should go hire all the reality TV stars and get them to go on Justin.tv. You could just take over all the unscripted stuff.

That turns out to be a terrible idea, for a bunch of reasons.

But it recontextualized what we were doing for me: we're not making an on-the-Internet live streaming show, we might be building the way you make unscripted entertainment generally.

And that's a much bigger idea.

And for my first startup, we were making a calendar. I remember this one: you know what you should do is make it programmable, so that people can add functionality in and out, so it can talk to your to-do list and your email and everything else in your life.

And then your calendar, in some ways, is everything you're doing. What if it was the central hub of your entire online information management system?

That's also a bad idea. Your calendar shouldn't be that. But a calendar could.

But what if it was? And you walk away, and implicitly, by saying that, what he's telling you is: I believe you are the kind of founders who could build an information management system that takes over someone's entire information life and manages it for them, that solves the entire problem.

You're not just building what you'll find out later is a Google Calendar clone before Google Calendar has launched. You're not just building an Outlook clone in JavaScript.

You're changing the way people relate to information.

I'm like, is that true?

It's neither true nor false. That's not a true-or-false statement. But it's a way to contextualize what you're doing.

It's the Saint-Exupéry quote: don't teach them to carry wood or build ships; teach them to yearn for the vast and endless sea.

Paul teaches you to see how you could be a changer of the world, and how what you're doing is part of this grand building of the future.

And the ideas, like the ones I repeated here: both of those ideas are bad, but they were very helpful, because they made me feel like what we were doing was important, and that Paul believed I could do something big and important.

And they caused me, even though I wound up rejecting them, to be open to and looking for those ideas, because you would get, like, three an hour.

Paul is a faucet for these. It's easy for him.

I can do it for startups too now, if I want to; I learned the trick, and I should do it more often. I usually fall into the tactical stuff.

But by having that happen, once you've rejected ten of those, you can't help but start hearing the Paul "you know what you should do" in your own head.

The ceiling has been raised.

Yes. You're like, well, maybe I should recontextualize my to-do list as an email client. Why are email and to-do even separate? Maybe I should be building something much bigger than what I'm building.

And in a way that doesn't require me to change anything: maybe what I've built is already almost that, if I just think about it in a different way.

It's a funny balance there. I actually had a tweet about this recently. On one hand: small plans have no power to stir men's souls, plan big or go home. You should be really ambitious, aim super big, and only do products that you can see being super big and super important.

And on the other hand, the fundamental truth that big trees grow from small acorns, and many of the best things, when they get started, the person is not thinking, I'm going to go take over the world. They're just trying to do a thing they think is good, often just for themselves, or for a very small number of other people.

And then it turns out to be much, much bigger than they realized.

And those are both true pieces of advice that different people need to hear in different contexts, but they kind of contradict each other.

Yeah. What about these other people?

So you've had a privilege here; I asked about Paul Graham, but you were also in the first YC batch, so you're friends with the Reddit guys, and I think you know the Collison brothers, Sam Altman.

Give me a rapid fire on them: what makes them unique? Like you said about Paul, what their kind of superpower is, what really stands out, something you admire about the way they do things.

Give me one about maybe Steve from Reddit.

Yeah. So it's easier in some ways with Paul, because he was a mentor to me, right? Steve was much more like my brother in startups; we grew up together under Paul. With Paul, I know the things he taught me, because it was much more explicit: I was being taught by Paul. With Steve, I learned things by watching and imitating.

I actually learned a lot from Steve on management by watching his kind of unflappability. Steve is not an unpassionate person; he can get angry or sad or whatever. But when there's a crisis happening... I got to shadow him for a day, and when bad news was delivered, he responded, but he wasn't moved. He was still grounded. He was curious, asked questions, didn't jump to what to do about it, but then also ended the meeting with: all right, well, here's what we should do, here's what we're going to do.

It was just a masterclass. When someone brings up something that's got to be anxiety-provoking, that's bad news, that's what it looks like when a leader is engaged but not activated.

And in my own leadership, with sometimes success and sometimes failure, I try to imitate that when I receive something like that.

When you say you shadowed him, what was that? You guys just said, let's exchange, go to each other's offices and sit in?

Yeah, early on... maybe four or five years ago. It was really cool. Me, Justin, and Steve all shadowed each other. It was pretty fun; I learned a lot.

It's incredible to go watch another CEO at work. And I don't know how you have the kind of trust relationship to make that happen without knowing someone for 15 years. I happened to have the privilege of knowing a bunch of CEOs for a really long time, and getting to shadow each other was a real learning thing.

What do you think... even if these people didn't explicitly teach you things: if I read a biography, one of the things I always try to figure out is to what extent this person is built different, or operates differently, than even somebody who's very good. The difference between very good and the elite, the best of the best at this craft, versus somebody who's certainly very good but just not the same.

That diff is what I'm always most interested in. And you've been around a lot of these high-performing people, even Bezos, you've interacted with him. Do you notice any of these diffs, or is it all just...?

It's hard to say. I believe more in contextualization: I see people do really amazing at something, but especially when it's their own company, there's a lot of "you happen to fit this problem well," and I don't know how to generalize it. I don't know of anyone else even performing on the same problem to compare against.

The CEO-of-Stripe job is a very specific job, and Patrick's amazing at it. Would he be equally amazing at some other CEO job? Possibly. But I've never seen him do that, and I've never seen anyone else be CEO of Stripe, so it's very hard for me to guess the gap.

Is it true at the beginning? Like, as startup founders of ambitious companies, are they different at that stage too?

Oh, absolutely. People who are really good, you can sense the energy and the drive and the capability, and just the pace: stuff tends to happen a lot. But not always; some problems don't actually give way.

Stripe is a good example of a company that gives way to a high-energy, high-pace approach, because it's a simple problem at some level that has infinite details. I could be wrong, but I don't know if that approach would work as well if you're trying to create OpenAI or Anthropic, where it's a research-oriented organization and you kind of have to be a little more patient; forcing it is impossible.

And so I really believe in fit: different people are good at different things. Patrick's obviously A-plus at being Stripe CEO, and it's just hard to tell the degree to which these things are transferable; we don't really know.

But actually, one thing did come to mind about this question, in terms of

a capability that I do think is generic, that I did see Bezos exhibit where I was like, oh,

that's a thing that I'm good at, but he is better at, that I'm better than most people, but he's

better than me. We presented to him at Twitch probably once or twice a year, for the first three or four years I was at Amazon, and every time, two things would happen.

First of all, he would remember everything we had told him in the previous meeting. And I don't think he was reviewing extensive notes someone else took, because I don't know when he would have had the time: I observed him going from meeting to meeting, and he did not review notes. I think he just remembered, at least the high points.

And the other thing: consistently, he would read our plan, and he would then ask a question about why we didn't do a certain thing, or give us an idea of something we could do that I hadn't thought of before. Usually there were a bunch of things I had thought of, and then at least one I hadn't, which is hard to do, because all I do is think about this company.

That never happens. Most people would be lucky to come up with one of those ever; one a year would be great, even once every three years, right? He would just generate them.

And they were not all bad ideas either; they were new ideas. I generate a lot of ideas, and to get a new idea I hadn't thought of, on a topic I'd been thinking about for a decade, that might even be a good idea: he's just really fucking smart, as far as I can tell. I don't know how he does that.

Can you share a story of one of those? The statute of limitations has passed; it was five years ago.

I'm trying to remember. Honestly, I can't; I don't remember the specifics anymore. I just remember the what-the-fuck moment. The first time, I was just like, oh, he's smart. He's seeing Twitch for the first time, and a lot of times smart people will have one good idea about your business the first time they see it, because they have this huge history and they're pattern-matching you to some historical thing they've seen, and that combination yields one new insight.

But then he did it the second time. I remember the second time I was just like, what is going on? This doesn't make any sense. I've never had that experience before, ever.

Andy does not have the new-idea-generation capability the same way, but he does have the remember-what-you-told-him thing, which is also extremely impressive.

And Andy has this other thing he can do. It's easier for me with people I've reported to or learned from, and Andy Jassy has this ability to criticize you in a way that conveys, 100%: I know that you're amazing. I know that your plan is good, or that you're at least capable of making a really good plan. I know that you're working really hard. I know that you're smart and you have a great team, and we have a huge opportunity. And yet somehow your results are bullshit. I don't know what's wrong, but we're in this together, and I have your back. But I'm confused: why aren't the results better, given how amazing you are?

And you feel supported. You feel like he believes in you. But you're just so sad: oh, I'm sorry, I'm confused too. I'm sorry I've failed, even though I clearly can succeed at this. I'm going to go fix this now.

It's almost like, instead of looking at this and judging you, he comes to your side of the table and says: what is this? How did we wind up here? How have I failed you, that I didn't say something earlier? But, like, not in a fake way.

When some people do that, it comes off as insincere, or it comes off as if they don't think you're actually competent. "How did I not catch this" can come off as: I don't blame you, because you're clearly not good enough to have caught this.

With him, it really is: how did we wind up here? I know we're working together, we're on the same team. How did we wind up with not the results we wanted, with a plan we both thought seemed good? Help me understand.

And because it's genuine, it's super effective. At least, it's super effective on me. I don't know if it works on everyone, but I saw it be effective on other people as well, so I know it works on some number of people.

Right.

And that's another one of those things I've tried to become good at. I'm not as good at it as Andy is, but I've certainly gotten better. Something to learn from.

That's great. Love that one. Dude, thanks for doing this. I know I've been bothering you to do this for a long time, because I love hearing your stories and I love hearing the way you think. It's very different from most people I run into, even here in Silicon Valley, where you're supposed to have this kind of very unique, diverse set of minds. You're one of them. One of the reasons I moved out to San Francisco was to meet people like you. So thanks for doing this.

Thank you, I really appreciate that. It was a beautiful conversation; I really appreciate being able to come on the podcast.

Machine-generated transcript that may contain inaccuracies.

Episode 494: Shaan Puri (https://twitter.com/ShaanVP) talks with ex-CEO & co-founder of Twitch, Emmett Shear (https://twitter.com/eshear), about the potential of artificial intelligence, the importance of learning from the best, and the value of understanding the needs of users. He also shares insights on problem-solving, the power of seemingly small ideas that can have a huge impact, and lessons he's directly learned from Silicon Valley greats like Paul Graham and Andy Jassy.


Want to see more MFM? Subscribe to the MFM YouTube channel here.

Check Out Sam's Stuff:

• Hampton - https://www.joinhampton.com/

• Ideation Bootcamp - https://www.ideationbootcamp.co/

• Copy That - https://copythat.com/


Check Out Shaan's Stuff:

• Try Shepherd Out - https://www.supportshepherd.com/

• Shaan's Personal Assistant System - http://shaanpuri.com/remoteassistant

• Power Writing Course - https://maven.com/generalist/writing

• Small Boy Newsletter - https://smallboy.co/

• Daily Newsletter - https://www.shaanpuri.com/

Show Notes:

(0:00) Intro

(4:30) Did you always have an insatiable curiosity?

(8:30) How to solve any problem

(13:23) The importance of understanding your customers / users needs

(22:15) Emmett’s favorite business ideas right now

(41:00) Is AI going to kill us all?

(56:50) Was Twitch luck or skill? Will Emmett try to build another unicorn?

(59:00) Lessons from Paul Graham

(1:09:00) What’s the difference between people who are good vs. great?

Links:

• Twitch - https://www.twitch.tv

• Paul Graham - https://twitter.com/paulg

• Patrick Collison - https://twitter.com/patrickc

• Andy Jassy - https://twitter.com/ajassy


• Do you love MFM and want to see Sam and Shaan's smiling faces? Subscribe to our Youtube channel.

Past guests on My First Million include Rob Dyrdek, Hasan Minhaj, Balaji Srinivasan, Jake Paul, Dr. Andrew Huberman, Gary Vee, Lance Armstrong, Sophia Amoruso, Ariel Helwani, Ramit Sethi, Stanley Druckenmiller, Peter Diamandis, Dharmesh Shah, Brian Halligan, Marc Lore, Jason Calacanis, Andrew Wilkinson, Julian Shapiro, Kat Cole, Codie Sanchez, Nader Al-Naji, Steph Smith, Trung Phan, Nick Huber, Anthony Pompliano, Ben Askren, Ramon Van Meer, Brianne Kimmel, Andrew Gazdecki, Scott Belsky, Moiz Ali, Dan Held, Elaine Zelby, Michael Saylor, Ryan Begelman, Jack Butcher, Reed Duchscher, Tai Lopez, Harley Finkelstein, Alexa von Tobel, Noah Kagan, Nick Bare, Greg Isenberg, James Altucher, Randy Hetrick and more.

Other episodes you might enjoy:

#224 Rob Dyrdek - How Tracking Every Second of His Life Took Rob Dyrdek from 0 to $405M in Exits

#209 Gary Vaynerchuk - Why NFTS Are the Future

#178 Balaji Srinivasan - Balaji on How to Fix the Media, Cloud Cities & Crypto

#169 - How One Man Started 5 Billion-Dollar Companies, Dan Gilbert's Empire, & Talking With Warren Buffett

#218 - Why You Should Take a Think Week Like Bill Gates

Dave Portnoy vs The World, Extreme Body Monitoring, The Future of Apparel Retail, "How Much is Anthony Pompliano Worth?", and More

How Mr Beast Got 100M Views in Less Than 4 Days, The $25M Chrome Extension, and More