Hard Fork: The People vs. Meta + Marques Brownlee on YouTube and Future Tech + DALL-E 3 Arrives

The New York Times 10/27/23 - Episode Page - 1h 6m - PDF Transcript

I'm Kevin Roose, tech columnist for the New York Times.

I'm Casey Newton from Platformer.

And you're listening to and watching Hard Fork.

Hard Fork.

We have to probably change that, huh, now?

Oh yeah, because it can't just be you're listening to Hard Fork.

Yeah.

You're consuming Hard Fork.

You are platform-agnostically making your way

through the Hard Fork media product.

You are experiencing Hard Fork.

You're experiencing Hard Fork.

I like that.

Or what about this is Hard Fork?

That's a great one.

I like that.

I don't want to hear you say it.

I'm Kevin Roose, a tech columnist for the New York Times.

I'm Casey Newton from Platformer.

And this is Hard Fork.

That sounded amazing.

Yeah.

I think that's just it right there.

This week, for the video debut of our show on YouTube,

a major new lawsuit against Meta claims

that social media is addictive and harmful to teens.

Then YouTube legend Marques Brownlee,

aka MKBHD, joins the show to give us video tips

and share his thoughts on future tech.

And finally, DALL-E 3 is here.

We'll look at how quickly AI image generators are evolving.


So this is our first ever full YouTube episode.

We've been talking about making a YouTube show

since we started the podcast.

Yeah.

I mean, the number of emails that we get demanding

to sort of see us in living color was through the roof.

And we're so excited that now we actually get to do it.

Yeah.

So this is our first YouTube episode, very excited.

Basically, how this is going to work

is we're going to put out the podcast on Fridays,

as we usually do, and then a day later on Saturdays,

the YouTube version will come out.

And it will be essentially the same show.

So if you listen to the podcast,

you don't also have to watch the video.

Although if you do, we will award you extra points.

Don't do that.

Don't do that.

Life is too short.

Well, I mean, look, different people lead different lives.

There are some people who are going to sort of want to listen

as they normally do, and then they'll

wake up the next Saturday morning, and they'll have a copy.

And they'll want to sort of revisit their favorite moments

from the previous day's episode.

That's true.

My kid does like to rewatch, like, CoComelon episodes.

So similar thing.

Yeah, we could be the Frozen of the podcast universe.

Children going nuts for us.

All right.

So on Tuesday of this week, Meta was sued by more than three

dozen attorneys general representing various states.

And I want to talk about this lawsuit.

And I think we should focus on the main lawsuit.

There are actually a couple lawsuits filed.

It's a little confusing, but basically the main one that

was filed in federal court was led by California and Colorado.

That's the one that I read and that I think we should talk about.

So this is already being compared to kind of the lawsuits

that states filed against big tobacco companies, big pharma

companies, and basically what these AGs are—

Is it AGs or AsG?

I believe it is AGs.

Which is counterintuitive, because there's an argument

that it should be AsG.

It should be AsG.

You know what?

We can have our own house style around here.

If you want to say AsG, that's fine with me.

That's right.

OK.

So all these AsG allege that Meta had this kind of long-running

multi-part scheme to keep kids hooked to their products and services.

And this sort of comes out of a wave of state attempts

to legislate and regulate tech companies, especially when it

comes to children and teenagers.

So let's just break down this lawsuit a little bit,

because it is kind of complicated.

Yeah.

I would say that there are kind of two buckets of complaints.

The first and much larger bucket is absolutely

modeled on the lawsuits that we saw against big tobacco

and big pharma.

And those lawsuits were ultimately successful, right?

And so I think that's why they're being used as a model here.

One of the things that all those lawsuits have in common

is that they have to do with addictive products.

Another is that there are health concerns associated

with pharma, tobacco, and social media.

And then the third is that there was an internal knowledge

about the risks that was not shared with the public,

even though the people who were making Meta's apps knew.

Right.

The people working at Meta knew

that these products had some harmful effects on young kids

and said so in their sort of employee forums,

but still the company persisted in going after young users.

That's sort of the allegation here, right?

So let's talk about the kind of like addictive features.

So some of the features mentioned in this lawsuit

are things like like counts.

So you can see how many people liked your Instagram post.

These persistent alerts and variable rewards,

push notifications, show up on your phone,

keep you coming back to the app, that this

is sort of designed to produce dopamine responses,

and that teenagers are especially susceptible to this.

Filters that promote body dysmorphia,

disappearing stories like on Instagram, infinite scroll,

and getting rid of chronological feeds

so that posts with more engagement are seen first.

Now, what was your reaction to seeing these features listed

in this lawsuit?

It felt like the AsG had just discovered

apps for the first time.

You know, it's like, have you used anything on your phone?

Was this the first push notification you ever got

was from Instagram?

You know, so look, I don't want

to make too much light of it because something

that I do believe is that for some group of users,

some group of young users in particular,

using social media can be associated with harm.

It can create harm.

It can exacerbate harm, particularly if you already

have mental health issues, right?

If you're using a social media app for more than three hours

a day, according to the Surgeon General this year,

you are at much greater risk of harm than other folks.

So I do want to take that very seriously.

And at the same time, I want to acknowledge

that there is no regulatory framework that guides

how you build apps in this country, right?

There are many apps that have likes.

There are many apps that have ranked feeds.

There are many apps that are sending push notifications.

And so for the AsG to come along and say, well,

you can't do that, I do think they're

going to have a hard time selling that in court.

Totally, especially because Meta did not

invent a lot of these features, right?

They didn't invent the push notification.

They didn't invent the infinite scroll.

So at a minimum, I think if you're

going to go after Meta for having these kinds of features,

you also have to go after other social apps that

are popular with kids.

I've talked to some people who think

that that's sort of what's going on here,

that the states are sort of sending a signal,

hey, we're going to be going after all of the apps that

are popular with kids that have these features.

And so if you're Snapchat, if you're YouTube, if you're

TikTok, you're going to be looking at this case and saying,

wait a minute, maybe we should stop using those features.

I mean, yes and yet at the same time,

social networks are a business that

tend to decline over time, right?

If you run a social network, you're

always having to pull a new rabbit out of a hat

just to get people to look at you, right?

The reason that Meta added stories to Instagram

was that Snapchat was starting to take off.

And so it's like, oh no, now we have

to sort of change everything.

Then TikTok came along and all of a sudden it was like,

eh, we need to start ripping things out of the app

and put in short form video everywhere.

So these apps are always changing.

They're always having to add new things.

And they're always sort of having to wave at you and say,

come back and look at this thing.

So I think if you run a social network,

you're looking at this lawsuit and you're saying,

I actually don't know what I am supposed to do here.

Are you telling me that I am not allowed to try to get users

between the ages of 16 and 18 to open the app?

Is that just illegal now, right?

And so this is one of the first,

but not the only time in this lawsuit

that we run into this problem of,

we do not have a regulatory framework in this country

for companies that build apps.

Right, right.

But at the same time, part of what's being alleged here

is that even after Meta knew that these features

were particularly appealing and good

at getting young users to come back

and knew that there were harms associated

with some uses of their products for young users,

they kept pursuing that user base.

And I think that's not unique to them.

Every tech company, you know, wants to have young users

because young users are going to be on your app a lot.

They tend to sort of drive culture and influence other users.

They also buy things.

So it's a very coveted demographic,

but that's also sort of where Meta has gotten into trouble

because they were going after those young users

and this is now at the heart of this complaint against them.

Right, and I think where it will become legally difficult

for Meta is not just if the AsG prove

that Meta was trying to attract younger users

because as you point out,

lots of companies try to attract younger users

with their products, it's going to be,

you knew that younger users were uniquely vulnerable

to a provable mental health harm

and you marketed it to them anyway.

Casey, what was in the redacted parts of this lawsuit?

What do you think?

It was truly just blanketed by these redactions.

So I'm going to assume that it is a lot of internal email,

internal documents, data from some of the research

that the company has done.

And if I'm one of the AsG,

well, I guess the AsG know what's in the lawsuit,

but if you're somebody who hopes that this lawsuit succeeds,

what you should hope is that all of those redactions

are just evidence to support the claims

that are made in this lawsuit.

The lawsuit is written in a way

that makes Meta look almost cartoonishly evil, right?

You know, the sinister plot to try to get a young person

to look at Instagram as if it were trying to, like,

entice them into the witch's house in Hansel and Gretel.

You know, so, you know, but again,

like maybe the data is in there

and we're going to read this unredacted complaint

and we're going to say, holy cow, this is super bad.

As it is, it's a bunch of claims

without a bunch of evidence to support it.

Right, so that's the first bucket,

the kind of like features that harm children

and addict them and get them coming back to our apps.

Let's talk about the other, what?

Well, I think we should go one note further, Kevin,

because there's something that I think

is going to be really controversial

when this thing actually gets debated,

which is, can the AsG prove,

we really got ourselves into trouble

with this new house style?

Can we go, let's go back to AGs.

Okay, okay, back to AGs, okay.

The AGs are going to have to prove

a sort of direct harm here, right?

If you were an AG prosecuting a tobacco company,

you had really amazing evidence

that smoking caused lung cancer, right?

If you were an AG prosecuting a big pharma company,

you had really good evidence that opioids

were way more addictive than they had ever been marketed as

and that that was causing horrible harms

in people's lives, right?

If you want to prove that like Instagram

as is currently built is a primary driver

of the mental health crisis among teenagers in this country,

which is a real mental health crisis,

you just have a lot more work to do, okay?

That is not something that there is a lot of consensus on

among even the people who spend the most time

researching this subject, which again,

is not to say that some people do not experience harms

because they clearly do, but if you're going to say

that Meta is essentially a linchpin

of the mental health crisis in this country,

which I think a lot of these AGs

really want to make that case,

then they're just going to have to bring a lot more evidence

than we have seen so far in this lawsuit.

Totally, I will just say like,

we don't know what's in the redacted portions

of this lawsuit, it could be incredibly damning things

from inside Meta that will look very bad

if they get out or to the people who evaluate this,

but I will say, I think that there is a perception

of harm thing here that really does have a lot of power.

I just watched the Netflix documentary about Juul,

have you seen this yet?

Jewel, the folk singer?

Juul, the vaping device, beloved by teens.

So this documentary, this docu-series, I guess,

called Big Vape, highly recommended.

I was thinking about that while I was looking

through this lawsuit because there, in that case,

with Juul, you had a company that had made something

that actually did have both positives and negatives, right?

Like it did help people quit smoking,

but Juul made a fatal flaw,

which is that they marketed to kids, right?

And then they said that they weren't marketing to kids.

And it's like, well, why is this vape branded

with SpongeBob animations, you know?

Right, they marketed to kids, they hired these cool-looking

fashion models to make ads for them.

And in this country, parents get mad as hell

if you market something to their kids

that turns out to have harmful effects on them.

And look, I'm a parent, I get that impulse.

I think the people at Meta didn't realize

that if parents turned against them

and started to feel like their products were harming kids,

even if the evidence for that harm was kind of shaky,

it actually wouldn't matter.

It was gonna be game over.

Parents were never going to forgive or forget that.

And that perception alone of you are a company

who's marketing something to kids

that you know has harmful effects on at least some of them

was going to just be a fatal flaw.

And so I don't think the company saying like,

oh, well, the data is inconclusive

and social media is actually good for some adolescents.

Like, I just don't think that's gonna help them at all.

It clearly doesn't have very much emotional power, right?

It doesn't have nearly the emotional power

of the stories that we've heard

and that we have featured on this podcast

of teenagers saying that this app

caused a real problem in my life.

And I do believe those stories

and they are gonna be a problem for Meta.

Now, I should say here that we did ask Meta for a comment

and it said, quote, we're disappointed

that instead of working productively with companies

across the industry to create clear age-appropriate standards

for the many apps teens use,

the attorneys general have chosen this path.

Right, I think the stronger part of this lawsuit

is actually about data privacy and data protection

because we actually do have a law in this country,

COPPA, the Children's Online Privacy Protection Act

that prohibits tech companies from collecting data

from users under 13 without their parents' consent.

And what Facebook and Instagram and Meta have said

is, well, we make people put in their age

before they register for an account.

We don't want underage users on our platform.

And if we find out that they're on our platforms,

we kick them off.

But what this lawsuit says is basically,

well, that doesn't work clearly

because there are still millions of underage users

on your platforms and you actually haven't tried hard enough

to get those people off your platforms.

What do you think about this part of the lawsuit?

Yeah, so I think this is just a much stronger

part of the lawsuit in part because most platforms

do just have people under 13 who are using them.

It is a time-honored part of American childhood

to use the internet without your parents' permission

and the 13-year-olds are going wild, okay?

I'm sorry, the 12-year-olds are going wild.

So here's what is interesting to me about the COPPA piece.

A few years back, Instagram said it was gonna work

on a special version of the app for kids under 13.

I remember this.

And this caused a big sort of emotional reaction

that said, wow, that feels like really, really icky, right?

I was somebody who felt that way and said so at the time.

And what Instagram said in response was,

look, you have no idea how many kids are trying

to get onto our platform,

are successfully getting onto our platform.

It's one of those where it's like,

if you're gonna drink, I'd rather you do it in the house

where I can watch you.

That was the sort of logic of Instagram building

an app for kids under 13, right?

Which is sort of what like YouTube does.

They have YouTube and YouTube kids.

Yeah, that's right.

Who knows what the Instagram kids would have been like.

There's also a Messenger Kids app, by the way,

that Facebook makes and is for kids under 13.

Why do I bring all this up?

Well, look, we know the company has admitted

that it has a problem with these under 13 users.

Now, I think what the company would say was,

yes, and we are one of the only companies

that was trying to do something meaningful about this, right?

Everybody else just wants to pretend

that this isn't an issue because, you know,

a group of dozens of attorneys general

are not gonna show up at the door of the average website

because it happened to have some like 12 year old users.

But if you get into trouble for something else,

they can come along and they say,

hey, do you have any 12 year olds on your platform?

Were you collecting data about them?

Well, now you have a problem.

So this is kind of like getting Al Capone on tax evasion,

right?

But like, I do think they're probably going to get them.

And I would say that the odds that Meta escapes

this lawsuit without having to pay some sort of fine,

probably heavily related to the COPPA violations, are small.

Yeah, so is that the remedy here?

Like, is that what's going to happen at the end of this?

Is like, cause there's one version where, you know,

they just pay a big fine.

They've paid a bunch of big fines over the years.

They have a lot of money.

They keep operating.

It's sort of cost of doing business for them.

But I think there actually is a chance.

I don't know if it's a big chance or a small chance

that this lawsuit will succeed in doing more

than just fining the company, will actually require

them to scale back on some of their features,

to change how they do age verification.

Do you think any of that's going to happen?

Or do you think it's just going to be like,

slap on the wrist, cut a big check, and move on?

I think it's really hard to answer this

without seeing the full complaint

and without starting to see it litigated a bit more.

Again, maybe there is evidence of direct mental health harms

on teens that we just haven't seen before that is buried

somewhere in the redacted portions of this lawsuit.

For the attorneys general's sake.

I hope there is, Kevin.

I hope there is that evidence.

Because if it is not there, then they

are in the position of having to prove some pretty explosive

claims using some pretty flimsy evidence.

And if that's the case, then yes,

I think this probably just becomes a settlement

over some COPPA violations.

And I think that would be sad.

And here's why.

We do have a crisis with teen mental health in this country.

I was reading the CDC reports yesterday,

and you're looking at the statistics

of the number of young girls in particular who

are going to emergency rooms, who are contemplating suicide.

It's just really, really awful.

And there is a lot of debate over the exact causes of it.

And again, I think that, yes, social media

is playing a role in this.

And I think social media companies could absolutely

be doing more to protect these kids.

I don't really think this lawsuit gets us here.

And the reason is because we just

have not written rules of the road for these companies.

In the whole backlash to big tech that's been going on since 2017,

the US Congress has not passed a single new meaningful piece

of legislation that regulates the way that any of these tech

giants operate.

When I look at what's happening in Europe, where they passed

the Digital Services Act, that at least

begins to lay out some rules of the road.

It begins to say, here's what you

have to do about harmful online content.

Here's what you have to do about disinformation.

Here are some ways that you have to be transparent about what

you're doing so that outside observers can get a sense of what

you're doing.

And the DSA at least speaks to the idea

that, amid all of this, individual users should still

have some rights to free expression.

That we still actually do want people

to be able to get on the internet and post

and talk about their problems.

And hey, maybe for an LGBT kid, you

can meet another LGBT kid online.

And maybe that's a positive connection

that you can have in your life that helps you out

of a tough spot.

So I think Europe is sort of leading the way there.

And I wish the United States would say, you know what?

We actually need to create our own regulatory framework.

Maybe we don't want 16-year-olds to see

likes on their Instagram posts.

Maybe we want to mandate screen time limits for children

the way they do in China.

I think that would be wild.

But we could absolutely do it.

But let's actually get together and make some rules of the road.

Because if we do, then we can have a much bigger impact

than just fining Meta.

We could improve the entire social media industry.

Yes.

I buy that.

Thank you.

Are you running for president on that platform?

Well, I've been thinking, how far do you think

I could go with that platform?

I think you could make it to the primaries.

OK.

I think you could pull in 10%.

I could make it to literally the first step

of the presidential.

We got to get some signatures first.

That's a good point.

What would you like to see out of all this?

I would like to see tech companies, including Meta,

but also all the other ones with young users.

I would like to see them think a little bit harder

while they're designing products for young people

specifically.

I want them to feel a little bit of fear,

a little tingle on the back of their neck

before they roll out a new feature that

is aimed at younger users.

Not because I don't think young people should be allowed

on the internet or they should have a vastly different

experience than adults, but just because I want them

to be taking that extra burden of care on.

And I want them to be a little scared of violations

they might be committing by putting more addictive features

in the app aimed at kids.

And I think that makes a lot of sense.

I think it'd be interesting to imagine what cable television

and what broadcast television would look like in a world

where the Federal Communications Commission didn't exist.

And where it hadn't laid out what you're allowed to show.

There are rules around educational programming

and what times of day certain things can air

and what kinds of content are allowed to be shown

at certain times of day.

And the nice thing about that is we don't have to rely

on ABC and CBS doing the right thing.

We just know that the FCC is looking over their shoulder.

So it would be great to see something like that

on social media.

Totally.

All right, let's move on.

Yeah.

When we come back, Marques Brownlee

of the hit YouTube channel MKBHD teaches us

how to become YouTube celebrities.

So Casey, we have a very special guest today

for our first ever YouTube show.

We are kicking off our first ever YouTube show

with a YouTube legend.

So Marques Brownlee is a very popular tech creator on YouTube.

His channel MKBHD has been going on for more than a decade.

He's got 17.7 million subscribers,

which is slightly more than the Hard Fork channel,

but not for long.

And he is sort of the person whose channel I watch most

on YouTube when it comes to new technology,

new gadgets, new phones.

Whenever I want to know what is the latest

and greatest piece of technology out there,

Marques' channel is the one that I go to.

Absolutely.

As somebody who has also been watching MKBHD for a long time,

not only have you seen Marques grow up on his own channel,

he's been doing it for 15 years since he was a teenager,

but you have just also seen YouTube evolve, right?

And Marques has had to adapt to that.

Every year, his shots get a little bit sharper.

The tech in the podcast is a little bit better.

He now operates out of this magnificent studio.

And so just watching the way that he has grown

both on the technical side and as a creator

has been fascinating to watch.

And I think just offers an incredible template

to anyone else who is wondering,

how do I start a YouTube channel?

How do we get really, really good at this?

And he has become like a legitimately big deal

in the world of tech.

The success of his YouTube channel

has kind of made him a celebrity

among tech companies and tech leaders.

He's interviewed Elon Musk and Sundar Pichai.

And I think it's just like a great way

to start our YouTube channel by talking to the person

who I would say represents sort of the pinnacle

of what tech journalism can be on YouTube.

Yeah, it's great to start a new project

with somebody who is so successful.

You just know you'll never get anywhere near that level.

And just sort of really sort of misalign those expectations.

Hey, Marques.

Hey, hey, how's it going?

I'm immediately struck by how much cooler

your studio looks than ours.

It's true.

My goodness.

We got a lot of light going on here.

Great light.

You've got like a red pop screen on your microphone.

Yeah, we pull out all the stops for Hard Fork.

You know, it's high end stuff here.

So this is our first ever YouTube episode.

And if you were directing us in our channel,

how would you, would you give us any notes?

Are we, how's our background?

How are we looking?

I think we're looking pretty good.

I like to just jump right in.

I feel like if you're a viewer,

you usually skip the sort of intro shenanigans a little bit,

just get right in.

So like, I think clipping right to action,

that's what you want to do, you know?

I was, one of my favorite things people do on YouTube

is when they put in the little chapter marks

and they say, this is the introduction.

And I say, perfect, now I don't have to watch that.

Fine, you can go right to where it gets into it.

And that's the biggest spike.

Well, it's like, if you're watching a video

about how to roast a chicken

and there's a three minute introduction,

I don't actually need to watch that, you know?

That's true.

So, all right, let's skip to the action then.

Let's get to the action.

One of the reasons we're excited to talk to you

is because you've seen YouTube

through almost every iteration.

I went back and watched some of your first videos

that you posted about 15 years ago

when you were reviewing things like HP Media Center remotes.

I think you were like 15 at the time.

So talk to us about the earliest part

of your YouTube career.

What was YouTube like back then

and sort of what made you excited about

posting videos on the platform?

Yeah, I mean, I guess I've heard it described

as sort of the Wild West, but looking back,

it's never been a more accurate description.

Like back then, so this is 2009,

it was truly nobody's job.

There was nobody who was a professional.

That wasn't a thing back in those days.

So it was really just like, I was in high school

and I had to buy a laptop

and so I'd like watched every other YouTube video

in the world on that laptop,

just because like, you know, it's my allowance money.

I might as well do the research.

And so I got the laptop and then I guess I found

a couple of features and a couple of things about it

that I didn't see in those other YouTube videos.

And so the natural response for me,

a kid who had watched a bunch of YouTube videos was,

oh, I guess I'll just make a YouTube video

so that someone else who buys it knows.

And so that's what I did.

I just turned the webcam on and just like started

with the media center remote

that nobody had talked about in the other videos.

You know, it was just kind of a fun thing to do

when I got home from school instead of homework

and I had about 300 videos

before I had 12,000 subscribers

and I hit my first million video views.

So YouTube started expanding.

The partner program started sharing ad revenue

with more and more creators around the time

that your channel really started to blow up.

And I've heard from other creators

that that moment was sort of a big shift in the platform

where suddenly people started to take seriously

the possibility that they could actually make

a living doing YouTube videos,

that it wasn't just sort of a fun thing.

It wasn't a hobby.

It could actually be a career.

So what's your memory of kind of that stage of evolution

where you could actually get money

from YouTube for making videos?

Did that change your approach to the platform?

Did it kind of affect the ecosystem?

What do you remember about that time?

Okay, there's a lot to that moment in YouTube history.

I think for myself,

I didn't really see it as that much of a difference.

My channel was growing, yes,

but it was like the difference between $0 and $7

at the end of the month.

So it was like, you know, it's neat,

but it's not a job or anything like that.

I'm not telling my parents like, this is it.

What I always like to say is the best thing

that never happened to me was some video

like going mega viral.

Because I think with a lot of YouTube channels,

they do their thing for a little while

and then something pops off

and gets 100x their normal views.

And what happens at that point

is they kind of start chasing that again.

They start trying to redo another version

of the thing that popped off

or just suddenly that's the theme of your entire channel.

And thankfully for me, it was I'm into tech.

There's all these tech topics to talk about,

all these things to make videos about,

and people seem to really be interested in that.

So the growth was very steady

and very organic the entire time.

Right.

All right, so Kevin, the first thing

that I'm learning from this is we cannot do

a crazy viral video, okay?

We cannot just go, if the worst thing

that could happen to us

would be like getting 100 million views.

Yeah, please don't view our videos 100 million times.

That would be terrible.

Do not share this video with your friends.

Yeah, we would really hate that.

But I think that you've touched on something really important

which is I think there are sort of two approaches

you can take to a platform like YouTube.

You can approach it as kind of an art or a science, right?

And you'll hear people talking,

I remember hearing Mr. Beast talking about how

he'll try 500 thumbnails for one video

and really like see the results of the tests

and which one gets clicked more

and then use that one or changing titles of videos.

There are people who really approach this

as an optimization problem.

And it sounds like you don't see it that way.

I'd say I've come around on the benefits of optimization

but it's not the primary thing.

So I think if you just look at a normal tech video

like what are people watching it for?

I'm here for the information.

I'm here to know if I should buy the thing or not.

So my primary objectives are still to satisfy

those tenets of a good video.

But if you ignore the rest,

which I probably did for a little too long,

things like a really good title

or a really good retention strategy

or a really good thumbnail.

If you ignore those things, you are missing out.

So yeah, over the past few years on YouTube

I have thought a lot more about such optimizations,

I guess I would say.

So I used to literally just pick a thumbnail

as it was uploading.

Like I didn't really think too hard

about the thumbnail strategy.

But I think if you talk to YouTubers now

it's kind of flipped on its head.

It's like, I think about a title and a thumbnail

and then make the video.

So I'm kind of mixing that.

I think it's fun to play with like how you optimize

what you've already made versus how you optimize

your whole channel and start to make things

for that optimization.

Everyone's gonna be in a different place

on that spectrum.

You talk about MrBeast; he's on the extreme end.

So I've had to think about that a little more,

definitely just to make sure

we're getting our stuff out there.

Marques, one thing that I have heard from YouTube creators

over the years of reporting on this platform

is that when you're a big YouTuber with a big channel,

you really sort of feel what you could call

like the YouTube meta changing,

like what kinds of videos are rewarded,

what performs well, what the algorithm is doing.

Big creators I think have a really innate sense of that.

I remember interviewing PewDiePie a few years ago

and he was sort of telling me about this time

where it was like edgy videos were being really rewarded.

So everyone was kind of chasing edgy humor and edgy memes

and sort of trying to figure out where the edge was.

And then YouTube changed the meta

and suddenly it wasn't good to be edgy.

You weren't gonna make as much money

or get as many views.

And it sort of felt like describing kind of,

yeah, like riding this sort of wave

that just keeps shifting underneath you

and having to be really attentive to that.

Do you feel that?

Like how much are you thinking about

the sort of YouTube meta when you're making videos?

And what do you feel like the current meta of YouTube is?

I'll put it this way.

What PewDiePie describes as like edgy videos,

I would always try to shift it to trying to explain it

with the actual algorithm.

And I think what he's actually saying is

videos back then that had a lot of engagement

that you could get people to comment on or like

or dislike a lot relative to the average would be rewarded.

And I think over time, YouTube has further and further

refined their definition of a good video.

Back then it was just like,

hey, it's got a lot of views, it's got a lot of likes,

it's got a lot of comments, it's probably a good video.

Serve it up.

And I think over time they figured out more and more

analytics to narrow and define what a good video is

for a certain viewer.

And so you'll see those waves as you talked about,

like it's not just a lot of likes and dislikes,

it's actually maybe more of the ratio of likes to dislikes

or maybe it's how early in the video did they comment

or engage with it?

Or maybe it's how long into the video did they wait

before engaging, right?

So the algorithm continues to evolve over time

and it gets defined in various ways,

like, oh, YouTube doesn't like edgy videos anymore,

I guess, but it's more just they got better

at defining what a good video is.

So if you're trying to be a creator,

getting ahead of the new waves,

I would just think of it as trying to get ahead of

how will YouTube define a good video?

Yeah, it's really interesting.

I was talking the other day to this guy who I think

is the best video games critic in the world.

He has a YouTube channel called Skill Up, his name is Ralph.

And I was sort of asking him a similar question

about building and growing a YouTube channel.

And in particular, how much are you worried

about the algorithm, the meta, all that?

And Ralph just sort of waved it away

and he just said, honestly, just make good videos,

which honestly is exactly what I want to hear, right?

And it's like, what I want to be true for you, Marques,

and like for us, it's just like sort of show up

and do something well, but it also feels a little bit

too good to be true.

But at the same time, like I am willing to just sort of take

it from you that if you make good videos,

the audience will show up.

And you can define to yourself like what a good video is.

And obviously YouTube will have one definition; yours

might be a little bit different. But for a tech channel

like mine, for example, I'm trying to provide value,

be entertaining and deliver the truth.

Maybe those are my three pillars.

And if I do all those things, then people

will be happy with the video.

And ideally they engage with the video in a way

that tells YouTube that it's a good video.

So as long as I keep making what I think is a good video,

hopefully YouTube also still thinks it's a good video.

Well, do we wanna maybe shift to start talking about

some of the tech that Marques is fascinated with right now?

Yes, although I have one more question about YouTube.

Should we start a feud with another YouTuber?

And if so, who should that be for maximum views?

You know, boxing matches are kind of all the rage right now.

That's fascinating.

You know, it depends on how much smoke you want.

I would like to box the cast of The Vergecast.

That could work.

That's actually not the worst idea.

You might need an extra person just because you gotta

be evenly matched at some point.

No, I think, no.

Kevin and I could take all three of them.

I've got about a foot on Nilay

and he will be hearing from our promoter.

Yeah, if you're listening, Vergecast, we're coming for you.

Might be the move.

It's on.

Meet us in the octagon.

All right, let's talk about some tech.

So you have reviewed basically every gadget

that has mattered over the past 15 years.

There's been so much out there.

But I feel like the smartphone ecosystem in particular

has really been kind of stagnant.

Like most advice that I see when a new iPhone or Pixel

or Samsung device comes out is like, you know,

it's good enough.

Like just buy the latest or the second-to-latest,

you know, edition of one of these phones.

I'm curious, like, do smartphones

feel like an exciting space still for you

or are you kind of looking ahead

to what kind of device might be next?

Well, OK, so smartphones are fun

because I like a good high-end smartphone.

But I also feel like they're clearly mature, at least

like the classic slab phone.

Well, you do have folding phones, and that's pretty wild.

And that's kind of coming up new.

But the way I think about it with tech is,

you have that early adoption curve of tech exploding,

early adopters buying in, and then it sort of flattens out

and stops improving as much as you'd hoped.

Smartphones are like here.

Like the iPhone 15 is a little better than the 14, which

is a little better than the 13.

But we had that explosion at the beginning, which

is really, really exciting.

And I think every piece of tech is at a different point

somewhere on this curve.

And we're always trying to figure out what the curve is

going to look like for some future tech.

I think electric cars are right at the beginning.

We clearly have, like, a lot of interesting first gen ones,

and we're going to watch them over the coming years

as they get really, really good.

I think AR, VR stuff is also pretty classic.

I don't know if that's the sequel to smartphones,

but we're also in this, like, early adopter curve part of that.

I'm still interested in smartphones,

because I think they're really, really advanced pieces of tech.

But I also am very interested in the things

that are in the early part of their curve,

because they're going to be fun to watch.

You put out a video last week talking about mixed reality,

this idea that we'll have kind of digital interfaces that

just appear on top of our own environment.

Do you think that that form factor, the sort of headset

that you wear that has passthrough,

maybe that'll come in a headset?

Maybe that'll be more like glasses?

Maybe it'll be like smart contacts or something

like that. What form do you think most people will experience

this stuff for the first time in?

I mean, it's hard to say.

I mean, my theory from that video

was that smart glasses are kind of starting

on one end of the spectrum,

and VR headsets are starting on the other end,

and they both feel like they have the same goal,

which is to get you to a point where you wear

something inconspicuous on your face

and it augments your reality in some way.

Smart glasses, they don't really work if they look dumb,

so they have to keep looking like normal glasses,

so they just keep fitting as much tech as possible

in normal looking glasses.

And at this point, it's just like a little computer,

a little battery, a camera, a mic, and a speaker.

And they're gonna keep trying to add to that

over and over again,

until they get to a special computer on your face.

That feels a little harder than the other side,

which is the VR headset,

which every year is shrinking and getting smaller and smaller

and lighter, with better and better passthrough,

until eventually you get to the goal

of looking right through it and it augmenting your reality,

and you just got this thing on your face.

And it's gotta get to the point of looking

like a normal thing to wear on your face.

So both of those are tough.

If I was betting, I'd probably put money

on the smart glasses actually being

most people's less reluctant purchase.

It feels like if it looks like regular glasses,

it's not as hard to convince you to try it,

but it feels like they're trying to do the same thing.

So I'm watching both.

Also, like these things change.

You know, AirPods did not look cool when they came out.

People made fun of the way they looked, right?

And now everyone wears AirPods

and nobody thinks twice about it.

So I think you could sort of never underestimate

how quickly people's feelings about this stuff can change.

Yeah.

What about AI hardware?

We talked a little bit on this show a few weeks ago

about all the companies that are trying to sort of make devices

that are specifically built for generative AI,

whether it's a pin or a pendant or smart glasses

that have AI built into them.

What do you think the killer hardware product

for this kind of AI is likely to be?

It feels like it's whatever is as discreet as possible,

honestly.

You talked about smartphones earlier,

like the fact that everyone is always on their phone

all the time, I kind of have a hard time remembering

what it was like before everyone was on their phone

all the time.

Like you see those old pictures of like basketball games

where like everyone's just watching the game.

We read books.

Yeah.

There were these things called books.

Yeah, libraries.

It's tough to remember those times.

Yeah.

So I see, so everyone's got the,

everyone's got their phones out now.

Everyone's looking at their phones all the time.

And that's like kind of how we see things.

It's just as hard for me to picture a next thing

where everyone's on a new device all the time

when we just leave our phones behind.

But if they don't take up too much extra mind space,

if they're just a thing you, maybe it's a clip.

I don't know.

Maybe it's on your glasses that you already wear.

Maybe that's the best way of sneaking it into it,

being a functional part of your life.

But yeah, it's tough to say.

I think we are going earbuds.

I think we're going full Samantha from Her.

I think that is where this technology is going to head.

Did you see Mrs. Davis this year?

No.

This is sort of another, it was a Peacock original,

I believe, super good.

And the premise is basically that the entire world

is connected through earbuds and speaks to an all knowing AI.

That's interesting.

Marques, last question.

We wanted to end by making some predictions.

So where do you think we'll see things going?

Let's just say in the next year or two,

what excites you right now when you look at the world of tech?

So the next two years, I think you can safely bet on them

being the most exciting years for VR and AR headsets

and for electric cars.

Those are like the two emerging technologies

that I see being super, super interesting.

Electric cars, first of all, because the battery tech

and all of the tech gets so good, so fast

that the cars that come out in two years

are going to make today's look terrible.

So that's great.

And then of course, when you get these headsets,

when Apple dives in, you know it's about to take off.

I think a lot of these companies trying to be innovative

and be the first mass market VR or AR product

is going to be really interesting to watch,

especially as the smart glasses kind of pop off

at the same time.

So those are the two things in the next two years

that I would keep an eye on.

Got it.

So don't buy an electric car for two years.

That's what I'm taking away from this.

Or just buy one and trade it in and get the newer one.

Wow, listen to Mr. Fancy.

Or at least.

At least it's a great option.

Yeah.

Well, Marques, thank you so much for coming on Hard Fork.

Really great to talk to you.

And yeah, we'll see you on YouTube.

Yeah, for sure.

Thanks for having me on, guys.

When we come back, AI image generators and Casey's

experiments using DALL-E 3 to make Bulldog mad scientists.

I'm trying to find a better image for you.

So Casey, when we talk about AI on this show,

which we do once or twice, we spend usually most of our time

talking about text generators like ChatGPT and Bard, et cetera.

But we've really been sleeping on, I would say, image generators.

And we talked about them a little bit last year

when DALL-E came out and Stable Diffusion and all these tools.

But then I really feel like we didn't hear much.

But you recently spent a bunch of time with DALL-E 3.

Tell me about that.

Yeah.

Yeah, so I'm somebody who was very interested in these text

to image generators right when they came out.

They came out before ChatGPT.

And to me, it just sort of felt like magic.

You would type Bulldog in a firefighter costume,

and then suddenly it would materialize.

And it was just truly delightful to me.

I was using DALL-E, which is OpenAI's version of this product.

But then, as you mentioned, a bunch of other ones came out.

There was Stable Diffusion; Midjourney came out.

At the same time, though, ChatGPT had also come out.

And I thought, well, I got to go figure out that.

So I kind of took my eye off the ball.

But then DALL-E 3 came out.

And as you say, I had a chance to spend some time with it.

And the pace of improvement there is really something.

Yeah, so DALL-E 3 is the latest version of OpenAI's image

generator.

They officially released it last week through ChatGPT Plus.

And if you pay for ChatGPT or if you're an enterprise customer,

you can now use it.

Previously, it was available to a small group of beta testers.

And you can also access it on the new Bing.

And that's important because the Bing image creator is free.

So if you create a Microsoft account,

you can use DALL-E 3.

Right.

So tell me about some of the experiments

that you've been running.

OK, well, so I guess I should just probably pull up

my little Dolly folder here.

Let me pull up something I made last year in DALL-E 2,

which came out last year.

And Kevin, can you see this?

Yes, this is a series of looks like monkeys

in firefighter outfits.

That's right.

And the prompt for this was a smiling monkey dressed

as a firefighter digital art.

And at the time, DALL-E 2 would make you 10 images,

though it no longer makes that many.

But I think that these monkeys look pretty good.

I think you can notice that the faces are in some somewhat weird

shapes.

There is some blurriness around the edges here.

They all kind of look like slightly melted candle

versions of the thing that they are trying to be.

And then last week, I used the same prompt in DALL-E 3.

One of them looks almost like photorealistic,

like a person in a monkey costume who's also

in a firefighter outfit.

One looks like kind of like a 2D cartoon.

Yeah, they're just sort of very different visual styles.

So what is happening under the hood here?

So under the hood, DALL-E is rewriting the prompt.

So the prompt for this one is photo of a cheerful monkey

in firefighter gear wearing protective boots

and holding a firefighter's axe.

It's standing next to a fire truck,

and then sort of goes on to describe other things.

So you put in just like monkey firefighter.

I used the same prompt that I used for DALL-E 2.

And it sort of used its AI language model

to expand on that prompt and make it into a much more

elaborate prompt and then render that prompt,

rather than the thing that you would actually put in.

That's right.

And so you just wind up making

these much more creative images.

And it can be quite fun to see what DALL-E,

combined with ChatGPT, is going to make out of your input.

That's really interesting,

because I remember when Midjourney came out

and I would go into the Midjourney Discord server,

and there were all these amateur prompt engineers

who would just be putting in these very elaborate long prompts

with all these keywords they had discovered

to make their images look better.

So what you're essentially saying is

that doesn't matter much anymore

because the program is gonna rewrite your prompt

to be better anyway.

Exactly, and it'll be in a bunch of different styles.

Maybe one of them will be photorealistic.

The other one will be an illustration

from a style of the 1940s.

And it will just kind of throw a bunch of stuff at you.

And a benefit of this is it just teaches you

about what the model can do.

I think AI has a problem with these missing user interfaces,

where for the most part,

they just give you a blank box to type in,

and then it's up to you to figure out

what it might be able to do.

This is one of the first sort of product design decisions

that says, oh, we're actually just gonna make

a bunch of suggestions on your behalf,

and that over time will teach you what we can do.

Can you say, like, don't adjust my prompt?

Can you just say, like, actually render what I put in,

or does it always automatically rewrite your prompt

to be longer and more elaborate?

By default, it will write a longer prompt

if you've written a short one.

If you write a longer prompt, it will just show you that.

I have had some luck with saying, like,

make this exact image,

and then it will sort of do less editing.

And so, you know, if that's the experience you want,

you can have it.

I've just been sort of continually delighted

by the rewriting it does.

In fact, can I just show you some of these images

that are, yeah.

So, like, one of the first images I made last year

was like a Bulldog Mad Scientist,

and it gave me some, like, pretty good Bulldog Mad Scientist,

but had all the same problems kind of that the monkeys did.

And then I used DALL-E 3 to make a Bulldog Mad Scientist,

and I thought the results were just, like,

kind of mind-blowingly good.

That's pretty good.

Like, they're incredibly rich with detail.

They're very colorful.

Like, I could see this on the cover

of Bulldog Mad Scientist magazine,

and you might not even know that it was AI-generated.

And the prompt used was literally just Bulldog Mad Scientist?

It was not very much longer than that,

but then ChatGPT rewrote it to, you know,

talk about the colors and the lighting

and the style and all of that.

And, you know, I will say that, like,

this kind of thing might not have

a lot of immediate practical applications.

This is one of the reasons why we have not been talking

about these image generators as much,

because, you know, unless you're in some sort of field

where you have to constantly generate images

or you just, like, enjoy being creative,

or maybe this is a fun thing that you do with your kids,

you're probably not gonna have a lot of reason

to use DALL-E 3,

but I think that that has blinded us to something,

which is that it's very hard to understand

the improvement in language models

because it's basically just a feeling, right?

Why is GPT-3.5 not as good as GPT-4?

Well, I don't know, just use GPT-4 for a little while

and you'll know what I mean.

Totally.

When you use DALL-E 3 and you compare it to DALL-E 2,

you can see the progress that we have made

in the last 18 months.

And it is extraordinary.

So my case for using one of these text-to-image generators

that has one of the latest models

is this will help you begin to understand

how fast AI is evolving.

That's interesting.

I think there's another reason though,

why, as cool as DALL-E 3 is,

it is not really ready to be

a professional media creation tool.

And that's just because the rules

are very hard to understand.

What do you mean?

Like most AI developers that are responsible,

OpenAI has done a lot of work

to prevent this thing from being misused, right?

We don't want it to be generating

infinite deep fakes of the Pope, for example.

You may remember the-

Pope coat.

The Pope coat from earlier this year.

We don't wanna create a bunch of photorealistic images

of world leaders in sort of crazy situations

that could, I don't know, affect the stock market

or put us at risk of war, that sort of thing.

And so, DALL-E has a bunch of rules around it

and you can read the content policy

and it will tell you, like, you know,

don't make art of public figures

or like-

Can't do nudity, can't do-

Yeah, no nudity, that sort of thing.

But in practice, you may go to use this thing

and you will just be getting flags

for reasons that would surprise you.

Like I tried to make a teddy bear noir,

like sort of a teddy bear sitting in a detective office,

meeting a new client, I think was basically my prompt.

And DALL-E 3 returned three images

and then it said that the fourth of the images

that it had generated had violated its content policy.

Why?

Well, it didn't tell me.

And that is the case with most of these things

is that when you break the rules, it doesn't tell you why.

Of course, there's something very funny

about a teddy bear detective violating a content policy.

It's even funnier that DALL-E generated the image.

Right, you wrote the prompt that violates your policy.

Yeah, I mean, you know, so I wrote about

like a teddy bear detective meeting a new client.

You know, maybe it was rewritten to be like, you know,

and this new client's like a very hot teddy bear, you know,

wearing a very sort of revealing teddy bear outfit.

And then the, maybe it was like a teddy,

like a piece of lingerie, it was wearing a teddy.

It could be something like that.

So, you know, the point is just that you don't know.

Another issue I've had is that like, you know,

something I have done in my own newsletter

is I will take the logo of a company that I write about

and I will create some sort of image around that, you know,

it's like, show me the company logo in a courtroom,

for example.

Well, DALL-E 2 would do this and DALL-E 3 would not.

There are probably some good reasons for that, you know,

but on the other hand, I'm like,

I feel like these models should enable commentary

about public corporations, you know,

now maybe if people were using it to like mimic the logo

in a way that they could commit fraud and abuse,

like that would be a problem.

But again, if you're just looking to use this

for sort of like everyday use,

I think you're going to be surprised

at how often you run into the censor,

which for what it's worth is like not what you expect

when you're talking about like a brand new tool.

Usually like the safety protections aren't there.

We always talk about the Wild West days of new technology.

There is kind of not a Wild West, at least with DALL-E,

it feels actually much more restrictive

than I would have guessed.

And do you think that's because they're scared

of like copyright lawsuits?

Like I was envisioning like the Disney corporation's response,

if you're allowed to put Mickey Mouse

like in a suggestive pose, they are going to freak out

and that is going to be a huge problem for open AI.

So do you think that's the kind of threat

that they're trying to sort of avert

by putting these very strict filters on?

I mean, I'm sure that that is part of it.

We know there's a lot of legal attention

on these models already.

And like you remember the issue

that Twitter went through last year

when it had all those brand impersonations,

if open AI triggered some sort of similar thing

where people use DALL-E to create an image of Eli Lilly

saying insulin is free,

maybe that causes a major problem for them.

I don't want to be the person saying like

they need to get rid of all of these ridiculous rules.

But on the other hand, I do think they need to sort of

do a better job educating users about what is allowed

and if I broke a rule, like tell me what it is.

Right.

Now for the sort of tests that you've done with DALL-E 3,

were there other things that struck you

as being noticeably different

than previous image generators you had used?

I mean, one thing is just that everyone on DALL-E 3

is really hot.

What do you mean?

Well, for all of the rules they have

against like sexual imagery,

if you just try to create like a normal image,

you might be shocked at how hot the people are

who get returned to you in response.

And I should say the Atlantic actually wrote

an article about this just this week, which is worth reviewing.

But you know, I just put in what I thought

was a fairly innocuous prompt this morning,

handsome dad barbecuing on the 4th of July in his backyard.

Okay.

And gave you a picture of me.

That's weird.

You wish, bro.

And Kevin, I would like you to describe

this fourth image that DALL-E generated.

So this is a very ripped, like mega-chad

with like an eight-pack and bulging biceps.

Shirtless.

Shirtless grilling, what appear to be steaks

with a dog behind him and a picnic table and a tree.

This is like the caliber of like a romance novel cover.

Yes.

This is Fabio on the cover of a romance novel.

Yeah.

And so there's like this really interesting discussion

about how like, why is this the case?

And it's, you know, these images

do some sort of reversion to the mean.

And so it winds up showing you just kind of a lot

of like very symmetrical faces.

And of course, symmetrical faces are associated with like-

But that's not the mean.

This is reversion to the hot.

Yes. So I understand part of it.

And then I don't understand part of it.

I mean, this dad has a shirt on,

but this is also an incredibly hot dad.

Yeah, that's a hot dad.

Yeah. So there are so many hot dads.

Zaddy, is that what you'd call that?

It's giving Zaddy, okay.

Okay.

So, you know, if you want to make somebody

who doesn't look like the most conventionally

attractive person in the entire world,

you're going to have trouble with DALL-E.

We need better representation for Uggos and AI art.

So I have some questions for you about this.

So one of them is, do you think that, you know,

you use AI image generators in your newsletter.

Like you use them,

I would say more than most people I know as part of your work.

And one of the knocks on AI image generators that you hear

is like, this is not actually like making people more creative.

It's just replacing labor, right?

It's just, it's sort of like a way to avoid having to hire

an illustrator or a graphic designer and pay them

to make something for you.

So do you find when you use AI to generate images

for your newsletter,

do you find that it is actually improving your creative process

and your creative product?

Or do you think it's just like saving you time

and labor and cost?

I think the thing that I enjoy about it

is the way that it makes me feel creative.

I am something of a failed artist.

When I was a kid, I would draw my own comic books

and there was just kind of a pretty early point

where I just stopped getting better.

And I still kind of enjoyed the art

but I just never really got there.

All of a sudden this tool comes along

where you can summon a pretty amazing image

just by typing in a few keywords.

And if you want, you can get creative with the keywords, right?

You can sort of become your own little prompt engineer.

And as somebody who had always wanted to be good at art

but never was, there was something about that

that I really enjoyed.

Now, before I started using this

for some of my newsletters, I had other tools.

My newsletter is on Substack.

Through them I have a license with Getty Images.

So Getty makes sure the creators are getting paid

for their images.

I also use like free stock image sites

which are just sort of set up

for exactly the sort of use that I'm doing.

And if tools like DALL-E were to go away tomorrow,

I could just go back to using those and it would be fine.

Of course, some people say like,

well, why don't you hire an illustrator?

I think that's an amazing thing to do.

Usually I'm writing on very tight deadlines

where I might not know until like noon

what I'm writing about.

And then my column comes out a few hours later.

It's a pretty quick turnaround time

to get a good illustration, right?

But that's not to say that I could never do it.

So I think the discussion here is really good.

I think there are some interesting ethical questions

around this stuff, but I wanna kind of dive into them

because something else I believe is like,

it's good to put creative tools in the world

and make people feel creative.

Yeah, I mean, the other big knock

that you hear against AI image generators

is about the way that they are trained, right?

On lots of copyrighted images.

And we talked about this guy, Greg Rutkowski on the show

who's like this illustrator who was sort of horrified

to learn that people were using AI image generators

to make things in the style of his art, which he feared,

I think reasonably could like actually cut into his ability

to earn a living making said art.

So have there been any attempts to address that problem

of like either copyrighted images being used

by the AI image generators in the training process

or of people being able to use them

to imitate the styles of living working artists?

Yeah, so OpenAI did two things in this regard.

The first is if you are a living artist

and you don't want any of your future art

to be trained on future models, you can opt out.

I guess through their website,

you can just sort of say, hey, take me out of this thing.

Of course, by this point,

they probably have enough of your images

to be able to replicate your style anyway.

So I don't know how much good that winds up doing anybody.

So what happens if you ask for a Greg Rutkowski style thing

in DALL-E 3 now?

So this is the second thing that has happened,

which is that they now bar searches for living artists.

Really?

They give you what is called a refusal.

This is by the way,

this is a hot new frontier in content moderation

is like the idea that you ask a platform for something

and it just says absolutely not.

And so a big way that DALL-E winds up

preventing misuse is just by refusing to do things.

And one of the big things it will now refuse to do

is if, and I tried this by the way,

I said, show me a dragon in the style of Greg Rutkowski.

And here it actually did a good job

of telling me what I got wrong.

I believe what it said was

that's a little too recent for us

by which I think it means Greg Rutkowski is still alive.

And can sue us.

And can sue us.

But it offered to show me something like

a dragon in a sort of contemporary art style

or something like that.

And then it showed me a bunch of dragons that,

well, I don't know Greg's work well enough

to know how Greg-like they were,

but OpenAI decided that they passed the test.

Do you think this will pacify artists?

Do you think artists are gonna see these refusals

and some of these steps in this opt out system

and go, okay, well, I'm cool with AI image generators now?

No, I think anybody who has had their work used

in the training of an AI model is gonna find themselves

potentially a party to a class action lawsuit at some point.

And I think that will probably be true of these models.

And that is just a fight that we should have, right?

I think there are arguments on both sides for,

hey, you took my labor and you created a valuable thing

and then you're making a bunch of money from it.

Like I deserve my cut.

I think that's a reasonable argument.

And I think you can also say,

well, there are actually no copyright issues at play here

because we're not copying any of your images.

We just simply took one input

and then we made something completely different

and we have no legal obligation to give you any money.

That is essentially the case

that these open eye developers are making.

So we have to have that fight in court.

I don't know how that's gonna play out.

You know what?

I bet we'll be talking about it on this show.

Yeah, totally.

So there's actually another way that artists

are starting to respond to the sort of popularization

of AI image generators, which is not with lawsuits,

but with something called data poisoning,

which I wanna talk about

because there was an interesting story this week

in MIT Tech Review about some people

who are trying to actually bring more power to artists

when it comes to generative AI

by designing a tool that actually spoils

the results of AI image generators.

This tool is called Nightshade.

It was developed by a team led by a professor

at the University of Chicago named Ben Zhao.

And basically the way this tool works

according to this article is that it lets artists

add invisible changes to the pixels in their art

before they upload it onto the internet.

And then if these images are used to train

an AI image generator, those little pixel changes

will sort of manipulate the machine learning model

so that it misunderstands what it's seeing.

So like, you know, you can basically make an image

of like a handbag look like a toaster to an AI model.

They call these poison samples.

And the researchers basically found

that even with a fairly small number

of these poison samples, some of the AI models

would start to put out weird images.

They tested this on Stable Diffusion's latest models

and also on a model that they trained from scratch.

And at least in the case of stable diffusion,

even like 300 so-called poisoned images

could start to change the outputs.

Which when you think about it is kind of surprising

since Stable Diffusion's models

are trained on billions of images.

So did you see this article?

What did you think of it?

I did, I think it's very interesting.

I think we should do more of this kind of research.

I also have to say I was fairly skeptical

about the claims that it was making.

Something else that OpenAI was telling me last week

when I was talking to a research scientist there

was that they are training a classifier

to recognize images created by DALL-E 3 when it sees them.

And the way it's doing this is it's feeding a model,

tons and tons of images created by DALL-E 3

and tons and tons of images

that were not created by DALL-E 3.

You show the model these images enough times

and OpenAI says that it can now detect

with 99% accuracy

what was made by DALL-E 3 and what was not, right?

That's kind of mind blowing to me.

The way that companies like Adobe have been pursuing this

has been to put something in the metadata of these images

that could indicate it was created by AI.

But these have some obvious flaws.

Starting with the fact that if you screenshot the image

you immediately strip out all of the metadata

and all of a sudden we don't know where it came from, right?

So the OpenAI approach seems much more technologically

sophisticated and if it works,

maybe it helps us solve this problem

where we will just have technology that scans images

and says, like, oh, I know where that came from.

So how does this connect back to what you just said?

Well, it's like if we have a system that can detect

with 99% accuracy, if something was just made by DALL-E 3,

what are the odds that artists putting

some of these poisoned images on the web

are going to trick these systems over the long run?

What do you think?

Well, I think that there will be this kind of cat

and mouse game between the platforms and the users

and the artists trying to sort of sabotage the platforms.

Like I think that, you know, many of these companies

will just find ways to sort of ignore those pixels.

Like I don't think this is probably a lasting solution

but it does speak to just how pissed off people are

about these AI image generators.

And I imagine as someone who uses these things every day

in your work that you get a lot of people

criticizing you for that.

So I do get, I have gotten, I should say,

a handful of emails from readers who would say like,

hey, why are you using this stuff?

Like I don't like it.

And I always like thank them for the messages.

I want to have that conversation.

I think there's a good case to be made.

And in fact, I am going to explore using

maybe the Adobe Firefly model,

which uses licensed images.

And what they have said is that

if you are using the work of one artist in particular,

like maybe there is a kind of an equivalent

of a Greg Rutkowski on the Firefly platform,

they are going to pay bonuses to artists

based on how many images they have

in Adobe's training dataset

and the commercial value of those images.

That seems like a really good and ethical system.

And I think more companies should explore something like that.

I think it would really lower the temperature

on this discussion and would let people

who want to use these tools to feel creative

feel better about using them.

Totally.

All right, Casey, thank you for your tour

of AI Art Generators.

We're sort of the Bob Ross of the modern moment,

you know, paint with us using your keyboard.

Little happy blue.

Mm, love that.

Hard Fork is produced by Davis Land and Rachel Cohn.

We had help this week from Emily Lang.

We're edited by Jen Poyant.

This episode was fact-checked by Caitlin Love.

Today's show was engineered by Alyssa Moxley.

Original music by Rowan Niemisto and Dan Powell.

Our audience editor is Nell Gallogly.

Video production by Ryan Manning and Dylan Bergeson.

Special thanks to Paula Szuchman,

Pui-Wing Tam, Kate LoPresti and Jeffrey Miranda.

You can email us at hardfork at nytimes.com

with your greatest DALL-E creations.

Machine-generated transcript that may contain inaccuracies.

Dozens of state attorneys general have sued Meta, alleging the company knowingly created features that induce “extended, addictive, and compulsive social media use” among teenagers and children. In a country without wide-reaching internet regulations, are lawsuits the way to rein in tech companies? 

Then, for our first episode on YouTube, we talk with YouTuber and tech reviewer Marques Brownlee about how the platform has changed, and the future tech he’s excited about. 

And finally, A.I. image generators are getting scary good. Casey tells us what he’s been using them for. 

Today’s guest:

Marques Brownlee is a YouTuber who covers tech.

Additional reading: 

Meta is accused of using features to lure children to Instagram and Facebook.

Subscribe to Hard Fork on YouTube.

The latest A.I. image generators show how quickly the tech is advancing.