Plain English with Derek Thompson: Why There Is So Much Bullshit in Science

The Ringer - 1/11/23 - 59m - PDF Transcript


Today's episode is about a big problem in science.

In the last few years during the pandemic, liberals and conservatives have fought over

the concept of science.

That is science with a capital S. You've got conservatives saying that Anthony Fauci just

wants to imprison them and make America a totalitarian dictatorship.

You've got liberals putting signs in their front lawn saying they believe in science,

they follow the science with a capital S. But while these groups have been squabbling

over the politics of science, something more important has been happening under the hood.

Science has been slowing down.

This all-important engine of progress, of health, has been sputtering.

And the really scary thing is nobody is entirely sure why.

So we should be living in a golden age of creativity in science and technology.

We know more about the universe and ourselves than we did in any other period in history.

We have easier access to superior research tools.

Our pace of discovery should be accelerating.

But the opposite is happening.

As one group of researchers at Stanford University put it, quote, everywhere we look, we find

that ideas are getting harder to find.

Another paper found that, quote, scientific knowledge has been in clear secular decline

since the early 1970s, end quote.

And yet another paper concluded that new ideas no longer fuel economic growth the way they

once did.

This really concerns me.

As regular listeners of this podcast know, I am obsessed with this topic of why it seems

like in industries as different as music and film and physics, new ideas are losing ground.

It is harder to sell an original script or original movie to the American public.

It is harder to make an original hit song.

And it is harder to publish a groundbreaking paper.

And while these trends are not the same thing, I repeat, they are not the same thing.

They do rhyme in a way that I can't stop thinking about.

How did we build a world where new ideas are so endangered?

This year, a study titled Papers and Patents Are Becoming Less Disruptive Over Time inches us closer to an explanation of why this is all happening.

The upshot is that any given paper today is much less likely to become influential than

a paper in that same field from several decades ago.

Disruption, progress, is slowing down, and it's not just happening in one or two places; it is happening across the landscape of science and tech.

Today I speak to one of the study's co-authors, Russell Funk.

He's a professor at the Carlson School of Management at the University of Minnesota.

We talk about the decline of progress in science, why it matters, why it's happening, and we

give special attention to a particular theory of mine, which is that the incentive structure

of modern science encourages too much research that basically doesn't serve any purpose

except to get published.

In other words, science has a bullshit paper problem.

And because science is the wellspring from which all progress flows, its crap paper problem

is our problem.

I'm Derek Thompson.

This is plain English.

Professor Russell Funk, welcome to the podcast.

It's great to be here.

Thank you for having me.

So progress in science seems to be slowing down.

Why should we pay attention to that?

Well, I think there are a couple of reasons.

If you look back over the past couple of hundred years, but especially the past hundred years or so, a lot of the great improvements in human life, from health to technology to education and so forth, have originated with scientific research and scientific breakthroughs.

And so just a lot of what makes the world that we know and live in today, what it is,

has been from scientific progress and important scientific discoveries.

There's another factor as well, which is that scientific progress is also very closely

tied up with economic growth.

And so scientists make discoveries in their laboratories doing research, which then often

serve as the seeds for new technologies.

And you think of things like the internet or space exploration and so forth.

All of those had their origins with scientific breakthroughs.

And then they serve as the basis of new technologies, which create new industries, new job opportunities,

and so forth.

And so to the extent that science is slowing down, it may be one important contributing factor in the slowing rates of economic growth that we're seeing.

And when people say that science is slowing down, what are they actually talking about?

How can we actually measure the degree to which disruptive science or important breakthroughs

are slowing down over time?

Well, it's a really good question.

And one that, since the paper has come out, has generated a lot of discussion and debate

about how do you really measure scientific progress and how do you really identify breakthroughs?

And in some sense, it's hard to do because it's a bit subjective.

And how do you quantify things that are very different, like the invention of the integrated circuit and the discovery of DNA?

They're both important, but how do you put them on a scale and different people have

different views?

So it's something that we often have to measure kind of indirectly, especially if we want

to look at it on a large scale.

And so people have used quite a few different metrics to look at this.

One that I think that's kind of interesting and that's just easy to wrap your head around

looks at Nobel Prizes and the discoveries that are awarded Nobel Prizes.

And so Nobel Prizes are essentially universally recognized important breakthroughs.

And so some things that people have done is look at the gap between the date of discovery

for a certain Nobel Prize winning breakthrough and when the prize is awarded.

And what they found is that that gap is increasing over time, such that in recent decades, the

prizes are going to discoveries that were made 20, 30, 40 years previously, which kind

of suggests that the breakthroughs that are made today are not seen as being quite as

significant as the ones that were made before.

There are other ways of looking at this as well.

So you can look at things like citation patterns in science. Citations are very important in scientific papers.

Science is usually seen as kind of a cumulative endeavor.

And so researchers build off of the findings of prior work and they indicate their kind

of indebtedness to prior work by making citations and including reference lists in their papers.

And so you can look at things like when were the papers that researchers today are citing

written.

And if you look across a lot of different fields, you'll see that the age of the most

cited papers is increasing, which suggests that the kind of older foundational knowledge

or the things that people are building off of most is this older knowledge and older

information and they're citing less of the newer stuff, which again could be seen as

some kind of indicator of significance.

I'll give just one more example that has been kind of established in other research and

then I could talk about what we did in our paper.

But there are a lot of theories that people have developed in the philosophy of science, the economics of science, the sociology of science, and so forth that try to understand what innovation is and what discovery is.

And one of the biggest theories is this idea of recombination.

So most of the things that we think of as new and innovative aren't just cut new from whole cloth; they're kind of combinations of existing things.

And so the way that we get new stuff is by taking stuff that's out there and putting

it together in new configurations.

You think of like the iPhone, none of the components of the iPhone were really new.

We had digital cameras, we had touch screens and so forth, but what made it so novel and

valuable was that it brought all these together in a way that was really easy to use and so

forth.

And so the same thing goes for a lot of scientific discoveries.

We want to look at whether or not scientists are exploring new things and putting together

new combinations.

And so people have tracked over time the extent to which that's happening.

So are scientists taking ideas from disparate fields that haven't been thought about together

before and are they exploring them or are they in the world of invention, are inventors

taking different technologies that haven't been brought together before.

And if you look at that, you see there's these dramatic decreases in that over time as well.

And there's a lot of other indicators that people have looked at, but those are just

some of the highlights.

There was so much there and I'm not, I promise I am not going to try to recast every single

thing that you say in some kind of entertainment or sports metaphor, but I think it is useful

to just go over very, very quickly everything you pointed out.

You said there's three buckets of information that tell us that progress in science is slowing

down.

Number one, Nobel Prizes; number two, citations; and number three, this slowdown in recombinatory innovation, as you said.

For the first two, the way that I kind of think of it is like with Nobel prizes, these

are awards.

So it's like with the Academy Awards.

Imagine if the Academy Awards shifted away from honoring movies that were made in the last year, toward the bulk of the awards being honorary awards. It would suggest that most of the great achievements in movies happened decades ago rather than in the last 12 months.

There's not a literal way for the Academy Awards to do that right now, but imagine if

they did, that would be a reflection of the fact that the Academy itself believes that

the bulk of genius work is decades old rather than months old.

That's something that's happening in the Nobel Prize space.

In citations, what you're saying is it's hard to write a paper that is essentially among

the 1% most cited papers.

It's harder to make a hit.

And this is also something the entertainment industry has seen ironically.

In music, it has become much harder to create a hit because there are simply so many new songs being written that it's harder for any one song to have a shot at being a 1 percent, or 0.1 percent, hit.

So awards are going to older work and it's harder to write a hit paper.

These are some ideas that suggest that progress in science is slowing down.

Let's jump into your paper.

Give me the executive summary.

What did you say here that was actually new?

So as you can see from the summary that I've given you so far, and as I said, that's just

a small sampling of the various indicators that people have used to look at progress.

There's a lot of different metrics and a lot of them are pretty field specific.

So a couple of examples of other things I didn't mention were Moore's Law is something

that a lot of people pay attention to and it basically concerns the packing of transistors

on an integrated circuit, so scaling down the size of our computers and processors.

And the prediction is that the number of transistors doubles roughly every two years.

So that's a law that's, I think, surprised everybody because it's continued for a really

long period of time.

But there's signs that that's started slowing down and you see that in a few ways.

One is that it requires more and more researchers and investment to make incrementally smaller improvements and push us forward.

There's other things as well.

If you look at crop yields per acre, if you look at improvements in health and longevity

that come from research efforts and investments and so forth.

But so there's all these different metrics that we see across different fields.

And so those are great and they're obviously very important metrics within fields, but

it's harder to get a sense than of the big picture.

We don't really know if this is something that holds across a lot of different fields of science and technology, or if maybe, just by chance, we've been looking at a subset of fields where this is happening.

And if we looked at newer fields or other places, we'd actually see progress going up.

And so a big part of what we wanted to do in this paper was to take this 30,000-foot overview, where we could use a common metric that would be, in principle, meaningful across a lot of different fields of science and technology, and see, first of all, whether on this metric we could replicate what prior studies had said about progress going down, and then also look to see if it was happening, was it going at similar rates

across different fields, was it slowing down at similar times, and then the idea would

be that that would give us some information or at least start to think about what might

be some of the different causes.

Right. I believe the metric that you use is called the Consolidation-Disruption Index, or the CD index, and in the article that I just wrote for the Atlantic based on this paper, I said it's kind of like this: if I write a really crappy literature review that no scientist ever references because it's just so damn basic, my index would be very low.

But if I look at a bunch of findings from quantum mechanics or something, and I come

up with this law of quantum gravity that is so mind-blowingly genius that people only

cite my paper and don't even bother with the citations that I reference, that would give

me a very high CD index.

And what you're finding is that across all of these domains, across science and technology,

progress is plunging as seen by the fact that the CD index is going down.

People are publishing less disruptive work.
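To make the metric concrete, here is a minimal sketch of how a CD-style score can be computed for a single focal paper. This is illustrative only, not the authors' implementation; the function name and the toy paper IDs are hypothetical. It follows the scoring rule described above: later papers that cite the focal paper while skipping its references push the score toward +1 (disruptive), and later papers that cite the focal paper together with its references push it toward -1 (consolidating).

```python
def cd_index(cites_focal, cites_refs):
    """Toy CD index for one focal paper.

    cites_focal: set of later papers that cite the focal paper.
    cites_refs:  set of later papers that cite any of the focal
                 paper's own references.
    Returns a score in [-1, 1].
    """
    forward = cites_focal | cites_refs  # all relevant later papers
    if not forward:
        return 0.0
    total = 0
    for paper in forward:
        f = 1 if paper in cites_focal else 0  # cites the focal paper
        b = 1 if paper in cites_refs else 0   # cites its references
        total += -2 * f * b + f  # +1 disruptive, -1 consolidating, 0 otherwise
    return total / len(forward)

# A "quantum gravity"-style paper: everyone cites it, nobody bothers
# with its references -> maximally disruptive.
print(cd_index({"p1", "p2", "p3"}, set()))   # 1.0
# A consolidating paper: every citer also cites its references.
print(cd_index({"p1", "p2"}, {"p1", "p2"}))  # -1.0
```

A plain literature review sits at the other end: later work keeps citing the older papers it summarized, with or without it, so its score stays near or below zero.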

Okay, this is fascinating to me.

It's fascinating to me in large part because of this tension that you mentioned recombinatory

innovation.

Well, we have more knowledge than ever.

We have more scientists than ever.

We have more spending on science than ever.

We have more PhDs than ever.

We should be living in a golden age of productivity in science, and instead we seem to be living

in something like the opposite.

So I see this as a fascinating mystery to solve, and the way that I want to think about solving

it is treating it kind of like a crime mystery.

I want to think about the suspects that we can line up against the wall and blame for

the decline of progress in science.

Suspect number one is this famous idea in science called the burden of knowledge.

And the burden of knowledge essentially says there's so much knowledge that people have

to learn before they enter into a domain like, say, physics or chemistry, that it becomes

harder to push forward the frontier.

Science just becomes harder to do.

And a quick example here to show that it just must be true in some cases, is you think about

something like physics.

Take a breakthrough in physics 400 years ago.

Isaac Newton figures out the laws of gravity with a telescope, a pen, and paper, pretty much.

This century, 400 years later, discovering a new elementary particle like the Higgs boson

requires a $10 billion underground tunnel to smack subatomic particles into each other

at near light speed.

These are not equivalent discoveries.

It's much harder to build a Hadron collider than to write the rules of gravity with pen

and paper, I should think.

So to a certain extent, you have all this energy around burden of knowledge.

The problem is that we know so much that making that incremental progress in science

becomes harder.

How do you feel about the burden of knowledge hypothesis to explain the slowdown in science?

Well, I definitely think that that is an important suspect to use your metaphor.

And in fact, it's something that a lot of prominent scientists have been thinking about

for a long time.

I mean, this is something that Albert Einstein talked about and wrote about: how the golden days of physics might be over, because in the coming generations there was going to be so much that people had to learn and master that it would lead to this specialization and would have these adverse effects on scientific progress.

I also think that you start to see this in some of the data.

And we include a number of analyses of this in the paper.

So one way you could start to see if this is really happening is by looking at the kinds

of knowledge that scientists are building on when they're doing their research.

And if there's this kind of overwhelming amount of knowledge or information that people have

to master to get to the frontiers of their field, well, what would a natural thing to

do be?

You'd find ways to kind of constrain the scope of what you needed to learn.

And so you see growing specialization within the sciences, so people really learn their

particular area.

They get to know it really, really well, but it's a smaller space of the field, a smaller

part of the pie that they really need to know, and the kind of thing you can start to master

in a lifetime.

And so one way you might see that manifest itself is by looking at these citation patterns,

and we can actually see some interesting things going on there.

So one is that, and you hinted at this a little bit already, the diversity of prior work that

people build on is going down.

So you're getting an increasing concentration of citations to a smaller number of popular works that people probably read when they're in their PhD programs; these are the classics that everybody knows.

That small subset is getting an increasing share of the citations, and they're also becoming

more semantically similar, suggesting they're kind of on similar sorts of topics and so

forth.

And so there you see evidence of less diversity.

You also see an increasing number of self citations per paper.

Self citations just mean that I write, say, a couple of papers per year, and then in every

paper I cite some of my own prior papers, which presumably those are the papers I know

best because I wrote them, and so it's more familiar knowledge, knowledge that's within

my area and again might be one of these adaptive strategies.

You also see the age of stuff that's being cited going up, again, suggesting that people

are looking at stuff that's more familiar, struggling to keep up with the existing literature.

And you don't even need to look at the metrics.

You can just kind of see that there's so much more that people need to know to get to the

frontier.

There are a couple of ways you can look at this. So many people in so many different fields are doing not just one, but two or three postdocs after they get their PhD. And so by the time they're really in that independent researcher position, they're well into their 30s, which, if you look at the Einsteins and Diracs and others, they had all long since made their most important contributions by then.

You can also just think about, well, what does research look like today to go with your

example?

We don't even need to think about something as big as the Large Hadron Collider.

Newton could do his work with paper and pencil, or pen, or quill. But today, research is so data-driven that you've essentially got to become a programmer: learn programming languages, database management, how to wrangle data, and so forth.

In addition to all the scientific theories and so forth that the people from the last

generation had to master.

And so there's just a lot more knowledge that you need to build to be able to make a real

contribution.

It's the death of the Renaissance man, in a way. We're used to science being pushed forward in the 1600s and 1700s by people like Benjamin Franklin, who dabbled in chemistry and physics and electricity and the future of democracy.

And today, if you want to make a breakthrough in quarks and subatomic particles, or you want to make a breakthrough in the efficiency of solar energy, you have to know so damn much about the history of photovoltaic cells or subatomic particles, and it takes so long to gain that mastery.

And by the time you're at the frontier of science, you're not 22 anymore.

You might have a family.

You might have two kids.

It doesn't make a lot of sense to spend 15 years on one super duper big swing.

It makes sense to kind of publish, publish, publish your way incrementally, build a lab

slowly and be able to afford that mortgage.

So I think you're right to point to the fact that the incentives here have changed quite

a lot.

Before we bring in another suspect, we've already kind of mentioned some other suspects,

but I want to try to braid together two things that you said.

One thing you said is that new knowledge tends to be a combination of old knowledge.

I don't know that much about the Isaac Newton story, but I know enough to say that he took observations of the stars that relied on people like Kepler, and he blended them with breakthroughs in math, and through that blending he created a new form of integral calculus that explained the laws of gravitation.

So new knowledge is a combination of old knowledge.

You also said that the diversity of work that people are building on is going down.

And you think, okay, what's the best way to build new knowledge if it's all combinatorial? It's to have a really, really broad pantry of knowledge that you can sort of cross-fertilize from.

It's kind of like a chef.

How are you going to make the best new recipe no one's ever built before?

You need a huge friggin' pantry so that you can combine flavors that no one's ever thought

to combine before.

But what you said is that because of the burden of knowledge and other things happening in

science, scientists' pantries of knowledge are getting smaller.

They're getting more specialized, and as a result, it might be harder for them to cross-fertilize

ideas that create a new, big, extraordinary breakthrough that opens up a field the same

way that Einstein's theory of relativity or Bohr and quantum mechanics did a hundred

years ago.

Before we move on, is that a fair summary?

Is it fair for me to sort of combine those ideas in that way?

Yeah, I think so.

I might just qualify your metaphor just a tiny bit.

So rather than them having smaller pantries, I think that the pantries have exploded actually.

So the pantry is now just not a closet next to the kitchen, but it's like an entire grocery

store.

And so there's all this stuff that they could pick from, but it's hard to—we've all spent

time wandering the aisles of the grocery store trying to find something we want or something

that looks interesting and come up empty-handed because we're just overwhelmed.

And so instead, we go and just kind of look at a small corner of the grocery store.

And so while there's more stuff than ever, more combinations to be made, we're just

making far fewer because we sort of effectively constrain the size of our pantry.

That's great.

Thank you.

That's a perfect edit for my blabbering.

And it brings us perfectly to suspect number two, which is the paradox of choice.

Despite an enormous increase in scientists and papers since the middle of the 20th century, you and others have observed that the number of highly disruptive studies each year hasn't increased.

More research is being published than ever, but the result is actually slowing progress.

And one explanation might be your grocery store metaphor, that there's so much to read

and absorb.

And so papers in these crowded fields are citing new work less because there's simply too much to read.

There's too much to stay on top of.

And as a result, they're canonizing these sort of highly cited articles more.

People are clustering around the same hit papers.

And as a result, they're all kind of publishing the same research.

It's kind of like, if you're home on a Friday night and you boot up Netflix, and there's

just too much there, you say, you know what?

This is too exhausting cognitively to go through.

I'm just going to see whatever is the top streaming movie, and I'm going to watch that.

And if everyone makes that decision, then ironically, the amount of options available to the consumer leads to a clustering of decisions.

How do you feel about this suspect number two, a paradox of choice in science?

Yeah, no, I think it's really an important suspect.

And I think it's interesting because it does help to explain this tension: how can we have more stuff than ever to build on, more possibilities, more tools that let us see so many new things, and yet not seem to be moving the needle forward?

So I think that's one thing we tried to do in the paper was say, hey, maybe this is a

way to reconcile this if we look at what's there, but then what gets used.

And there are a couple of other thoughts I have about that.

So one is that the explosion is just even bigger than we realize because not only are

there more papers than ever, but so much of science is happening in other places outside

of the traditional journal venues.

I mean, you look at Twitter, you look at blogs, you look at people's websites, newsletters,

conferences, conference proceedings, all sorts of places.

It's not just one venue anymore.

Not just one journal or a couple journals that you need to keep up with in your field.

If you really want to keep on top of things, you've got to look at all sorts of different

media.

And so it's just overwhelming in kind of a scale and scope that we haven't seen before.

There's also kind of some interesting research that suggests that information technologies

might be changing to some degree the way that we search for information in ways that could

potentially accelerate this.

And this is true not just for the sciences, but also kind of more broadly.

It used to be, when you were looking for what to read or for the latest information or trying to get ideas as a scientist, you'd go to the library and browse the stacks, and maybe what you really wanted was a book on X.

But as you're walking there, you're passing all these other books and all these other

journals and you see something that catches your eye, or you find a book you wanted on

X and then you look over next to it and there's something else that's, you know, on a different

topic, but you think, oh, that's interesting.

And so you kind of pick that up.

And what's changed is that with new information technologies on the one hand, you would think

this would make the process of search, you know, so much easier because everything is

at our fingertips.

But there's almost a cost of everything being at our fingertips because it lets us get exactly

what we want and kind of minimizes that browse function.

And so some people have called this a filter bubble. This happens through recommendation algorithms: we go on the publisher's website or the database to get our journal, and the algorithms, seeing things we've looked at in the past, give us more and more of the same, making it harder for us to see and get exposed to those different types of ideas.

It's really interesting how many of the phenomena that you're pointing out in academia match

up with phenomena that people have observed in other, let's call it content industries.

We had an episode last year with Ted Gioia talking about why old music is killing new music.

That's his terminology.

It refers to the fact that now that people have access to places like Spotify and Apple

Music, they have 40 million songs at their fingertips.

An old song is as far away from them as a new song.

And at the same time, it just so happens that people are listening to older music more than

they used to.

And as a result, it's harder to create new hits because you have so many ears that are

focused on old hits.

I don't want to create too close of a parallel here.

There are many, many differences between scientists writing papers and musicians writing songs.

But they're similar in that there seems to be a curse of plenty: consumers change their behavior in marketplaces with an overabundance of choice.

And we might be seeing similar dynamics with old papers being emphasized over new papers

and old songs being listened to more than new songs.

There's an interesting little dynamic that seems to hold across content industries.

I want to bring in suspect number three, because in many ways this is the suspect that I think might get the most attention, and one that I'm very interested in.

Just to review: suspect one was the burden of knowledge.

Suspect two was the paradox of choice.

Suspect three is the bullshit paper hypothesis.

And the bullshit paper hypothesis is something like this.

The path to success in modern academia is paved in publications, papers and citations.

And as a result, the market logic of this industry encourages budding scientists to

simply publish or die, publish as much as possible or fail to move through this game.

And the narrow game of getting one's work published consistently in prestigious enough

journals has supplanted the broader game, the bigger, more important game of disruption,

of replacing last generation's discoveries with a fresh set of novel breakthroughs.

And as a result, the market logic of modern academia forces young, middle-aged, and even older scientists to publish more bullshit papers and fewer highly disruptive papers.

Professor, what do you think about the bullshit paper hypothesis?

I think there's something to it.

I mean, I might not use quite that strong of a term, but I mean, this is something that's

been reflected in a lot of the conversation that's been happening about the paper since it came out, with people speculating about why this is happening.

And they talk about the incentives in academia.

One thing that's happened over time is that there's become more and more pressure to have

benchmarks to evaluate academic performance and to evaluate that on kind of seemingly

objective scales.

So how do you compare scholar A to scholar B? Are we going to give scholar A tenure, or scholar B?

Well, how do we do that?

How do we eliminate bias in that and so forth?

And well, what do we have that's easily quantifiable?

We've got things like counts of how many papers you produced, or counts of how many papers in top journals, or counts of how many citations your papers received.

And those are things that we can easily measure and quantify.

But it's a lot harder to measure even setting aside this index that we created.

It's a lot harder to quantify how do you shift the conversation in science?

Are you really carving out a new meaningful area?

Are you kind of making these radical improvements over what came before?

Because that's just, as some of the debate about the paper has shown, it's something

that is in a lot of ways more subjective.

And so I definitely think that that could be part of it.

If you think of what people are incentivized to do, they're incentivized to get those numbers up. How do you get those numbers up?

Well, if you want to publish a lot of papers, it's probably better to publish papers on things that you really know: spend less time studying broad literatures or working outside your field, which takes time and might not prove successful, and instead just write on a smaller set of topics that you know about and know how to do.

Same thing with getting your citations up.

We know that maybe in the long run, a good strategy would be to develop a new theory

of gravity or some revolutionary kind of idea, but those take a lot longer to get picked

up and they might not get picked up at all.

And so if you really just want to bump up your citations, probably writing on stuff

that's more familiar is a good way to go as well.
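For context, the disruption measure being contrasted here with raw paper and citation counts is the CD index, introduced by Funk and Owen-Smith. A minimal sketch in Python, assuming a toy representation in which each citing paper is just a set of cited ids and the focal paper's own id is the string "focal" (illustrative choices, not the paper's actual pipeline):

```python
def cd_index(focal_refs, citing_papers):
    """Toy CD (disruption) index for one focal paper.

    focal_refs: set of ids that the focal paper cites (its predecessors).
    citing_papers: iterable of sets, one per later paper; each set holds
        the ids that the later paper cites. The focal paper's own id is
        "focal" by convention here.
    Returns a value in [-1, 1]: +1 means every later paper cites the
    focal work while ignoring its predecessors (disruptive); -1 means
    every later paper cites the focal work together with its
    predecessors (consolidating).
    """
    n_i = n_j = n_k = 0
    for refs in citing_papers:
        cites_focal = "focal" in refs
        cites_predecessors = bool(refs & focal_refs)
        if cites_focal and not cites_predecessors:
            n_i += 1  # cites the focal paper only
        elif cites_focal and cites_predecessors:
            n_j += 1  # cites the focal paper and its predecessors
        elif cites_predecessors:
            n_k += 1  # cites only the predecessors, ignoring the focal paper
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0


# A "disruptive" focal paper: later work cites it but not what it cites.
disruptive = cd_index({"a", "b"}, [{"focal"}, {"focal"}, {"focal", "c"}])

# A "consolidating" focal paper: later work cites it alongside its sources.
consolidating = cd_index({"a", "b"}, [{"focal", "a"}, {"focal", "b"}])
```

Under this toy scheme, `disruptive` comes out at 1.0 and `consolidating` at -1.0; real analyses compute this over full citation networks within a fixed time window.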

I want to hold on this because I think there's a lot of people who have very closely held

feelings about the heroic status of scientists.

On the right, there's this emerging idea that scientists are bureaucratic villains trying

to destroy our lives.

On the left, you have these signs, you know: follow the science, believe in the science, Science with a capital S. Scientists are heroes.

And what I'm hearing you say is scientists are people in a system following incentives,

just like practically every single person listening to the show is a worker in a system

playing to win the game of that system.

And if we're frustrated by the fact that modern scientists maybe publish too many crap papers

or maybe don't spend enough time thinking about truly disruptive ideas, we need to recognize

that they are responding to very real pressures: to get grants from the NIH, to hit that sweet spot of familiarity, surprise, and plausibility in proposing projects,

that they feel pressure to publish as many papers as possible at a high volume to get

through grad school, get that associate professorship, get tenure.

And there is a game of science that we have to understand that is bigger than the individual

players.

And just because that game was created with the best of intentions in the middle of the 20th century and in the decades that followed doesn't mean that it's necessarily the right game to give us the maximal number of truly disruptive papers.

Yeah, I think that's, I think there's a lot of truth to that.

And I think if you look at the evidence from our paper and look at evidence from some other

papers, the slowing of progress is not something that I think you can attribute to any single

cause.

I mean, for something this complex, tied up in everything from slowing rates of economic growth to how many transistors you can put on an integrated circuit, there isn't going to be a single silver bullet that fixes everything. But the nature of a lot of the causes, and something you also see in the conversation that's come out around this paper, is that the social organization of science, by which I mean the process through which science is done, the incentives that scientists face, the organizations they need to work in, seems to be worth looking at. It seems that some of those factors are a cause, because they push scientists to look at certain types of problems and pursue them in certain types of ways.

And so thinking about are there ways that we could redesign the social organization

of science or adapt it or revise it so that it would make it easier to pursue these types

of discoveries I absolutely think is something worth looking at.

We've already glanced at this final suspect, but I just want to put a fine point on it.

I call this suspect big bad science.

In the last 100 years, science has evolved, I think, from individuals to teams to teams

of teams.

That is, you take a look at a domain like, say, genetics.

The father of genetics, Gregor Mendel, was a monk living alone.

He figured out dominant and recessive traits by looking at peas in his backyard.

That's great.

But fundamentally, we're talking about a friar looking at plants in his backyard inventing

a new scientific domain.

That's all it took to crack it open.

Today, if you want to move the genetics frontier forward, you can't rely on a smart monk looking

at plants in his backyard.

You need hundreds, maybe thousands of people all over the world sequencing genomes, figuring

out how certain genetic traits match up with certain demonstrated traits and figuring out

really complicated things about what are the polygenic origins of schizophrenia.

This stuff is maddeningly complex, and it requires teams of teams, and teams of teams

are hard.

We don't really know how teamwork works.

We're still figuring it out, and if science has to move forward one bureaucracy at a time,

it's going to be really messy, and it might not be particularly efficient.

This is the final suspect that I have to throw at you.

It is the phenomenon of big bad science, which essentially means science got big, and there

are certain bad consequences of science getting big.

Yeah, I also think that there's definitely some truth to that, and it relates a little bit to what we were just talking about with the social organization of science.

I think there's research, including some literature using this disruption indicator that we use in the paper, suggesting that small versus large teams

produce systematically different types of work, and that the larger teams tend to produce

work that's more consolidating or developmental in nature, whereas the smaller teams are the

ones that tend to produce those disruptive ideas.

We also know over the same period that across all fields of science and scholarship, the

average size of teams has gone up consistently.

If different sizes of teams produce different types of ideas, and we're moving in the direction of larger and larger teams, then it's very plausible, and the evidence suggests, that that's a big factor in helping to explain this decline.

I mean, I also do think it's important to say that science didn't become big just for the sake of becoming big. There were certain problems, like the atomic bomb and space exploration and sequencing the human genome, that just couldn't have been done by a small team.

Building the Large Hadron Collider.

Yeah.

It's so interesting because Suspect 4, I'm sorry to jump right in, but Suspect 4 takes

us right back to Suspect 1, as the frontier of knowledge becomes harder to push forward

because you're not just looking at the characteristics of peas, you're figuring out the polygenic origins

of a complex disease like schizophrenia.

You can't rely on individuals, you need teams of teams, but as science becomes bureaucratic

in response to the scale of these problems, it can even accidentally create its own problems

of scale.

Is that accurate?

Yeah.

And I think, so what I would probably suggest is that it's not that we should dismantle big science, because there's an important role for big science, but that we should find a way of rewarding a mix, an ecology.

We've got some collaborations that are these Large Hadron Collider-scale projects, and then we've got

some that are individuals working either alone or with a couple different people, or maybe

the same people working on the Large Hadron Collider take out some time and they work

alone on their ideas.

And we've got groups of people working in small teams, running their careers according to that small-science style, and some people doing big science, or people shifting across those different ecosystems. It's not that every paper that gets published has 10 people; it's that a lot of the pressures we've talked about before, along with lots of organizational reasons, are pushing in the direction of things becoming bigger and bigger.

And so how do you reward small teams?

How do you let small teams compete with big teams on some of these metrics that we use

for evaluation?

It's a lot easier to pump out a lot of papers if you're working in collaboration with other

people and you can kind of divide and conquer than if it's one person trying to put them

out all by themselves.

So, lining up these suspects to review:

Number one, the burden of knowledge.

Number two, the paradox of choice.

Number three, the bullshit paper hypothesis.

And number four, big bad science.

Is there a big, important suspect that I have left out of this lineup? Or, if there isn't, do you want to put your finger on the scale here behind one of these culprits rather than another?

I think one big suspect that we haven't talked about is this low hanging fruit theory.

And you can see echoes of this in the others; I think all these suspects that we've talked about are related, and maybe they were working together on the crime. We've touched on this a little bit already, but basically the low-hanging-fruit idea is that all the really easy but important discoveries to be made in science have already been made.

And so you can only do something as monumental as discovering the theory of relativity once.

Or if you think of technology: once the wheel's invented, it's invented, and it's a lot harder to find something else that's that easy and that significant in terms of improving the quality of human life.

And so that's kind of the low hanging fruit theory that you've got a field, they start

kind of making progress, they make these big important discoveries, and then everything

else after that just either gets a lot harder, just seems a lot less significant or some

kind of combination of the two.

And so we address that in the paper, and I think overall what we try to suggest, what our evidence suggests, is that the low-hanging-fruit theory is not the primary driver.

I think there's certainly something to be said about that and that it's probably an

important contributing factor.

But one piece of evidence, and this goes back to the beginning, to what we did and what was different, comes from that 30,000-foot view, comparing the rates of change across a lot of different fields across time.

And one thing that we found is that the decline happens at a fairly similar rate across a broad set of fields at about the same time, which in my mind suggests that

it's not all the low hanging fruit because physics is much, much older than the social

sciences.

And so if you were thinking who's going to eat up their low hanging fruit first, I'd

say it's physics.

And so physics should have seen this decline way before 1945, when we started our study, and in the social sciences, which were probably just seriously getting going around then, we might expect to see the decline a little later. There are differences across fields, but overall

the curves look pretty similar to one another, which in my mind suggests, well, maybe it's

not all the low-hanging fruit, but rather something that has changed across fields more or less at the same time, and a lot of that is the social organization of science.

Is there a field where disruption is declining the least, a domain where maybe we can look for answers?

I think maybe, I remember from your paper, maybe it was the life sciences where the decline

seemed to be the most shallow.

Can we look into some sliver of science and say: this group seems to have figured something out, or maintained some consistent batting average for doing really disruptive, fantastic breakthrough work, and maybe we can take a lesson from there and spread it to the other domains?

Yeah.

I mean, there are, if you look at the curves, there are some different slopes.

I mean, so they do decline at different rates.

The life sciences and physics are two examples. Part of the reason, though, why they decline at slower rates is that they just started out at lower points, so they're a little bit flatter, whereas the social sciences started out much more disruptive and so have had a longer way to go.

I'm not exactly sure where to look for evidence of fields that have bucked this problem because

it is quite universal across them.

One thing, though, that I think will be interesting: when we started this study, the data we were using was the latest and greatest and very up to date, but it ended in 2010. What I would really like to look at, and this has also been part of the conversation that people have been having around the paper, is what's been happening in recent years.

There have been a lot of seemingly fundamental changes, both scientific breakthroughs and new ways of doing science, with artificial intelligence and advances in computation and so forth.

It seems very plausible that if we've got radically new methods for doing science and

we can assist scientists with artificial intelligence and so forth, that maybe that will help to

overcome some of these challenges that we've been talking about.

You might see the curve start to bend a little, so to speak.

In fact, you can see even in our data, where we've looked a little further than in the paper, that in fields like computers and communications, and in technology and electrical patents, the curves seem to have flattened out and are even ticking up a little by 2010 and a couple of years after.

It would certainly be interesting to look at, and if we wanted to look for examples, those might be good places to start.

I want to end by asking you about takeaways.

I have a point that I want to make about takeaways for individuals looking at this paper for insight into how to do breakthrough work in their own lives.

What's the most important takeaway for society or for science writ large?

What do you want people to learn from this?

I think there's a couple.

I think one is that hopefully this just draws attention and gets people interested in thinking

about big questions like what is scientific progress?

How do we measure it?

How do we measure scientific innovation? And it somewhat sounds the alarm bells: there seems to be something happening.

The nature of scientific discovery is changing.

What do we do about that?

How do we get a right mix of disruptive and developmental work?

What is the right mix?

How do we best support scientists to be able to do that?

This is really important stuff.

Scientific progress, scientific discovery, going back to the beginning of our conversation, is the basis of all sorts of technologies that propel the economy forward and improve our quality of life.

It's something that we really need to pay attention to.

There's a number of different suspects, but we really don't have a great accounting of

all the different factors that could be involved.

We don't really know too well about how they're interlinked together.

I think the biggest thing I would take away is we should support more research on this.

We should try to look into this more systematically and try to address the problem.

Benjamin Jones is one of the scholars who came up with the concept of the burden of knowledge, and he has helped to bolster something you talked about, the idea that maybe the low-hanging fruit in some domains has been picked and it's harder to climb up higher into the tree of physics or genetics and pick that next fruit. He did a paper with Brian Uzzi at Northwestern a few years ago that's left a really big impression

on me where they looked at what are the components of a hit paper, essentially a paper that would

be, by your definition, disruptive and get a ton of citations and even replace the papers

that it cites in the literature.

One way that it was explained to me is that these kinds of papers tend to have an optimal combination of established, older ideas and new ideas. They create this perfect cross-fertilization between established ideas and the latest breakthroughs.

One way that Ben described it to me that is just incredibly memorable is he said, imagine

that the Alexandrian library is burning.

It's 2000 years ago and you and I are sitting reading papyrus in the Alexandrian library

of Egypt and it starts to burn down, as it in fact did in reality.

We have five minutes to grab whatever we can in our hands before we run out the doors as

the pillars fall.

What do we grab?

There's a specialist strategy which says if you're a social scientist, you run to the

social science stacks and you grab as much from your domain of social science as you

can.

There's a generalist strategy that says you run helter-skelter through this burning library

and through the embers in the smoke and you grab whatever papyrus you can fit into your

arms.

Then there's the other strategy that Ben and Brian settled on.

They said, with one arm, go to the part of the library that you know the most about and

get the oldest volumes, the volumes that are the most true because they've stood the test of time.

Then, with your other arm, go to an adjacent field, and it almost doesn't even matter what it is, that has some new knowledge, and grab a papyrus there, so that when you run out of the burning library, you have half deep, true specialist knowledge and half, essentially, spices from the chef's pantry I was talking about, spices that you can salt into that deep knowledge. Because, he said, these breakthrough papers seem to have this really interesting combination of deep truths and new, late-breaking ideas.

It's difficult but important for scientists who feel incentivized to be specialists,

to go really, really deep and vertical into one domain, to maintain a kind of porous expertise

where new ideas can seep into those vertical fields and fertilize some new ideas.

I've always been motivated by that in my own work: even as I find myself going deeper and deeper into one domain, to keep a part of my mind open to the idea that maybe

some breakthrough will come from the fertilization of something that seems to have nothing to

do with the subject that I'm writing about.

I'll leave you with that thought if you have any commentary on it or maybe it somewhat inflects

your own approach to science in your domain.

Yeah, no.

I think I have a couple of thoughts on that.

First of all, I think it's totally true, and there's lots of theory to back it up, that innovation is about recombination, but lots of combinations aren't valuable.

In my teaching, I teach about technology and these theories of combinations, and I always find these examples of stupid patents or goofy patents. There's one I saw for a flying submarine, which looks like one of the classic US Navy submarines that someone glued airplane wings onto.

As far as I know, it's never been developed, but there are lots of combinations that don't make any sense, and not all of them are good.

We want to think about what ways will produce meaningful combinations, and we need something to ground them on. That's the existing, established knowledge, which is both established, presumably of quality, and familiar to people, which makes it easier for new combinations to get picked up.

Just to bring it full circle, I really love the Library of Alexandria metaphor, and there's a quote about that that I've always really liked, from Borges, the short story writer: every few centuries, the Library of Alexandria must be burned.

I think that really relates to disruption and emphasizes why and how disruption could

be valuable.

Disruption is about doing away with the old, carving out space for the new, shifting the conversation. We've talked a lot about how this burden of knowledge, or the growth of knowledge, can be counterproductive, and so it suggests every now and then we've got to clear things out, and these disruptive discoveries can do that. Not only do they represent big improvements, but they also create new opportunities to do new things by opening up new fields. So I think disruption is good, or could be good, not only because it represents tangible improvements in things we care about, but also for propelling scientific progress forward.

That's lovely.

Yeah, Max Planck said science advances one funeral at a time.

I suppose he could have said science advances one burned library at a time as well.

Professor Russell Funk, thank you so much.

Thank you.

This was great.

Thank you for listening.

Plain English is produced by Devon Manze.

If you like the show, please go to Apple Podcasts or Spotify.

Give us a five-star rating, leave a review, and don't forget to check out our TikTok at plainenglish_. That's plainenglish_ on TikTok.

Machine-generated transcript that may contain inaccuracies.

We should be living in a golden age of creativity in science and technology. We know more about the universe and ourselves than we did in any other period in history, and with easy access to superior research tools, our pace of discovery should be accelerating. But as one group of researchers from Stanford put it: “Everywhere we look we find that ideas … are getting harder to find.” Another paper found that “scientific knowledge has been in clear secular decline since the early 1970s,” and yet another concluded that “new ideas no longer fuel economic growth the way they once did.”
As regular listeners of this podcast know, I am obsessed with this topic—why it seems like in industries as different as music and film and physics, new ideas are losing ground. It is harder to sell an original script, harder to make an original hit song, and harder to publish a groundbreaking paper, and while these trends are NOT all the same, they rhyme in a way I can’t stop thinking about. How did we build a world where new ideas are so endangered?
This year, a new study titled “Papers and Patents Are Becoming Less Disruptive Over Time” inches us closer to an explanation for why this is happening in science. The upshot is that any given paper today is much less likely to become influential than a paper in the same field from several decades ago. Progress is slowing down, not just in one or two places, but across many domains of science and technology.
Today, I speak to one of the study’s coauthors. Russell Funk is a professor at the Carlson School of Management at the University of Minnesota. We talk about the decline of progress in science, why it matters, and why it’s happening—and we give special attention to a particular theory of mine, which is that the incentive structure of modern science encourages too much research that doesn’t serve any purpose except to get published. In other words, science has a bullsh*t paper problem. And because science is the wellspring from which all progress flows, its crap problem is our problem.
Host: Derek Thompson
Guest: Russell Funk
Producer: Devon Manze
Learn more about your ad choices. Visit podcastchoices.com/adchoices