Huberman Lab: Mark Zuckerberg & Dr. Priscilla Chan: Curing All Human Diseases & the Future of Health & Technology

Scicomm Media - 10/23/23 - Episode Page - 2h 16m - PDF Transcript

Themes

AI assistants, social engagement, learning, mental health, physical health, interactive experiences, affordable technology, glasses and myopia, accessing the visual system, eye tracking, technology trade-offs

Discussion
  • Mark Zuckerberg and Dr. Priscilla Chan discuss the Chan Zuckerberg Initiative (CZI) and its goal of curing all human diseases through funding, cell research, and artificial intelligence.
  • They emphasize the importance of understanding individual cells and developing tools to measure and observe human biology.
  • The podcast explores the role of collaboration, open science, and AI in advancing scientific discoveries and analyzing large datasets.
  • The potential of creating a virtual cell and the impact of genetic mutations on cellular processes are also discussed.
  • The podcast episode features a conversation with Mark Zuckerberg about the intersection of technology and health, discussing the impact of technology on physical and mental health and the measures taken to ensure user safety.
Takeaways
  • AI has the potential to enhance social engagement, learning, and mental and physical health. However, it is important to recognize that AI can be used for both positive and negative purposes.
  • Continued research and data analysis are crucial for advancing scientific and medical knowledge.
  • AR glasses have the potential to become a full augmented reality product, but there are significant technological challenges to overcome.
  • Investing in scientific and engineering tools is essential for advancing our understanding of human biology and finding cures for diseases.
  • Optimism and belief in a better future can be powerful motivators.

00:00:00 - 00:30:00

Mark Zuckerberg and Dr. Priscilla Chan discuss the Chan Zuckerberg Initiative (CZI) and its goal of curing all human diseases through funding, cell research, and artificial intelligence. They emphasize the importance of understanding individual cells and developing tools to measure and observe human biology. The podcast also explores the role of collaboration, open science, and AI in advancing scientific discoveries and analyzing large datasets. The potential of creating a virtual cell and the impact of genetic mutations on cellular processes are also discussed.

  • 00:00:00 In this episode of the Huberman Lab Podcast, Andrew Huberman interviews Mark Zuckerberg and Dr. Priscilla Chan. They discuss the Chan Zuckerberg Initiative (CZI) and its goal of curing all human diseases through funding, cell research, and artificial intelligence. They also explore the impact of social media platforms, virtual reality, and AI on mental health and everyday life. The podcast emphasizes that it is separate from Huberman's teaching and research roles at Stanford, and aims to provide science-based information to the general public.
  • 00:05:00 Mark Zuckerberg and Dr. Priscilla Chan discuss the Chan Zuckerberg Initiative (CZI), a philanthropic effort aimed at building a better future for everyone. Their big mission is to cure, prevent, or manage all diseases by the end of the century. CZI focuses on funding scientists, developing software tools, and conducting science to accelerate the pace of scientific discoveries. They believe that understanding the cell is crucial in understanding and addressing diseases.
  • 00:10:00 The podcast discusses the importance of scientific and engineering tools in advancing discoveries in various fields, including biology. The focus is on understanding individual cells and developing tools to measure and observe human biology. The next 10 years are seen as crucial for building these tools and empowering the community to cure, prevent, or manage diseases. The role of philanthropy, better imaging tools, and the potential of AI in managing and understanding data are also mentioned.
  • 00:15:00 The Chan Zuckerberg Initiative (CZI) incentivizes collaboration and open science through grants, software and hardware development, and partnerships with scientists. They aim to encourage scientists from different backgrounds to explore new fields, share knowledge through preprints, and build durable and scalable tools for the scientific community. CZI also focuses on understanding cells and the impact of genetic mutations on cellular processes.
  • 00:20:00 The podcast discusses the challenges of interpreting DNA data and the role of AI in analyzing large datasets. It explores the potential of creating a virtual cell based on curated data sets to advance scientific and medical research. The conversation also touches on the discovery of new cell types and how it can change our understanding of diseases like cystic fibrosis.
  • 00:25:00 The transcript discusses the use of large data sets in single-cell research to understand the expression of genes and their relationship to diseases. It highlights a tool called Cell by Gene, developed by CZI, which allows researchers to explore the expression of specific genes across different cell types. The transcript also mentions the importance of interdisciplinary collaboration and the potential implications for drug development.

00:30:00 - 01:00:00

The podcast explores the strengths and limitations of large language models (LLMs) in science and AI, emphasizing the need for validation by scientists and engineers. It discusses the potential of LLMs to reach human-level or super intelligence and their current applications in various fields. The podcast also highlights the use of machine learning in biomedical research, interdisciplinary collaboration, and the creation of biohubs for studying cells and tissues. The CZI's unique funding approach, focus on rare diseases, and collaboration with researchers and drug developers are also discussed.

  • 00:30:00 The podcast discusses the strengths of modern LLMs in imagining different states of things and the development of a large nonprofit life sciences AI cluster. It also acknowledges the limitations of LLMs and emphasizes the importance of validation by scientists and engineers. The podcast highlights the unique funding approach of CZI in supporting longer-term, larger projects in science.
  • 00:35:00 The podcast discusses the use of large language models in science and AI. These models, such as the transformer model architecture, allow for the analysis of large amounts of data and the identification of patterns. The potential of these models to reach human-level or super intelligence is still uncertain. However, they have already unlocked new use cases and applications in various fields.
  • 00:40:00 The podcast discusses the potential of machine learning in biomedical research and the shift towards conducting experiments in silico. It emphasizes the importance of ensuring that research translates effectively to humans and highlights the need for interdisciplinary collaboration in scientific endeavors. The creation of additional biohubs and their focus on physical material engineering and cellular endoscopy are also mentioned.
  • 00:45:00 The Chicago and New York biohubs are collaborating with universities to study cells and tissues. They are using engineered skin tissue with embedded sensors to understand cell behavior and inflammation. In New York, they are engineering individual cells to identify changes in the human body and potentially take action. These projects have the potential to revolutionize medical understanding and treatment.
  • 00:50:00 The podcast discusses the potential of using engineered cells and AI in scientific research, particularly in the field of immunology and neurodegenerative diseases. They also explore the decision to collaborate with universities and the goal of understanding human health and finding effective treatments. The CZI's approach is seen as a unique and collaborative model in the scientific community.
  • 00:55:00 The podcast discusses the work of CZI in funding rare disease organizations and their efforts to partner with researchers and drug developers. They highlight the importance of rare diseases as windows into understanding normal biology. The hosts also touch on their optimism and how it influences their work.

01:00:00 - 01:30:00

The podcast episode features a conversation with Mark Zuckerberg about the intersection of technology and health. Zuckerberg discusses the impact of technology on physical and mental health, highlighting that its effects can be both positive and negative depending on how it is used. The discussion explores the role of technology in promoting well-being and the measures taken to ensure user safety. The importance of self-regulation and awareness of one's usage is emphasized.

  • 01:00:00 The podcast discusses the importance of optimism and productivity in viewing the world. The guest shares a personal story of her family's journey as refugees and how it inspires her belief in a better future. The conversation also touches on the impact of having children on one's worldview.
  • 01:05:00 The podcast episode features a conversation with Mark Zuckerberg about the intersection of technology and health. Zuckerberg discusses the impact of technology on physical and mental health, highlighting that its effects can be both positive and negative depending on how it is used. He emphasizes the importance of forming meaningful connections and friendships through social media, while also acknowledging the potential for negative experiences such as bullying. The discussion explores the role of technology in promoting well-being and the measures taken to ensure user safety.
  • 01:10:00 The podcast discusses the impact of social media on people's well-being and the importance of creating a positive experience. It highlights the need for a balanced approach, where social media platforms connect people, provide inspiration and education, and offer tools to block bullying or harassment. The guest also shares personal experiences with Instagram, mentioning the allure of violent content and the enjoyment of cute animal videos.
  • 01:15:00 The podcast discusses the algorithm used by social media platforms to curate content and whether it aims to provide a positive user experience or simply reflect users' previous interests. The guest mentions efforts to minimize clickbait and the importance of balancing user preferences with real feedback. They also highlight the need to avoid being paternalistic and allow users to like what they want, except for content that promotes bullying or incites violence.
  • 01:20:00 The podcast discusses the challenge of making people aware of the various tools available on social media platforms. It emphasizes the need for conversations and in-product education to increase awareness and understanding of these tools. The importance of providing different levels of control and nuance in product design is also highlighted.
  • 01:25:00 The podcast discusses the tools and functions available to users to self-regulate their usage of social media platforms. It also explores the potential future of augmented reality glasses and the impact of spending excessive time interacting with people through such technology. The importance of self-regulation and awareness of one's usage is emphasized.

01:30:00 - 02:00:00

The podcast explores the advancements in virtual and augmented reality technology, discussing the potential for a mixed reality experience that combines both. It also touches on the implications of this technology for collaboration, dating, and the integration of the physical and digital worlds. The guest discusses the potential of virtual reality in transforming exercise and physical activities, as well as its applications in education and expression. They also mention the design goals and future potential of augmented reality glasses, including a collaboration with Ray-Ban for the first version.

  • 01:30:00 The podcast discusses the advancements in virtual and augmented reality technology, highlighting the potential for a mixed reality experience that combines both. It explores the possibilities of holograms and smart glasses, envisioning a future where physical objects can be replaced by digital representations. The conversation also touches on the implications of this technology for collaboration, dating, and the integration of the physical and digital worlds.
  • 01:35:00 The podcast discusses the integration of the digital and physical worlds in virtual reality experiences. The guest believes that the future will involve a combination of the physical world and digital artifacts, creating a more profound experience. They also discuss the importance of physical activity and how virtual reality can encourage movement.
  • 01:40:00 The podcast discusses the potential of virtual reality (VR) and augmented reality (AR) technologies in transforming exercise and physical activities. It explores the idea of using VR to improve flexibility, form, and resistance training, as well as the challenges of tracking body movements and applying haptic feedback. The conversation also touches on the possibilities of using VR for artistic pursuits such as painting and learning to play musical instruments.
  • 01:45:00 The podcast discusses the potential of virtual technology in education and expression, highlighting the accessibility and affordability it can provide. It also explores the idea of using mixed reality for experiments and training in various fields, such as surgery. Additionally, the conversation touches on the potential benefits and challenges of using virtual interactions to improve social skills and reduce anxiety. The discussion concludes with a mention of the new Ray-Ban glasses that allow for voice commands and personalized audio experiences.
  • 01:50:00 The guest discusses the design and future potential of augmented reality (AR) glasses, aiming for a stylish and comfortable form factor. They mention the challenges of miniaturizing a supercomputer and creating holographic displays. Two approaches are being taken: one for a long-term vision and another for a more immediate version with basic features. The guest also mentions a collaboration with Ray-Ban for the first version of the glasses.
  • 01:55:00 The podcast discusses the potential of smart glasses, highlighting their ability to enhance user experiences by allowing them to listen to music, take calls, and access AI assistants. The design goals of smart glasses include enabling users to see and hear what their AI assistant does, while keeping them present in the real world. Privacy concerns and the need for consent when recording with smart glasses are also addressed.

02:00:00 - 02:15:31

Mark Zuckerberg and Dr. Priscilla Chan discuss the development of accessible and affordable technology, including smart glasses and AI interfaces. They explore the potential benefits of these technologies in addressing issues like myopia and social engagement. The conversation also emphasizes the importance of maintaining control over AI representations. Overall, they express optimism about the positive possibilities of AI in various aspects of life.

  • 02:00:00 The podcast discusses the approach of building accessible and affordable technology that can be used by everyone, contrasting it with companies like Apple that focus on premium pricing. They explore the potential of glasses to address the problem of myopia caused by excessive screen time and the benefits of accessing the visual system through technology. The conversation also touches on the value of eye tracking and the trade-offs involved in incorporating different sensors into devices.
  • 02:05:00 The podcast discusses the development of smart glasses and the different options consumers may have in terms of features and design. It also explores the potential for AI interfaces and avatars in social media, allowing creators to engage with their audience through AI versions of themselves. The importance of maintaining control over these AI representations is emphasized.
  • 02:10:00 The podcast episode features a discussion with Mark Zuckerberg and Dr. Priscilla Chan about the potential of AI assistants and their impact on various aspects of life, including social engagement, learning, and mental and physical health. They highlight the positive possibilities of AI and express optimism about the future. The conversation also touches on the use of AI characters in interactive experiences, with Snoop Dogg mentioned as an example.

Welcome to the Huberman Lab Podcast, where we discuss science and science-based tools

for everyday life.

I'm Andrew Huberman, and I'm a professor of neurobiology and ophthalmology at Stanford

School of Medicine.

My guests today are Mark Zuckerberg and Dr. Priscilla Chan.

Mark Zuckerberg, as everybody knows, founded the company Facebook.

He is now the CEO of Meta, which includes Facebook, Instagram, WhatsApp, and other technology

platforms.

Dr. Priscilla Chan graduated from Harvard and went on to do her medical degree at the

University of California, San Francisco.

Mark Zuckerberg and Dr. Priscilla Chan are married and the co-founders of the CZI, or

Chan Zuckerberg Initiative, a philanthropic organization whose stated goal is to cure

all human diseases.

The Chan Zuckerberg Initiative is accomplishing that by providing critical funding not available

elsewhere, as well as a novel framework for discovery of the basic functioning of cells,

cataloging all the different human cell types, as well as providing AI or artificial intelligence

platforms to mine all of that data to discover new pathways and cures for all human diseases.

The first hour of today's discussion is held with both Dr. Priscilla Chan and Mark Zuckerberg,

during which we discuss the CZI and what it really means to try and cure all human diseases.

We talk about the motivational backbone for the CZI that extends well into each of their

personal histories.

Indeed, you'll learn quite a lot about Dr. Priscilla Chan, who has, I must say, an absolutely

incredible family story leading up to her role as a physician and her motivations for

the CZI and beyond.

And you'll learn from Mark how he's bringing an engineering and AI perspective to discovery

of new cures for human disease.

The second half of today's discussion is just between Mark Zuckerberg and me, during

which we discuss various meta-platforms, including, of course, social media platforms, and their

effects on mental health in children and adults.

We also discuss VR, virtual reality, as well as augmented and mixed reality.

And we discuss AI, artificial intelligence, and how it stands to transform not just our

online experiences with social media and other technologies, but how it stands to potentially

transform every aspect of everyday life.

Before we begin, I'd like to emphasize that this podcast is separate from my teaching

and research roles at Stanford.

It is, however, part of my desire and effort to bring zero-cost-to-consumer information

about science and science-related tools to the general public.

In keeping with that theme, I'd like to thank the sponsors of today's podcast.

Our first sponsor is Eight Sleep.

Eight Sleep makes smart mattress covers with cooling, heating, and sleep tracking capacity.

I've spoken many times before on this podcast about the fact that getting a great night's

sleep really is the foundation of mental health, physical health, and performance.

One of the key things to getting a great night's sleep is to make sure that the temperature

of your sleeping environment is correct.

And that's because in order to fall and stay deeply asleep, your body temperature actually

has to drop by about one to three degrees.

And in order to wake up feeling refreshed and energized, your body temperature actually

has to increase by about one to three degrees.

With Eight Sleep, you can program the temperature of your sleeping environment in the beginning,

middle, and end of your night.

It has a number of other features like tracking the amount of rapid eye movement and slow

wave sleep that you get.

Things that are essential to really dialing in the perfect night's sleep for you.

I've been sleeping on an Eight Sleep mattress cover for well over two years now, and it

has greatly improved my sleep.

I fall asleep far more quickly.

I wake up far less often in the middle of the night, and I wake up feeling far more refreshed

than I ever did prior to using an Eight Sleep mattress cover.

If you'd like to try Eight Sleep, you can go to EightSleep.com slash Huberman to save $150

off their Pod 3 cover.

Eight Sleep currently ships to the USA, Canada, UK, select countries in the EU, and Australia.

Again, that's EightSleep.com slash Huberman.

Today's episode is also brought to us by Element.

Element is an electrolyte drink that has everything you need and nothing you don't.

That means plenty of electrolytes, sodium, magnesium, and potassium, and no sugar.

The electrolytes are absolutely essential for the functioning of every cell in your body

and your neurons, your nerve cells, rely on sodium, magnesium, and potassium in order

to communicate with one another electrically and chemically.

Element contains the optimal ratio of electrolytes for the functioning of neurons and the other

cells of your body.

Every morning, I drink a packet of Element dissolved in about 32 ounces of water.

I do that just for general hydration and to make sure that I have adequate electrolytes

for any activities that day.

I'll often also have an element packet or even two packets in 32 to 60 ounces of water

if I'm exercising very hard and certainly if I'm sweating a lot in order to make sure

that I replace those electrolytes.

If you'd like to try Element, you can go to DrinkLMNT.com slash Huberman to get a free

sample pack with your purchase.

Again, that's DrinkLMNT.com slash Huberman.

I'm pleased to announce that we will be hosting four live events in Australia, each

of which is entitled The Brain Body Contract, during which I will share science and science-related

tools for mental health, physical health, and performance.

There will also be a live question and answer session.

We have limited tickets still available for the event in Melbourne on February 10th, as

well as the event in Brisbane on February 24th.

Our event in Sydney at the Sydney Opera House sold out very quickly, so as a consequence,

we've now scheduled a second event in Sydney at the Aware Super Theater on February 18th.

To access tickets to any of these events, you can go to HubermanLab.com slash Events

and use the code Huberman at checkout.

I hope to see you there.

And as always, thank you for your interest in science.

And now for my discussion with Mark Zuckerberg and Dr. Priscilla Chan.

Priscilla and Mark, so great to meet you, and thank you for having me here in your home.

Oh, thanks for having us on the podcast.

Yeah.

I'd like to talk about the CZI, the Chan Zuckerberg Initiative.

I learned about this a few years ago when my lab was and still is now at Stanford as

a very exciting philanthropic effort that has a truly big mission.

I can't imagine a bigger mission.

So maybe you can tell us what that big mission is, and then we can get into some of the mechanics

of how that big mission can become a reality.

So like you're mentioning, in 2015, we launched the Chan Zuckerberg Initiative.

And what we were hoping to do at CZI was think about how do we build a better future for

everyone and looking for ways where we can contribute the resources that we have to bring

philanthropically, and the experiences that Mark and I have had, for me as a physician

and educator, for Mark as an engineer, and then our ability to bring teams together to

build.

Mark has been a builder throughout his career, and what could we do if we actually put together

a team to build tools and do great science?

And so within our science portfolio, we've really been focused on what some people think

is either an incredibly audacious goal or an inevitable goal.

But I think about it as something that will happen if we continue focusing on it, which

is to be able to cure, prevent, or manage all disease by the end of the century.

All disease.

So that's important, right?

A lot of times people ask which disease, and the whole point is that there is not one disease.

And it's really about taking a step back to where I always found the most hope as a physician,

which is new discoveries and new opportunities and new ways of understanding how to keep people

well come from basic science.

So our strategy at CZI is really to build tools, fund science, change the way basic

scientists can see the world and how they can move quickly in their discoveries.

And so that's what we launched in 2015.

We do work in three ways.

We fund great scientists.

We build tools right now, software tools to help move science along and make it easier

for scientists to do their work.

And we do science.

You mentioned Stanford being an important pillar for our science work.

We've built what we call biohubs, institutes where teams can take on grand challenges to

do work that wouldn't be possible in a single lab or within a single discipline.

And our first biohub was launched in San Francisco, a collaboration between Stanford, UC Berkeley,

and UCSF.

Amazing.

Curing all diseases implies that there will either be a ton of knowledge gleaned from

this effort, which I'm certain there will be, and there already has been.

We can talk about some of those early successes in a moment.

But it also implies that if we can understand some basic operations of diseases and cells

that transcend autism, Huntington's, Parkinson's, cancer, and any other disease, perhaps there

are some core principles that would make the big mission a reality, so to speak.

What I'm basically saying is, how are you attacking this?

My belief is that the cell sits at the center of all discussion about disease, given that

our body is made up of cells and different types of cells.

So maybe you could just illuminate for us a little bit of what the cell is in your mind

as it relates to disease and how one goes about understanding disease in the context

of cells, because ultimately, that's what we're made up of.

Well, let's get to the cell thing in a moment, but just even taking a step back from that.

We don't think that it's CZI itself that's going to cure, prevent, or manage all diseases.

The goal is to give the scientific community and scientists around the world the tools

to accelerate the pace of science.

We spent a lot of time, when we were getting started with this, looking at the history

of science and trying to understand the trends and how they've played out over time.

If you look over this very long-term arc, most large-scale discoveries are preceded

by the invention of a new tool or a new way to see something, and it's not just in biology.

It's like having a telescope came before a lot of discoveries in astronomy and astrophysics,

but similarly, the microscope and just different ways to observe things or different platforms.

The ability to do vaccines preceded the ability to cure a lot of different things.

This is the engineering part that you were talking about, about building tools.

We view our goal as trying to bring together some scientific and engineering knowledge

to build tools that empower the whole field.

That's the big arc, and a lot of the things that we're focused on, including the work

in single-cell and cell understanding, which you can jump in and get into if you want.

I think we generally agree with the premise that if you want to understand this stuff

from first principles, people study organs a lot, like they study how things present

across the body, but there's not a very widespread understanding of how each cell operates.

This is a big part of some of the initial work that we tried to do on the human cell

atlas and understanding what are the different cells, and there's a bunch more work that

we wanted to do to carry that forward.

Overall, I think when we think about the next 10 years here of this long arc to try to empower

the community to be able to cure, prevent, or manage all diseases, we think that the

next 10 years should really be primarily about being able to measure and observe more things

in human biology.

There are a lot of limits to that, and it's like you want to look at something through

a microscope.

You can't usually see living tissue because it's hard to see through skin or things

like that.

There are a lot of different techniques that will help us observe different things.

This is where the engineering background comes in a bit.

When I think about this from the perspective of how you'd write code or something, the

idea of trying to debug or fix a code base, but not be able to step through the code line

by line, it's not going to happen.

At the beginning of any big project that we do at Meta, we like to spend a bunch of the

time up front just trying to instrument things and understand what are we going to look at

and how are we going to measure things so we know we're making progress and know what

to optimize.

This is such a long-term journey that we think that it actually makes sense to take the next

10 years to build those kind of tools for biology and understanding just how the human

body works in action, and a big part of that is cells.
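
To ground the "instrument first" engineering habit described here in code, below is a minimal Python sketch of the kind of measurement hook that practice implies: wrap a function so every call reports a count and a latency before anyone starts optimizing. All names here are hypothetical illustrations, not Meta's actual tooling.

```python
import time
from collections import defaultdict

# Hypothetical illustration of "instrument first, optimize later":
# a decorator that records call counts and cumulative latency so you
# can observe a system's behavior before you try to change it.
metrics = defaultdict(lambda: {"calls": 0, "total_s": 0.0})

def instrumented(fn):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            m = metrics[fn.__name__]
            m["calls"] += 1
            m["total_s"] += time.perf_counter() - start
    return wrapper

@instrumented
def analyze_sample(sample):
    # Stand-in for any step you want visibility into.
    return sum(x * x for x in sample)

if __name__ == "__main__":
    for _ in range(1000):
        analyze_sample(range(100))
    print(dict(metrics))  # e.g. {'analyze_sample': {'calls': 1000, 'total_s': ...}}
```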

I don't know.

Do you want to jump in and talk about some of the efforts?

I'd just interrupt briefly and just ask about the different interventions, so to speak,

that CZI is in a unique position to bring to the quest to cure all diseases.

I can think of, I know as a scientist that money is necessary but not sufficient.

When you have money, you can hire more people.

You can try different things.

That's critical, but a lot of philanthropy includes money.

The other component is you want to be able to see things, as you pointed out.

You want to know the normal disease process.

What is a healthy cell?

What's a diseased cell?

Cells are constantly being bombarded with challenges and then repairing themselves, and then what we

call cancer is just kind of the runaway train of those challenges not being met by the cell

itself or something like that.

Better imaging tools, and then it sounds like there's not just a hardware component but

a software component.

This is where AI comes in.

Maybe we can, at some point, we can break this up into three different avenues.

One is understanding disease processes and healthy processes, we'll lump those together.

Then there's hardware, so microscopes, lenses, digital, deconvolution, ways of seeing things

in bolder relief and more precision, and then there's how to manage all the data, and then

I love the idea that maybe AI could do what human brains can't do alone, and manage understanding

of the data.

It's one thing to organize data.

It's another to say, as you pointed out in the analogy with code, that this particular

gene and that particular gene are potentially interesting whereas a human being would never

make that potential connection.

In terms of the tools that CZI can bring to the table: we fund science, like you're talking about.

We try to, there's lots of ways to fund science, and just to be clear, what we fund is a tiny

fraction of what the NIH funds, for instance.

You guys have been generous enough that it definitely holds weight relative to NIH's contribution.

I think every funder has its own role in the ecosystem, and for us, it's really how do we

incentivize new points of view?

How do we incentivize collaboration?

How do we incentivize open science?

A lot of our grants include inviting people to look at different fields.

Our first neuroscience RFA was aimed towards incentivizing people from different backgrounds,

immunologists, microbiologists, to come and look at how our nervous system works and how

to keep it healthy.

Or we asked that our grantees participate in the preprint movement to accelerate the

rate of sharing knowledge and actually others being able to build upon science.

That's the funding that we do.

In terms of building, we build software and hardware, like you mentioned.

We put together teams that can build tools that are more durable and scalable than someone

in a single lab might be incentivized to do.

There's a ton of great ideas, and nowadays, most scientists can tinker and build something

useful for their lab, but it's really hard for them to be able to share that tool sometimes

beyond their own laptop or forget the next lab over or across the globe.

We partner with scientists to see what is useful, what kinds of tools.

In imaging, there's napari, a useful image annotation tool that was born from an open-source community.

How can we contribute to that?

Or Cell by Gene, which works on single-cell data sets: how can we make it a

useful tool so that scientists can share data sets, analyze their own, and contribute

to a larger corpus of information?

We have software teams that are building, collaborating with scientists to make sure

that we're building easy to use, durable, translatable tools across the scientific community in the

areas that we work in.

We also have institutes.

This is where the imaging work comes in, where we are proud owners of an electron microscope

right now.

It's going to be installed at our imaging institute, and that will really contribute

to a way where we can see work differently.

More hardware does need to be developed.

We're partnering with a fantastic scientist in the Biohub network to build a mini-phase

plate to align the electrons through the electron microscope to be able to increase the resolution

so we can see in sharper detail.

There's a lot of innovative work within the network that's happening, and these institutes

have grand challenges that they're working on.

Back to your question about cells.

Cells are just the smallest units that are alive.

Your body, all of our bodies, have many, many, many cells.

There's an estimate of some 37 trillion different cells in your body, and what are

they all doing, and what do they look like when they're healthy, and you're healthy?

What do they look like when you're sick?

Where we're at right now with our understanding of cells and what happens when you get sick

is basically we've gotten pretty good at, from the Human Genome Project, looking at

how different mutations in your genetic code lead you to be more susceptible to getting

sick or directly cause you to get sick.

We go from a mutation in your DNA to, wow, you now have Huntington's disease, for instance.

There's a lot that happens in the middle, and that's one of the questions that we're

going after at CZI is, what actually happens?

So an analogy that I like to use to share with my friends is, right now, say we have

a recipe for a cake.

We know there's a typo in the recipe, and then the cake is awful.

That's all we know.

We don't know how the chef interprets the typo.

We don't know what happens in the oven, and we don't actually know exactly how it's

connected to why the cake didn't turn out how you had expected.

A lot of that is unknown, but we can actually systematically try to break this down.

And one segment of that journey that we're looking at is how that mutation gets translated

and acted upon in your cells.

And all of your cells have what's called mRNA.

mRNA are the actual instructions that are taken from the DNA.

And what our work in single cell is about is looking at how every cell in your body is actually

interpreting your DNA slightly differently.

And what happens when healthy cells are interpreting the DNA instructions and when sick cells are

interpreting those directions?

And that is a ton of data.

I just told you there's 37 trillion cells.

There are different large sets of mRNA in each cell.

But the work that we've been funding is looking at, first of all, gathering that information.

We've been incredibly lucky to be part of a very fast moving field where we've gone

from in 2017 funding some methods work to now having really not complete, but nearly

complete atlases of how the human body works, how flies work, how mice work at the single

cell level, and being able to then try to piece together how that all comes

together when you're healthy and when you're sick.

And the neat thing about the sort of inflection point where we're at in AI is that I can't

look at this data and make sense of it.

There's just too much of it.

Biology is complex.

Human bodies are complex.

We need this much information.

But the use of large language models can help us actually look at that data and gain insights,

look at what trends are consistent with health and what trends are unexpected.

And eventually our hope, through the use of these data sets that we've helped curate and

the application of large language models, is to be able to formulate a virtual cell,

a cell that's completely built off of the data sets of what we know about the human

body, but allows us to manipulate and learn faster and try new things to help move science

and then medicine along.
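
To make the "language model over cells" idea concrete: one published approach, rank-value encoding (used by cell-level transformer models such as Geneformer), turns each cell's expression profile into an ordered sequence of gene tokens that a transformer can read like a sentence. The sketch below illustrates only that encoding step, with an invented gene panel and fake counts; it is not CZI's actual pipeline.

```python
import numpy as np

# Toy illustration of rank-value encoding: turn one cell's expression
# vector into a sequence of gene tokens, ordered from most to least
# expressed, which a transformer can then consume like a sentence.
genes = np.array(["CFTR", "INS", "MYH7", "GCG", "ACTB"])  # invented panel
expression = np.array([0.2, 7.5, 0.0, 3.1, 9.8])          # fake counts

order = np.argsort(-expression)               # highest expression first
tokens = genes[order][expression[order] > 0]  # drop unexpressed genes
print(list(tokens))  # ['ACTB', 'INS', 'GCG', 'CFTR']
```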

Do you think we've cataloged the total number of different cell types?

Every week I look at great journals like Cell, Nature and Science and for instance I saw

recently that using single cell sequencing, they've categorized 18 plus different types

of fat cells.

We always think of like a fat cell versus a muscle cell.

So now you've got 18 types.

Each one is going to express many, many different genes and mRNAs, and perhaps one of them

is responsible for what we see in advanced type 2 diabetes or in other forms of obesity

or where people can't lay down fat cells, which turns out to be just as detrimental

in those extreme cases.

So now you've got all these lists of genes, but I always thought of single cell sequencing

as necessary but not sufficient.

You need the information, but it doesn't resolve the problem.

And I think of it more as a hypothesis generating experiment.

So you have all these genes and you could say, wow, this gene is particularly elevated

in the diabetic cell type, let's say one of these fat cells, or muscle cells for that

matter, whereas it's not in non-diabetics.

So then of the millions of different cells, maybe only five of them differ dramatically.

So then you generate a hypothesis, oh, it's the ones that differ dramatically that are

important, but maybe one of those genes, when it's only 50% changed, has a huge effect because

of some network biology effect.

And so I guess what I'm trying to get to here is how does one meet that challenge and

can AI help resolve that challenge by essentially turning those lists of genes into 10,000

hypotheses?

Because I'll tell you that the graduate students and postdocs in my lab get a chance to test

one hypothesis at a time.

And that's really the challenge, let alone one lab.

And so for those that are listening to this, hopefully it's not getting outside the

scope of standard understanding or the understanding we've generated here.

But what I'm basically saying is you have to pick at some point.

More data always sounds great, but then how do you decide what to test?
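
For readers who want to see what this hypothesis-generating step can look like in practice, here is a minimal sketch using the open-source scanpy library (a community tool, not a CZI product, though it operates on exactly this kind of single-cell data; it assumes scanpy is installed): it ranks genes by differential expression across annotated cell types, producing the ordered candidate lists a lab would then have to winnow down to testable hypotheses.

```python
import scanpy as sc
import pandas as pd

# Minimal sketch: turn a single-cell dataset into ranked lists of
# candidate genes per cell type -- hypothesis generation, not proof.
adata = sc.datasets.pbmc68k_reduced()  # small demo dataset bundled with scanpy

# Wilcoxon rank-sum test of each annotated cell type against the rest.
sc.tl.rank_genes_groups(adata, groupby="bulk_labels", method="wilcoxon")

# Top five differentially expressed genes for every cell type.
print(pd.DataFrame(adata.uns["rank_genes_groups"]["names"]).head(5))
```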

So no, we don't know all the cell types.

I think one thing that was really exciting when we first launched this work was cystic

fibrosis.

Cystic fibrosis is caused by mutation in CFTR.

That's pretty well known.

It affects a certain channel that makes it hard for mucus to be cleared.

That's the basics of cystic fibrosis.

When I went to medical school, it was taught as fact.

So their lungs fill up with fluid, these are people carrying around sacks of fluid filling

up.

I've worked with people, and then they have to literally dump the fluid out.

Exactly.

They can't run or do an intense exercise.

Life is shorter.

Life is shorter.

And when we applied single cell methodologies to the lungs, they discovered an entirely

new cell type that actually is affected by the CF mutation, the cystic

fibrosis mutation, that actually changes the paradigm of how we think about cystic fibrosis.

Just unknown.

So I don't think we know all the cell types.

I think we'll continue to discover them, and we'll continue to discover new relationships

between cell and disease.

Which leads me to the second example I want to bring up is this large data set that the

entire scientific community has built around single cell is starting to allow us to say,

this mutation, where is it expressed?

What cell types is it expressed in?

And we actually have built a tool at CZI called Cell by Gene, where you can put in the mutation

that you're interested in.

And it gives you a heat map across cell types, of which cell types are expressing

the gene that you're interested in.

And so then you can start looking at, okay, if I look at gene X, and I know it's related

to heart disease, but if you look at the heat map, it's also spiking in the pancreas.

That allows you to generate a hypothesis, why?

And what happens when this gene is mutated in the function of your pancreas?

Really exciting way to look and ask questions differently.

And you can also imagine a world where if you're trying to develop a therapy, a drug,

and the goal is to treat the function of the heart, but you know that it's also really

active in the pancreas again.

So what is there going to be an unexpected side effect that you should think about as

you're bringing this drug to clinical trials?

So it's an incredibly exciting tool and one that's only going to get better as we get

more and more sophisticated ways to analyze the data.
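
As a rough illustration of the kind of view described here (which genes light up in which cell types), the sketch below draws a gene-by-cell-type heat map from a toy expression table. All gene names, cell types, and values are invented, and CZI's actual Cell by Gene (CELLxGENE) tool is a hosted web application, not this snippet.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Toy gene x cell-type mean-expression table (all values invented).
data = pd.DataFrame(
    {
        "cardiomyocyte": [8.1, 0.3, 1.2],
        "pancreatic beta": [5.9, 7.4, 0.8],
        "hepatocyte": [0.4, 0.2, 6.5],
    },
    index=["GENE_X", "INS", "ALB"],
)

fig, ax = plt.subplots()
im = ax.imshow(data.values, cmap="viridis")
ax.set_xticks(range(data.shape[1]), data.columns, rotation=45, ha="right")
ax.set_yticks(range(data.shape[0]), data.index)
fig.colorbar(im, label="mean expression (arbitrary units)")
plt.tight_layout()
plt.show()
# A "heart" gene also spiking in the pancreas column (GENE_X here) is
# exactly the kind of pattern that suggests a new hypothesis.
```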

I must say I love that because if I look at the advances in neuroscience over the last

15 years, most of them didn't necessarily come from looking at the nervous system.

It came from the understanding that the immune system impacts the brain.

Everyone, prior to that, talked about the brain as an immune-privileged organ.

What you just said also bridges the divide between single cells, organs, and systems, right?

Because ultimately cells make up organs, organs make up systems and they're all talking to

one another, and everyone nowadays is familiar with, like, the gut-brain axis or the microbiome

being so important, but rarely is the crosstalk between organs discussed, so to speak.

So I think it's wonderful.

So that tool was generated by CZI or CZI funded that tool.

So how does this?

We built it.

You built it.

So is it built by meta?

Is this meta?

No, no, no.

CZI has its own engineers.

Got it.

Yeah.

They're completely different organizations.

Incredible.

And so a graduate student or postdoc who's interested in a particular mutation could

put this mutation into this database that graduate student or postdoc might be in a laboratory

known for working on heart, but suddenly find that they're collaborating with other scientists

that work on the pancreas, which also is wonderful because it bridges the divide between these

fields.

Fields are so siloed in science, not just different buildings, but people rarely talk, unless

things like this are happening.

I mean, the graduate student is someone that we want to empower because one, they're the

future of science, as you know.

And within Cell by Gene, if you put in the gene you're interested in and it shows you

the heat map, we also will pull up like the most relevant papers to that gene.

And so like read these things.

Fantastic.

As we all know, quality nutrition influences, of course, our physical health, but also our

mental health and our cognitive functioning, our memory, our ability to learn new things

and to focus.

And we know that one of the most important features of high quality nutrition is making

sure that we get enough vitamins and minerals from high quality, unprocessed or minimally

processed sources, as well as enough probiotics and prebiotics and fiber to support basically

all the cellular functions in our body, including the gut microbiome.

Now I, like most everybody, try to get optimal nutrition from whole foods, ideally mostly

from minimally processed or non-processed foods.

However, one of the challenges that I and so many other people face is getting enough

servings of high quality fruits and vegetables per day, as well as fiber and probiotics that

often accompany those fruits and vegetables.

That's why way back in 2012, long before I ever had a podcast, I started drinking AG1.

And so I'm delighted that AG1 is sponsoring the Huberman Lab podcast.

The reason I started taking AG1 and the reason I still drink AG1 once or twice a day is that

it provides all of my foundational nutritional needs.

That is, it provides insurance that I get the proper amounts of those vitamins, minerals,

probiotics and fiber to ensure optimal mental health, physical health and performance.

If you'd like to try AG1, you can go to drinkag1.com slash Huberman to claim a special offer.

They're giving away five free travel packs plus a year supply of vitamin D3K2.

Again, that's drinkag1.com slash Huberman to claim that special offer.

I just think going back to your question from before, I mean, are there going to be more

cell types that get discovered?

I mean, I assume so, right?

I mean, with a catalog of this stuff, you know, it doesn't seem like we're ever done, right?

We keep on finding more.

But I think that that gets to one of the strengths of modern LLMs,

which is the ability to kind of imagine different states that things can be in.

So from all the work that we've done and funded on the human cell atlas, there is a

large corpus of data that you can now train a kind of large scale model on.

And one of the things that we're doing at CZI, which I think is pretty exciting, is

building what we think is one of the largest nonprofit life sciences AI clusters, right?

It's like a, you know, on the order of a thousand GPUs and, you know, it's larger than what

most people have access to in academia that you can do serious engineering work on.

And, you know, by basically training a model with all of the human cell atlas data and

a bunch of other inputs as well, we think you'll be able to basically imagine all of

the different types of cells and all the different states that they can be in when they're

healthy and diseased, and how they'll interact

with each other and with different potential drugs.

But I mean, with the state of LLMs, I think this is where it's helpful to

have a good understanding and be grounded in, like, the modern state of AI.

I mean, these things are not foolproof, right?

I mean, one of the flaws of modern LLMs is they hallucinate, right?

So the question is, how do you make it so that that can be an advantage rather than a disadvantage?

And I think the way that it ends up being an advantage is when they help you imagine

a bunch of states that something could be in, but then you, you know, as the scientist or

engineer go and validate that those are true, whether they're, you know, solutions to how

a protein can be folded or possible states that a cell could be in when it's interacting

with other things.

But, you know, we're not yet at the state with AI, that you can just take the outputs

of these things as, like as gospel and run from there.

But they are very good, I think, as you said, hypothesis generators or possible solution

generators that then you can go validate.

So that's a very powerful thing that we can do: basically, you know, building

on the first five years of science work around the Human Cell Atlas and all the data that's

been built out, carry that forward into something that I think is going to be a very novel tool

going forward.

And that's the type of thing that I think we're set up to do well.

I mean, you all, you had this exchange a little while back about, you know, funding

levels and how CZI is, you know, just sort of a drop in the bucket compared

to NIH.

But the thing that I think we can do that's different is funding

some of these longer-term, bigger projects that it is otherwise hard to galvanize and pull

together the energy to do.

And a lot of what most science funding is, is relatively small projects that

are exploring things over relatively short time horizons.

And one of the things that we try to do is build these tools over, you know,

five, 10, 15 year periods.

They're often projects that require hundreds of millions of dollars of funding and world

class engineering teams and infrastructure to do.

And that, I think, is a pretty cool contribution to the field; there aren't

as many other folks who are doing that kind of thing. But that's one of the reasons why

I'm personally excited about the virtual cell stuff, because it just, it's like this perfect

intersection of all the stuff that we've done in single cell, the previous collaborations

that we've done with the field and, you know, bringing together the, the industry and AI

expertise around this.

Yeah, I completely agree that the model of science that you've put together with CZI

isn't just unique; it's extremely important. The independent investigator

model is what's driven the progression of science in this country and to some extent

in Northern Europe for the last hundred years.

And it's wonderful on the one hand, because it allows for that image we have of a scientist

kind of tinkering away with the people in their lab, and then the eureka moments.

And that hopefully translates to better human health.

But I think in my opinion, we've moved past that model as the most effective model or

the only model that should be explored.

Yeah, I just think it's a balance.

It's a balance.

You want that, but you want to empower those people.

I think that's what these tools do: empower those people.

And there are mechanisms to do that, like NIH, but it's hard to do collaborative

science. It's sort of interesting that we're sitting here, because I grew up

right near here as well, not far from the garage model of tech, right?

The Hewlett and Packard model, not far from here at all.

And the idea was, you know, the tinkerer in the garage, the inventor.

And then people often forget that to implement all the technologies they

discovered took enormous factories and warehouses.

So, you know, there's, there's a similarity there to Facebook, Meta, et cetera.

But I think in science, we imagine that the scientists alone in their laboratory and

those Eureka moments, but I think nowadays that the big questions really require extensive

collaboration and certainly tool development.

And one of the tools that you keep coming back to is these LLMs, these large language

models, and maybe you could just elaborate for those that aren't familiar, you

know, what is a large language model, for the uninformed?

What is it, and what does it allow us to do that, you know, different

types of AI don't allow?

Or more importantly, perhaps, what does it allow us to do that a bunch of really smart

people, highly informed in a given area of science, staring at the data, what can

it do that they can't do?

Sure.

So I think a lot of the progression of machine learning has been about building

systems, neural networks or otherwise, that can basically make sense and find patterns

in larger and larger amounts of data.

And there was a breakthrough a number of years back that some folks at Google actually made

called this transformer model architecture.

And it was this huge breakthrough because before then there was somewhat of a cap where,

you know, if you fed more data into a neural network past some point, it didn't

really glean more insights from it.

Whereas transformers just, you know, we haven't seen the end of how big that can scale

to yet.

I mean, I think that there's a chance that we run into some ceiling, some asymptote, but

we haven't observed it yet, because we just haven't built big enough

systems yet.

And so I would guess that, I don't know, I think that this is actually one of the

big questions in the AI field today: are transformers and the

current model architectures sufficient?

And if you just build larger and larger clusters, do you eventually get something

that's like human intelligence or super intelligence, or, um, is there some kind of

fundamental limit to this architecture that we just haven't reached yet?

And once we kind of get a little bit further in building them out, then we'll

reach that, and then we'll need a few more leaps before we get to, you know, the level

of AI that I think will unlock, you know, a ton of really futuristic and

amazing things, but there's no doubt that even just being able to process the amount

of data that we can now with this model architecture, um, has unlocked a lot of

new use cases.
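
For the technically curious, the core operation of the transformer architecture referenced here is scaled dot-product attention. The minimal NumPy sketch below (toy dimensions, no training, no multi-head machinery) shows the single equation that lets the model weigh every element of a sequence against every other.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, the core
    operation of the transformer ("Attention Is All You Need")."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted mix of values

# Toy sequence: 4 tokens, each an 8-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (4, 8): each token re-expressed in terms of all others
```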

And the reason why they're called large language models is because one of the first

uses of them is, um, people basically feed in all of the language from, uh, basically

the World Wide Web.

And you can think about them as basically prediction machines.

So if you put in a prompt, it can basically, you know,

predict a version of what should come next.

So, you know, you like type in a headline for a news story and it can kind

of predict what it thinks the story should be, or you could train it so that it

could be a chatbot, right?

Where, okay, if you're prompted with this question, you get this response.

But one of the interesting things is it turns out that there's actually nothing

specific to using human language in it.

So if instead of feeding it human language, you use that model architecture for

a network and instead feed it all of the Human Cell Atlas data, then if

you prompt it with a state of a cell, it can spit out, um, different versions of,

like, how that cell can interact, or different states that the

cell could be in next when it interacts with different things.
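
The point that there's nothing language-specific in this setup can be seen with even the crudest sequence model. The toy bigram predictor below never knows whether its tokens are English words or gene identifiers (the gene sequences here are invented for illustration); it only counts which token tends to follow which.

```python
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count token -> next-token transitions; works on any token type."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, token):
    """Most frequent continuation seen in training, or None."""
    return counts[token].most_common(1)[0][0] if counts[token] else None

# Same code, two "languages": English words and made-up gene tokens.
english = [["the", "cell", "divides"], ["the", "cell", "dies"]]
cells = [["ACTB", "INS", "GCG"], ["ACTB", "INS", "CFTR"]]

print(predict_next(train_bigram(english), "the"))  # -> 'cell'
print(predict_next(train_bigram(cells), "ACTB"))   # -> 'INS'
```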

Does it have to take a genetics class?

So for instance, if you give it a bunch of genetics data, do you have to say,

Hey, by the way, and then you give it a genetics class so it understands that,

you know, you've got DNA, RNA, mRNA, and proteins.

I think that the basic, the basic nature of all these machine learning techniques

is they're, they're basically pattern recognition systems.

So they're these like very deep statistical machines, um, that are very

efficient at finding patterns.

Um, so it's not actually, you know, you don't need to teach a language

model that's trying to, you know, speak a language a lot of specific

things about that language; you just feed it a bunch of examples.

And then, you know, let's say you teach it about something, um, in English, but

then you also give it a bunch of examples of people speaking Italian.

Um, it'll actually be able to explain the thing that it learned in English in

Italian, right?

So the crossover, and just the pattern recognition, is the

thing that is pretty profound and powerful about this, but it really does apply to

a lot of different things.

Another example in the scientific community, um, has been the work on

AlphaFold, you know, that the folks at DeepMind have done on

protein folding. It's, you know, just basically a lot of the same

model architecture, but instead of language, they kind of

fed in all of the protein data, and you can give it a state and it can spit

out solutions to how those proteins get folded.

So it's very powerful.

I don't think we know yet as an industry what the natural limits

of it are and that that's one of the things that's pretty exciting about the

current state, but, um, it certainly allows you to solve problems that just

weren't solved with the generation of machine learning that came before it.

Sounds like CZI is moving a lot of work that was done in vitro, in dishes, and in vivo, in living organisms, model organisms or humans, to in silico, as we say. So do you foresee a future where a lot of biomedical research, certainly the work of CZI included, is done by machines? Obviously it's much lower cost, and you can run millions of experiments, which of course is not to say that humans are not going to be involved. But I love the idea that we can run experiments in silico en masse.

I think the in silico experiments are going to be incredibly helpful to test things quickly and cheaply, and to just unleash a lot of creativity. I do think you need to be very careful about making sure it still translates and matches to humans. One thing that's funny in basic science is that we've basically cured every single disease in mice: we know what's going on in mice when they have a number of diseases, because they're used as a model organism, but they are not humans. A lot of times that research is relevant but not directly one-to-one translatable to humans. So you just have to be really careful about making sure that it actually works for humans.
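A minimal sketch of the funnel being described, under loud assumptions: `simulate_effect` is a hypothetical stand-in for a virtual-cell model (here just a deterministic toy score), and the point is only the shape of the workflow, where many cheap simulated experiments feed a short wet-lab shortlist precisely because simulated hits may not translate to humans.

```python
# Sketch of the in silico -> wet lab funnel: score many candidate
# interventions cheaply in simulation, then send only the top few to
# slow, expensive validation. `simulate_effect` is a placeholder for a
# virtual-cell model, not a real simulator.
import random

def simulate_effect(candidate):
    """Hypothetical virtual-cell score in [0, 1); deterministic per candidate."""
    random.seed(candidate)  # toy determinism so the sketch is reproducible
    return random.random()

candidates = [f"compound_{i}" for i in range(10_000)]   # cheap to enumerate
scores = {c: simulate_effect(c) for c in candidates}    # cheap to "run"

# Only the strongest in silico hits graduate to wet-lab testing, because
# simulated results may not translate (the mouse-model caveat above).
shortlist = sorted(scores, key=scores.get, reverse=True)[:20]
print("send to wet lab for validation:", shortlist[:5], "...")
```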

Sounds like what CZI is doing is actually creating a new field. Mark, as I'm hearing all of this, I'm thinking, okay, this transcends the immunology department, cardiothoracic surgery, neuroscience. I mean, the idea of a new field, where you certainly embrace the realities of universities and laboratories, because that's where most of the work that you're funding is done. Is that right?

It is.

Okay. So maybe we need to think about what it means to do science differently.

And I think that's one of the things that's most exciting. Along those lines, it seems that bringing together a lot of different types of people at different major institutions is going to be especially important. So I know that the initial CZI Biohub, gratefully, included Stanford (we'll put that first on the list) but also UCSF, forgive me, many friends at UCSF, and also Berkeley. But there are now some additional institutions involved. So maybe you could talk about that, what motivated the decision to branch outside the Bay Area, and why you selected those particular additional institutions to be included.

Well, I'll just say a big part of why we wanted to create additional biohubs is that we were so impressed by the work that the folks who were running the first Biohub did. And I also think, and you should walk through the work of the Chicago Biohub and the New York Biohub that we just announced, that it's actually an interesting set of examples that balance the limits of what you want to do with physical material engineering and where things are purely biological. The Chicago team is really building more sensors to be able to understand what's going on in your body, but that's more of a physical engineering challenge. Whereas with the New York team, we basically talk about this as a cellular endoscope: being able to have an immune cell or something that can go in and understand what's going on in your body, but it's not a physical piece of hardware. It's a cell that you can basically have just go report out on different things that are happening inside the body.

So the cell is the microscope.

Yeah.

And then eventually actually being able to act on it. But you should go into more detail on all this.

So a core principle of how we think about biohubs is that, when we invited proposals, it had to be at least three institutions, really breaking down the barrier of a single university, oftentimes asking for the people designing the research aims to come from all different backgrounds, and to explain why the problem that they want to solve requires interdisciplinary, inter-institution collaboration to actually make happen. We just put that request for proposals out there, with our San Francisco Biohub as an example, where they've done incredible work in single-cell biology and infectious disease. And we got, I want to say, 57 proposals from over 150 institutions. A lot of ideas came together, and we are so, so excited that we've been able to launch Chicago and New York.

Chicago is a collaboration between UIUC, the University of Illinois Urbana-Champaign, the University of Chicago, and Northwestern. Obviously these universities are multifaceted, but if I were to describe them by their stereotypical strengths: Northwestern has an incredible medical and hospital system, the University of Chicago brings to the table incredible basic science strengths, and the University of Illinois is a computing powerhouse.

And so they came together and proposed that they were going to start thinking about cells and tissue, one of the layers that you just alluded to. How do the cells that we know behave and act differently when they come together as a tissue? One of the first tissues that they're starting with is skin. As a collaboration under the leadership of Shana Kelley, they've already been able to design engineered skin tissue whose architecture looks the same as what's in you and me.

And what they've done is build these super, super thin sensors, and they embed these sensors throughout the layers of this engineered tissue and read out the data. They want to see what these cells are secreting, how these cells talk to each other, and what happens when these cells get inflamed. Inflammation is an incredibly important process that drives 50% of all deaths, and so this is another sort of disease-agnostic approach: we want to understand inflammation, and they're going to get a ton of information out from these sensors that tells you what happens when something goes awry. Because right now we can say that when you have an allergic reaction your skin gets red and puffy, but what is the earliest signal of that? These sensors can look at the behaviors of these cells over time, and you can apply a large language model to look for the earliest statistically significant changes, which can allow you to intervene as early as possible.
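One hedged way to picture "the earliest statistically significant change" from embedded sensors: compare each reading to a baseline window and flag the first large deviation. The data and thresholds below are synthetic; a real pipeline would be multivariate across many sensors and far more careful statistically.

```python
# Toy earliest-change detector for a single sensor stream: establish a
# baseline from the first readings, then flag the first point that
# deviates by more than `z` standard deviations. Synthetic data only.
import numpy as np

def earliest_change(signal, baseline_len=50, z=4.0):
    """Return the index of the first reading far outside the baseline, or None."""
    base = signal[:baseline_len]
    mu, sd = base.mean(), base.std()
    for i, x in enumerate(signal[baseline_len:], start=baseline_len):
        if abs(x - mu) > z * sd:
            return i
    return None

rng = np.random.default_rng(0)
cytokine = rng.normal(1.0, 0.05, 200)  # pretend secreted-signal readings
cytokine[120:] += 0.5                  # "inflammation" begins at t=120
print(earliest_change(cytokine))       # flags ~120, long before visible redness
```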

So that's what Chicago's doing; they're starting with skin cells. They're also looking at the neuromuscular junction, which is the connection where a neuron attaches to a muscle and tells the muscle how to behave. That's super important in things like ALS, but also in aging: the slowed transmission of information across that neuromuscular junction is what causes older people to fall, because their brain cannot trigger their muscles to react fast enough. And so we want to be able to embed these sensors to understand how these different interconnected systems within our bodies work together.

In New York, they're doing a related but equally exciting project, where they're engineering individual cells to be able to go in and identify changes in the human body. So what they'll do is, they're calling it...

It's wild.

I mean, I love it.

I mean, I don't want to go on a tangent, but for those that want to look it up: adaptive optics. There's a lot of distortion and interference when you try to look at something really small or really far away, and really smart physicists figured out, well, use the interference as part of the microscope; make those actually the lenses of the microscope. We should talk about imaging separately. It's extremely clever, and along those lines, it's not immediately intuitive, but when you hear it, it makes so much sense: make the cells that already can navigate to tissues, or embed themselves in tissues, be the microscope within that tissue.

I love it.

The way that I explain this to my friends and my family is that this is Fantastic Voyage, but real life. We are going into the human body, and we're using the immune cells, which are privileged and already working to keep your body healthy, and being able to target them to examine certain things. So you can engineer an immune cell to go into your body, look inside your coronary arteries, and say: are these arteries healthy, or are there plaques? Because plaques lead to blockage, which leads to heart attacks. And the cell can then record that information and report it back out. That's the first half of what the New York Biohub is going to do. The second half is: can you then engineer the cells to go do something about it? Can I then tell a different immune cell, one that is able to travel in your body, to go in and clean that up in a targeted way? So it's incredibly exciting. They're going to study things that are sort of immune privileged, that your immune system normally doesn't have access to, things like ovarian and pancreatic cancer. They'll also look at a number of neurodegenerative diseases, since the immune system doesn't presently have a ton of access to the nervous system.

It's both mind-blowing and it feels like sci-fi, but science is actually in a place where, if you really pushed a group of incredibly qualified scientists and said, could you do this if given the chance, the answer is: probably. Give us enough time, the right team, and resources, and it's doable.

Yeah.

I mean, it's a 10 to 15 year project, but it's awesome. Engineered cells.

Yeah.

I love the optimism, and the moment you said make the cell the microscope, so to speak, I was like: yes, yes, and yes. It just makes so much sense.

What motivated the decision to do the work of CZI in the context of existing universities, as opposed to, say, taking some of that real estate up in Redwood City where there's a bunch of space to put biotech companies, hiring people from all backgrounds, saying, hey, have at it, and doing this stuff from scratch? It's a very interesting decision to do this in the context of an existing framework of graduate students who need to do a thesis and get a first-author paper. Because there's a whole set of structures within academia that I think both facilitate but also limit the progression of science. That independent investigator model that we talked about a little bit earlier is so core to the way science has been done. This is very different and frankly sounds far more efficient, if I'm to be completely honest, and we'll see if I renew my NIH funding after saying that.

But I think we all want the same thing. As scientists, and as humans, we want to understand the way we work, we want healthy people to stay healthy, and we want sick people to get healthy. That's really ultimately the goal. It's not super complicated; it's just hard to do.

So the teams at the Biohub are actually independent of the universities. Each Biohub will probably have, in total, maybe 50 people working on sort of deep efforts. However, it's an acknowledgement that not all of the best scientists who can contribute to this area are going to want to leave a university or want to take on the full-time scope of this project. So the ability to partner with universities, and to have the faculty at all the universities be able to contribute to the overall project, is how the Biohub is structured.

Got it.

But a lot of the way that we're approaching CZI is as this long-term iterative project: try a bunch of different things, figure out which things produce the most interesting results, and then double down on those in the next five-year push. So we just went through this period where we wrapped up the first five years of the science program, and we tried a lot of different models, all kinds of different things. It's not that we think the Biohub model is the best or only model, but we found that it was a really interesting way to unlock a bunch of collaboration and bring some technical resources that allow for this longer-term development. And it's not something that is widely being pursued across the rest of the field. So we figured, okay, this is an interesting thing that we can help push on. But yeah, we do believe in the collaboration. I also think we come at this knowing that the way we're pursuing this is not the only way to do it or the way that everyone should do it. We're pretty aware of what the rest of the ecosystem is and how we can play a unique role in it.

It feels very synergistic with the way science is already done, and it also fills an incredibly important niche that, frankly, wasn't filled before.

Along the lines of implementation: let's say your large language models combined with imaging tools reveal that a particular set of genes acting in a cluster sets up an organ crash. Let's say the pancreas crashes at a particular stage of pancreatic cancer, still one of the deadliest of the cancers; there are others that you certainly wouldn't want to get, but that's among the ones you'd want to get least. So you discover that, and the idea is that AI then reveals some potential drug targets that bear out in vitro, in a dish, and in a mouse model. How does the actual implementation into drug discovery happen? Maybe this target is druggable, maybe it's not; maybe it requires some other approach, a laser ablation approach or something we don't know. But ultimately, is CZI going to be involved in the implementation of new therapeutics? Is that the idea?

Less so, less so. This is where it's important to work in an ecosystem and to know your own limitations. There are groups and startups and companies that take that and bring it to translation very effectively. I would say the place where we have a small window into that world is actually our work with rare disease groups. Through our Rare As One portfolio, we have funded patient advocates to create rare disease organizations where patients come together and actually pool their collective experience. They build bioregistries, registries of their natural history, and they partner both with researchers to do the research about their disease and with drug developers, to incentivize drug developers to focus on what they may need for their disease.

And one thing that's important to point out is that rare diseases aren't rare. There are over 7,000 rare diseases, and collectively they impact many, many individuals. From a basic science perspective, the incredibly fascinating thing about rare diseases is that they're actually windows into how the body normally should work. There are often genes that, when mutated, cause very specific diseases, but that tell you how the normal biology works as well.

Got it.

So you've discussed the major goals and initiatives of CZI for the next, say, five to ten years. And then beyond that, the targets will be explored by biotech companies; they'll grab those targets, test them, and implement them.

There have also, I think, been a couple of teams from the initial Biohub that were interested in spinning ideas right out into startups. Even though that's not a thing that we're going to pursue, because we're a philanthropy, we want to enable the work that gets done to be turned into companies and things that other people take and run with toward building, ultimately, therapeutics. So that's another zone, but it's not a thing that we're going to do.

Yeah.

Got it.

I gather you're both optimists.

Yeah.

Is that part of what brought you together? Forgive me for switching to a personal question, but I love the optimism that seems to sit at the root of CZI.

I will say that we are incredibly hopeful people, but it manifests in different ways between the two of us.

Yeah.

How would you describe your optimism versus mine? It's not a loaded question.

I don't know... huh.

I mean, I think I'm probably more technologically optimistic about what can be built. And I think you, because of your focus as an actual doctor, have more of a sense of how that's going to affect actual people in their lives. Whereas for me, a lot of my work is at scale: we touch a lot of people around the world, and the scale is sort of immense. And I think for you, it's being able to improve the lives of individuals, whether it's students at any of the schools that you've started or any of the stuff that we've supported through the education work, which isn't the topic here. Just being able to improve people's lives in that way is the thing that I've seen you be super passionate about. I don't know, do you agree with that characterization?

Yeah, I agree with that. I think that's very fair. And I'm sort of giggling to myself, because in day-to-day life, as life partners, our relative optimism comes through as: Mark is just overly optimistic about his time management, will get engrossed in interesting ideas, and he's late. Physicians are very punctual. And because he's late, I have to channel Mark-as-an-optimist whenever I'm waiting for him.

That's such a nice way.

Okay, I'll start using that.

That's what I think when I'm in the driveway with the kids waiting for you.

I'm like, Mark is an optimist.

And so his optimism translates to some tardiness, whereas I'm more like: how is this going to happen? I'm going to open a spreadsheet, I'm going to start putting together a plan, pulling together all the pieces, calling people to bring something to life. But it is. It's one of my favorite quotes: optimists tend to be successful and pessimists tend to be right.

And yeah, I think that's true in a lot of different aspects of life, right?

It's like, did you say that? Mark said...

No, no, no. I like it, but I did not invent it.

We'll give it to you. We'll put it out there. No, no, just kidding.

But I do think that there's really something to it, right? If you're discussing any idea, there are all these reasons why it might not work, and those reasons are probably true; the people who are stating them probably have some validity to them. But the question is: is that the most productive way to view the world? And I think, across the board, the people who tend to be the most productive and get the most done kind of need to be optimistic, because if you don't believe that something can get done, then why would you go work on it?

The reason I asked the question is that these days we hear a lot about the future looking so dark in these various ways. And you have children; so you have families, you are a family, excuse me, and you also have families independently that are now merged. I love the optimism behind CZI, because behind all this there's sort of a set of big statements on the wall. One: the future can be better than the present in terms of treating disease; maybe even, as you said, eliminating diseases, all diseases. I love that optimism. And two: there's a tractable path to do it. We're going to put, literally, money and time and energy and people and technology and AI behind that.

And so I have to ask: was having children a significant modifier in terms of your view of the future? You hear all this doom and gloom; what's the future going to be like for them? Did you sit back and think, what would it look like if there was a future with no diseases? Is that the future we want our children in? I mean, I'm voting a big yes, so we're not going to debate that at all. But was having children sort of an inspiration for CZI in some way?

Yeah.

For my answer to that, I would dial backwards, and I'll just tell a very brief story about my family. I'm the daughter of Chinese-Vietnamese refugees. My parents and grandparents were boat people; if you remember, people left Vietnam during the war in these small boats into the South China Sea.

And there were stories about how these boats would sink with whole families on them. So my grandparents, both sets of grandparents, who knew each other, decided that there was a better future out there, and they were willing to take risks for it. But they were afraid of losing all of their kids. My dad is one of six; my mom is one of ten. And so they decided that there was something out there in this bleak time, and they paired up their kids, one from each family, and sent them out on these little boats, before the internet, before cell phones, and just said: we'll see you on the other side. The kids were between the ages of about 10 and 25. So, young kids; my mom was an early teen when this happened.

And everyone made it. And I get to sit here and talk to you. So how could I not believe that better is possible? I hope that that's in my epigenetics somewhere, and that I carry it on.

That is a spectacular story.

Isn't that wild?

It is spectacular.

How can I be a pessimist with that?

I love it.

Yeah.

And I so appreciate that you became a physician, because you are now bringing that optimism and that epigenetic, cognitive, and emotional understanding to the field of medicine. So I'm grateful to the people that made that decision.

Yeah.

And you know, I've always known that story, but you don't understand how wild it feels until you have your own child. You're like, well, I refuse to let her use anything but glass bottles, or something like that. And then: oh my God, the risk and the willingness of my grandparents to believe in something bigger and better is just astounding. And our own children give it a sense of urgency.

A spectacular story. And you're sending knowledge out into the fields of science and bringing knowledge into the fields of science. And I love this: we'll see you on the other side.

Yeah.

I'm confident that it will all come back.

Well, thank you so much for that. Mark, you have the opportunity to talk about it: did having kids change your worldview?

It's really tough to beat that story.

It is tough to beat that story. And they are also your children, so in this case you get two for the price of one.

Well, having children definitely changes your time horizon. I think that's one thing: there were all these things that we had talked about, for as long as we've known each other, that we eventually wanted to go do. But then it's like, oh, we're having kids. We need to get on this, right?

That was actually one of the checklists, the baby checklist before our first: the baby's coming, we have to start CZI. And we were sitting in the hospital delivery room, finishing editing the letter that we were going to publish to announce the work that we were doing at CZI.

Some people think that was an exaggeration. It was not. We really were editing the final drafts.

Birthing CZI before your child was born. Well, it's an incredible initiative. I've been following it since its inception, and it's already been tremendously successful. Everyone in the field of science, and I have a lot of communication with those folks, feels the same way, and the future is even brighter for it. It's clear.

And thank you for expanding to the Midwest and New York.

And we're all very excited to see where all of this goes.

I share in your optimism and thank you for your time today.

Thank you.

Thank you.

A lot more to do.

I'd like to take a quick break and thank our sponsor, Inside Tracker.

Inside Tracker is a personalized nutrition platform that analyzes data

from your blood and DNA to help you better understand your body and help

you reach your health goals.

I've long been a believer in getting regular blood work done for the simple

reason that many of the factors that impact your immediate and long-term

health can only be analyzed from a quality blood test.

A major problem with a lot of blood tests out there, however, is that you get information back about metabolic factors, lipids, hormones, and so forth, but you don't know what to do with that information. Inside Tracker makes it very easy, because they have a personalized platform that allows you to see the levels of all those things, metabolic factors, lipids, hormones, et cetera, and it gives you specific directives you can follow, related to nutrition, behavioral modification, supplements, et cetera, that can help you bring those numbers into the ranges that are optimal for you.

If you'd like to try Inside Tracker, you can go to insidetracker.com/huberman to get 20% off any of Inside Tracker's plans. Again, that's insidetracker.com/huberman.

And now for my discussion with Mark Zuckerberg.

Slight shift of topic here.

You're extremely well known for your role in technology development, but by virtue of your personal interests, and also where Meta technology interfaces with mental health and physical health, you're starting to become synonymous with health, whether you realize it or not. Part of that is because there's posted footage of you rolling jiu-jitsu; you won a jiu-jitsu competition recently; you're doing other forms of martial arts, water sports including surfing, and on and on. So you're doing it yourself. But maybe we could just start off with technology and get this issue out of the way first, which is that I think many people assume that technology, especially technology that involves a screen of any kind, is going to be detrimental to our health. But that doesn't necessarily have to be the case. So could you explain how you see technology meshing with, inhibiting, or maybe even promoting physical and mental health?

Sure.

I mean, I think this is a really important topic. The research that we've done suggests that it's not all good or all bad; how you're using the technology has a big impact on whether it is basically a positive experience for you. And even within technology, even within social media, there's not one type of thing that people do. I think at its best, you're forming meaningful connections with other people. And there's a lot of research that suggests it's the relationships that we have, and friendships, that bring the most happiness in our lives, and at some level they even end up correlating with living a longer and healthier life, because that grounding that you have in community ends up being important for that.

So I think that aspect of social media, the ability to connect with people, to understand what's going on in people's lives, to have empathy for them, to communicate what's going on with your life and express that, is generally positive. There are ways that it can be negative, in terms of bad interactions, things like bullying, which we can talk about, because there's a lot that we've done to make sure that people can be safe from that, to give people tools, and to give kids the right parental controls so that their parents can oversee it.

But that's the interacting-with-people side. There's another side of all of this, which I think of as just passive consumption, which at its best is entertainment, and entertainment is an important human thing too. But I don't think that has quite the same association with long-term well-being and health benefits as helping people connect with other people does. And I think at its worst, some of the stuff that we see online; these days a lot of the news is just so relentlessly negative that it's hard to come away from an experience where you've been looking at the news for half an hour and feel better about the world. So I think there's a mix on this. The more that social media is about connecting with people, and the more that you're using the media part of social media to learn about things that enrich you and can provide inspiration or education, as opposed to things that just leave you with a more toxic feeling, the better. That's the balance that we try to get right across our products.

And I think we're pretty aligned with the community, because at the end of the day, people don't want to use a product and come away feeling bad. People talk about and evaluate a lot of these products in terms of information and utility, but I think it's just as important, when you're designing a product, to think about what kind of feeling you're creating in the people who use it, whether that's an aesthetic sense when you're designing hardware, or just what you make people feel. And generally people don't want to feel bad. That doesn't mean we want to shelter people from bad things that are happening in the world, but I don't really think people want us to just be showing all this super negative stuff all day long.

So we work hard on all these different problems: making sure that we're helping connect people as best as possible, and making sure that we give people good tools to block people who might be bullying or harassing them, especially for younger folks. Anyone under the age of 16 defaults into an experience where their experience is private, and we have all these parental tools so that parents can understand what their children are up to, in a good balance.

And then, on the other side, we try to give people tools to understand how they're spending their time. For instance, if you're a teen and you're kind of stuck in some loop of just looking at one type of content, we'll nudge you and say: hey, you've been looking at content of this type for a while, how about something else? And here's a bunch of other examples.
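A sketch of what that nudge logic might look like, with invented window sizes, thresholds, and category names (these are illustrative assumptions, not Meta's actual values):

```python
# Toy content-type nudge: track the categories of the last N items a
# user viewed; if one category dominates, suggest a change of pace.
from collections import Counter, deque

RECENT_WINDOW = 20   # how many recent items to consider (assumed)
DOMINANCE = 0.8      # share of one category that triggers a nudge (assumed)

recent = deque(maxlen=RECENT_WINDOW)

def record_view(category):
    """Record a viewed item; return a nudge message if one category dominates."""
    recent.append(category)
    share = Counter(recent)[category] / len(recent)
    if len(recent) == RECENT_WINDOW and share >= DOMINANCE:
        return (f"You've been looking at {category} for a while - "
                "how about something else?")
    return None

for _ in range(19):
    record_view("street_fights")
print(record_view("street_fights"))  # 20th view of same category: nudge fires
```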

So I think there are things that you can do to push this in a positive direction, but it starts with having a more nuanced view: this isn't all good or all bad. And the more that you can make it a positive thing, the better this will be for all the people who use our products.

That makes really good sense in terms of the negative experience. I agree; I don't think anyone wants a negative experience in the moment. I think where some people get concerned, perhaps, and I think about my own interactions with, say, Instagram, which I use all the time for getting information out but also for consuming information, and which I happen to love (it's where I essentially launched the non-podcast segment of my podcast, and continue to), is that I can think of experiences that are a little bit like highly processed food: it tastes good at the time, it's highly engrossing, but it's not necessarily nutritious, and you don't feel very good afterwards.

For me, that would be the little collage of default options to click on in Instagram. Occasionally I notice, and this just reflects my failure, not Instagram's, that there are a lot of street fight things, people beating people up on the street. And I have to say these have a very strong gravitational pull. I'm not somebody that enjoys seeing violence per se, but I find myself clicking on one of these, like, what happened? And I'll see someone get hit, and there's a little melee on the street or something. Those seem to be offered to me a lot lately. And again, this is my fault; it reflects my prior searching behavior. But I notice that it has a bit of a gravitational pull, and I didn't learn anything; it's not teaching me any kind of useful self-defense skills. At the same time, I also really enjoy some of the cute animal stuff, and so I get a lot of those also. So there's this polarized collage that's offered to me that reflects my prior search behavior.

You could argue that the cute animal stuff is just entertainment, but actually, in some cases it fills me with a feeling that truly delights me. I delight in animals, and we're not just talking about kittens. I mean animals I've never seen before, interactions between animals I've never seen before, that truly delight me. They energize me in a positive way, such that when I leave Instagram, I do think I'm better off; so I'm grateful for the algorithm in that sense. But I guess the direct question is: is the algorithm just reflective of what one has been looking at a lot prior to that moment when they log on? Or is it also trying to do exactly what you described, which is trying to give people a good-feeling experience that leads to more good feelings?

Yeah.

I mean, we try to do this in a long-term way. One simple example of this is an issue we had a number of years back with clickbait news: articles that would have a headline that grabbed your attention and made you feel like, oh, I need to click on this. And then you click on it, and the article is actually about something somewhat tangential to it. But people clicked on it. So the naive version of this stuff, the version from ten years ago, was like, oh, people seem to be clicking on this; maybe that's good. But it's actually a pretty straightforward exercise to instrument the system and realize: hey, people click on this, and then they don't really spend a lot of time reading the news after clicking on it. And after they do this a few times, it doesn't really correlate with them saying that they're having a good experience.

Some of how we measure this is just by looking at how people use the services, but I think it's also important to balance that by having real people come in and tell us. We show them: here are the stories that we could have shown you; which of these are most meaningful to you, or would make it so that you have the best experience? And then we map the algorithm and what we do to that ground truth of what people say they want. So through a set of things like that, we really have made large steps to minimize things like clickbait over time. It's not gone from the internet, but I think we've done a good job of minimizing it on our services.
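A toy version of that instrumentation: clickbait draws clicks but not reading time, so combine click-through rate with post-click dwell. The function, field names, and expected dwell time below are assumptions for illustration, not a description of Meta's actual metrics:

```python
# Toy clickbait score: high click-through rate combined with low
# post-click reading time suggests the headline over-promised.
def clickbait_score(impressions, clicks, total_dwell_seconds):
    """Return a score in [0, 1]; higher means more clickbait-like."""
    ctr = clicks / impressions if impressions else 0.0
    avg_dwell = total_dwell_seconds / clicks if clicks else 0.0
    expected_dwell = 60.0                     # assumed typical read time (s)
    dwell_ratio = min(avg_dwell / expected_dwell, 1.0)
    return ctr * (1.0 - dwell_ratio)          # clicks that led nowhere

print(clickbait_score(1000, 300, 1500))    # CTR 0.30, ~5s reads: high score
print(clickbait_score(1000, 300, 27000))   # CTR 0.30, ~90s reads: ~0
```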

Within that, though, I do think we need to be pretty careful about not being paternalistic about what makes different people feel good. I don't know that everyone feels good about cute animals. I can't imagine that people would feel really bad about them, but maybe they don't have as profound a positive reaction as you just expressed. And maybe people who are more into fighting would look at the street fighting videos, assuming that they're within our community standards; I think there's a level of violence that we just don't want to be showing at all, but that's a separate question. If they are, then, I mean, I'm pretty into MMA. I don't get a lot of street fighting videos, but if I did, maybe I'd feel like I was learning something from that.

I think at various times in the company's history, we've been a little too paternalistic about saying: this is good content, this is bad, you should like this, this is unhealthy for you. And I think we want to look at the long-term effects. You don't want to get stuck in a short-term loop; just because you did this today doesn't mean it's what you aspire to for yourself over time. But as long as you look at the long term of what people both say they want and what they do, giving people a fair amount of latitude to like the things that they like just feels like the right set of values to bring to this.

Now, of course, that doesn't go for everything. There are things that are truly off limits, things like bullying, for example, or things that are really inciting violence. We have a whole set of community standards around this. But except for those things, which I would hope most people can agree on (okay, bullying is bad; I hope a hundred percent of people agree with that, or maybe 99%), except for the things that feel pretty extreme and bad like that, I think you want to give people space to like what they want to like.

Yesterday I had the very good experience of learning from the Meta team about safety protections that are in place for kids who are using Meta platforms. And frankly, I was really positively surprised at the huge number of filter-based tools, and just the ability to customize the experience so that it can stand the best chance of enriching their mental health, not just remaining neutral, but enriching. One thing that came up in that conversation, however, was that I realized there are all these tools, but do people really know that these tools exist? I think about my own experience with Instagram: I love watching Adam Mosseri's Friday Q&As, because he explains a lot of the tools that I didn't know existed. If people haven't seen those, I highly recommend they watch them; I think he takes questions on Thursdays and answers them most every Friday.

So if, without watching that, I'm not aware of the tools that exist for adults, how does Meta look at the challenge of making sure that people know all these tools are there, dozens and dozens of very useful tools? I think most of us just know the hashtag, the tag, the click, stories versus feed. We know the major channels and tools (I also post to Threads), but this is like owning a vehicle with incredible features that one doesn't realize can take you off-road, or allow your vehicle to fly. There's a lot there.

Maybe this conversation could cue people to their existence.

I mean, that's part of the reason why I wanted to talk to you about this. I think most of the narrative around social media is not, okay, here are all the different tools that people have to control their experience; it's the kind of narrative of, is this just negative for teens, or something like that. And again, a lot of this comes down to: how is the experience being tuned, and are people actually using it to connect in positive ways? If so, I think it's really positive.

So yeah, I think part of this is that we probably just need to get out and talk to people more about it. And then there's an in-product aspect, which is: if you're a teen and you sign up, we take you through a pretty extensive experience that tries to outline some of this. But that has limits too, right? Because when you sign up for a new thing, if you're bombarded with a list of features, you're like: okay, I just signed up for this, I don't really understand much about what the service is; let me go find some people to follow who are my friends on here before I learn about controls to prevent people from harassing me or something. That's why I think it's really important to also show a bunch of these tools in context. So if you're looking at comments, and you go to delete a comment or you go to edit something, try to give people prompts in line: hey, did you know that you can manage things in these ways? Or when you're in the inbox and you're filtering something, remind people in line. Just because of the number of people who use the products and the level of nuance around the controls, I think the vast majority of that education needs to happen in the product. But I do think that through conversations like this, and others that we need to be doing, we can create a broader awareness that those things exist. That way people are at least primed, so that when those things pop up in the product, people are like: oh yeah, I knew that there was this control, and here's how I would use it.

I find the restrict function to be very useful, more than the block function. I do sometimes use block, but the restrict function is really useful in that you can filter specific comments. You might recognize that someone has a tendency to be a little aggressive. And I should point out that I actually don't really mind what people say to me, but I try to maintain what I call classroom rules in my comment section, where I don't like people attacking other people, because I would never tolerate that in the university classroom and I'm not going to tolerate it in the comment section, for instance.

Yeah. And I think the example you just used, restrict versus block, gets to something about product design that's important too. Block is this very powerful tool: if someone is giving you a hard time and you just want them to disappear from the experience, you can do it. But the design trade-off is that, in order to make it so the person is just gone from the experience (you don't show up to them, they don't show up to you), inherent to that is that they will have a sense that you blocked them. And that's why there's stuff like restrict, or just filtering: I just don't want to see as much stuff about this topic. People like using different tools for very subtle reasons. Maybe you want the content to not show up, but you don't want the person who's posting the content to know that you don't want it to show up. Maybe you don't want to get the messages in your main inbox, but you don't want to tell the person that you're not friends, or something like that. You actually need to give people different tools, with different levels of power and nuance around how the social dynamics of using them play out, in order to really allow people to tailor the experience in the ways that they want.
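A simplified model of that design distinction, as a sketch and not Meta's implementation: block removes visibility in both directions, which is why it is detectable, while restrict quietly filters only what the restricting user sees.

```python
# Toy visibility rules for block vs. restrict, from one user's
# perspective. Simplified for illustration.
from dataclasses import dataclass, field

@dataclass
class Relationships:
    blocked: set = field(default_factory=set)     # symmetric, detectable
    restricted: set = field(default_factory=set)  # one-way, invisible to them

    def can_see_profile(self, other):
        # Block hides each party from the other, which is why the
        # blocked person can infer it happened.
        return other not in self.blocked

    def comment_visible_to_me(self, commenter):
        # Restrict: the commenter still sees their own comment (so
        # nothing looks different to them), but my view filters it out.
        return commenter not in self.restricted

me = Relationships()
me.restricted.add("aggressive_user")
print(me.comment_visible_to_me("aggressive_user"))  # False: filtered for me
print(me.can_see_profile("aggressive_user"))        # True: no visible change to them
```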

In terms of trying to limit total time on social media, I couldn't find really good data on this. How much time is too much? I think it's going to depend on what one is looking at, the age of the user, et cetera. But I know that you have tools that cue the user to how long they've been on a given platform. Are there tools to self-regulate? I'm thinking about the Greek myth of the sirens, and people tying themselves to the mast and covering their ears so that they're not drawn in by the sirens. Is there a function, aside from deleting the app temporarily and then reinstalling it every time you want to use it again, a true self-lockout function where one can lock themselves out of

access to the app?

Well, I think we give people tools that let them manage this: there are the tools that you get to use, and there are the tools that parents get to use to see how the usage works. For now, we've mostly focused on helping people understand this and then giving people reminders and things like that. It's tough, though, to answer the question you were asking before, is there an amount of time which is too much, because it really does get to what you're doing. If you fast-forward beyond just the apps that we have today to a social experience in the future of the augmented reality glasses or something that we're building, a lot of this is going to be interacting with people in the way that you would physically, as if you were hanging out with friends or working with people, but now they can show up as holograms, and you can feel like you're present right there with them no matter where they actually are. And the question is: is there too much time to spend interacting with people like that? At the limit, if we can get that experience to be as rich and give you as good a sense of presence as you would have if you were physically there with someone, then I don't see why you would want to restrict the amount that people use that technology to any less than the amount of time you'd be comfortable interacting with people physically, which obviously is not going to be 24 hours a day; you have to do other stuff, you have work, you need to sleep. But I think it really gets to how you're using these things. Whereas if what you're primarily using the services for is getting stuck in loops reading news or something that is really getting you into a negative mental state, then there's probably a relatively short period of time for which that's a good thing to be doing. But even then, it's not zero, right? Just because news might make you unhappy doesn't mean the answer is to be unaware of negative things happening in the world. Different people have different tolerances for what they can take on that, and generally having some awareness is probably good, as long as it's not more than you're constitutionally able to take. So, trying not to be too paternalistic about this is our approach, but we want to empower people by giving them the tools, both you and, if you're a teen, your parents, to understand what you're experiencing and how you're using these things, and then go from there.

Yeah, I think it requires

of all of us some degree of self-regulation. I like this idea of not being too paternalistic; it seems like the right way to go. I find myself occasionally having to make sure that I'm not just passively scrolling, that I'm learning. I like foraging for, organizing, and dispersing information; that's been my life's career. So I have learned so much from social media. I find great papers, great ideas. I think comments are a great source of feedback, and I'm not just saying that because you're sitting here. Instagram in particular, but other Meta platforms too, have been tremendously helpful for me to get science and health information out. One of the things that I'm really excited about, which I only had the chance to try for the first time today, is your new VR platform, the newest Oculus. And then we can talk about the glasses, the Ray-Bans.

Sure.

Those two experiences are still kind of blowing my mind, especially the Ray-Ban glasses. And I have so many questions about this, so I'll resist.

We can get into that.

Okay. Well, I have some experience with VR; my lab has used VR.

Jeremy Bailenson's lab at Stanford is one of the pioneering labs of VR and mixed reality; I guess they used to call it augmented reality, but now mixed reality. I think what's so striking about the VR that you guys had me try today is how well it interfaces with the real room, let's call it the physical room. I could still see people. I could see where the furniture was, so I wasn't going to bump into anything. I could see people's smiles. I could see my water on the table while I was doing what felt like a real martial arts experience, except I wasn't getting hit; well, I was getting hit virtually, but it's extremely engaging. And on the good side of things, it really bypasses a lot of the early concerns that the Bailenson lab, again, Jeremy's lab, was early to raise: that there's a limit to how much VR one can or should use each day, even for the adult brain, because it can really disrupt your vestibular system, your sense of balance. All of that seems to have been dealt with in this new iteration of VR. I didn't come out of it feeling dizzy at all. I didn't feel like I was re-entering the room in a way that was really jarring. Going into it is obviously, whoa, this is a different world, but you can look to your left and say: oh, someone just came in the door. Hey, how's it going? Hold on, I'm playing this game. Just as it was when I was a kid playing Nintendo and someone would walk in: it's fully engrossing, but you'd be like, hold on, and you see they're there. So first of all, bravo. Incredible. And then the next question is: what do we even call this experience? Because it is truly mixed. It's a truly mixed reality experience.

Yeah, mixed reality is sort of the umbrella term that refers to the combined experience of virtual and augmented reality. Augmented reality is what you're eventually going to get with some future version of the smart glasses, where you're primarily seeing the world but you can put holograms in it. So we'll have a future where you're going to walk into a room, and there are going to be as many holograms as physical objects. Just think about all the paper, the art, physical games, media, your workstation, any screen. If we referred to, let's say, an MMA fight, we could just draw it up on the table right here and see it replayed, as opposed to us turning and looking at a screen. Pretty much any screen that exists could be a hologram in the future with smart glasses. There's nothing that actually physically needs to be there for that, when you have glasses that can put a hologram there. And it's an interesting thought experiment to go around and think about: which of the things that are physical in the world need to actually be physical? Your chair does, because you're sitting on it; a hologram isn't going to support you. But that art on the wall? That doesn't need to physically be there. So I think that's the augmented reality experience that we're moving towards.

And then we've had these headsets that historically we think of as VR, a fully immersive experience. But now we're getting something that's a hybrid in between the two, and capable of both: a headset that can do both virtual reality and some of these augmented reality experiences. And I think that's really powerful, because you're going to get new applications that allow people to collaborate together. Maybe the two of us are here physically, but someone joins us and it's their avatar there. Or maybe, in some version of the future, you're having a team meeting, and you have some people there physically and some people dialing in who are basically holograms, there virtually, but then you also have some AIs, personas that are on your team helping you do different things, and they can be embodied as avatars and be around the table meeting with you.

Are people going to be doing first dates that are physically separated? I could imagine that some people would, an "is it even worth leaving the house" type of date, and then they meet for the first time.

I mean, maybe. Dating has physical aspects to it too, and some people might want to know whether or not it's worth the effort to head out, to bridge the divide, right? It is possible. I know some of my friends who are dating basically say that, in order to make sure they have a safe experience, if they're going on a first date, they'll schedule something shorter, maybe in the middle of the day, maybe coffee, so that if they don't like the person, they can just get out before going and scheduling a dinner or a real full date. So maybe in the future people will have that experience where you can feel like you're sitting there, and it's even easier and lighter weight and safer, and if you're not having a good experience, you can just teleport out of there. But yeah, I think this will be an interesting question in the future. There are clearly a lot of things that are only possible physically, or are so much better physically. And then there are all these things that we're building up that can be digital experiences, but it's this weird artifact of how the stuff has been developed that the digital world and the physical world exist on these completely different planes. When you want to interact with the digital world, well, we do it all the time, but we pull out a small screen or we have a big screen; we're basically interacting with the digital world through these screens. But if we fast-forward a decade or more, I think one of the really interesting questions is: what is the world that we're going to live in? I think it's increasingly going to be this mesh of the physical and digital worlds that will allow us to feel, A, that the world we're in is just a lot richer, because there can be all these things that people create that are so much easier to do digitally than physically; but, B, you're going to have a real physical sense of presence with these things, and not feel like interacting with the digital world is taking you away from the physical world, which today is just so much viscerally richer and more powerful.

I think the digital world will sort of be embedded in that and will feel just as vivid in a lot of ways. That's why, when you were saying before that you felt like you could look around and see the real room, I actually think there's an interesting philosophical distinction between the real room and the physical room. Historically, people would have said those are the same thing. But I think in the future, the real room is going to be the combination of the physical world with all the digital artifacts and objects in it that you can interact with and feel present with, whereas the physical world is just the part that's physically there. And I think it's possible to build a real world, the sum of these two, that will actually be a more profound experience than what we have today.

Well, I was struck by the smoothness of the interface between the VR and the physical room. Your team had me try, I guess it was an exercise class in the form of a game. It was essentially like hitting mitts in boxing, hitting targets.

Supernatural.

Yeah. And it comes at a fairly fast pace that then picks up. It's got some tutorials, it's very easy to use, and it certainly got my heart rate up. And I'm in at least decent shape.

And I have to be honest, I never once desired to do any of these on-screen fitness things. I don't want to insult any particular products, but I can't think of anything more aversive than riding a stationary bike while looking at a screen, pretending I'm on a road outside. I can't think of anything worse for me.

Maybe only...

I do like the leaderboard. Maybe I'm just a very competitive person. If you're going to have me running on a treadmill, at least give me a leaderboard so I can beat the people who are ahead of me.

I like moving outside, and certainly an exercise class, or aerobics class, as they used to call them. But the experience I tried today was extremely engaging. I've done enough boxing to at least know how to do a little bit of it, and I really enjoyed it; it gets your heart rate up. I completely forgot that I was doing an on-screen experience, in part because I believe I was still in that physical room. I think there's something about the mesh of the physical room and the virtual experience that makes it neither of one world nor the other. I really felt at the interface of those, and certainly got presence, this feeling of forgetting that I was in a virtual experience. And it got my heart rate up pretty quickly. We had to stop because we were going to start recording, but I would do that for a good 45 minutes in the morning. And there's no amount of money you could pay me, truly, to look at a screen while pedaling on a bike or running on a treadmill. So again, bravo. I think it's going to be very useful. It's going to get people moving their bodies more, when social media up until now, and a lot of technologies, have been accused of limiting the amount of physical activity that both children and adults engage in. And we know we need physical activity. You're a proponent and practitioner of physical activity. So is this a major goal of Meta? Get people moving

their bodies more and getting their heart rates up and so on?

I think we want to enable it. I think it's good. But it comes more from a philosophical view of the world than from any specific agenda. I mean, I don't go into building products to try to shape people's behavior, right? I believe in empowering people to do what they want and be the best version of themselves that they can be.

So no agenda.

That said, I do believe that the previous generation of computers were devices for your mind, and I think that we are not brains in tanks. There's a philosophical view of people that says, okay, you are primarily what you think about, or your values, or something like that. And it's like, no, you are that, and you are also a physical manifestation. People are very physical. I think building a computer for your whole body, and not just for your mind, fits this worldview that the actual essence of you, if you want to be present with another person, if you want to be fully engaged in an experience, is not just a video conference call that looks at your face and lets you share ideas. It's something that can engage your whole body. So yeah, being physical is very important to me. A lot of the most fun stuff that I get to do is physical.

It's a really important part of how I personally balance my energy levels and get a diversity of experiences, because I could spend all my time running the company, but I think it's good for people to do different things, compete in different areas, learn different things. All of that is good. If people want to do really intense workouts with the work that we're doing with Quest, or eventually with AR glasses, great. But even if you don't want a really intense workout, I think just having a computing environment and platform that is inherently physical captures more of the essence of what we are as people than any of the previous computing platforms we've had to date.

Yeah, I was even thinking of the simple task of getting a better range of motion, a.k.a. flexibility. I could imagine, inside the VR experience, leaning into a stretch, a standard lunge-type stretch, but actually seeing a meter of whether you're approaching new levels of flexibility in that moment, where it's actually measuring kinesthetic elements of the body and the joints. Normally you might have to do that in front of a camera, which would give you the data on a screen you'd look at afterwards, or hire an expensive coach. Or looking at form in resistance training: you're actually lifting physical weights, but it's telling you whether or not you're breaking form. There's just so much that could be done in there. And then my mind starts to spiral into, wow, this is very likely to transform what we think of as quote-unquote exercise.

Yeah, I think so. And there are still a bunch of questions that need to get answered. I don't think most people are going to want to install a lot of sensors or cameras to track their whole body. So, over time, we're getting better at doing this from the sensors on the headsets. We can do very good hand tracking now; we have this research demo where, just with the hand tracking from the headset, it projects a little keyboard onto your table and you can type. People type like 100 words a minute with that virtual keyboard. And using some modern AI techniques, we're starting to be able to simulate and understand where your torso's position is. Even though you can't always see it, you can see it a bunch of the time, and if you fuse together what you do see with the accelerometer and an understanding of how the thing is moving, you can make a good estimate of what the body position is going to be.
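
To make that sensor-fusion idea concrete, here is a minimal sketch of a complementary filter: integrate accelerometer samples between camera frames, then blend in a camera-based position whenever the torso is actually visible. This is a toy illustration of the general technique, not Meta's tracking pipeline; all names and numbers are hypothetical.

    # Toy complementary filter: dead-reckon from the IMU, correct with camera fixes.
    from dataclasses import dataclass

    @dataclass
    class TorsoEstimate:
        position: float  # 1-D position in meters, for simplicity
        velocity: float  # meters per second

    def predict(est: TorsoEstimate, accel: float, dt: float) -> TorsoEstimate:
        # Integrate acceleration between camera frames (dead reckoning).
        v = est.velocity + accel * dt
        return TorsoEstimate(est.position + v * dt, v)

    def correct(est: TorsoEstimate, camera_pos: float, gain: float = 0.2) -> TorsoEstimate:
        # When the headset cameras do see the torso, blend the observation in.
        # gain=0 trusts the IMU integration alone; gain=1 trusts the camera outright.
        return TorsoEstimate((1 - gain) * est.position + gain * camera_pos, est.velocity)

    est = TorsoEstimate(0.0, 0.0)
    # IMU samples arrive every step; a camera fix (not None) only sometimes.
    for accel, camera_pos in [(0.1, None), (0.1, None), (0.0, 0.02)]:
        est = predict(est, accel, dt=0.01)
        if camera_pos is not None:  # torso visible in this frame
            est = correct(est, camera_pos)
    print(est)

A real system would run the same idea in 3-D with rotation and far smarter models, but this predict-then-correct loop is the core of what is being described.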

But some things are still going to be hard. You mentioned boxing; that one works pretty well because we understand your head position, we understand your hands, and now we're increasingly understanding your body position. But let's say you want to expand that to Muay Thai or kickboxing. Okay, legs, that's a different part of tracking, and it's harder because it's out of the field of view more of the time. There's also the element of resistance. You can throw a punch and retract it and shadowbox without upsetting your physical balance that much. But if you want to throw a roundhouse kick and there's no one there, the standard way you do it when you're shadowboxing is to basically do a little 360. But I don't know, is that going to feel great? I think there's a question about what that experience should be. And if you want to go even further, if you want to get grappling to work, I'm not even sure how you would do that without resistance, without understanding what the forces applied to you would be. Then you get into, okay, maybe you're going to have some kind of body suit that can apply haptics. But I'm not sure that even a pretty advanced haptic system is going to be good enough to simulate the actual forces that would be applied to you in a grappling scenario. So this is part of what's fun about technology, though: you keep getting new capabilities, and then you need to figure out what things you can do with them.

So I think it's really neat that we can do boxing, and we can do this Supernatural thing, and there's a bunch of awesome cardio and dancing and things like that. And there's also still so much more to do that I'm excited to get to over time. But it's a long journey.

And what about things like painting and art and music? Of course, people like different mediums; I like to draw with pen and pencil, but I could imagine trying to learn how to paint virtually. And of course, you could print out a physical version of it at the end. This doesn't have to depart from the physical world; it could end in the physical world.

Did you see the demo, the piano demo, where either you're there with a physical keyboard or it can be a virtual keyboard, and the app basically highlights which keys you need to press in order to play the song? So it's basically like you're looking at your piano and it's teaching you how to play a song that you choose, on an actual piano.

Yeah. But it's illuminating certain keys in the virtual space.

Yes. And it could either be a virtual piano or keyboard, if you don't have one, or it could use your actual keyboard. So yeah, I think stuff like that is going to be really fascinating for education and expression.

And for, excuse me, for broadening access to expensive equipment. I mean, a piano is no small expense.

Exactly. And it takes up a lot of space and needs to be tuned.

Yeah. You can think of all these cases: the kid whose family has very little income could learn to play on a virtual piano at much lower cost.

Yeah, it gets back to the question I was asking before, this thought experiment of how many of the things that we physically have today actually need to be physical. The piano doesn't.

Maybe there's some premium where it's a somewhat better, more tactile experience to have a physical one. But for people who don't have the space for it, or who can't afford to buy a piano, or who just aren't sure they'd want to make that investment at the beginning of learning to play, I think in the future you'll have the option of just buying an app or a hologram piano, which will be a lot more affordable. And I think it's going to unlock a ton of creativity too, because instead of the market for piano makers being constrained to a relatively small set of experts who have perfected that craft, you're going to have kids and developers all around the world designing crazy potential keyboards and pianos that look nothing like what we've seen before, but that maybe bring even more joy and are even more fun in a world where you have fewer of these physical constraints. So I think there's just going to be a lot of wild stuff to explore.

There definitely could be a lot of wild stuff to explore. I just had this idea, slash image in my mind, of what you were talking about merged with our earlier conversation when Priscilla was here: I could imagine a time not too far from now where you're using mixed reality to run experiments in the lab, literally mixing virtual solutions, getting potential outcomes, and then picking the best one to actually go do in the real world, which is costly both financially and time-wise.

Yeah, I mean, people are already using VR for surgery and education there. There's a study that basically tried to run a controlled experiment: some people learned how to do a specific surgery through the normal textbook-and-lecture method, versus a class where you show the knee as a large, blown-up model that people can manipulate and use to practice where they would make the cuts. And the people in that class did better. So yeah, I think it's going to be profound for a lot of different areas.

And the last example that leaps to mind: social media and online culture have been accused of creating a lot of real-world, let's call it physical-world, social anxiety for people. But I could imagine practicing a social interaction, or a kid who has a lot of social anxiety, or who needs to learn to advocate for themselves better, doing that progressively through a virtual interaction and then taking it to the real world. Because, in my very recent experience today, it's so blended now with real experience that the kid who feels terrified of advocating for themselves, or of just talking to another human being or an adult, or of being in a new circumstance like a room full of kids, could really experience that in silico first, get comfortable, let the nervous system attenuate a bit, and then take it into the quote-unquote physical world.

Yeah, I think we'll see experiences like that. I mean, I also think

that some of the social dynamics around how people interact in this kind of blended digital world

will be more nuanced in other ways. I'm sure there will be new anxieties that people develop too, just like teens today need to navigate dynamics around texting constantly that we just didn't have when we were kids. So I think it will help with some things, and I think there will be new issues that hopefully we can help people work through too. But overall, yeah, I think it's going to be really powerful and positive.

Let's talk about the glasses.

Sure.

This was wild. I put on a pair of Ray-Bans. I like the way they look. They're clear; they look like any other Ray-Ban glasses, except that I could call out to the glasses. I could just say, "Hey Meta, I want to listen to the Goldberg Variations by Bach," and Meta responded, and no one around me could hear, but I could hear with exquisite clarity. By the way, I'm not getting paid to say any of this; I'm just still blown away by it, folks. I want a pair of these very badly. I could hear, "Okay, I'm selecting that music now," and then I could hear it in the background, but I could still have a conversation. So this was neither headphones in nor headphones out. And I could say, "Wait, pause the music," and it would pause. And the best part was I didn't have to leave the room mentally. I didn't even have to take out a phone. It was all interfaced through this very

local environment in and around the head. As a neuroscientist, I'm fascinated by this because, of course, all of our perceptions, auditory, visual, et cetera, occur inside the casing of this thing we call a skull. Maybe you could comment on the origin of that design for you, the ideas behind it, and where you think it could go, because I'm sure I'm just scratching the surface.

The real product that we want to eventually get to is a full augmented reality product in a stylish, comfortable, normal glasses form factor.

Not a dorky VR headset, so to speak. Because the VR headset does feel like a thing on the face.

There's going to be a place for that too, just like you have your laptop and you have your workstation. Or maybe the better analogy is you have your phone and you have your workstation. These AR glasses are going to be like your phone, in that you have something on your face and you will, I think, be able to wear it for a lot of the day, if you want, and interact with it very frequently. I don't think people are going to be walking around the world wearing VR headsets; that's certainly not the future I'm hoping we get to. But I do think there's a place for the headset: because it's a bigger form factor, it has more compute power. Just like your workstation or your bigger computer can do more than your phone can do, there's a place for that, when you want to settle into an intense task. If you have a doctor who's doing a surgery, I would want them doing it through the headset, not through their phone equivalent or the lower-powered glasses. But just as phones are powerful enough to do a lot of things, the glasses will

eventually get there too. Now, that said, there are a bunch of really hard technology problems to address to get to the point where you can put full holograms in the world. You're basically miniaturizing a supercomputer and putting it into a pair of glasses so that the pair of glasses still looks stylish and normal, and that's a really hard technology problem. Making things small is really hard. A holographic display is different from what our industry has optimized for over the past 30 or 40 years of building screens. There's a whole industrial process around screens that goes into phones and TVs and computers and, increasingly, so many things that have displays; there's a whole pipeline that's gotten very good at making that kind of screen. And holographic displays are a completely different thing, because it's not a screen. It's a thing you can shoot light into, through a laser or some other kind of projector, and it can place that light as an object in the world. So a whole other industrial process needs to get built up to do that efficiently. All that said, we're basically taking two different approaches towards building this at once. One is, we're trying to keep in mind the long-term thing. It's not super far off; I think within a few years we'll have something that's a first version of this full vision I'm talking about, and we have something working internally that we'll use as a dev kit. But that one is a big challenge. It's going to be more expensive, and it's harder to get all the pieces working. The other approach has been: all right, let's start

with what we know we can put into a pair of stylish sunglasses today and just make them as smart as we can. For the first version, we did this collaboration with Ray-Ban, because these are well-accepted, well-designed glasses; they're classics people have used for decades. We got a sensor on the front so you could capture moments without having to take your phone out of your pocket, so you got photos and videos. You had the speaker and the microphones, so you could listen to music. You could communicate with it. That was the first version; we had a lot of the basics there, and we saw how people used it. Then we tuned it: we made the camera twice as good for this new version, and the audio is a lot crisper for the use cases we saw people actually using, some of which is listening to music, but a lot of it is taking calls on their glasses, or listening to podcasts. But the biggest thing I think is interesting is the ability to get AI running on it. It doesn't just run on the glasses; it also proxies through your phone. But with all the advances in

LLMs, we talked about this a bit in the first part of the conversation, having your Meta AI assistant that you can just talk to and ask any question throughout the day, I think that's going to be really fascinating. And, like you were saying about how we process the world as people, eventually I think you're going to want your AI assistant to be able to see what you see and hear what you hear. Maybe not all the time, but you're going to want to be able to tell it to go into a mode where it can see what you see and hear what you hear. And what's the device design that best positions an AI assistant to do that, so it can best help you? Well, that's glasses, right? Where there's basically a sensor to see what you see and a microphone close to your ears that can hear what you hear.

The other design goal is, like you said, to keep you present in the world. I think one of the issues with phones is that they pull you away from what's physically happening around you, and I don't think the next generation of computing will do that.

I'm just chuckling to myself because I have a friend, a very well-known photographer, who was laughing about how people go to a concert and everyone's filming it on their phone so that they can be the person who posts the thing, when there are literally millions of other people posting the exact same thing. But somehow it feels important to post our unique experience. With glasses, that gap would essentially be smoothed over completely; you can just capture it and worry about it later, download it. There are issues, I realize, with glasses, because they are so seamless with everyday experience. Even though you and I aren't wearing them now, it's very common for people to wear glasses, so there are issues of recording and consent. If I go into a locker room at my gym, I'm assuming the people with glasses aren't filming, whereas with a phone there's a sharp transition: when there's a phone in the room and someone's pointing it, people generally say no phones and no recording in locker rooms. So that's just one instance.

I mean, there are other instances.

We have the whole privacy light. I don't know, did you get a chance to explore that? Anytime the camera sensor is active, it's pulsing a white light.

Which is, by the way, more than cameras do.

Right. Phones don't show a bright light from a sensor when you're taking a photo.

No, people oftentimes will pretend they're texting when they're actually recording. I saw an instance of this in a barbershop once, where someone was recording while pretending to text. An intense interaction ensued, and I thought, wow, it's pretty easy for people to feign texting while actually recording.

Yeah. So I think when you're evaluating risk with a new technology, the bar shouldn't be, is it possible to do anything bad? It's, does this new technology make it easier to do something bad than what people already had? And because you have this privacy light that's broadcasting to everyone around you, hey, this thing is recording now, I think that actually makes it less discreet to record through the glasses than what you could already do with a phone, which is basically the bar we wanted to get over from a design perspective.

Thank you for pointing out that it has the

privacy light. I didn't get long enough in the experience to explore all the features, but again, I can think of a lot of uses: being able to look at a restaurant from the outside and see the menu, or get a status on how crowded it is. And, I don't want to call anyone out, so let's just say app-based map functions that let you navigate: the audio is okay, and it's nice to have a conversation with somebody on the phone or in the vehicle, but it'd be great if the road were traced to show where I should turn.

Yeah, absolutely.

These kinds of things seem like they're going to be straightforward for Meta engineers to create.

The future version will also have the holographic display, so it can show you the directions, but I think there will basically just be different price points that pack different amounts of technology. The holographic display part, I think, is going to be more expensive than a version that just has the AI and primarily communicates with you through audio. The current Ray-Ban Meta glasses are $299. When we have one that has a display in it, it'll probably be some amount more than that, but it'll also be more powerful. I think people will choose what they want to use based on the capabilities they want and what they can afford. A lot of our goal in building things is to make things that can be accessible to everyone. Our game as a company isn't to build things and then charge a premium price for them. We try to build things that everyone can use, which then become more useful because a very large number of people are using them. It's just a very different approach. We're not like Apple or some of these companies that try to make something and then sell it for as much as they can. I mean, they're a great company, and I think that model is fine too, but our approach is: we want stuff that's affordable so that everyone in the world can use it.

Along the lines of health, I think the

glasses will also potentially solve a major problem in a real way, which is the following.

For both children and adults, it's very clear that viewing objects, in particular screens, up close for too many hours per day leads to myopia: literally a change in the length of the eyeball, and nearsightedness. On the positive side, we know, based on some really large clinical trials, that kids and adults who spend two hours a day or more out of doors don't experience that, and may even reverse their myopia. It has something to do with exposure to sunlight, but it has a lot to do with long viewing, looking at things at a distance greater than three or four feet away. With the glasses, I realize one could actually do digital work out of doors. They could measure and tell you how much time you've spent looking at things up close versus far away. This is just another

example that leaps to mind, but in accessing the visual system, you're effectively accessing the

whole brain, because the eyes are the only two bits of brain outside the cranial vault. It just seems like putting technology right at the level of the eyes, seeing what the eyes see, has got to be the best way to go.

Yeah, I think, well, multimodal, right? You want the visual sensation, but you also want text or language, so I think it's—

But that all can be brought to the level of the eyes, right?

What do you mean by that?

Well, I think what we're describing here is essentially taking the phone, the computer, and bringing it all to the level of the eyes, where it would physically sit, right? And one would like more kinesthetic information too, as you mentioned before: where the legs are, maybe even lung function, hey, have you taken enough steps today? But all of that can be figured out by the phone, or it can be figured out by glasses. There's additional information there, though, such as: what are you focusing on in your world? How much of your time is spent looking at things far away versus up close? How much social time did you have today? It's really tricky to get that with a phone. Like if my phone were right in front of us, as if we were at a standard lunch nowadays, certainly in Silicon Valley, and we were peering at our phones: how much real, direct attention was on the conversation at hand versus something else? You can get at where you're placing your attention by virtue of where you're placing your eyes, and that information is not accessible with a phone in your pocket or in front of you. I mean, a little bit, but it's nowhere near as rich and complete as the information one gets when you're really pulling the data from the level of vision, from what kids and adults are actually looking at and attending to. So it's extremely valuable.

You get autonomic information, size of the pupils, so you get information about internal states.
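
As a sketch of the near-versus-far measurement described above, here is how glasses could, in principle, tally viewing time from a stream of fixation-distance estimates. This is purely illustrative: the sample format and the one-meter threshold (roughly the three-to-four-feet line mentioned earlier) are assumptions, not a real product feature.

    # Tally near vs. far viewing time from (duration_seconds, distance_meters) samples.
    NEAR_THRESHOLD_M = 1.0  # ~3 feet; the transcript cites 3-4 feet as the near/far line

    def tally_viewing_time(samples):
        near_s = far_s = 0.0
        for duration_s, distance_m in samples:
            if distance_m < NEAR_THRESHOLD_M:
                near_s += duration_s
            else:
                far_s += duration_s
        return near_s, far_s

    # Hypothetical day: lots of close screen work, a little distance viewing.
    near_s, far_s = tally_viewing_time([(3600, 0.4), (1800, 0.5), (1200, 6.0), (600, 20.0)])
    print(f"near work: {near_s / 3600:.1f} h, far viewing: {far_s / 3600:.1f} h")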

There are internal sensors and external ones. The sensor on the Ray-Ban Meta glasses is external, so it basically allows the glasses to see what you're seeing. There's a separate set of things, eye tracking, which is also very powerful for enabling a lot of interfaces. If you want to just look at something and select it with your eyes, rather than having to drag a controller over or pick up a hologram or anything like that, you can do that with eye tracking. That's a pretty profound and cool experience too, as well as just understanding what you're looking at, so that you're not wasting compute power drawing pixels at high resolution in the part of the world that's going to be in your peripheral vision. So with all of these things, there are interesting design and technology trade-offs. If you want the external sensor, that's one thing. If you also want eye tracking, that's a different set of sensors; each one of these consumes compute, which consumes battery, and they take up more space. Where are the eye-tracking sensors going to be? You want to make sure the rim of the glasses is actually quite thin, because there's only so thick glasses can be before they look more like goggles than glasses.
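
To make the foveated-rendering point above concrete: with eye tracking, a renderer can spend full resolution only near the gaze point and progressively less toward the periphery. A minimal sketch follows; the tiers and radii are made-up illustrative values, not Meta's renderer.

    # Toy foveated shading: resolution budget falls off with distance from gaze.
    import math

    def shading_rate(pixel_xy, gaze_xy, fovea_px=200, mid_px=500):
        # Return the fraction of full resolution to spend at this screen location.
        dist = math.dist(pixel_xy, gaze_xy)
        if dist < fovea_px:
            return 1.0      # foveal region: full detail
        if dist < mid_px:
            return 0.25     # parafoveal ring: quarter detail
        return 0.0625       # periphery: 1/16 detail, barely noticeable there

    gaze = (960, 540)  # eye tracker reports the user is looking at screen center
    for pixel in [(960, 560), (1300, 540), (1900, 1060)]:
        print(pixel, shading_rate(pixel, gaze))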

So there's this whole space, and I think people are going to end up choosing what product makes sense for them. Maybe they want something more powerful that has more of the sensors, but it's going to be a little more expensive and maybe slightly thicker. Or maybe you want a more basic thing that looks very similar to the Ray-Ban glasses people have been wearing for decades, but has AI in it, and you can capture moments without having to take your phone out and send them to people. In the latest version, we added the ability to live stream. I think that's pretty crazy: going back to your concert case, or whatever else you're doing, you can be doing sports, or watching your kids play something, and you can be live streaming it to your family group so people can see it. I think it's pretty cool that you basically have a normal-looking pair of glasses at this point that can live stream and has an AI assistant. This stuff is making faster progress in a lot of ways than I would have thought. I think people are going to like this version, but there's a lot more still to do.

I think it's super exciting. I see a lot of technologies, and this one's particularly exciting to me because of how smooth the interface is, and for all the reasons you just mentioned. What's happening with, and what can we expect around, AI interfaces and maybe even avatars of people within social media? Are we not far off from a day where there are multiple versions of me and you on the internet? For instance, I get asked a lot of questions, and I don't have the opportunity to respond to all of them, but with things like ChatGPT, people are trying to generate answers to those questions on other platforms. Will I soon have the opportunity to have an AI version of myself, where people can ask me questions about what I recommend for sleep and circadian rhythm, fitness, mental health, et cetera, based on content I've already generated, that will be accurate, so they could just ask my avatar?

Yeah, this is something

that I think a lot of creators are going to want, that we're trying to build, and I think we'll probably have a version of it next year. But there are a bunch of constraints that we need to make sure we get right. For one, I think it's really important that it's not that there's a bunch of versions of you out there; if anyone is creating an AI assistant version of you, it should be something that you control. There are some platforms out there today that just let people make an AI bot of me or other figures, and it's like, I don't know. We've had platform policies that basically don't allow impersonation since the beginning of the company, which is almost 20 years at this point. Real identity is one of the core aspects our company was started on: you want to authentically be yourself. So yeah,

if you're almost any creator, there's just going to be more demand to interact with you than you have hours in the day. So there are people out there who would benefit from being able to talk to an AI version of you, and I think you and other creators would benefit from being able to keep your community engaged and serve that demand people have to engage with you. But you're going to want to know that that AI version of you, or assistant, is going to represent you the way you would want. There are a lot of things that are awesome about these modern LLMs, but perfect predictability about how they will represent something is not one of their current strengths. So there's some work that needs to get done there. I don't think it needs to be 100% perfect all of the time, but you need to have very good confidence that it's going to represent you the way you'd want in order for you to want to turn it on, and, again, you should have control over whether you turn it on. So we wanted to start in a different place,

which I think is a somewhat easier problem: creating new characters as AI personas. That way, it's not a version of you. We built one AI that's like a chef and can help you come up with things you could cook, and can help you cook them. There are a couple focused on different types of fitness that can help you plan out your workouts, or help with recovery, or different things like that. There's an AI focused on DIY crafts. There's a travel expert that can help you make travel plans or give you ideas. The key thing about all of these is that they're not modeled on existing people, so they don't have to have 100% fidelity in making sure they never say something a real person they're modeled after would never say; they're just made-up characters.
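
A minimal sketch of the made-up-character idea: the persona is essentially a fixed identity prompt pinned to the top of every conversation, so the character stays consistent across turns. The model call is stubbed out below; all names and prompts are illustrative assumptions, not Meta's implementation.

    # A persona is an identity prompt plus the running conversation.
    from dataclasses import dataclass, field

    def stub_model(messages):
        # Stand-in for a real LLM call; echoes the persona framing back.
        return f"[as {messages[0]['content'][:24]}...] re: {messages[-1]['content']}"

    @dataclass
    class Persona:
        name: str
        system_prompt: str
        history: list = field(default_factory=list)

        def ask(self, user_message: str) -> str:
            messages = [{"role": "system", "content": self.system_prompt}]
            messages += self.history + [{"role": "user", "content": user_message}]
            reply = stub_model(messages)
            # Remember the turn so the character can stay coherent next time.
            self.history += [{"role": "user", "content": user_message},
                             {"role": "assistant", "content": reply}]
            return reply

    chef = Persona("Chef", "You are a friendly chef character; suggest dishes and recipes.")
    print(chef.ask("What should I cook with eggs and spinach?"))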

That's a somewhat easier problem. We got a bunch of well-known people to play those characters because we thought that would make it more fun. So, for instance, Snoop Dogg is the dungeon master: you can drop him into a thread and play text-based games. I do this with my daughter when I tuck her in at night, and she just loves it. It's storytelling, right? Snoop Dogg as the dungeon master will come up with, here's what's happening next, and she's like, okay, I turn into a mermaid, and then I swim across the bay and I go and find the treasure chest and unlock it. And Snoop Dogg will always have the next iteration of the story. The stuff is fun, but it's not actually Snoop Dogg; he's just the actor playing the dungeon master, which makes it more fun. So that's probably the right place to start: you can build versions of these characters that people can interact with in different ways. But I think where you want to get over time is the place where any creator or any small business can very easily create an AI assistant that can represent them, interact with their community, or with customers if they're a business, and basically help grow the enterprise. So I don't know, I think that's going to be cool, but it's a long-term project. I think we'll have more progress

on it to report next year, but I think that's coming.

I'm super excited about that, because we hear a lot about the downsides of AI, but I think people are now coming around to the reality that AI is neither good nor bad: it can be used for good or for bad, and there are a lot of life-enhancing spaces where it's going to show up and really improve the way we engage socially and what we learn, and where mental health and physical health don't have to suffer and in fact can be enhanced by the sorts of technologies we've been talking about. I know you're extremely busy. I so appreciate the large amount of time you've given me today to sort through all these things, to talk with you and Priscilla, and to hear what's happening and where things are headed. The future certainly is bright. I share in your optimism, and it's only been strengthened by today's conversation. So thank you so much, and keep doing what you're doing. And on behalf of myself and everyone listening, thank you, because regardless of what people say, we all use these platforms excitedly, and it's clear that there's a ton of intention and care and thought about what could be, in the positive sense, and that's really worth highlighting.

Awesome. Thank you. I appreciate it.

Thank you for joining me for today's discussion with Mark Zuckerberg and Dr. Priscilla Chan.

If you're learning from and or enjoying this podcast, please subscribe to our YouTube channel.

That's a terrific zero cost way to support us. In addition, please subscribe to the podcast on both

Spotify and Apple and on both Spotify and Apple, you can leave us up to a five star review. Please

also check out the sponsors mentioned at the beginning and throughout today's episode. That's

the best way to support this podcast. If you have questions for me, or comments about the podcast, or guests that you'd like me to consider hosting on the Huberman Lab podcast, please put those in the

comment section on YouTube. I do read all the comments. Not during today's episode, but on

many previous episodes of the Huberman Lab podcast, we discuss supplements. While supplements

aren't necessary for everybody, many people derive tremendous benefit from them for things like

enhancing sleep, hormone support and improving focus. If you'd like to learn more about the

supplements discussed on the Huberman Lab podcast, you can go to Momentous, spelled O-U-S, so livemomentous.com/huberman. If you're not already following me on social media, it's Huberman Lab on all social media platforms. So that's Instagram, Twitter (now called X), Threads, Facebook, and LinkedIn. And on all those places, I discuss science and science-related tools,

some of which overlaps with the content of the Huberman Lab podcast,

but much of which is distinct from the content on the Huberman Lab podcast. So again,

it's Huberman Lab on all social media platforms. If you haven't already subscribed to our monthly

neural network newsletter, the neural network newsletter is a completely zero cost newsletter

that gives you podcast summaries as well as tool kits in the form of brief PDFs. We've had tool

kits related to optimizing sleep, to regulating dopamine, deliberate cold exposure, fitness,

mental health, learning and neuroplasticity, and much more. Again, it's completely zero cost to sign up. You simply go to hubermanlab.com, go over to the menu tab, scroll down to newsletter, and supply your email. I should emphasize that we do not share your email with anybody.

Thank you once again for joining me for today's discussion with Mark Zuckerberg and Dr. Priscilla

Chan. And last but certainly not least, thank you for your interest in science.

Machine-generated transcript that may contain inaccuracies.

In this episode, my guests are Mark Zuckerberg, CEO of Meta (formerly Facebook, Inc.), and his wife, Dr. Priscilla Chan, M.D., co-founder and co-CEO of the Chan Zuckerberg Initiative (CZI). We discuss how CZI plans to cure all human diseases by the end of this century by funding transformative projects and technologies at the intersection of biology, engineering, and artificial intelligence (AI). They describe their funding and development of CZI Biohubs and the progress already underway to accelerate the understanding of cell function, pathways, and disease. Then, Mark discusses social media, its impact on mental health, and new tools for online experiences. We also discuss Meta’s virtual reality (VR), augmented and mixed reality tech, and how AI will soon completely transform our online and physical life experiences. This episode ought to interest anyone curious about biology, medicine, mental health, AI, and the future of technology and humanity.

For the full show notes, including the episode transcript (available exclusively to Huberman Lab Premium members), please visit hubermanlab.com.

Pre-sale password: HUBERMAN

Thank you to our sponsors

AG1: https://drinkag1.com/huberman

Eight Sleep: https://eightsleep.com/huberman

LMNT: https://drinklmnt.com/huberman

InsideTracker: https://insidetracker.com/huberman

Momentous: https://livemomentous.com/huberman

The Brain Body Contract

Tickets: https://www.hubermanlab.com/events

Timestamps

(00:00:00) Mark Zuckerberg & Dr. Priscilla Chan

(00:02:15) Sponsors: Eight Sleep & LMNT; The Brain Body Contract

(00:05:35) Chan Zuckerberg Initiative (CZI) & Human Disease Research

(00:08:51) Innovation & Discovery, Science & Engineering

(00:12:53) Funding, Building Tools & Imaging

(00:17:57) Healthy vs. Diseased Cells, Human Cell Atlas & AI, Virtual Cells

(00:21:59) Single Cell Methods & Disease; CELLxGENE Tool 

(00:28:22) Sponsor: AG1

(00:29:53) AI & Hypothesis Generation; Long-term Projects & Collaboration

(00:35:14) Large Language Models (LLMs), In Silico Experiments

(00:42:11) CZI Biohubs, Chicago, New York

(00:50:52) Universities & Biohubs; Therapeutics & Rare Diseases

(00:57:23) Optimism; Children & Families

(01:06:21) Sponsor: InsideTracker

(01:07:25) Technology & Health, Positive & Negative Interactions

(01:13:17) Algorithms, Clickbait News, Individual Experience

(01:19:17) Parental Controls, Meta Social Media Tools & Tailoring Experience

(01:24:51) Time, Usage & Technology, Parental Tools

(01:28:55) Virtual Reality (VR), Mixed Reality Experiences & Smart Glasses

(01:36:09) Physical Exercise & Virtual Product Development

(01:44:19) Virtual Futures for Creativity & Social Interactions

(01:49:31) Ray-Ban Meta Smart Glasses: Potential, Privacy & Risks

(02:00:20) Visual System & Smart Glasses, Augmented Reality

(02:06:42) AI Assistants & Creators, Identity Protection

(02:13:26) Zero-Cost Support, Spotify & Apple Reviews, Sponsors, YouTube Feedback, Momentous, Social Media, Neural Network Newsletter

Title Card Photo Credit: Mike Blabac

Disclaimer