Lenny's Podcast: Product | Growth | Career: Becoming evidence-guided | Itamar Gilad (Gmail, YouTube, Microsoft)

Lenny Rachitsky 9/21/23 - Episode Page - 1h 13m - PDF Transcript

Themes

Evidence-based decision-making, Product development, Metrics, Prioritization, OKRs

Discussion
  • Itamar Gilad discusses the importance of being evidence-guided in product development.
  • Being evidence-guided means using data and research to inform decision-making and prioritize features.
  • Gilad shares his experiences implementing evidence-guided practices at various companies.
  • He provides insights on how to build a culture of evidence-guided decision-making.
  • The episode highlights the benefits of adopting an evidence-guided approach in product development.
Takeaways
  • Companies should focus on evidence-guided decision-making and be willing to pivot or abandon projects that don't work out.
  • Consider implementing OKRs as a goal-setting tool within your organization, starting with metrics, mission, and team missions.
  • Create an environment in your organization where leaders are encouraged to provide evidence for their ideas.
  • Evaluate and potentially transform the areas of goals, ideas, steps, and tasks in product development.
  • Focus on delivering value and creating valuable content.

00:00:00 - 00:30:00

In this episode, Itamar Gilad, a former product manager at Gmail, YouTube, and Microsoft, discusses the importance of being evidence-guided in product development. He explains that being evidence-guided means using data and research to inform decision-making and prioritize features. Itamar shares his experiences implementing evidence-guided practices at various companies and provides tips for product managers looking to adopt this approach. Overall, the episode highlights the benefits of using evidence to drive product decisions and improve user experiences.

  • 00:00:00 Itamar Gilad, a product coach and former product manager at Google, discusses his book 'Evidence Guided' and the importance of an evidence-guided approach to decision-making in product development. He shares practical frameworks and strategies for implementing this approach, including the confidence meter, metric trees, and the GIST board. Gilad also discusses the misuse of ICE for prioritizing ideas and offers insights on making OKRs more effective.
  • 00:05:00 Google's attempt to launch a social network called Google Plus failed as people didn't need or use it. The project involved a large team of about 1,000 people and was eventually shut down. This failure highlights the problem of opinion-based development and the importance of adopting a fail-fast mentality.
  • 00:10:00 The podcast guest discusses their experience working at Google and how they implemented a user-centric approach to improve Gmail's tabbed inbox feature. They emphasize the importance of balancing human judgment with evidence in successful product companies like Google, Amazon, and Apple. The podcast host also mentions the guest's book, 'Evidence Guided,' which provides a system for bringing evidence-guided thinking into organizations.
  • 00:15:00 The podcast discusses the importance of evidence-based decision-making in organizations, particularly in the startup and scale-up phases. It emphasizes the need for leaders to critically evaluate ideas and rely on evidence rather than opinions. The example of Steve Jobs and the development of the iPhone is used to illustrate the power of evidence in shaping successful outcomes. The podcast also explores how to encourage founders and executives to consider counterpoints and data-driven proposals.
  • 00:20:00 The podcast discusses signs of a non-evidence-guided system in product development, such as missing user-facing metrics, excessive time spent on planning, lack of experimentation and learning, and disengaged teams. The guest introduces the GIST model, which stands for Goals, Ideas, Steps, and Tasks, as an approach to becoming more evidence-guided. The model breaks down the product development process into four parts and incorporates principles from lean startup, design thinking, and product discovery. The model emphasizes the need to evaluate and potentially transform each area. Strategy and vision are considered separate from the GIST model, and research is seen as complementary to it. The guest also highlights the importance of adapting the process to fit each company's unique context.
  • 00:25:00 Goals are meant to define the desired end state of an organization. Evidence-guided companies use models to create overarching goals for the entire organization. The North Star metric is a key metric used to measure the value delivered to the market. It is important to break down goals into metric trees to identify the most important metrics that drive overall success.

00:30:00 - 01:00:00

In this episode, Itamar Gilad, a former product manager at Gmail, YouTube, and Microsoft, discusses the importance of being evidence-guided in product development. He explains that being evidence-guided means using data and research to inform decision-making and prioritize features. Itamar shares his experiences implementing evidence-guided practices at various companies and provides insights on how to build a culture of evidence-guided decision-making. He also emphasizes the need for product managers to be curious, ask the right questions, and continuously learn from user feedback and data. Overall, the episode highlights the benefits of adopting an evidence-guided approach in product development.

  • 00:30:00 The podcast discusses the concept of metrics trees and how they can help organizations align their goals and assess the impact of their experiments. It also emphasizes the importance of having a clear understanding of the math formula that drives the North Star metric or revenue. The episode introduces the ICE framework for evaluating ideas in a more objective and consistent way.
  • 00:35:00 The podcast discusses the concept of Impact, Confidence, and Ease (ICE), which assigns values to ideas based on those three factors. The hosts mention Sean Ellis, who created the ICE framework, and explain how to calculate confidence using a tool called the Confidence Meter. They also emphasize the importance of testing ideas and avoiding overcomplication.
  • 00:40:00 The podcast discusses the process of evaluating and gaining confidence in ideas before implementing them. It emphasizes the importance of considering different factors such as thematic support, feedback from colleagues and stakeholders, data analysis, and testing. The hosts also highlight the need to know when to stop pursuing an idea and the role of product managers in saying no to prevent ineffective ideas from being implemented.
  • 00:45:00 The podcast discusses the importance of focusing on getting the right outcomes rather than just getting the bits into production. The evidence-guided method is highlighted as a more impactful and resource-efficient approach compared to opinion-based methods. The podcast also touches on different approaches for companies at different stages of development.
  • 00:50:00 The podcast discusses the steps involved in validating ideas and assumptions in organizations. It emphasizes the importance of learning at a lower cost and highlights various methods such as assessment, data digging, testing, building rough versions, and conducting experiments. The goal is to help companies realize that there are multiple ways to validate ideas before investing in elaborate MVPs.
  • 00:55:00 The podcast discusses the challenges of bridging the gap between planning and execution in organizations. It suggests involving developers more in the process and introduces the concept of a GIST board to facilitate communication and alignment. The GIST board consists of goals, ideas, and steps that teams can regularly update and discuss.

01:00:00 - 01:12:50

In this episode, Itamar Gilad, a former product manager at Gmail, YouTube, and Microsoft, discusses the importance of evidence-based decision-making in product development. He explains how relying on data and experimentation can lead to better outcomes and help teams avoid common pitfalls. Gilad also shares insights on how to build a culture of experimentation within organizations and provides practical tips for product managers looking to become more evidence-guided in their work.

  • 01:00:00 The podcast discusses a project management tool that emphasizes learning milestones and evidence-guided thinking. It involves a team owning each step of the process and actively changing the board based on results. The framework is different from traditional roadmaps and encourages outcome-based goals. The book mentioned provides more details on implementing the framework.
  • 01:05:00 The speaker discusses the use of OKRs (Objectives and Key Results) as a tool for goal-setting and alignment within organizations. They explain that OKRs are based on metrics, mission, and individual team missions, and can be supplemented with additional goals related to company and product health. The speaker also suggests that organizations can gradually adopt OKRs and become more evidence-guided in their decision-making.
  • 01:10:00 The podcast episode discusses the importance of delivering value and creating valuable content. The guest shares the valuable lesson they learned from their parents about striving to be the best and delivering the most value. They also mention their favorite Israeli food, shawarma. The episode concludes with information on how to find the guest's book and reach out to them.

You fake it, you do a fake door test, you do a smoke test,

a Wizard of Oz test.

We used a lot of those in the tabbed inbox, by the way.

One of the first early versions was actually,

we showed the tabbed inbox working to people,

but it wasn't really Gmail, it was just a facade of HTML.

And behind the scenes and according to the permissions

that the users gave us, some of us moved just the subject

and the sender into the right place.

So initially the interviewer kind of distracted them

and then they show them their inbox

and in it the top 50 messages were sorted to the right place,

more or less, if we got it right.

And people were like, wow, this is actually very cool.

But it gave us some evidence to go and say,

hey, we should try and build this thing.

Welcome to Lenny's podcast,

where I interview world-class product leaders

and growth experts to learn from their hard-won experiences

building and growing today's most successful products.

Today, my guest is Itamar Gilad.

Itamar is a product coach, author, speaker,

and former long-time product manager at Google,

where he worked on Gmail, Identity, and YouTube.

He also just published an awesome new book

called Evidence Guided:

Creating High-Impact Products in the Face of Uncertainty.

Itamar has an important perspective on why

and also how you can push your team and organization

from an opinion-based decision-making process

to a more evidence-guided approach.

In our conversation, Itamar shares a number

of very practical and handy frameworks to do just that,

including the confidence meter, metric trees,

GIST, and the GIST board,

plus his take on how people often misuse ICE

for prioritizing ideas,

also how you can make your OKRs more effective,

and so much more.

Enjoy this episode with Itamar Gilad

after a short word from our sponsors.

This episode is brought to you by Ezra,

the leading full-body cancer screening company.

I actually used Ezra earlier this year,

unrelated to this podcast, completely on my own dime,

because my wife did one and loved it,

and I was super curious to see if there's anything

that I should be paying attention to in my body

as I get older.

The way it works is you book an appointment,

you come in, you put on some very cool silky pajamas

that they give you that you get to keep afterwards.

You go into an MRI machine for 30 to 45 minutes,

and then about a week later,

you get this detailed report

sharing what they found in your body.

Luckily, I had what they called an unremarkable screening,

which means they didn't find anything cancerous,

but they did find some issues in my back,

which I'm getting checked out at a physical next month,

probably because I spend so much time

sitting in front of a computer.

Half of all men will have cancer at some point

in their lives, as will one-third of women.

Half of all of them will detect it late.

According to the American Cancer Society,

early cancer detection has an 80% survival rate

compared to less than 20% for late-stage cancer.

The Ezra team has helped 13% of their customers

identify potential cancer early,

and 50% of them identify other clinically significant issues,

such as aneurysms, disc herniations,

which maybe is what I have, or fatty liver disease.

Ezra scans for cancer and 500 other conditions

in 13 organs using a full-body MRI powered by AI,

and just launched the world's only 30-minute full-body scan,

which is also their most affordable.

Their scans are non-invasive and radiation-free,

and Ezra is offering listeners $150 off their first scan

with code LENNY150.

Book your scan at Ezra.com slash Lenny.

That's E-Z-R-A dot com slash Lenny.

This episode is brought to you by Vanta,

helping you streamline your security compliance

to accelerate your growth.

Thousands of fast-growing companies like Gusto,

Calm, Quora, and Modern Treasury trust Vanta

to help build, scale, manage, and demonstrate

their security compliance programs

and get ready for audits in weeks, not months.

By offering the most in-demand security and privacy frameworks,

such as SOC2, ISO 27001, GDPR, HIPAA, and many more,

Vanta helps companies obtain the reports they need

to accelerate growth, build efficient compliance processes,

mitigate risks to their businesses,

and build trust with external stakeholders.

Over 5,000 fast-growing companies use Vanta

to automate up to 90% of the work involved

with SOC2 and these other frameworks.

For a limited time, Lenny's Podcast listeners

get $1,000 off Vanta.

Go to Vanta.com slash Lenny.

That's V-A-N-T-A dot com slash Lenny

to learn more and to claim your discount.

Get started today.

Itamar, thank you so much for being here.

Welcome to the podcast.

It's a pleasure being here.

Thank you for inviting me.

It's my pleasure.

I thought we'd start with the story of your work

on Google Plus and Gmail

and how those experiences formed your perspective

on how to build successful product.

Can you share that story?

Google Plus was my first experience at Gmail.

I joined Gmail in August 2011.

And the first thing they asked me is,

let's connect Gmail with Google Plus.

If you're hazy about the story,

back then Facebook was massive.

It's still massive, but then it was growing

like mushrooms, and people were spending hours on it.

It really freaked out Google.

And the solution, the obvious solution

was to launch a social network of Google called Google Plus.

And we all believed in this thing.

It really caught on very well initially.

We all used it, we all believed in it.

So our mission was to build this thing

and Google really spared no expense.

It created a whole new division within Google

and it created a whole strategy around Google Plus.

And we had to connect Gmail and YouTube

and search to Google Plus to make them more personalized

in a sense and more social.

So that was the idea and we went on

and we launched a series of features in Gmail

for a couple of years, honestly.

And Google Plus itself became this massive project,

very feature-rich and with a lot of redesigns

and iterations and none of it worked.

It turned out people actually didn't need

another social network, people didn't love it,

people didn't use it.

Eventually in Gmail, we rolled back

all the Google Plus integration a few years later

and Google Plus itself was shut down in 2018.

So putting aside all the tremendous waste

that went into this, all the millions of person hours

and person weeks, in hindsight,

not only did Google bet on the wrong thing,

it missed much easier opportunities.

So just not far from Google's headquarters,

there was WhatsApp, not very famous in the US,

but they actually created massive impact.

Hundreds of millions of people were using their stuff

and they became a threat to Facebook

much more than Google was.

So Google missed the opportunity of social mobile apps

like WhatsApp, Snapchat, et cetera.

And for me, this story kind of was the epitome

of what I call today opinion-based development.

We come up with an idea, we believe in it,

all the indications show it's good,

maybe the early tests show it's good,

then we just go all in and we try to implement it.

And I made this very mistake many times

as a product manager, I was the guy pushing for the ideas.

So for me, this was kind of a turning point.

I felt we need to adopt a different system.

And just before you move on to the next story,

how big was the team roughly?

How many years were spent on this area

just to give people a sense of the waste, as you said?

So there was a tremendous earthquake inside Google

to create the Google Plus team.

Teams and entire divisions were kind of thrown apart

and reformatted.

And I think at its peak, it was about 1,000 people inside.

Wow.

It was a division, the size of Android and Docs

and a really sizable thing, their own buildings.

Yeah, it's taken from the playbook of Steve Jobs,

you know, create this whole secretive project inside

and just run like hell.

Yeah, I remember though, Facebook was really scared.

I remember they shut everything down,

it was like a code red, DEFCON 1 situation too.

So it really scared Facebook at the same time.

Yeah, it's true.

But at the end of the day,

neither Google's advertising revenue was affected,

nor was Facebook's.

So it turned out this idea was not that necessary after all.

Yeah, okay.

So that's an example of something that didn't work

because it was opinion-based development,

I think that's the phrase you used.

And then there's a different experience

with tabs, I think with Gmail.

That's right.

So Google is a very successful company.

It's not for me to criticize it

or, in hindsight, kind of say you guys need to be better.

And some of the people that were behind Google Plus

were some of the smartest leaders,

and I still think they are, despite this story,

if you look back at the history of Google,

how things started in the first decade or so,

Google was what I call an evidence-guided company.

So essentially it put a high premium

on focusing on customers,

on coming up with a lot of ideas,

on looking at the data,

looking at how these ideas actually worked out.

They weren't shy about launching betas

and things that were very rough and incomplete

and learning from that.

And then they expected people to take action

based on the results.

So fail fast is a very famous paradigm.

And so you had to kill your project

or pivot it seriously if it didn't work out.

And I think had we kept fail fast,

had we had this mentality,

it would really have helped Google Plus.

But for some reason, with Google Plus,

Google put this playbook aside

and used a different playbook,

which I call plan and execute essentially.

But I think inside Google, the DNA still existed.

So inside Gmail, the next project after Google Plus

was the tabbed inbox.

So it started with the,

it was kind of the reverse of Google Plus.

It started with a very small idea

that no one believed in.

And we started looking at what's behind this idea,

what's the goal, what's the problem actually

we're trying to solve.

It turned out that a lot of people were receiving

social notifications and promotions, et cetera.

And most of them were very passive.

They weren't cleaning their inbox.

They were just living in this world of clutter.

And I came up with an idea how to fix this.

I was sure it was great.

I wanted to push it, you know, plan and execute.

But my colleagues were like, hold on,

we actually tried this.

We have a bunch of ideas to help people

organize their inbox.

They're not using it.

Why is your idea good?

So that sent me and my team

into searching, into researching these users,

into establishing a goal that was much more user-centric.

And then thinking of other ideas.

And then we started testing them much more rigorously.

And basically we started testing on our own inboxes.

And then we recruited other dogfooders,

other Googlers to test the same inbox.

Then we put it outside for external testers

with usability studies, and we did data analysis.

We built a whole data mining team

and a whole machine learning team

to build the right categorization.

And we ended up with a solution that turned out

to be very successful for a lot of these passive users.

And this was a surprise to a lot of people

because most of my colleagues

and most of the people I talk with

actually know how to manage their inbox.

So for them, that solution makes no sense.

Like splitting promotions and social to the side

sounds like the stupidest idea.

But there's about 85% of the population,

85 to 88% that absolutely love it.

And today Gmail has about 1.8 billion active users

according to Google.

Most of these users are using this feature.

So it was a pretty high impact feature as well.

And the feature specifically, just in case people

don't totally get it, is the promotions folder

and the social, I think.

And then there are a couple more

that you can enable in settings if you like.

I use it.

I love it.

Except it puts my newsletter in people's promotions folder.

Who do I talk to about that?

Yeah, newsletters are a very complicated

scenario for the categorization engine.

Yeah, we just need an exception for my newsletter.

And then we're good.

Okay, but go on.

So in hindsight, I was asking myself,

why was this project so different?

And I think the reason is that we didn't have

that much confidence in our opinions.

We had opinions, we had ideas,

but we didn't just go all in and just let's build it.

We actually used an evidence-guided system.

And I think that's not unique just to Google.

I think every successful product company out there

that you look at, Amazon, Airbnb,

anyone you will check, at least in their best periods,

they found a way to balance human judgment with evidence.

They didn't try to obliterate human judgment and opinion

just to supercharge them with evidence.

And they came up with very different models.

Apple is another example,

but the principle still holds in all these companies.

Awesome, so you took that experience

and all the experience you've had from coaching,

product leaders, working with companies,

and you wrote this book called Evidence Guided,

which people on YouTube can see sitting there behind you.

And so I wanna talk through some of these stories

and then some of these other lessons and frameworks

that emerged, but maybe just to start,

what's the elevator pitch for this book?

So this is a book for people like us,

product people who want to bring evidence-guided thinking

or modern product management, if you like,

into their organizations.

There's a lot of challenges, it's not simple.

We all read the books, we all know the theory,

we all know some parts of the system.

It tries to give you a system for how to do that.

It's a meta framework that kind of helps you

lift your organization in the direction

of evidence guidance, if that's what you want to do.

So going back to the story briefly,

before we get into the frameworks and lessons of the book,

in the first example of Google+, it basically came top down:

hey, we need to build a social network, go build it.

Obviously that happens at a lot of companies.

I don't know if there's an easy answer to this,

but are there cases where it does make sense

to approach it that way?

Obviously Apple is a classic example with Steve Jobs,

it's like we need to build an iPhone.

I don't know if that's exactly how it went,

but are there instances where it is worth

just approaching new product ideas that way

based on kind of the experience and creativity

and insights of the founder?

Or is your thinking it should always come

from this evidence-based approach?

I think the founders are very important,

especially in the startup and scale-ups phase.

They come up with many of the most important ideas,

and it's super important that they have the space

to express and to push the organization to look at those.

However, it's not about shutting them down,

it's about looking at them critically.

You need to create the environment in your organization

where the leader comes and says,

you know what, I talked to these three customers,

I figured it out, here's what we need to do

in the next five years,

and you need to ask, where's your evidence?

And by the way, the example you give,

that's a classic example, Steve Jobs,

he just brainstorms in his, I don't know, kitchen,

the iPhone, and then just goes all in to build it.

That's the story Steve Jobs told,

but it's not the real story at all.

Now we know what actually happened,

and the iPhone has actually a story of discovery,

of trial and error, multiple projects to do with,

multi-touch, with phones, most of them failed.

Steve Jobs was the architect,

he kind of managed to connect the dots

and eventually come up with this perfect device,

but he wasn't actually the creator,

it wasn't his brainchild,

he was actually against it for a while,

but over time, as he saw the evidence,

as he saw what this thing can do,

as he saw the demos,

he was able to piece together something

that was very useful.

That's really important insight.

People that are hearing this might feel like,

okay, I like this idea of pushing back

and encouraging the founders to make it more evidence-guided.

In the case of, say, Google+, was it even possible?

Could you have come to Larry and Sergey

and be like, here's all this data I've gathered

that tells us this is not gonna work.

Do you have any advice for how to push back

and encourage the founders and execs

to really take that counterpoint seriously

or really kind of vet their idea?

So another nice thing about Google

is that it's a very open culture,

and people are not shy to tell even Sergey and Larry

that they are wrong, and they do this all the time.

In certain forums, right?

It's not, you need to know the right channels,

but there was a very big discussion about Google+,

and whether it's the right thing

to create a clone of Facebook.

There was a very public internal discussion.

I think what I would change

is not have this discussion based on opinions,

because when you have the discussion,

you come with your own opinions,

usually the most senior person's opinions

will win.

That's just the way it is.

If we had come with data, hard data,

we said, listen, things are not actually panning out

the way you guys are all expecting.

What can we do?

Should we continue?

Should we pivot this?

I think the discussion would have done better.

Now, I'm doing a huge disservice.

I was not in all the discussions,

I know probably in Google+,

there were very serious discussions happening

along these lines, but this is just the general trend.

I find that evidence is very empowering for us smaller people

in the organization or mid-level managers

to be empowered to challenge the opinions.

Is there anything you've found tactically to be useful

and effective for people,

say they don't work at Google,

they work at companies where founders and bosses

and execs are not as open to challenge?

Anything you've tactically found about how to present

a counterproposal or like, hey, I have this data

that we should really pay attention to?

I think if you come with data,

if you run a secret experiment and you come back

and you show them, you usually get one of two results.

Either they get extremely mad at you

and they tell you to get back to work

and to do what you were told.

And in that case, probably you need to start

polishing your resume and look for another place,

either inside the organization or outside it

because that person is not being reasonable, to be honest.

But the more common case is they are pleasantly surprised.

And that's what happened with Steve Jobs as well.

He was against phones, but then people showed him

all sorts of evidence that Apple can make a phone.

He was against multi-touch initially,

but then he changed his mind.

There was a lot of, like, back and forth.

So even Steve Jobs, given evidence,

was willing to flip.

And I see this in many organizations.

So evidence is so powerful,

that's why this is the principle I based the book on.

You have this concept of being evidence-guided.

People listening may feel like,

hey, we're evidence-guided, we run experiments.

We make decisions using data.

Oftentimes, they aren't actually.

And so what are signs that maybe you're not actually

that evidence-guided as you think you are?

I think there's a few telltale signs that I look for.

First, the goals are very unclear.

Either there are too many, or they're very kind of obscure

and vague, or they're about output; there's misalignment.

So the goals part is not there.

Usually this goes hand-in-hand with metrics,

missing metrics or just using revenue and business metrics,

but there's no user-facing metrics.

So that's another telltale sign.

Then there is a lot of time and effort spent on planning,

especially on road mapping,

creating the perfect road map,

which really can consume a lot of time

of the top management and PMs, et cetera.

Then as you go down,

you see there's not a lot of experimentation

and if there is experimentation,

there's not a lot of learning.

And finally, another telltale sign

is that the team is disengaged.

So the engineers are kind of getting the signal

that what they need to do is deliver.

They're focused on output.

That's what they're measured on.

So they're kind of disengaged.

They're disengaged from the users, from the business.

They don't care that much.

That's usually a sign of...

It's usually something that you can fix

by adopting a more evidence-guided system.

Okay, so let's dive into your approach

to becoming more evidence-guided.

In the book, you share this model that you call the GIST model,

which is kind of this overarching approach

to building product that almost forces you

to be more evidence-guided.

So let's just start with,

what's the simplest way to understand this GIST model?

With your permission, I can show a few slides and...

Oh, let's do it.

Maybe that will help.

Here we go.

Yeah, and then, yeah, a good excuse to go check this out on YouTube.

All right, you're seeing this.

So this is the GIST model.

Goals, ideas, steps, and tasks.

And essentially, it tries to break the change,

which is a really big change for a lot of companies

into four slightly more manageable parts.

They're still big,

but each one you can tackle on its own.

And that's kind of the reason I kind of split it.

And goals are about defining what we're trying to achieve.

Ideas are hypothetical ways to achieve the goals.

Steps are ways to implement the idea

and validate it at the same time.

So essentially, build, measure, learn loops.

And tasks are the things we manage, you know,

in Kanban and JIRA and all these good tools.

These are the things that your development team

is usually very focused on.

And just listening to this,

a lot of this will sound familiar to you

because GIST is not a brand new invention.

It's a meta framework that puts in place

a lot of existing methodologies.

It's based on lean startup, on design thinking,

product discovery, growth.

There's a lot of all of these things here.

It just tries to put them all into one framework or one model.
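If it helps to see the shape of the model, here is a minimal sketch of how the four GIST layers nest, written as Python data structures. The class names and fields are illustrative assumptions, not definitions from the book:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the GIST hierarchy (names are assumptions, not
# from the book): each Goal is pursued via many Ideas, each Idea is
# validated through Steps (build-measure-learn loops), and each Step
# breaks down into Tasks tracked in tools like Kanban boards or JIRA.

@dataclass
class Task:
    description: str
    done: bool = False

@dataclass
class Step:  # one build-measure-learn loop
    hypothesis: str
    tasks: list[Task] = field(default_factory=list)
    learning: str | None = None  # what the loop taught us

@dataclass
class Idea:  # a hypothetical way to achieve the goal
    name: str
    impact: float       # 0-10 guesstimate
    ease: float         # 0-10 guesstimate
    confidence: float   # 0-10, see the confidence meter later on
    steps: list[Step] = field(default_factory=list)

@dataclass
class Goal:  # desired outcome, tied to a metric
    metric: str
    target: float
    ideas: list[Idea] = field(default_factory=list)
```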

So what's the simplest way to think about

what this model is meant for?

Is this how you think about your roadmap?

Is this how you plan?

What is this trying to tell people to do differently

in the way they build product broadly?

I would say these are four areas that you need to look at

and ask, are we doing the right thing in each?

In each one, you may need to change or even transform.

And as I go and explain each one of those,

I'll give you basically three things.

In each chapter in the book,

I try to touch on three things.

The principles behind them,

the frameworks or models that implement the principles,

and then process.

And the process honestly is the most brittle part

and the one that you will need to change

and adapt to your company

because no two companies are exactly the same.

And it's very tempting when you write a book

not to give any process,

but that's the part that people actually want the most.

So it's included as well,

but just be aware that you will have to change this process.

Awesome.

Okay, so we're gonna talk about each of these four layers

before we do that.

Where do, like, vision and strategy fit into this?

Do they bucket into one of these four layers?

And how do you think about strategy and vision?

That's a great question.

So there's this whole strategic context

that is outside of GIST.

GIST is not trying to tackle that.

It assumes it's in place.

There's another huge blob, which is research.

GIST is not about research,

GIST is more about kind of discovery and delivery.

But strategy is extremely important

and you can use some of the tools

we will talk about to develop your strategy as well.

In many companies, the strategy is just

a roadmap on steroids.

It's more plan and execute just on a grand scale.

And Google Plus, again, was a strategic choice actually,

if you think about it.

So in the book, there is a chapter where I touch on strategy

and I explain how the same evidence-guided methods

are being used by companies to develop their strategy as well.

Awesome.

Maybe one last context question.

So people might be seeing this and thinking,

okay, cool, I have goals, I have ideas, steps, I have tasks.

I'm already doing this.

What is this kind of a counter or a reaction to?

What are people probably missing when they're seeing this?

And they're like, oh, I see.

This is like what we're not doing

and this is the most important.

This is something we should probably change.

And I know we'll go through these in detail too.

I think talking about each one will help.

Let's do it.

Let's do it.

But we can talk about in each level

what's actually being done.

So when people say, I have goals,

usually they take the goals layer

and use it as a planning session.

They talk about what shall we build by when?

What are the resources?

And that's actually not goals at all.

That's planning work.

Cool.

Let's talk about goals.

And I know part of this is OKR related too.

So I'm excited to hear you're taking OKRs.

Oh, that's a whole different discussion.

You had Christina, you had a real expert over there.

So I doubt I can add more to that.

But it's true. OKR is all part of it.

But let's start with goals.

What are goals supposed to be?

Goals are supposed to paint the end state

to define where we want to end up.

And the evidence will not guide you

unless you know where you want to go.

And in many companies, what you have is goals at the top

for revenue, market share, whatever it is.

And then a bunch of siloed goals for each department.

There's engineering goals, there's design goals,

there's marketing goals, et cetera.

And that actually pushes people into different vectors

and it's really hard to decide.

And I would argue that in evidence-guided companies,

and you worked for a few, so probably you've seen this,

they use models in order to construct

overarching goals for the entire organization.

One of the models I show in the chapter about goals

is the value exchange loop, where basically the organization

is trying to deliver as much value as it can to the market

and to capture as much value back.

And by creating a feedback loop between these two,

you are actually able to grow very fast.

Now, I would argue that you want to measure both of this

and to put a metric on each.

And the metric we usually use to measure value delivery

is called the North Star metric.

I know you wrote an article, a very good article about it.

Thank you.

And in it, you listed dozens and dozens of companies,

like leading companies.

And what they consider the North Star metric,

super interesting.

I would argue that what they told you

is what is the most important metrics we measure.

What is the number one metric for us?

But it's not what I call the North Star metric.

The North Star metric measures how much value

we create for the market.

For example, let's take WhatsApp.

WhatsApp for a very long time measured messages sent.

Because every message sent is a little increment of value

for the sender, the receiver, it's free, it's rich media,

you can send it from anywhere in the world.

Compared to SMS, that's huge value.

So if in year one we have a billion messages being sent

and year two, two billion,

probably we double the amount of value.

In Airbnb, I think one of your key metrics

or the real North Star metric was nights booked.

I don't know if it was still the case while you were there.

Yeah, absolutely.

And there are examples like this in Amplitude, for example.

They measure active learning users

or weekly active learning users,

which are users that found in the tool some insight

that was so important that they shared it

with at least two other users and they consumed it.

So it's a very powerful thing to point at this metric and say:

this is the most important metric,

combined with the value metric that we want to capture,

revenue, market share, whatever it is.

Once you have these two, you can further break them down

into what I call metrics trees.

So there's a metric tree for the North Star metric

and there's the metric tree for the top KPI,

the top business metric,

which you see here on the left side in blue,

and usually they overlap.

So you might find in the middle some metrics

that are super, super important

because moving them actually moves the needle

on everything else.

Can you clarify again the difference

between what you call this top KPI versus North Star metric?

So the North Star metric is measuring how much value

we're creating for the user,

the core value that they're getting.

In this case, this is some productivity suite.

So this is number of documents created per month, for example,

because we think that every document created,

maybe it's a small document, I don't know,

AI is in fashion now, is a little increment of value.

So that's the number we're trying to grow.

The top KPI is what we expect to get.

It could be revenue or profit.

I see, this is the value exchange.

I see one is what users are getting,

one is what you're getting back from them.

Exactly.

Basically what the business is,

how the business is benefiting.

Awesome.

I think this is a really important concept, the metric tree.

I think a lot of people think they have something like this

in mind where they're just like,

cool, here's our North Star metric.

Here's the levers and things that we can work on

to move that.

But I think actually mapping it out the way you have it here

where it kind of goes layers and layers deep

to all of the different variables that impact this metric.

Not only is it a way to think about impact

and goals and things like that,

but it also helps you estimate the impact of the experiment

you're potentially thinking about running.

So if you're gonna work on something at the bottom here,

like activation rate, like say you move that 10%,

how much is that gonna impact this global metric?

It's probably a very small amount.

This is a very important one

and we will talk about impact assessment shortly.

This helps with it.

It also helps with alignment

because the entire organization

is trying to move these two metrics.

It's the two sides of our mission essentially.

We have the mission, that's the top objective of the company

and these are the two top most key results if you like,

the top most things.

So when you go and work with another team

and you say, hey, why don't you work on my project?

They might say, you know, this project, this idea,

actually might move the North Star metric

more than your idea.

And that helps you guys align.

And I've seen cases where team B put aside their own ideas

to jump on the ideas of team A because of this model.

It also creates an opportunity to give some submetrics

to teams to own on an ongoing basis.

So it creates a little sense of ownership as well

and mission within the tree.

It also helps you figure out what teams you should have,

which teams have the biggest potential

to impact the metric.

And I think that happens in a lot of organizations.

The team topology reflects, you know,

the structure of the software

or some hierarchical model

where we want to organize the organization in a particular way.

But if you start with a metrics tree,

you can try to arrange the topology around goals.

And sometimes you need to readjust.

It's not a constant reorg,

but from time to time you will realize

the goals have changed and we need to reorganize.

So the tree helps visualize that as well.

I think for people that are listening to this

and thinking about this,

I think the simplest way to even think about this

is basically there's a formula,

there's like a math formula that equals your North Star metric

or your revenue or whatever you're trying to do.

And if you don't have some ideally really clear sense

of what that math formula is, you should work on that

because that will inform so much of how you think about

where to invest, what teams to have,

where to invest more resources, less resources.

Right.
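To make that concrete, here's a toy version of such a formula in Python, loosely following the productivity-suite example from the metric tree discussion. The decomposition and every number in it are hypothetical, but it shows why writing the formula down lets you size an experiment, like the activation-rate example above, before running it:

```python
# Toy North Star formula for the productivity-suite example above.
# The decomposition and all numbers are hypothetical, for illustration.

def docs_per_month(existing_actives, new_signups, activation_rate,
                   docs_per_active):
    """North Star: documents created per month."""
    actives = existing_actives + new_signups * activation_rate
    return actives * docs_per_active

baseline = docs_per_month(existing_actives=900_000, new_signups=100_000,
                          activation_rate=0.40, docs_per_active=8)

# A 10% relative lift in activation (0.40 -> 0.44): how much does the
# North Star move?
lifted = docs_per_month(existing_actives=900_000, new_signups=100_000,
                        activation_rate=0.44, docs_per_active=8)

print(f"{lifted / baseline - 1:.2%}")  # ~0.43%: a big lift on one leaf
                                       # metric barely moves the top
```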

Imagine a place where you can find all your potential customers

and get your message in front of them in a cost-efficient way.

If you're a B2B business, that place exists

and it's called LinkedIn.

LinkedIn ads allows you to build the right relationships,

drive results and reach your customers

in a respectful environment.

Two of my portfolio companies, Webflow and Census,

are LinkedIn success stories.

Census had a 10x increase in pipeline

with a LinkedIn startup team.

For Webflow, after ramping up on LinkedIn in Q4,

they had the highest marketing source revenue quarter to date.

With LinkedIn ads, you'll have direct access to,

and can build relationships with decision makers,

including 950 million members, 180 million senior execs

and over 10 million C-level executives.

You'll be able to drive results with targeting

and measurement tools built specifically for B2B.

In tech, LinkedIn generates a two to five X higher return

on ad spend than other social media platforms.

Audiences on LinkedIn have two times the buying power

of the average Web audience, and you'll work with a partner

who respects the B2B world you operate in.

Make B2B marketing everything it can be

and get $100 credit on your next campaign.

Just go to LinkedIn.com slash pod Lenny to claim your credit.

That's LinkedIn.com slash pod Lenny.

Terms and conditions apply.

Okay, so metrics trees, what comes next?

All right, so next we need to go to the ideas layer.

And the ideas layer is there to help us sort

through the many ideas we might encounter.

And they may come from, as you said, the founders,

the managers, the stakeholders, from the team,

from research, from competitors.

We're flooded with ideas.

And what usually happens inside the organization

is some sort of battle of opinions

or some sort of politics sometimes

or highest paid person opinion.

HiPPO, you had Ronny Kohavi,

who invented this term, on your show.

What doesn't happen is very rational, logical decisions

about which are the best ideas.

Because it's really, really hard to predict honestly.

There is so much uncertainty in the needs of the users,

in the changes in the market, in our technology,

in our product, in our own organization.

It's almost impossible to say this idea

is going to be the best.

But we do say this because we have cognitive biases

that kind of convince us that this idea is far superior

to anything else.

And it's definitely the right choice.

In order to avoid this, what we want to do

is to evaluate the ideas in a much more objective

and consistent and transparent way.

In the book, I suggest using ICE: Impact, Confidence, and Ease.

I think I have a slide coming on this.

So Impact, Confidence, and Ease,

which is basically a way to assign three values to each idea.

The impact tries to assess

how much impact it will have on the goals.

And that's why it's so important

that we have very clear goals and not many.

Are we measuring the ideas on the North Star metric,

on the top business KPI, on a local metric of the team,

whatever it is, let's be clear about it

and then let's evaluate the ideas against this thing.

Ease is basically the opposite of effort,

how easy or hard it's going to be.

But both of those are guesstimates.

Both of those are things we need to estimate.

I would argue that just by breaking the question

to these two questions,

we usually have a slightly better discussion

than just my idea is better than yours.

But then there's the third element, which is confidence,

which tries to assess how sure we are, or should be,

about our first guesstimates, about the impact and the ease.

It's interesting to use the word ease

because I think it's usually effort.

You kind of make it positive.

Is that an intentional tweak you make?

I'm using the definitions of Sean Ellis.

Sean invented ICE.

You know Sean, I don't know if you've had him yet.

I haven't had him on yet.

Yeah, for the people who don't know him,

Sean is amazing.

He's like one of the fathers of the growth movement.

He coined the term growth hacking

and he really popularized the concept of product market fit

and created ice.

He created a bunch of things that we use in product

that we don't even know.

I didn't know he came up with ice.

Okay, cool.

So the original version of ice is ease instead of effort.

Exactly, yeah.

Fun fact.

A lot of viewers are wondering where the R is,

because there's another variant of this called RICE,

where there is reach as well.

I prefer ICE because I prefer to fold the reach

into the I for various reasons,

but both are valid, both are equivalent in a sense.

I'm in your boat.

That's exactly how I think about it.

I think people overcomplicate this stuff

and try to get so many math formulas

involved with estimating impact.

And I feel like these are just simple heuristics

to kind of bubble the best ideas to the top.

It doesn't have to be a perfect like estimate

of impact and confidence and all those things.

So I think the simpler is better,

and it always ends up being a spreadsheet.

People always have these tools to estimate these things,

but I feel like a spreadsheet, Google Sheets, is enough.
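A spreadsheet really is enough; for those who prefer code, here's the same thing as a few lines of Python. The ideas and scores below are invented, and multiplying the three values is one common formulation (averaging is another), so treat this as a sketch rather than the canonical ICE math:

```python
# Minimal spreadsheet-style ICE ranking. Ideas and scores are made up;
# multiplying impact x confidence x ease is one common variant.

ideas = [
    # (name, impact 0-10, confidence 0-10, ease 0-10)
    ("Redesign onboarding flow", 7, 2.0, 4),
    ("Add AI summary feature",   8, 0.1, 3),  # thematic support only
    ("Fix checkout latency",     5, 6.0, 8),  # backed by test data
]

for name, impact, confidence, ease in sorted(
        ideas, key=lambda i: i[1] * i[2] * i[3], reverse=True):
    print(f"{name:26s} ICE = {impact * confidence * ease:6.1f}")
```

Note how the low-confidence AI idea sinks to the bottom despite its high impact guess, which is exactly the corrective effect the confidence score is meant to have.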

Great.

So yeah, you're actually leading me to my next point.

So when you come to estimate impact,

you will realize it's the hardest part.

So sometimes it's just a gut feeling and it's a guess.

And sometimes it's based on some spreadsheet

or some analysis and back of the envelope

calculation you've done.

And I think that's legitimate.

Sometimes these things do show you some things

you didn't think of.

And sometimes the best case, it's based on tests.

You actually tested it.

You interviewed 12 customers, you showed them the thing

and out of those, only one actually liked it.

You should reduce your impact estimate based on that, usually.

Or you do other types of tests.

So we'll talk about this in a second.

What happens is that people tend to just go with gut instinct

and then give themselves a high confidence.

They say it's an eight and I'm pretty convinced

so it's eight for confidence.

And I found this a bit disturbing

because it kind of subverts the whole system.

So I wanted to help people realize

when they have strong evidence in support of their guesses

and when it's weak evidence.

How to calculate confidence in a sense.

And for that I created a tool called the Confidence Meter

which you can see here, this colorful thing.

And shall I go and explain it?

Yeah, let's do it.

And then again, if you're just listening to this,

you can check this out on YouTube

and you can see the actual slide.

All right, awesome.

So basically I constructed it a bit like a thermometer.

It goes from very low confidence

which is the blue area or the upper right

all the way to high confidence which is the red area.

And you can see the numbers going from zero to 10

where zero is very low confidence.

We don't know basically anything.

We're just guessing in the dark.

And 10 is full confidence.

You know for sure this thing is a success.

No doubt about it.

And across the circle,

I put various classes of evidence you might find along the way.

So for example, starting at the top right,

all of this blue area is about opinions.

It could be your own self-confidence in the idea,

your self-conviction.

You feel it's a great idea.

Guess what?

Behind every terrible idea that ever was,

someone thought it was great.

That gives you 0.01 out of 10.

Maybe you created a shiny pitch deck

or a six page document that explains in detail

why this is a great idea.

Slightly harder to do but still very low confidence.

Maybe you connected it to some theme.

You know it's about the blockchain.

Well, sorry, the blockchain is out of fashion.

What's hot right now?

AI.

Exactly, AI.

It's about AI.

That makes it a good idea?

Absolutely not.

Or the strategy of the company.

That's another thematic support.

Thousands and thousands of terrible ideas

are being implemented right now

as we speak based on these themes.

So all these things combined can give you a maximum

of 0.1 out of 10, according to the tool if you follow it.

Then we move into slightly harder tests.

One is reviewing the idea with your colleagues,

your managers, your stakeholders.

They don't know it either.

They don't have a crystal ball.

They're usually not the users.

They cannot predict, but they can evaluate it

in a slightly more objective way

and maybe find flaws in your idea.

On the other hand, groups tend to have biases too.

Politics, group think.

So groups can actually arrive, sometimes,

at worse decisions than individuals.

There's some research on that.

Next are estimates and plans.

So you may do some sort of back-of-the-envelope

calculation, or your colleagues might go out

and try to evaluate the ease a little bit better.

That gives you a little bit more confidence,

but still we're at the level of guesswork at this point.

Next, we're moving to data.

And data could be anecdotal.

So you find a few data points dotted across your data

or you talk to a handful of customers

or maybe one competitor has that same idea.

In many companies I meet,

if the leading competitor has this feature

and we think it's a good idea, validation is done.

Let's launch it.

That's it.

It's a great idea.

We need to do it.

Never works, honestly.

You should not assume that your competitor

actually knows what they're doing any more than you do.

Data could be also what I call market data

that comes from surveys,

from assessing a lot of your data

by doing a deep competitive analysis.

And there are other methods

where you create a larger data set

and you can test your idea against it.

Finally, to gain medium and high confidence,

you really need to build your idea and test it.

And that's where the red area is.

So there's various forms of tests.

We will talk about them if we have time.

And they give you various levels of confidence.

Awesome. This is a very cool visual.

We'll link to an image of this in the show notes too

if people want to check it out.

I think what's awesome about this

is you could just use this as a little tool

on your team of just like,

where are we on the spectrum?

Like we think the impact of this is very high,

but we're probably in this like blue area of confidence.

And so let's just make sure we understand that.

And it's really clear language to help people understand.

I see, if we had this, we'd be a lot more confident.

So you can also tie your investment into the idea

based on the level of confidence you had found, essentially.

So early on, you want to do the cheap stuff

just to gain more confidence

and then you can go and invest more.

If it's a really cheap idea,

you can jump straight to a high-confidence test.

You can do an AB experiment, early adopter program,

whatever it is, and then launch it.

Some ideas you don't need to test.

Sometimes expert opinion is enough.

If you're just changing the order of the settings,

no one sees this or no one will be impacted,

the risk is low, you can launch it without testing.

So part of the trick is also knowing when to stop,

not just trying to force your way all the way up

when you don't have to.
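One way to picture how the meter and the "know when to stop" advice fit together is a small lookup like the sketch below. Only the 0.01 self-conviction score and the 0.1 cap on combined opinions come from this conversation; all the other numbers are placeholder assumptions, not Gilad's published values:

```python
# Toy confidence meter on a 0-10 scale. Only the 0.01 self-conviction
# score and the 0.1 cap on opinion-only evidence come from the episode;
# the other values are placeholders, not Gilad's published numbers.

EVIDENCE_SCORES = {
    "self_conviction":  0.01,
    "pitch_deck":       0.03,  # placeholder
    "thematic_support": 0.05,  # placeholder ("it's about AI!")
    "peer_review":      0.3,   # placeholder
    "estimates_plans":  0.5,   # placeholder
    "anecdotal_data":   1.0,   # placeholder
    "market_data":      2.0,   # placeholder
    "test_results":     6.0,   # placeholder; varies by test type
}

OPINIONS = {"self_conviction", "pitch_deck", "thematic_support"}

def confidence(evidence: list[str]) -> float:
    """Strongest evidence wins; opinion-only evidence caps at 0.1."""
    score = max(EVIDENCE_SCORES[e] for e in evidence)
    if set(evidence) <= OPINIONS:
        score = min(score, 0.1)
    return score

print(confidence(["self_conviction", "pitch_deck"]))    # 0.03
print(confidence(["thematic_support", "market_data"]))  # 2.0
```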

That's a really important point.

The other important point here is just,

a big part of a PM's job is to say no

and to stop stupid shit from happening.

And this is an awesome tool to help you do that,

to be like, okay, here's this idea you have.

Just like, let's just be real, how confident are we in this?

And okay, it's gonna take us three months to do this.

Maybe we should think about something different.

Maybe we should work up the confidence meter

before we actually commit to this.

Yeah, this is a real world usage that I hear about a lot.

People use this as kind of an objective way

to say no gently, or to say, we will think about it,

but look at these other ideas we have

and how their impact and ease and confidence stack up.

Classic PM move, just like, that was a great idea,

but what about this better idea?

Coming back to something that we talked a bit about

at the beginning, say a founder

who's actually very smart and experienced.

Say in a startup where you don't really have the time

to build tons of evidence for ideas.

Do you have a different perspective

on how much time to spend building confidence in ideas

versus just like, well, they actually have a really good idea.

Let's just see what happens.

So there's always like a trade-off between speed of delivery

and speed of discovery.

And that actually leads to the next layer

of how do we combine the two?

Because people tend to think it's an either/or,

either we are building very fast or we're learning

and then we're building very slowly.

But I think we're using the wrong metric.

The metric is not how fast can we get the bits

into production?

When there's a lot of uncertainty

and we all face uncertainty, and startups especially,

it's not about getting the bits to production.

It's about getting the right bits to production.

It's about creating the outcomes that you need, the impact.

And so it's about time to outcomes.

And I would argue that the evidence-guided method

is far more impactful, is far faster,

is far more resource efficient

than the opinion-based method.

Because opinion-based methods tend to waste a lot more

of your resources building the wrong things

or discovering, learning too late,

while evidence-guided helps you learn earlier.

Plus, it is a fallacy that if you learn, you don't build.

Good teams know how to do both at the same time.

And that's actually what the steps layer is meant to teach you

or to help you do.

Awesome.

So maybe just to close off that loop,

say someone listening is at a bigger company,

say Netflix, versus a Series A or Series B startup.

Is there something you'd recommend

about them approaching this differently?

Any kind of guidance there of just how to take

what you're sharing differently

if you're at different stages of companies like that?

Absolutely.

I think the concept we talked about of the North Star metric,

the value created versus the value captured

is very important in every company.

Building your entire metric trees,

maybe overkill; doing heavyweight OKRs,

maybe overkill for an early stage.

Early-stage companies don't even know how they create value.

So they need to iterate.

And their goal is really to find product market fit.

Beyond that, what happens is that you need

to start building your business model.

So that's your goal and you iterate towards that.

And you need to put metrics on that.

And then when you move into scale,

you need to try to create order.

Because when you scale up,

and all of this is covered in the book,

there's a special chapter just about these questions.

When you scale up, you get a lot of people

and a lot of money and everything is happening

at the same time.

So there, you need the order of evaluating ideas

in a very systematic way.

In a company like Netflix, by the way,

I don't know if they need this specific method.

They're very...

Yeah, maybe that's a bad example.

They're probably doing things pretty well.

One thing I discovered, by the way,

there's two types of companies

that really benefit from this technique.

One is those companies that are kind of emerging

into modern product development.

They have product teams, they have product managers,

they have OKRs, they're starting to do agile,

they're starting to do experimentation,

but they're struggling to put it all together.

Every CPO is building their own little framework.

And the other type is those companies

that used to be evidence-guided and they regressed.

And that happens way too often.

Change of management, change of culture.

And then all of a sudden, they need to rediscover,

to rekindle that spirit that was lost, as happened with Google+.

So some of the people that respond

the strongest are, surprisingly,

in these companies.

What I love about your frameworks

and kind of all these things we're talking about

is these are just kind of a...

You can almost think of them as a grab bag set of tools

to make you more evidence-guided as a company.

You could start with thinking about the confidence meter.

You could start using ICE more.

You could start using the metrics tree.

And all these things just push you closer and closer

to being more evidence-guided.

You don't have to adopt this whole thing all at once.

Absolutely.

I would recommend that you don't try,

because if the transformation is way too big,

you will get fatigued, you will just create a lot

of process for a lot of people,

you will not see the results,

and after a quarter you will give up.

So exactly what you suggested is the right approach.

What would be the first thing you'd suggest

if people were trying to move closer

to being less opinion-oriented and more evidence-based?

Which of these frameworks or models would you recommend first?

I recommend that they discuss internally

where is the biggest problem that they're facing.

If the goals are unclear, there's misalignment,

we keep chasing the wrong things.

Start at the goals layer.

Try to establish your North Star metric,

your top business metric, your metrics trees.

Start assigning teams their own areas of responsibility.

If you're spending a lot of time in debates

and you're constantly fighting and changing your mind,

start with the ideas layer and establish ICE,

impact, confidence, and ease, or whatever prioritization model you like,

but involve evidence in it.

I think the confidence meter is a good tool

to use regardless.

If you're building too much and you're not learning enough,

start adopting the steps layer, which we haven't seen yet.

If your team is very disengaged,

if you have one of these teams where the developers are very into agile,

very into quality, very into launching things,

but not really engaged with the product goals,

start working on the tasks layer.
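To make the ideas-layer advice concrete, here is a minimal Python sketch of ICE scoring with an evidence-capped confidence. The evidence classes and numeric caps are illustrative assumptions for this sketch, not Itamar's published calibration; the point is only that confidence has to be earned from evidence rather than asserted.

```python
from dataclasses import dataclass

# Illustrative caps (scale 0-10): the strongest evidence collected for an
# idea bounds how much confidence it may claim. These numbers are
# assumptions for this sketch, not the book's exact calibration.
EVIDENCE_CAPS = {
    "self_conviction": 0.1,     # "I just know it's great"
    "anecdotal_evidence": 1.0,  # a few customer conversations
    "market_data": 3.0,         # surveys, competitive analysis
    "user_evidence": 5.0,       # interviews, usability tests
    "test_results": 8.0,        # fake doors, A/B tests
    "launch_data": 10.0,        # measured results in production
}

@dataclass
class Idea:
    name: str
    impact: float              # 0-10: how much it could move the goal metric
    ease: float                # 0-10: the inverse of effort
    evidence: str              # strongest evidence class collected so far
    claimed_confidence: float  # 0-10: how sure the proposer feels

    def confidence(self) -> float:
        # Opinions alone cannot score high: cap confidence by evidence.
        return min(self.claimed_confidence, EVIDENCE_CAPS[self.evidence])

    def ice(self) -> float:
        return self.impact * self.confidence() * self.ease

ideas = [
    Idea("Onboarding wizard", impact=8, ease=4,
         evidence="user_evidence", claimed_confidence=9),
    Idea("Pet feature", impact=9, ease=3,
         evidence="self_conviction", claimed_confidence=10),
]
for idea in sorted(ideas, key=Idea.ice, reverse=True):
    print(f"{idea.name}: ICE = {idea.ice():.1f}")
```

Scored this way, the pet feature's claimed confidence of 10 collapses to 0.1, which is exactly the gentle, objective "no" described earlier in the conversation.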

Awesome.

Okay, let's keep going.

All right, so steps.

Steps are about kind of helping us learn and build

at the same time, as we said,

and one of the patterns I see is that organizations don't know

that they can actually learn at a much lower cost.

They believe they need to build this elaborate MVP,

which is not minimal in any way,

and then launch it, and only then will they discover whether it works.

Basically, it's what we used to call beta 20 years ago,

but just with a different name.

What I'm trying to do here at the steps layer

is to help companies realize there's a gamut of ways

to validate your ideas,

or more specifically to validate the assumptions in your idea.

And I created a little model for this.

It's called AFTER: assessment,

fact-finding, tests, experiments, and release results.

But it's, again, it's just putting together things

that much smarter people invented.
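For orientation, the sketch below lays out the AFTER gamut as an ordered list, from cheap steps that yield weak evidence to expensive steps that yield strong evidence. The relative cost and strength numbers are illustrative assumptions, not figures from the book.

```python
# The five AFTER classes, ordered cheapest/weakest to most expensive/strongest.
# Cost and evidence-strength values are illustrative assumptions.
AFTER_STEPS = [
    # (class, example techniques, relative cost, evidence strength 0-10)
    ("Assessment",      ["goal alignment", "ICE analysis", "assumption mapping"], 1, 1),
    ("Fact-finding",    ["data analysis", "surveys", "user interviews"],          2, 3),
    ("Tests",           ["fake door", "Wizard of Oz", "dogfood", "beta"],         4, 6),
    ("Experiments",     ["A/B test", "multivariate test"],                        6, 8),
    ("Release results", ["staged release", "percentage launch", "holdback"],      8, 10),
]

def cheapest_next_step(current_strength: float):
    """Return the cheapest AFTER class that would strengthen the evidence."""
    for name, techniques, cost, strength in AFTER_STEPS:
        if strength > current_strength:
            return name, techniques
    return None  # launch data already collected; nothing stronger exists

print(cheapest_next_step(0))    # ('Assessment', [...])
print(cheapest_next_step(6.5))  # ('Experiments', [...])
```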

So in assessment, you have very easy things,

things that don't require a lot of work.

You check if it aligns with the goals,

this idea that you have in your hand,

you do maybe some business modeling,

you do ICE analysis, you do assumption mapping,

which is a great tool by David J. Bland.

Or you talk to your stakeholders one-on-one

just to see if there are any risks, et cetera.

These are usually not expensive things,

and they can teach you an awful lot about the impact

and the ease of your idea.

The next step is fact-finding, digging into data.

And usually that goes hand-in-hand with this,

so you can find data through data analysis,

through surveys, through competitive analysis,

through user interviews,

and through field research, observing your users.

Obviously, these last two are pretty expensive.

So it's often good not to wait until you have the idea

and then start doing your research.

It's best to keep doing your research ongoing,

and then you have some sort of data to rely on

and to compare your idea against.

But until now, we didn't build anything.

Now you're ready to start testing,

building versions of the product

and putting them in front of users

and measuring the results.

But initially, you don't build anything, you fake it.

You do a fake door test, you do a smoke test,

Wizard of Oz tests, concierge tests, usability tests.

We used a lot of those in the tabbed inbox, by the way.

One of the first early versions was actually,

we showed the tabbed inbox working to people,

but it wasn't really Gmail, it was just a facade of HTML.

And behind the scenes, according to the permissions

that the users gave us,

some of us moved just the subject

and the sender of each message into the right place.

So initially, the interviewer kind of distracted them

and then showed them their inbox,

and in it, the top 50 messages were sorted

into the right place, more or less, if we got it right.

And people were like, wow, this is actually very cool.

And that gave us a lot of evidence.

That's an awesome story.

So that was in the user research.

It wasn't rolled out to people.

It was manual, individual.

There wasn't a single line of code written.

And this was just cooked up by our researcher

and our designers.

But it gave us some evidence to go and say,

hey, we should try and build this thing.

Love that.

So initially you fake it.

Mid-level tests are about building a rough version of it.

It's not complete, it's not polished, it's not scalable,

but it's good enough to give to users to start using.

So those are early adopter programs,

alphas, longitudinal studies, and fish food.

Fish food is testing on your own team.

Fish food, I haven't heard that term before.

So it's dog-fooding, but more local to your team.

I think it's a Google-ish thing,

but some people told me that they use fish food as well

in their company, the name.

So I'm using it.

I don't know if there's a better name for it.

I wonder why it's called fish food,

because it's like little and gentle, little flakes.

It could be, yeah, I don't know.

Okay, super cool.

I'm learning a lot here.

So the next stage is to actually build

a kind of more complete version of this,

and then you can dog-food it.

Then you can give this to your users internally.

When I joined Microsoft many years ago,

the first thing I noticed was that Outlook was very buggy.

And I asked people what's going on,

and they told me we are all dog-fooding

the next version of Outlook that hasn't come out yet.

And that's a very common practice in Silicon Valley.

You can do previews, you can do beta, you can do labs.

So those are tests.

Now there's a special class of tests

which are experiments,

because they have a control element.

So A/B tests, multivariate tests,

those are all experiments.

So I'm using the word experiment

the way data scientists use it,

although people tend to apply the word experiments

to everything that you see here.

And finally, even in the release,

you can do staged releases, you can do percentage launches,

you can do holdbacks,

all of these things help you further validate your assumptions.

Sometimes you need to walk back and change things,

but it's another opportunity to learn.

So the key point is you don't have to start

at the right-hand side, which is expensive.

You can start early on,

and that leads to parking a lot of ideas very quickly.

You realize they're not as good as you thought,

and then you can invest more effort into the good ideas.

If they generate positive evidence,

you can go further and further

until that point where you feel you're ready for delivery.
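That loop, run the cheapest useful step, park on negative evidence, invest further on positive evidence, can be summarized roughly as below. The thresholds and the run_step callback are hypothetical placeholders, not part of the method itself.

```python
def validate(idea, run_step, steps, park_threshold=0.3, ship_threshold=0.8):
    """Walk an idea through progressively more expensive validation steps.

    run_step(idea, step) is assumed to return an updated confidence in [0, 1].
    Park the idea as soon as evidence turns against it; only ideas that keep
    generating positive evidence earn the expensive right-hand-side steps.
    """
    for step in steps:  # ordered cheapest first, e.g. the AFTER gamut
        confidence = run_step(idea, step)
        if confidence < park_threshold:
            return "parked", step    # learned cheaply that it's weak
        if confidence >= ship_threshold:
            return "deliver", step   # enough evidence to switch to delivery
    return "keep validating", steps[-1]

# Hypothetical usage with a stub that pretends evidence keeps improving:
fake_confidences = iter([0.5, 0.6, 0.85])
print(validate("onboarding wizard",
               lambda idea, step: next(fake_confidences),
               ["assessment", "fact-finding", "fake door", "A/B test"]))
# -> ('deliver', 'fake door')
```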

Okay, so we've talked about goals,

we've talked about ideas, we're talking about steps here.

Is there anything else along steps?

And then next I know comes tasks.

No, this is it for steps.

There's a lot more, but we will not go into all of it.

Okay, that sounds good.

Let's talk about tasks and what you mean there.

All right, awesome.

So in many organizations, there are these two worlds.

There's the planning world,

where basically you have the managers, the stakeholders,

and some of the PMs,

who really sit and think about what we need to launch.

And that's where we create the strategies

and the roadmaps and the projects.

But guess who's not invited to the party?

The people who are actually doing the work.

They live in the Agile world.

They're very focused on moving tickets to the done state,

on burning down story points,

on pushing stuff into production.

And there's a big gap between these two worlds.

They don't understand each other.

They don't see eye to eye.

There's a lot of mistrust being built

sometimes against the plans

or the managers feel that the teams

are just not being very effective.

We've seen all of this.

And the solution, kind of the stopgap,

is to put a PM in the middle.

The PM is supposed to make all of this work,

deliver on the roadmap like a project manager,

feed the Agile machine with perfectly groomed

product backlogs and stories.

And it just doesn't work, honestly.

And the PMs I meet are very tired.

And they have to spend so much time in planning

and roadmap discussions.

And they're very busy.

They don't have time to do research or to test ideas.

So I suggest changing this

and bringing the developers a little bit out

of their Agile cage, if you like.

And no disrespect to Agile.

It's a great thing, but let's let them do more

than just develop.

Let's let them discover as well.

And one of the tools I suggest, and again,

this is a process, is what I call the GIST board.

So it's basically the top three layers of GIST.

The goals are on the right.

These are just the key results.

And usually per team, I suggest not more than four.

So you create a GIST board per team.

Then the ideas we're working on right now,

sometimes with their ICE scores.

And then the next few steps that we might want to pursue

in order to validate these ideas.

And this is a very dynamic thing.

It changes all the time.

The team needs to update it.

And the team needs to meet around it,

at least once every other week to sync,

to talk about what's going on.

Are we still following the right ideas?

How are we doing on the goals?

What are the next steps?

What's blocking us from completing

the most important steps?

And this is a discussion that is not happening today,

because most of the discussion happens

at the roadmap level.

And then there's a lot of discussion at the task level.

But this middle layer of what you're actually trying

to achieve and how well you're doing on it doesn't exist.

If you do have this, you create a lot more context

in the minds of your team.

And then they need to ask you fewer questions.

You need to tell them less what to do.

They know what success is,

and they are able to actually do a lot more on their own.
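One way to picture the board he is describing is as a small data structure: at most four key results, the ideas currently in play with their ICE scores, and the next validation steps, each with an owner. The class and field names here are assumptions chosen for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str   # e.g. "Average onboarding time < 2 days"
    current: float
    target: float

@dataclass
class Step:
    description: str   # a validation step, e.g. "usability test with mockups"
    owner: str         # a sub-team owns each step
    done: bool = False

@dataclass
class IdeaCard:
    name: str
    ice_score: float                           # from the ideas layer
    steps: list = field(default_factory=list)  # next validation steps

@dataclass
class GistBoard:
    team: str
    key_results: list = field(default_factory=list)
    ideas: list = field(default_factory=list)  # swapped in/out as evidence arrives

    def add_key_result(self, kr: KeyResult):
        # Itamar suggests no more than four key results per team.
        assert len(self.key_results) < 4, "at most four key results per team"
        self.key_results.append(kr)
```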

Is the way to think about the GIST board

as the way you should be roadmapping,

or is this more of a strategy framework

to think about what you should be prioritizing broadly?

The way I see this is at the beginning of the quarter,

the team defines its goals.

The leads of the team define the goals,

but they review it with the team,

they review it with the managers, of course,

with the stakeholders.

Everyone is in agreement.

These are the maximum four key results

and the one or two objectives you guys need to work on.

Teams cannot deliver on more than that.

You copy these key results into the GIST board.

Then you start looking at your idea bank

or you start generating ideas

and say, how can we achieve these key results?

And to clarify the thing you copy

is the key result as the goal.

Yes, exactly.

You can write the objectives alongside that

if you want to remind people what we're trying to achieve,

but the key results are the thing we show here.

Then you pick some ideas,

the ones that look most promising,

and as counterintuitive as this sounds,

I would recommend that you let the team pick these ideas.

The managers, the stakeholders can propose ideas,

everyone can propose,

but the team should use the ICE process,

and especially the product manager is very important here,

to choose which ideas to test first.

And then the team together needs to decide

which steps to run.

How can we validate this?

Some of the steps will be done by the PM,

some by the data analyst,

some by user researcher,

but some will involve the team.

There'll be some coding,

there'll be some running of experiments.

And so there's some ownership around the steps.

A sub-team owns each one of these steps,

and we change the board very actively.

So if an idea turns out to be bad,

we will take it off the board

and put another idea in its place.

Or maybe we achieve the goal,

we don't need to work on this anymore,

we can focus on something else.

So it's a project management tool, in a sense.

Awesome.

And so I'm looking at this,

and I think maybe the most important piece of this

is that steps aren't just like a project,

like launch a better onboarding

or add a step to onboarding.

You wanna emphasize the steps that you're gonna take

to get to more and more confidence essentially

and more and more evidence-guided thinking

versus just let's figure out how to launch this feature idea.

Exactly, it's not an engineering milestone

or a design milestone, it's a learning milestone.

So we build something and along the way,

we actually grow the scope of what we build.

We are building the product in the process and we learn.

So the two have to come hand in hand.

And just for folks that aren't watching this on YouTube,

let me walk through one example real quick.

So one of your goals here is average onboarding time.

You want the average onboarding time to be

less than two days; currently it's five and a half days.

An idea there is an onboarding wizard.

And then the steps are a usability test with mockups

and then a usability test with a prototype

and then an A/B test.

Yeah, basically, and you can alter this as you go along.

Sometimes you can run multiple steps in parallel,

it's not always sequential.

But that's basically the process here.
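Reusing the GistBoard, KeyResult, IdeaCard, and Step classes from the earlier sketch, that onboarding example would look roughly like this (the ICE score is a made-up placeholder):

```python
board = GistBoard(team="Growth")
board.add_key_result(
    KeyResult("Average onboarding time (days)", current=5.5, target=2.0))

board.ideas.append(IdeaCard("Onboarding wizard", ice_score=7.2, steps=[
    Step("Usability test with mockups", owner="UX research"),
    Step("Usability test with a prototype", owner="UX research"),
    Step("A/B test", owner="Data + eng"),  # steps may also run in parallel
]))
```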

Awesome, so kind of like, again,

what you're trying to emphasize here as a team is just,

we're not just gonna launch this onboarding wizard

and figure it out later.

It's like, let's be upfront about the steps

we're gonna take to build more and more confidence.

This is something we should keep investing more and more in,

which is really interesting.

Yeah, and something, another interesting thing

that happens every time you run a step,

if it's successful, you have evidence

and you can go back to the managers and tell them and share

and say, you know, with this idea,

we thought it was great, but we got this result.

What do you think that means?

And sometimes the manager that proposed it

would say, you know, I think the test failed,

let's rerun it, or sometimes they would say, you know,

maybe it's not as strong as I thought.

The discussion just becomes that much more nuanced

and objective, if you like.

Maybe just to close out this framework,

how does this relate to a roadmap

that they may have in a spreadsheet or in Jira

or in Asana or something like that?

Does this sit on top of that?

Does it replace a roadmap somewhere else?

I would say that release roadmaps,

where you're just saying, by Q3, we want to launch this,

or by October, we have to launch that,

they're kind of competing with this.

If you're doing that and people know

that the goal is to launch that thing by October,

forget about learning, forget about evidence-guided.

I recommend using outcome roadmaps,

saying by October, we want to achieve this outcome.

By Q4, we want to launch in three other countries,

or we want to grow our usage in India by that much.

By this time, we need to tackle the problem of churn.

And how we achieve this, sometimes we know,

we have a concrete idea that is high confidence

that we've already tested, we switch into delivery,

then we can put it on the roadmap and say,

yeah, we're going to build this thing

and we'll aim for October.

But otherwise, you want to keep it open.

And the roadmaps can kind of suffocate this process

if you decide upfront with low confidence

that this particular idea must be launched.
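The contrast can be sketched as two tiny data shapes: a release roadmap commits to an output by a date, while an outcome roadmap commits to an outcome and attaches an idea only once it has been validated. All entries below are hypothetical examples.

```python
# Release roadmap: commits to outputs up front, even at low confidence.
release_roadmap = [
    {"by": "October", "ship": "Onboarding wizard"},
]

# Outcome roadmap: commits to outcomes; the "how" stays open until an
# idea has been validated and the team switches into delivery.
outcome_roadmap = [
    {"by": "Q4", "outcome": "Launch in three more countries"},
    {"by": "Q4", "outcome": "Tackle churn", "metric": "monthly churn"},
    # A validated, high-confidence idea can now carry a delivery date:
    {"by": "October", "outcome": "Onboarding time < 2 days",
     "validated_idea": "Onboarding wizard"},
]
```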

Okay, so you're proposing people switch

their roadmapping practice to this,

which is very ambitious, I love it.

Well, this is not a roadmap,

this is just a tool for the team

to manage the project.

But I have a proposal for outcome roadmaps inside the book.

Oh, okay, awesome. Okay, so I was going to ask,

if people wanted to try this approach,

is the book the best way to fully understand

the framework and how to implement it?

That's one way, I have articles, I have resources on my site,

but I try to condense much of what we just discussed

and much more, a lot more nuance in the book.

So if you're interested in that, I would give it a go.

Awesome, maybe just on the topic of OKRs real quick.

How do OKRs connect to all this?

It sounds like broadly,

you kind of assume people keep working on,

here's our metrics or key results or objectives,

and then that plugs into this kind of GIST framework.

So the metrics trees plus your mission,

plus the individual missions of the teams,

give you most of what you need to populate your OKRs.

There's of course a process of alignment,

top down, bottom up, side to side,

which I talk a little bit about as well.

OKRs is a very rich topic.

But those things are usually the core.

There's usually some other OKRs

that are about the health of the company,

the health of the product, et cetera.

Those are called supplementary OKRs.

I talk about those as well.

So yeah, I think OKRs are a helpful tool if you like them.

And just zooming out again, basically,

you don't need to take all of these ideas

and lump them all together

and change the way you work as a business.

You can start with picking some of these ideas

and starting to become more and more evidence-guided.

It sounds like the GIST board

isn't where you probably want to start,

but maybe it's once you have more and more experience

using some of these tools.

Or you tell me, do you sometimes go straight

to this way of thinking about the roadmap and the plan?

So it might not be the full board

because you're missing some of the pieces,

maybe your goals are not as good

or your idea prioritization isn't as good.

But if your team is very, very delivery focused,

and sometimes it's also the opposite,

your managers are telling them what to build

and you want to break this kind of dynamic,

you want to create a step backlog.

So instead of a product backlog,

let's create a backlog of steps,

which are just validation steps,

betas and previews, et cetera.

And that changes the dynamic pretty strongly.
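A step backlog is simply a queue of validation work instead of feature work; a minimal sketch with hypothetical items:

```python
# A product backlog queues features to build; a step backlog queues the
# validation steps that will earn (or deny) those features.
# Items below are hypothetical examples.
step_backlog = [
    "Fake door test for the export feature",
    "Five user interviews on the churn survey findings",
    "Beta of the new search ranking with early adopters",
    "Holdback experiment on the onboarding wizard",
]
next_step = step_backlog.pop(0)  # the team pulls validation work like any other work
```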

So by the time this podcast comes out,

the book will be out.

What is the best place to find the book?

Hopefully on Amazon, you can search for it.

You can go to my site, itamargilad.com,

and it will be presented prominently there.

And there's also the book landing page

where you'll find everything you need to know

about the book, evidenceguided.com.

Well, with that, we reached our very exciting

lightning round, are you ready?

Yes, let's go.

What are two or three books you've recommended

most to other people?

So I'm going to cheat.

I'm going to recommend a series of books,

so two series.

One is this. Cheating is allowed.

All right, cool.

One, and those are obvious ones.

One is the series published by SVPG,

Silicon Valley Product Group.

So Inspired, Empowered.

Now, I think Transformed has just come out.

I haven't read it yet, but I'm sure it's amazing.

So this is Marty Cagan and his colleagues.

They write some tremendous books

and every product manager should read them.

The other series, a bit older.

This is the Lean series.

The Lean Startup, Lean Enterprise, Lean Analytics.

There's gold in all these books, Lean UX.

Really, really important books,

and I think they're not as appreciated

as they should be.

Running Lean, that's another example.

What is a favorite recent movie or TV show?

I'm not really a big TV or movie buff.

I just put on whatever comes up.

I'm discovering that YouTube is actually becoming

one of my sources of information and entertainment.

I'm learning a lot of Spanish recently,

so I discovered this channel called Dreaming Spanish,

which is, if you're learning Spanish, it's incredible.

So that's my recommendation.

That's a unique choice.

I love it.

Favorite interview question you like to ask candidates?

I like to ask them to design something for a niche audience.

So a navigation system for elderly people

or some sort of laptop for people

with vision impairment, et cetera.

Those are good questions to see their customer empathy,

their creativity, their ability to evaluate multiple ideas,

their ability to find flaws in their own ideas.

So there's a lot of room to dig in there

and see how this person is thinking as a product person.

What is a favorite product you recently discovered

that you love?

It's a cliche, but it's AI.

There's a company called ElevenLabs that does voices,

like the best synthetic voices you've heard,

but they can also replicate your own voice.

So you can create a voice signature.

If you're American, you can use their default free version

or cheap version to replicate your own voice.

And that could be pretty useful if you need to,

I don't know, narrate an audiobook or do some online course.

So I'm finding this service very interesting.

This is all part of my big retirement plan.

Putting all these components together

that can replace me eventually.

Got AI generating content, we'll have this voice thing.

I love it.

It's all happening.

There's an AI version of you, right?

I can ask you questions now with that.

Oh, there is, Lennybot.com.

Right.

That's all part of the plan.

Okay, what is a favorite life motto

that you repeat most to yourself or share with others?

That's a big one.

Albert Einstein, I think, said strive not to be a success,

but to be of value.

And I think that's a great motto for people,

for companies, it's something that kind of guides me.

And this whole concept of the value exchange, et cetera,

is kind of loosely connected to that.

I love that.

That's such an important point for people

putting out content online.

So many people are just like,

I just wanna be successful, get followers,

here's all these things I'm tweeting and showing.

And the thing that actually works is deliver value,

create valuable stuff that people really value and want.

And I find the signal for that

is do you find it interesting and valuable?

Like if you're like, oh, wow, that's really interesting.

Oftentimes other people are gonna find it interesting.

So I love that.

Great choice.

I'm gonna look that one up.

Two more questions.

What's the most valuable lesson you learned from your mom

or your dad?

I think both of them, in their own way,

they had relatively modest jobs,

teaching or doing other things.

But they always strived, again, to be the best they can

and to deliver the most value they can.

So it's very connected somehow.

Maybe I'm seeing the world through this lens,

but they kind of taught me to strive

to be the best I can at what I do.

Final question, you're Israeli for folks that can't tell.

What is your favorite Israeli food

that people should definitely check out

or try to get whenever they can?

Ooh, when I arrive in Israel,

I usually go for shawarma,

which is like döner kebab if you know it, just better.

So if you're in Israel, if you go visit Haifa,

which is the city where I grew up,

definitely check out the shawarma.

Awesome.

Itamar, I hope people got the gist of your book

from our conversation.

What's the best way to find it?

What's the best way to learn about you

and reach out if they wanna ask you any questions?

And then also, how can listeners be useful to you?

To find it, you can go to itamargilad.com

or to evidenceguided.com

and you will find the book and you'll find me.

The best value to me:

Try it out, just take some of these ideas,

bring them back to your office, talk with your colleagues,

say, what do you think we should do about this?

Just give it a go and reach back to me.

Tell me, I'm easy to find on my website.

Tell me what happened, I'm really interested.

Amazing.

Itamar, thank you again so much for being here.

Thank you.

Bye everyone.

Thank you so much for listening.

If you found this valuable, you can subscribe to the show

on Apple Podcasts, Spotify or your favorite podcast app.

Also, please consider giving us a rating or leaving a review

as that really helps other listeners find the podcast.

You can find all past episodes

or learn more about the show at lennyspodcast.com.

See you in the next episode.

Machine-generated transcript that may contain inaccuracies.

Keywords

Evidence-guided, Confidence meter, Metric trees, GIST model, ICE framework, North Star metric, Value Exchange Loop, Testing, Roadmapping, OKRs

People

Itamar Gilad

Companies

Google, Microsoft


Brought to you by Ezra—The leading full-body cancer screening company | Vanta—Automate compliance. Simplify security | LinkedIn Ads—Reach professionals and drive results for your business

Itamar Gilad is a product coach, author, and speaker with over two decades of experience in senior product roles at Google, Microsoft, and various startups. He is also the author of Evidence-Guided: Creating High-Impact Products in the Face of Uncertainty and publishes a popular product management newsletter. In today’s episode, we discuss:

• What it means to be “evidence-guided”

• How to think about your KPIs as metric trees

• How to prioritize ideas using the “confidence meter”

• The GIST model for roadmapping

• Common mistakes with ICE

• Advice for using evidence to challenge gut-driven founders

Find the full transcript at: https://www.lennyspodcast.com/becoming-evidence-guided-itamar-gilad-gmail-youtube-microsoft/#transcript

Where to find Itamar Gilad:

• Twitter/X: https://twitter.com/ItamarGilad

• LinkedIn: https://www.linkedin.com/in/itamargilad/

• Website: https://itamargilad.com/

Where to find Lenny:

• Newsletter: https://www.lennysnewsletter.com

• Twitter/X: https://twitter.com/lennysan

• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:

(00:00) Itamar’s background

(04:35) How his time working on Gmail shaped his philosophy of “opinion-based” development

(08:35) Lessons from developing Gmail’s tabbed inbox 

(13:40) A brief overview of Itamar’s book, Evidence-Guided

(14:30) Balancing founder creativity with an evidence-based approach

(17:32) Advice on how to push back against founders

(19:36) Signs you aren’t as evidence-guided as you may think

(21:13) Itamar’s GIST model for becoming more evidence-guided

(23:51) How to set overarching goals using his “value exchange loop”

(28:45) North star metrics vs. KPIs

(33:47) Using “ICE” to assess the value of ideas

(37:39) Itamar’s confidence meter

(44:28) Speed of delivery vs. speed of discovery

(46:14) How to apply Itamar’s frameworks based on company type and stage

(49:09) First steps in becoming more evidence-guided

(50:21) Next steps in testing

(55:41) The task layer in the GIST framework

(1:02:54) Thoughts on roadmapping

(1:04:56) How OKRs fit into the whole picture

(1:07:11) Lightning round

Referenced:

• Itamar’s presentation slides: https://itamargilad.com/wp-content/uploads/2023/09/Podcast-Slides.pdf

• What differentiates the highest-performing product teams | John Cutler (Amplitude, The Beautiful Mess): https://www.lennyspodcast.com/what-differentiates-the-highest-performing-product-teams-john-cutler-amplitude-the-beautiful-mess/

• Evidence-Guided: Creating High-Impact Products in the Face of Uncertainty: https://itamargilad.com/book-evidence-guided/

• The co-founders of Google in Forbes: https://www.forbes.com/profile/larry-page-and-sergey-brin

• Kanban: https://www.atlassian.com/agile/kanban

• Jira: https://www.atlassian.com/software/jira

• The ultimate guide to OKRs | Christina Wodtke (Stanford): https://www.lennyspodcast.com/the-ultimate-guide-to-okrs-christina-wodtke-stanford/

• Amplitude: https://amplitude.com/

• The ultimate guide to A/B testing | Ronny Kohavi (Airbnb, Microsoft, Amazon): https://www.lennyspodcast.com/the-ultimate-guide-to-ab-testing-ronny-kohavi-airbnb-microsoft-amazon/

• ICE framework: https://growthmethod.com/ice-framework/

• Sean Ellis on LinkedIn: https://www.linkedin.com/in/seanellis/

• RICE scoring model: https://www.productplan.com/glossary/rice-scoring-model/

• Idea Prioritization with ICE and the Confidence Meter: https://itamargilad.com/the-tool-that-will-help-you-choose-better-product-ideas/

• Assumptions Mapping: https://designsprintkit.withgoogle.com/methodology/phase2-define/assumptions-mapping

• What is Dog Fooding, Fish Fooding a Product?: https://matt-rickard.com/fishfooding-dogfooding-product

• SVPG books: https://www.svpg.com/books/

• The Lean series: https://theleanstartup.com/the-lean-series

• Dreaming Spanish: https://www.youtube.com/c/DreamingSpanish

• ElevenLabs: https://elevenlabs.io/

• Lennybot: https://www.lennybot.com/

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.



Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe