FYI - For Your Innovation: OpenAI Dall-E 3, and Tesla’s Optimus Robot | The Brainstorm EP 16
ARK Invest 9/27/23 - Episode Page - 27m - PDF Transcript
Welcome to The Brainstorm, a podcast and video series from ARK Invest.
Tune in every week as we react to the latest in innovation and reflect on how short-term
news impacts our long-term views.
To learn more, visit ark-invest.com.
ARK Investment Management LLC is an SEC-registered investment advisor.
ARK and Public are unaffiliated entities and do not have a relationship with respect to
either firm marketing or selling the products or services of the other, and therefore ARK
disclaims responsibility for any loss that may be incurred by Public's clients or customers.
The information provided in this show is for informational purposes only and should not
be used as the basis for any investment decision and is subject to change without notice.
It does not constitute either explicitly or implicitly any provision of services or products
by ARK, and investors should determine for themselves whether a particular investment
management service is suitable for their investment needs.
All statements made regarding companies or securities are strictly beliefs and points
of view held by ARK and/or show guests and are not endorsements by ARK of any company
or security or recommendations by ARK to buy, sell, or hold any security.
Historical results are not indications of future results.
Certain of the statements contained in this show may be statements of future expectations
and other forward-looking statements that are based on ARK's current views and assumptions
and involve known and unknown risks and uncertainties that could cause actual results, performance
or events to differ materially from those expressed or implied in such statements.
ARK assumes no obligation to update any forward-looking information.
ARK and its clients, as well as its related persons, may, but do not necessarily, have
financial interests in securities or issuers that are discussed.
Certain information was obtained from sources that ARK believes to be reliable; however,
ARK does not guarantee the accuracy or completeness of any information obtained from any third
party.
Welcome to Brainstorm Episode 16.
Today we're going to be talking about Dall-E 3, a new study on how ChatGPT is making the
average worker even better.
And then we're going to dive into Tesla's Optimus robot and the video they just released.
Today we're joined by Andrew Kim, Nicholas Grous, and myself, Sam Korus.
Andrew, let's dive right into it.
Sure.
OpenAI made waves once again last week.
They announced their plans to release Dall-E 3, which is the third iteration of their
text-to-image model.
And it's really interesting this time around because they're going to embed Dall-E 3 directly
into ChatGPT, enabling prompting of that text-to-image model in a conversational format.
This could be really interesting from a general productivity perspective.
If you think about how people interact with Stable Diffusion or Midjourney or even Dall-E 2,
it was very much a trial-and-error process in which people iterate on zero-shot
prompts: you put in a prompt, you see the image output, it works for you or it
doesn't, and you try again and again.
And through this experience, I think a lot of people learned how to become
better prompt engineers, especially in the case of Midjourney, learning to use certain
words, certain sentence structures, etc.
There's a case with Dall-E 3 and ChatGPT that that kind of prompt engineering is no longer
needed, because if you can prompt this model in a natural, conversational
format, then the prompt engineering you needed before is, I guess, less important.
So it could be really exciting just from a lowering the barriers to entry point of view.
And it's also really interesting from a new consumer entertainment perspective, right?
In that OpenAI demoed an example in which the participant prompted the AI model for a
cartoon of a hedgehog, right?
And then as they did that, they were asking, OK, Larry? Larry the hedgehog?
I think it was.
I don't really remember.
But let's go with Larry.
And, you know, they said, you know, what does Larry like to do for fun, right?
And the ChatGPT interface would basically come up with a text story, right,
that accompanies the images it outputted before, right?
So it kind of creates this text plus image conversation that could evolve into really
interesting entertainment formats in the future.
But like we have yet to see, you know, what it looks like, right?
Yeah, maybe can you contextualize this?
Just because, you know, ChatGPT comes out.
We're all using it. Super exciting.
Usage has clearly decreased after the "wow, this is cool" phase.
OK, everyone's talking about usage decreasing, but it was the summer.
And let's be honest, it's only late September.
Exam season hasn't really picked up yet.
So we'll see if traffic kind of comes back, right?
Because the biggest use case for it is for students, right?
A lot of the times.
And I think we're, I mean, I think we're probably going to see
a big pop back to normalcy once midterm season picks up.
But I would push back.
I think the biggest use case is knowledge workers everywhere.
Like, yeah, I think broadly speaking, you know,
students are definitely a strong user base, but you do have knowledge workers.
When you're talking about Dall-E, you're talking about, you know,
graphic design work that can easily be enhanced by this.
So I think, yeah, I probably would agree with Sam and push back and say,
you know, the usage has come down, and, you know, in my own use,
I've noticed some degradation to the system.
It hasn't been as useful as it once was for me.
And I don't know if either of you have felt that way.
But I feel like I've had to prompt it more times over when I've used it recently.
Prior to the most recent use, it felt like I would give it one prompt
and it would work really well.
And now, in these most recent uses,
it's been quite hard to get exactly what I need out of the system.
I think there are probably also implicit biases and expectations building,
right? In that the very first time you prompted an AI model,
you were probably expecting very little out of it, right?
And it was so amazing what came out of it, right?
And now you kind of have a baseline understanding of what the optimal output
might be, right? And it kind of depends on your ability to prompt it
and like using certain keywords to get to the output you want.
But I guess I will backtrack on the student point, in that I didn't mean to say
that that's the largest
opportunity at hand, but I would say that they are the first adopters in this scenario.
So I think it still does make sense that, you know, in the summer,
traffic wanes, assuming that they are still the early adopters.
And with knowledge workers, you know, you have the skepticism, right?
In terms of iterating.
Well, you don't need to say knowledge worker like that.
Well, like us, I'm sorry, like us, I'm included, I think, I hope.
And yeah, we have yet to see kind of full mass adoption
of these models by the more productive and
bigger opportunity, right?
Yes. So Andrew, maybe you can dive into that BCG report there,
because I think that gets into, you know, here we are poo-pooing our results,
but at the same time it's clearly hugely beneficial.
And it's a super interesting study that came out.
Sure. So if people want to learn more, Frank Downing did talk about it
in our Sunday newsletter, but a recent HBS and BCG study
found that GPT-4 boosted management consultants' performance
and speed by 40% and 25% respectively.
So assuming quality and speed equally contributed to productivity,
then GPT-4 increased the consultants' productivity by around 75%, 1.75x, right?
That's 1.4 times 1.25,
over the control group, which did not have access to GPT-4.
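The back-of-the-envelope math here can be checked in a couple of lines of Python. Note that multiplying the two gains together is the simplifying assumption stated above, not a figure reported directly by the study:

```python
# Reported gains from the HBS/BCG study, as cited in the episode.
quality_boost = 1.40   # consultants' output quality: +40% vs. control
speed_boost = 1.25     # consultants' speed: +25% vs. control

# Simplifying assumption: quality and speed contribute multiplicatively,
# so the combined productivity multiplier is their product.
productivity_multiplier = quality_boost * speed_boost
print(productivity_multiplier)  # 1.75, i.e. roughly a 75% gain over control
```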
So this is great, right?
In that, in an idealistic world, AI enhances everyone's productivity
in the workforce, raising kind of the lowest common denominator
and making the labor force that much more effective and valuable.
I think it's not a controversial opinion to say that the labor market
and the matching dynamics there are highly inefficient.
So, not getting rid of the frictions exactly, but just overall
lowering the probability of hiring lower-quality laborers,
let's say, is a very beneficial thing.
Hopefully that entails some sort of like proportional
increase in compensation or work-life balance or other benefits to these workers.
But maybe I'm going on too much of a tangent here,
but I think it's pretty unclear to me what the median economics per capita
will look like and if we should be operating under this humanistic lens
in which AI for the most part like augments human labor rather than outright
replacing it, right? Because, just going back to the study,
I feel like these types of studies that emphasize human-in-the-loop superiority
already insinuate real feelings of insecurity, right?
Across the knowledge worker base.
And I will just say, consider the source, right?
In that in the BCG study, they said, I think,
participants using GPT-4 outperformed the control group by 40%
for creative product innovation, but for business problem solving,
it resulted in performance that was 23% lower than the control group.
So don't fire your consultant just because you have ChatGPT.
Exactly. Yeah.
Well, I have two questions for both of you and more high level.
Just given this study, thinking about, you know, what happens when you do have
this type of productivity boost?
One, do you think this leads to just more productive work?
Or do you think these freed-up hours go elsewhere?
Andrew, we've talked about this at length: this freed-up time
then actually just spills over into maybe the entertainment category.
And because you can get, you know, more work done in a shorter period of time,
that doesn't necessarily lead to just continuing to do more work.
It actually could lead to, you know, increasing the time you spend
doing leisure activities.
But I'm curious what your thoughts are on this, both of you,
because I think we've had this conversation in the past.
Well, I guess looking at history, right, with the Industrial Revolution,
as people moved off farms, where people were working essentially
twenty-four seven, into kind of more organized labor hours.
And then we had, you know, labor regulation and protections
that developed over the decades.
We've seen a decline in average hours worked.
And just using, I guess, the past century, I would like to think
that this augmentation will lead to a further decrease in labor hours
that can then be recycled into leisure, let's say, and just better quality of life.
But it is way too early to say.
So I'm really not sure.
Well, I think one of the big takeaways is, one, we're still in the very early days
of AI productivity tools and there are already, you know, crazy gains.
The fact that it's 1.75x and we're in, you know,
generation one, essentially, or maybe we'll say generation two,
of the ChatGPT rollout here is pretty remarkable.
What I do think is, you know, right now you've got the 10x
engineer, and you've got a curve kind of like this as far as, you know,
talent increasing exponentially for your really top performers.
I think what this is showing is you're kind of raising up this long tail
of average talent, which is a big deal.
And I think it's also going to make the curve steeper.
So, you know, it's not just the 10x engineer or the 100x engineer;
you'll have the 1000x engineer as well, supported by a
better average worker, which I think is good.
Yeah, no, finish.
No, I just have another question.
Oh, yeah.
So it's like, you know, what does this lead to as far as the labor market?
You know, you could get workers being paid more if they're capturing the value.
You could have the end product be less expensive for the end consumer.
If they are more productive, you could have companies who are capturing the value
and have, you know, faster sales or higher margins.
So there's a lot of ways that this can play out and it'll depend
not just on the technology, but also, you know, the way society chooses to deal with
this, you know, technology can have many different outcomes.
Yeah.
And just on the monetization front, I think an interesting thought exercise is,
you know, if these productivity boosts are proven out in the real world:
if you look at, you know, the consulting industry, that's billed by the hour.
Does that change how the consulting industry monetizes?
Do they switch to a different monetization mechanism?
Because if you're increasing your productivity, then your billable hours
in theory would go down for a specific project.
Do you do value-based pricing?
Right. Right.
And it's probably not great for, you know, a company billing on the hour.
So something to watch for and probably too early to tell.
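As a rough illustration of why hourly billing sits uneasily with these gains, here is a toy calculation. The hours and rate are made-up assumptions; only the 1.75x multiplier comes from the study discussed above:

```python
# Hypothetical consulting project billed by the hour.
hours_before = 100          # hours the project used to take (assumed)
hourly_rate = 300           # dollars per hour (assumed)
productivity_gain = 1.75    # the ~1.75x figure discussed above

# Same project, same rate, fewer billable hours after the boost.
hours_after = hours_before / productivity_gain
revenue_before = hours_before * hourly_rate
revenue_after = hours_after * hourly_rate

print(revenue_before)         # 30000
print(round(revenue_after))   # 17143: hourly billings shrink by roughly 43%
```

Under a value- or outcome-based model, the fee would track the deliverable rather than the hours, so the same productivity gain would not mechanically shrink the bill.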
I feel like this probably just points to business models as a whole, on both
consumer and enterprise use cases, moving towards goal- or ROI-based
targets, right, rather than SaaS or consumption-based pricing.
Because, and I'm going on a huge tangent here now, but with Web 2, obviously that
allowed for huge platforms to be created, right?
With the cost of computing coming down, with public clouds becoming a thing.
But now with generative AI, the cost of ingestion and output
is dramatically decreasing, which I think suggests
the value of a platform, or the value of aggregation, is
not that meaningful anymore, maybe. Because it's now going to be really
interesting for generative AI to kind of insert itself in the middle and then
basically dictate what consumers and also knowledge workers see
from these aggregation channels, right?
And I think that kind of leads to your example of both consumption,
in terms of compute and also labor hours, not being viable
input variables for monetization anymore.
Though I would assume it will be more lagged for consumers
because we still like TV and Netflix as we know them today.
All right, this has been a great rundown and I think more of a true brainstorm
than in the past, so that's great.
Andrew, I'm going to invite you to stay on if you have time or you want to,
because I'm going to audible. You know, all this talk, it is a brainstorm.
I was like, we've got to talk about Tesla Optimus.
Did you see that video that they released?
It's incredible.
I mean, it's blowing my mind, like where it's separating the different
giant Lego blocks, the blue and the green.
Yeah, so we'll talk through the video quickly.
We can throw it up on video as well.
Yeah, the robot, you know, they unveiled it about a year and a half ago,
I think was the first viewing of it, and they're just making pretty rapid progress.
It's separating out blocks.
They mess with it.
It can still move the blocks in the correct place.
It does some yoga.
And they're saying it's totally end to end visual learning,
which is pretty remarkable in and of itself.
But, you know, we'll get to the robotic side of it later, I think.
To me, the most interesting side of this ties into what you were talking
about, Andrew, and what it means for the labor market, or even how you size this
opportunity. And I was trying to think of, you know, the right framework for it.
Because for most things, it's pretty clear how to say what the addressable market is.
So, for example, electric vehicles:
you know how many people there are, you know how many petrol
and gas vehicles are sold.
So you kind of have a good estimate of what that market size can be.
Then you look at autonomous robots, or autonomous vehicles.
OK, this is a little bit more complicated to size, right?
You look at vehicle miles traveled.
Tasha Keeney has done great work in saying, well, actually, if you lower the price,
you induce demand.
And so you're going to get more vehicle miles traveled, right?
So it's a little bit more complicated.
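The induced-demand point can be sketched with a constant-elasticity demand curve. The baseline numbers and the elasticity below are purely illustrative assumptions, not ARK's or Tasha Keeney's published estimates:

```python
# Toy constant-elasticity demand curve for vehicle miles traveled (VMT).
def vmt_demand(price_per_mile, base_price=0.70, base_vmt=3.0e12,
               elasticity=-0.4):
    """Miles demanded at a given price, scaled from an assumed baseline."""
    return base_vmt * (price_per_mile / base_price) ** elasticity

baseline = vmt_demand(0.70)   # demand at the assumed current cost per mile
cheaper = vmt_demand(0.25)    # demand if autonomy cuts the cost per mile

# A lower price induces extra miles, so the addressable market is
# larger than today's observed VMT.
print(cheaper / baseline)
```

Under these assumed numbers the ratio comes out to roughly 1.5x the baseline miles; the point is the direction of the effect, not the specific figure.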
But then you look at something like an autonomous humanoid robot.
And it's unclear to me what the actual total addressable market is.
If it can continue to gain human like skills, right?
What is the addressable market for humans?
Yes, and I saw an interesting tweet on how to maybe think about how the market develops,
assuming you don't just get to AGI right off the bat.
assuming you don't just get to AGI right off the bat.
But someone and I wish I could give them credit,
but I don't remember who tweeted this out.
So this is not my own thoughts, but I thought it was, you know,
really interesting way to look at how you can scale up or develop these humanoid
robots. And if you think about, you know, you're purchasing this robot,
it ends up in your home, it can't come pre-installed with every skill you maybe
want it to have, unless you do get to AGI, I assume.
But if that is the case, you're not at AGI, you then have a skill market or,
you know, an app store for skills.
So you download and pay for skills.
And then you have a whole developer ecosystem training these humanoid robots
in warehouses to do specific skills that maybe, you know, a consumer would want.
So I think the tweet, the example was, you know, it's not going to come
knowing how to create sushi for you, but if you want a sushi chef at home,
maybe you download that skill and you pay a developer X amount of dollars for it.
And I thought that was an interesting way to think about how you could
potentially size the market and think of it as maybe it's an app store type of
economy that develops on top of a humanoid robot.
And it's a direct monetization method on that, you know, on those apps or skills,
which I thought, I don't know if it gets to answering your question of how exactly
you'd size this market, but I think it is an interesting way to think about how it could develop.
And I do think it's much easier to
size the market in the near and medium term, because, you know, what I'm talking about
is more, I guess, the AGI side of things, right?
Because even if it can just do a simple task in a factory, then that's great.
It's probably pretty easy to show ROI, particularly if labor costs keep going up.
Just having a reliable robot that can achieve a task 24/7 without
calling out sick is high value in and of itself.
But yeah, we don't need to go too much in here.
I did want to throw this in as a topic.
Andrew, thank you for sparking that.
Any other thoughts here?
Well, I guess I would just say: with what money are people buying the apps
or the robots if all these jobs are taken away?
As in, are the owners of the robot hardware and the software
essentially the new governments?
Andrew, are you arguing?
Are you arguing that people aren't going to have jobs because of automation?
I don't know.
Have you not read Sam's work, Andrew?
I think that is a highly unlikely outcome,
given human ingenuity and the history of the labor markets.
I think if you're going to make that claim, there's a lot of
argument and evidence the other way, and I think the burden of proof is on showing
that there won't be new jobs for humans.
OK, I just would like to better understand what those jobs will be.
Right.
It's just, like, even for video game market sizing with Nick,
I'm just trying to understand: if video game producers are 2x more productive,
are they going to create two games as opposed to one game per year,
let's say? Are they going to create a game that's double the quality
that's also double the price, or double the quality at actually the same price?
Right. And then that deflation just doesn't show up.
Right. And, you know, who is playing the game?
You know, all these different variables.
And I think it's just very hard to understand when we've...
It's hard.
Like, have we seen something that's as
clearly a productivity booster as AI before?
Yeah, I think in this short of a time frame, right?
Yeah, it's interesting.
Oh, go ahead, Nick.
Yeah, it's just interesting to think about how, as you create extremely cheap
supply (when we're talking about Dall-E, ChatGPT, even these humanoid robots),
that cost decline just continues to crater on in areas of the economy
that have largely not seen this type of cost decline, areas that, you know,
we're probably going to be forecasting out.
Where do you unlock that demand, to Andrew's point?
If you are able to create a thousand images a day because of Dall-E,
how do you unlock demand for that new supply that hits the market?
Or does everything just become so cheap and abundant that, you know,
the market can't really support that type of supply?
That's, I think, a really interesting question,
because you are getting into areas when you're talking about AI where it's just
mass abundance on a scale we've really never seen in areas of the economy
that have largely been, you know,
locked because it's, you know, powered by humans and humans are limited
in what they can produce a day.
Well, maybe I'll wrap this up.
I think a theme in the history of automation is you take
economic activity that's not captured by traditional metrics, I would say.
Really, it's labor that's not captured by the economy,
and you bring it into the, quote, real paid economy.
So you can think about driving.
Millions of people are driving every day,
half an hour to an hour, and they're going to work and no one's paying them.
That's really not part of the economy.
That's a lot of dead hours.
And you say, OK, autonomous driving comes in.
Why are people going to switch from driving themselves to
autonomous? Because it's actually going to be less expensive for them to pay
a service to do this.
So then all of a sudden you have hundreds of millions of hours that were not
captured in any way in the economy.
I mean, some way, right?
You're paying for fuel, gas, whatever.
But now all of a sudden it's becoming part of the economy and it's being
captured by the autonomous vehicles.
Same thing with food preparation.
Andrew, who pays you to go grocery shopping and to prepare your meal and to do that?
But if all of a sudden it's less expensive for you to
buy the service from a robot or an automated kitchen, and it delivers by autonomous vehicle,
right?
So it's all of these interesting ways that transform what the economy looks
like: you take unpaid and uncaptured services, and all of a sudden they become
paid.
Yeah, we'll dive into that more.
I have some thoughts there, but we'll wrap at this point because I think there are
some instances where you could say, again, maybe all of the services that would be
created, still there might not be enough demand to match the supply that is being
put out there if you introduce AI into the system.
But that's up for debate.
And that's why we do this show every week so we can come on next week and we need
people to comment.
We need people to comment.
Let us know what you think.
I'm sure a lot of people have thoughts on Tesla's Optimus robot and on AI stealing jobs.
People always have strong thoughts there and we'll see everyone next week.
Yeah. And wait, wait, I want to try one thing.
Andrew, thank you for coming on.
Our audience greatly appreciated it.
Thank you very much.
All right.
All right, that's our show.
Machine-generated transcript that may contain inaccuracies.
If you know ARK, then you probably know about our long-term research projections, like estimating where we will be 5-10 years from now! But just because we are long-term investors, doesn’t mean we don’t have strong views and opinions on breaking news. In fact, we discuss and debate this every day. So now we’re sharing some of these internal discussions with you in our new video series, “The Brainstorm”, a co-production from ARK and Public.com. Tune in every week as we react to the latest in innovation. Here and there we’ll be joined by special guests, but ultimately this is our chance to join the conversation and share ARK’s quick takes on what’s going on in tech today.
This week, Associate Portfolio Manager Nick Grous and Autonomous Technology and Robotics Director of Research Sam Korus are joined by Research Associate Andrew Kim. Together they discuss OpenAI’s Dall-E 3 release and the impressive developments of Tesla’s Optimus Robot.
Key Points From This Episode:
Intro
OpenAI Dall-E 3
Tesla Optimus
For more updates on Public.com:
Website: https://public.com/
YouTube: @publicinvest
Twitter: https://twitter.com/public