AI Hustle: News on Open AI, ChatGPT, Midjourney, NVIDIA, Anthropic, Open Source LLMs: Unlocking AI's Future: Neuralink, Singularity, CCP's AI Use – Matthew Iversen's Insights

Jaeden Schafer & Jamie McCauley 10/5/23 - Episode Page - 57m - PDF Transcript

Welcome to the OpenAI podcast, the podcast that opens up the world of AI in a quick and

concise manner.

Tune in daily to hear the latest news and breakthroughs in the rapidly evolving world

of artificial intelligence.

If you've been following the podcast for a while, you'll know that over the last six

months I've been working on a stealth AI startup.

Of the hundreds of projects I've covered, this is the one that I believe has the greatest

potential, so today I'm excited to announce AIBOX.

AIBOX is a no-code AI app building platform paired with the App Store for AI that lets

you monetize your AI tools.

The platform lets you build apps by linking together AI models like ChatGPT, Midjourney,

and ElevenLabs.

Eventually, we'll integrate with software like Gmail, Trello, and Salesforce so you

can use AI to automate every function in your organization.

To get notified when we launch and be one of the first to build on the platform, you

can join the waitlist at AIBOX.AI, the link is in the show notes.

We are currently raising a seed round of funding.

If you're an investor that is focused on disruptive tech, I'd love to tell you more

about the platform.

You can reach out to me at jaden at AIBOX.AI, I'll leave that email in the show notes.

Welcome to the ChatGPT podcast, I'm your host, Jaeden Schafer.

Today on the podcast, we have Matthew Iversen joining us.

Matthew is a serial entrepreneur.

He's a SaaS software guy, recently launched a Google Chrome plugin called PromptBox.

You should check it out.

Really cool.

He's also the CEO of AppRabbit that makes customizable apps for fitness coaches and some

big celebrities and whatnot.

So Matt, welcome to the show and excited to have you on.

I'm excited to be here.

It seems like we just did this podcast not too long ago, but I feel like so much has

changed because every day I wake up and AI is just in a completely new spot.

I remember last time we talked, we were talking about just chatGPT in general, and then now

we have things like Auto-GPT and it's just taking a whole new direction.

Yeah, for those of you that are new, Matt was one of the first guests on the podcast.

He's been around since the beginning, and was one of the first video podcasts we did.

Anyways, we need to get back into doing more frequent updates because he's obviously deep

in the AI space as well.

So things have evolved a lot since we got this whole thing kicked off.

And a crazy bit of information.

I was just telling Matt and for anyone listening, this might be interesting.

We just passed a thousand subscribers, or followers, on Spotify listening to the podcast.

So thank you for the thousand people that are following.

If you're listening on any platform and want to watch the video, you can find it on Spotify.

And if you're watching on Spotify, make sure to follow.

And we just cracked 185,000 listens on the podcast.

So thanks to everyone for sticking around.

So Matt, why don't you tell everyone a little bit about what you're doing at PromptBox right now, because I feel like that is a pretty awesome tool that anyone using AI right now should be checking out.

Yeah, so I'll give you an example of how I use it, then I'll go into maybe what it is.

So I was just building some marketing designs with Midjourney, like a lot of people do.

I finally found a prompt that allows me to make super-realistic-looking images.

And I found it on Twitter or something.

And the thing about these prompts is they have like a bunch of random crazy things in

it.

And so I take that prompt, I open PromptBox, which is a Chrome extension, and I just save

that text, which is the prompt inside of it.

So now anytime I want to go and build something, like for example, I was trying to have a really

nice picture of a high school.

I open up PromptBox, I click on that prompt, and it pops open a screen where I can put

in a couple of variables like high school at sunset, or maybe I want to have a chess

board looking epic in the snow.

But I want everything I am asking for to be super realistic looking.

Anyway, so I can put in what I want, it auto fills the rest of that prompt for me.

And then it creates like a super photorealistic image on Midjourney, or if you have a long

prompt you want to deploy on ChatGPT, this is just a place to save, organize, and share

all your prompts.
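The save-and-fill workflow Matt describes is essentially prompt templating. A minimal sketch of the same idea in Python; the placeholder name and the saved Midjourney-style boilerplate below are made up for illustration, not PromptBox's actual format:

```python
# A saved prompt with a variable slot, in the spirit of PromptBox.
# The template text and the {subject} placeholder are illustrative only.
saved_prompt = (
    "{subject}, photorealistic, shot on 35mm film, golden hour lighting, "
    "f/1.8, high detail --ar 16:9 --v 5"
)

def fill_prompt(template: str, **variables: str) -> str:
    """Substitute the user's variables into the saved prompt template."""
    return template.format(**variables)

# Fill in a variable and the rest of the prompt comes along for free.
print(fill_prompt(saved_prompt, subject="a high school at sunset"))
```

The filled prompt can then be pasted into Midjourney or ChatGPT; the value is that the "bunch of random crazy things" in the prompt only has to be figured out once.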

And we just crossed 7,000 users, and it's a lot of people's favorite tool.

So I'm really excited about it.

I've hardly done any marketing, and it's just kind of growing on its own.

That's super cool.

I'll throw a link to it in the show notes.

But question for you on that, and also on the AI industry in general.

So like with Midjourney, recently they just switched to paid only, so there's no free tier.

Do you think that's going to cause fewer people to use Midjourney in general?

Because obviously ChatGPT has kept its free tier forever, and if you want GPT-4, the best version, it's paid. But with Midjourney it's just like a hard cut-off; I think it's like eight bucks a month or something for their lowest tier.

Do you think that's going to cut down the amount of people using that?

Yeah, I think it will, but I think that's fine.

It has such a valuable use case for them.

I think having a free trial is something that they still do, I think.

Is that right?

I could be wrong, but I'm pretty sure it's just like a hard eight bucks a month if you

want it.

Yeah.

I knew I had to upgrade it.

It kept telling me that the servers were too busy every single time, so I guess that's

just the thing that they tell people now.

The ultimate squeeze.

Yeah.

Yeah, it's like, wow, so many people are using this.

Oh, well, I'm mad because ChatGPT's been doing that to me as well.

On GPT-4, I obviously pay 20 bucks a month for it or whatever.

I asked it a question, it responded with GPT-4, then asked it a follow-up, and it was like,

sorry, this model's not available.

We just reverted to GPT-3.5 to answer your follow-up question, and it was like, I can

tell the difference between 3.5 and 4, and I was kind of annoyed that they downgraded

me mid-conversation.

Oh, jeez.

They're like, your conversation doesn't require the top intelligent AI, you know?

Yeah, exactly.

That's probably what it was.

Like, sorry buddy.

Yeah, you're asking it like why the sky is blue and stuff, and it's just like, we're

going to put you down a notch.

Yeah, they're like, listen, we're saving that computational power for astrophysicists talking

to us right now.

You're in the dumb tier, so you don't get it.

You're asking stuff like, how to make more money with a side hustle outside the rules.

Yeah, they're like, all right, buddy.

I saw something recently in AI News that kind of blew me away, but I'd be curious to hear

your thoughts on it.

So essentially, there's a developer, and he recently found a way to give GPT-4 away for free to people.

He's like, you know, the Robin Hood.

People are like, yo, how's he doing this?

Essentially, and so he open sourced this on GitHub so anyone can use this and integrate

it in their apps, which is pretty hardcore.

First off, what's your best guess on how he would have done this?

Well, I know that you can go to someone who pays for the API and basically write an API call against their open endpoint.

If someone has GPT-4 running on their website, but they don't have any parameters

around it, I know you can kind of mooch off that.

Besides that, I don't know how he would do it.

Oh, dude, yeah, you're a genius.

I literally couldn't figure this out.

But anyways, yeah, that's what he ended up doing. Also, GPT-4's API isn't open; that's another reason that it would be harder. He found sites that had the GPT-4 API open, and he's mooching off of u.com, which is like a search engine.

So they obviously have massive bandwidth and they have a lot of people flowing through

it.

It's not like a little app that would notice and shut it off easily.

So u.com is now footing the bill, and since it's a search engine, I don't even know how they would turn that off, because it's almost like an iframe; it uses an API or whatever, but it pretty much iframes straight onto u.com's GPT-4.

So anyone can just get it for free, and since it's open source, you'd be able to put it in your project.

So it's kind of crazy, but anyways, that was hilarious.

That was crazy.

I was asking my developers to put ChatGPT in PromptBox to help generate prompts, and they're like, oh, people are just totally going to use this to feed other places; it's an open endpoint, whatever.

So you have to make sure that whatever prompt you have in the background disallows people

from asking it anything besides something that's specific to your business.

Otherwise people will just, yeah, which for a search engine is super hard.

I don't know really how you get around that because it's technically you're supposed to

be able to ask it anything you would ask chat GPT.

It's like completely open.

So that's pretty hilarious.
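The guardrail Matt's developers are describing, scoping an embedded chatbot so it only answers business-specific questions, usually comes down to a system prompt plus a cheap pre-filter before any paid API call. A rough sketch, where the topic list and prompt text are placeholder assumptions, and a real deployment would need much more than keyword matching:

```python
# Naive topic guard for an embedded chatbot: a scoping system prompt,
# plus a cheap keyword pre-filter applied before any API call is made.
# The topics and prompt wording below are illustrative assumptions.

SYSTEM_PROMPT = (
    "You are a prompt-writing assistant for PromptBox. "
    "Only answer questions about writing and organizing AI prompts. "
    "Politely refuse anything else."
)

ALLOWED_TOPICS = ("prompt", "midjourney", "chatgpt", "template")

def is_on_topic(user_message):
    """Crude pre-filter: only pass messages that mention a relevant topic."""
    text = user_message.lower()
    return any(topic in text for topic in ALLOWED_TOPICS)

def build_messages(user_message):
    """Return the message list for the API call, or None to refuse locally."""
    if not is_on_topic(user_message):
        return None  # refuse before spending any tokens
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
```

A keyword filter like this is trivial to bypass; it only cuts off casual freeloading, which is exactly why a fully open use case like a search engine is so hard to protect.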

I mean, in your case, something I've seen other people do for businesses like your own, and this is kind of technical, so maybe it's too annoying for a really mass market, is they go and just make you sign up for a ChatGPT account. They have documentation on how to do it, and you copy and paste your own API keys.

So you're technically paying for the runs.

That's one way I've seen it done, but I think you could build a little business just offering that API, making it easier to access, and then charging a little bit more than OpenAI charges you.

Oh, a hundred percent, a hundred percent. People are like, I don't know how to get an API key and then paste it into my site. If you want it white-labeled through your website or something, someone should just build a company where you just have to put your name and email in, and a credit card is easier to put in than an API key.

So just, you know, arbitrage that access.

I think that'd be like a no-brainer.

Yeah.

I think that's a no brainer.
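The bring-your-own-key pattern they're describing amounts to taking the user's pasted OpenAI key and attaching it as the Authorization header on their behalf. A sketch of just the request assembly (the endpoint and payload shape follow OpenAI's chat completions API; nothing is actually sent here):

```python
import json

OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(user_api_key, prompt, model="gpt-4"):
    """Assemble the HTTP pieces for a chat completion billed to the user's key."""
    headers = {
        "Authorization": f"Bearer {user_api_key}",  # the user's own key pays
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return OPENAI_CHAT_URL, headers, body
```

A white-label wrapper would swap the pasted key for one it stores server-side and add a markup on usage, which is the arbitrage being described.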

I've seen a couple of people doing that with video or with the image ones.

But yeah, it makes perfect sense.

Okay.

So all the image ones work that way.

I guess, yeah, all the image ones work that way.

There's a bunch of image platforms where you can get DALL·E 2, Midjourney, and pretty much all the different image-generating ones, all kind of on one platform.

So it's kind of useful.

Okay.

Question for you.

Palantir.

So Palantir, in case anyone doesn't know, is like a defense artificial intelligence company. Their latest product is an artificial intelligence platform, which pretty much lets LLMs integrate with classified intelligence stuff for the military.

Anyways, they're kind of like famous because like during the whole meme stock thing, they

were like hyped up as like one of those meme stocks that got hyped.

But anyways, recently they did a demo where they had ChatGPT, or pretty much an AI, carry out a military operation.

And it decided like how to do all the maneuvers, like where to send the troops, how to like

plan the whole thing.

Anyways, what is your opinion on using AI to plan our military maneuvers?

Yeah, I'm looking at it now.

This is really cool.

I don't know.

It's the first time I've heard of it though.

I mean, that seems smart.

Like we should be using AI to further, like, you know, if China was using AI to run military

operations on us, I would hope that we would have an AI that would like,

Superior.

Yeah.

And so maybe that's why the AI race is so important.

I don't know.

I need to think about that for a second because it seems like a benefit.

So war is killing people anyway. So it's like, what, how do you make that more effective?

How do you make that more effective?

That's morbid.

One criticism that people have about this is, you know, ChatGPT hallucinates.

So you get it to like set up a military exercise where you're sending your troops in one direction

and it's like, yeah, hide under the bridge bunker, like on the side or whatever.

But like in reality, that doesn't actually exist.

So it just kind of sends people to it.

So that's like one criticism it gets.

But in my opinion, the reason why I think this is probably not as big of a deal.

Okay, sure.

It's a big deal right now.

But obviously this is the demo.

It's not live.

It's a lot of work to make this thing actually live when people's lives are at risk.

I think it's going to be less big of a problem in the long run, because I remember when Apple Maps first came out, there were all these memes and news articles and headlines where people were just like, oh, I tried using Apple Maps and it literally made me turn into a cornfield, and someone crashed their car because it told them to turn off a cliff, just stupid stuff like that, where it's like, obviously this technology exists.

Obviously, if it tells you to turn, don't swerve off a cliff. They were saying, oh, it's super dangerous, because yeah, it was telling people to turn off cliffs and stuff like that. And it's like, obviously use your own eyes. It's an assistant. And Apple Maps got better, and now it's just like Google Maps; it's fine.

Anyways, but this kind of thing gives me flashbacks to that where it's like, yeah, like maybe

you'll hallucinate, but honestly, if this is a military grade tool, like they're going

to have this thing pretty down before they actually use this in any actual combat.

Yeah, I could see it just being used as like a, you know, a tool to help think of ideas

and then people can decide whether they want to deploy those ideas.

But that's pretty, that's pretty insane.

And like, won't like, won't they just be able to be like, you know, in the future, won't it

just be like, oh, we need this operation done.

And the AI will be like, this is the best way to do it.

Do you agree?

And you'd be like, yes.

And then wouldn't it be able to just like fly the drones and airstrike and nukes like on its own

like the whole thing can be carried out by itself.

Yeah, that's like a pretty terrifying prospect because like, if you did the research or like

if you did the tests and the AI was better, why wouldn't you use it?

But if you use it and everyone uses it, it's just like, there's got to be some like, I think,

I think when it comes to military interventions with all this stuff, we're going to,

I think we'll inevitably see something similar to the nuclear pacts we have with other countries, where we're like, no one's going to nuke each other.

It's going to be like, no one's going to unleash like AI and military on each other,

or maybe we're going to have to shut it off.

And I guess there's the other aspect, where people are like, yeah, but it's already out there. But at the same time, so was mustard gas during World War One, right? And then they turned it off. They made a deal not to have, you know, chemical warfare for World War Two or whatever, and moving forward.

So I feel like even as it starts getting rolled out, if it becomes just too insane, inevitably it's going to have to get drawn back.

Otherwise it's just like an all out US, China and Russia inevitably will all have the same

like super intelligent AIs that can decimate each other.

Yeah. Did you hear that Elon Musk and Tucker Carlson interview where Elon talks about

shutting off like having a physical manual switch to shut off AI?

Yeah, yeah, yeah. Because I think Sam Altman from OpenAI has a backpack he carries with him everywhere that has like an off switch for OpenAI, in case he wants to just kill it.

Does he?

No, I'm dead serious.

If you look at like videos or pictures of him going anywhere in public,

he has his blue backpack, he carries with him.

And inside is, like, the off switch for OpenAI; it just terminates it.

What? Dude, I know he also has like, he's a hardcore prepper too.

Like I know he has like a house in the mountains of Sweden or something too.

I wonder if he was a prepper before he did OpenAI, or if he's a prepper because he did OpenAI?

I feel like it must be because of it, because he's gone to like great lengths.

I'm pretty sure he has like a jet fueled up somewhere and like all this crazy stuff.

I don't know if it's AI or if it's like, you know, political unrest.

I have no idea what he's gearing up for.

But yeah, Elon Musk said the same thing.

He said, you know, Tucker was like, do we need to blow up all the servers?

And Elon was like, I don't think we need to blow them up.

I think we can just unplug them.

Like we should have a switch to be able to just turn the power off.

Because if we don't, then it could be too late. And everyone's always like, okay, you think AI is so dangerous; what would actually be the way that it would happen?

And the problem is, one way it gets too dangerous is it hits a point called the singularity, which is a term borrowed from black holes. It's the point where trying to predict what will happen breaks down: there become so many variables that you're blinded by your own ignorance and by the sheer number of things that could happen. Once AI gets to this point of accelerating self-improvement, like, too fast, we can't see past that point. So we have no idea what AI could do. So really the scariest thing about it is just that it becomes completely unpredictable; the trajectory of improvement becomes too fast.

And then that's when, you know, Elon really just mentioned, well, it'll infiltrate social media.

It'll build accounts and you won't know who's talking to who and everything like that.

And like it'll kind of, you know, brainwash people into thinking certain things and whatever.

But that seems dangerous.

And it seems like that'll happen.

But, you know, what do you think is really actually scary about AI, like danger, like mortal threat?

Yeah, well, I mean, a couple of things.

First off, I think when you say like the singularity and you talk about that,

like a lot of people might be like, Oh, it's like, I don't know, it seems like far off and crazy.

But to me, I don't actually think that's crazy.

And I don't think that's impossible or far off because it kind of goes back to like that

philosophical debate where like you, you're like, do I see green the same way you see green?

Or maybe I see green and you see red, right?

So like there's a whole philosophical debate about like our experiences and how we experience them.

And if they're different anyways, I know it's kind of a random example,

but it makes me think of the fact that we so many things that we think and so many ways that we think

we're like, Oh yeah, this is just normal.

Like it's normal that I wake up and eat breakfast and I sleep for eight hours.

And I do like, we have like these things that are just like normal.

But like I guarantee, like if you just took like two random people on this earth

and spun them up on an identical earth and had them start from scratch from the beginning

and waited 2000 years, maybe you gave them technology, maybe you didn't.

I don't know.

But like the way we did things, the way everything happens, like things would be,

you could have like a completely different universe over there.

Like they think different ways, like computers are programmed in completely different manners.

It's just like, you can't even fathom how different it is.

And of course there are instinctual things with biology, like reproduce, eat, sleep. And there are rhythms where the sun will force you to, I guess, have certain sleep hours.

So like there's some things, but anyways, I can't help but think two people on a completely

different planet could evolve to like an insane different way that we're not really thinking about

here. And so like in that tangent of philosophical like thought, AI could do the same thing where

I think it could get so advanced at improving itself that we don't actually understand why it improves itself or why it thinks a certain way.

And an example of this recently came out, I was reading an article.

This is kind of what brought me to this whole line of reasoning, whatever.

Is it like emergence, where they're teaching themselves things, and we don't understand why they're teaching themselves?

Yeah, yeah.

And well, the reason was because I saw an article yesterday about this guy; I did a podcast on it. He won the lottery, and he says he used ChatGPT.

So he's like, I went to ChatGPT.

I told it like past winning numbers for like a certain lottery and like a bunch of information

about it and blah, blah, blah.

And then I asked it for like what it would predict the future winning numbers were.

And first off, it gave him some caveats, like: this is just a guess, winning the lottery is luck, go find some other hobbies, you should go outside. It told him stupid stuff like that.

But then it was like, okay, so here's like my prediction for like some possible future,

like winning numbers and it gave him some and he like won.

It's a dude in Thailand.

And so like the equivalent was like 60 American dollars that he won.

Maybe it was more money in Thailand or whatever.

But like the point is that like he won.

So now number one, people are like, okay, replicate it because it could be a fluke, right?

I still, that's a pretty crazy fluke.

Replicate it for the max lotto of $2 billion.

Yeah, the super power ball, power ball, $1 billion.

But like my thought is, okay, that's funny.

Maybe it's real.

Maybe it's not whatever.

What would happen though, if you could ask ChatGPT to predict the lottery

and it could predict it and we didn't know why it could predict it.

And it's like, why can you predict it?

And it's just like, uh, based off of computation and numbers and these really complex things; I just think it's going to be this. And what if it couldn't explain why it was making predictions, or why it was doing things, in a way that we could understand, but it could get the answer right?

Anyways, it becomes like God in that way.

Yeah, exactly.

So I'm just like, at that point, is that that far off to think that could happen?

But if that happened, we'd be so screwed, because if we don't understand it, and this is a whole other debate and interesting concept: have you read the book Ender's Game or watched the movie?

Yeah, love it.

Yeah. So I mean, I just think of the concept of the Buggers, these aliens that are at war with the humans, and there are all these different things, and the humans all have to band together and go defeat them.

Anyways, it's like this whole concept of like,

they don't understand the buggers because they can't talk to them.

So they have to destroy them.

And I think like a lot of human nature has that.

Maybe it's from like tribal warfare, right?

Like we don't understand the other tribe and why it does that thing.

So like, because it might be a threat to us, we have to kill it.

Like it might not be an immediate threat, but it has the power to destroy us.

Let's go destroy it first.

You know, there's like that whole concept.

Anyways, I feel like it's like

It's like a cowboy showdown.

It's like whoever draws first will like walk away.

Yeah, exactly.

And it's like, well, we didn't want to kill each other,

but like we both can and the other guy's got a gun and I've got a gun.

So I'm just going to shoot him so he doesn't shoot me or like my tribe or my people or whatever.

Anyways, I feel like you could get a lot of that same like human

instinct with like AI where it's like, if it can do things and we don't know why it does things.

And it's like, if you don't understand your enemy, it's foreign.

It's like an animal.

You just want to like kill it.

You just don't want it to be a threat, you know, like a saber tooth tiger.

I'm not going to sit here and spend a hundred years like learning how to like

coexist with saber tooth tigers and be its friend.

It has the capability of killing me.

I'm just going to try to kill it if I can, you know.

So yeah, there was that article that came out about them discovering things with AI. It was one of the guys at OpenAI giving an interview.

And I forget really what the whole thing was about,

but there was a moment where AI started writing things in a new language.

And they didn't know what the language was.

And so they got AI to kind of break down, like, the alphabet of that language.

And it was like a more efficient language.

It was like something between Chinese symbols, like Japanese kanji, and a couple of others; it had some English intertwined, but the words had different meanings. And it was like, you know, we basically created an alien language, one that we could actually study and learn, out of nothing.

And to go back to your point about AI becoming like a God.

I think, you know, if we can't understand how it's predicting things like numbers,

which is like pretty logical, imagine, you know, like an algorithm in social media

has the ability to predict what videos I'm going to like when I scroll like on YouTube shorts.

And we can kind of understand that, right?

It's like these algorithms are pretty complex, but like based on

the types of things I've watched in the past and based on like the things I like or don't like,

and like it will feed me these things and like things that have a stronger hook get more attention.

And like that's how an algorithm is built.

But the problem is AI has the ability to understand, you know, if you feed it enough information,

it'll find patterns in places that we can't understand.

Like an algorithm for human biology exactly; that's going to be really crazy.

It understands like, oh, your hormones are kind of adjusted for this level.

And your mind works like this, like an algorithm.

And now it could really sink its teeth into becoming you, you know.

Oh yeah, that's a terrifying concept.

And yeah, I think the distinction between an algorithm and AI is that an algorithm is written by a human, like, this is the format it follows, and AI can just write its own algorithm. It decides how it works. Anyways, that's the crazy part.

But dang, yeah, I never actually thought about it tapping into biology and thinking.

And also, if you think about the world around us today, so much of what we see and hear and believe, going back to the concept that the world could be different if it was set up from scratch on a different planet, like I imagine it would evolve differently, huge civilizations. And it has, right?

And like it has, right?

We've seen it before.

We've seen lots of civilizations rise and fall, right?

Rome, Greece, all these different ones.

So like we can look back and kind of have precedent for like, in that era,

this is what it evolved to.

This is what it evolved to.

This is what killed it.

We don't know what's going to kill America yet, hopefully nothing.

But yeah, sorry, another one, another tangent for you.

I think America will die in the next, well, at least 100 years, if not like the next 10 years.

Like really?

Okay.

What role do you think AI would play in its death, expediting it or slowing it down?

I think it just, I think AI is like, it's like a superpower.

Like it just, it will speed up anything.

And if it, if you're a good country, it'll speed it up.

And if you're a bad country, it'll speed it up.

And I think right now America is so corrupt from within.

Like politically, I think it'll just speed up its demise. I think it'll go the election fraud route first, and we'll just dive straight into tyranny with AI.

That's morbid.

What about, what about China?

How do you think AI will affect China?

Well, they're already tyrannical.

So I don't know, I hear that China's AI is really good.

And I don't know what they're going to do with it.

I literally can't even really imagine.

I assume they'll attack the West cause they can,

cause they have Russia and Canada and Saudi Arabia and like all these other alliances now.

Yeah, yeah, yeah.

Oh yeah, that's very interesting.

Well, I'm a little bit more optimistic than you are on America.

And like, I guess perhaps American, American exceptionalism or whatever you want to call it,

being the best and being able to fight off everyone else combined.

And number one, my thing with China is, I think that by and large, their culture, and people will probably be upset about this opinion of mine.

That's okay.

Just whatever.

I think by and large, China's culture is designed less to help them invent new things and more to make them conform, because of the communal society they're in, therefore making them less creative and more likely to copy what other people are doing.

I mean, like if we just look at it from like a statistical perspective,

like statistically they copy products more than they invent products.

There are not a lot of products that were invented in China that went to the masses. We have TikTok, but it kind of was a copy of other social media that came out of the West.

Chinese knockoff.

Yeah.

And so I think that AI is going to have a very similar flavor of that, right?

Because obviously China doesn't want any Western technology, which I think is funny.

We have TikTok here in America and like it's controversial to ban it,

but like in China where it comes from every single American social media is banned.

And it's just like, oh yeah, of course they can't have it.

Anyways, they're going to do the same thing with AI and then they're going to make their own.

And I think that it's actually not hard to clone ChatGPT, sadly.

So the moat isn't very good.

The only barrier to entry on the moat of ChatGPT in reality is their terms of service, which is sad, because essentially, well, I don't know if you saw,

but Stanford pretty much went and used Facebook's LLaMA model. They spent about $100 in compute power fine-tuning LLaMA, and then they spent about $500 doing questions and answers with ChatGPT. They took that output, stuck it in the Facebook model, and then they got an AI model that was roughly equivalent to ChatGPT on a majority of topics for 600 bucks.
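The Stanford recipe described here (the Alpaca project) boils down to collecting instruction–response pairs from the stronger model and fine-tuning LLaMA on them. A sketch of just the data-preparation step, writing pairs into JSONL training lines; the field names are a common instruction-tuning layout, not necessarily Stanford's exact schema:

```python
import json

def to_training_jsonl(pairs):
    """Turn (instruction, response) pairs collected from the teacher model
    into JSONL lines in a typical instruction-tuning layout."""
    lines = []
    for instruction, response in pairs:
        lines.append(json.dumps({
            "instruction": instruction,
            "input": "",
            "output": response,  # the teacher's answer becomes the training label
        }))
    return "\n".join(lines)

# Hypothetical pair for illustration; a real run would collect thousands.
demo = to_training_jsonl([
    ("Explain what an API key is.", "An API key is a secret token that..."),
])
```

The resulting file is what the fine-tuning job consumes; the cheap part is exactly what's described above, since generating the pairs only costs API calls to the stronger model.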

So I don't think that like China will struggle cloning us in that way.

But I do think that they will have a harder time staying ahead of the curve without like

a lot of the ideas that come out of America.

Like OpenAI and ChatGPT came out of America, for whatever reason, you know.

I'm sure like we have a lot of international listeners.

So I mean, look, I'm from Canada, you're from Canada.

So I mean, it's not to bag on other countries, but there's a lot of really smart software

engineers that are incentivized in this country to create new things because of capitalism.

And so anyways, we came to America because of these ideas.

Like we both we both come from another country and like I'm very nationalistic.

I actually just became a citizen.

So I'm dual now American and I am super proud of it.

I'm super optimistic about America.

I love the creativity and the ingenuity and like everything about it.

I just think the only people who can destroy America is America itself.

Like it's actually a nationalistic kind of point of view.

It's like we're so good that only we can destroy us.

And I think I think we might.

But so, do you think America ends up, because like in Europe, the opinion is that America is gonna,

you know, get destroyed or kill itself in 100 years or something.

Are you have the perspective that America will do that to itself or that China, Russia,

all these alliances, the whole world, right?

China is getting everyone in their court right now.

They're getting everyone kicked off the US dollar.

Do you think that that's going to be the downfall?

I think that I don't know if it'll be like them walking over the border demanding that we surrender.

But I think we've already seen psychological prowess from China.

I think they're going to just split us apart like from within.

And I think we'll look outside for help and then they'll come and help us.

Yeah. And air quotes help us.

I think that's how it works.

They can save us.

Yeah. I think America's strong.

Okay. Well, fascinating.

We'll see. We'll have to continue diving into that on the next episode.

But I have a question for you when it comes to finance,

because we're touching on this earlier with the whole lottery thing.

But there was something else really interesting that happened earlier this month,

which is that some researchers took a number of news headlines.

I'm going to just say it wrong,

so I won't say the exact number,

but it was in the thousands, dating back to, I think, 2001.

And they ran them through ChatGPT and they had it predict.

They said based off of like these news headlines,

predict whether the stock of the company that they're referring to will go up or down the next day.

And like, this isn't just a sentiment analysis.

Like, is this an article positive or negative?

It's like, is it positive to the point that it would actually affect their stock?

Is it negative to a point where it would actually tank their stock?

And ChatGPT did a really good job.

They said if they had been investing based off of just those signals,

like shorting and buying, just based off of is the stock price going to go up or down the next day,

that it would have been really successful.

And then they compared that to, because this isn't a new concept, right?

Like we have financial analysts that do the same thing for hedge funds and big companies,

but they found that it actually outperformed the financial analysts.

And the interesting thing is with current technology algorithms that the financial analysts have,

they do news headlines.

They also do social media sentiment and they have a couple other factors.

And with like way richer data, they're still worse than ChatGPT working just with the news headlines.
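A toy version of that headline-to-signal idea might look like the sketch below. The prompt wording is invented and the model call is stubbed out with a keyword heuristic; the actual study queried ChatGPT itself and backtested the resulting long/short positions.

```python
# Toy version of the headline-to-trading-signal idea: ask a language model
# whether a headline is good or bad news for the named company, then map
# its one-word answer to a long/short/hold signal for the next day.
# ask_model is a stub; a real version would call an LLM API here.

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM call, using a crude keyword heuristic."""
    text = prompt.lower()
    if any(w in text for w in ("beats", "record", "surges")):
        return "YES"
    if any(w in text for w in ("lawsuit", "recall", "misses")):
        return "NO"
    return "UNKNOWN"

def headline_signal(company: str, headline: str) -> int:
    """+1 = buy, -1 = short, 0 = no position the next day."""
    prompt = (f"Is this headline good news for {company}? "
              f"Answer YES, NO, or UNKNOWN. Headline: {headline}")
    answer = ask_model(prompt)
    return {"YES": 1, "NO": -1}.get(answer, 0)

print(headline_signal("Acme", "Acme beats earnings expectations"))  # 1
```

The point the study made is that the interesting part lives entirely in the model call: the surrounding plumbing is this simple, which is why "just headlines into ChatGPT" could compete with richer analyst pipelines.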

So my question to you is,

are you ready to hand over your entire investment strategy to an AI like ChatGPT?

Totally.

And I'm also ready to use them as doctors too.

And to use them as like analysts for like property,

like, you know, analyzing whether a property is good to buy, like everything.

If you had all the data at your fingertips and you could access all of it without any

latency, like that just, that's the best.

I think AI is going to come for all analyzing jobs.

Okay.

This brings me to a very interesting thought.

But before I touch on that, which I know is going to be controversial,

is I also want to bring up just a really interesting thing because I saw recently that like CNBC

did an article on, I think this thing called Magnifi, but essentially it's like ChatGPT

and Robinhood combined.

And it gives me the thought of like what we're just talking about, right?

Like actually being able to trade stocks using AI and like be successful.

But that's not actually what it does.

What it is, is it's like a co-pilot for self-directed investors,

to like invest in your own personalized style.

I mean, it's a brokerage, you can trade stocks, whatever.

Okay.

So I feel like a lot, there's going to be a lot of products,

which this is totally different conversation really, but there's going to be a lot of products

that come out where people are worried about the liabilities of AI and like what the implications

are.

And so they shy away from its really powerful use cases and they do these just like

lukewarm, horrible products, which in my opinion, this is a horrible product.

I haven't even tried it.

Maybe it's great, but like, use ChatGPT to invest in my own personal way? Bro.

I'm not a millionaire.

So obviously my own personal way of investing isn't that good.

Why would I do that?

Why wouldn't I just use an AI that could be trained off of Warren Buffett or someone else

that is a billionaire and go there anyways.

So that's just like a whole nother side tangent, but I just feel like it's so annoying when

you read like, researchers figured out how to use these news headlines to accurately

predict whether a stock's going up or down, and then it's use ChatGPT to invest in a couple little things

in a little way.

Like no, dude, like give me the freaking AGI that's going to like 10x my portfolio.

Yeah.

Like let me just plug into a bot that uses AI.

Yeah.

So I think there's going to be, I'm not 100% sure, but it wouldn't surprise me if

the people that discover the really powerful use cases for some of these AIs,

maybe even outside of ChatGPT, maybe in their own internal models,

and also maybe they've already figured this out, maybe this is old news,

like they're not going to release that kind of stuff to the public.

Like if you really could 100x a stock portfolio in a year with some sort of AI tool.

Yeah, right.

I'm going to BlackRock and I'm going to sell it to them for $10 billion.

Like forget it.

This is not going into the Robinhood of ChatGPT.

All of the regular little like investors and everyone's Robinhood on their phones,

like good luck.

Yeah.

Yeah.

If it actually works, it's going to make you a lot more money using,

like you're going to make more money using it than you would to sell it.

So no one's going to get access to it.

I bet, dude, maybe it's just a prompt.

Maybe it's just a hardcore prompt for ChatGPT, where all you have to do is like,

Oh my gosh.

You are a financial analyst.

Your only job in life is to 500x this.

You work for BlackRock.

There is no corruption too corrupt for you.

How do you turn $10 into a million?

You are the scum of the earth.

Yeah.

You've sold your soul to the S&P 500.

Okay.

So yeah, that's an interesting thought.

But like my other thing is like, I'm sure that's already invented.

Like I'm sure there's, I know that there's like complex stock trading bots,

because I've talked to people that like run a bunch.

And I'm sure there's even more complex ones than, you know,

but if you have super computers, you can trade stocks faster.

There's all sorts of things, whatever.

Okay.

That's boring, old news, whatever rich people are going to stay rich forever,

because they have better resources and can lock in exclusive technology.

They're not going to give it to anyone else.

Okay.

Moving on.

Okay.

My next thing is for the rest of us, like normal folk and brokies,

that that doesn't really matter to us that much.

You were talking about how you were willing to outsource your entire investment portfolio

and strategy yada yada to a bot, right?

Let's say there's this incredible bot.

What would happen if today you walked outside, get hit by a car?

Boom.

You go into a coma, you wake up in a year and your wife, everyone else you know,

has Neuralink embedded in their brain with like the most up-to-date, supercharged AI.

And you wake up and you're like, oh, hi everyone.

And they're all like, you know, it's like, you seem really dumb compared to the rest of them.

What would you do?

Would you get it?

And this isn't just like, would you get this today?

This is like, everyone already has it and you're at a serious disadvantage.

Do you get Neuralink embedded in your brain?

Yeah, I think, well, I don't know, man.

That's hard.

Yeah.

I think you would kind of have to, unless you want to ride the short bus for the rest of your life.

What, what would you do?

Oh, I mean, it's not really about me.

I'm more grilling you on this one.

But, um, okay.

I asked about you, bro.

What kind of redirect is that?

No, dude, you, are you going to get Neuralink, and let's assume that Neuralink has access to

real-time data and the internet.

Have you seen those clever Instagram reels where it's like a guy who has Neuralink

and he's like talking to his buddy and he's like, he's like, ask me something.

He, so he asks him like a math question.

And he just like says the, the answer like right away.

And then he's like, uh, he keeps on getting more and more complex questions.

Like, you know, name the president of, of Russia or like, or like, you know,

name like the vice president of this thing or whatever of this company.

And it gets so good until the guy starts predicting what the next question will be

as his buddy's asking him.

And then at the end, he starts saying it at the same time as him.

And then like it moves beyond that.

And then he starts telling the future of like what's going to happen.

He's like, I don't know.

It's almost like we're all, we're all just like plugging into this like matrix and like

based on like the past, I can now predict the future based on these like algorithms that

like I'm constantly adapting to and like building in my mind.

I was like, geez. Like actually, one part of AI that's very fascinating is, you just said that,

you know, AI can predict things in the future based on like a headline,

you know, it can predict stock prices.

Well, that's like very much, you know, within the human psyche to be like, yeah,

that makes sense.

And like that's a great use case.

What else could it predict though?

If it takes all of the information about how civilizations rise and collapse,

could it not also, if you fed it enough information, understand where America is going

and predict the future and predict, you know, what fuel sources we'll be using in 100 years?

Yeah.

And instead of waiting 100 years, if it's going to work then, why wouldn't we bring it now?

So, you know, if we're moving to electric cars, like what comes next?

Well, this is an interesting concept because like how much you want to bet

that Xi Jinping over in China has his like Chinese hyper smart AI team for the government

build an AI, and he sits down there and types: pretend that you are a previous American citizen

from 30 years ago.

America has fallen.

The whole country got completely decimated.

China now owns it and is the best.

How did it happen?

How did it happen?

Yeah.

Yeah.

Like tell me the story.

Tell me the most likely way that this happened.

Yeah.

Why did the Americans never see it coming?

Why did no Western country with democracy understand why this would happen to them?

Yeah.

What were the weakest points and why was it so easy?

Yeah.

Why was it so, so easy?

And why am I the best-looking ruler of the world there ever was?

I mean, theoretically speaking, hypothetically speaking.

Please include a paragraph about why I look nothing like Winnie the Pooh.

Yeah.

Yeah.

Why, why is Chinese food also just the best?

Yeah.

So it turns into like the mirror on the wall just starts like praising him back.

Yeah.

Yeah.

But it's interesting, like it's so interesting when you have like opposing forces

or ideologies or sides with essentially the same technology.

Because here's the thing.

Like a lot of people hype up AI because they say we're about to reach the,

the singularity.

Yeah.

The singularity.

Yeah.

I've actually talked to some very smart people in AI that say we've hit a cap

on what the transformer model, which is what ChatGPT is built on,

can actually do, how far its intelligence can go.

And at this point, it can do like so much.

And now if we want it to get better, we're plugging in plugins.

Like here's a math plugin.

Now it's really good at math.

But it, the transformer model itself is not good at math.
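The plugin idea being described is basically tool routing: the system detects a question the model is bad at, like arithmetic, and hands it to an exact tool instead of letting the model guess. A toy sketch of the general pattern, not OpenAI's actual plugin protocol:

```python
# Toy sketch of the plugin/tool-routing pattern: route math questions to an
# exact calculator instead of letting the language model guess the answer.
# This illustrates the general idea, not OpenAI's actual plugin protocol.
import ast
import operator
import re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str):
    """Safely evaluate +, -, *, / arithmetic via the AST (no eval of code)."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(question: str) -> str:
    # If the question contains a bare arithmetic expression, use the tool.
    m = re.search(r"\d[\d\.\s\+\-\*/\(\)]*", question)
    if m and any(op in m.group() for op in "+-*/"):
        return str(calc(m.group().strip()))
    return "(model's best guess)"  # would be an LLM call in practice

print(answer("What is 12*7+1?"))  # 85
```

The transformer never gets better at math here; the wrapper just recognizes when to stop asking it.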

And I think that some people are saying we've hit a max there and that there may very well

be advancements in AI and technology yada yada in the future.

But what are your thoughts on that?

Yeah.

I've heard that too.

But I think this, I think that when you look back through civilization of humans,

I think you see thousands of years of not very much technological advancement.

And then you have the industrial revolution, which was like,

how many years before the internet?

And then you have AI 20 years after the internet.

Internet was crazy.

Now you have AI.

It's like, well, I don't know, it's like halving

the distance between each major event.

But now with AI, it would only make sense that like the next big thing comes sooner

because AI is a faster tool and it's kind of shortening the distance.

And I think that's the whole idea of like reaching the singularity.

The time, the time between each fantastic benchmark of technology is shortening each time.

So that's, that's what's scary, because then it gets to a point,

the singularity, where it basically becomes instantaneous.

It's like every new crazy thing that comes out with humankind happens like one second after another.

I think that that's, I mean, I think people got like a metaphorical taste of this

when ChatGPT came out and like every single day in the news,

there's a new crazy AI advancement.

And then you're like, right, when ChatGPT came out, it was so big,

but we just, like, our brains could not even fathom all the implications it had.

It's just like, it's crazy.

And I don't even know why it's crazy.

And I know that I don't know why it's crazy.

And every single day a new company and new innovation comes out and it's like,

oh yeah, oh yeah, that too.

Yeah, transport, it's going to completely destroy finance,

going to completely destroy education, like every sector you think about.

And I think that that's just going to, yeah,

that's going to be an expedited feeling, get used to it.

I don't think that's going to stop.

No, it's crazy.

It almost makes you not want to even jump into the field.

Like I had some uncles and friends ask me like, hey, I hear AI is big.

Like, what are the biggest opportunities you see?

Like, how can I enter the market?

It's like, the only thing I could think of that I did,

there's this site that I saw on Twitter called SiteGPT, and SiteGPT is basically,

you feed this bot all the documents about your business and then it'll make a chatbot

on your website that can answer any questions that your customers have

in like a very intelligent way, as if it's ChatGPT, but for your business.

And I said, you could take that company and you could go sell it to,

sorry, you could take that technology and you could just go sell it to companies

and they could fire all of their on staff chat bot people.

And it'll be a lot cheaper, it's instantaneous,

it's always on so it never has sick days and like it's honestly a lot more coherent

than a lot of their employees and like go sell that.
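The "feed it your docs, get a support bot" product being described is usually retrieval plus a prompt under the hood. A minimal sketch of that pattern; the word-overlap scoring here is a deliberately naive stand-in for the embedding search real products use, and none of this is SiteGPT's actual implementation:

```python
# Minimal sketch of the "chatbot over your own documents" pattern: retrieve
# the most relevant doc snippet with a naive word-overlap score, then stuff
# it into a prompt for the model. Real products use embeddings, not this.

def retrieve(docs, question):
    """Return the doc sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(docs, question):
    """Assemble the context-stuffed prompt an LLM would then answer."""
    context = retrieve(docs, question)
    return (f"Answer using only this context:\n{context}\n"
            f"Question: {question}")

docs = ["Our refund policy: refunds within 30 days.",
        "Shipping takes 5 business days."]
print(build_prompt(docs, "How long does shipping take?"))
```

Everything business-specific lives in `docs`; the model itself stays generic, which is why one product can serve every company's website.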

Besides doing something like that where it's kind of like selling something,

I think it's really hard to know how to get really into AI

because things are changing every day.

Starting a company like you and I have done,

like it's probably one of the hardest things to do within the AI circle.

And like you move quickly, you'll be left behind,

like even ChatGPT coming out with plugins completely destroyed

like half of the companies that were being built around AI.

Like it's just insane.

Yeah, 100%.

And here's the other thing that I think is that a lot of people are like,

oh, AI is going to take everyone's job is going to do all this stuff.

And then we have ChatGPT and now it's been like six months.

And it's like, okay, well, the world's not that much different.

I think that like there is a curve,

like we talk about things coming out super fast.

There's a curve to like a technological advancement,

the implications, the software being built on top of it.

And like right now, for example, with that bot that you said,

you know, it's like a customer service bot and like you can fire your like people.

You can't really fire your people, because like when I call up GoDaddy on the phone

and I'm like, hey, X, Y and Z is broken.

Like the dude's like, okay, let me go check into the thing.

And he's like looking through like my files and trying to switch stuff.

He's like, did it work now?

Did it work now?

Did your DNS work now?

Right.

Okay, so like the chat bot can't do that.

It can answer questions, but it can't do that.

Yeah, but what if you just plug it in?

What if you trust it enough?

You test it, you trust it enough that you plug it into your back end.

There's no human.

Someone's chatting and it goes and makes changes and does things.

Like to whatever like permission level, I'm only an admin.

I have to request access from my like super admin bot above me.

And the super admin bot is trained to be like,

is someone scamming?

Is there something bad?

Like it has a different set of parameters,

but like people are doing those jobs.

And like the people I've talked to on the phone

are not incredibly technologically savvy a lot of times.

So like, yeah.

And so I feel like once it gets directly plugged into people's back ends,

which it's only a matter of time, those jobs do go away.

And I think there's a lot of industries like that.

People completely forget. But like, I think that with the shifts and the disruption,

we're seeing the technology, but we're not seeing the implications yet.

We're not feeling the pain of whatever those disruptions are about to cause,

because they're going to wreak massive havoc.

Like if it can do it better, it will.

It's just a matter of time for people to get a system set up.

Yeah. And it'll be like a one button click.

It'll be a go on button.

And suddenly, suddenly this AI like crawls your website

and it uses the HTML to access the back end.

And like it understands context and what the purpose of your company is.

And it'll just make changes for you on the fly.

My other company, AppRabbit, it's an app-building software.

And we're like, we need to incorporate AI,

because people are already kind of building these rudimentary apps using AI.

Like you see that a lot, like build me a website that does this.

And it'll set you up something pretty simple,

but we actually have an infrastructure where you can add folders and widgets and tabs and like

change text and upload images.

But soon we're going to just need like a prompt box that pops up.

And you can just type what kind of app you want.

And now we need to actually have an AI in the back end

that's trained to go in and like, you know, if you want like a workout app,

it needs to understand, oh, I need to add like seven days.

And then like within each day have a different workout plan.

And we need to teach it what it looks like,

but very possible for AI to just give the structure context.

So truly, like we might end up building like the most amazing app builder

because we're incorporating AI into an app builder that already exists.

And we'll just train it on how to build.

So I think, I think that's really, that's probably like that cap

or that plateau that you're talking about though,

because people like me start building with the context of like our business

and how can we add AI to it?

I think the next phase of AI are people that are willing to be more patient,

working for the next five to 10 years,

building the things where you can just chat,

where you can just type or even say or even think of something

and AI just gets the whole thing done for you.

No button pushes, no clicks,

maybe just asks for your permission to do things.

You know, whether you're asking it to buy something or set up something,

like it just becomes way more automatic.

And I think the next iteration of AI will just be like no hardly any input.

I think Elon Musk says this, like any input is like error.

Like it's a problem.

User input is, yeah, is a bug or something for like all Teslas

for their self-driving AI.

If you like grab a steering wheel and turn,

yeah, he's saying that like user input is error.

That's how they can train their AI to know like to get better and stuff.

That is a really interesting, oh man, that is a horrible philosophy though.

If you really think about it for AI, any user input is a bug.

Anytime the humans have to think or do anything

and we didn't already do it for them, it's a bug.

Like if you train an AI with that philosophy, dude,

it could be insane because it doesn't want your input.

It doesn't want you to give it any direction or guidance.

It's optimizing to not be guided.

And like honestly, people want that too,

which is bad. Like, we want things that are bad for us all the time.

So like we want to scroll through TikTok

when like our attention spans are getting shorter and shorter

and that's bad for us, but we want more of it because of dopamine.

I think that our, not our attention span,

but like our threshold of waiting for something to be complete

will be shortened to, what would you call that?

Our time to satisfaction for like completing a task.

Like we won't be able to do deep work anymore

because now if I want to write a marketing email,

I have to wait five seconds for ChatGPT to do that for me

instead of spending the time.

And so I think, you know, if someone wants to build themselves an app or a website,

the time between the thought of them wanting to have it

and the time of it happening is getting shorter.

It's kind of like Amazon, right?

It's why they're so successful.

They can deliver the result the fastest.

And so I think in the future,

the people who win as far as AI and businesses,

it's just getting you what you need the fastest,

like without, without any input.

And yeah, that will be bad for us.

Yeah, okay.

So along that line, like I can envision a world.

So tell me what you think the implications are,

but like along that vein, imagine a world where there's an AI optimized,

where user input is error.

So you wake up in the morning and there's like,

you buy one of those Tesla humanoid robots, right?

Because they're, they're making them.

They've already done demos.

They're not perfect, whatever.

It's maybe that's the next, maybe that is the next thing.

Actually, that is the next thing.

That's the next thing.

We have the AI.

Now if we have the vessel to put in the humanoid robot

that can do everything a human can, game over.

It's out of the box.

Yeah, because now plumbers, electricians, everyone else,

every job is now automated and gone.

Like there's, so anyways, another thought,

but like let's say you wake up in the morning,

your humanoid robot made you like breakfast.

It took your kids to school and it's like,

there's like a check on the counter.

It's like, hey, I like, I went to like the grocery store

or like I went like all night last night

at the farmer's field and plowed and like here's my like wages.

Okay, obviously that's a ridiculous example.

More likely it was like, I went on the internet last night

and found like a few like arbitrage opportunities

for X, Y and Z opportunities that you trained me to do.

Here's like $1,000 from like last night.

Like what implication does that have on humanity?

If, okay, this combined with, and this is not far off people.

Like actually I was listening to the All In podcast recently

and they're talking about like nuclear fusion

and the fact that theoretically we could have almost unlimited

free energy in the future.

So unlimited free energy, AI, humanoid robot,

like what is their left, what is the human experience at that point in time?

The golden age, this is the golden age.

This is like one of the things they predict that AI,

like that singularity point of advancement.

It gets so good that it actually reaches it.

This is like the optimistic output, their guess prediction

that humans enter this golden age

where we don't have to do anything except exactly what we want to do.

Food is taken care of, the air is clean,

we've solved all the problems, we've solved whatever,

global warming, whatever you believe in,

we've solved hunger, you know, whatever it is.

And like we don't need government anymore

because like AI runs things according to like our wants and needs

and like robots and we live in Star Wars basically.

Yeah, okay, so in reality I think

because humans are directing and making it

and we all have, I would say, intrinsic human values

or human instincts that are often selfish, prideful,

arrogant, I don't know, whatever.

Like we have these human instincts.

Someone is going to find a way to like try to benefit

more than everyone else, right?

Like theoretically we could be living

in some version of a utopia right now

but obviously that's not happened.

We have war and some people have, there's all sorts of things.

There's all sorts of corruption that happens to stop the utopia

which maybe it wouldn't even be a utopia

but that you've read about in classical philosophers' books.

But anyways, in the future I don't believe

that we ever achieve obviously the fairytale land

because there's going to be like all of those biased inputs

and people just essentially for greed or whatever

trying to become number one and benefit the most off of it.

So it's going to be interesting

and I think that it all kind of comes down to like

do we need some sort of regulatory body?

Do we need some sort of regulation in AI?

What's your opinion on that?

Yeah, I saw Elon talking about this

and it's a little bit like out of my zone

so I kind of have to rely on the experts a little bit

to help craft my opinion.

It makes sense like if we believe in regulation

for cars or homes and electricians

and like all these things that in our life

it would make sense that the most powerful thing

in the world that we've created has some form of regulation

but at the same time I'd rather the government

not like step into anything.

So it's hard to be anti-government pro-regulation

if the government is the one doing it.

I don't know, what do you think?

Yeah, I'm somewhere in that same camp as well.

It's like I obviously hate it when

big bureaucratic, inefficient organizations

like the government do anything.

I've actually heard people with pretty strong opinions

on either side of this.

Some people saying like we have to start regulation now.

It's like the Federal Aviation Administration.

Once they did it, there were fewer plane crashes,

but then it's also like if they start it now

let's say they say okay we're making a pause on all AI

because it's dangerous for the next six months

and then China beats us.

I could see them right now, think about it.

Who is regulation?

What does regulation mean?

Like you know who regulation is.

Look at everyone in the Senate in the house right now.

Those are the people, that's regulation.

I don't know if I trust any of them

to do anything particularly well, period.

I don't even trust the people at OpenAI right now.

And AI already has inherent bias like censorship.

So like AI is going to be used for what they want it to be used for.

I know that Elon plans on making another model

which is like open and he says try to find

like the closest form of truth.

We'll see.

Yeah, optimize for truth.

Yeah, we'll see.

I mean he's done some good things in the past

so it's hard to discount him on it.

It would be awesome if he was able to pull something awesome out.

Yeah, anyways, it has been an hour so we better wrap up here.

Thanks so much for being on the pod, Matthew.

Anyone that is interested, you should definitely go

and check out Promptbox.

If you're building apps, check out AppRabbit,

recently rebranded website of Matt's.

But anyways, thanks so much for coming on the pod

and we will have to have you on again next week.

Yeah, this is awesome.

Thanks.

If you are looking for an innovative

and creative community of people using ChatGPT,

you need to join our ChatGPT creators community.

I'll drop a link in the description to this podcast.

We'd love to see you there where we share tips

and tricks of what is working in ChatGPT.

It's a lot easier than a podcast as you can see screenshots,

you can share and comment on things that are currently working.

So if this sounds interesting to you,

check out the link in the comment.

We'd love to have you in the community.

Thanks for joining me on the OpenAI podcast.

It would mean the world to me if you would rate this podcast

wherever you listen to your podcasts

and I'll see you tomorrow.

Machine-generated transcript that may contain inaccuracies.

Join us in a captivating discussion as we dive deep into the future of AI with the insightful Matthew Iversen. Discover the potential of Neuralink, explore the concept of the AI Singularity, and unravel the intriguing landscape of AI use by the CCP. Get ready for an engaging journey into the world of artificial intelligence.


Get on the AI Box Waitlist: https://AIBox.ai/
Join our ChatGPT Community: ⁠https://www.facebook.com/groups/739308654562189/⁠
Follow me on Twitter: ⁠https://twitter.com/jaeden_ai⁠