AI Hustle: News on OpenAI, ChatGPT, Midjourney, NVIDIA, Anthropic, Open Source LLMs: AI's Legal Impact: DLA Piper's Gareth Stokes Speaks Out

Jaeden Schafer & Jamie McCauley 10/12/23 - Episode Page - 34m - PDF Transcript

Welcome to the OpenAI podcast, the podcast that opens up the world of AI in a quick and

concise manner.

Tune in daily to hear the latest news and breakthroughs in the rapidly evolving world

of artificial intelligence.

If you've been following the podcast for a while, you'll know that over the last six

months I've been working on a stealth AI startup.

Of the hundreds of projects I've covered, this is the one that I believe has the greatest

potential.

So today I'm excited to announce AIBOX.

AIBOX is a no-code AI app building platform paired with the App Store for AI that lets

you monetize your AI tools.

The platform lets you build apps by linking together AI models like ChatGPT, Midjourney

and ElevenLabs, and eventually it will integrate with software like Gmail, Trello and Salesforce

so you can use AI to automate every function in your organization.

To get notified when we launch and be one of the first to build on the platform, you

can join the wait list at AIBOX.AI, the link is in the show notes.

We are currently raising a seed round of funding.

If you're an investor that is focused on disruptive tech, I'd love to tell you more

about the platform.

You can reach out to me at jaeden@AIBOX.AI; I'll leave that email in the show notes.

Welcome to the AI Chat podcast, I'm your host, Jaeden Schafer.

Today on the podcast we have the pleasure of being joined by Gareth Stokes.

Gareth leads DLA Piper's global AI practice group and specializes in helping clients

navigate the rapidly evolving landscape of AI, focusing on legal issues like ethics,

bias and data governance. When he's not immersed in technology projects, he enjoys

cooking, skiing and spending time with his family.

Welcome to the show today.

Thanks, delighted to be here.

So, super excited to have you on the show. What I'd love to kick this off with is asking

you a little bit about your background and your journey.

I'm curious: was AI and this tech side something you always knew you were interested

in growing up, or is it something you discovered in college?

Tell us a little bit about your background there.

Yeah, sure.

So, well, the first thing I have to admit is exactly how long I've been doing this,

which makes me sound terribly old in the context of all these bright young things who are just

getting started with AI at the moment, but I've been a lawyer for 21 years now.

I started out as a technology lawyer way back in 2001 because I've always

been really, really interested in computing, really interested in what was then the

nascent dot-com boom, and tech transactions over that

period have evolved enormously.

The first point at which we really started to see AI becoming a material part of deals

was doing kind of large outsourcing deals with a lot of the really big tech vendors.

And we suddenly in a realm of about 2014 or so started to see various vendors starting

to talk seriously about transformation projects and outsourcing involving AI in some form.

We'd had IBM's Watson kind of win Jeopardy in 2011, took really until sort of 2014-2015

to start seeing that kind of technology commercialized in outsourcing deals.

And that's really the first contact I had with AI on kind of client-facing mandates.

Okay, very cool.

Super interesting.

So of course you had, like you mentioned, a really incredible kind of career in this

space.

What kind of motivated you to co-lead DLA Piper's global AI practice group?

Partly being super interested by the technology.

So one of the things that led me to be a technology lawyer in the first place is that I'm a

turbo nerd as well as a lawyer, and I could see the impact that lots of these technologies

were starting to have very, very early on.

The impact was going to go beyond the simplify, standardize, automate

transformation journeys that outsourcing projects were typically on.

I could see that coming very, very quickly indeed.

We were then invited to provide a bit of advice for the UK's House of Lords

around a report published around 2017 called "AI in the UK: Ready, Willing

and Able?".

That was the kernel of starting a much wider look at the AI landscape, and

I then started reading lots and lots of the arXiv papers that were out

at that time, because there wasn't really a tremendous amount of user-accessible

content outside of the academic space.

And the timing was almost perfect, because just as I was starting to do that, Google

published the seminal "Attention Is All You Need" paper on transformer models.

I spent a long time trying to get my head around that because I'm a lawyer, not a mathematician.

But eventually it started to penetrate and I started to be able to have some conversations

with some of our clients who were then looking at investing in AI organizations, some of

our clients who were looking at building some of these tools themselves and some of our clients

who were looking at whether or not these things were threats to their business model.

So we started having lots of conversations around that time.

And of course, things just snowballed.

Even by 2020, 2021, the people in the know could see the tidal wave coming,

and then Pandora's box was opened in November 2022 when ChatGPT just exploded onto the scene.

And so luckily, we were already there with a global AI practice that had been going for

a couple of years by that stage.

But thankfully, we kind of caught that wave.

You were all lined up for it.

That's super awesome.

Something I'd be curious to ask you, given the background you have:

What are some of the common legal challenges you encounter when you're working on AI projects?

Yeah, so I suppose this depends on who you're advising and what they're looking to do because

an awful lot of people are in the position of being customer organizations saying we

are not AI experts, we're an insurance company or a bank or a life sciences company or whatever

it is.

How can we take advantage of these technologies?

How can we benefit from these fantastic productivity increases that everybody's talking

about that are going to be delivered by AI?

How can we sort of make some of our processes more efficient?

How can we deliver these insights?

And when they're doing that, it usually starts off with sort of a top down approach if people

are doing it properly.

So looking at things strategically, what is it that we as an organization would like

to be doing with this?

How would we like to be doing that?

What are the sort of top down choices that we need to make around our values as an organization

and how we can implement AI in a way that is consistent with those values?

So once you've got that strategic approach right, you can then move down to the kind

of tactical layer and say: how are we now going to convert that strategy into a series

of policies, a governance framework, some committees that bring in all the stakeholders,

contracts, et cetera, and then operationalize it? Because it's all well and good for someone

like me to write into an AI policy that we will do this ethically or we'll do this in

a way that's unbiased.

But then someone's got to build a system that meets that test.

Someone's got to be able to operationalize that and prove, from a data science

point of view, that this really does match exactly what we're doing in that space.

And so that was a gap that I think DLA Piper had recognized relatively early.

And one of the things that we've done, in addition to building a series of legal experts

who know what the nascent laws are that affect AI around the world, because it's an

evolving landscape, is that we've also got a series of lawyer data scientists who are

able to help prove that the algorithms that are being put together, the

models that are being trained and the datasets that are being used to train them do match

up with those tactical-level policy statements that say, well, this will be unbiased or this

will be ethical, et cetera.

And I think that's a really important piece of the puzzle because too often there's a

gap between the kind of operational layer and the tactical layer.

Okay.

Yeah.

That makes a lot of sense and definitely sounds like something people have to consider and

to work into their strategies.

Something else I'd be curious to ask you about: a lot of people mention

the importance of upfront data work. Based on what you've seen, what are some

of the best practices for data governance in AI projects, specifically when we're looking

at the risk side of it that I'm sure you deal with on the legal front?

Yeah.

And there's an awful lot to that because there's the data governance in relation to the organization's

own data.

So there are already, and there have been in many, many different parts of the world,

a series of legal frameworks that govern how you have to kind of collect and use data

for various different purposes.

So some of that will be kind of personal data or PII type rules and regulations that exist

in various different parts of the world.

And an awful lot of what big enterprise customer organizations

want to do with AI tends to involve processing personal data in some way anyway.

There's then also a whole series of laws and regulations that are sector-specific to

some extent.

So if you're doing AI in life sciences, you've got to make sure that you're complying with

all of the various different regulations that exist in that space and you're likely to end

up building something that is, you know, a software medical device in regulatory terms

in some sense, or if you're deploying it in the financial services sector, you've got

to make sure that you're complying with all of the oversight and control regulatory frameworks

that exist in that sector and so on and so forth.

So there's how you control your own data and that's a big challenge.

But at least organizations feel like that's their own.

They own the IP in relation to it; they're confident as to its veracity.

They may have the mother of all tidy-up jobs to do, to be honest, in terms

of actually getting that data into good shape.

But that's their data, and they feel like they can put their arms around it.

The challenge with AI is that we're also seeing people coming along and saying, well, now

I can use these pre-trained tools that exist in the world.

Some of those are accessed via APIs because they're controlled by some of the

big-name AI providers.

Increasingly, we're also seeing people being very, very excited about some of the open-source

and open-access large language models that are out there that they can then host

and run themselves.

But the question in relation to all of those things is, OK, great, but how much visibility

do I have in relation to those sorts of tools?

How much confidence do I have in how those are going to be used?

And so it's really about sort of data governance in terms of your own data, understanding what

the potential data landscape risk is in relation to those pre-trained models and ideally putting

it together and doing something that allows a kind of best of both worlds solution.

And that might be prompt engineering or, you know, using existing large language models to

kind of allow you to talk to a document or talk to a document set or something of that type.

There's a lot of solutions in that space.
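
To make the "talk to a document set" idea concrete, here is a minimal retrieval-augmented sketch. It is purely illustrative: the embedding model, the sample documents and the prompt wording are assumptions, not any particular vendor's product or anything discussed by the guest.

```python
# Minimal retrieval-augmented "talk to your documents" sketch.
from sentence_transformers import SentenceTransformer, util

# 1. Embed the organization's own, governed documents.
docs = [
    "Claims over 10,000 EUR require human review before payout.",
    "Personal data may only be processed under the retention policy.",
    "All model outputs used in underwriting must be logged for audit.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small open model
doc_vectors = embedder.encode(docs, convert_to_tensor=True)

# 2. At question time, retrieve only the most relevant passages.
question = "Do large payouts need a human in the loop?"
q_vector = embedder.encode(question, convert_to_tensor=True)
hits = util.semantic_search(q_vector, doc_vectors, top_k=2)[0]
context = "\n".join(docs[hit["corpus_id"]] for hit in hits)

# 3. Hand those passages to whichever LLM you license, so the
# pre-trained model answers from your data without being trained on it.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The governance point is that the pre-trained model never ingests the document set during training; your data passes through only at query time, under whatever contractual terms you have with the model provider.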

In some cases, people are very, very sophisticated and they actually want to unlock the model

and do actual training runs to truly fine tune a model to be better for their needs.

OK, that's an awful lot easier with, you know, the likes of a Falcon 180B or a

Falcon 40B or a Llama 2 or something like that.

Right: these are open-source, open-access models.

That kind of training is a lot easier.
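
As a rough illustration of what such a self-hosted fine-tuning run can look like, here is a minimal LoRA sketch using the Hugging Face stack. The base model name, the dataset file and the hyperparameters are placeholder assumptions, not a recommended recipe.

```python
# Minimal LoRA fine-tuning sketch for an open-weights causal LM.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # any open-weights model you may host
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a small adapter while the base weights stay frozen,
# which is what makes self-hosted fine-tuning comparatively cheap.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"]))

# "governed_corpus.jsonl" is a placeholder for the organization's own
# vetted data: one {"text": ...} record per line.
data = load_dataset("json", data_files="governed_corpus.jsonl")["train"]
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
model.save_pretrained("out/lora-adapter")  # an artifact you own and control
```

The contrast with a locked, API-only model is that here the weights, the training data and the resulting adapter all sit inside your own environment, which is the ownership-and-risk point being made above.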

There are solutions out there from the enterprise players that allow a kind of fine tuning layer

on top of a locked large language model to be developed as well.

But that's an area of an awful lot of interest for the more sophisticated

organizations who are saying, how can I have the best of both here?

I feel really confident about my ownership of this data.

I feel like I can contractually get comfortable with the risk that exists around some of this

other data, or I can take one of these models and host it myself and do fine tuning runs

and feel confident about the quality and ownership of that model.

And that allows me to then feel good about the situation I find myself in and the risk

that I've got around that data landscape.

OK, very cool.

Yeah, I think that I think those are definitely some really important considerations and things

to look at.

Something I'd be curious to ask you about: when you're in this

space, consulting and helping people in an AI-focused area,

what are some of the key elements you think organizations are overlooking right now,

whether that's preparing data for AI models or other things?

What are some areas that you see that people should

be aware of or thinking about?

Yeah, there's a few common challenges.

So I think that one of them is people thinking that the current state of affairs is

relatively static and they don't really need to be worried about the changing regulation

that is coming down the tracks.

And that's not just the EU's AI Act, for instance,

which we hear an awful lot about and which is the first big regulatory framework coming down the tracks in

that space.

I think there is going to be a lot of regulation in other parts of the world

coming very, very quickly indeed, soon even from a US perspective.

We've already seen, you know, Senate committees interviewing CEOs of, you know, very well

known AI companies.

We've seen summits being held with both political and technology players being invited along

to those summits.

We've seen lots and lots of industry commentary around the need to give reassurance to customers

around some of these sorts of things.

And some of the big players have even gone so far as to come out with, you know, statements

that are intended to give comfort around IP issues or statements that are intended to

give comfort around data issues or whatever they might be.

So there's a tremendous amount of changing regulation there.

And that, I think, means that large customers need to take stock of where they are

now and look to the future.

And if you're a provider and we work with an awful lot of the foundation model providers

and organizations that are building these things as well, it's important to sort of

be able to have conversations, both in public and sometimes, you know, via other channels

to sort of try to have some degree of sensible dialogue around the direction of travel with

regulation, because you and everybody that you might potentially consider as a customer

in future are going to get caught by this stuff.

So I think that being prepared for what's coming is a really, really big part of this.

And the organizations that are ahead of the curve there are the ones that will save money

and not have to do remediation jobs in future.

And they're baking in benefits today that they'll be able to continue

to get the benefit of, in an even more accelerated form, tomorrow.

And those who are incautious are the ones who are going to end up

doing this job twice. There's definitely a lot of incentive

to get this right first time if you possibly can.

Okay, yeah, that makes a lot of sense.

Something you touched on that I'd like to double-click on a little

bit: how do you see the relationship between intellectual property and AI evolving

in the coming years?

You mentioned some companies; I think specifically Adobe has said, if you're using

some of our image generators and you get sued for something, we'll cover

the lawsuit.

So there's that angle, and then there's people like OpenAI that just have

DALL-E and just throw it out there, and it's

kind of use-at-your-own-risk in a way, right?

How do you see this evolving in the future?

It's a really interesting one because there's obviously a lot of cases being pursued in

various different forums at the moment.

So we've got a few example class action suits out there, particularly in

the United States, where people have made a whole series of different claims

around the extent to which copying may or may not have happened in relation to training

materials that are alleged to have contributed towards various different trained models.

And, you know, there is, I think, a degree of naivety amongst some of the people who

are looking at these questions at the moment, and I touched on this at the beginning:

getting to the heart of exactly how these systems work, what it means to train

an AI, and what an AI model actually is in terms of the collection of weights and biases

that make it up.

These are not familiar concepts for lawyers who are more used to the

technology and copyright litigation cases more akin to Napster, and

out-and-out piracy of large volumes of media.

The other complicating factor is that actually these are cases that in some instances are

being brought against organizations that are both AI model providers on the one hand and

rights holders in their own right, on the other hand.

So there's actually a massive incentive for a lot of these big organizations to get

things right from an intellectual property perspective.

So I'm really, really interested to see how some of these cases will play out because I don't

think it's going to be very, very clear cut.

I think there's going to be a lot of interesting decisions by people who probably need to be

brought up the curve fairly quickly, educated in how these things work, in order for

judges to hear the cases properly and to have the facts at their disposal

when they're making these decisions.

And then we've also got some cases out there where we've got large rights holder organizations

pursuing claims.

Those are easier to see being pursued in a really clear-cut way, because

there's usually a single claim, and they've got a very

particular position that they're going to be able to pursue.

They're usually relatively well funded, and they'll have gone for a

law firm that they've specifically chosen to represent them on those grounds.

And those are the cases where I expect the issues to get flushed out and be cleared up much

more quickly and much more clearly.

And once we've got those positions, we'll all know where we stand.

But I think one of the interesting analogies here is when photography was brand new, originally,

there was no copyright in photographs at all.

There was a sort of suggestion that, you know, in order for copyright to exist, there has to be

some act of human creativity involved.

And people thought that, well, you know, pressing a button on a camera, there's no

creativity involved there.

So how can that possibly attract copyright?

So the very first photographs were deemed to have no copyright in them.

And it wasn't until people started to deliberately pose models with particular

lighting and particular dress and all the rest of it that there was deemed to be enough

creativity involved for copyright to subsist in photographs.

There were decades of development of law around how something as straightforward as

photography was treated.

So you think of how much more complicated AI technology is than an old silver halide

photographic camera in the 1800s.

It's going to take some time for the law to settle down in these areas.

And it's not going to be clear cut.

And not all AI is trained in the same way.

Nor are all data sets licensed in the same way.

So just because one case goes one way, it doesn't mean all other cases are going to

necessarily follow suit.

There's so much of this that turns on the facts.

Yeah, I think it's such an interesting space because I definitely feel

like there's this idea of ask for forgiveness, not permission.

People are just blasting out these AI models that they've

scraped the entire internet for.

In the future, some people will criticize that, and it'll

be interesting to see how that evolves.

But it would appear people are just trying to get the technology out there.

Something I would be interested in hearing your opinion on:

you're over in Europe right now.

What role do you think regulation is going to play in

the future of AI, especially in terms of data and ethics?

Of course, like you mentioned, the EU is one of the first places

putting forth an AI regulatory framework right now.

This is definitely something we're talking about over here in the United States.

But if you could make a prediction,

how do you think that's going to shake out?

How do you think this is going to get rolled out?

Yeah, and this is one of the nice things: the

fact that I'm able to have these conversations with colleagues in the

United States or in Southeast Asia means we get to compare and

contrast what's going on in different parts of the world.

And so one of the things that I've been saying for a little while around this

area is you can see the European mentality of, you know, let's go for control

and regulation as the method to control these nascent technologies.

In the first instance, we have the EU AI Act progressing through

its final trilogue stages now, and we've got a very good idea of what

the final-form legislation is likely to look like.

Very similar in texture to GDPR in many ways.

It's sort of this idea of use case-based regulation and fines if you get it wrong.

Very, very big fines as well.

In the US, interestingly, it's much more about control via class action

litigation, and that seems to be the method by which people are trying

to settle some of these early concerns around different

areas of AI use.

I think regulation does have a really important role to play, though, because

you only tend to sort of have regulation and policies and standards where you

expect people to be doing something.

You know, regulation isn't about saying no to things.

It's about saying, how can we allow people to do these sorts of things?

So the fact that you have regulation actually creates an awful lot of

certainty. It creates a common platform.

It creates public trust.

So it means that actually getting people to adopt these solutions in the real

world will be, I suspect, easier where people feel like, you know, there is

some kind of oversight, there is some kind of ethical standard, there is some

kind of transparency, et cetera, that is imposed by the law and is a

standard separate from the good graces of the AI providers, who may

choose, or choose not, to do those things.

And I also think that that common platform will actually enable inward

investment in a much more significant way.

And we've sort of seen this already. People often point to the GDPR as

a standard that they will say is anti-business or creates

difficult hurdles for people to get over.

Actually, it has generated a massive amount of confidence in how people can go

about creating data safe businesses.

And there's now been a huge industry around that in Europe precisely because of

that regulatory environment.

And I would expect the net effect of the AI Act and other regulations that follow

on from that to be exactly the same.

There will be lots and lots of different kinds of, you know, open access, open

source AI generated in lots of other places.

I suspect that Europe will end up having a bit of a lead by being the place where

people feel that they have got the ability to generate and market the safest AI,

the ones that customers can feel most assured about.

So we'll see how it plays out, but that's certainly my prediction for what the

real world impact of regulation will be.

And when people talk about the fact that there isn't regulation elsewhere, I

also think that's a slightly false challenge, because the mere fact that

there isn't necessarily something called the AI Act somewhere else

doesn't mean that there aren't lots and lots of regulations that impact

how you use AI in other places.

It just might be a much more complex environment or something where, you know,

there's a much more political component to how that regulation is imposed.

You can use AI as long as you don't cross the government, or you can

use AI as long as you're doing it within the narrow guardrails of these

slightly awkward industry guidelines or regulatory

requirements, or whatever it might be.

So that horizontal regulation will also create some much-needed parity across

lots of different regulatory environments.

It will be easy to go from using an AI model in healthcare to financial

services to media and sports or something like that in Europe. Whereas

in the UK, which is looking at sector-based regulation, or the US, which is

also looking at sector-based regulation, going from reusing an AI in financial

services to insurance, which is very similar but separately regulated, or

life sciences, which is very differently regulated,

that would be a real challenge.

That would be a lot more difficult, potentially.

Yeah, I think that's spot on.

And actually, that's really interesting.

What you say is a take I have not heard before, that more regulation will

actually spur more business growth, but it makes a lot of sense.

Right.

I mean, even looking at past tech cycles, you

had, when the whole Web3 thing was at full steam, all

of the top crypto platforms asking for regulation, asking

for clarification, because the lack of that definitely gave people an uneasy

feeling.

And of course, crypto has had all of its own problems,

so I'm not exactly

correlating that to AI.

But I think you're spot on that sometimes this regulation

gives people a lot of confidence and gives businesses confidence.

And it'd be really

interesting to see what investment looks like with that regulation in place.

Well, there's one further point on that as well.

If you go from regulation at the global level

to the in-house regulation that companies have, what we are

definitely seeing is that those companies that have adopted policies and

procedures around AI, they have effectively given permission to their

people to innovate in a very particular way.

They've sort of said, this is how we want to do it.

Go and bring us your best ideas.

And they tend to be the ones who are sort of going and having a whole

load of early successes.

Whereas at companies that don't have well-developed policies and procedures in

place, the first thing that people have got to get past with any AI project is:

am I even allowed to do this at all?

Is this something that this company wants me to do?

And that's a big barrier.

The lack of a policy is a weird point of

uncertainty that people then have to get past.

Yeah.

So, yeah, it works at the corporate level.

It works at the national level.

That's really interesting.

One question I'd be curious to get your opinion on in regard to that

specifically: let's say the EU passes the regulation. Do you think

that's enough? Let's say OpenAI says, okay, well, we

have to comply with that.

It's kind of like GDPR, right?

We don't necessarily have that in the US, but a lot of businesses in the

US, if they're going to do business in Europe, they're GDPR compliant.

So US citizens, or people around the world, essentially get

the benefits of GDPR if the company is operating in those areas.

Do you think, let's say the EU AI Act gets passed, that those

benefits are just going to kind of by default get passed on to everyone?

Or do you think it's still going to necessitate every country coming up with

its own regulations?

Well, we'll find out.

So it's an interesting analogy to sort of talk about GDPR, because GDPR does

have a whole series of explicitly extraterritorial impacts.

And it was designed that way.

It's always a point of surprise, to say the least, when you explain to

businesses that don't touch on Europe at all that, because they're

doing business with someone who's in Europe, or they may end

up holding some data that was once in Europe,

they could get caught by this regulation.

The same will apply with the EU AI Act.

Again, it has provisions within it that make it have explicitly extraterritorial

effect. And you've got 350-plus million

very wealthy consumers in Europe.

That's not a market that most global organizations can easily ignore.

So if you want to be able to access that market,

you're going to end up creating AI that's likely to then have potential

reuse in Europe, or the outputs might find themselves being used in Europe.

And in that case, you've suddenly got yourself caught by the

regulatory regime. Wherever it is in the world that you have based

that model, wherever it is that you've done your compute runs

to train that model, you're now caught.

So there will effectively be this kind of "Europe regulates the world"

rollout as a result of this.

Inevitably, there's a kind of question of enforcement.

How realistic is it that just because someone is technically caught

by the rules that they will meaningfully comply with them

if they're based a long way outside of the EU?

So I'm expecting that we'll still see an awful lot of people who are

technically caught or might technically have to comply, but choose not to.

Because, you know, they don't need to.

But certainly any large global enterprise, any organization

that is hoping to have the widest possible global market,

they will need to sort of be building tools and products that

are compliant with this regime, wherever in the world they happen to be.

OK, that's very interesting.

Hey, it's been incredible to have you on the show today.

As we wrap up, I would love to get maybe one piece of advice from you

that you feel like you could give to people that are looking to implement AI,

working in this space, maybe corporations.

What's one piece of advice you feel like you could give to these people

that are looking at this?

Don't be afraid of putting in place policies and procedures.

The more that you do that, the more you'll end up being confident

that you're developing AI tools and AI products, and giving

permission to your people to be innovative in a way that is future-proofed,

because it forces you to think about: who are we as an organization?

What do we want to be doing with AI, and how is that going to evolve in the future?

So the more that people do that, the more future-proofed they'll be.

I love that. That's fantastic advice.

Gareth, it was amazing to have you on the podcast today.

So many great insights.

If people are interested in connecting with you or your team,

what's the best way for them to go about doing that?

So I can be found on LinkedIn if people search for

Gareth Stokes, or DLA Piper's website is www.dlapiper.com.

And we have an enormous number of different resources,

including a blog called Technology's Legal Edge, which is technologyslegaledge.com.

And I have written a whole series of articles

expanding on all sorts of different aspects of AI there,

if people would like my opinions for free on a range of AI-related topics as well.

Very cool.

And I'll make sure to leave a link to the website in the description below for the listener.

Once again, thank you so much for coming on and sharing your insights.

It has been absolutely phenomenal.

To the listener: thanks for tuning in to the AI Chat podcast.

Make sure to rate us wherever you get your podcasts and have a fantastic rest of your day.

If you are looking for an innovative and creative community of people using

ChatGPT, you need to join our ChatGPT creators community.

I'll drop a link in the description to this podcast.

We'd love to see you there, where we share tips and tricks of what is working in ChatGPT.

It's a lot easier than a podcast, as you can see screenshots.

You can share and comment on things that are currently working.

So if this sounds interesting to you, check out the link in the description.

We'd love to have you in the community.

Thanks for joining me on the OpenAI podcast.

It would mean the world to me if you would rate this podcast wherever you listen to your

podcasts and I'll see you tomorrow.

Machine-generated transcript that may contain inaccuracies.

Join us for an illuminating episode as we explore the profound influence of AI on the legal and regulatory landscape, featuring insights from Gareth Stokes of DLA Piper. Dive into the dynamic intersection of artificial intelligence, law, and regulation, and discover the challenges and opportunities it presents. Don't miss this discussion on the evolving role of AI in the legal world!


Get on the AI Box Waitlist: https://AIBox.ai/
Join our ChatGPT Community: ⁠https://www.facebook.com/groups/739308654562189/⁠
Follow me on Twitter: ⁠https://twitter.com/jaeden_ai⁠