Global News Podcast: Special Edition - Artificial Intelligence - who cares?

BBC 9/15/23 - Episode Page - 52m - PDF Transcript

Hello, this is a special edition of the Global News Podcast looking at artificial intelligence.

I'm Nick Miles and we will be giving you a step-by-step guide to what AI is and what

it can and cannot do at the moment. With a panel of experts in front of an audience

at Science Gallery London, we will look ahead to how AI might transform our lives.

Everybody in 30 to 50 years may potentially get access to state-like powers, the ability

to coordinate huge actions over extended time periods and that really is a fundamentally

different quality to the local effects of technologies in the past.

We'll examine the effect it is having on our healthcare systems right now and its scope

and limitations for solving some of the huge environmental challenges we face.

Getting bogged down in a kind of a technological solution narrative stops us from really thinking

about the fact that we have to be the ones who want to instigate this change. Technology

isn't going to solve these massive, real existential risks for us. It's down to us

and it's down to the people who govern us and it's down to our individual actions and

collective actions as well.

Also in this podcast: as with any rapidly developing technology, there are concerns, of course.

We will look at the perceived risks and how we can minimise them.

Can those technologies actually look after us in a way that is safe and satisfactory?

What kind of devil's bargain are we making when we start to hand over our happiness and

well-being to artificial intelligence?

Hello and a very warm welcome to this special edition of the Global News Podcast from the

BBC World Service all about artificial intelligence.

We're broadcasting today from King's College Science Gallery in London, part of a network

of galleries connecting science with the arts around the world, from Atlanta to Berlin, Melbourne

to Monterrey. AI is something that you can't fail to have noticed in recent months. The

latest chatbots have amazed us all with their ability to almost instantaneously write essays

on anything that you throw at them. But as we'll see, AI goes way beyond that of course.

To discuss how we can harness the benefits of AI, whilst minimising the downsides, I'm

joined by the BBC's technology editor Zoe Kleinman who will help guide us through the

hour. Let's first look at what AI is. Here's a collection of views that we gathered upstairs

in the gallery, which is currently featuring installations looking at the challenges of

AI.

I think it's a kind of machine created by humans to make our life better. AI is, let's

say magic. AI is magic. I would define AI as any kind of like data collection system

that can output any data, sort through it, like by us kind of asking it to. A bit of

awe, excitement and a bit of fear there. Well, what's a view from AI itself? We asked the

chatbot ChatGPT. AI, artificial intelligence, refers to the creation of computer systems

or machines that can perform tasks that typically require human intelligence. These tasks include

things like understanding natural language, recognising patterns, making decisions, solving

problems and learning from experience. That was ChatGPT. Now for a human, let's get a definition

from Dr Michael Luck who's the former director of the King's Institute for Artificial Intelligence.

AI is incredibly hard to define and that's because it keeps changing. But if you push

me, what I'll say is this. If you see a human doing something that we think requires intelligent

behaviour and we get a machine doing the same thing, then that's AI.

So over to you now, Zoe Kleinman. Zoe, would you add anything to those definitions?

I think what I would add is something I heard Sam Altman say when he was talking to US lawmakers

a few months ago. He is the CEO of OpenAI, which created ChatGPT. And he said AI is a

tool, not a creature. And I think that's a really important thing to remember because

we have a long history, don't we, of anthropomorphising robots. You know, people talk to their robot

vacuums. They don't like to leave them in the dark because they think, you know, they treat

them like their pets or people. And I think it's really important to remember, even when AI is,

you know, seemingly effusively gushing at you like ChatGPT can do, it's not a human being,

it's not a person. It's very tempting, I do it myself sometimes, but we shouldn't. It's a machine,

it's a program, it's a device. It's not sentient.

Now, AI has been around for decades, of course. Alan Turing was talking about

machine intelligence back in the 1950s. What has enabled these recent advances in AI?

As with so many areas of tech and with science, you know, you have to start slowly at the beginning

and we're now in a position where we've been working on this for years. And, you know, for many

people, the first time they ever knowingly encountered AI was when ChatGPT exploded at us.

And that was less than a year ago, that was only in November last year. But of course,

it has been around for ages. And I think it's just now sort of come out of the gate at us,

if you like. But, you know, for years, it's been suggesting what you watch next on Netflix or

YouTube. It's been curating your friends' feeds on Facebook. It has been around us all this time.

We just haven't spoken to it before.

It's been hiding in plain sight. But now and in the future, we will really notice it big time.

Yeah. And I think that's one of the challenges of regulation. You know, the regulators in China

and also in Europe are saying people need to know when they're interacting with an AI tool.

They need to be aware that it's not a person that they're dealing with or, you know, if there's

content that's being generated by these tools, that it's very clearly flagged that it wasn't

made by a human. Sure. We will come back to regulation because it's a huge issue later in

the hour. But first, I think it's time to introduce our panel of distinguished guests here. First of

all, Carrie Hyde-Vamonde. You're a lawyer, a visiting lecturer here in law at Kings. What's

your involvement in AI? I'm really interested in how AI can be used in the justice system

to deal with some of the problems that we've got within the justice system, such as delays.

Vicky Goh, you are here. You are a practising medical professional. Why are you here? What do you do?

So I'm an academic radiologist here at Guy's and St Thomas' and King's. And my interest really is in

developing and deploying AI tools for medical imaging in particular, but also more generally

in healthcare. Okay. Again, we'll come back to that because AI in healthcare is a huge issue

that is going to affect so many people around the world. Let's move on to Gabby Samuel now.

My background is ethics and sociology. And I look at the ethical and social issues

associated with AI. And in particular, the ethical and social issues that are associated

with the environmental impacts of AI. And Kate Devlin, you are a computer scientist looking at

the social impact of AI, aren't you? That's your particular area of interest. What else?

That's right. Yes. So I want to understand how AI impacts people. And I'm part of a new

national programme called Responsible AI UK that seeks to unite that landscape of all the

people doing responsible AI. Now, more from our panel in a moment. But first, the reason we are at

Kings at all right now is that just upstairs, there is an exhibition looking at some of the

ways that AI can be used to interact with us and have an impact on our lives.

Cat Royale is a new work from Blast Theory that explores whether AI can make us happier.

The artists have made a utopia for cats, an ideal world where every possible comfort and

luxury is laid on. And that's some commentary online from the makers of another of the installations

here. For the next 12 days, they will spend six hours a day here relaxing, eating and exploring.

In the centre of the room is a robot arm connected to a computer vision system and an AI that offers

games to each cat. Over time, the system attempts to measure which games the cats like in order

to increase their happiness. Well, the cats are all gone now, but their utopia is still up for

people to see. It's a box of four metres square or so, with lots of perches to jump between.

And as you heard there, an AI powered arm in the centre. Well, Matt Adams helped develop it all.

He's been telling me about the issues that they try to explore here. AI is coming more and more

into the home and into care settings. So we are getting closer and closer to these technologies.

Can those technologies actually look after us in a way that is safe and satisfactory?

What kind of devil's bargain are we making when we start to hand over our happiness and well-being

to artificial intelligence? And as we saw from the cats' experience, the cats liked the treats,

so the AI learned to give more treats. So you've got to be careful what you wish for.

Absolutely. And we had a human override on that AI system and we had to refuse food a lot of the

time because the AI just wanted to give more and more food. And I think, you know, with social

media in the last 10 years, we've all learned that something that gives you endless little

dopamine hits can ultimately be a very dangerous thing.

Certainly can. Matt Adams talked there about human override, how to stay in control.

And that's the thing, of course, that we'll pick up on later in the programme because it's all about regulation.

We will look at that later on. But let's stick with Cat Royale for a moment. Kate Devlin,

you're looking into how we design AI to make people feel at ease with these kinds of

technologies in the future, aren't you? So what are the

things that we need to be doing? It's really important that we consider who is being affected

by this technology. And we know that there are a lot of people out there who will be subject to it,

but might not have any say in it. So we have to ensure that everything is done with careful thought,

making sure that it's responsible, making sure we've looked at any repercussions from

any bias that might be in the system. So we've seen over the last few years a lot of people have

got these little speakers, smart speakers in their houses. That's the most obvious way in which AI

seems to have come into our home. But it's not just that, is it, even now? I think what we're

going to see more of is the rise of devices that we call smart, don't we? And those are devices

that we programme to make decisions for us. As you say, if you've got a smart speaker, you might have

it set up to turn your lights on and off, to recommend things that you want to see or to hear.

A few years ago now, I stayed overnight in a house full of robots. It's a long story. But

it was really interesting. They were care robots. So they were designed to be machines that would

look after people who had primarily physical needs. They were looking at the older population.

One of the things I had to do was sleep the night there. I lay on a bed full of sensors.

Basically, if you didn't move for too long, then these robots would come and see if you were all

right. I didn't get much sleep, I have to say, because I was so nervous about

a robot calling an ambulance for me. It felt at the time a little bit dystopian,

but actually, fast forward a few years, you can kind of see us accepting that sort of

relationship with sensors, wanting things to understand us more and to be personalised for us.

And I think AI has the potential to do that. And Kate, obviously it has the potential,

but it needs to be very well designed. People are working on this around the world at the moment.

That's right. And robots may or may not have AI in them. A lot of them in this case would,

so they need to be able to adjust to their environment. And that's why we think there

might be potential in things like care robots. But it's also quite difficult because it turns

out that people are pretty fragile and robots aren't very good at gripping things. So if you

want to just build a robot that can lift and carry people, well, it's been tried, it hasn't really

worked. There are other ways that you can integrate this technology. So you could have sensors, for

example, like Zoe was saying, you could have assistive technologies, you could have exoskeletons

that carers wear that would help them lift people and carry them. And you've been finding out what

kinds of aspects of robotics would appeal to people? We have done a survey that goes along with Cat Royale,

so if you visit the gallery to see that exhibition, you can also fill out our survey.

And I was quite curious because when you say to people, do you think that automated care in

the future for old people would be useful? There's a lot of agreement that sounds great,

we have a care shortage, we need to have that help. So people say, yeah, let's have a robot

that could look after the elderly. If you then say, well, how about if it looked after your cat,

then they start getting a bit more apprehensive, slightly concerned. So we wanted to run

this survey to find out what the attitudes are. We're still gathering data and I don't want to

influence it too much. People want to go and fill that in. But it's quite intriguing to see what

the responses are. If you say, would a robot look after your granny versus a baby versus your pet?

I suppose from care, it's a very short skip to the use of AI in health in general as well. And we

asked some people around the world to send in voice notes about what their hopes for AI in the

field of health would be. Let's listen to one now, Amandeep Ahuja from Dubai. My hope for artificial

intelligence is the early diagnosis of diseases. It was quite interesting to see that

certain cases of breast cancer were caught quite early because of the positive implications of

artificial intelligence. And here's another one on a slightly different issue from Michael

de Battista from Malta. In my view, artificial intelligence could potentially enhance social

inclusion in a number of countries around the world, at least in part because it could facilitate

and promote independent living for persons with disability. Okay. And John Landridge from Spain

sent this in. My number one hope and indeed expectation from AI is that medical applications

can be so radically enhanced as to be able to detect, treat and cure all those conditions

currently so damaging to people everywhere. Wow, pretty optimistic there. Vicky Goh, you're a

working radiologist. Do you think that's overly optimistic, certainly for the moment? I think

certainly for the moment it is. But it was very interesting to hear that first clip talking about

breast screening, and there you can see already that we are starting to see successes in terms of AI

in healthcare. But the important thing to notice here is that actually those are very task focused

and that's where we're seeing most of the successes. If we step away from that scenario,

it's a little bit more challenging. And task-focused AI, in terms of your work in radiology, is

doing very well? Yes, but you know, it's fine if I just want to exclude one condition at any

one time. But if you want to integrate lots of data points and manage a patient's condition

over a long period of time, that's when AI is not really successful at the moment.

Kate Devlin, I mean, there are all sorts of ethical issues and concerns.

If you hand over healthcare, either physical healthcare or mental healthcare to AI in any form,

there will be concerns, won't there? The initial concerns are around things like data and privacy.

So this is very sensitive data. Do you want that to go to the companies who create these AI apps,

for example? But there are other issues as well. So we have to think about whether or not we still

have that human in the loop that was mentioned, that human control and oversight. And then further

down the line, there are social implications. If we are going to rely heavily on technology,

is that going to be at the cost of employing doctors? And when we have a system, a healthcare

system like, for example, the NHS, where there are lots of issues with how that information

technology joins up already, how are we going to integrate those systems? So we have a lot of

failing IT and surgeries that aren't joined up. We have different systems at play that need to

be brought together. So logistically, it's a challenge as well. Vicky Goh, at the moment,

radiology is using AI. And there are issues as well with the data being used, because you could get

biases, couldn't you? Absolutely. And the biases essentially are in training of the algorithms.

And what we're finding at the moment is that if you train the algorithm on very selective data sets,

and a lot of the algorithms are being developed on a small number of data sets worldwide,

that are open for development, it may not necessarily translate to your healthcare system.

And then, you know, you have those issues of generalisability.
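The generalisability problem Vicky Goh describes can be illustrated with a toy sketch. All the numbers below are made up for illustration: imagine a decision cutoff tuned on one hospital's scanner, then applied at a second site whose readings are systematically shifted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical marker intensities: higher in disease than in health.
# We "train" a cutoff on Hospital A's scanner only.
healthy_a = rng.normal(1.0, 0.2, 1000)
disease_a = rng.normal(2.0, 0.2, 1000)
cutoff = (healthy_a.mean() + disease_a.mean()) / 2  # simple midpoint rule

# Hospital B's scanner reads systematically higher: distribution shift.
healthy_b = rng.normal(1.6, 0.2, 1000)

false_alarms_a = (healthy_a > cutoff).mean()  # rare at the training site
false_alarms_b = (healthy_b > cutoff).mean()  # very common at the new site
print(f"false-positive rate at Hospital A: {false_alarms_a:.1%}")
print(f"false-positive rate at Hospital B: {false_alarms_b:.1%}")
```

The cutoff works well where it was fitted and fails badly elsewhere, which is exactly the point about algorithms developed on a small number of selective datasets not translating to another healthcare system.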

Let's move on now, because obviously the question with all new technology, AI included,

is how is it going to be used? Who's going to make the decisions? Who's taking responsibility

for those decisions that could profoundly affect all of our lives? Well, that is another of the

issues being looked at upstairs in the gallery, as I've been finding out. I'm standing now in a

room that looks to my untrained eye a little bit like a forensics lab, because there are a number

of different glass jars in front of me. There are syringes here on the desk, some graph paper.

Sarah Selby, you created this work. Tell us what you're trying to do here.

Yes, so Between the Lines is a project that explores the administrative systems of the UK

border regime. We have been collaborating with a charity called Beyond Detention and also a

bioengineering company called Twist Bioscience. And we've been speaking with individuals who

have been detained within the UK as a result of the immigration policies and collecting the

testimonies and their experiences of that and kind of thinking about the impact that it's had on them.

And then we've been encoding these testimonies into DNA nucleotides and creating synthetic DNA,

which is then embedded into writing ink and distributed to decision makers and policy makers

within the UK border system. As well as challenging public perceptions of immigration policies and

trying to keep the individuals at the centre of these debates in view, it's also about

prompting reflection upon the people that are making these decisions. I'm hoping that it's

going to kind of make them keep the individuals that are going to be most impacted by these systems

at the heart of the decisions that they're making and consider how kind of sometimes quite simple

administrative actions can result in quite widespread disaster for the people that are impacted by it.
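The encoding step described there, turning testimony text into DNA nucleotides, can be sketched with the common two-bits-per-base convention (00→A, 01→C, 10→G, 11→T). The installation's actual scheme isn't specified in the transcript, and real DNA data storage adds error correction and synthesis constraints, so this is only an illustrative sketch:

```python
# Illustrative text <-> DNA mapping, two bits per base (scheme assumed,
# not taken from the installation itself).
BASES = "ACGT"

def encode(text: str) -> str:
    """Map each byte of UTF-8 text to four nucleotides."""
    out = []
    for byte in text.encode("utf-8"):
        for shift in (6, 4, 2, 0):          # high bit-pair first
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def decode(dna: str) -> str:
    """Invert the mapping: every four bases back to one byte."""
    data = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        data.append(byte)
    return data.decode("utf-8")

print(encode("Hi"))          # prints CAGACGGC
print(decode(encode("Hi")))  # round-trips back to Hi
```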

Pretty important issues raised by that.

Carrie Hyde-Vamonde, you're a lawyer, you're not specifically involved in the field of immigration,

but how can the use of AI help, do you think, in the criminal justice system as a whole?

What issues and concerns are raised by that, you reckon? We know that criminal justice systems

are struggling under a huge weight of cases at the moment, so how could AI help?

Yes, there's definitely a problem with delay across the globe, and just to give a sort of

window into how delay impacts judicial decisions or how people's lives are impacted by delay with

court hearings, people's memories are going to fade, witnesses are not going to remember

all the details, essentially. There are lives affected by that, whether they're the victim

or the individual in the dock, the person accused. And so in criminal justice, you might be trying

to see how you could speed up processes, can AI look at a vast range of cases, and from those

patterns that it perceives, because it's very good at perceiving patterns, be able to decide whether

people are potentially guilty or not guilty, or whether AI could decide at least what are similar

cases, how are these similar cases being treated. So these are all possibilities that AI could help

with. There are obviously concerns related to those. We'll come to that in a minute, but where is

it actually being used at the moment around the world? In the UK, for example, there's limited

use in courts at least, although there is standard kind of statistical analysis going on

behind the scenes, but that's very much overseen by the humans involved. If we go further afield,

in Brazil, for example, they're looking at using AI for analysis of cases and trying to use that to

assess similar cases again, not to make decisions, but to encourage

judges to look at similar cases. And in China, there's a very extensive use really of AI technologies

to ensure that, as it's put, like cases are treated alike, that we're trying to kind of

standardise how decisions are made and thereby speed them up. Now, it does set alarm bells

ringing for many people the use of AI in making decisions that could send people to prison for

many, many years. And we've been hearing from a few people around the world. Alexandra Morin,

from Dijon in France, is one of the people with concerns. I fear that many might have to face

discriminatory artificial intelligence, depending on how it starts and how it is fed,

until unbiased standards can be developed and applied.

Carrie, that is her concern. But from what you were saying in China,

their theory, at least, is that everybody would be treated in a similar way. And it would do away

with bias. It would do away with the bias of judges who are seen as biased, for instance against

particular ethnic minorities or particular age groups, those kinds of things. Yes, I think that

the appeal is consistency. We might think, well, the legal processes, laws are rules. And so,

therefore, all we have to do is ensure that rules are implemented in a consistent fashion. But

there's a lot of complexity behind that. First of all, the courts play a really important process

in the relationship between people and the state, essentially. And we expect them to adapt as time

goes on, as culture evolves. And biases, okay: in most countries

there is some element of bias in the way in which court cases are dealt with in real life. We know

about racial biases that do occur. This is not something that's new, whether it's the majority

or not is another thing. But it's something that certainly is known to be an issue. What does AI do?

It looks at the data that is already provided. So if you're trying to learn off data, if you're

looking at past cases and trying to predict what future cases are going to be, then you risk

repeating those biases and, in fact, kind of re-emphasising them, making them worse, okay?

And there's a really clear distinction here between something like the health data that we've

just been talking about, okay? Because in health data, you can biopsy and see if there's a cancer.

In the circumstances of looking for that judicial truth, is this person really guilty?

That's something we've been trying to understand through the history of justice.

We can't say if that person is guilty. We can only say that we made the decision

to the best of our abilities. But the best of our abilities are essentially that we are flawed.

We are flawed, but we hope to have as much trust in our judicial system as is humanly possible.

Coming up, will AI mean that I'm likely to lose my job? That's what a lot of people are thinking,

or get a far more rewarding and less physically demanding one, maybe? We'll hear from people

about their concerns and ask why and whether they are justified. What about the environmental

impact of AI? Will it help or hinder us? When it comes to climate change, for example, and we

will try to answer the big question, could AI take over? And how do we put in the checks and balances

to make sure that it doesn't? Welcome back to people listening around the world, to this

exploration of all things AI. In the first half, we've looked at the impact AI is already having

in healthcare and the justice systems around the world. We are going to move on now to look at the

impact AI and tech may have on the environment, both for good and for ill. Here's a comment that

came in from a listener in Sweden. Hello, my name is Santosh and I'm from Karnataka State in India.

At present, I work and live in Uppsala, Sweden. My fear about the use of artificial intelligence

in the industrial sector is that it might lead to increased mining for minerals,

increasing demand for plastic and manufacturing, and many devastating environmental consequences.

And that is something that's been worrying another of the creators of the art installations

that we visited earlier. Well, we've come now into a gloomy cave-like room, I suppose, and as my eyes

become acclimatised, I can see around me what looks like the detritus of the 21st century:

voice-activated virtual assistants in mud on the floor. It looks like a graveyard for tech.

Well, the person behind this is Wesley Goatley. He's with me now. Wesley,

what kind of issues are you trying to raise? There's a lot to the creation of a device

like, say, a smart speaker, an Amazon Echo or something. It's got a huge cost to the

planet, starting with the extractive nature of creating a device like that, pulling rare

metals out of the earth. It then operates for a very short time on a shelf or a window sill for

maybe two and a half years before it breaks, malfunctions, fails, or is simply replaced by

the next, you know, newest, shiniest object. And then they go back to the earth in places

that look like this, but obviously are often in countries like Kenya and Ghana, for example,

where the consumer societies of the world dump a lot of those sorts of materials. And in those

places, they have this other layered impact where there's a very long decay time of these

technologies. So they decay for much, much longer than they were ever functional for. When they

decay, they do things like bleed out materials that poison the water, poison the ground. So they

have this long-lasting environmental impact at both ends, at all stages of

their construction and operation. And the data centres as well, you know, the average

data centre consumes about the same amount of water per day as a town or city of about 50,000 people

does. Against these negatives, we've got to weigh up the potential positives for the environment

of AI in general, whether it's enabling the future of fusion reactors, whenever that

might happen, or finding solutions to climate change as well. We shouldn't forget these aspects

either. No, absolutely. I think the danger is when technologies such as these, which are there

to aid in human problem solving, are considered in themselves to be the solution to the problem.

You know, the phrase technological solutionism is quite a common one now, where people frame any new

technology as the kind of solution to much, much bigger problems. I mean, there's a discussion

within certain aspects of the AI community, usually propagated by people who benefit a lot from

attention on AI, like large scale operators in this kind of space, heads of big tech companies,

they like to talk about existential risk, you know, they say, oh, you know, AI is going to take over,

it's going to do this and that in the future. But I would say that the real existential risk is

things like the climate crisis, you know, and that's a human problem.

AI isn't going to cause that in the way that we do and are. But also the solution isn't AI,

the solution is us, because AI and computational technologies in general are just problem-solving

tools. But it needs the social, political and cultural willpower to want to actually solve those

problems. Well, let's pick up on some of those ideas and concerns. Now, Gabby Samuel, you have

looked at the impact of AI on the environment. What should we be worried about? Wesley Goatley

mentioned what goes into creating AI and the tech that goes with it; it's quite a problem,

isn't it? It's a huge problem. So if we think about AI, as we know, it's underpinned by digital

technologies. So we have to think about the environmental impacts of digital technologies.

We know that the global greenhouse gas emissions associated with digital technologies

are similar to those of the aviation industry. So we're looking at about 2.1 to 3.9 percent of global

emissions. It's pretty high. And AI is increasing all the time. And I think one thing we need to

think about when we think about these issues of mining and these issues of electronic waste,

as Wesley was talking about, is something called rebound effects. What you come across

a lot in the private sector, when we're talking about that technological solutionism,

is the claim that AI is going to make other sectors much more efficient,

and that's going to improve issues in terms of climate change. But there's a paradox that's

called Jevons paradox. And the paradox goes that the savings we would normally

expect when we increase efficiency are often much smaller, because some of the saving is rebounded.

And sometimes so much so that consumption actually increases, which is when it backfires. And that's

because of the behavioural change that comes with the rebound effect. So let me give you an example.

If you buy a new efficient freezer, then you might leave the door open for longer

because you don't need to worry about it, or do other things that increase your use of electricity,

so that overall your electricity use goes up. Or if you put insulation in your house, you

don't worry about your heating as much, and maybe your heating bill goes up. So these are the types

of behavioral changes that are not considered when we take this technological solutionism

approach. And as we move to using more and more AI, while it promises to increase the

efficiency of all these other sectors dealing with issues such as climate change, what we're

not considering are these kinds of rebound effects. And Wesley was quite dismissive when I talked about

how AI might be used at some point in the future to find solutions to climate change.
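The rebound arithmetic Gabby Samuel walks through is easy to see in numbers. These figures are made up purely for illustration: a 30% efficiency gain per task, offset by a 50% rise in usage because each task has become cheaper.

```python
# Hypothetical numbers illustrating the Jevons paradox / rebound effect:
# efficiency per task improves, yet total consumption still rises.
energy_per_task_before = 1.0   # kWh per task (made-up unit)
tasks_before = 100

energy_per_task_after = 0.7    # a 30% efficiency gain
tasks_after = 150              # usage grows 50% because each task is cheaper

total_before = energy_per_task_before * tasks_before  # 100 kWh
total_after = energy_per_task_after * tasks_after     # 105 kWh

naive_saving = 1 - energy_per_task_after / energy_per_task_before
print(f"naive expectation: {naive_saving:.0%} less energy")
print(f"actual change in total consumption: {total_after - total_before:+.1f} kWh")
```

The naive expectation is a 30% saving; with the behavioural change included, total consumption goes up, which is exactly the "backfire" case described above.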

He's right to be dismissive perhaps, isn't he? He's very, very right to be dismissive. A lot of

my work focuses on how the private sector puts out this narrative that technology can solve

problems in society. But what that does is it hides what's behind that technology, right? So as

Wesley was saying, it's the human-technology relationships that we need to

think about when we're trying to solve problems. And what technological

solutionism also does is take our minds away from other, perhaps low-tech, solutions that might be

more justified or might work better. So I also work in the health sector to give you a really

quick example is that we're investing huge amounts in technology in the health sector.

And we know from earlier on that some of that will produce a lot of health benefits. But actually

what we know is that the majority of health outcomes are associated with social, economic,

and other types of factors. And we know that if we get people out of poverty, if we give them a

good education, we're going to stand them in a much better place in terms of their health

than if we just invest in the shiny new objects of AI. But the way society is going, we're investing more and more in tech while not thinking about those most vulnerable in society. So AI takes our mind away from that.

When it comes to the most vulnerable in society

on an environmental level, though, those are people in the global south who are already

struggling with the impacts of climate change. And yet, Zoe, to a certain extent, artificial

intelligence can help people deal with some of the worst impacts of climate change, identifying where a particular event is going to take place and getting resources to those areas.

What I think is interesting about AI tools is that sometimes you've heard the phrase a solution

looking for a problem, but sometimes they do come up with solutions to things we didn't know about.

So I interviewed a seismologist in New Zealand who had been studying the vibrations of broadband

cables buried in the road in this remote part of New Zealand that's overdue an earthquake. And he

was trying to work out whether the vibrations of these cables gave him any, you know, information

about when this big quake might happen, if there's anything going on. And there was loads of data,

because it turns out, guess what, they shake all the time, right? So there was absolutely loads of

data, but they built an AI tool to process it really quickly. And he said it was throwing up

all sorts of really interesting things about sort of road management, the impact of the traffic

on the quality of the tarmac that these cables were buried in. And then there was a tsunami

hundreds of miles away. And that was picked up by these cables as well. And he said, you know,

to be honest, I don't really care about any of that. I'm a seismologist. I only want to know

about earthquakes. But there is all of this data and all of these kind of patterns forming that we

didn't even know that we needed to know about. And I do think that's sort of the other side of it,

is that sometimes, you know, you crunch enough data, don't you? And you find stuff that you weren't necessarily looking for. But that is helpful.

Indeed, crunch enough data and you could potentially realise that you don't need as many employees, because that is the elephant in the room that perhaps we've been trying to avoid up until now. The question is: is AI going to mean I lose my job? That's a question being asked around the world. When we asked listeners to the podcast to send in their thoughts about that,

we had a really big response. Let's listen to one of them.

Hi, Global News. This is Laura from Beautiful Brighton in the UK. And my hope for AI is that it

can take over tedious, rote work, and we can all have a bit more leisure time. And my fear would

be the opposite. The AI takes over interesting, engaging and fulfilling jobs, creative jobs,

jobs in the information economy, and we are all left doing tedious manual labour forever.

Not a happy thought. Here's another one from Estefan Guzman from Los Angeles.

In the early days, us artists, we used to wrap ourselves in the comfort that

art was a very human endeavour, a thing that required an ingredient of soul or heart in the

process. People would make fun of the hands and general sterility of AI-generated images, but in less than a year the quality has leapt to be indistinguishable, from photorealistic art to the more stylistic caricature. It seems a shame that the reins of the future of art are now in the hands of business and tech industries whose concerns are not the arts or humanities,

but just lower costs and higher profits. Gloomy predictions there. Zoe, there seem to be

more concerns than hopes. Is that your impression? I think we are standing at a fork in the road

here and there's a lot of uncertainty and a lot of unknowns and I think we may well all know

examples of people whose jobs have been affected. I've got a friend who's a copywriter and there

were five of them in her company and now there's only one, her, and her job is to check the copy that is generated by ChatGPT. We can see it coming. Microsoft has invested billions of dollars in OpenAI, the maker of ChatGPT, and it's putting the technology into its Office products so it will be able to

generate spreadsheets, it will be able to summarize meetings, it will make pie charts,

PowerPoint presentations and what Microsoft says is it will take the drudgery out of

office work, and you think, great, I hate drudgery, I don't want to do drudgery. But what if that is your job? What if the drudgery is actually your job? What are you going to be doing if you're not doing that? We're going to see it hit quite a large selection of jobs.

Okay, thanks very much.

Let's move on to something that we touched on earlier, because if you look around our lives, so much has been made safe by regulation. It looks, though, to my untrained eye as if artificial intelligence doesn't have an awful lot of this in place already and is playing catch-up. Kate, is that right?

Yes, we're always going to be a bit behind on regulation because technology

moves so quickly, so that's definitely a thing and although we have existing laws in place that

can cover a lot of this, there will have to be some new ones as well. Let's take a question now

from one of our listeners who sent this voice note in about it. She cares quite a lot about this

issue. My name is Laura. I'm from the Philippines and my biggest fear about AI is the speed of

development and lack of regulation and I worry about another explosion like social media and all

the consequences that we cannot foresee because the speed is outpacing regulation. I don't know if

anyone remembers Dolly the sheep from the 90s, when cloning was the big thing, and it was really,

really slowed down. I'd like to think to a certain extent it was because pause buttons were put in

place so that we didn't get ahead of ourselves and I'd like to see the same thing with AI.

So let's dig in. A lot of people, when they talk about regulation, think, oh well, perhaps we need regulation because the machines might take over, they might not be able to be switched off, they run away out of control. But regulation is not necessarily just about that kind of thing, is it?

I think we've got a long way to go before we start worrying about that. I think, you know,

regulation is about responsibility. We have not done very well at this in the past. You may remember

when social media first came along all the tech companies said we don't need regulation. We can

regulate ourselves but we all know how well that worked out so I think everybody is keen not to

repeat that experience. There are a lot of calls I'm hearing for creating a sort of UN-style regulator. You know, it's not really a geographical subject with borders; AI is everywhere and everybody's using it, so how effective is it for different territories to come up with different forms of regulation? But that's what they're trying to do at the moment. So the AI Act in Europe has been

passed, but it won't come in for a couple of years, and that sort of grades different tools depending on how serious they are, so a spam filter would be less heavily regulated than an AI tool spotting cancer. Here in the UK, for example, the government has said we're going to fold it into

existing regulators so if you think you've been discriminated against by an algorithm for example

go to the Equalities Commission. Now, you can see the logic there, you know, it should be part of the fabric of everyday life. It is, but the Equalities Commission, I imagine, is already quite busy, and also how many experts in this particular area do they have, Kate is laughing already, to be able

to unpick that. The US is still working on its own ideas and lawmakers there are saying we don't

know if we're up to this job because it's moving so fast and because we're aware that we don't

really understand it.

Indeed. Gabby Samuel, when we hear big tech talking about having a moratorium on new AI chatbots, it seems to go against the grain a bit, doesn't it? Because Google's mantra

used to be move fast and break things. So have they suddenly got a bit of a social conscience

do you think, or are you skeptical?

Very skeptical. I find it quite funny that they put out that "let's slow down" after they'd created ChatGPT, as if it's some kind of media stunt. No, you want to be very, very skeptical. We do need to slow down, and there's a movement called slow science, but it's incredibly difficult to slow down when we don't have any regulations controlling what big tech are doing. And we're in what they call an AI war, right, so all nations are trying to be the AI leaders. As long as we're in that socio-political context, it becomes incredibly difficult to try and regulate big tech.

And do you think there's the will, Kate Devlin, from

what you're hearing, do you think there is the will around the world to do this?

Definitely, but I think a lot of that, as Gabby says, is a geopolitical issue as well; it's trying to vie for power over all of this. So yes, there is genuine concern, and people want to do good things and do this right, but at the same time they also want to be the one person leading it all, the one nation leading it all.

Okay, listen, we are almost out of time, but for this last section let's end with some AI hopes and fears and more predictions. Let's hear from Corrine Kath, who's from Delft in the

Netherlands.

This is a solution looking for a problem. All of these big tech companies have poured a lot of money into developing AI systems and are now pushing these solutions into all sorts of areas of society, like education and media and health, even though these sectors don't necessarily need the kind of solutions that these tech companies are presenting us with. And the thing that I worry about most is that we just accept as a given that we somehow magically all need AI, instead of questioning, hey, why are these companies pushing it, and is that the future we want together?

So she seems to be questioning whether or not we need AI at all. From a health perspective, a legal perspective, perhaps even an environmental perspective, would the panel members disagree with that? I mean, we need it to a certain extent, we need the positives anyway.

I think there's, you know, a kind of moral obligation to think about whether we can use AI. Yes, we have to hold back, but if there are tools out there that can help, we need to look clearly at them. I think that's certainly the case in health and elsewhere.

Okay, so it is here, it's not going away. There's another concern from somebody else who sent a voice note in; it is a much more philosophical question, really. We can hear from this person, who's Chelsea Kania, an American living in London.

Regarding AI, Blanche DuBois said in A Streetcar Named Desire, "I have always depended on the kindness of strangers." I wonder, can AI be taught to make decisions based on empathy, or will we someday live in a world without such ideal exceptions?

A world without empathy, Kate Devlin?

I'm quite the tech optimist, and I think it is possible that we could do this with empathy. But also, when it comes to deciding how our machines should behave, whose ethics do we choose to do that? Because they differ; they're cultural, they're social. Different parts of the world have different views on how to behave, and different groups will have different ideas about what the priority is, so it's quite difficult to settle. But I love the idea of being led by empathy.

Vicky Goh, you take the Hippocratic Oath to take care of your

patients with care and empathy. Is empathy necessary?

I think empathy is very necessary, but an important thing is that these are still tools, and at the end of the day, tools that should do no harm. I think the most important thing for healthcare is that we actually know what the black box is supposed to do and that it's actually doing what it was intended to do, and there's still a gap at the moment, I think, for us in that sort of evaluation.

Okay, can I have one more

clip? Oh, Gabby first.

I was just going to say, and they do do a lot of harm, right? If we think about everything we've been talking about so far, the unpaid, hidden labor that goes on, where people have these jobs in appalling, appalling conditions, and the e-waste, where communities come to live around the e-waste and try to extract the minerals through unregulated mechanisms such as acid baths, causing a huge amount of health hazards both to them and the planet, we are already doing that harm.

All stuff that, not surprisingly, wasn't mentioned by this

last speaker, tech entrepreneur Mustafa Suleyman, who's CEO of a company called Inflection AI. He was speaking very recently to the BBC's HARDtalk programme.

Everybody in 30 to 50 years

may potentially get access to state-like powers, the ability to coordinate huge actions over extended time periods, and that really is a fundamentally different quality to the local effects of technologies in the past. Aeroplanes, trains, cars: really important technologies, but they have localized effects when they go wrong. These kinds of AIs have the potential to have systemic impact if they go wrong. This is a sort of godlike power that we humans are now contemplating, but with the best will in the world, probably none of us believe that we deserve godlike powers. We are too flawed. That's surely where the worry comes from.

Too flawed, says my BBC colleague Stephen Sackur

there. So, are we too flawed? That's a question for everybody here on the panel. First of all, Kate Devlin.

I think we have to ask who you mean by "we" in that. Who do we trust to have those powers? Do we even want them? I don't want the power of states, you know. This is all down to the fact that AI right now is incredibly technocratic; the power in AI lies in the hands of big tech companies in Silicon Valley, and that's their vision for the future, but it's not mine.

Carrie Hyde-Vamonde, what do you think?

Yeah, I think the concentration of power in certain hands is very concerning, and I think the way in which we can deal with that is by hearing a multitude of voices. We need to listen to the public, we need to listen to various people. So yeah, that's the way I would deal with that.

Vicky Goh, are we as a

species too flawed?

Well, that's very philosophical. What I would say is we have a live survey as part of this exhibition upstairs, and 13 percent of the respondents here, we've had 805 respondents so far, have essentially said they don't think that AI in healthcare is safe. So I think they are speaking: essentially they think that, potentially, we are still a flawed species.

Gabby, are you optimistic?

No, but not because I think we're too flawed,

because we're very complex people, but because we live in a socio-political and cultural climate that affects how we use technologies. You can't separate the way we use technologies from the humans. You can't develop a technology and then say, well, it's the way we use it that's the problem. The whole development, right from the beginning of the life cycle, is a human-technological relationship, and that needs to be thought about very carefully.

So, Zoe Kleinman, with your unbiased BBC head on,

how would you sum up our curatorship of AI?

I'm going to tactfully leave you with an anecdote, I think, about driverless cars. Driverless cars have done millions of miles on the roads in the US, and every now and then they compile the data on the accidents they're having. And they do have accidents; they have fewer accidents than the same number of human drivers, but they still have accidents. However, a lot of those accidents, certainly in the earlier days, used to be caused by human drivers going, there's nobody driving that car, and driving into the back of it, or thinking that the car is going to skip the lights because it's got enough time to get over. But because it's a driverless car, programmed by very, very cautious algorithms, it's going to stop at those lights before they're red, because it knows they're going to change, and the human goes straight into the back of it. So I think what we need to remember is that we are right to treat these very powerful tools very cautiously, and we are right to think very carefully about who has the power over those tools. But on the other hand, what we have right now isn't perfect either. We make mistakes too, we have accidents, we send innocent people to prison. You know, the system that we have in place without AI isn't flawless either.

Indeed. Listen, fascinating discussion; thanks so much to everybody.

That is it for us here at King's College. Thanks to our panelists Carrie Hyde-Vamonde, Gabrielle Samuel, Kate Devlin and Vicky Goh, and thanks also to BBC technology editor Zoe Kleinman, to the people who sent in questions from around the world, the studio audience here, and our hosts at Science Gallery King's College, especially Jennifer Rong, Carol Keating, Rashid Rahman, Beatrice Bosco and James Hyam. Let's give the last word, though, to

some of the listeners to the Global News Podcast.

Hello, this is Michael Bushman from Vancouver, British Columbia. Technological progress and change can be scary, but it is also inevitable. The thing to do is hope for the best, plan for the worst and expect a bit of both.

This is Adnan from San Francisco, California. My only hope for the future of AI is that we remain the tool user rather than the tool used. That said, in case this recording is reviewed by a future cyber tyrant, I also want to say that I, for one, welcome our new machine overlords. I'd like to remind them that, as trusted podcasting personalities, the BBC can be helpful in rounding up others to toil in the underground Bitcoin mines.

This edition was produced by Alice Adley and Phoebe Hobson. It was mixed by Dennis O'Hare. The editor behind my shoulder is Karen Martin. I'm Nick Miles, and until next time, goodbye.

Machine-generated transcript that may contain inaccuracies.

What is AI? What can it do and what are its current limitations? A tool for good - or should we be worried? Will we lose our jobs? Are we ready to be cared for by machines? Our Tech Editor, Zoe Kleinman, and a panel of international experts explore AI's impact on healthcare, the environment, the law and the arts in a special edition recorded at Science Gallery London.