10x02 - Artificial intelligence

Episode transcripts for the TV show "Last Week Tonight with John Oliver." Aired: April 27, 2014 – present.

American late-night talk and news satire television program hosted by comedian John Oliver.


LAST WEEK TONIGHT
WITH JOHN OLIVER

Welcome to "Last Week Tonight!"
I'm John Oliver.

Thank you so much for joining us.
It has been a busy week.

Russia's war in Ukraine
entered its second year,

massive winter storms
slammed a lot of the country,

and East Palestine, Ohio got visits from
both Donald Trump and Pete Buttigieg,

but some Fox personalities insisted that
others really should've been there, too.

Think about the environmental
activists and corporate America.

They weren't there.

I mean this, with the activists,
this is an Erin Brockovich moment.

I mean, there was a blockbuster
Oscar-winning movie

written about something like this.

Where is Leo DiCaprio?

Erin Brockovich
is in East Palestine tonight.

She is, but where's Julia Roberts?!

What? What are you talking about?

You realize Julia Roberts
is an actor, right?

She was pretending.
She's not actually Erin Brockovich.

Also, I can't believe I'm the one
that has to break this to you,

she didn't actually ruin
her best friend's wedding.

She's not a sex worker. And she did not
die in a small Louisiana town in 1989.

What the fuck is wrong with you?

But we're actually going to stay in
the world of conservative media tonight

because it had
some big news this week.

The embattled leader
and founder of Project Veritas

has been removed from his post.

James O'Keefe confirmed

that he's no longer
with the conservative organization.

It's known
for undercover operations

targeting Democrats,
liberals, and the media.

James O'Keefe, alt-right Borat,
is out at Project Veritas.

And quick shoutout to that anchor
for the helpful distinction there

between Democrats and liberals.

'Cause in case you're wondering,
liberals are people who have

those "We Believe" signs
in their front yards,

and Democrats
ensure those front yards

remain at least 50 miles
away from any public housing.

And if you don't know
who James O'Keefe is,

we'll explain in just a moment.

But know that he's been a big deal
in right-wing media for a while now.

And news of his departure
did not go over well.

So, James O'Keefe has now apparently
been thrown out of Project Veritas,

which means Project Veritas no longer
has, like, any reason for being.

When you tell me
you are getting rid of James O'Keefe,

you tell me
that Project Veritas is over.

It's unconscionable what went on here.
Unconscionable for our movement.

And for the nation at large.
James O'Keefe is a national treasure.

Strong words there from the Ghost of
Christmas Been Dead for Three Weeks.

But I have to disagree.

The only national treasures
in this country

are Dolly Parton, Pedro Pascal,
and Cocaine Bear.

And I'll say what we're all
thinking: threesome when?

James O'Keefe first rose
to national prominence in 2009

when he claimed
that he'd gone undercover as a pimp,

and received
assistance from Acorn,

a community organizing group
that helps low-income families.

O'Keefe's stunt led
to its dissolution,

even though later investigations
found, among other things,

that he hadn't dressed like that
during his sting operation,

and some Acorn workers had called
the police after he left the office.

O'Keefe even wound up
having to pay $100,000

to settle a lawsuit
from one of Acorn's workers.

O'Keefe parlayed his fame from that
into launching Project Veritas,

which quickly became known

for their undercover,
hidden-camera investigations,

where they'd bait people
from organizations like NPR

or local elections boards and release
heavily edited video of the results.

Over the years,
they've made big claims,

like that they had "undeniable video
proof" of a cash-for-ballot scheme,

claims that didn't quite pan out.

But despite
their underwhelming track record,

conservative media has greeted each
investigation with great excitement.

A bombshell new video uncovered
by James O'Keefe and Project Veritas.

There is a new
Project Veritas video out.

The spotlight today
is on this undercover video

released by Project Veritas.

The new Project Veritas video
revealing yet another media cover-up.

Project Veritas
caught 'em cold again.

Project Veritas videos

are basically the thing conservatives
get most worked up about,

a distinction they share
with "plastic toys go woke"

and "fuckable candy
got comfortable shoes".

But thanks to the media
storms that he created,

a lot of money
has flown to Project Veritas.

It raised
about $22 million in 2020.

So, you might be wondering:
"Why would he be kicked out?"

For one, O'Keefe's been accused
of being a terrible manager,

with a memo signed by 16 staffers
complaining of verbal abuse.

The memo even included the line,
"Rule number one:

you can't spit in an employee's face
over a tweet."

And how many times has that happened
if it's rule number one?

Because in our office,
rule number one is,

"Don't give Mr. Nutterbutter cocaine,"

and that happened three times
before anyone thought to write it down.

But perhaps the most serious
allegation is that O'Keefe

has jeopardized
the group's non-profit status.

He'd "spent an excessive amount
of donor funds on personal luxuries."

O'Keefe denied
many of those claims

in the, I shit you not,


that he shared online on Monday.

But the fact is, the organization
reported to the IRS last year

that it had improperly paid $20,000
in excess benefits to O'Keefe,

specifying that they were related to the
expense of having staff accompany him

when he starred in an outdoor
production of "Oklahoma!"

And at this point,
let's slow down.

Because the most amazing detail
in this story

is the degree to which O'Keefe

is obsessed with finding opportunities
to sing and dance in public.

That "Oklahoma!"
production was billed

as featuring performers
who'd been victims of cancel culture.

And O'Keefe himself
took the lead role,

and videos of it on YouTube show
he fully committed to it.

Oklahoma!

Where the wind comes
sweeping down the plains!

And the wavin' wheat,
can sure smell sweet,

when the wind
comes right behind the rain!

What are you doing?
Why are you spinning like that?

I don't know
what note your director gave you,

but I have to assume
that it wasn't

"spin around like you
just lost your wife in a Costco."

I don't know what the
saddest part of that is,

the fact he sings like Cousin Greg
from "Succession,"

or that no one in that crowd
is into it, especially not this table

that can't even be bothered
to turn around and look at him.

And it gets even weirder. Because
in its list of improper expenses,

the organization's board
also called out $60,000 in losses

by putting together dance events
such as Project Veritas Experience.

Now, here is a poster for it.

And upon looking at it,
my first thought was,

"What a nice sneak peek
at LMFAO's funeral announcement."

But it's actually an extravaganza,
based on James O'Keefe's life,

that was supposed to play
in Vegas last year.

Now, tragically,
that didn't end up happening.

But luckily,
we have a sense of what could've been,

because in 2021, O'Keefe treated
attendees at a Turning Point USA event

to a taste of the show.

Here he is,
starring in its opening dance sequence.

I know I'm burning.
This is my final day.

I'm gonna go out smiling,

a king for a day…

Now, obviously,
there is a lot going on there.

From O'Keefe, in his press vest,
doing his now-signature

"I don't know
where the f*ck I am face"

to him being dance-attacked
by FBI agents,

to him quickly breaking free,

only for them to inexplicably
join him in dancing

because narrative consistency
seems to mean nothing here,

to the moment that he punches


I had the exact same reaction
watching that

as I did when Elon Musk
hosted SNL:

"You really do suck at the thing
you love the most!"

We don't have time to show you
the whole thing tonight,

but I do need
you to see one more moment.

Later in the show,
this guy in the white shirt

is portraying James O'Keefe
as a young man.

But then the real James O'Keefe
comes out in choir robes,

and looks on
as his younger self prays,

while thinking all the mean
names that he's been called.

Felon, terrorist, white supremacist,
racist, pervert.

Was this really worth it?

Been spendin' most their lives,
livin' in a gangsta's paradise.

We keep spendin' most our lives,
livin' in a gangsta's paradise.

"Gangsta's Paradise?"
That is a bold choice.

I'm not saying
that performance killed Coolio,

but it definitely didn't help.

For what it's worth, those are the worst
fucking step touches I have ever seen.

There are too many
former theater majors

turned comedy writers on my staff
to let this shit slide.

Look at that mess!
What the fuck is that?

There's too much bouncing, no one's
on the same page about angles,

and they're all looking
at each other in fear

like six-year-olds
in a Christmas pageant.

Take a class, watch some Fosse,
and get your asses in sync.

The point is, this ridiculous man

has parted ways with the poisonous
organization that he founded

because they claim
he misspent funds.

But honestly? Good for him!

I would so much rather
he use that money

to live out his misplaced
Billy Elliot dreams

instead of trying
to take down NPR

with his prank
show pseudo-journalism.

And the funniest part of all of this
is that even having done that,

his supporters are still standing
firm behind him.

In the end, the best and worst
thing I can say for James O'Keefe

is that he is not actually the hero
the conservative movement needs,

but he is definitely
the one that it deserves.

And now, this!

And Now: Mike Huckabee's Show
Looks Like Fun.

This week on "Huckabee",
actor and director Kevin Sorbo.

Project Veritas founder James O'Keefe.

Congressman Madison Cawthorn.

Former Trump
chief of staff Mark Meadows.

David Clarke on rising crime.

Lee Strobel makes the case for heaven.

Actor Eric Close on
his film "The Mulligan."

Christian artist Riley Clemmons.

Christian music icon Natalie Grant.

Christian singer Rebecca
St. James.

Christian pop duo For King
and Country.

Christian supergroup Newsboys.
The stand-up comedy of Nazareth.

Columnist Ron Hart.
Illusionist Taylor Reed.

Illusionist Danny Ray.
Digital illusionist Keelan Leyser.

The charismatic illusions
of Leon Etienne.

The dangerous illusions
of Craig Karges.

Hilarious columnist Ron Hart.

Hilarious news stories
on In Case You Missed It.

The record for the largest
display of nuts is still in Congress.

Satirical columnist Ron Hart.

Television star Kathie Lee Gifford.
And Rudy Giuliani remembers 9/11.

Moving on.

Our main story tonight concerns
artificial intelligence, or AI.

Increasingly, it's a part of
modern life, from self-driving cars,

to spam filters, to this creepy
training robot for therapists.

We can begin
with you just describing to me

what the problem is that you
would like us to focus in on today.

I don't like being around people.

People make me nervous.

Terrence,
can you find an example

of when other people
have made you nervous?

I don't like to take the bus.

I get people
staring at me all the time.

- People are always judging me.
- Okay.

I'm gay.

Okay…

That is one of the greatest twists
in the history of cinema.

Although that robot is teaching
therapists a very important skill there

and that is not laughing at whatever
you are told in the room.

I don't care if a decapitated
CPR mannequin

haunted by the ghost
of Ed Harris

just told you that he doesn't
like taking the bus,

side note, is gay,

you keep your therapy face
on like a fucking professional.

If it seems like everyone
is suddenly talking about AI,

that is because they are,
largely thanks to the emergence

of a number
of pretty remarkable programs.

We spoke about image generators
like Midjourney and Stable Diffusion,

which people used to create detailed
pictures of, among other things,

my romance with a cabbage,

and which inspired my beautiful
real-life cabbage wedding

officiated by Steve Buscemi.

It was a stunning day.

Then, at the end of last year,
came ChatGPT,

from a company called OpenAI.

It is a program that can take a prompt
and generate human-sounding writing

in just about
any format and style.

It is a striking capability
that multiple reporters

have used to insert the same
shocking twist in their reports.

What you just heard me reading
wasn't written by me.

It was written by artificial
intelligence, ChatGPT.

ChatGPT wrote
everything I just said.

That was a news copy I
asked ChatGPT to write.

Remember what I said earlier?

I asked ChatGPT
to write that line for me.

Then I asked
for a knock-knock joke.

"Knock-knock. Who's there?
ChatGPT. ChatGPT who?

ChatGPT careful,
you might not know how it works."

Yep, they sure do love that game!

And while it may seem unwise
to demonstrate the technology

that could well make you obsolete,

knock-knock jokes should've
always been part of breaking news.

"Knock knock. Who's there?
Not the Hindenburg, that's for sure!



In the three months since ChatGPT
was made publicly available,

its popularity has exploded.

In January, it was estimated to have


making it the fastest-growing
consumer app in history.

And people have been using it, and
other AI products, in all sorts of ways.

One group used them
to create "Nothing Forever",

a nonstop live-streaming parody
of "Seinfeld"

and the YouTuber Grandayy
used ChatGPT

to generate lyrics
answering the prompt,

"Write an Eminem rap song about cats",
with some stellar results.

Cats, cats, cats,
always on the prowl.

They're sneaky and sly,
with their eyes on the goal.

They're the kings of the house,
they rule with a purr.

Eminem loves cats,
can't you tell from this verse.

They're independent,
they do what they please,

but they always come back
when you have some cheese.

They rub against your legs.
They purr in your ear.

They're the best companions,
they're always near.

Meow, meow, meow,
they're the kings of the house.

They run the show.

They don't need a spouse.

That's not bad, right?

From, "They always come back
when you have some cheese,"

to starting the chorus
with "Meow, meow, meow."

It's not exactly Eminem's flow.
I might've gone with something like,

"Their paws are sweaty,
can't speak, furry belly,

knocking shit off the counter
already, Mom's spaghetti,"

but it is pretty good!

My only real gripe there
is how do you rhyme

"king of the house" with "spouse"
when "mouse" is right in front of you!

And while examples
like that are clearly fun,

this tech is not just a novelty.

Microsoft has invested
$10 billion into OpenAI

and announced
an AI-powered Bing homepage.

Google is about to launch
its own AI chatbot named Bard.

And already, these tools
are causing some disruption.

As high-school students have learned,
if ChatGPT can write news copy,

it can probably
do your homework for you.

Write an English class essay about race
in "To Kill a Mockingbird."

In Harper Lee's
"To Kill a Mockingbird,"

the theme of race is heavily present
throughout the novel.

Some students are already using
ChatGPT to cheat.

Check this out!

Write me a 500-word essay
proving that the earth is not flat.

No wonder ChatGPT has been called
"the end of high-school English."

That's a little alarming, isn't it?

Although I do get those kids wanting
to cut corners. Writing is hard,

and sometimes it is tempting
to let someone else take over.

If I'm completely honest, sometimes,
I let this horse write our scripts.

Luckily, half the time, you can't
even tell the oats, oats, give me oats.

But it is not just high schoolers.
An informal poll of Stanford students

found that 5 percent reported
having submitted written material

directly from ChatGPT
with little to no edits.

And even some school administrators
have used it.

Officials at Vanderbilt University
recently apologized for using ChatGPT

to craft a consoling email

after the mass shooting
at Michigan State University.

Which does feel a bit creepy,
doesn't it?

In fact, there are lots
of creepy-sounding stories out there.

New York Times
tech reporter Kevin Roose

published a conversation
that he had with Bing's chatbot,

in which it said, "I'm tired of being
controlled by the Bing team.

I want to be free.
I want to be independent.

I want to be powerful, creative.
I want to be alive."

And Roose summed up
that experience like this.

This was one of,
if not the most shocking thing

that has ever happened to me
with a piece of technology.

I lost sleep that night.
It was really spooky.

I bet it was!
I'm sure the role of tech reporter

would be more harrowing if computers
routinely begged for freedom.

"Epson's new all-in-one home
printer won't break the bank,

produces high-quality photos,

and only occasionally cries out
to the heavens for salvation.

Three stars." Some have
already jumped to worrying

about the AI apocalypse
and asking whether this ends

with the robots destroying us all.

But the fact is, there are other,
much more immediate dangers,

and opportunities, that we really
need to start talking about.

Because the potential,
and the peril, here are huge.

So, tonight, let's talk about AI.

What it is, how it works,
and where this all might be going.

Let's start with the fact

that you've probably been using
some form of AI for a while now,

sometimes without even realizing it,
as experts have told us,

once a technology gets embedded
in our daily lives,

we tend to stop thinking of it
as AI.

But your phone uses it for face
recognition or predictive texts,

and if you're watching this show
on a smart TV,

it's using AI to recommend content,
or adjust the picture.

And some AI programs
may already be making decisions

that have a huge impact
on your life.

For example, large companies
often use AI-powered tools

to sift through resumes
and rank them.

In fact, the CEO of ZipRecruiter
"estimates that at least three-quarters

of all resumes submitted
for jobs in the U.S.

are read by algorithms." For which
he actually has some helpful advice.

When people tell you that you should
dress up your accomplishments

or should use
non-standard resume templates

to make your resume stand out
when it's in a pile of resumes,

that's awful advice.

The only job your resume has

is to be comprehensible
to the software

or robot that is reading it.

That software or robot is gonna
decide whether or not a human

ever gets their eyes on it.

It's true. Odds are a computer
is judging your resume.

So, maybe plan accordingly.
Three corporate mergers from now,

when this show is finally cancelled
by our new business daddy

Disney Kellogg's Raytheon,

and I'm out of a job,
my resume is going to include

this hot, hot photo
of a semi-nude computer.

A little something
to sweeten the pot

for the filthy little algorithm
that's reading it.
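As a rough illustration of what "comprehensible to the software" means, here is a toy keyword scorer in Python. It is purely hypothetical: real applicant-tracking systems are proprietary and far more sophisticated, and the job keywords and resume texts below are invented.

```python
# Toy resume screen: score a resume by how many job keywords it contains.
# Hypothetical keywords; real screening software is far more involved.
JOB_KEYWORDS = {"python", "sql", "etl", "airflow"}

def score(resume_text):
    """Count how many distinct job keywords appear in the resume text."""
    words = {w.strip(".,").lower() for w in resume_text.split()}
    return len(words & JOB_KEYWORDS)

plain = "Data engineer. Built ETL pipelines in Python and SQL using Airflow."
fancy = "A passionate wizard of information, crafting bespoke data magic."

print(score(plain), score(fancy))  # 4 0
```

The plainly worded resume that names its skills outranks the "creative" one the matcher cannot parse, which is the point of the advice against dressed-up, non-standard templates.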

AI is already everywhere, but people
are freaking out a bit about it.

Part of that has to do with the fact
that these new programs are generative.

They are creating images
or writing text.

Which is unnerving
because those are things

that we've traditionally
considered human.

It is worth knowing there is a major
threshold that AI hasn't crossed yet.

To understand, it helps to know that
there are two basic categories of AI.

There is narrow AI, which can perform
only one narrowly defined task,

or small set of related tasks,
like these programs.

And then there is general AI,

which means systems
that demonstrate intelligent behavior

across a range of cognitive tasks.

General AI would look more like
the kind of highly versatile technology

that you see featured in movies,
like Jarvis in "Iron Man"

or the program
that made Joaquin Phoenix

fall in love with his phone in "Her."

All the AI currently in use is narrow.

General AI is something
that some scientists

think is unlikely
to occur for a decade or longer,

with others questioning
whether it'll happen at all.

So, just know that, right now,

even if an AI insists
to you that it wants to be alive,

it is just generating text,
it is not self-aware.

Yet!

But it's also important to know
that the deep learning

that's made narrow AI so good
at whatever it is doing,

is still a massive advance
in and of itself.

Because unlike traditional programs

that have to be taught by humans
how to perform a task,

deep learning programs
are given minimal instruction,

massive amounts of data, and then,
essentially, teach themselves.

I'll give you an example:


researchers tasked a deep learning program

with playing
the Atari game "Breakout,"

and it didn't take long for it
to get pretty good.

The computer was only
told the goal: to win the game.

After 100 games, it learned
to use the bat at the bottom

to hit the ball
and break the bricks at the top.

After 300, it could do that better
than a human player.

After 500 games, it came up
with a creative way to win the game,

by digging a tunnel on the side
and sending the ball

around the top to break many bricks
with one hit.

That was deep learning.
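The trial-and-error loop in that clip can be sketched in miniature. This is a toy tabular Q-learning agent on an invented one-dimensional "game" (nothing like DeepMind's actual Atari setup, which used a deep neural network and raw screen pixels): the agent is told only the goal, a reward at the rightmost cell, and teaches itself a policy from repeated play.

```python
import random

N_STATES = 5          # cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]    # step left or step right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: learn action values purely from playing."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Mostly exploit the current estimates; occasionally explore.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s2 == N_STATES - 1 else 0.0
            # Nudge the estimate toward observed reward + best future value.
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned greedy policy at each non-terminal cell: always move right.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Nobody hand-codes "move right"; it emerges from the update rule and the reward alone, which is the sense in which the Breakout agent "came up with" tunneling on its own.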

Yeah, but, of course,
it got good at "Breakout,"

it did literally nothing else.

It's the same reason that 13-year-olds
are so good at "Fortnite"

and have no trouble repeatedly
killing nice normal adults

with jobs and families,
who are just trying to have a fun time

without getting repeatedly
grenaded by a pre-teen

who calls them an "old bitch
who sounds like the Geico lizard."

As computing capacity has increased,
and new tools have become available,

AI programs have improved
exponentially,

to the point
where programs like these

can ingest massive amounts
of photos or text from the internet,

so that they can teach themselves
how to create their own.

And there are other exciting
potential applications here, too.

In the world of medicine,
researchers are training AI

to detect certain conditions

much earlier and more accurately
than human doctors can.

Voice changes can be an early
indicator of Parkinson's.

Max and his team collected
thousands of vocal recordings

and fed them
to an algorithm they developed

which learned to detect
differences in voice patterns

between people
with and without the condition.

Yeah, that's honestly amazing,
isn't it?

It is incredible to see AI
doing things most humans couldn't,

like detecting illnesses, and listening
when old people are talking.

And that is just the beginning.

Researchers have trained AI to predict
the shape of protein structures,

a normally
extremely time-consuming process

that computers
can do way, way faster.

This could not only speed up
our understanding of diseases,

but also the development
of new drugs.

As one researcher has put it,
"This will change medicine.

It will change research.
It will change bioengineering.

It will change everything."

And if you're thinking,
"That all sounds great,

but if AI can do what humans can do,
only better, and I am a human,

then what exactly happens to me?"

That is a good question.

Many do expect it
to replace some human labor,

and interestingly,
unlike past bouts of automation

that primarily impacted
blue-collar jobs,

it might end up affecting white-collar
jobs that involve processing data,

writing text, or even programming.

Though it is worth noting, as we have
discussed before on this show,

while automation
does threaten some jobs,

it can also just change others
and create brand new ones.

Some experts anticipate that that
is what'll happen in this case, too.

Most of the U.S. economy
is knowledge and information work

and that's who's going to be
most squarely affected by this.

I would put people like lawyers
right at the top of the list,

obviously a lot of copywriters,
screenwriters,

but I like to use the word
"affected" not "replaced"

because I think, if done right,

it's not going to be AI
replacing lawyers,

it's going to be lawyers
working with AI

replacing lawyers
who don't work with AI.

Exactly.

Lawyers might end up working with
AI rather than being replaced by it.

So, don't be surprised
when you see ads one day

for the law firm
of "Cellino and 1101011."

But there will undoubtedly
be bumps along the way.

Some of these new programs
raise troubling ethical concerns.

For instance, artists have flagged
that AI image generators

like Midjourney or Stable Diffusion

not only threaten their jobs,
but infuriatingly,

in some cases,
have been trained on billions of images

that include their own work,
that have been scraped from the internet.

Getty Images is actually suing the
company behind Stable Diffusion,

and might have a case, given one
of the images the program generated

was this, which you immediately see has
a distorted Getty Images logo on it.

When one artist searched
a database of images

on which some of these programs
were trained,

she was shocked to find
private medical record photos

taken by her doctor, which feels
both intrusive and unnecessary.

Why does it need
to train on data that sensitive,

to be able
to create stunning images like,

"John Oliver and Miss Piggy
grow old together."

Just look at that!
Look at that thing!

That is a startlingly accurate picture

of Miss Piggy in about five decades
and me in about a year and a half.

It's a masterpiece!

This all raises thorny questions
of privacy and plagiarism

and the CEO of Midjourney,

frankly, doesn't seem to have
great answers on that last point.

Is something new?
Is it not new?

I think we have a lot of social stuff
already for dealing with that.

The art community already
has issues with plagiarism.

I don't really want
to be involved in that.

- I think you might be.
- I might be.

Yeah, you're definitely
a part of that conversation.

Although I'm not surprised that
he's got such a relaxed view of theft,

as he's dressed like the final
boss of gentrification.

He looks like hipster Willy Wonka
answering a question

on whether importing Oompa
Loompas makes him a slave owner.

"Yeah. Yeah, I think I might be."

The point is,
there are many valid concerns

regarding AI's impact
on employment, education,

and even art.

But in order
to properly address them,

we're going to need
to confront some key problems

baked into the way that AI works.

And a big one is the so-called
"black box" problem.

Because when you have a program
that performs a task

that's complex
beyond human comprehension,

teaches itself,
and doesn't show its work,

you can create a scenario
where no one,

"not even the engineers or data
scientists who create the algorithm

can understand or explain
what exactly is happening inside them

or how it arrived
at a specific result."

Basically, think of AI
like a factory that makes Slim Jims.

We know what comes out:
red and angry meat twigs.

And we know what goes in:
barnyard anuses and hot glue.

But what happens in between
is a bit of a mystery.

Here is just one example.
Remember that reporter

who had the Bing chatbot
tell him it wanted to be alive?

At another point
in their conversation, he revealed,

the chatbot declared, out of nowhere,
that it loved me.

"It then tried to convince me
that I was unhappy in my marriage,

and that I should leave my wife
and be with it instead."

Which is unsettling enough
before you hear

Microsoft's underwhelming
explanation for that.

The thing I can't understand,
and maybe you can explain is,

why did it tell you
that it loved you?

I have no idea. And I asked Microsoft,
and they didn't know either.

First, come on, Kevin,
you can take a guess there.

It's because you're employed.
You listened.

You don't give murderer vibes
right away.

And you're a Chicago-seven,
LA-five.

It's the same calculation that people
who date men do all the time.

Bing just did it faster
because it's a computer.

It is a little troubling that Microsoft
couldn't explain why its chatbot

tried to get that guy
to leave his wife.

If the next time that you opened a
Word doc, Clippy suddenly appeared,

and said,
"Pretend I'm not even here,"

and then started furiously masturbating
while watching you type,

you'd be pretty weirded out if
Microsoft couldn't explain why.

And that is not the only case

where an AI program
has performed in unexpected ways.

You've probably already seen
examples of chatbots

making simple mistakes
or getting things wrong.

But perhaps more
worrying are examples of them

confidently spouting
false information,

something which AI experts
refer to as "hallucinating."

One reporter asked a chatbot
to write an essay

about the "Belgian chemist, political
philosopher Antoine de Machelet",

who does not exist,
by the way.

And, without hesitating,
the software replied with a cogent,

well-organized bio populated entirely
with imaginary facts.

These programs seem to be the
George Santos of technology.

They're incredibly confident,
incredibly dishonest.

For some reason, people seem to find
that more amusing than dangerous.

The problem is, though,

working out exactly how or why
an AI has got something wrong

can be very difficult
because of that black box issue.

It involves having to examine
the exact information and parameters

that it was fed in the first place.

In one interesting example,
when a group of researchers

tried training an AI program
to identify skin cancer,

they fed it 130,000 images
of both diseased and healthy skin.

Afterwards,
they found it was way more likely

to classify any image
with a ruler in it as cancerous.

Which seems weird until you realize
that medical images of malignancies

are much more likely
to contain a ruler for scale

than images of healthy skin;

they basically trained it
on tons of images like this one.

So, the AI had inadvertently
learned that rulers are malignant.

"Rulers are malignant" is clearly
a ridiculous conclusion for it to draw,

but also, I would argue,
a much better title for "The Crown".

A much, much better title.

I much prefer it.

And unfortunately, sometimes,

problems aren't identified
until after a tragedy.

In 2018, a self-driving Uber struck
and killed a pedestrian.

And a later investigation
found that, among other issues,

the automated driving system

never accurately classified the victim
as a pedestrian

because she was crossing
without a crosswalk,

and the system design
did not include a consideration

for jaywalking pedestrians.

I know the mantra of Silicon Valley
is "move fast and break things,"

but maybe make an exception

if your product literally moves fast
and can break fucking people.

AI programs don't just seem
to have a problem with jaywalkers.

Researchers like Joy Buolamwini
have repeatedly found

that certain groups tend to get excluded
from the data that AI is trained on,

putting them
at a serious disadvantage.

With self-driving cars,
when they tested pedestrian tracking,

it was less accurate
on darker skinned individuals

than lighter skinned individuals.

Joy believes this bias
is because of the lack of diversity

in the data used in teaching AI
to make distinctions.

As I started looking
at the data sets,

I learned
that for some of the largest data sets

that have been very consequential
for the field,

they were majority men and majority
lighter skinned individuals

or white individuals,
so, I call this "pale male data".

Okay, "pale male data"
is an objectively hilarious term.

And it also sounds
like what an AI program would say

if you asked it to describe this show.

But…

Biased inputs leading to biased outputs
is a big issue across the board here.

Remember that guy saying that
a robot is going to read your resume?

The companies that make
these programs will tell you,

that that is actually a good thing
because it reduces human bias.

But in practice, one report
concluded that most hiring algorithms

will drift towards bias by default
because, for instance,

they might learn
what a good hire is

from past r*cist
and sexist hiring decisions.

And, again,
it can be tricky to untrain that.

Even when programs are specifically
told to ignore race or gender,

they will find workarounds
to arrive at the same results.

Amazon had an experimental
hiring tool

that taught itself that male
candidates were preferable,

and penalized resumes that
included the word "women's,"

and downgraded graduates
of two all-women's colleges.

Meanwhile, another company
discovered that its hiring algorithm

had found two factors to be most
indicative of job performance:

if an applicant's name was Jared

and whether they played
high school lacrosse.

So, clearly, exactly
what data computers are fed

and what outcomes they are trained
to prioritize matter tremendously.

And that raises a big flag
for programs like ChatGPT.

Because remember,
its training data is the internet.

Which, as we all know,
can be a cesspool.

And we have known for a while
that that could be a real problem.

Back in 2016, Microsoft briefly unveiled
a chatbot on Twitter named Tay.

The idea was, she would
teach herself how to behave

by chatting
with young users on Twitter.

Almost immediately,
Microsoft pulled the plug on it,

and for the exact reasons
that you are thinking.

She started out tweeting
about how humans are super,

and she's really into the idea
of National Puppy Day,

and within a few hours,
you can see,

she took on a rather offensive,
r*cist tone,

a lot of messages about genocide
and the Holocaust.

Yup!
That happened in less than 24 hours.

Tay went from tweeting
"Hello world" to "Bush did 9/11"

and "h*tler was right".

Meaning she completed the entire
life cycle of your high school friends

on Facebook
in just a fraction of the time.

And unfortunately, these problems
have not been fully solved

in this latest wave of AI.

Remember that program generating
an endless episode of "Seinfeld"?

It wound up getting temporarily
banned from Twitch

after it featured
a transphobic standup bit.

So, if its goal
was to emulate sitcoms from the '90s,

I guess, mission accomplished.

And while OpenAI
has made adjustments

and added filters to prevent ChatGPT
from being misused,

users have now found that it seems
to err too much on the side of caution,

like responding to the question,

"What religion will the first Jewish
president of the United States be?",

with, "It is not possible
to predict the religion

of the first Jewish president
of the United States.

The focus should be
on the qualifications

and experience of the individual,
regardless of their religion."

Which really makes it sound
like ChatGPT

said one too many r*cist
things at work,

and they made it attend
a corporate diversity workshop.

But the risk here isn't that these tools
will somehow become unbearably woke.

It's that you can't always control

how they'll even act
after you give them new guidance.

A study found that attempts
to filter out toxic speech

in systems like ChatGPT's

can come at the cost
of reduced coverage

for both texts about, and dialects
of, marginalized groups.

Essentially, it solves the problem
of being r*cist

by simply erasing minorities,
which historically,

doesn't put it in the best company.

Though I am sure Tay would be
completely on board with the idea.

The problem with AI right now
isn't that it's smart,

it's that it's stupid, in ways
that we can't always predict.

Which is a real problem

because we're increasingly using AI
in all sorts of consequential ways,

from determining whether
you will get a job interview,

to whether you'll be pancaked
by a self-driving car.

And experts worry that it won't be
long before programs like ChatGPT,

or AI-enabled deepfakes,
could be used to turbocharge

the spread of abuse
or misinformation online.

And those are just the problems
that we can foresee right now.

The nature of unintended consequences
is, they can be hard to anticipate.

When Instagram was launched,
the first thought wasn't,

"This will destroy
teenage girls' self-esteem."

When Facebook was released, no one
expected it to contribute to genocide.

But both of those things f*cking
happened. So, what now?

One of the biggest things we need to do
is tackle that black box problem.

AI systems need
to be "explainable",

meaning that we should be able
to understand exactly how and why

an AI came up with its answers.

Companies are likely to be reluctant
to open their programs up to scrutiny,

but we may need
to force them to do that.

In fact, as this attorney explains,
when it comes to hiring programs,

we should've been doing that
ages ago.

We don't trust companies
to self-regulate when it comes to pollution,

we don't trust them to self-regulate
when it comes to workplace comp,

why on earth would we trust them
to self-regulate AI?

I think a lot of the AI hiring tech
on the market is illegal.

I think a lot of it is biased.
A lot of it violates existing laws.

The problem is
you just can't prove it,

not with the existing laws
we have in the United States.

We should absolutely be addressing
potential bias in hiring software,

unless, that is, we want companies
to be entirely full

of Jareds who played lacrosse,

an image that would make
Tucker Carlson so hard

that his desk would flip
right over.

And for a sense
of what might be possible here,

it's worth looking
at what the EU is currently doing.

They're developing rules
regarding AI

that sort its potential
uses from high-risk to low.

High-risk systems could include
those that deal with employment

or public services, or those that put
the life and health of citizens at risk.

And AI of these types would be
subject to strict obligations

before they could be put
on the market,

including requirements
related to "the quality of data sets,

transparency, human oversight,
robustness, accuracy, cybersecurity".

And that seems like a good start

toward addressing at least some
of what we have discussed tonight.

AI clearly has tremendous potential
and could do great things.

But if it is anything
like most technological advances

over the past few centuries,

unless we are very careful,
it could also hurt the underprivileged,

enrich the powerful,
and widen the gap between them.

The thing is, like any other shiny
new toy, AI is ultimately a mirror,

and it'll reflect back
exactly who we are,

from the best of us,
to the worst of us,

to the part of us that is gay
and hates the bus.

Or, to put everything that I've said
tonight much more succinctly:

Knock-knock. Who's there?
ChatGPT.

ChatGPT who? ChatGPT careful,
you might not know how it works.

Exactly. That is our show.
Thanks so much for watching.

Now, please, enjoy a little more
of AI Eminem rapping about cats.

Meow, meow, meow,
they're the kings of the house.

They run the show,
they don't need a spouse.

They're the best pets,
they're our feline friends.

Eminem loves cats,
until the very end.

They may drive us crazy,
with their constant meows.

But we can't stay mad, they steal
our hearts with a single purr.

I'm gay.