04/12/2023

Humanities in the Age of Artificial Intelligence with Ana Ilievska

Ana is a Mellon Fellow at the Stanford Humanities Center and a Lecturer of French and Italian at Stanford. Her teaching and research focus on the relationship between literature, the industrial revolution, and technology from a Southern perspective.

download transcript [vtt]
00:00:00.000
I hope you will forgive me, Professor Harrison,
00:00:02.800
and not take it as too much of a violation
00:00:04.880
to put some words into your voice.
00:00:07.240
Changes in artificial intelligence come quickly,
00:00:10.360
and what I had done with a generic voice a few weeks ago
00:00:13.640
can now be done with any voice you wish to wield.
00:00:16.840
This technology has been with us for some years already,
00:00:19.640
but now requires only 60 seconds of a person's voice
00:00:22.760
to replicate it to this degree.
00:00:24.440
Yeah, it really does sound like me.
00:00:26.520
[MUSIC PLAYING]
00:00:30.280
This is KZSU Stanford.
00:00:33.280
Welcome to Entitled Opinions.
00:00:34.840
My name is Robert Harrison.
00:00:36.600
We're coming to you from the Stanford campus.
00:00:38.840
[MUSIC PLAYING]
00:00:41.840
When I hear that synthetic replication of my voice,
00:00:55.840
I get a better sense of how the Greeks envisioned
00:00:58.240
the afterlife, a shadow world where the full-bodied person
00:01:02.560
is reduced to his or her eidolon, or disembodied image.
00:01:07.960
In the old days, the shades in Hades
00:01:10.200
required the blood of a black ram to animate their voices.
00:01:14.880
These days, it only requires code.
00:01:18.080
Hold tight, everyone.
00:01:19.440
A synthetic Hades coming our way.
00:01:21.520
[MUSIC PLAYING]
00:01:24.520
[MUSIC PLAYING]
00:01:29.520
I'm joined in the studio today by Ana Ilievska,
00:01:40.120
a brilliant young scholar who thinks and writes about technology
00:01:43.680
and the humanities.
00:01:45.640
She listened to our most recent show with Brian Chong
00:01:48.640
on artificial intelligence and large language models.
00:01:53.080
And she suggested that we do a follow-up show
00:01:55.280
on the same topic from other points of view.
00:01:58.120
And that's what's on tap for today.
00:02:00.960
But before we get started, let me repeat the Heidegger quote
00:02:04.640
"I invoked in the intro to that show.
00:02:07.920
It's well known to Entitled Opinions listeners.
00:02:12.640
The most thought-provoking thing about our thought-provoking age
00:02:15.960
is that we are still not thinking.
00:02:20.360
I'm not sure anyone can tell us exactly what it means
00:02:23.040
to think thoughtfully, maybe not even Heidegger.
00:02:27.120
But for what it's worth, here's what DH Lawrence
00:02:30.520
declares in his poem, "Thought."
00:02:34.920
"Thought is a man in his wholeness, holy attending.
00:02:40.320
Thought is the welling up of unknown life into consciousness."
00:02:46.560
The source of that welling up remains mysterious.
00:02:49.440
You could call it the collective unconscious
00:02:51.640
if you favor that term, or you could call it the underworld of the dead,
00:02:56.320
which is how I prefer to think of it.
00:02:59.120
Either way, within each of us, is a dark continent of the soul
00:03:03.720
where voices echo down to us from the past.
00:03:08.600
It requires a special kind of quietude
00:03:11.480
and a special kind of listening to hear them.
00:03:14.960
Those voices come to us mostly from books
00:03:18.120
and in a world drowning ever deeper in noise,
00:03:21.280
concentrated attention becomes more and more rare.
00:03:26.360
The poet Wallace Stevens reminds us of what the solitary activity
00:03:30.720
of thoughtful reading used to be like, at least for some of us.
00:03:37.440
The house was quiet and the world was calm.
00:03:41.200
The reader became the book and summer night
00:03:44.200
was like the conscious being of the book.
00:03:48.800
The house was quiet and the world was calm.
00:03:51.720
The words were spoken as if there was no book,
00:03:55.200
except that the reader leaned above the page,
00:03:58.520
wanted to lean, wanted much most to be the scholar
00:04:02.480
to whom his book is true, to whom the summer night
00:04:07.080
is like a perfection of thought.
00:04:10.360
The house was quiet because it had to be.
00:04:13.680
The quiet was part of the meaning, part of the mind,
00:04:17.720
the access of perfection to the page.
00:04:21.640
And the world was calm.
00:04:23.840
The truth in a calm world in which there is no other meaning,
00:04:28.800
itself is calm.
00:04:30.800
Its self is summer and night.
00:04:33.840
Its self is the reader leaning late and reading there.
00:04:41.920
Those who today still engage in deep reading are leaning late.
00:04:46.360
For historically, the hour is late indeed,
00:04:49.480
even if the world is far from calm.
00:04:54.680
My guest today, Ana Ilievska, is a Mellon postdoctoral fellow
00:04:59.040
at the Stanford Humanities Center
00:05:01.200
and lecturer in our Department of French and Italian.
00:05:05.120
A comparatist, she works in Italian, Lusophone, and Balkan studies.
00:05:09.200
She holds a PhD in comparative literature
00:05:11.720
from the University of Chicago.
00:05:14.680
She's currently working on a book called Deep Tech,
00:05:17.760
Literature, Southern Thought, and the Question
00:05:20.120
Concerning Technology.
00:05:22.760
She has published on Italian, Portuguese, and Lusophone
00:05:25.400
African writers, in addition to numerous translations
00:05:28.920
and public scholarship on poetry, sound,
00:05:31.920
and the ethos of Silicon Valley.
00:05:34.080
And welcome to Entitled Opinions.
00:05:37.000
Buongiorno, Robert.
00:05:38.040
Thank you so much for the invitation.
00:05:40.760
So since you arrived at Stanford about--
00:05:43.160
well, almost two years ago, you and I
00:05:45.000
have had some conversations about thought, solitude,
00:05:48.880
and modern technology, and more recently
00:05:51.600
about artificial intelligence. We both agree
00:05:56.920
that thought requires something like solitude,
00:06:00.760
since solitude allows for being in dialogue with oneself,
00:06:05.480
as well as with the voices that reach us from beyond the threshold.
00:06:10.600
I, for one, believe that most of the essential things
00:06:13.200
that come to fruition later in life
00:06:15.080
are nourished in the hours that a young person spends
00:06:18.200
alone reading, learning, wondering, observing,
00:06:23.880
dreaming, imagining, and pondering.
00:06:26.840
We're going to be talking about artificial intelligence
00:06:28.960
today.
00:06:29.440
So let me ask you to start with, do you
00:06:32.320
think that artificial intelligence, if it one day realizes
00:06:35.360
its full potential, will it help to promote this sort of thoughtful
00:06:39.120
solitude, or will it drown it out like most other technologies
00:06:43.920
do today?
00:06:45.640
Thank you, Robert.
00:06:46.560
Well, this is a fine question, but one
00:06:49.200
that I think we have to approach carefully.
00:06:51.720
I suggest we do this in three steps.
00:06:54.480
First, let's think a bit deeper, perhaps,
00:06:56.680
about the nature and location of thinking.
00:06:59.400
Then we can talk about solitude, where does solitude really
00:07:02.120
take place?
00:07:03.400
And then finally, we can ask whether AI
00:07:05.560
can promote what you call thoughtful solitude.
00:07:08.080
I really like that phrase, by the way.
00:07:10.280
So to start with a definition of thinking
00:07:12.280
that perhaps would be widely accepted,
00:07:15.160
and I think our neuroscientist friends would agree too.
00:07:17.840
I define thinking as a cognitive ability, first of all,
00:07:20.360
specific to language use, that takes place within the embodied
00:07:24.280
mind.
00:07:24.720
It's very important that we emphasize the embodied mind
00:07:27.360
and not say the brain, because there's
00:07:29.560
a whole series of contributions, such as context,
00:07:32.560
experience, family, nature, culture, genetics,
00:07:36.000
that contribute to how a person thinks.
00:07:38.880
So it's a cognitive ability.
00:07:40.640
The good news is that it's innate to all human beings.
00:07:44.360
And it is not to be confused with knowledge or information.
00:07:48.800
This means that, for instance, we can have knowledgeable
00:07:50.960
engineers, coders, scientists, doctors, journalists,
00:07:54.640
philosophers or business people, but they do not necessarily
00:07:57.600
think.
00:07:58.920
And conversely, we can have people who think
00:08:01.080
but are not necessarily knowledgeable.
00:08:02.840
So one thing doesn't guarantee the other.
00:08:05.600
So in a way, thinking, I see it as the fifth element
00:08:09.200
alongside writing, speaking, listening, and reading.
00:08:13.040
In this sense, it's a form of technique.
00:08:15.560
It's something that can be practiced and can be fostered.
00:08:18.880
And definitely, we cannot teach a person what to think.
00:08:21.800
But perhaps we can give them some direction
00:08:25.120
of how to go about the process of thinking.
00:08:28.440
So the most important thing about thinking
00:08:30.360
is that-- and we would agree here with Hannah Arendt,
00:08:32.640
our favorite philosopher, that it's
00:08:34.760
an ongoing, robust internal dialogue
00:08:37.800
where decision-making, planning, and moral considerations
00:08:41.680
take place.
00:08:43.200
So this kind of internal dialogue requires
00:08:46.320
a split within the self.
00:08:49.080
So for Arendt-- and again, I'm going to quote her here,
00:08:52.360
she's speaking with Socrates and Plato--
00:08:54.880
she calls thinking a soundless dialogue between me
00:08:58.360
and myself, the two in one.
00:09:01.080
And then she writes, if you want to think,
00:09:02.920
you must see to it that the two who carry on the thinking
00:09:06.560
dialogue be in good shape, that the partners be friends.
00:09:11.000
She constantly speaks about thinking
00:09:13.360
as meeting the other fellow who is within us.
00:09:17.200
So thinking is also very private.
00:09:19.280
It takes place in the embodied mind
00:09:21.520
and what we could call our mind as our hidden hermitage.
00:09:25.600
And I'm taking this term from Julian Jaynes,
00:09:27.200
a psychologist who is now coming back into fashion
00:09:30.040
with generative AI.
00:09:32.120
So thinking takes place in a private dimension
00:09:34.400
when we are free from the gaze of others,
00:09:37.040
but they're also free from ours.
00:09:39.720
The things we think about are usually absent.
00:09:42.720
They are not immediately available to our senses.
00:09:45.440
So in this sense, it requires a retreat
00:09:47.880
from the world, the type of thinking
00:09:50.480
that I'm speaking about.
00:09:52.080
Now, here are the problems that we face with thinking.
00:09:56.520
Logistical and psychological problems, right?
00:09:58.720
The question is, how can we live with ourselves?
00:10:01.800
Because in the absolute privacy of our minds,
00:10:04.240
we either encounter our best friend or our worst enemy.
00:10:07.880
So now the question is twofold
00:10:09.560
in both logistical and psychological terms.
00:10:12.600
Where can we find solitude,
00:10:14.440
logistically speaking in this world today?
00:10:16.640
And then once we found that solitude,
00:10:18.960
how can we bear it?
00:10:20.560
How can we bear to be solitary together with ourselves?
00:10:24.120
That is the question.
00:10:25.160
So thinking is psychological, it's difficult,
00:10:27.600
it's time-consuming, requires a retreat from the world,
00:10:31.080
discipline and practice,
00:10:32.720
and there is no visible marketable product, right?
00:10:35.720
It's not measurable.
00:10:37.160
So what do we do when we retreat to ourselves?
00:10:40.280
And how do we manage this dialogue?
00:10:43.440
That is the question.
00:10:44.360
Before we get to that question,
00:10:47.320
maybe we could trouble a little bit,
00:10:52.000
this definition of thinking as something that takes place
00:10:54.400
within a mind,
00:10:56.800
an embodied mind to be sure,
00:10:58.320
but nevertheless, it sounds like one is retreating
00:11:02.120
into some kind of internal, psychological place.
00:11:07.120
However you want to call it--
00:11:09.920
Certainly not an organ,
00:11:11.280
but some kind of inner region or domain
00:11:15.560
where this cognitive ability that you mentioned
00:11:20.080
gets activated.
00:11:24.000
But if one were to suggest, instead,
00:11:27.520
that thinking is not a cognitive ability,
00:11:30.520
that it doesn't generate from within itself,
00:11:35.160
a dialogue with itself,
00:11:37.520
but rather reaches us from outside of what we would normally
00:11:42.680
call the mind, it could reach us from nature.
00:11:46.600
I have mentioned on this show,
00:11:49.000
I think probably more than once,
00:11:50.480
that nature does most of my thinking for me.
00:11:53.280
When I'm in the natural world,
00:11:55.360
I think very different thoughts than if I'm in my room
00:11:58.000
or in a classroom or something of that sort.
00:12:00.320
So could listening be the essence of a certain kind of thought
00:12:05.320
rather than a cognitive ability?
00:12:08.440
Namely, an open, receptive, heedfulness
00:12:13.440
to what reaches us from depths
00:12:16.280
that don't belong to the cognitive mind,
00:12:20.200
but from some other kind of region of being.
00:12:23.560
- Yes, I wouldn't disagree with this definition
00:12:29.960
or with this problematization of the definition.
00:12:33.120
However, it's only partial, as I said,
00:12:35.040
what I mean by the embodied mind,
00:12:37.200
and why we think: everything that we consider thinking
00:12:41.320
is a conglomeration of context, other people,
00:12:45.320
nature, our social environment, education,
00:12:47.800
culture, genetics, experience,
00:12:49.600
all these things contribute to thought.
00:12:51.360
And I think this is the main difference
00:12:53.120
between humans currently and machines
00:12:55.080
and why we think and they still do not,
00:12:57.680
is because machines lack that embodied engagement
00:13:00.920
with the world.
00:13:01.880
For them, it's almost impossible to engage with nature.
00:13:04.920
- Is that because the machine does not have a self?
00:13:07.960
- More importantly, it doesn't have a body.
00:13:11.440
I think this is the crucial difference.
00:13:14.480
It cannot experience the world through its own senses.
00:13:17.640
It's always told what to do.
00:13:19.680
- Okay, this is very important because we had a guest
00:13:23.480
on last spring, Markus Gabriel, well known to you.
00:13:28.120
He was a fellow at the Humanities Center along with you.
00:13:30.720
And we did a show on the meaning of thought
00:13:33.040
where he proposed his thesis that thought is a sixth sense.
00:13:38.040
- Yes.
00:13:39.400
- And I gather that you sympathize with this notion
00:13:43.840
that thinking is embodied, it's a-
00:13:46.360
- Absolutely.
00:13:47.200
- It's a form of sense, if not perception, at least,
00:13:49.800
sensing.
00:13:52.400
And if we were speaking in Italian,
00:13:54.440
this would be beautiful because sentire in Italian,
00:13:58.360
which is for me the most wondrous word,
00:14:00.720
I know in any language, sentire means
00:14:03.240
at once to sense, to hear.
00:14:07.720
- Absolutely.
00:14:08.560
- And to feel, so all these things would mean
00:14:14.600
that thought is a kind of sensing in this broad sense.
00:14:19.600
- I understand.
00:14:22.320
Now, here is the most interesting problem for me
00:14:25.520
when it comes to thinking.
00:14:27.520
Thinking is at the same time,
00:14:29.840
perhaps the most subversive ability
00:14:32.080
that human beings have.
00:14:33.320
It's completely radical.
00:14:34.760
It's where ultimate skepticism and doubt happen,
00:14:37.520
as we know from the Cartesian lesson.
00:14:39.960
At the same time, it is the most susceptible sense
00:14:42.880
we have to manipulation and to ideology.
00:14:46.000
And why am I saying that?
00:14:47.040
You suggested before that we also think
00:14:50.280
with nature that so many of our thoughts
00:14:52.000
are suggested to us by the external world, right?
00:14:55.160
Well, this is very interesting,
00:14:56.400
and I would like here to introduce a thinker
00:14:58.480
that we already mentioned today in the show and that you and I
00:15:00.920
talked about in our last conversation.
00:15:03.200
Julian Jaynes and his idea of the bicameral mind.
00:15:06.440
Well, his idea was that back in Homeric times,
00:15:09.760
at least in the Homeric epics that he studied very closely,
00:15:12.840
he says that his heroes had no internal mind space
00:15:17.080
to introspect upon.
00:15:18.800
They had what he called a so-called bicameral mind.
00:15:21.760
On the one hand, there was the self,
00:15:23.880
and then on the other hand, in the mind,
00:15:25.720
they were the gods.
00:15:27.080
So there was an executive branch within the brain,
00:15:29.680
which are the gods in the Odyssey and the Iliad,
00:15:32.560
and then there were the heroes who were the followers.
00:15:35.480
So for a very long time, Julian Jaynes posits,
00:15:38.760
human consciousness or human thinking
00:15:41.560
happened in this constellation where people heard voices
00:15:44.840
within their own head,
00:15:45.920
and they were convinced that those are the voices of the gods.
00:15:49.400
So today, in our time,
00:15:51.520
obviously, we don't hear these voices of the gods anymore.
00:15:54.520
So Julian Jaynes would say that in a sense,
00:15:56.200
we have become our own gods.
00:15:57.600
We have realized that the other voice in our head
00:15:59.600
is actually our own voice,
00:16:01.120
and we have learned how to dialogue with ourselves.
00:16:03.200
But today, what is the danger now of thinking
00:16:05.360
that thoughts come from an external instance,
00:16:07.520
or that that's the only way how we receive thoughts?
00:16:10.320
Well, then we can receive all sorts of messages.
00:16:12.960
They can be sent through social media, through TV,
00:16:15.840
through reading, through conversations with friends.
00:16:17.840
We can have all sorts of input that could feel like it's
00:16:20.920
another voice that is talking to us.
00:16:23.160
Now, the question is, and I think this is partly,
00:16:25.720
or to a great degree, the task of the humanities
00:16:28.440
is to understand what voices should reach us.
00:16:32.800
There needs to be a certain kind of filter,
00:16:34.880
which is not in place anymore.
00:16:36.280
Yeah, that becomes very problematic,
00:16:38.520
because you're going to try to do a policing of the borders.
00:16:42.240
I agree with you.
00:16:44.000
Thinking is a very porous activity.
00:16:47.800
Things reach it.
00:16:49.360
Things determine it.
00:16:51.200
It also is not clear exactly where it's taking place.
00:16:55.760
I mean, I'm not a big fan of Julian Jaynes.
00:16:58.520
I've tried to get into that book.
00:17:01.280
And you can't just get into everything.
00:17:04.000
Some books just don't speak to me.
00:17:07.800
But this idea of the bicameral mind in the Homeric epics--
00:17:10.040
our colleague Andrea Nightingale, who's
00:17:11.680
been a frequent guest on this show,
00:17:15.160
she's a classicist, and she's working on the Greek notions
00:17:19.200
about where thinking really happens.
00:17:21.000
And it doesn't happen in the brain.
00:17:22.920
It doesn't happen in the mind.
00:17:24.400
Oftentimes, it's in the chest, or it's in the limbs.
00:17:27.880
There are four different places, at least,
00:17:30.320
says Andrea Nightingale, where the Greeks thought
00:17:34.000
of thinking taking place.
00:17:36.000
And all of them connected to the body, by the way.
00:17:38.480
So this is not in any way different from what you're suggesting.
00:17:45.040
Absolutely.
00:17:45.680
But this exclusive emphasis on a mind, which
00:17:50.520
is somehow a kind of center of operations
00:17:56.760
for the receiving and sending of messages,
00:17:59.680
maybe it could be expanded a bit.
00:18:02.720
But I mean, I agree with you.
00:18:05.000
Ideology, propaganda, all these things
00:18:07.560
would not work if the mind were not receptive by its nature.
00:18:10.920
Absolutely.
00:18:11.400
It's too receptive.
00:18:12.440
That's even the problem that, to a certain degree,
00:18:14.840
I feel today we cannot really tell any more
00:18:17.480
to what extent are we doing the thinking
00:18:19.160
and to what extent these are intrusive thoughts
00:18:21.080
that we get through constant bombardment of messages
00:18:23.360
and information through social media?
00:18:25.240
So I would go back to what you said about policing
00:18:27.800
the mind.
00:18:28.240
I am absolutely against policing the mind.
00:18:30.280
And as I mentioned at the beginning of our talk today,
00:18:34.440
it is not about telling people what to think.
00:18:36.480
It's about training them how to recognize the pertinent
00:18:41.480
information and then how to deal with it
00:18:43.480
within their own solitude.
00:18:44.960
That is the issue.
00:18:45.840
And let me go to an article, since we've
00:18:48.200
been talking about all these New York Times articles recently
00:18:50.680
that came out on the topic of ChatGPT,
00:18:52.960
I read one that actually Henry Kissinger wrote.
00:18:55.640
And I know one little anecdotal piece of information
00:18:59.520
about Henry Kissinger.
00:19:01.160
One book that he always kept on his nightstand
00:19:04.080
is Machiavelli's The Prince.
00:19:06.440
So that I find very interesting given our interest
00:19:10.560
in Italian literature here.
00:19:12.000
So he wrote an article alongside the CEO of Google
00:19:16.400
and a dean at MIT.
00:19:18.280
And they also, like your friend Brian last week
00:19:21.120
and the way that I wrote it in my own essay for today
00:19:23.840
call ChatGPT the genie, the genie of gen tech.
00:19:28.160
Genie in the bottle.
00:19:29.040
Right, he says the genie of Gen tech is out of the bottle.
00:19:31.960
That's what he's saying.
00:19:32.920
And what does he suggest for us to do?
00:19:34.720
He writes that as we use our brains less and our machines more,
00:19:38.080
Humans may lose some abilities, such as critical thinking,
00:19:41.040
writing, and design abilities.
00:19:42.880
So he proposes a certain kind of practice
00:19:44.800
of concerned skepticism or dialectical pedagogy
00:19:47.880
at the university in education.
00:19:49.360
So he writes very specifically, or they write,
00:19:51.400
our educational and professional systems
00:19:53.360
must preserve a vision of humans as moral,
00:19:56.120
psychological, and strategic creatures,
00:19:58.600
uniquely capable of rendering holistic judgments.
00:20:01.800
So this is where we need to take the reins,
00:20:05.240
so to say, in the humanistic disciplines,
00:20:08.000
in order to make sure that this internal dialogue is robust.
00:20:11.840
Let me tell you why I have doubts about that.
00:20:14.280
Yes, go ahead.
00:20:15.280
Because if you take the Heidegger quote
00:20:19.040
that the most thought-provoking thing about our age
00:20:21.920
is that we're still not thinking,
00:20:23.680
it goes back 150 years, or not 150 years,
00:20:27.400
goes back half a century.
00:20:30.560
Things have only gotten worse.
00:20:33.160
We, as educators, have seen, over at least three decades of
00:20:37.840
teaching, how writing skills, thinking skills,
00:20:44.840
everything seems to be in a state of collapse,
00:20:49.280
precisely because we don't have concentrated attention.
00:20:53.480
It could be the cell phone, it could be social media.
00:20:57.040
And any number of culprits can be pointed to.
00:21:01.680
But perhaps something like artificial intelligence
00:21:05.920
is coming at the right time to save us
00:21:10.000
from our own kind of dissolution.
00:21:13.080
And this idea that we have to hold on
00:21:15.440
to our critical thinking abilities and everything
00:21:18.400
that Kissinger is worried about losing--
00:21:21.400
we have lost them almost fully compared to what they used to be.
00:21:28.920
So things are not going to get better by clinging on
00:21:32.400
to the little vestiges of what used to be
00:21:34.840
a holistic way of forming moral judgments
00:21:39.200
and critical thinking and synthesis of these sorts.
00:21:43.200
So perhaps we need an artificial help
00:21:49.480
to save us from ourselves.
00:21:52.200
I don't know.
00:21:53.120
I just throw that idea out.
00:21:54.560
Right.
00:21:55.160
That is very interesting.
00:21:56.560
I mean, I don't trust educators more than I trust
00:22:02.120
the artificial intelligence.
00:22:03.440
Because I see sometimes what our edge--
00:22:05.520
well, I'm not going to speak ill.
00:22:09.360
So we had this conversation about education.
00:22:12.040
So what do I see as particularly problematic
00:22:14.920
about the educational system, especially in the US
00:22:17.680
currently?
00:22:18.680
I mean, I was a graduate student here.
00:22:19.560
I went through various universities.
00:22:22.120
What I felt-- and I'm a recent PhD--
00:22:25.480
was overly emphasized in our education
00:22:28.120
is definitely writing.
00:22:29.760
Writing.
00:22:30.160
Writing centers, writing tutors.
00:22:31.800
We write papers, dissertations,
00:22:33.860
letters of recommendation, CVs.
00:22:36.360
My entire career, so to say, as a graduate student
00:22:38.640
was spent doing that, doing writing.
00:22:41.160
Now we have ChatGPT, with the arrival of this new technology
00:22:44.240
into our lives.
00:22:45.880
It just shows to educational institutions
00:22:48.760
how peripheral writing actually is.
00:22:51.040
Because if an artificial intelligence can do that,
00:22:53.320
then what do we do at the university as professors,
00:22:55.840
as lecturers?
00:22:56.560
What are we going to teach students anymore
00:22:58.120
if we've made writing so central to our pedagogy?
00:23:01.520
So this is where I think a shift to thinking has to happen.
00:23:04.480
You mean away from writing?
00:23:05.800
Away from writing, yes.
00:23:06.880
Well, listen, think of the billions and trillions of dollars
00:23:10.440
that the US government spends on development
00:23:14.600
of writing skills.
00:23:15.760
Because you're absolutely right.
00:23:16.720
Education is not only at the university level,
00:23:19.080
throughout grade school and high school,
00:23:21.880
writing is king.
00:23:24.640
And the more money that's being spent,
00:23:26.640
the more time that goes by, the worse the writing gets.
00:23:29.680
I hate to say this on KZSU Stanford University,
00:23:33.480
but the Stanford students, the undergraduates,
00:23:36.480
and oftentimes graduate students.
00:23:38.440
The writing is appalling, and it's gotten worse.
00:23:42.040
So actually, when I read an essay written by a ChatGPT
00:23:47.360
machine, all of a sudden, it's a relief.
00:23:51.680
I can breathe again, because it's organized.
00:23:54.440
It's clear and distinct, and it follows a certain kind
00:23:57.480
of thesis.
00:23:58.000
And how refreshing is that compared to the coiled
00:24:04.200
and confused writing, which is more
00:24:07.120
the norm than the exception, even at elite
00:24:11.760
universities like Stanford.
00:24:13.560
Well, to answer that question, I will mention John
00:24:17.120
Guillory, whose book we discussed recently,
00:24:18.960
"Professor and Criticism" that just came out last year.
00:24:21.880
Well, he sees the diminishing of quality in writing
00:24:27.360
and student writing as a direct product of the lack
00:24:30.320
of reading, that this is tied to--
00:24:32.800
Exactly.
00:24:33.600
--right, exactly.
00:24:34.680
Exactly.
00:24:35.360
I agree with that.
00:24:36.320
So what is the problem now with chat GPT?
00:24:39.360
The first impulse of Stanford University and professors
00:24:42.920
in general has been to ban it, right?
00:24:44.680
Do not allow it in the classroom.
00:24:47.320
I allowed my students to use it this quarter,
00:24:49.280
and I encouraged them to do their final papers
00:24:51.720
by consulting chat GPT.
00:24:53.040
And they came out of the experience quite level headed.
00:24:57.040
They found it a very useful tool, and it gave them more time
00:24:59.720
to develop their own ideas.
00:25:01.480
And I have to tell you-- we spoke about this.
00:25:03.040
Just in February, I attended the first ever industry
00:25:06.760
conference on generative AI.
00:25:08.880
This was organized by a company called Jasper.
00:25:11.080
They have their own type of chat GPT.
00:25:13.800
And I listened to the whole conference,
00:25:15.480
and it was quite interesting because the coders,
00:25:17.760
the very people who are making this technology,
00:25:20.200
are also level headed about what it will do to us.
00:25:22.760
And I will give you one quote by Nat Friedman.
00:25:25.240
He's the former CEO of GitHub and an MIT graduate.
00:25:28.080
He said one thing during the conference.
00:25:29.480
He said that ChatGPT is
00:25:31.160
going to rewrite civilization.
00:25:33.160
He didn't say it's going to transform civilization.
00:25:36.720
He didn't use Thomas Friedman's words from the New York Times
00:25:40.200
that it's transformational, that it's a tornado, a promethean
00:25:43.000
moment.
00:25:43.560
He said it's going to rewrite civilization.
00:25:46.080
And then Megan Anderson, from Jasper's marketing,
00:25:49.080
said very clearly that we need to reinvest,
00:25:51.880
so to say, in content, not composition.
00:25:55.000
Let the best content win.
00:25:57.600
This is a strategy.
00:25:58.400
This is how people around generative AI currently market
00:26:01.840
the product.
00:26:02.360
It's about letting you have free time to develop your ideas
00:26:06.160
and not spend time on writing.
00:26:07.680
Now, here is the most difficult question.
00:26:10.160
If-- and the question that I think we definitely
00:26:12.840
need to address as a society, especially educators.
00:26:15.680
So if chat GPT and all these technologies,
00:26:18.600
such as Bard, Sydney, Bloom, that are being developed
00:26:22.560
by competitors, if they can allow us now
00:26:24.800
to spend more time developing our ideas,
00:26:27.200
just editing the text rather than creating the form,
00:26:30.960
then with what do we fill the space that remains empty?
00:26:35.160
What do we do at the university, let's say,
00:26:38.960
because this is our particular profession,
00:26:41.640
where we usually teach form.
00:26:43.520
We teach form.
00:26:44.320
We teach how to write a text, what to write in a specific
00:26:47.360
paragraph.
00:26:47.800
Well, if ChatGPT can do that for us,
00:26:50.240
what is left then for us as educators?
00:26:52.120
And how do we fill that space?
00:26:53.800
That is the question for me.
00:26:55.280
Can I give you a very simple answer
00:26:56.960
of what we'll do with that leisure?
00:27:00.120
And I'm going to sound like Werner Herzog here.
00:27:02.040
Read, read, read, read, read, read.
00:27:05.520
That's Werner Herzog.
00:27:06.920
And it's the simple and most obvious, evident answer
00:27:14.680
to this question, because you're not
00:27:17.360
going to develop ideas in a void.
00:27:20.800
You can't ask students to sit back.
00:27:22.360
It's like they come into a class, they're freshmen,
00:27:24.840
or they're sophomores and things.
00:27:26.600
And we have so-called educators or instructors who say, OK,
00:27:30.840
here, what do you think about this?
00:27:33.360
They haven't even learned anything.
00:27:34.640
And you're asking, what do they think about something
00:27:36.960
that they haven't even learned?
00:27:38.360
How about reading, reading, reading, learning?
00:27:42.320
Become full of a multiplicity of voices that come
00:27:48.160
from different sources.
00:27:49.600
Absolutely.
00:27:50.160
Yes, I see what you mean, Robert, for sure.
00:27:52.080
There is a specific historical reason, though, why,
00:27:55.480
currently human beings in general are not very interested
00:28:00.400
in reading.
00:28:01.040
First, let me say that I do not agree with Heidegger
00:28:03.200
when he said that we're still not thinking.
00:28:05.040
I would rather say that we're not thinking anymore.
00:28:07.760
And there is a specific reason for it.
00:28:09.360
So I studied the 19th century, and I've
00:28:11.520
done quite a lot of reading on the debates
00:28:13.600
that arose there about industrialization.
00:28:16.240
So one thing that happened in the 19th century
00:28:18.400
is the overstimulation of our cognitive apparatus
00:28:22.400
that began with the industrial revolution,
00:28:24.400
with the proliferation of choices,
00:28:26.400
an amassment of products, devices; we can blame capitalism for it,
00:28:30.160
we can blame the bourgeoisie, whatever
00:28:33.440
we'd like to think.
00:28:34.720
So we have more career options, more partners
00:28:36.920
to choose from, more entertainment.
00:28:38.640
So there is an amassment of information,
00:28:41.160
you call it noise, that started hitting us from all sides.
00:28:44.720
I can start speaking about Western Europe,
00:28:46.280
but now we know exactly how this feels.
00:28:48.480
So what is the result of this overstimulation?
00:28:51.000
It leads to difficulties in decision-making.
00:28:53.760
We don't know what partner to choose.
00:28:55.960
We don't know what's right from wrong.
00:28:57.680
We can't tell.
00:28:58.720
We don't know what product to choose.
00:29:00.120
So we've created all these technologies,
00:29:02.320
these neural extensions of ourselves,
00:29:04.440
to alleviate the process.
00:29:05.760
The algorithm suggests what's the next thing
00:29:08.280
we are going to buy.
00:29:09.440
They tell us what's the next restaurant we should go to.
00:29:11.320
So all of this allows us not to make choices.
00:29:14.400
It's created a sort of-- how do we call it?
00:29:19.360
A brain full of paraphernalia.
00:29:21.920
Our brains are completely cluttered with information,
00:29:25.240
so that we cannot possibly squeeze reading
00:29:29.160
into this situation, because on one hand, again,
00:29:31.520
our brains are completely cluttered.
00:29:33.160
Let's call them-- our minds are completely cluttered
00:29:35.040
with information, media, messages, headlines, anxieties.
00:29:38.120
And on the other hand, they're silenced.
00:29:40.040
And how do we do this?
00:29:40.840
We do this through medication.
00:29:42.600
We go to therapy.
00:29:43.880
We do everything to silence the overwhelming amount of voices
00:29:47.960
that we hear in our own heads.
00:29:50.120
So now, what should--
00:29:52.320
But you asked, what are we supposed to do with our leisure?
00:29:54.840
If the artificial intelligence technology gives us
00:29:58.920
this extra leisure, I tried to answer it by saying, well,
00:30:02.440
maybe a little more reading wouldn't hurt, no?
00:30:04.840
Right.
00:30:05.120
But first, we have to draw attention, I think,
00:30:07.160
to how cluttered the mind currently is.
00:30:09.720
We need to draw attention to people's habits
00:30:12.320
of what we do when we get home at night.
00:30:15.000
We spend our entire days working, planning, organizing,
00:30:18.720
with machines, creating all sorts of schemes of how
00:30:23.440
to go through our day.
00:30:24.800
And then we come back home at night,
00:30:26.320
and instead of retreating into solitude,
00:30:28.520
which would be something that we need to do in the evening
00:30:30.560
in order to come back to ourselves to meet that other fellow.
00:30:34.480
We engage in even more cluttering of information.
00:30:37.440
I mean, I do it.
00:30:38.440
I swear.
00:30:39.240
So your proposal sounds very much like mine.
00:30:42.280
That you retreat into solitude.
00:30:44.920
So how do we do that?
00:30:46.160
How do we do it?
00:30:47.240
Right.
00:30:48.040
So I think it's important here to talk about solitude.
00:30:51.920
Not much has been said about this concept.
00:30:55.280
We speak a lot about loneliness.
00:30:57.160
And I remember Robert when I first came to Stanford,
00:31:00.240
we got a beer at the local pub.
00:31:02.600
And one of the first things that you told me
00:31:04.320
was there's a lot of loneliness around here, which set
00:31:08.880
the stage for me and was very unexpected,
00:31:10.680
because I imagined: I will come to California.
00:31:13.440
The weather is great.
00:31:14.320
Everyone is so casual.
00:31:16.760
So it will be very easy to meet people.
00:31:18.560
But you're right.
00:31:19.280
In the most technological place in the world
00:31:21.560
where there are millions of people, so many smart people
00:31:24.320
coming around, there's a lot of loneliness.
00:31:27.800
However, the Norwegian philosopher Lars Svendsen
00:31:32.360
wrote a book called A Philosophy of Loneliness.
00:31:35.080
And in this book, he makes a very clear cut distinction
00:31:37.800
between loneliness and solitude.
00:31:40.080
These are two different concepts.
00:31:42.120
In loneliness, we are scrolling on our phones.
00:31:45.320
We are going to the gym.
00:31:46.720
We are going to meditation and to yoga.
00:31:48.840
We're still filling up the place when we're lonely.
00:31:52.160
Some people, excuse me, some people who meditate and go to yoga
00:31:54.880
say that they're doing exactly what you're recommending.
00:31:57.040
They're going inside themselves and trying to find that
00:32:00.280
still point of the turning world.
00:32:02.880
I cannot comment on that because I'm not a yoga person,
00:32:06.480
but I do feel that meditation and yoga are supposed to empty you
00:32:10.680
out of the words and the worries that you have.
00:32:13.640
They're decluttering, but they're not necessarily providing
00:32:16.280
supplements, so to say.
00:32:17.720
So we need a supplement for whatever we're purging out of
00:32:20.960
ourselves.
00:32:21.800
So right?
00:32:22.800
We need norms for what we should think about.
00:32:24.960
Now, what do you mean by supplements?
00:32:28.280
Again, we don't need norms.
00:32:30.320
We need a bit more moral guidance.
00:32:33.840
And in this argument, I'm specifically thinking about the new
00:32:39.600
realism philosophical movement that is coming out of Europe
00:32:42.240
right now.
00:32:43.120
We talked about Markus Gabriel; Maurizio Ferraris is one of
00:32:46.040
those other thinkers who have discussed the consequences of
00:32:52.160
postmodernism on our society, on our education, and the way we
00:32:55.600
think today.
00:32:56.920
So they're calling for a certain return to moral and ethical
00:33:00.880
questions.
00:33:01.800
And I don't think it's just the task of philosophy,
00:33:03.840
currently.
00:33:04.200
It's not just philosophers.
00:33:05.120
It is being done all over the world; here at Stanford, the
00:33:07.680
Human-Centered AI Institute; we have centers everywhere.
00:33:11.040
We're trying to think more about ethics, right?
00:33:13.200
We're trying to think about how to regulate these new
00:33:17.160
technologies that are coming to us.
00:33:18.920
However, is that real thinking or is it just more noise?
00:33:22.520
It's a start.
00:33:23.560
It's a start.
00:33:24.160
However, this is where I see the difference.
00:33:26.440
Whereas the industry and professional philosophers, so to
00:33:31.560
say, see the solution in ethics in specific moral values that
00:33:38.320
are valid for everyone, I see the question as being born
00:33:43.040
absolutely in the mind, in the individual mind.
00:33:46.680
That's where we start building better citizens.
00:33:49.680
That's where we train people to think in a more robust way and
00:33:53.720
be able to decide for themselves what is right or wrong or
00:33:56.640
what is information, what is knowledge, what is clutter, right?
00:34:00.200
Or what is an actual thought-provoking event, right?
00:34:03.440
There's a lot of confusion.
00:34:04.320
I think one of the reasons why we're experiencing this kind of
00:34:06.600
clutter is precisely this infosphere in which we live.
00:34:11.040
This is Luciano Floridi's term.
00:34:14.560
We have confused throughout the past decades, especially
00:34:18.840
through the arrival of the worldwide web, we've confused
00:34:21.400
knowledge with information.
00:34:23.480
And we further conflate those two with thinking.
00:34:27.440
But these three concepts are quite distinct from one another.
00:34:31.800
So this is something that we need to teach, we need to make sure
00:34:35.120
that people are aware of this difference so that they can
00:34:38.040
acknowledge for themselves and recognize what is information,
00:34:40.960
what kind of knowledge are they being given, what can be
00:34:43.200
questioned, what cannot be questioned, right?
00:34:45.160
So there is a return to that, but again, for me, the important
00:34:47.280
part.
00:34:47.800
>> How does that differ from what traditionally was being
00:34:51.720
called critical thinking?
00:34:54.000
>> Which universities have always presumed to teach
00:34:59.720
first and foremost?
00:35:00.880
>> Yeah.
00:35:01.960
That is the ideal mission of the university.
00:35:06.120
Critical thinking, however, on the ground does not really have,
00:35:10.440
I think, any more to do with questioning things.
00:35:13.440
It's not that Heideggerian questioning where you just take
00:35:15.680
apart an argument.
00:35:17.040
Critical thinking currently has more to do with theory and
00:35:21.400
with method.
00:35:23.280
I, for myself, I don't think of thinking in the solitary, thoughtful
00:35:30.800
sense of thinking as being directly related to critical thinking.
00:35:38.000
I think again, it's a kind of opening of whatever region of
00:35:45.360
the self
00:35:52.360
it is that receives subterranean messages from what I can call the world of the dead,
00:35:56.680
because every time you open a book and you're reading words on a page,
00:36:03.960
it's a voice that gets reanimated within the psyche, in the mind that's coming from
00:36:13.240
the past and coming from the realm of the ghosts of the authors who wrote them.
00:36:18.680
I mean 99% of the books we read are authored by dead people, after all.
00:36:23.720
So I'm not trying to be spooky.
00:36:25.400
I'm just in a very self-evident way, saying that learning how to institute a
00:36:33.640
dialogue with the dead is as important as a dialogue with oneself, because
00:36:40.080
that self that you want to be in dialogue with is a very learned
00:36:46.440
self.
00:36:48.240
And as you say, if you fill it with nonsense, it will respond with nonsense.
00:36:53.480
So you raise a lot of interesting methodological questions.
00:36:56.160
How do you declutter it?
00:36:58.240
How do you guide it?
00:37:00.200
How do you give it a moral inflection, so on and so forth?
00:37:05.080
But here I'm going to ask you the tough question because you keep using the
00:37:09.720
pronoun 'we' and I think that you mean educators and professors at universities.
00:37:16.720
But I'm perhaps a little less sanguine about presuming to be the adjudicator, the
00:37:25.200
legislator, of what should enter the mind and what should not enter the mind.
00:37:32.200
If I trusted my colleagues more than I actually do to actually navigate that distinction,
00:37:44.080
it would be fine.
00:37:45.600
But it depends on who is doing the policing again.
00:37:52.880
Right, I mean here we come back to the logistical question of thinking.
00:37:57.600
This is what I think is perhaps the most difficult task, but here also the university and
00:38:04.280
our profession lend themselves very well to it, without being the police, so to say, behind
00:38:09.720
the thinking mode.
00:38:10.960
We need a space.
00:38:11.960
We simply need a space, a physical space out there first, in which reading and speaking
00:38:19.240
and listening to one another can be made possible in direct human face-to-face interactions.
00:38:25.520
We cannot replace that with Twitter, where most people today get their information,
00:38:30.640
right?
00:38:31.640
That's where the food for thought today comes from, because simply there is no accountability
00:38:34.920
when you are interacting with virtual beings.
00:38:38.160
So the university, again, not so much as a policing instance, but simply as a provider
00:38:43.680
of the physical, first the physical space can play a significant role there.
00:38:48.000
And that physical space in a way, it's sacred.
00:38:50.120
That's where the practice of judgment comes together.
00:38:52.920
So what we can do is, this is the basic structure of how we teach, right?
00:38:57.760
We assign a text or we choose an artwork where human experience is sedimented.
00:39:04.360
Literature in particular lends itself so beautifully to the study of human nature and
00:39:08.960
our internal dialogue, because we have layers and layers of sedimentation, of experimentation,
00:39:15.400
of various points of view of how people think and make decisions.
00:39:19.160
So when we assign a text like that and we gather together in a room or even outdoors,
00:39:24.120
like Socrates and his peers used to do, and we come together around the text, each one of
00:39:29.720
us starts perhaps with an opinion.
00:39:32.080
But by confronting our opinions with one another, we hone our skills, we practice our
00:39:37.960
judgment so that then even Socrates, as Arendt writes, after having spent the day in the
00:39:43.640
marketplace, goes home and meets the other fellow, right?
00:39:47.160
After we have had the chance to practice our opinions, to practice expressing judgment,
00:39:51.200
and I'm not saying here moral judgment or just the evaluation of aesthetic judgment,
00:39:56.040
but just working with others who have different perspectives, then we can go home and meet
00:40:01.080
the other fellow and then have that conversation.
00:40:04.040
But this I think can only happen really in a one-on-one situation.
00:40:08.000
We need the human body, we need the authority of the body.
00:40:10.880
And this is why, for instance, Arendt again also writes that in human beings, in man, thought
00:40:15.560
becomes flesh.
00:40:17.440
That thought needs to confront itself in the flesh.
00:40:20.760
Sounds very much like a classroom to me.
00:40:23.560
Absolutely, absolutely.
00:40:25.720
However, we are quite aware, Robert, both of us, and I'm a very young scholar and on the
00:40:31.480
job market, how the classroom works today.
00:40:34.160
We are, in a way, obliged to work with all these new technologies to engage in digital
00:40:40.200
humanities, to study media, whatever, but it does not feel like it is something that
00:40:45.120
promotes thinking, but rather replaces it.
00:40:48.120
All the logistics of dealing with new technologies create even less space for thinking.
00:40:53.840
Can you speak now more directly about GPT and why you have serious questions about the role
00:41:02.560
it's going to play in enabling the process of creating more emptiness for the solitary
00:41:12.200
sort, the productive sort.
00:41:15.200
So how can GPT or let's call it just generative AI, can they make space for us?
00:41:23.240
Can they provide that space where we can promote thinking?
00:41:27.360
Or the other question is whether those technologies actually will start thinking themselves
00:41:31.400
in that development of the bicameral mind.
00:41:33.240
So it could be both ways.
00:41:34.760
Well, I can start with just saying that the debate out there is very vibrant on this technology,
00:41:41.720
and as I quoted Thomas Friedman before, he was speaking of a Promethean moment, and there
00:41:45.360
are pretty much the same types of camps about this technology that we had in the 19th century.
00:41:50.560
It's, every time some kind of new product comes into the world, there are the same reactions.
00:41:55.920
This is something that hasn't changed.
00:41:57.120
And you think it's overblown?
00:41:59.080
Absolutely.
00:42:00.080
I don't.
00:42:01.080
Yes, we disagree on this.
00:42:02.640
I think it is Prometheus.
00:42:04.440
Okay, for me, the use of the Promethean analogy is problematic because then it asks the
00:42:14.480
question of who is Prometheus, right?
00:42:16.520
Who is the one who is bringing this new technology to us? When newspaper people, journalists,
00:42:24.480
write about this, they always tend to create a metaphor out of this new technology.
00:42:30.440
But this technology is not a metaphor, it's something very concrete.
00:42:33.240
Well, I can say that when the smartphone came into being, and especially the iPhone, I was telling
00:42:41.640
people this is going to completely change, I didn't say rewrite civilization, but this
00:42:46.520
is completely going to transform human civilization.
00:42:49.720
Everyone thought that was humorous, if not ridiculous.
00:42:53.560
And of course, yes, yes, every time there's a new technology, everyone says that.
00:42:59.360
And yet, I think history is divided between B.I. and A.I., namely before the iPhone
00:43:11.680
and after the iPhone, where iPhone stands for the device by which Twitter, social media,
00:43:20.280
the worldwide web, fake news, all these things come to us in our hands.
00:43:26.800
And so I don't think it was overblown at all.
00:43:29.520
In 2005, to say that human civilization was going to undergo a fundamental
00:43:39.080
concussion at its foundations.
00:43:41.040
Well, in this aspect, I tend to side with two scholars.
00:43:47.440
One is Kate Crawford, who wrote Atlas of AI, in which she demonstrates very clearly that
00:43:51.840
the entire hype around generative artificial intelligence has masked a lot of socioeconomic
00:43:59.040
day-to-day practices out there, for which she's using a term from someone else.
00:44:05.080
It's called "fautomation."
00:44:06.080
She speaks of "fautomation."
00:44:07.960
That there are actually businesses today that hire human beings to pretend that they're
00:44:14.520
AI's.
00:44:15.840
And this is part of the process, right?
00:44:17.600
Well, on one side, we are celebrating this new Promethean technology; on the other side,
00:44:22.680
companies are actually trying to seem as if they had AI.
00:44:25.920
So this is not just problematic; it's highly immoral, right?
00:44:30.160
But that also exposes how overblown the moment, to a certain degree, is.
00:44:34.800
And again, like I said, I side with Crawford on this, and in particular with Noam Chomsky,
00:44:38.960
who also recently published an article in The New York Times in which he pretty much says
00:44:45.160
human beings just think differently, we deal with language differently.
00:44:48.600
And what this technology does, and this is my particular take, is, again, it presses us into
00:44:54.400
a corner to figure out again, what is it that makes us particularly specifically human?
00:45:00.440
And it cannot be writing.
00:45:01.640
This is at least another piece of the puzzle that ChatGPT has now shown us.
00:45:06.040
That is, creativity, learning, writing aren't what makes us completely, specifically
00:45:12.600
human.
00:45:13.600
We should de-emphasize writing in our curriculum.
00:45:17.760
I do.
00:45:18.760
I mean, I think, well, on the one hand, I do; on the other hand, I don't.
00:45:23.440
I think that sloppy writing, which is the norm in institutions of higher education, even,
00:45:31.440
is a manifestation of an inability to think and that you're not going to get better writing
00:45:37.040
until you get better thinking and you're not going to get better thinking until you have
00:45:40.640
more thoughtful reading.
00:45:43.840
Ideally, it should be a conglomeration of these skills of writing, speaking, listening,
00:45:47.840
reading and thinking.
00:45:49.200
All of those should be fostered equally at the university.
00:45:51.680
However, the tendency of the latest generation has been to teach us mostly how to write
00:45:58.600
and a certain kind of reading that is very methodological.
00:46:01.840
It's very directed towards the discipline.
00:46:04.000
But at the end of the day, the biggest space should be left to thinking, to discussing these
00:46:07.960
texts with our peers.
00:46:09.680
This is why I encourage students to use these new technologies to help themselves improve
00:46:14.640
their texts.
00:46:15.640
I'm making them come to the classroom and have more discussion.
00:46:17.920
But this, of course, puts a lot of questions out there.
00:46:20.440
How do we grade students then?
00:46:22.480
How do we evaluate it?
00:46:23.480
Well, maybe the time is ripe for us to move away from those kinds of grading systems,
00:46:28.080
which are based just on textual evidence and just promote more thinking in the classroom.
00:46:32.520
Yeah, I agree.
00:46:34.960
What we need is an extended sabbatical at all levels: for students, no more papers for one
00:46:41.080
year.
00:46:42.080
You don't write one paper; you just read, nothing more to do.
00:46:45.360
And for faculty members and lecturers, a ban on conferences and symposia and academic
00:46:53.040
lectures for one year.
00:46:55.040
Maybe they also can go back and do some reading.
00:46:58.800
So absolutely.
00:46:59.800
Robert, we're saying this in very banal terms.
00:47:02.240
It would almost sound to someone who listens to us that we're just complaining about
00:47:05.520
our profession.
00:47:06.880
But again, when I attended this latest conference or the first conference of Generative AI
00:47:12.160
in San Francisco, one thing that became quite clear to me is that what this technology will
00:47:17.480
do is change education.
00:47:21.680
This is perhaps the branch of our culture, of our institutions today, that is mostly going
00:47:28.360
to be impacted by these technologies, and journalists, tech people, businessmen, they all agree.
00:47:34.640
However, in these conversations that I have attended, usually there are no representatives
00:47:38.600
from our neck of the woods, almost as if it didn't concern us.
00:47:42.120
So if the universities, if we do not embrace these technologies, like for instance, when
00:47:48.280
Wikipedia first came out, right?
00:47:50.240
My professors, at least at the time when I was an undergrad,
00:47:54.720
prohibited us from using Wikipedia. The result was pretty much catastrophic because students
00:47:59.240
still would cheat and they would not double check the information, right?
00:48:02.400
So now we have ChatGPT; we cannot prohibit students from using it.
00:48:06.160
We have to teach them how and what kind of questions to ask and what to expect from it, right?
00:48:11.480
What you mentioned earlier is even though there weren't educators present at that conference,
00:48:16.560
the threat of GPT, as far as I understand it, and of generative AI, is to take away jobs
00:48:25.160
from the white collar.
00:48:26.560
Absolutely.
00:48:27.560
Absolutely.
00:48:28.560
Which is, you know, finally there would be a little bit of justice in that. I mean,
00:48:34.560
you know me, I'm not far from being a Marxist.
00:48:38.440
However, it seems like it's the blue collar workers who have been getting screwed for
00:48:43.680
the last few decades, you know, since Reagan at least.
00:48:49.120
And the white collar class now is going to feel the repercussions of this technology, I think
00:48:57.240
in the way the blue collar workers felt the outsourcing of jobs to other countries and
00:49:03.240
so forth.
00:49:04.240
Yes, that became quite clear.
00:49:05.840
The white collar workers, coders also, coders and producers.
00:49:09.560
Journalists, professors, writers, everyone who has to do something, yes, with the written
00:49:15.400
word are pretty much replaceable now by this new technology.
00:49:19.880
So this is why, you know, I suggested in the first place that we do this show so that we
00:49:23.760
can think about the implications of these new technologies specifically for everyone who
00:49:28.240
depends on writing.
00:49:32.560
So we can end this conversation by actually hearing the opinion, or rather, hearing the
00:49:39.680
prediction of ChatGPT itself on this topic. I actually asked it some of the questions
00:49:44.840
that we discussed today.
00:49:46.160
So first of all, I asked it what happens when the structuring of cognitive language
00:49:50.800
abilities is relinquished to an external artificial agent such as ChatGPT.
00:49:55.440
And ChatGPT answered.
00:49:56.960
It is important to note that while ChatGPT can generate text and structure language,
00:50:01.360
it is still a machine unlike human consciousness, creativity and experience.
00:50:06.000
ChatGPT can provide valuable assistance in generating text and organizing ideas, but ultimately
00:50:11.200
humans must make decisions about the content and direction of their writing.
00:50:15.680
And then the other question that I asked it, what can the humanities teach that generative
00:50:20.160
AI cannot already do faster and better?
00:50:23.560
And ChatGPT responded.
00:50:25.320
It is important to recognize that the humanities involve critical thinking, interpretation,
00:50:30.200
analysis of complex ideas and concepts.
00:50:32.920
These skills require creativity, empathy and an understanding of culture and historical
00:50:36.840
context, which are difficult for AI to replicate.
00:50:41.480
So I would end our conversation today by saying the following.
00:50:46.400
Technology changes all the time.
00:50:48.520
We are for sure living in an era in which every few months, something Promethean comes
00:50:55.200
out onto the market, right?
00:50:57.920
So what stays remarkably stable throughout human history is human nature, right?
00:51:03.680
That is the great mystery that we need to study, and this is the great mystery that
00:51:08.320
is the topic of the humanities.
00:51:10.360
So definitions of what it means to be human constantly change and supersede one another.
00:51:15.800
When we as a civilization finally solve the mystery of this internal dialogue that happens
00:51:21.640
within ourselves and that we call human nature, well, then we won't be talking about generative
00:51:26.000
artificial intelligence and humans, but we would just have angels left on earth.
00:51:31.040
Well, that's very well said, but I have to go back to this very trivializing definition
00:51:37.040
that ChatGPT gave you of why it doesn't have human consciousness, critical thinking, and so forth.
00:51:45.760
Yeah, what is consciousness? Going back to DH Lawrence: thought is the welling up of unknown
00:51:51.640
life into consciousness.
00:51:54.520
So what that definition of the mind amputates is the unknown life that wells up into whatever
00:52:02.800
this thing is that we call consciousness.
00:52:06.400
So it's not enough just to say that we are conscious and AI is not conscious.
00:52:12.240
The mystery is what is welling up, what is the unknown life that wells up into this
00:52:18.720
consciousness, what are the sources of it?
00:52:22.320
So that I think is a mystery that we don't want to solve because to solve a mystery is
00:52:27.600
also to neutralize it.
00:52:30.520
We want to open ourselves to the mystery.
00:52:33.320
Absolutely.
00:52:34.320
Well, I will leave the last word to Hannah Arendt, where she writes, "Men, if they were ever
00:52:40.560
to lose the appetite for meaning we call thinking and cease to ask unanswerable questions,
00:52:47.320
would lose not only the ability to produce works of art, but also the capacity to ask the
00:52:52.520
answerable questions upon which every civilization is founded."
00:52:57.320
Very nice.
00:52:59.120
Now we're going to end our show with a rather dissonant song given what we've been talking
00:53:06.040
about, but we wanted to play it because I didn't mention that you actually come from
00:53:10.120
North Macedonia, and so you've come a long way from North Macedonia to Silicon Valley
00:53:16.440
and all these recondite issues that we've been discussing on the show today.
00:53:21.040
But you brought a song for our exit music, so you might just want to tell us what we're
00:53:28.200
going to be listening to and let our listeners know about the band and the name of the
00:53:33.040
song.
00:53:34.040
Thank you Robert.
00:53:35.040
Yes, this is a personally very beautiful moment for me.
00:53:39.320
So we are ending our show today by hearing the North Macedonian band called Anastasia;
00:53:46.280
we're going to hear the song Time Never Ends, which was part of the soundtrack of the film
00:53:52.240
Before the Rain by Milcho Manchevski.
00:53:54.880
And I think it's a very appropriate way to end the show because again, as someone who comes
00:53:59.840
from the south of Europe and from part of the civilization that gave us ancient philosophy,
00:54:09.520
this song just brings back what you were talking about Robert.
00:54:13.560
It brings back the mystery.
00:54:15.160
that I feel we lack today.
00:54:18.520
Okay, I'm reminding our listeners that we've been speaking with Ana Ilievska, who's
00:54:23.040
a Mellon fellow here at Stanford at the Humanities Center, and I'm Robert Harrison for
00:54:27.640
Entitled Opinions.
00:54:29.280
Bye bye.
00:54:29.640
[MUSIC]