05/27/2017
Sam Ginn on the Singularity
Sam Ginn is a second-year undergraduate student at Stanford University. He is a computer science major interested in human consciousness and whether human consciousness is artificially replicable. Sam is also a participant in the philosophical reading group at Stanford and he is a devotee of Martin Heidegger's thought. In this show Sam discusses the […]
00:00:00.000 |
[Music]
|
00:00:03.000 |
This is KZSU, Stanford.
|
00:00:06.000 |
Welcome to Entitled Opinions. My name is Robert Harrison.
|
00:00:10.000 |
And we're coming to you from the Stanford campus.
|
00:00:14.000 |
[Music]
|
00:00:24.000 |
The future ain't what it used to be, friends. I've been reading Ray Kurzweil recently.
|
00:00:31.000 |
He's one of our present-day futurists, and his crystal ball is telling him that the singularity is near.
|
00:00:39.000 |
What is the singularity? It's a period in the future during which the pace of technological change will be so rapid,
|
00:00:48.000 |
its impact so deep, that all of human life and the basic concepts by which we make sense of it will be irreversibly transformed, thanks to artificial intelligence.
|
00:01:00.000 |
I quote, "With in several decades information-based technologies will encompass all human knowledge and proficiency, including ultimately the pattern recognition powers, problem-solving skills, and emotional and moral intelligence of the human brain.
|
00:01:17.000 |
The singularity will allow us to transcend the limitations of our biological bodies and brains.
|
00:01:27.000 |
We will gain power over our fates, our mortality will be in our own hands.
|
00:01:33.000 |
We will be able to live as long as we want, and we will fully understand human thinking, and will vastly extend and expand its reach.
|
00:01:42.000 |
By the end of this century, the non-biological portion of our intelligence will be trillions of trillions of times more powerful than unaided human intelligence.
|
00:01:54.000 |
We are now, continues Kurzweil, in the early stages of this transition. The acceleration of the paradigm shift, as well as the exponential growth of the capacity of information technology, are both beginning to reach the knee of the curve.
|
00:02:11.000 |
Which is the stage at which an exponential trend becomes noticeable.
|
00:02:16.000 |
Shortly after this stage, the trend quickly becomes explosive.
|
00:02:21.000 |
Before the middle of this century, the growth rates of our technology, which will be indistinguishable from ourselves, will be so steep as to appear essentially vertical.
|
00:02:33.000 |
The singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human, but that transcends our biological roots.
|
00:02:47.000 |
There will be no distinction, post-singularity, between human and machine, or between physical and virtual reality.
|
00:02:55.000 |
If you wonder what will remain unequivocally human in such a world, it's simply this quality.
|
00:03:01.000 |
Ours is the species that inherently seeks to extend its physical and mental reach beyond current limitations.
|
00:03:11.000 |
That's Kurzweil.
|
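To make the "knee of the curve" concrete, here is a minimal Python sketch, assuming nothing more than a constant doubling time; the base value and doubling period are illustrative numbers of my choosing, not Kurzweil's.

```python
# A minimal sketch (not Kurzweil's actual model): pure doubling growth
# looks negligible for a long time, then "explodes" past the knee.
def capability(year, base=1.0, doubling_time=2.0):
    """Capability that doubles every `doubling_time` years."""
    return base * 2 ** (year / doubling_time)

for year in range(0, 41, 5):
    print(f"year {year:2d}: {capability(year):>12,.0f}")
# Early values barely move on a linear scale; by year 40 the curve
# has grown about a million-fold -- the "essentially vertical" region.
```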
00:03:13.000 |
He goes on to discuss what he calls the six major epochs of cosmic evolution, and he understands evolution as a process of creating patterns of increasing order.
|
00:03:24.000 |
And believes that it's the evolution of these patterns that constitutes the ultimate story of our world.
|
00:03:32.000 |
So these six stages of cosmic evolution begin with stage one, physics and chemistry, namely the formation of atoms, and then with chemistry, the formation of molecules.
|
00:03:46.000 |
Stage two is biology, or information as it's stored in DNA.
|
00:03:53.000 |
Epoch three is brains, information in neural patterns.
|
00:04:01.000 |
Epoch four is the epoch of technology, where information in hardware and software designs start to emerge.
|
00:04:10.000 |
And I presume that we are in stage four of this epochal cosmic evolution.
|
00:04:16.000 |
And then we have epoch five, the merger of technology and human intelligence.
|
00:04:22.000 |
The methods of biology, including human intelligence, are integrated into the exponentially expanding human technology base.
|
00:04:33.000 |
And then finally, epoch six, the universe wakes up.
|
00:04:37.000 |
Patterns of matter and energy in the universe become saturated with intelligent processes and knowledge.
|
00:04:46.000 |
So there you go.
|
00:04:48.000 |
You heard correctly, whatever is taking place right here on this third stone from the Sun, our Earth, will eventually in the sixth epoch wake up the entire universe,
|
00:05:01.000 |
saturating its matter and energy with a superintelligence of which we human beings will have been the humble origin.
|
00:05:09.000 |
Who said that anthropocentrism and geocentrism were bygone illusions?
|
00:05:15.000 |
The future ain't what it used to be unless you go back far enough, and then it starts to look a lot like that old story.
|
00:05:23.000 |
A universe that revolves all around us, baby, it all comes back home to the Terran system and to these human brains of ours that just yesterday we thought were an epiphenomenon of evolution.
|
00:05:36.000 |
That's right, the human mind is the great cosmic mind after all, once it's able to reproduce its intelligence artificially.
|
00:05:45.000 |
So the shaman tells us in his book The Singularity Is Near, published in 2006. Will Entitled Opinions survive the Singularity?
|
00:05:57.000 |
Well, given that Entitled Opinions figures as the highest form that human consciousness takes in this lead-up to the Singularity, and given that we are broadcasting from Stanford University, which is one of the main incubators of the fifth epoch,
|
00:06:13.000 |
it's highly likely that the archives of this show will be carried over into the Singularity, and with any luck will infect the system with the deadly virus of free and amorous thinking.
|
00:06:26.000 |
It ain't over till it's over, and who knows, Entitled Opinions may well be the fat lady in this singular drama once it's all said and done.
|
00:06:36.000 |
Speaking of Stanford and artificial intelligence, I have with me in the studio a Stanford sophomore who is helping our technology approach the knee of the curve, where the exponential growth of artificial intelligence will begin its vertical rise into the brave new world of the Singularity.
|
00:06:54.000 |
Sam Ginn is a computer science major interested in human consciousness, or more precisely in whether human consciousness is artificially replicable.
|
00:07:05.000 |
Sam has taken a course with me on nihilism, is a participant in the philosophical reading group at Stanford that I run with my colleague, Sepp Gumbrecht, and he is, amazingly enough, a devotee of Martin Heidegger's thought.
|
00:07:19.000 |
It gives me special pleasure to welcome him to Entitled Opinions. Sam, thanks for joining us today.
|
00:07:25.000 |
Yeah, thank you so much. It's an absolute pleasure to be here.
|
00:07:28.000 |
You are immersed in the world of artificial intelligence. Could you share with us your thoughts on just how close AI is to the breakthroughs that would create the sort of exponential growth that Kurzweil and others are calling the Singularity?
|
00:07:42.000 |
Of course.
|
00:07:44.000 |
Well, I think first we have to understand what this point is that Kurzweil is talking about when he talks about the Singularity.
|
00:07:54.000 |
What type of intelligence is necessary to bring about this rapid growth in progress?
|
00:08:01.000 |
Because what Kurzweil is talking about is that this is the course of human history.
|
00:08:05.000 |
We've been advancing technologically over centuries, and recently over decades.
|
00:08:10.000 |
I mean, the smartphone was created just a couple decades ago.
|
00:08:13.000 |
But the Singularity is that point at which this progress happens on the order of minutes, of hours.
|
00:08:19.000 |
When a computer will be able to solve special relativity, which took Einstein a decade to come up with, in a matter of minutes.
|
00:08:26.000 |
And so what kind of technological progress would be required to enable this?
|
00:08:32.000 |
And that's a computer which can start learning for itself.
|
00:08:35.000 |
So when Kurzweil talks about the Singularity, the Singularity is that point at which the human growth curve of technological progress meets and melds with the computer's growth curve of technological progress.
|
00:08:48.000 |
And the computer will then start learning on its own with its own thought, with its own thinking, exactly as how humans do now.
|
00:08:56.000 |
But at such a rate, completely inconceivable to us.
|
00:08:59.000 |
A computer that can think by itself would be able to self-replicate onto every digital surface of this planet with ease, with rapidity.
|
00:09:09.000 |
If it wants to reach a neighboring star, it can. As you said in the beginning, with the Kurzweil opening, it will permeate this entire universe with its existence.
|
00:09:21.000 |
Well, yeah, it depends on two things. First, what we mean by thinking. But permeating this universe, why? Just because our computers can get ever faster and ever broader?
|
00:09:32.000 |
How does it permeate the matter and energy of the universe?
|
00:09:36.000 |
I mean, a computer that can think for itself can perhaps be curious to explore Alpha Centauri, which is our neighboring star system, and build its own rocket ships.
|
00:09:46.000 |
When Kurzweil talks about permeating, it's not a mystical permeation. It is entirely a very physical permeation.
|
00:09:53.000 |
Okay, let's take the television series Star Trek.
|
00:09:57.000 |
If you look at that science fiction fantasy of the future, in either the first series, the original one, or Star Trek: The Next Generation, what you find is that on the level of electronics, or computer intelligence, we are very close to what was envisioned as something three or four centuries away in time from our own.
|
00:10:20.000 |
And in that sense, you can say the singularity is happening. On the other hand, there's a lot of evidence that we are not one inch closer to space travel today than we were then.
|
00:10:31.000 |
Our airplanes do not go 100 miles an hour faster than they did when they were invented.
|
00:10:39.000 |
So this permeation of the universe would seem to require something in excess of what we would just call artificial intelligence.
|
00:10:49.000 |
It requires, if you want to use the analogy, and I don't like these analogies, something on the level of the hardware, something that involves material travel.
|
00:10:59.000 |
And whether a computer can invent warp drive, you're telling me that it can.
|
00:11:04.000 |
Yes, I think you bring up a good point. Right now artificial intelligence, although it seems to be incredibly intelligent, it can do insane mathematical computations, is nowhere close to the world that Kurzweil so eloquently paints in his picture of a world permeated by intelligence.
|
00:11:23.000 |
And that would 100% require what you say, a manufacturing ability, an ability to solve the equations of warp drive, to bend spacetime, to engage with this world physically.
|
00:11:35.000 |
And right now artificial intelligence can't do that.
|
00:11:38.000 |
And I think this brings into the interesting question of where we are right now with artificial intelligence and where we need to be to get to the singularity.
|
00:11:48.000 |
Because for Kurzweil, the singularity is not a gradual evolution of intelligence.
|
00:11:54.000 |
It's a singular point at which an artificial intelligence gains the ability, and we will go into what this means, to think for itself, to have its own curiosity, to build itself, to build the robots that go out into the world and build a warp drive.
|
00:12:11.000 |
Right, so it's a point at which it all takes off and something explosive happens, almost unimaginable.
|
00:12:18.000 |
So how far is artificial intelligence from such a moment of singularity?
|
00:12:23.000 |
Yeah, so I think before we get into where AI is right now, we have to understand what I mean when I talk about the singularity.
|
00:12:30.000 |
What I mean when I talk about the singularity is the computer that can truly think for itself.
|
00:12:35.000 |
So in philosophy circles and computer science circles, we've divided this problem of intelligence, or the problem of consciousness, of sentience, into the easy problem and the hard problem of consciousness.
|
00:12:49.000 |
You had this great quote by Kurzweil, when he said that intelligence will be able to do information retrieval, pattern recognition, and moral and emotional intelligence.
|
00:13:01.000 |
In those first two categories, information retrieval, pattern recognition, decision making, those are the easy problem.
|
00:13:08.000 |
These are things that we know how to solve, given enough computation.
|
00:13:13.000 |
These are things that we can come up with equations that we know how to solve with pen and paper.
|
00:13:18.000 |
Now it might take an incredibly long time to solve with pen and paper, but it's something we can imagine doing with math, with dumb processes, but when in math and in collective they can emerge really interesting solutions.
|
00:13:30.000 |
So for instance, image recognition is a great example given a photo of a panda can a computer classify this as a panda.
|
00:13:38.000 |
Well, yes, we can teach it that pen is of ears, faces, they're white and black, etc. It can learn from a system of rules. It's something we can replicate with pen and paper.
|
00:13:47.000 |
But that type of artificial intelligence does not get at what curves will also include in this point of singularity.
|
00:13:54.000 |
That's just the first part. That's the easy problem, which is very hard to do in computers, but we have a clear path forward on how to get there.
|
00:14:00.000 |
But that last phrase he used, moral and emotional intelligence.
|
00:14:05.000 |
That's the hard problem.
|
00:14:07.000 |
That problem consists in our ability as humans to feel, to interpret the world, to experience the world from a point of view.
|
00:14:16.000 |
So David Chalmers is a really famous philosopher of cognitive science who talks about consciousness, and he defined this hard problem of consciousness.
|
00:14:26.000 |
It's that subjectivity that we experience every day. In Thomas Nagel's words, it feels like something to be us.
|
00:14:35.000 |
To be anything that is sentient. Something that it feels like to be a bat, something that it feels like to be a whale.
|
00:14:42.000 |
So the sentient subjectivity you're talking about, it's not exclusively human.
|
00:14:47.000 |
Yeah, exactly. It's not necessarily linked to intelligence, by the way.
|
00:14:50.000 |
Yeah, no. Chalmers comes up with this idea of a philosophical zombie.
|
00:14:55.000 |
You could have somebody who looks like a human who acts like a human who does everything that a human can do, but without that subjectivity.
|
00:15:02.000 |
It would feel like nothing to be inside that person's head. But this subjectivity, if we could artificially create it, then that machine would have feelings. It would have curiosity. It would have wonder, it would have its own desires.
|
00:15:17.000 |
And if it can have that, it can embark on its own exponential curve of learning.
|
00:15:23.000 |
And this is what Kurzweil's singularity point is. It doesn't necessarily require this subjectivity, but it requires a computer which can emerge on its own.
|
00:15:31.000 |
It can emerge on its own and decide for itself that it wants to learn, that it wants to grow, that it might be interested in solving warp drive.
|
00:15:38.000 |
Until we get there, all computer science, all thinking on this topic will have to be things that can just be solved by pen and paper, but better.
|
00:15:47.000 |
We would have to know the theory of warp drive in order to instruct a computer to solve the mathematical equations.
|
00:15:54.000 |
What Kurzweil needs to achieve the singularity is a computer that finds the problem of warp drive interesting.
|
00:16:00.000 |
And wants to solve it and come up with the theory on its own.
|
00:16:03.000 |
So the first question that comes to mind hearing you say that is, why do we need artificial intelligence to do more than the dumb stuff?
|
00:16:13.000 |
Or the easy problems?
|
00:16:15.000 |
What do we gain by replicating human pathos, emotions, moral intelligence, and so forth?
|
00:16:24.000 |
Why can't the singularity take place without machines learning to be human in that sense?
|
00:16:31.000 |
Yes, so to answer that question, it's interesting to explore where computer science is right now.
|
00:16:38.000 |
So right now we have excellent algorithms that can do pattern recognition in superhuman feats.
|
00:16:45.000 |
In 1997, we solved the game of chess, we beat chess, and that was a huge breakthrough in artificial intelligence.
|
00:16:53.000 |
We now have robots that can travel on their own on Mars and react without any human control, navigating harsh terrain, moving through rocks and fields.
|
00:17:04.000 |
We have self-driving cars on the streets in San Francisco.
|
00:17:07.000 |
They have seemingly superhuman abilities in driving.
|
00:17:12.000 |
It's interesting, like when SpaceX and NASA launch rockets.
|
00:17:18.000 |
We actually don't even trust humans to do a good job at launching rockets in those beginning 10 seconds right before they take off.
|
00:17:26.000 |
What we do is hand it over to the computers. But wouldn't it be a risk to give such a machine emotions and desires and perversions?
|
00:17:34.000 |
It would 100% be a risk. That machine might decide, maybe I don't want this rocket to launch.
|
00:17:41.000 |
It's incredibly risky. I mean, all of the great technologists right now, Elon Musk, Larry Page, Sergey Brin,
|
00:17:49.000 |
they all talk about the scary future of AI, of how it will gain sentience and consciousness and be able to do things that the human programmer didn't program it to do.
|
00:17:59.000 |
But they talk about these things because they know the limits of artificial intelligence right now
|
00:18:05.000 |
and that without imbuing these machines with this consciousness, we will never be able to get the super effects.
|
00:18:13.000 |
The really interesting things, like warp drive, having solved that. Because when you talk about Star Trek,
|
00:18:19.000 |
we see this beautiful world of a computer, but it's not that singularity that Kurzweil was talking about.
|
00:18:25.000 |
In order to get that singularity, you really need a computer that can think abstractly.
|
00:18:31.000 |
The ability to think abstractly is an ability that requires consciousness and requires us to have the idea of an abstract thought.
|
00:18:38.000 |
Right now, artificial intelligence has no notion of abstract ideas. It has no notion of concepts. It just has notions of symbols.
|
00:18:47.000 |
But it has no notion of what those symbols mean.
|
00:18:49.000 |
So, for instance, probably the most sophisticated artificial intelligence program right now is a computer program developed by DeepMind,
|
00:18:58.000 |
which is owned by Google, called AlphaGo, which just competed last year against the Go world champion, whose name is Lee Sedol.
|
00:19:08.000 |
What is Go, for those who don't know what we're talking about?
|
00:19:10.000 |
So, Go is a game that's similar to chess. I mean, it looks like chess. You put pieces on a board.
|
00:19:17.000 |
But whereas chess has always been this amazing feat of human prowess in terms of our intelligence,
|
00:19:23.000 |
it was actually solved by computers in 1997, and why it was solved by computers is because a computer could literally search the entire space of possible chess positions from the current position.
|
00:19:35.000 |
And then it could just see, "All right, if I make this move, I can win." Or, "If I make this move, I will lose." 50 moves down the line.
|
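That look-ahead idea is classic game-tree search. Here is a minimal minimax sketch over a toy tree, a hedged illustration of the technique, not Deep Blue's actual engine; the tiny tree and its leaf values are invented.

```python
# Brute-force "look ahead and see if I can win" over a toy game tree.
def minimax(state, maximizing, moves, score):
    """moves(state) -> list of child states; score(state) -> leaf value."""
    children = moves(state)
    if not children:                      # terminal position
        return score(state)
    values = [minimax(c, not maximizing, moves, score) for c in children]
    return max(values) if maximizing else min(values)

# Toy tree: each state is a tuple of children; leaves are integers.
tree = ((3, 5), (2, (9, 1)))
moves = lambda s: list(s) if isinstance(s, tuple) else []
score = lambda s: s
print(minimax(tree, True, moves, score))  # 3: best value with optimal play
```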
00:19:42.000 |
But with Go, it's similar to chess, but its state space is astronomically, orders of magnitude, larger than chess.
|
00:19:49.000 |
So, for instance, on any game of Go, there are more possible positions of a Go board than there are atoms in the universe.
|
00:19:58.000 |
So, there is no possible way that a computer could even hope to search this entire space.
|
00:20:03.000 |
Neither could a human mind. Yeah.
|
00:20:05.000 |
So, the question is how are humans good at playing Go?
|
00:20:09.000 |
And if you ask any Go expert, what they're going to tell you is: it's intuition.
|
00:20:13.000 |
The game of Go is a perfect example of something that's super hard to program into a computer, because we have no idea what this word intuition means.
|
00:20:22.000 |
When Lee Sedol talks about his playing, he talks about an aesthetic nature to the game of Go.
|
00:20:28.000 |
It feels like a better move over here. He can see the game unfolding, not precisely, but in a way that's more favorable to him, painting almost like a picture.
|
00:20:38.000 |
Now, this used to be something that computers could never hope to replicate. But what Google did is they created a program that learns through something called reinforcement learning.
|
00:20:48.000 |
And it learned on its own, without searching an endless space, and it learned to beat Lee Sedol at Go.
|
00:20:55.000 |
And so, in this world championship match that happened last year, AlphaGo actually beat Lee Sedol in four out of the five games.
|
00:21:02.000 |
And the commentators were blown away that a computer was able to achieve such an amazing feat.
|
00:21:09.000 |
In fact, one of the commentators said of one of the moves in game two that this was not a human move.
|
00:21:15.000 |
He thought that it was a bad move that AlphaGo made, that it was a silly move.
|
00:21:21.000 |
But it turned out, 20 moves later, it was crucial to AlphaGo's victory.
|
00:21:26.000 |
And this was nothing that could be predicted. AlphaGo didn't know this would be the best move, but it had an idea that this would be correct.
|
00:21:35.000 |
And it was no move that a human would make. It learned how to play Go differently than how humans played Go.
|
00:21:42.000 |
Now, this might seem very cool. This might seem like this is the path forward to get the singularity.
|
00:21:47.000 |
We have a machine that is able to learn on its own, to learn things that we as humans find inscrutable,
|
00:21:54.000 |
that we as humans, we can't even tell you how AlphaGo figured out how to win this game of Go.
|
00:21:59.000 |
But when we look at what AlphaGo is actually doing, what it's doing is the same math that a fifth grader can do with pen and paper, but just really, really fast.
|
00:22:09.000 |
How AlphaGo learned is that it played itself at the game of Go millions and millions of times.
|
00:22:16.000 |
And at the end of every game, a person told it whether it won the game or whether it lost the game.
|
00:22:21.000 |
And then it inferred patterns within its gameplay: this type of pattern, something that looked like this, would yield a higher probability of victory.
|
00:22:31.000 |
In the end, what this came out to be was just a mathematical equation.
|
00:22:35.000 |
It was able to just identify patterns and produce an end state, a move in this case, based on the current position of the board, that would yield a higher probability of victory in the end.
|
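A heavily simplified sketch of that self-play loop, assuming only a win/lose signal at the end of each game; the two-move "game" and its hidden win rates are invented for illustration, not AlphaGo's actual training code.

```python
# Play many games, get only a final win/lose signal, and raise the
# estimated value of moves that showed up in winning games.
import random

values = {"move_a": 0.0, "move_b": 0.0}   # the learned "equation": one number per move
counts = {"move_a": 0, "move_b": 0}

def play_game():
    move = random.choice(list(values))
    # Hidden rule the learner never sees: move_a wins 70% of the time.
    won = random.random() < (0.7 if move == "move_a" else 0.3)
    return move, won

for _ in range(10_000):
    move, won = play_game()
    counts[move] += 1
    # Incremental average of the win signal: dumb arithmetic, repeated.
    values[move] += (float(won) - values[move]) / counts[move]

print(values)  # roughly {'move_a': 0.7, 'move_b': 0.3}
```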
00:22:47.000 |
But AlphaGo had no idea it was playing a game. It had no idea that it was in a competition that was broadcast across the world, and no idea it was even making moves.
|
00:22:58.000 |
So when we talk about what it would take for a computer to invent something like warp drive, AlphaGo might seem like something along the lines of getting there, maybe in a decade.
|
00:23:09.000 |
But in fact, it's nothing close, because what AlphaGo required is this signal at the end of whether it won the game or not.
|
00:23:17.000 |
And it required a clear state space to discover patterns that would feed its positive feedback loop.
|
00:23:22.000 |
Basically, it was just optimizing a single mathematical equation given the inputs of a board game.
|
00:23:29.000 |
What do I need to multiply, numerically, in order to get an output move?
|
00:23:35.000 |
There is no way I can conceive of putting the problem of warp drive or the problem of space exploration in terms of an optimization problem.
|
00:23:46.000 |
And so this is really what artificial intelligence is right now.
|
00:23:50.000 |
Artificial intelligence can be described as optimization problems. It can be described as math.
|
00:23:55.000 |
So anything in the future that can be described through optimization is something that the AI right now is really good at.
|
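A minimal sketch of "AI as optimization": gradient descent on a single equation, where each step is arithmetic one could do by hand; the function being minimized is an arbitrary example of mine, not anything from the conversation.

```python
# Repeatedly step downhill along the gradient of a single equation.
def grad_descent(grad, x0=0.0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)     # one small arithmetic update per step
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
print(grad_descent(lambda x: 2 * (x - 3)))  # ~3.0, the optimum
```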
00:24:03.000 |
So when we look at your Star Trek world, what is the computer really good at?
|
00:24:07.000 |
Well, the computer is really good at maneuvering the spaceship, maneuvering the Enterprise into position, given an enemy spaceship.
|
00:24:14.000 |
How can the Enterprise expertly maneuver to avoid the missiles?
|
00:24:18.000 |
Or what is the optimal firing rate and firing trajectory of its lasers in order to take down an enemy ship?
|
00:24:24.000 |
For instance, Data, the android on the Starship Enterprise: Data is really good at things like calculating really, really fast in order to accomplish exactly what Captain Picard demands of him.
|
00:24:38.000 |
But what the computers are not good at is creativity. They're not good at anything that would require Captain Picard or the other Starship Enterprise officers to decide whether we should explore this planet.
|
00:24:51.000 |
How can we explore this planet, not in the optimum way, but in the way that best suits human interests?
|
00:24:57.000 |
Excuse me for interrupting, it's a beautiful explanation you're giving.
|
00:25:02.000 |
But what the officers of the Starship Enterprise have that Data doesn't have, is that a certain kind of intelligence?
|
00:25:11.000 |
Or is it something else? Is it on the level of, to use your word, intuition?
|
00:25:16.000 |
Or moral intelligence? Some other kind of human faculty that is not subsumable under the rubric of intelligence.
|
00:25:26.000 |
This is the problem I have with Kurzweil, that he thinks that intelligence is the whole game.
|
00:25:33.000 |
And we know that there's a lot more to being human than what goes on in our brains.
|
00:25:39.000 |
I agree completely. What's going on in the human brain includes the intelligence that computers can do right now, includes the intelligence of Data or the computer.
|
00:25:50.000 |
But there's something more going on in the human brain; you can call it intuition, you can call it feeling, you can call it whatever.
|
00:26:00.000 |
That enables us not just to make decisions, not just to learn, but to grasp the immense complexity of multiple and manifold problems really well, in a representational manner.
|
00:26:14.000 |
So what computers are really good at is taking a very, very narrow topic, the game of Go, and solving it.
|
00:26:19.000 |
Humans can do that, not as well as the computer, but what humans can do orders of magnitude better than even the computers in Star Trek, which is hundreds of years in the future, is take a whole world of data, a whole experience of data, understand it conceptually, intuitively, in some manner, and then make decisions off of that.
|
00:26:44.000 |
Computers can only make decisions off of the inputs we give them. Humans can go out in the world and experience completely unfamiliar things, and then make completely new decisions.
|
00:26:56.000 |
Can I ask a question before it slips my mind? How much of what you're describing about humans do we share in common with animals?
|
00:27:04.000 |
Yes, so that's a completely open question. What I'm talking about right now is human consciousness, what it is that makes us special compared to a computer. So computers right now, even the Curiosity rover on Mars that drives itself, are not really taking in brand new ideas and making decisions.
|
00:27:20.000 |
We trained it off of earthly terrain, and we trained it how to navigate terrain. A human or a dog, when put on Mars, could encounter things completely alien, things it had never been taught before, and somehow learn how to deal with them.
|
00:27:38.000 |
And so I think, and this is debatable, that's because they have their sentient subjectivity. They have the ability to question themselves, to think about themselves.
|
00:27:49.000 |
So I think a human, undeniably, when put on Mars, or even in a different universe, could think about things from their own point of view, relate to these new experiences by relating them to themselves, to their own subjectivity. A human can do that. Could a dog do that?
|
00:28:06.000 |
I would say a dog would be able to do that. I would say a monkey would be able to do that. Could a worm do that? Could a tree do that? I don't know. Those are interesting questions.
|
00:28:14.000 |
Yeah, we don't have to speculate too much along that line. I'm just trying to identify: we're still in the realm you were describing of the easy problem, right? Even though these are incredibly sophisticated, you know, AlphaGo is incredibly stupid, but still within the domain of the easy problem.
|
00:28:32.000 |
The hard problem is getting machines to develop something along the lines of human subjectivity. Exactly.
|
00:28:40.000 |
We can talk about the challenges that represents, but I still don't understand why it's necessary that they have that subjectivity. Is it because without it, they're not able to interact with and respond to completely unpredictable circumstances or events, and therefore not able to make decisions creatively or intuitively that would enable them to grow exponentially at the rate of a singularity?
|
00:29:09.000 |
Exactly, exactly. And that's what matters. So when we think about computers now, AlphaGo, self-driving cars, even the Enterprise computer, they do one thing very well, or maybe multiple things very well, based on hordes of training data and past experiences of that specific thing.
|
00:29:26.000 |
But AlphaGo cannot play chess. AlphaGo cannot be put in a completely unfamiliar position and learn to learn how to solve that problem.
|
00:29:37.000 |
It's really this question of computers right now can do learning really well when somebody tells them what to learn, but they can't...
|
00:29:44.000 |
And this is what... They can't take the initiative, the computer. Yeah, they can't learn to learn. And this is what the very, very contemporary computer scientists are trying to get a computer to do, but we haven't been able to do it.
|
00:29:54.000 |
Would that still be within the realm of the easy problem? Yeah, no. So that is getting into the realm of the hard problem. What would be required for a computer to have its own curiosity, to have its own ability to explore its own thoughts?
|
00:30:09.000 |
That would require what Chalmers and computer scientists call the hard problem of consciousness, or strong artificial intelligence, or artificial general intelligence.
|
00:30:19.000 |
This hard problem of consciousness is that subjectivity, what it feels like to be us, that enables us to learn to learn and to engage with our world curiously, with wonder.
|
00:30:33.000 |
And in my opinion, and this is debatable, but without this wonder, without this subjective appreciation that could not be written on a piece of paper, computers will never learn to engage in a general sense.
|
00:30:48.000 |
So in computer science terms: right now we have artificial intelligence, but what we don't have is artificial general intelligence, an algorithm or a machine which can learn anything in general, exactly how humans do it.
|
00:31:02.960 |
And that is kind of the dividing point between weak AI and strong AI, the dividing point between computers we have now and computers in the Star Trek realm, and those computers after the point of singularity that would go on and permeate the universe.
|
00:31:15.960 |
Well, it could be that what they don't have yet is stages of development. In particular, they don't have an infancy or a youth, because we know that our species, what's so singular about our species even compared to the closest primates to us, is this protracted,
|
00:31:31.960 |
excessively prolonged infancy and childhood that we have, in terms of the percentage of our lifetime it occupies.
|
00:31:41.960 |
And we know this extraordinary plasticity of the infant and young mind; learning seems to have all its incubation matrix right there, in something that is somehow associated with youth.
|
00:31:54.960 |
How that's going to help solve the artificial general intelligence problem, I have no idea.
|
00:32:00.960 |
But why? I take it that you believe that we are very far away from solving any of the hard problems. Is that correct?
|
00:32:08.960 |
Well, the thing is, I cannot put a single timeline on this, because when we're talking about the easy problems, decision-making,
|
00:32:16.960 |
we're talking about things where we can clearly see how we can improve upon existing algorithms to get there. So, for instance, I don't know how, nobody knows how, to make a plane travel between star systems right now, really fast and efficiently.
|
00:32:30.960 |
But we can look at our current planes now and see that if we incrementally improve them over decades, at an exponential rate of progress, we'll be able to do that.
|
00:32:37.960 |
So I can pinpoint maybe in a hundred years we'll visit another star system.
|
00:32:41.960 |
But nobody has any theoretical conception of how you could create this consciousness.
|
00:32:47.960 |
Right now we have, very recently, in the past decades, since like the 90s and early 2000s, developed enough technology,
|
00:32:55.960 |
developed enough machinery to completely mimic all of the power and all of the relationships in the human brain.
|
00:33:01.960 |
So right now we have the technology required to create the sentience, but we don't have the theory required to create the sentience yet.
|
00:33:10.960 |
So, and why people are really worried about this now is because the technology is so easily accessible.
|
00:33:17.960 |
If some person, maybe a Stanford student, maybe some person in their garage can develop the theory.
|
00:33:25.960 |
they can create the sentient intelligence, and once they do it, it's a winner-take-all game.
|
00:33:31.960 |
Once they do it, that is the point of singularity.
|
00:33:34.960 |
The AI, however they programmed it, would be able to learn at this exponential rate envisioned by Kurzweil and be able to do an untold number of things.
|
00:33:43.960 |
So that's why it's a dangerous thought.
|
00:33:45.960 |
And that's why I can't put a timeline on it.
|
00:33:46.960 |
Can I ask you, are you a candidate for such a person?
|
00:33:48.960 |
Yes, no, I mean, I have the technical abilities, and every graduate at Stanford, or any undergraduate who's taken AI classes here at Stanford, has the technical ability
|
00:34:01.960 |
and has the machinery, the capability, to program this future AI that would break through the singularity.
|
00:34:09.960 |
What none of us have as of yet is a pragmatic theory of how to create this consciousness.
|
00:34:16.960 |
So I take it, Sam, that you believe that the theories that are governing AI at the moment are unviable, that they represent a naive,
|
00:34:26.960 |
maybe naive is not the word, but extremely limited and perhaps misguided philosophical concept of what human intelligence is.
|
00:34:36.960 |
Can you speak about this, the Cartesian legacy of the theory of mind that is operative in the AI community and especially what you call the limitations of a state theory of intelligence?
|
00:34:52.960 |
Yeah, so let me explain, so computer scientists right now have wonderful theories about how we could create consciousness, but they're all based on this Cartesian framework.
|
00:35:01.960 |
So one of the absolute leading neuroscientists in the world, Christof Koch, has this idea that consciousness is a fundamental part of the universe.
|
00:35:12.960 |
We don't have to go into exactly what he says, but what he bases all this theory off of, and he writes this, is Descartes' formulation,
|
00:35:21.960 |
the cogito ergo sum: I think, therefore I am. His vision of how to do this is, we can develop a weak AI, and we know exactly how to do that.
|
00:35:32.960 |
Something that can make decisions, take in parameters, learn off of the parameters, and develop a solution to a problem. We can do that. Let's do that really, really well.
|
00:35:41.960 |
Then let's apply value predicates on top of that data. So as I said, AlphaGo plays Go right now, but it doesn't know it's playing Go.
|
00:35:49.960 |
What computer scientists want to do is teach it to play Go like it does now, and then apply value, apply meaning, to what it's currently doing.
|
00:35:57.960 |
So it's playing Go, and then we want to teach it, or inject within it, the meaning of playing Go.
|
00:36:03.960 |
We want it to get to where it somehow has a meaningful perception of the Go game, that it understands that it's playing Go.
|
00:36:11.960 |
Yeah, so they're still in this input-output...
|
00:36:13.960 |
Yeah.
|
00:36:14.960 |
Way of thinking.
|
00:36:15.960 |
You can put something in and it'll come out as an output, which is, yes, a very Cartesian representational model of learning.
|
00:36:22.960 |
Yeah, and they think it's this kind of special subjectivity sauce that you sprinkle on a smart machine, and magically it becomes conscious.
|
00:36:29.960 |
And so based on this framework, this is where you've got a whole debate.
|
00:36:33.960 |
Well, is this special subjectivity sauce even computable?
|
00:36:36.960 |
So you get crazy ideas from, like, David Chalmers or Christof Koch.
|
00:36:40.960 |
Maybe consciousness requires new physics. Maybe this state, and I'll explain in one second, requires a brand new idea of physics in order to explain, because when you think about what it would require for AlphaGo to get this subjectivity, it's almost magical.
|
00:36:56.960 |
It's almost, there is no way a computer could do that.
|
00:36:59.960 |
We need something extra.
|
00:37:00.960 |
We need a specialness to this.
|
00:37:02.960 |
And so this is where all the modern ideas of consciousness come out of.
|
00:37:05.960 |
They come out of the idea.
|
00:37:06.960 |
All right, what is this specialness that we could inject into this machine to get it in that subjectivity?
|
00:37:13.960 |
And so some people think, well, maybe it has to do with quantum physics.
|
00:37:17.960 |
So Sir Roger Penrose is a physicist who says that our consciousness in our brain is due to quantum physics.
|
00:37:23.960 |
It's due to the superposition and the breakdown of quantum particles within our brain.
|
00:37:28.960 |
That's one idea. We don't have to get into it.
|
00:37:30.960 |
But another idea is that maybe consciousness is fundamental to the universe.
|
00:37:34.960 |
And why people are coming up with all these crazy ideas is that nobody has any viable path forward on how we can computationally inject this subjectivity, under this Cartesian framework, into AlphaGo to get it conscious.
|
00:37:47.960 |
So we have to come up with all these crazy, alternative viewpoints of how we could possibly do it.
|
00:37:54.960 |
Now, I think that all of these ideas, all of these frameworks, that consciousness is fundamental,
|
00:38:03.960 |
that consciousness has to do with quantum physics, that perhaps consciousness is explained by what people call integrated information theory, which is that given enough information moving, being rapidly consumed and processed, consciousness magically emerges.
|
00:38:17.960 |
They're working off of the wrong definition of consciousness.
|
00:38:20.960 |
They're not embarking upon the question of what it would mean to make a conscious entity.
|
00:38:28.960 |
What they're doing is they're trying to create a conscious state.
|
00:38:33.960 |
So Chalmers and Koch and Penrose and all these other theorists, they're trying to build a machine that one could point to and say, this is a conscious entity.
|
00:38:43.960 |
This is a conscious agent.
|
00:38:45.960 |
It is a state, a subject.
|
00:38:47.960 |
And so this is why Koch begins with Descartes.
|
00:38:49.960 |
He says, I think, therefore, I am.
|
00:38:53.960 |
He pinpoints an I.
|
00:38:55.960 |
He pinpoints that there's a subject behind existence, which experiences.
|
00:39:01.960 |
I don't necessarily agree with that.
|
00:39:03.960 |
I don't think, if you freeze me right now, and I'm an object, that you could say that that frozen Sam is still conscious.
|
00:39:14.960 |
Consciousness to me is not a state which can be computed, not a state that can be solved for, not a state that can be sprinkled in.
|
00:39:24.960 |
Not a conscious-ness, but rather an act of doing.
|
00:39:28.960 |
When we think about what we do, when we are conscious.
|
00:39:32.960 |
So when I look, for instance, at leaves of grass, what I see first and foremost is not the color green as RGB values, as wavelengths, as frequencies.
|
00:39:44.960 |
I see the greenness of something.
|
00:39:46.960 |
I can engage with the color green in a way that far surpasses any type of currently existing known methods of computation.
|
00:39:53.960 |
When you talk about Dasein, which is sort of his word for a conscious entity, take the example of a door closing.
|
00:40:03.960 |
When a door closes now, we don't hear the frequency of the sound.
|
00:40:09.960 |
We hear the door closing as a door smashing.
|
00:40:13.960 |
Right.
|
00:40:14.960 |
It all takes place within a meaningful context.
|
00:40:17.960 |
A context of meaningfulness into which Dasein is thrown.
|
00:40:22.960 |
Right.
|
00:40:23.960 |
Yeah.
|
00:40:24.960 |
And so this ability to take things as things, this is something that artificial intelligence has no idea how to do.
|
00:40:33.960 |
And it's not the same thing as pattern recognition?
|
00:40:36.960 |
Not at all.
|
00:40:37.960 |
Not at all.
|
00:40:38.960 |
What's the difference?
|
00:40:39.960 |
Yeah.
|
00:40:40.960 |
So with pattern recognition, I would first take the frequency of a door closing.
|
00:40:45.960 |
I would first take the sound waves.
|
00:40:48.960 |
And then I would actively think, all right, these sound waves sound 80% like these other sound waves, which historically have corresponded to a door closing.
|
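A toy sketch of that template-matching story, assuming made-up "sound" vectors and a cosine-similarity score; a real system would compare learned audio features, not three hand-typed numbers.

```python
# Compare an incoming "sound" vector against stored templates
# and report the best match, exactly the 80%-similarity idea.
def similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

templates = {"door_closing": [0.9, 0.1, 0.4], "footsteps": [0.1, 0.8, 0.2]}
incoming = [0.85, 0.15, 0.35]

label, score = max(((k, similarity(incoming, v)) for k, v in templates.items()),
                   key=lambda kv: kv[1])
print(label, round(score, 2))  # 'door_closing', with a high similarity score
```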
00:41:01.960 |
Got it.
|
00:41:02.960 |
That's pattern recognition.
|
00:41:04.960 |
That is what Koch and all these other consciousness scholars are trying to create.
|
00:41:08.960 |
They want to inject this meaning after the fact of the data.
|
00:41:12.960 |
And I think this is just the wrong approach to consciousness, the wrong approach to an artificial general intelligence.
|
00:41:18.960 |
Because what Heidegger so beautifully elucidates for us is that we take things as meaningful primordially.
|
00:41:26.960 |
Before we engage with their substance, what Heidegger calls their present-at-handness.
|
00:41:32.960 |
Right.
|
00:41:33.960 |
And that's basically, to simplify,
|
00:41:35.960 |
That's what he means by being.
|
00:41:37.960 |
Being is the "as."
|
00:41:40.960 |
We take something as what it is.
|
00:41:43.960 |
In other words, we have access to its being.
|
00:41:45.960 |
It's being this or that or something.
|
00:41:47.960 |
Yeah.
|
00:41:48.960 |
Go ahead.
|
00:41:49.960 |
Anyway.
|
00:41:50.960 |
Yeah.
|
00:41:51.960 |
No.
|
00:41:52.960 |
So when you ask me what it would mean to create an entity that reaches Kurzweil's singularity, or reaches that artificial general intelligence:
|
00:41:59.960 |
It can't just look at data without meaning.
|
00:42:03.960 |
It can't just look at data without meaning and then learn off of that.
|
00:42:05.960 |
Yes.
|
00:42:06.960 |
That could theoretically get us really, really far.
|
00:42:09.960 |
But it is completely alien to how humans do it.
|
00:42:12.960 |
And I don't think that humans have just stumbled on another way to do it.
|
00:42:17.960 |
I think part of what has made us so brilliant,
|
00:42:19.960 |
having this ability to create planes and automobiles now, is because we don't look at things as meaningless data.
|
00:42:25.960 |
And then apply some values on top of it which in itself would still be meaningless.
|
00:42:29.960 |
We take things as this.
|
00:42:31.960 |
And this isn't just some ontological difference that I'm making here.
|
00:42:35.960 |
This is a critical difference in how we think about building intelligence.
|
00:42:38.960 |
If beings first take things as something, as meaning, they engage and they learn in a way completely alien to how computers or other things learn.
|
00:42:49.960 |
They don't learn off of data.
|
00:42:51.960 |
They learn meaningfully.
|
00:42:52.960 |
They care about things.
|
00:42:54.960 |
They have this concern.
|
00:42:55.960 |
They experience the world from a point of view from a meaningful existence.
|
00:43:01.960 |
I mean, Heidegger then goes into what is required of this Dasein.
|
00:43:05.960 |
This Dasein is projected into a world that is already meaningful.
|
00:43:11.960 |
It's projected forth into a multitude of worlds of experiences.
|
00:43:18.960 |
He has wonderful examples of the person in a shop, a builder.
|
00:43:24.960 |
When they're working on a car with their tools, they don't engage all the tools, classify them all,
|
00:43:30.960 |
recognize what patterns would be best to screw in a screw.
|
00:43:36.960 |
Or nail in a nail with a hammer.
|
00:43:39.960 |
They understand the hammer, its meaningful existence.
|
00:43:44.960 |
Its ready-to-hand property.
|
00:43:46.960 |
It is that which can nail in a nail.
|
00:43:48.960 |
That which can be used for this, or something else.
|
00:43:52.960 |
It's a pre-theoretical engagement with the tool.
|
00:43:55.960 |
As opposed to the present-at-hand, which is a theoretical aftereffect.
|
00:44:04.960 |
So for Heidegger, however, this ability to take things as, you mentioned earlier that for Dasein, or human existence,
|
00:44:13.960 |
or what our colleague Tom Sheehan calls thrown-openness, we're thrown open.
|
00:44:17.960 |
We're thrown into a world in a mode of openness, and therefore things come to us,
|
00:44:23.960 |
as well as at the same time we're reaching out beyond ourselves to them.
|
00:44:28.960 |
You mentioned that things matter to us.
|
00:44:31.960 |
And in Being and Time, after Heidegger undergoes this existential analysis of the mode of being of Dasein,
|
00:44:39.960 |
he finds that the inner core, the essence of this thrown-openness in a world, is care.
|
00:44:47.960 |
And it's care, which means that on the one hand we're burdened by cares, and on the other hand, things matter to us,
|
00:44:56.960 |
and we take them into our care.
|
00:44:58.960 |
And he will then go on, in division two of Being and Time, to articulate what he thinks are the conditions of possibility for this pragmatic taking-as.
|
00:45:08.960 |
And he'll find that it's in our temporal, dynamic projection beyond our immediate present into a future.
|
00:45:16.960 |
It's our futurity: Dasein is always futural, Dasein is a being-unto-death, and this ultimate impossibility of its being, which it can shatter against before the actual event of death takes place, is what throws us back into the world.
|
00:45:30.960 |
And therefore it seems to be responsible for the mediated relation that we have to things, right?
|
00:45:36.960 |
Do you think that for artificial intelligence to replicate what you're calling human consciousness,
|
00:45:42.960 |
we would have to give the machine some sense of care and some sense of futurity, and perhaps, like the movie Blade Runner, a sense of the imminent mortality of the replicants,
|
00:45:55.960 |
whereby they go back to their makers and creator and they want more life because they know they're going to die?
|
00:46:01.960 |
Would that be a kind of necessary condition for artificial general intelligence?
|
00:46:06.960 |
Yeah, no, I agree completely. One of the reasons why I think humans have been able to become so intelligent is in part because of this care.
|
00:46:15.960 |
And when you look at this great philosophy of learning, how we have come to learn, it's this brimming over of curiosity, this brimming over of wonder, this brimming of care and concern.
|
00:46:26.960 |
And so what I mean here is, in order to create this artificial general intelligence, we need an entity that not only understands the world physically, but that has a concern for the future, that understands its place.
|
00:46:44.960 |
And these aren't just meaningless philosophical requirements. These are necessary preconditions to really general learning.
|
00:46:53.960 |
How can a machine learn in the general case without having an ability to experience the world, without having an ability to care and to have concerns for future actions?
|
00:47:03.960 |
So when I talk about consciousness, I don't talk about consciousness as a state. I talk about consciousness as intrinsically, as Heidegger and you put it, temporal.
|
00:47:14.960 |
So what consciousness needs is not necessarily a machine. It needs an understanding of time. It needs a notion of being projected towards the future.
|
00:47:29.960 |
And I think this is a key point in Heidegger when he's critiquing Descartes.
|
00:47:33.960 |
The Cartesian subject exists frozen. It exists by itself. Heidegger's Dasein is inherently projected. It is never not moving. It is always, not looking towards, but engaging with the future, almost in the future without being in the future.
|
00:47:54.960 |
In Heidegger's words, projected onward towards death, or in Sheehan's words, thrown-openness. These aren't just meaningless words; they tell of a difference in how we think about that entity which has subjectivity.
|
00:48:07.960 |
In Heidegger's words, it's a verb rather than a noun. It's something which experiences presently, into the future, with the past, not something that just happens in an instant.
|
00:48:21.960 |
And so this temporal dynamism of Heidegger's Dasein is what I think is necessary for an artificial general intelligence, one, to become conscious, but two, to really engage with learning and engage with this world.
|
00:48:36.960 |
How can it learn to learn without this care, without this concern for and understanding of the worlds around it?
|
00:48:45.960 |
In Descartes, we can think of the human subject as existing in one world, the universe, and engaging with that one world through its special mediation. With Heidegger, that's not how consciousness exists at all.
|
00:48:56.960 |
It exists. It's projected on towards a multitude of worlds. One world might be the workshop, one world might be the temperature in the room.
|
00:49:04.960 |
And it has all of these independent concerns. So for instance, when I'm in the studio right now with you, I have the concern about what I'm talking about, what I'm going to talk about next.
|
00:49:14.960 |
And I'm trying to come up, in my head, with what word will come out of my mouth next. But I also, in the background, have a feeling of the temperature in this room.
|
00:49:23.960 |
And right now it's a perfectly fine temperature, and I'm not really considering the temperature. But the moment it becomes too hot for me, my attention, my concern will be moved towards it.
|
00:49:35.960 |
And I don't necessarily know exactly how that happens. But that type of, I wouldn't say multitasking, but that type of concern with the world is what is required of an artificial general intelligence.
|
00:49:47.960 |
It needs this general ability to exist in the world as an experience.
|
00:49:52.960 |
Right. Well, the Scholastic philosophers of the Middle Ages, going back to Aristotle, distinguished between two different kinds of potency, let's call it active potency,
|
00:50:04.960 |
and passive potency. Now active potency would be associated with everything that we do when we act actively. So we read a book, we calculate; all those things that maybe have to do with the easy problem of artificial intelligence are conceived of in terms of an active potency.
|
00:50:25.960 |
What Aristotle and others mean by passive potency is, so it's a potency, namely a potential, but it's our ability to be affected.
|
00:50:38.960 |
It is our capacity for a pathos, being open to a change of temperature in the room, and therefore responding to it.
|
00:50:48.960 |
Or feeling the pathos of another human being who might break into the studio in a state of hysteria because she's been robbed or something. This sort of passive potency, I would suggest, has to play a key role in any attempt to raise artificial intelligence to the general level.
|
00:51:09.960 |
Yeah, no, I agree completely. So when I think about what I do when I read a book and I'm learning something in the book, I'm not actively, well, I mean I am actively reading the book, I'm reading every word, I'm thinking about what the author is saying.
|
00:51:22.960 |
But when we think about what we really get out of a book, it's those relationships we make from what the book says to our own experiences.
|
00:51:30.960 |
When I'm reading a book, such as Heidegger or Aristotle, I'm reading what he says, but I'm relating that to other experiences. There's that active potency in the action of reading, but there's that passive potency in how my mind is attuned to the world in general and then can make relationships and make distinctions.
|
00:51:49.960 |
So I think when we intelligently learn things, before you can make, you have to receive. It's a receptive capacity, this capacity to passively receive something in an active mode. I don't know how else to put it.
|
00:52:02.960 |
No, no, I agree completely. The active is what I physically do. That's the Cartesian "I think." But the passive is this conscious openness to, you can call it intuitions or something.
|
00:52:16.960 |
So for instance, when I think about what I do when I'm programming a computer, when I'm actually engaging in the act of coding, at one point I am actively thinking about what I'm typing, and actively thinking about what is necessary for me to accomplish the next task, the next algorithm.
|
00:52:37.960 |
But there's also this passiveness, this intuition: my hands already know what they will be typing before I can consciously become aware of the ratiocination going on.
|
00:52:49.960 |
I have this inkling, this feeling of what needs to happen next before it even comes into the forefront of my head. That's that passive openness to the thought that's emerging in me.
|
00:53:01.960 |
It's not that my physical hands are ahead of it, but the thought, not the conscious thought, but that open thought, is ahead. My body somewhere, or my mind, my Dasein, being in the world somewhere, knows what the next command I'm going to be typing to the computer is before I consciously become aware of it.
|
00:53:22.960 |
So there's this activeness of my typing, but there's this passive openness, not something that I'm consciously making, that I'm consciously ratiocinating about, but something that comes to me intuitively as an already pre-thought thought, something that I can engage with.
|
00:53:38.960 |
And this is really where Heidegger talks about how we live and engage in experience in this world. And I think without this, without this ability to relate to this world in multiple ways, to have these intuitions that are existing before you even come to them.
|
00:53:55.960 |
This openness to thought and passive potency is essential to an artificial general intelligence. It's essential to any entity that can learn in the broad sense, because right now I can give a computer an ability to play Go, like a book on how to play Go.
|
00:54:16.960 |
But it has no conception of flowers. So for instance, Go is this beautiful game in which the terminology for specific types of states is based off of real-world things. So there's a specific type of position in Go that is named after a flower.
|
00:54:31.960 |
And the right way to play is that which completes the picture of a flower. How could a computer come up with this intuition that this kind of looks like a flower? And that knowledge that this looks like a flower comes before it even actively thinks about it. This idea that the board, where it's just a bunch of white and black pieces, looks like a flower, I can't actively think my way to that. That's something that needs to come to me in a way that's open.
|
00:55:00.960 |
Yeah, listen, you're talking about experience, and there are so many dimensions and depths to it. And the one that you're referring to in some ways is what Heidegger might call our attunement to the world.
|
00:55:15.960 |
What is that attunement? Is it something that takes place in our brains? I don't think so, not primarily. I think attunement is part of our whole personhood.
|
00:55:25.960 |
It's a somatic or sensory perception or yes, our intelligence at a certain level. But it means that we are already in touch with the world rather than this Cartesian inner ego whose relation to the world comes only through the mediation of representations or in the case of computers, algorithms or computational numbers and so forth.
|
00:55:52.960 |
That's a tall order though, Sam, to give a machine a sense of attunement to the world.
|
00:55:59.960 |
Well, here's where I'll get really practical with you and why I think this isn't just some abstract conversation about consciousness or philosophy, but that can actually impact artificial intelligence research right now.
|
00:56:12.960 |
So how can we create this attunement? You can never create an entity which is attuned a priori. You can't create this Cartesian subject which engages with the
|
00:56:22.940 |
world, as you say. No, attunement is not a thing but a relation to the world. And this is what I want to stress moving forward with artificial intelligence research: we need to stop thinking about how we can create the state of consciousness, and think about how we can create the experience, the action, the verb, the relationship itself.
|
00:56:41.940 |
And attunement, it's a great word, but it's not a "-ment," a static thing; it's an attuned continuance, a forward movement of attuning.
|
00:56:51.940 |
I am attuned to the world, I have a concern for the world. I am in touch with the world. All these things are active things. This in my opinion is where consciousness emerges.
|
00:57:00.940 |
It doesn't emerge in the special way I connect my computer network or my neural network. It emerges in the action of how this network engages with the world. It is that verb. It's in the empty space between the subject and the object. That empty space that relates the two is where consciousness, where experience, emerges. And that is what we can try to program in the future.
|
00:57:26.940 |
Remember his name, folks: Sam Ginn. We've been speaking with a Stanford sophomore about artificial intelligence, human experience, consciousness. And again, remember that name; I think we're going to hear about Sam a lot more in the future. Whether that future takes us to the singularity or not, whether he has something to do with it or not, he's going to be there somewhere.
|
00:57:47.940 |
And stay tuned for another program we're going to have on consciousness. But in this case it's going to be a show about how you expand your consciousness through a certain kind of psychotropic drug. That would be an interesting question: would an artificially intelligent machine be capable of intoxication and inebriation?
|
00:58:06.940 |
But we'll leave that for another conversation, Sam.
|
00:58:09.780 |
Thanks for joining us today on Entitled Opinions.
|
00:58:11.940 |
We'll try to get you back for a follow-up sometime.
|
00:58:16.060 |
Very soon. I'm Robert Harrison for Entitled Opinions.
|
00:58:18.980 |
Stay tuned, bye-bye.
|
00:58:20.760 |
- Yeah, thank you so much.
|
00:58:21.880 |
(upbeat music)
|