Turing 2018/6: “Computing Machinery and Intelligence” – Overview of Turing’s 1950 paper

1
00:00:00,090 --> 00:00:10,830
Before I start on today's lecture, I just want to point out something from the last lecture there was, as I pointed out at the time, a misprint.

2
00:00:10,830 --> 00:00:16,440
I've now corrected it. It should say x′ = x + 1.

3
00:00:16,440 --> 00:00:24,600
That's in slides 152, 153 and 154.

4
00:00:24,600 --> 00:00:29,400
It previously said x = x′ + 1, which was the wrong way round.

5
00:00:29,400 --> 00:00:33,390
This is clearly saying that x′ is the successor of x.

6
00:00:33,390 --> 00:00:47,230
That was correct. My gloss on it was mistaken. Sorry about that.

7
00:00:47,230 --> 00:01:00,850
Seems to be fine, Andy. Today we move on from the 1936 paper to the 1950 paper, which is even more famous.

8
00:01:00,850 --> 00:01:07,900
It's one of the most cited philosophical papers ever published: Computing Machinery and Intelligence.

9
00:01:07,900 --> 00:01:13,510
It was published in the prominent journal Mind in 1950.

10
00:01:13,510 --> 00:01:17,650
You'll find that there are quite a lot of allusions back to the 1936 paper,

11
00:01:17,650 --> 00:01:28,700
so watch out for those; quite a lot of what Turing says alludes to results that he had proved then.

12
00:01:28,700 --> 00:01:35,420
I'm just putting on the slides here some useful books on the philosophy of AI and on the Chinese room argument,

13
00:01:35,420 --> 00:01:42,440
as well as on Turing. We'll be dealing with Searle's Chinese room argument next time.

14
00:01:42,440 --> 00:01:46,760
Some useful collections of papers. This one we've already seen.

15
00:01:46,760 --> 00:01:55,400
There's a collection that I edited with Andy Clark back in 1996; that was from a conference commemorating Turing.

16
00:01:55,400 --> 00:02:02,450
And there are seven papers in there that are relevant to the Turing test and also my introduction.

17
00:02:02,450 --> 00:02:07,610
And there's a book devoted to the Chinese room argument that I mention there.

18
00:02:07,610 --> 00:02:13,550
Stanford Encyclopaedia of Philosophy is often very useful on philosophical things quite generally,

19
00:02:13,550 --> 00:02:20,000
and it's got an article on the Turing test and on the Chinese room, and there are some useful web resources.

20
00:02:20,000 --> 00:02:28,070
Andrew Hodges, who wrote the monumental biography of Alan Turing, has a website devoted to him.

21
00:02:28,070 --> 00:02:32,990
Lots of good stuff there and a very long paper with some useful stuff.

22
00:02:32,990 --> 00:02:36,800
I think of more mixed quality; not all the points made

23
00:02:36,800 --> 00:02:41,840
there are ones that I'd agree with, but it's got a lot of scholarly material on the Turing test.

24
00:02:41,840 --> 00:02:47,680
They're well worth looking at. OK.

25
00:02:47,680 --> 00:02:58,510
The 1950 paper is perhaps often taken more seriously than it ought to be, and this is a point worth starting with.

26
00:02:58,510 --> 00:03:05,110
This is a quotation from Robin Gandy; I think it may be the last thing he ever published.

27
00:03:05,110 --> 00:03:16,390
This was in our collection in 1996. The 1950 paper was intended not so much as a penetrating contribution to philosophy, but as propaganda.

28
00:03:16,390 --> 00:03:21,460
He wrote this paper, unlike his mathematical papers, quickly and with enjoyment,

29
00:03:21,460 --> 00:03:29,170
I can remember him reading aloud to me some of the passages always with a smile, sometimes with a giggle.

30
00:03:29,170 --> 00:03:35,410
Some of the discussions of the paper load it with more significance than it was intended to bear.

31
00:03:35,410 --> 00:03:39,430
Robin Gandy was an intimate friend of Turing's.

32
00:03:39,430 --> 00:03:48,190
He was his Ph.D. student, and he was Turing's literary executor; Turing left his papers to Gandy.

33
00:03:48,190 --> 00:03:53,920
He also, interestingly, ended up at Oxford, where one of the things that he did was to start the maths and philosophy degree.

34
00:03:53,920 --> 00:04:02,470
That was back in 1969. In 1972, philosophy and modern languages was started as the next degree.

35
00:04:02,470 --> 00:04:07,750
The next degree to be started after that in philosophy was computer science and philosophy, in 2012.

36
00:04:07,750 --> 00:04:13,810
So a bit of a legacy then.

37
00:04:13,810 --> 00:04:20,620
OK, The Imitation Game is Turing's way of reworking the question.

38
00:04:20,620 --> 00:04:30,400
Can machines think? And this is a bit confusing, a bit odd.

39
00:04:30,400 --> 00:04:36,520
I propose to consider the question, Can machines think? If the meaning of the words machine and think

40
00:04:36,520 --> 00:04:43,510
are to be found by examining how they are commonly used, the answer is to be sought in a statistical survey. But this is absurd.

41
00:04:43,510 --> 00:04:51,280
Instead, I shall replace the question by another, which is closely related to it, but relatively unambiguous.

42
00:04:51,280 --> 00:05:02,920
Now that's a bit peculiar, isn't it? Philosophers are constantly asking questions about the meaning of things like knowledge, free will, and so on.

43
00:05:02,920 --> 00:05:10,030
And we don't standardly think, well, we could go and do a statistical survey and find out what people say.

44
00:05:10,030 --> 00:05:14,680
But nor do we standardly think that the only alternative is to replace it with a completely different question.

45
00:05:14,680 --> 00:05:23,950
So there's something a little bit odd about Turing's procedure, but let's see where it leads.

46
00:05:23,950 --> 00:05:36,160
His replacement question is posed in the context of an imitation game, which he introduces via a question about identity.

47
00:05:36,160 --> 00:05:49,090
We have an interrogator in one room who is sending questions to two individuals in two other rooms.

48
00:05:49,090 --> 00:05:56,200
The questions are being sent in text, so Turing suggests a teletype machine.

49
00:05:56,200 --> 00:06:03,850
Here I've actually got Turing's statue, and that's an Enigma machine, but it's close enough to a teletype that I've put it in there.

50
00:06:03,850 --> 00:06:10,930
And here we've got Charles Babbage and Ada Lovelace, whom I've taken as my sample man and woman.

51
00:06:10,930 --> 00:06:18,190
When the game is played, the interrogator doesn't actually know the identity of the people concerned; he has no idea who they are.

52
00:06:18,190 --> 00:06:29,650
He just knows that there's one man and one woman in separate rooms, and he is sending questions to them in text and receiving answers back in text.

53
00:06:29,650 --> 00:06:36,760
And the interrogator's job is to try to work out which is the man and which is the woman.

54
00:06:36,760 --> 00:06:42,280
But the twist on it is that the man is pretending to be a woman.

55
00:06:42,280 --> 00:06:48,670
So if, as Turing says, he asks, How long is your hair?

56
00:06:48,670 --> 00:06:55,570
The man might reply, My hair is shingled, and the longest strands are about nine inches long.

57
00:06:55,570 --> 00:07:03,110
Meanwhile, the woman may be saying, I'm the woman, don't listen to him, and you can imagine other questions that might be asked.

58
00:07:03,110 --> 00:07:09,310
I mean, in the modern context, one might ask who won the Premiership this year?

59
00:07:09,310 --> 00:07:15,010
And the man will respond, Oh, I don't know anything about football. I'll have to ask my boyfriend.

60
00:07:15,010 --> 00:07:18,520
So you can imagine it could be quite fun.

61
00:07:18,520 --> 00:07:25,750
I mean, it seems to have its origin in a sort of Victorian parlour game.

62
00:07:25,750 --> 00:07:38,050
Now, what the man is trying to do, remember, is to pretend successfully to be the woman. He succeeds in that if the interrogator can't tell who's who.

63
00:07:38,050 --> 00:07:45,970
So if the interrogator ends up essentially tossing a coin; or suppose you play this game repeatedly with different

64
00:07:45,970 --> 00:07:52,900
interrogators, and the man scores about 50 per cent, so the interrogator is only able to catch him half the time.

65
00:07:52,900 --> 00:07:54,880
That's pretty much as good as it gets.

66
00:07:54,880 --> 00:08:01,120
It would be a bit peculiar if the man were able to impersonate a woman better than a woman can present herself.

67
00:08:01,120 --> 00:08:20,150
So the interrogator is in a different room, and tones of voice are ruled out because the answers are purely by text, as you see.

68
00:08:20,150 --> 00:08:33,460
Turing is suggesting a teleprinter. And then we get the computer being introduced.

69
00:08:33,460 --> 00:08:38,350
We now ask what will happen when a machine takes the place of A,

70
00:08:38,350 --> 00:08:41,110
that is, the deceitful man, in this game.

71
00:08:41,110 --> 00:08:49,600
Will the interrogator decide wrongly as often when the game is played like this, as he does when the game is played between a man and a woman?

72
00:08:49,600 --> 00:08:56,360
These questions replace our original can machines think?

73
00:08:56,360 --> 00:09:01,450
That's, again, rather peculiar. They replace the original question.

74
00:09:01,450 --> 00:09:11,740
What does that mean, then? The question was Can machines think? That looks like it's got a yes or no answer.

75
00:09:11,740 --> 00:09:22,930
Or maybe the answer would be: possibly, yes, machines could in principle think. Are we really supposed to replace that with the question,

76
00:09:22,930 --> 00:09:29,920
can a machine play The Imitation Game as well as a man can against a woman?

77
00:09:29,920 --> 00:09:38,290
That seems a bit peculiar. Moreover, it's not clear exactly what the computer's job is.

78
00:09:38,290 --> 00:09:47,560
Is the job here to pretend to be a woman, or is the job to pretend to be a human?

79
00:09:47,560 --> 00:09:52,300
That actually does become clear later in Sections two and five of the paper.

80
00:09:52,300 --> 00:09:58,840
It becomes obvious that Turing is seeing the computer as trying to imitate a person.

81
00:09:58,840 --> 00:10:06,040
I mean, he often says a man, but in 1950, when you said man, you often meant person.

82
00:10:06,040 --> 00:10:13,570
That's changed considerably. So this is the way the Turing test is normally understood.

83
00:10:13,570 --> 00:10:21,370
We've still got the interrogator. We've got the teletype communicating to two different rooms, if you like.

84
00:10:21,370 --> 00:10:30,100
One of them contains a computer running a programme. The other one contains a person, who could be either a man or a woman; it doesn't matter.

85
00:10:30,100 --> 00:10:35,500
And the computer is trying to impersonate a person.

86
00:10:35,500 --> 00:10:47,320
Either man or woman, it doesn't matter. And there's a result that Turing seems to be hinting at; he doesn't state it very explicitly.

87
00:10:47,320 --> 00:10:54,340
But it seems to be the drift of his paper that if the interrogator can't reliably distinguish the computer from the human,

88
00:10:54,340 --> 00:11:07,750
then the computer programme must be judged to be intelligent, or thinking. That seems to be where he's going.
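The standard set-up just described can be sketched as a toy protocol. This is purely my own illustration, not anything in Turing's paper: the function names and the coin-toss judge are assumptions, chosen to show how a judge who cannot tell the channels apart is reduced to chance, the roughly 50 per cent benchmark mentioned earlier.

```python
import random

def imitation_game(judge, rounds=10_000):
    """Play repeated rounds of the standard Turing test.

    Each round, the computer is hidden behind channel 0 or 1 at random;
    the judge sees only the two text transcripts and guesses which
    channel is the computer. Returns the judge's success rate.
    """
    correct = 0
    for _ in range(rounds):
        computer_channel = random.randrange(2)  # hidden from the judge
        transcripts = ("answers from channel 0", "answers from channel 1")
        guess = judge(transcripts)
        if guess == computer_channel:
            correct += 1
    return correct / rounds

# A judge who cannot distinguish the two is reduced to a coin toss,
# so over many rounds scores about 50 per cent.
rate = imitation_game(lambda transcripts: random.randrange(2))
print(round(rate, 2))
```

The computer "passes" precisely when no judge can do reliably better than this coin-toss rate.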

89
00:11:07,750 --> 00:11:13,900
In Section two of the paper, Turing's idea about what he's doing is clarified.

90
00:11:13,900 --> 00:11:22,270
The interrogators questions can be used to elicit the computer's knowledge about almost any of the fields of human endeavour.

91
00:11:22,270 --> 00:11:29,380
And he gives some examples we'll see in a moment. And notice that the set-up has the advantage, as he points

92
00:11:29,380 --> 00:11:36,610
out, of drawing a fairly sharp line between the physical and the intellectual capacities of a person.

93
00:11:36,610 --> 00:11:47,560
So one great advantage of the test is, as we've seen before, the tone of voice isn't taken into account, nor is the physical appearance.

94
00:11:47,560 --> 00:11:57,610
The interrogator doesn't actually see who is responding. All he gets is the textual responses and has to judge on that basis.

95
00:11:57,610 --> 00:12:01,930
But as Turing says, those textual responses could cover a wide range of things.

96
00:12:01,930 --> 00:12:07,420
So here's an illustrative conversation that he gives.

97
00:12:07,420 --> 00:12:13,120
Please write me a sonnet on the subject of the Forth Bridge. Back comes the answer:

98
00:12:13,120 --> 00:12:20,170
Count me out on this one. I never could write poetry. It may seem a rather strange thing to ask

99
00:12:20,170 --> 00:12:27,520
as your first question, please write me a sonnet; not the sort of thing you would expect to happen in an interactive conversation.

100
00:12:27,520 --> 00:12:32,080
We'll see why Turing gives the example of a sonnet later.

101
00:12:32,080 --> 00:12:38,830
He's responding to Geoffrey Jefferson. Add 34957 to

102
00:12:38,830 --> 00:12:45,010
70764. Pause about 30 seconds and then give as answer:

103
00:12:45,010 --> 00:12:48,070
105621.

104
00:12:48,070 --> 00:13:02,080
Notice Turing is here deliberately having the system pretend to take longer than it needs, so there's a clear element of deception playing a role.
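Incidentally, the sum in that exchange is easy to check. Note that the answer given in the dialogue, 105621, is not the true total; that slip is often read as part of the same deliberate imitation of human fallibility, though that gloss is a common one rather than something stated in the lecture.

```python
# The addition the interrogator requests in Turing's illustrative dialogue.
true_sum = 34957 + 70764
print(true_sum)            # 105721
# The dialogue's answer, 105621, differs from the true sum by 100.
print(true_sum - 105621)   # 100
```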

105
00:13:02,080 --> 00:13:13,300
Do you play chess? Yes. And he gives a very simple chess set-up, and after a pause of 15 seconds: R-R8 mate.

106
00:13:13,300 --> 00:13:20,350
I conclude that Turing was not particularly expert at chess if he thought it took 15 seconds to work that one out.

107
00:13:20,350 --> 00:13:24,760
There we go. Yeah, OK.

108
00:13:24,760 --> 00:13:33,160
An obvious objection to the Turing test: it seems to be biased in favour of human thought. And Turing asks, may not

109
00:13:33,160 --> 00:13:40,330
machines carry out something which ought to be described as thinking, but which is very different from what a human does?

110
00:13:40,330 --> 00:13:44,440
And indeed, this seems to be an obvious objection.

111
00:13:44,440 --> 00:13:53,990
I mean, suppose an alien comes down from the planet Zog and takes part in this test.

112
00:13:53,990 --> 00:13:59,300
The alien is asked this question and comes back with an answer immediately.

113
00:13:59,300 --> 00:14:04,070
We just know it isn't a human, certainly not any sort of normal human.

114
00:14:04,070 --> 00:14:06,290
Does that mean it's not intelligent? No.

115
00:14:06,290 --> 00:14:16,010
On the contrary, if the alien is able to do arithmetic much more quickly than we can, if anything, that suggests it's more intelligent, not less.

116
00:14:16,010 --> 00:14:21,440
And likewise, if you ask it, you know, how long is your hair?

117
00:14:21,440 --> 00:14:30,830
What hair? Maybe the aliens on Zog don't have any hair. That wouldn't count against its being intelligent.

118
00:14:30,830 --> 00:14:39,020
So it seems odd to have a test which depends on mimicking humans.

119
00:14:39,020 --> 00:14:42,950
Turing's response is that the objection is indeed a very strong one,

120
00:14:42,950 --> 00:14:49,130
but at least we can say that if nevertheless a machine can be constructed to play The Imitation Game satisfactorily,

121
00:14:49,130 --> 00:14:58,010
we need not be troubled by this objection. Now again, he doesn't say very explicitly what he's driving at here,

122
00:14:58,010 --> 00:15:09,440
but I take it that this is a strong suggestion that he wants us to take the test as a sufficient proof of intelligence, but not a necessary test.

123
00:15:09,440 --> 00:15:14,060
So if something passes the Turing test, then we are to deem it intelligent.

124
00:15:14,060 --> 00:15:27,480
The fact that it doesn't pass the test, e.g. because it responds more quickly than a human would, should not mean that we count it as unintelligent.

125
00:15:27,480 --> 00:15:32,130
Now I want to just now refer to something later in the paper.

126
00:15:32,130 --> 00:15:34,890
I mean, in this lecture I'm generally going,

127
00:15:34,890 --> 00:15:42,720
as you see, in sequence through the paper to guide your reading of it; and please do read it before the next lecture.

128
00:15:42,720 --> 00:15:48,870
I hope you know this will help you to identify the principal points in it.

129
00:15:48,870 --> 00:15:58,990
But later on in Section six, Turing is going to give an example, which I think probably is the best argument for the Turing test.

130
00:15:58,990 --> 00:16:07,840
And it concerns the choice of words in a poem. And here is the dialogue.

131
00:16:07,840 --> 00:16:10,180
So again, we've got this sonnet.

132
00:16:10,180 --> 00:16:20,980
We imagine the interrogator questioning what Turing calls a witness, that is, one of the people in one of the rooms: in the first line of your sonnet,

133
00:16:20,980 --> 00:16:28,300
which reads, Shall I compare thee to a summer's day, would not a spring day do as well

134
00:16:28,300 --> 00:16:34,300
or better? Back comes the answer: It wouldn't scan.

135
00:16:34,300 --> 00:16:39,160
In other words, it would have the wrong rhythm. Shall I compare thee to a spring day?

136
00:16:39,160 --> 00:16:43,990
Not enough syllables. How about a winter's day? That would scan

137
00:16:43,990 --> 00:16:53,350
all right. Shall I compare thee to a winter's day? Yes, but nobody wants to be compared to a winter's day.

138
00:16:53,350 --> 00:17:01,390
Would you say Mr. Pickwick reminded you of Christmas? So we've had an allusion to Shakespeare and a Shakespearean sonnet.

139
00:17:01,390 --> 00:17:08,860
Now we get Dickens. OK. Does Mr. Pickwick remind you of Christmas? In a way.

140
00:17:08,860 --> 00:17:14,190
Yet Christmas is a winter's day, and I don't think Mr. Pickwick would mind the comparison.

141
00:17:14,190 --> 00:17:23,100
I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

142
00:17:23,100 --> 00:17:35,400
OK, now let's suppose that that conversation took place and we were completely assured that these responses had not, as it were, been built in,

143
00:17:35,400 --> 00:17:36,930
that there wasn't any trickery involved,

144
00:17:36,930 --> 00:17:50,070
you know, canned responses just coming out. And suppose conversations of similar levels of sophistication occurred across quite a range of topics,

145
00:17:50,070 --> 00:17:58,740
not just on poetry but on a number of things. The force of Turing's argument here is to say: surely,

146
00:17:58,740 --> 00:18:04,110
Surely we would then have to say that this is exhibiting intelligence.

147
00:18:04,110 --> 00:18:15,060
This would be pretty strong evidence. And in the context of 1950, where obviously Turing cannot appeal to achievements in robotics,

148
00:18:15,060 --> 00:18:22,500
for example, or simulation of human behaviour, physical behaviour or anything like that,

149
00:18:22,500 --> 00:18:30,420
this kind of achievement, if it were to be achieved in verbal response,

150
00:18:30,420 --> 00:18:35,940
would be pretty much as strong evidence of intelligence as perhaps you could get.

151
00:18:35,940 --> 00:18:41,460
So it's propaganda. It's quite effective.

152
00:18:41,460 --> 00:18:51,660
OK. In Section 3, Turing goes on to the machines concerned in the game, and you will see that there is another bit of humour coming in.

153
00:18:51,660 --> 00:18:58,920
We want to allow all sorts of different engineering techniques to be used to create these machines.

154
00:18:58,920 --> 00:19:04,080
However, we wish to exclude from the machines men born in the usual manner.

155
00:19:04,080 --> 00:19:05,280
In other words,

156
00:19:05,280 --> 00:19:15,570
we're not going to treat biological reproduction as an appropriate way of producing a thinking machine because otherwise that would include all of us.

157
00:19:15,570 --> 00:19:24,760
So Turing is obviously pushing towards saying our restriction is going to be digital computers, right?

158
00:19:24,760 --> 00:19:32,490
We don't want to rule out a machine on the grounds that it's made of one thing rather than another.

159
00:19:32,490 --> 00:19:36,720
We do want to rule out biological organisms.

160
00:19:36,720 --> 00:19:46,290
So what we're going to do is go for digital computers, which of course, is exactly the domain of his 1936 paper.

161
00:19:46,290 --> 00:19:53,400
The idea behind digital computers is that these machines are intended to carry out any operations which could be done by a human computer.

162
00:19:53,400 --> 00:19:58,410
That's a clear echo of section nine of the 1936 paper,

163
00:19:58,410 --> 00:20:07,890
where he argues for the Turing machine as a way of encapsulating all the things that a human computer could do.

164
00:20:07,890 --> 00:20:13,440
He then gives an outline of how they work.

165
00:20:13,440 --> 00:20:21,540
He considers some particular cases. He suggests that a digital computer could contain a random element.

166
00:20:21,540 --> 00:20:26,820
You could have a computer with an unlimited store and that has special theoretical interest.

167
00:20:26,820 --> 00:20:37,380
Obviously, Turing machines have an unlimited store, and he alludes to Charles Babbage showing that digital machines needn't be electrical.

168
00:20:37,380 --> 00:20:42,780
They could be mechanical. Charles Babbage designed the Analytical Engine.

169
00:20:42,780 --> 00:20:48,000
Unfortunately, it was never built; it was far too complex and expensive.

170
00:20:48,000 --> 00:20:53,310
But the point of the Analytical Engine is that it shows a digital computer

171
00:20:53,310 --> 00:20:58,950
could in principle be made as a physical, mechanical machine,

172
00:20:58,950 --> 00:21:05,250
rather than an electronic one. Next, the universality of digital computers.

173
00:21:05,250 --> 00:21:09,720
Again, a clear echo of the 1936 paper.

174
00:21:09,720 --> 00:21:17,790
And here, bear in mind that Turing is addressing an audience of probably mainly philosophers and general readers,

175
00:21:17,790 --> 00:21:24,900
not people who would be familiar with the results of his 1936 paper.

176
00:21:24,900 --> 00:21:33,210
So processes in the world are really continuous and indeed chaotic.

177
00:21:33,210 --> 00:21:41,610
And Turing gives an illustration of what we now call the butterfly effect, which was quite prescient in 1950.

178
00:21:41,610 --> 00:21:51,360
But even where processes are continuous, they can usefully be modelled by discrete systems; and discrete state machines,

179
00:21:51,360 --> 00:21:58,020
as he describes them, are utterly predictable. There is no reason why this calculation should not be carried out by means of a digital computer,

180
00:21:58,020 --> 00:22:00,570
provided it could be carried out sufficiently quickly.

181
00:22:00,570 --> 00:22:09,120
The digital computer could mimic the behaviour of any discrete state machine, so digital computers are, in a sense, universal.
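That mimicry claim can be illustrated in a few lines: a discrete state machine is just a finite transition table, and a digital computer mimics it by table lookup. The three-state machine below is a hypothetical example of my own, loosely in the spirit of Turing's click-wheel illustration, not code from the paper.

```python
# A discrete state machine as data: a transition table
# (state, input) -> next state, plus an output for each state.
TRANSITIONS = {
    ("q1", 0): "q2", ("q2", 0): "q3", ("q3", 0): "q1",  # input 0: advance
    ("q1", 1): "q1", ("q2", 1): "q2", ("q3", 1): "q3",  # input 1: stay put
}
OUTPUT = {"q1": 0, "q2": 0, "q3": 1}  # the machine's visible signal

def run(inputs, state="q1"):
    """Mimic the machine on a digital computer: pure table lookup."""
    outputs = []
    for i in inputs:
        state = TRANSITIONS[(state, i)]
        outputs.append(OUTPUT[state])
    return outputs

print(run([0, 0, 0, 1, 0]))
```

Any discrete state machine can be captured this way, which is why a single programmable digital computer can mimic them all; that is the sense of "universal" here.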

182
00:22:09,120 --> 00:22:15,690
Again, we've seen his model of the universal Turing machine from 1936.

183
00:22:15,690 --> 00:22:28,140
He's pointing out now that not only can you have a computer that can mimic any digital system, any system of axioms, rules of inference, etc.,

184
00:22:28,140 --> 00:22:42,090
but also any continuous system can generally, at least, be modelled with arbitrary accuracy by some corresponding digital system.

185
00:22:42,090 --> 00:22:44,550
So because digital computers are universal,

186
00:22:44,550 --> 00:22:50,610
the Imitation Game question reduces to this: let us fix our attention on one particular digital computer, C.

187
00:22:50,610 --> 00:22:56,550
Is it true that by modifying this computer to have an adequate storage,

188
00:22:56,550 --> 00:23:01,950
suitably increasing its speed of action and providing it with an appropriate programme,

189
00:23:01,950 --> 00:23:06,960
C can be made to play satisfactorily the part of A in The Imitation Game,

190
00:23:06,960 --> 00:23:12,840
the part of B being taken by a man, that is, a person. Right, here

191
00:23:12,840 --> 00:23:20,790
it's obvious that the part of B is not meant to be taken specifically by a woman, with the computer pretending to be a woman.

192
00:23:20,790 --> 00:23:40,020
All right. OK, now we come to section six and in section six of the paper, Turing considers and rejects nine different objections to his thesis.

193
00:23:40,020 --> 00:23:47,460
And some of these he treats rather humorously, not terribly seriously.

194
00:23:47,460 --> 00:24:01,350
Some of them he discusses more seriously. But I think he also in doing so makes some significant mistakes, which we will see.

195
00:24:01,350 --> 00:24:09,120
But before he considers the objections, he offers a couple of predictions we'll be coming back to these in a later lecture,

196
00:24:09,120 --> 00:24:15,780
but they're quite significant and I think quite prescient.

197
00:24:15,780 --> 00:24:20,550
I believe that in about fifty years' time, OK, about 2000,

198
00:24:20,550 --> 00:24:30,540
it will be possible to programme computers with a storage capacity of about 10 to the nine to make them play The Imitation Game so well that

199
00:24:30,540 --> 00:24:40,940
an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

200
00:24:40,940 --> 00:24:51,770
OK. So in 2000, would it have been possible to programme a computer with a storage capacity of about a gigabyte to make it

201
00:24:51,770 --> 00:25:04,300
play The Imitation Game so well that the average interrogator would go wrong 30 per cent of the time?

202
00:25:04,300 --> 00:25:08,560
I think actually that's rather a plausible prediction.

203
00:25:08,560 --> 00:25:16,690
I think if artificial intelligence research had focussed on achieving that, I think it would have achieved it.

204
00:25:16,690 --> 00:25:22,690
It's not true that it did. But I think it could have. We'll come back to that in a later lecture.

205
00:25:22,690 --> 00:25:29,620
Look at the second one. The original question, Can machines think?, I believe to be too meaningless to deserve discussion.

206
00:25:29,620 --> 00:25:32,200
Nevertheless, I believe that at the end of the century,

207
00:25:32,200 --> 00:25:37,870
the use of words and general educated opinion will have altered so much that one will be able

208
00:25:37,870 --> 00:25:45,720
to speak of machines thinking without expecting to be contradicted.

209
00:25:45,720 --> 00:25:52,960
I'm in a more authoritative position to discuss that than most of you, and I will tell you this: in 2000,

210
00:25:52,960 --> 00:26:01,240
I think it was quite plausible that one could talk of machines thinking without expecting to be contradicted.

211
00:26:01,240 --> 00:26:05,290
I think now it happens all the time anyway.

212
00:26:05,290 --> 00:26:17,800
That's another thing we'll come back to. But do note that as you go through the paper, those are quite significant predictions and important ones.

213
00:26:17,800 --> 00:26:23,110
OK. The theological objection: thinking is a function of man's immortal soul.

214
00:26:23,110 --> 00:26:28,000
God has given an immortal soul to every man and woman, but not to any other animal or to machines.

215
00:26:28,000 --> 00:26:40,480
Hence, no animal or machine can think. I'm not going to spend long on Turing's discussion of this, which seems somewhat flippant.

216
00:26:40,480 --> 00:26:47,350
Certainly, Turing is not sympathetic to such religious doctrines.

217
00:26:47,350 --> 00:26:55,470
He highlights the absurdity of some religious views, e.g., women don't have souls.

218
00:26:55,470 --> 00:27:00,580
Suppose God gives souls. Why shouldn't he give a soul to a computer?

219
00:27:00,580 --> 00:27:10,000
I mean, if one thought that this was a serious discussion, there is a lot that could be said here.

220
00:27:10,000 --> 00:27:14,350
Turing rather skips over it. I'm going to as well.

221
00:27:14,350 --> 00:27:21,430
I think whatever is significant about that objection can be wrapped up in the problem of consciousness,

222
00:27:21,430 --> 00:27:25,420
which we'll come to later. The heads in the sand

223
00:27:25,420 --> 00:27:30,670
objection: again, rather flippant. The consequences of machines thinking would be too dreadful.

224
00:27:30,670 --> 00:27:33,940
Let us hope and believe that they cannot do so.

225
00:27:33,940 --> 00:27:44,560
And Turing is suggesting that this lies behind many people's opposition to the idea of machines thinking: they just don't want to think about it,

226
00:27:44,560 --> 00:27:53,500
they bury their heads in the sand. Consolation is more appropriate than refutation: perhaps to be sought in the transmigration of souls.

227
00:27:53,500 --> 00:28:00,880
Yeah, sure. I think again, Turing is having some fun.

228
00:28:00,880 --> 00:28:09,580
I don't actually think the paper could possibly be published in a journal like Mind in its current form today.

229
00:28:09,580 --> 00:28:18,490
It would have had a lot heavier editing for some of this stuff, but it does add to the entertainment of reading it.

230
00:28:18,490 --> 00:28:25,210
OK, number three, we come to the mathematical objection and this is a serious objection.

231
00:28:25,210 --> 00:28:37,990
It's been raised in recent years, well, by John Lucas many years ago now, actually, and by Roger Penrose. The idea is that results like Gödel's,

232
00:28:37,990 --> 00:28:44,230
but also, you see, Turing is alluding to other results that we looked at in the last lecture: Church,

233
00:28:44,230 --> 00:28:52,580
Kleene, Rosser, Turing; results that demonstrate the limited power of discrete state machines.

234
00:28:52,580 --> 00:29:05,050
And does that show, actually, that humans have an ability that no discrete machine has? Because we can run through, for example, the Turing,

235
00:29:05,050 --> 00:29:15,430
sorry, the Gödel proof. We can see that the Gödel formula is true, but the Gödel formula cannot be proved by the formal system.

236
00:29:15,430 --> 00:29:19,960
Therefore, we are able to do something that the formal system cannot.

237
00:29:19,960 --> 00:29:26,800
Therefore, the human brain has a power that discrete state machines do not.

238
00:29:26,800 --> 00:29:34,280
That's a sketch of the kind of argument that Lucas and Penrose suggest.

239
00:29:34,280 --> 00:29:44,290
So it's interesting that Turing anticipates this. Here you can see a clear reference back to the 1936 paper.

240
00:29:44,290 --> 00:29:49,690
Consider the machine specified as follows: will this machine ever answer yes to any question? You can see

241
00:29:49,690 --> 00:29:53,560
that's a bit like: will the machine ever print a zero on the tape?

242
00:29:53,560 --> 00:30:01,180
He knew he had proved that you cannot have a general machine that will answer that.

243
00:30:01,180 --> 00:30:10,060
But in answer to the question, does that show that these machines are less powerful than the human mind?

244
00:30:10,060 --> 00:30:17,350
Not necessarily, no, because humans have limitations too. And as for superiority over one particular machine:

245
00:30:17,350 --> 00:30:28,000
I mean, suppose we have some Turing machine implementing some axiomatic system, and then we do a Gödel on it and find a formula that it cannot prove.

246
00:30:28,000 --> 00:30:39,490
That just shows we're superior to that machine in that respect. It doesn't show we are superior to all machines in all respects, or even in one.

247
00:30:39,490 --> 00:30:44,920
Okay, so I'm going to put that to one side, but that is potentially an important objection.

248
00:30:44,920 --> 00:30:49,510
So it's one that has been taken seriously down the years.

249
00:30:49,510 --> 00:31:00,430
The argument from consciousness, I think, is probably the most important objection, and it's where I'm going to suggest Turing goes most wrong.

250
00:31:00,430 --> 00:31:05,980
He quotes from Geoffrey Jefferson's Lister Oration of 1949.

251
00:31:05,980 --> 00:31:14,770
So here is Jefferson saying why computers can't think: not until a machine can write a sonnet or

252
00:31:14,770 --> 00:31:22,210
compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols,

253
00:31:22,210 --> 00:31:30,340
could we agree that machine equals brain, that is, not only write it but know that it had written it.

254
00:31:30,340 --> 00:31:41,290
So you can see Jefferson is putting a lot of emphasis on the writing of a sonnet, and Turing responds to this with his sonnet examples.

255
00:31:41,290 --> 00:31:53,650
No mechanism could feel, and not merely artificially signal (an easy contrivance), pleasure at its successes, grief when its valves fuse,

256
00:31:53,650 --> 00:32:01,240
be made miserable, be charmed, be angry or depressed. OK, so you get the idea.

257
00:32:01,240 --> 00:32:05,710
The computer can't genuinely feel anything.

258
00:32:05,710 --> 00:32:16,170
So when the computer produces a sonnet, say it's not doing it through genuine thinking.

259
00:32:16,170 --> 00:32:24,420
Now, Turing's response to this is amusing but, I'm going to suggest, dubious.

260
00:32:24,420 --> 00:32:31,560
This argument appears to be a denial of the validity of our test, according to the most extreme form of this view.

261
00:32:31,560 --> 00:32:37,260
The only way to know that either a machine or a man thinks is to be that particular man.

262
00:32:37,260 --> 00:32:46,200
It is, in fact, the solipsistic point of view. It may be the most logical view to hold, but it makes communication of ideas difficult.

263
00:32:46,200 --> 00:32:51,120
A is liable to believe 'A thinks but B does not',

264
00:32:51,120 --> 00:32:56,010
Meanwhile, B believes 'B thinks but A does not'.

265
00:32:56,010 --> 00:33:05,190
Instead of arguing continually over this point, it is usual to have the polite convention that everyone thinks.

266
00:33:05,190 --> 00:33:12,270
OK. Solipsism, by the way, is the theory that I am the only thing that exists in the world, right?

267
00:33:12,270 --> 00:33:17,520
Everything else is a figment of my imagination, including all of you.

268
00:33:17,520 --> 00:33:24,480
So if I'm a solipsist, right, I genuinely think that I am the only thing that's thinking.

269
00:33:24,480 --> 00:33:37,680
And Turing is suggesting that if we follow the logic of Jefferson's objection, we would actually come to the conclusion that solipsism is true.

270
00:33:37,680 --> 00:33:42,780
Now, he then gives his viva voce example.

271
00:33:42,780 --> 00:33:49,020
So the previous point, and we'll come back to it in a moment, I think is highly dubious.

272
00:33:49,020 --> 00:33:54,360
But I think some of what he says about the sonnet example is strong.

273
00:33:54,360 --> 00:33:59,580
So just to remind you, that's the conversation about the sonnet.

274
00:33:59,580 --> 00:34:04,620
You can see it's quite a sophisticated conversation.

275
00:34:04,620 --> 00:34:11,130
What would Jefferson say if the sonnet writing machine was able to answer like this in the viva voce?

276
00:34:11,130 --> 00:34:17,640
I do not know whether he would regard the machine as merely artificially signalling these answers.

277
00:34:17,640 --> 00:34:27,300
But if the answers were satisfactory and sustained as in the above passage, I do not think he would describe it as an easy contrivance.

278
00:34:27,300 --> 00:34:31,920
In short, then, I think that most of those who support the argument from consciousness could be

279
00:34:31,920 --> 00:34:38,620
persuaded to abandon it rather than be forced into the solipsistic position.

280
00:34:38,620 --> 00:34:49,810
OK. I think there's a good point there, but I think there's a confusion of two quite distinct lines of thought.

281
00:34:49,810 --> 00:34:58,360
So one of those is that Jefferson is denying the validity of the Turing test because it does not test for genuine consciousness.

282
00:34:58,360 --> 00:35:05,230
And according to Jefferson, genuine consciousness, rather than artificial signalling is necessary for intelligence.

283
00:35:05,230 --> 00:35:13,060
OK, that's one line of thought. A different line of thought is that artificial signalling of apparent emotions

284
00:35:13,060 --> 00:35:18,480
is unworthy of being deemed intelligent because it's an easy contrivance.

285
00:35:18,480 --> 00:35:25,150
OK, so those are two different points. One is saying consciousness is crucial: it has got to be genuine feeling.

286
00:35:25,150 --> 00:35:31,480
The other one says Easy contrivance isn't enough.

287
00:35:31,480 --> 00:35:35,740
Now Turing runs those together in his response.

288
00:35:35,740 --> 00:35:46,120
And I think his answer to the easy contrivance point is much stronger than his response to the first, on consciousness.

289
00:35:46,120 --> 00:35:47,630
What would he have been better off saying?

290
00:35:47,630 --> 00:35:55,000
And we'll talk about this more in the next two lectures. After he gave the sonnet example and others, not just the sonnet,

291
00:35:55,000 --> 00:36:00,010
but examples of similar sophistication of conversation,

292
00:36:00,010 --> 00:36:08,800
he should have said something like this: if the answers were satisfactory and sustained, as in the above passage (a direct quote from him),

293
00:36:08,800 --> 00:36:15,910
then there would be reason to call the machine intelligent, irrespective of whether or not it has genuine feelings.

294
00:36:15,910 --> 00:36:23,440
Intelligence need not require consciousness. That, I think, is the way he should have argued.

295
00:36:23,440 --> 00:36:32,830
Obviously, there's a lot more to say about this, so we'll come back to it in the next lecture, and especially the one after that.

296
00:36:32,830 --> 00:36:39,850
Okay, so we'll put consciousness on one side for now. We then get to the arguments from various disabilities.

297
00:36:39,850 --> 00:36:46,210
Again, Turing is obviously being humorous because he includes amongst the disabilities, being kind,

298
00:36:46,210 --> 00:36:52,780
beautiful, friendly, having initiative, having a sense of humour, telling right from wrong, making mistakes.

299
00:36:52,780 --> 00:36:54,980
Okay, that's a disability, maybe. Fall in love.

300
00:36:54,980 --> 00:37:03,040
Well, maybe. Then: enjoy strawberries and cream, learn from experience, use words properly, be the subject of its own thought.

301
00:37:03,040 --> 00:37:05,950
Do something really new.

302
00:37:05,950 --> 00:37:15,370
Now, obviously, the limited machines of 1950 couldn't do these things, but it requires some argument to show that no machine ever could.

303
00:37:15,370 --> 00:37:19,510
Some of these just seem to take us back to the argument from consciousness.

304
00:37:19,510 --> 00:37:24,560
Okay, being kind. You might say that to be genuinely kind,

305
00:37:24,560 --> 00:37:34,390
you actually have to have feeling for the other person, so a robot is not kind, even if it does the kinds of things that kind people would do.

306
00:37:34,390 --> 00:37:38,740
Enjoy strawberries and cream? Well, presumably you need to be conscious to do that.

307
00:37:38,740 --> 00:37:48,940
So quite a lot of this just takes us back to that argument. When we are actually talking about disabilities, like making a mistake, Turing remarks:

308
00:37:48,940 --> 00:37:55,960
This is a very old complaint. I mean, to say computers can't be intelligent because they can't make mistakes would be very odd.

309
00:37:55,960 --> 00:38:03,130
We normally think of the intelligent person as making fewer mistakes than the unintelligent person.

310
00:38:03,130 --> 00:38:12,820
But also, he points out that you could programme a machine to make errors in a humanlike way, potentially.

311
00:38:12,820 --> 00:38:20,140
But again, it would be odd to say that worst performance makes it more intelligent.

312
00:38:20,140 --> 00:38:28,660
Number six, we come to Lady Lovelace's objection. This can seem quite a strong objection, and it's certainly a very popular one.

313
00:38:28,660 --> 00:38:32,740
But I think Turing deals with this reasonably well.

314
00:38:32,740 --> 00:38:40,810
He quotes Ada Lovelace saying that Charles Babbage's analytical engine has no pretensions to originate anything.

315
00:38:40,810 --> 00:38:45,670
It can do whatever we know how to order it to perform.

316
00:38:45,670 --> 00:38:49,720
So a computer programme essentially just follows orders.

317
00:38:49,720 --> 00:38:56,860
Therefore, it can't be intelligent. And you can see there are interesting issues raised here.

318
00:38:56,860 --> 00:39:02,800
For example, about the relation between intelligence and free will.

319
00:39:02,800 --> 00:39:11,200
But on the point about not originating anything, Turing points out that we can often be surprised by the outcome of things.

320
00:39:11,200 --> 00:39:23,500
I mean, it's perfectly possible, for example, to write a computer programme which experiments. You can write a computer programme

321
00:39:23,500 --> 00:39:28,480
which compares different strategies against each other.

322
00:39:28,480 --> 00:39:32,140
So, for example, you could write a computer programme to play a game,

323
00:39:32,140 --> 00:39:37,090
maybe one that you've never played before, something like three-dimensional noughts and crosses.

324
00:39:37,090 --> 00:39:43,140
Maybe you've never played that. But you could write a programme that learns strategies for playing it

325
00:39:43,140 --> 00:39:47,160
simply by trying different strategies and comparing them against each other.

326
00:39:47,160 --> 00:39:53,220
And then at the end of it, you've got a programme that plays the game far, far better than you could yourself.

327
00:39:53,220 --> 00:39:59,760
It's odd to say that it's just doing what it's told. No, it's actually learning.

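The strategy-comparing programme described above can be sketched in a few lines of Python. This is my own toy example, not anything Turing wrote: the game ("take 1 to 3 sticks from a pile of 21; taking the last stick wins") and all the function names are my assumptions, chosen only to illustrate learning by trying strategies against each other.

```python
import random

# A minimal sketch of learning by comparing strategies against each other.
# Game: players alternately remove 1-3 sticks from a pile of 21;
# whoever takes the last stick wins.

def play(strat_a, strat_b, pile=21):
    """Play one game; return 0 if strat_a wins, 1 if strat_b wins."""
    strats, turn = (strat_a, strat_b), 0
    while True:
        take = min(strats[turn].get(pile, 1), pile)
        pile -= take
        if pile == 0:
            return turn          # the player who took the last stick wins
        turn = 1 - turn

def random_strategy():
    # A strategy is just a table: pile size -> how many sticks to take.
    return {p: random.randint(1, 3) for p in range(1, 22)}

def tournament(n_strats=100, games=30):
    """Generate random strategies and keep the one that wins most often."""
    pool = [random_strategy() for _ in range(n_strats)]
    def wins(s):
        return sum(play(s, random.choice(pool)) == 0 for _ in range(games))
    return max(pool, key=wins)

best = tournament()   # a strategy none of us explicitly wrote down
```

The point of the sketch is the lecturer's: the winning table `best` was never written by the programmer; it emerged from the comparisons, so "it's just doing what it's told" sits oddly with what actually happened.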
328
00:39:59,760 --> 00:40:00,660
Now you might say, yeah,

329
00:40:00,660 --> 00:40:07,950
but it's not genuinely learning because the learning mechanism itself is under the control of an algorithm that you have written.

330
00:40:07,950 --> 00:40:15,450
Well, OK. But then if you're going to push this argument, I suspect that it's taking us back to issues about consciousness.

331
00:40:15,450 --> 00:40:26,380
Genuine agency, free will again, but apparently tied in with the idea that one needs to be conscious of what one's doing.

332
00:40:26,380 --> 00:40:29,590
And I want to be fair to Ada Lovelace.

333
00:40:29,590 --> 00:40:41,110
She wrote some famous notes on Babbage's analytical engine in virtue of which she's often spoken of as the first computer programmer.

334
00:40:41,110 --> 00:40:52,930
And the quotation from Turing, he was quoting from Note A, Note G, sorry, of her 1842 notes,

335
00:40:52,930 --> 00:41:00,730
but Note A includes this interesting statement the operating mechanism might act upon other things.

336
00:41:00,730 --> 00:41:10,540
besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations,

337
00:41:10,540 --> 00:41:18,370
and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine.

338
00:41:18,370 --> 00:41:24,430
Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and

339
00:41:24,430 --> 00:41:29,440
of musical composition were susceptible of such expression and adaptations,

340
00:41:29,440 --> 00:41:37,480
the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.

341
00:41:37,480 --> 00:41:42,760
So it's a little unfair to Ada Lovelace that because of Turing's paper,

342
00:41:42,760 --> 00:41:50,710
she is very widely known as someone who denied that computers could originate anything when actually more prominently in that document,

343
00:41:50,710 --> 00:41:54,430
she was saying, again,

344
00:41:54,430 --> 00:42:03,130
very prophetically, that Babbage's analytical engine had the potential to operate in lots of different spheres,

345
00:42:03,130 --> 00:42:12,950
music as well as numbers, and to be creative in the sense of coming out with new compositions.

346
00:42:12,950 --> 00:42:23,210
OK. Just to finish the last few sections of Turing's paper, we get the argument from continuity in the nervous system.

347
00:42:23,210 --> 00:42:30,680
The nervous system is not a discrete state machine. A small error in the information about the size of a nervous impulse impinging

348
00:42:30,680 --> 00:42:36,200
on a neurone may make a large difference to the size of the outgoing impulse.

349
00:42:36,200 --> 00:42:45,200
And again, he makes the point about how a discrete state machine can mimic a continuous system.

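That reply can be illustrated with a toy calculation of my own devising (nothing like this appears in the paper): quantize a continuous response onto a finite grid of states, and the discrepancy shrinks below anything an interrogator at a teleprinter could detect. The sine function here simply stands in for some continuous nervous response.

```python
import math

# Toy illustration: a discrete-state machine can approximate a continuous
# response as closely as we like by refining its grid of output states.

def continuous_response(x):
    return math.sin(x)          # stand-in for a continuous nervous response

def discrete_response(x, levels):
    """Quantize the output to `levels` evenly spaced states covering [-1, 1]."""
    step = 2.0 / levels
    return round(continuous_response(x) / step) * step

def worst_error(levels):
    xs = [i / 100 for i in range(628)]   # sample one full period
    return max(abs(continuous_response(x) - discrete_response(x, levels))
               for x in xs)

# worst_error(10) is easily noticeable; worst_error(100000) is far below
# anything an interrogator could detect through typed answers alone.
```

The quantization error is bounded by half a step, so doubling the number of states halves the worst-case discrepancy; the interrogator in the imitation game never sees the difference.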
350
00:42:45,200 --> 00:42:49,430
The argument from informality of behaviour,

351
00:42:49,430 --> 00:42:56,840
if each man had a definite set of rules of conduct by which he regulated his life, he would be no better than a machine.

352
00:42:56,840 --> 00:43:03,860
But there are no such rules. So men cannot be machines. And he points out that this is a fallacy.

353
00:43:03,860 --> 00:43:04,760
And in any case,

354
00:43:04,760 --> 00:43:12,000
it's hard to establish that we're not in fact governed by laws of behaviour; and laws of behaviour are not the same as rules of conduct.

355
00:43:12,000 --> 00:43:14,780
OK, rules of conduct tell you what you ought to do.

356
00:43:14,780 --> 00:43:24,170
Laws of behaviour are natural laws or consequences of natural laws, which determine how we do, in fact, behave.

357
00:43:24,170 --> 00:43:33,470
So, for example, many of you will have read section eight of Hume's first enquiry, where he discusses liberty and necessity.

358
00:43:33,470 --> 00:43:36,560
And Hume clearly takes the view that although we don't know what they are,

359
00:43:36,560 --> 00:43:48,420
there are underlying laws of behaviour, and that we are determined in what we do.

360
00:43:48,420 --> 00:43:53,670
Finally, in this section, we get the argument from extrasensory perception.

361
00:43:53,670 --> 00:43:58,770
This seems very odd. You might ask: what's that doing there?

362
00:43:58,770 --> 00:44:12,720
Well, first of all, Turing seems to think that the statistical evidence for telepathy and psychokinesis and so on is strong, indeed overwhelming.

363
00:44:12,720 --> 00:44:20,460
Most informed people now would not think this was true, but a guy called J.B. Rhine was quite influential at the time.

364
00:44:20,460 --> 00:44:32,290
He was doing a lot of experiments, and Hodges, in his biography of Turing, points out that Turing seems to have been impressed with that.

365
00:44:32,290 --> 00:44:37,950
And the fact is, sad or happy depending on how you look at it, I suppose.

366
00:44:37,950 --> 00:44:47,220
But the evidence simply hasn't stood up very well. I don't think many would say that there is compelling evidence for these phenomena.

367
00:44:47,220 --> 00:44:53,490
But anyway, Turing thought there was. Let's suppose that there was;

368
00:44:53,490 --> 00:45:01,560
then you can see why Turing might be concerned, because if extrasensory perception were possible,

369
00:45:01,560 --> 00:45:12,960
then that's something that a machine probably couldn't mimic. It could mean that we have, feeding into the processing of our conversation, forms

370
00:45:12,960 --> 00:45:22,740
of perception that Turing is not going to be able to imitate in his machine.

371
00:45:22,740 --> 00:45:37,500
Just by the way, I've alluded to Hodges's biography. It does seem plausible that Turing's great interest in things like spiritualism and

372
00:45:37,500 --> 00:45:43,290
clairvoyance and that kind of thing was influenced by the death of Christopher Morcom,

373
00:45:43,290 --> 00:45:54,390
his intimate friend when he was very young, and by thinking about life after death, hoping that the soul continues to exist and so forth.

374
00:45:54,390 --> 00:46:02,820
So it's perhaps not surprising that it features in this paper.

375
00:46:02,820 --> 00:46:12,330
Then, finally, we get on to learning machines, where Turing starts off saying,

376
00:46:12,330 --> 00:46:19,830
I have no very convincing arguments of a positive nature to support my views.

377
00:46:19,830 --> 00:46:24,990
The only really satisfactory support that can be given for the view expressed at the beginning of

378
00:46:24,990 --> 00:46:31,650
Section six will be that provided by waiting for the end of the century and then doing the experiment.

379
00:46:31,650 --> 00:46:36,480
So he's now referring back to those predictions that I drew your attention to.

380
00:46:36,480 --> 00:46:41,550
And he's basically saying, Well, I can't prove that what I say is right here.

381
00:46:41,550 --> 00:46:52,110
But wait and see. Well, I think there is a lot to be said for seeing his paper as,

382
00:46:52,110 --> 00:46:59,140
if you like, propaganda. Largely, what he's doing is this.

383
00:46:59,140 --> 00:47:08,980
He's saying: you may naturally be very disinclined to think that computers can be intelligent.

384
00:47:08,980 --> 00:47:19,790
Let me give you a thought experiment. Suppose they could perform this well, would that not force you to revise your view?

385
00:47:19,790 --> 00:47:26,780
I actually think it would force you to revise your view. I think if computers could do this, general use of language would change.

386
00:47:26,780 --> 00:47:32,000
We would come to call them intelligent and so on. That's my prediction.

387
00:47:32,000 --> 00:47:40,850
Let's wait and see. But it gives a way of addressing the question which takes it away from all these

388
00:47:40,850 --> 00:47:46,010
what he thinks of as irrelevant issues like the theological objection and so on.

389
00:47:46,010 --> 00:47:50,120
And it potentially also takes it away from issues like consciousness, though,

390
00:47:50,120 --> 00:47:58,430
as we've seen, Turing doesn't separate those as much as perhaps he should.

391
00:47:58,430 --> 00:48:02,990
As we've seen, he's ending with the section on learning machines.

392
00:48:02,990 --> 00:48:09,960
He actually suggests that the way to get a machine that can perform to the desired standard might

393
00:48:09,960 --> 00:48:16,130
be to try to simulate a baby's mind rather than an adult's, and provide it with the ability to learn.

394
00:48:16,130 --> 00:48:17,060
OK.

395
00:48:17,060 --> 00:48:27,410
This seems frankly very unrealistic, but it's easy for us to say that because we've had a lot of experience, you know, of artificial intelligence.

396
00:48:27,410 --> 00:48:29,600
We know how difficult it is to learn.

397
00:48:29,600 --> 00:48:36,020
We know how difficult it is to interact with the physical environment, something that babies do very effectively.

398
00:48:36,020 --> 00:48:42,950
They learn huge amounts through their physical senses. To get a computer

399
00:48:42,950 --> 00:48:51,710
to be able to learn in anything like that way is so far beyond us, and likely to be for some time.

400
00:48:51,710 --> 00:48:55,820
An important point that Turing makes here is that a learning machine is highly likely to behave

401
00:48:55,820 --> 00:49:00,890
in ways that its programmers could neither foresee nor understand and also to make mistakes.

402
00:49:00,890 --> 00:49:06,680
So he's bolstering up points he's already made in the paper.

403
00:49:06,680 --> 00:49:15,260
Many people think that a very abstract activity, like the playing of chess might be a good place to start in attempting to match human intelligence.

404
00:49:15,260 --> 00:49:23,840
Chess was indeed seen as a paradigm case for artificial intelligence for many, many years.

405
00:49:23,840 --> 00:49:30,620
But perhaps instead, it's best to provide the machine with the best sense organs that money can buy and then teach it to

406
00:49:30,620 --> 00:49:36,530
understand and speak English, so that it could then follow the normal teaching of a child. Again:

407
00:49:36,530 --> 00:49:43,400
This now seems incredibly naive. General natural language understanding is extraordinarily difficult,

408
00:49:43,400 --> 00:49:48,650
and progress in things like machine translation has tended to come in recent years,

409
00:49:48,650 --> 00:50:00,720
from humungous amounts of data analysed statistically, rather than the kind of formal methods that may initially have seemed more promising.

410
00:50:00,720 --> 00:50:04,850
Okay, on that uncertain note, as I say, the paper ends.

411
00:50:04,850 --> 00:50:10,160
Please make sure you read it carefully before the next lecture.

412
00:50:10,160 --> 00:50:18,920
I hope that what we've covered today will help you to appreciate both its virtues and some of its vices.

413
00:50:18,920 --> 00:50:23,543
Thank you.
