G.J.C. Lokhorst. Has artificial intelligence taught us anything of importance about the mind? Putnam workshop, Erasmus University Rotterdam, November 1995.
In his article "Artificial Intelligence: Much Ado about Not Very Much" () Hilary Putnam gave the following answer to the question posed in the title: "I am inclined to think the answer is no" (, p. 269). He went on to say that "AI has so far spun off a good deal that is of real interest to computer science in general, but nothing that sheds any real light on the mind" (, p. 270). He finally wondered: "What's all the fuss about now? Why don't we wait until AI achieves something and then have an issue?" (, p. 271).
In an article entitled "When Philosophers Meet Artificial Intelligence" () Daniel Dennett took up the challenge. Putnam was not satisfied by Dennett's reply, however. He complained that whereas "Dennett says that he is going to explain what AI has taught us about the mind, what he in fact does is to repeat the insults that AI researchers hurl at philosophers ('We are experimenters, and you are armchair thinkers!')" (, p. 280).
I shall try to rectify Dennett's omission and give some more or less straightforward answers to Putnam's question.
I will not try to define either "the philosophy of mind" or "artificial intelligence." But let us take a look at some subjects that philosophers of mind, from Plato and Aristotle onwards, have traditionally concerned themselves with and see if the field which calls itself AI really does not have anything to contribute to these issues.
First, learning and memory spring to mind. Philosophers have come up with all kinds of metaphors, from Plato's wax tablets and aviary to more neurally inspired models like that of William James. (The Dutch philosopher Draaisma has just completed a book giving a survey of all these metaphors.) In the forties Hebb came up with the idea of cell assemblies and a simple learning rule for individual synapses. His ideas did not work, as computer simulations in the fifties quickly revealed, but they nevertheless inspired all subsequent neural net research on learning and memory. This research has resulted in the best models of human learning and memory we have today. It would clearly be ridiculous if anyone wrote about memory today without knowing about these neural net models. That is one topic which has been more or less snatched from the hands of the philosophers by artificial intelligence research.
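Hebb's learning rule is simple enough to state in a few lines of code. The following Python sketch, a Hopfield-style associative memory built on the outer-product form of Hebb's rule, is purely illustrative (the pattern, network size and update scheme are my own choices, not Hebb's formulation); it stores one pattern and recalls it from a corrupted cue:

```python
# Hebb's rule: a synapse strengthens when the neurons on both sides
# are active together. Storing a pattern by the outer-product rule and
# recalling it from a noisy cue illustrates the associative memory
# that grew out of this idea.

def hebbian_weights(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]  # strengthen co-active pairs
    return w

def recall(w, state, steps=5):
    # Repeatedly let each unit take the sign of its weighted input.
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):
            total = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if total >= 0 else -1
    return s

stored = [1, 1, -1, -1, 1, -1, 1, -1]
w = hebbian_weights([stored])
cue = list(stored)
cue[0] = -cue[0]  # corrupt one bit
print(recall(w, cue) == stored)  # True: the corrupted bit is repaired
```

With one bit flipped, the network settles back to the stored pattern: a toy version of the content-addressable memory that neural net models of learning exhibit.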
Second, take sense perception. This is the topic with which the second book of Aristotle's De Anima is largely concerned. We nowadays have some marvellous mathematical models of the working of the retina. Carver Mead has even built a wonderful artificial retina, a device that would have thrilled Aristotle. The philosophy of perception should surely not be carried out in isolation from such breakthroughs.
Third, take consciousness and its alleged "unity." Dennett () and others have suggested that the Jamesian or Joycean stream of consciousness might turn out to be nothing but a virtual entity superimposed upon a parallel distributed network--surely an exciting theme for any philosopher of mind!
Fourth, consider the whole topic of reasoning. The logic of knowledge, belief and perception; deontic, nonmonotonic and diagnostic logic; reasoning by analogy and reasoning about actions and interactions--all these disciplines originally started in philosophy, but they have nowadays almost completely left their home country and have become highly successful subdisciplines of AI. No philosopher writing about these topics can afford to ignore the developments occurring there.
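What makes nonmonotonic logic nonmonotonic can be shown in miniature: a conclusion drawn by default is withdrawn when new information arrives, which classical logic never allows. The birds-fly case below is the standard textbook example; the Python encoding is my own toy sketch, not an actual nonmonotonic inference engine:

```python
# Default reasoning: birds fly, unless the facts say otherwise.
# Adding a fact can retract an earlier conclusion.

def flies(bird, facts):
    # Default rule with an exception list.
    return bird not in facts.get("flightless", set())

facts = {}
print(flies("tweety", facts))        # True: by default, Tweety flies
facts = {"flightless": {"tweety"}}   # learn that Tweety is a penguin
print(flies("tweety", facts))        # False: the conclusion is retracted
```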
Fifth, think of AI as food for thought. Searle's Chinese Room thought experiment, the issues about functionalism and mechanism, narrow and wide content, the groundedness of representations, and so on and so forth--all these hotly debated issues indicate that, as far as the philosophy of mind is concerned, AI is the most nutritive food for thought that has ever presented itself.
Sixth, self-organisation as displayed in neural nets, and everything connected with the currently exploding field of "artificial life," is an exciting topic in itself. Philosophers have always wondered about the phenomenon of "emergence"; we now have some very concrete examples of it.
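One of the most concrete such examples comes from Conway's Game of Life, a staple of the artificial life literature: from purely local birth-and-survival rules, a "glider" emerges that travels across the grid as a coherent object. A minimal Python sketch (the sparse-set representation is my own choice):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on a set of live cells."""
    counts = Counter((r + dr, c + dc) for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(r + 1, c + 1) for (r, c) in glider})  # True
```

No cell moves, yet after four generations the whole configuration has shifted one square diagonally; the "glider" exists only at the emergent level of description.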
Seventh, and finally, artificial intelligence research has shown that the mind is a very complicated thing--much more complicated than any philosopher or scientist thought likely half a century ago. Progress in AI has been far slower than anyone envisaged in the forties and fifties. Some of the activities we carry out are not entirely inscrutable to ourselves, and these have turned out to be easily programmable; theorem proving, for example. Most things we do, however, are hidden from consciousness: we seldom know how we do what we do. Such things are, of course, difficult to program, and AI research indicates that they must be programmed quite differently from the few activities whose workings are open to us. Recognising perceived objects, for example, is different from theorem proving, and language production is different from either. There is no Master Program for human cognitive performance. There will never be an Einstein of the mind, because the mind seems to depend on a collection of many disparate modules or "odd hacks." It is not a unity but a society of differently specialised members. We are aware of nothing but the tip of the iceberg.
Making artificial minds is not an easy enterprise. It may even be so difficult that we will not be able to write the programs ourselves: we may have to be assisted by the computer. This is the driving force behind the field of genetic algorithms, which are intended to reproduce, hopefully within our lifetimes, what evolution by natural selection took billions of years to accomplish. Just as Intel cannot make its new processors without computer assistance, we may not be able to comprehend ourselves without using the computer as our tool.
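The basic loop of a genetic algorithm can be sketched in a few lines. The toy below evolves bit-strings toward all ones (the standard "OneMax" exercise); the population size, mutation rate and selection scheme are arbitrary illustrative choices, not anything specific to the programs mentioned above:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def fitness(bits):
    return sum(bits)  # count of ones: the quantity being maximised

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

def evolve(pop_size=40, length=32, generations=60):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - 1)]
        pop = [pop[0]] + children      # elitism: keep the current best
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to 32: selection has done the "writing"
```

Nobody writes the winning bit-string by hand; variation and selection find it. That, scaled up enormously, is the hope behind evolving programs we could not design ourselves.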
Professor Putnam's disparaging remarks about AI curiously, and perhaps inappropriately, remind me of some remarks which Russell made about Ryle's Concept of Mind. Russell wrote:
Professor Ryle's attitude to science is curious. He no doubt knows that scientists say things which they believe to be relevant to the problems he is discussing, but he is quite persuaded that the philosopher need pay no attention to science. (, pp. 183-4)
I think it is clear that Artificial Intelligence may be reckoned among the sciences and that it is here to stay. Philosophers ignore it at their peril. Russell said:
Philosophy cannot be fruitful if divorced from empirical science. And by this I do not mean only that the philosopher should "get up" some science as a holiday task. I mean something much more intimate: that his imagination should be impregnated with the scientific outlook and that he should feel that science has presented us with a new world, new concepts and new methods, not known in earlier times, but proved by experience to be fruitful where the older concepts and methods proved barren. (, p. 187)
The same seems to be true if we replace "philosophy" by "philosophy of mind" and "empirical science" by "artificial intelligence research combined with natural intelligence research."
Fortunately, as even a cursory look at the Philosophers' Index reveals, many contemporary philosophers of mind live up to Russell's precept and do not discuss the philosophy of mind with their eyes turned away from sciences such as AI and neurophysiology.
All this being said, however, I do share Putnam's feeling that the hard philosophical problems are here to stay. He specifically mentions the problem of intentionality. Like him, I have never seen this issue addressed head-on in artificial intelligence. Yet I suspect that philosophers may still be wrestling with it when the goal of artificial intelligence has been reached and intelligent robots à la Asimov are strolling all about us. Or rather, I think that a problem such as the problem of intentionality will simply have been forgotten by then. It will have disappeared in the same way as other unsolvable philosophical problems have disappeared before.
In the previous century, philosophers wrestled with such problems as the "force-matter" distinction (they meant "force" in a wider sense than the Newtonian one) () and the question "What is life?" These problems were never solved in any philosophical sense, but contemporary philosophers no longer feel any inclination to solve them either. Look at the derivation of the equation E=mc², at the way DNA works, and at such things as the Krebs cycle, and you know everything you might ever have wanted to know about force and matter and life and much more besides. The philosophical conundrums were not solved, but eclipsed by results brighter than any philosopher could have imagined. I have the feeling that a philosophically recalcitrant problem such as the problem of intentionality may suffer the same fate: it will not be solved, but dissolve in view of the accomplishments that science has achieved and will achieve.
Any Roman Catholic priest may come up with some unanswerable conundrums a couple of centuries from now--as Brentano did a century ago with his concept of "intentionality"--but philosophers should not let themselves be unduly mesmerised by such perversions.