INTELLIGENCE AND THE BRAIN

Gert-Jan Lokhorst

1987

G.J.C. Lokhorst. Intelligence and the brain. In The Nature of Intelligence. Essays by Joseph Weizenbaum, Albert Visser, Gert-Jan Lokhorst and Monica Meijsing, followed by a transcript of the panel discussion after the film of Piet Hoenderdos, "Victim of the Brain". Studium Generale Erasmus University Rotterdam, Rotterdam, 1988.

In contrast to the other speakers today, I will discuss the empirical study of intelligence. I will be concerned not with artificial intelligence, but with natural intelligence. As you all seem to be human, this may well interest you the most.

Specifically, I will consider what the brain sciences have to tell us about intelligence. I hope I may give you a bird's eye view of what sort of things are known at present and of the direction in which research is currently heading. In order to appreciate the merits of the neurobiological approach to the understanding of intelligence, I will contrast it with another popular way of attacking the problem, one which is used in cognitive psychology and which is, in my opinion, seriously limited at best.

Neuropsychology

As is usually the case in the brain sciences, most of our knowledge about the way in which the brain produces intelligent behaviour comes from the study of brain lesions. The difficulties of patients with such lesions help to identify the steps by which we proceed in solving a problem. A lot of work has been done by the Russian neuropsychologist Luria, who has shown that human problem solving--which is often regarded as the characteristic manifestation of intelligence--consists, from a neuropsychological point of view, of four main stages.

First, in order to solve a problem you have to recognize that there is a problem to be solved. Lesions deep in the brain or in the frontal lobes may disturb this goal-formulating capacity. When patients with such lesions are given a problem they do not see it as a problem; instead of solving it they will restate it as a fact.

A second stage in solving a problem consists of identifying its conditions and components. Patients with frontal-lobe lesions lack this ability too: they are not aware of any constraints and impulsively give inappropriate answers. For example, one such patient was presented with the problem: "A candle is 15 centimeters long; the shadow from the candle is 45 centimeters longer. How many times is the shadow longer than the candle?" He answered immediately: "Well, three times of course!" Even after he was prompted to check his result, he did not notice anything wrong.

A third stage consists of applying the appropriate processing strategies to the problem. A patient who cannot do this cannot solve the problem, even if he is aware of the problem and is motivated to solve it. This step may be blocked if the relevant parts of the brain are damaged; for example, patients with lesions of the left parieto-occipital regions have enormous difficulties in understanding relations. When they are presented with a problem like "Jack has two apples, while Jill has two more. How many apples has Jill?", they ask, confusedly, "While? More? What does 'more' mean?"

Finally, solving a problem involves recognizing the solution as a solution once it has been found. Patients with deep lesions of the cortex and basal ganglia may be unable to do this, with the result that they repeat the solution over and over again.

So here we have a rough subdivision of problem solving into four stages, each of which may separately be blocked by various cerebral lesions; this is already a first step in understanding the finer structure of this ability.

Another neuropsychological subdivision of cognitive abilities is the distinction between two styles of cognitive activity: one visuo-spatial, seated in the right hemisphere of the brain, and one logico-verbal, based in the left hemisphere; it is often added that truly creative activity requires the interplay of both modes of thought.

Apart from brain lesions, other evidence for the way in which cognitive abilities are woven from several strands comes from studies which monitor the metabolic activities of various parts of the brain while they are engaged in complex tasks. The principal techniques are the measurement of regional cerebral blood flow and positron emission tomography; these techniques enable us to observe so-called "mindscapes" of activity, which are not uniformly flat, but have various hills and valleys roughly correlated with various psychological phenomena. For example, listening to music is different from listening to spoken words, the former primarily engaging the right hemisphere and the latter primarily the left.

Now, these results are certainly one step forward in understanding the fine structure of intelligence. However, they constitute only a very rough first analysis. We would like to know many more of the components, how they hang together, and, moreover, how the brain carries them out at the synaptic level. Obviously, it is not enough to know that problem solving proceeds in four steps without any further analysis of these steps themselves and any further insight into what goes on in the very large regions of the brain allocated to each of them.

However, much further knowledge is not likely to come from neuropsychological investigations, which are intrinsically limited to only the grossest macro-effects.

So, how should we proceed?

Protocol analysis

One discipline naturally suggests itself: cognitive psychology. After all, this is the discipline that explicitly aims to understand cognition.

One important method that has often been used in cognitive psychology to study puzzle solving is the method of "protocol analysis." People are told to solve a certain puzzle and to give as much comment as possible while they are doing so; they must, as it were, "think aloud." The comments are recorded and analyzed afterwards; in this way the investigators hope to find out which strategies and rules are employed in such tasks. The system of strategies and rules thus extracted is often run in computer simulations afterwards; in this way it may be tested whether it forms a consistent system which is indeed capable of carrying out these tasks.

The classic book in this field is Newell and Simon's Human Problem Solving (1972), which discusses chess problems and logical puzzles. The method is often rather laborious: for example, a subject who was solving a cryptarithmetic puzzle produced no fewer than 311 "fragments of thought," which then had to be analysed in no fewer than 66 pages. However, undaunted by the amount of energy required, many other psychologists have adopted the method, and it has been applied to such diverse areas as algebra, physics, medical diagnostics, chemical technology, backgammon, composing, graphical design and poetry. The computer simulations of the rules found in this way are sometimes impressive; thus, Hans Berliner has written a chess program which does not play chess very well, but at least does it in more or less the same way as humans do.

It might be thought that this line of research holds the most promise for the understanding of intelligence: by studying more and more protocols and by refining our computer simulations further and further, one might think, we will extract more and more of the strategies and rules people go by in solving the most diverse kinds of intellectual problems. However, I do not think the method holds such great potential. I think it is rather limited in scope at best, and that it may be seriously misleading even within this narrow scope. Moreover, the computer simulations are totally unrealistic from a neural point of view and therefore do not throw any light on how people solve their problems.

First, the method of protocol analysis is limited to areas where people are capable of giving verbal descriptions of what they are thinking of. So it may work fine with puzzles like Smullyan's logical puzzles, although I think even that is doubtful; but it is in any case not applicable to the solving of visual puzzles like ambiguous pictures. Take the well-known picture of the old witch who is also a beautiful young lady, or Dali's painting "The Slave Market with Disappearing Bust of Voltaire": we seem totally unable to state what methods we employ in understanding them, and yet understanding them seems to be a sign of intelligence. We see the solution in a flash, without any conscious deliberation. Here, where we are concerned with the "right-hemispheric," nonverbal style of reasoning, so to speak, the method is powerless; and indeed, it may be no accident that areas like the recognition of complex visual and auditory patterns, where we cannot verbally state what we are doing, are precisely the areas where research in artificial intelligence has had the least success and the fewest ideas of what should be done. As Weizenbaum says in his Computer Power and Human Reason, present-day Artificial Intelligence research will probably never be able to give computer simulations of right-hemispheric analogical, associative thinking in terms of holistic images.

Second, the method of protocol analysis may be misleading because people's statements about what they are doing need not be accurate descriptions of what is really going on. The comments only reflect the reactions or effects on the speech system of activities which may be carried out by other cognitive systems. Perhaps these systems follow rules, but that does not imply that we are able to describe them. Citing a rule after an activity does not guarantee that that activity was carried out by following that rule. So there may be some systematic relation between what people say they are doing and what they are really doing, but it need not be that of accurate description--which makes the method of protocol analysis useless.

Finally, the explanation of intelligent activity in terms of following rules can never be the whole story anyway, for it leads to an infinite regress (as Ryle pointed out in The Concept of Mind). Surely intelligent activity is not the outcome of following the alleged rules stupidly; they must be followed intelligently. But this means that other, meta-rules must be applied in applying the rules. These must be applied intelligently as well, and so on: we have ended up in an infinite regress.

Therefore the method of protocol analysis seems to rest on rather doubtful principles and is in any case limited in scope.

Brains versus computers

Furthermore, the computer simulations to which protocol analysis leads do not throw any light on how people manage to do the things which are done by these computers, even if the simulations were perfect on the surface. It is evident that people operate according to different principles than computers, for they have radically different micro-architectures.

The most striking differences between brains and computers--and simultaneously the most basic principles any theory of the microstructure of intelligent activity should take into account--are the following.

First, neurons are slow, roughly a million times slower than the elements of the average modern electronic computer. Yet we are capable of very sophisticated processing in only a few hundred milliseconds. Perceptual processing, most memory retrieval, much of language processing and much intuitive reasoning take less than one second. This means that these tasks must be done in no more than 100 or so steps. Current Artificial Intelligence programs require millions of steps and would take hours or even years if they were run on neuronal hardware. The "programs of the brain" have only a shallow logical depth.

Second, the slowness of the neurons is overcome by massive parallelism and a high degree of connectivity. Whereas the units in conventional, serial computers have only a few immediate neighbours, a single cortical neuron can receive from 1,000 to 100,000 synapses on its dendrites and, likewise, make from 1,000 to 100,000 synapses on the dendrites of other neurons. The mechanisms of the mind result from the cooperative activity of very many relatively simple processing units operating in parallel. Individual neurons do not compute very complicated functions. This is not to say that there are no serial connections between different regions of the brain, or that there are no serial processes which may take several minutes or even hours (e.g., logical reasoning). But most processes occur within one region and are fast. The architecture of the connections within a region seems rather imprecise in detail, though precise on average. Most functions can be localized only roughly.

Third, there is no central processing unit in the brain. Processing occurs in all regions, and within these regions it is distributed over fairly large numbers of neurons. Individual neurons are not very specialized; even the most specialized cell found up to now, the so-called "monkey hand cell," is not absolutely specific.

Fourth, the brain is not brittle: it manifests graceful degradation under damage and information overload, whereas in conventional computers the failure of one element may result in a total breakdown.

Finally, brains are good at wholly different things than computers. They are good at pattern recognition and completion, generalisation and learning. They retrieve memories by content instead of by address, as conventional computers do: there the central processing unit has to know the address--the "shelf number"--of each piece of information, and it does not automatically retrieve similar pieces of information when it searches for one specific datum. On the other hand, the cortex is bad at things at which computers are good, such as floating-point arithmetic.

Facts such as these suggest that the cortex works wholly unlike present-day computers. Therefore present-day computer simulations of cognitive processes do not tell us anything about how the brain manages to carry out these processes. The computer models are perhaps useful for testing the consistency and power of models from cognitive psychology, but are neurobiologically unrealistic.

This finishes our look at the method of protocol analysis and the computer simulations which are based on it. We have seen that it does not help us very much in understanding natural intelligence.

Perhaps the trouble with cognitive psychology of this brand is that it starts from the wrong side, so to say: it starts from the high-level surface phenomena of cognition, which may be too complex and slippery to afford a good grip.

As early as 1838, Charles Darwin jotted down the following warning in one of his notebooks:

Experience shows the problem of the mind cannot be solved by attacking the citadel itself.--the mind is function of body.--we must bring some stable foundation to argue from.

It seems the protocol analysts have not heeded Darwin's advice.

So let us start afresh, and look at things from the bottom up, beginning at the level of the neurons and trying to work our way upwards toward the level of neuropsychology: it seems that present-day research in neurobiology provides as stable a Darwinian foundation from which to argue as we are ever likely to find.

The microstructure of intelligence

When we focus our attention on the brain in order to understand high-level phenomena, we are not primarily interested in individual cells; as we have said, it is unlikely that high-level processing occurs in individual cells. We are interested in the patterns of activity of larger ensembles of cells. In a way, it is fortunate that we will not be concerned with individual cells: for if our understanding depended on knowing all cells individually, we might as well abandon hope altogether, considering the fact that there are some 10,000,000,000 cells, each having up to 100,000 connections. Brain science will never acquire complete knowledge of even one single brain. Nor should it strive after such knowledge; instead it should search for the broad, general principles of brain functioning, approximately in the way physics searches for universal laws instead of complete knowledge of, say, the micro-architecture of one chair. It should be realized, too, that only the broad general outlines of the functioning of the brain can be coded for in the genetic material; the genome is far too small to contain the information for the whole pattern of connectivity.

Theories at the appropriate level of abstraction have long been missing in the brain sciences. However, during the past decade interest in such theories has been growing continuously, and there are now various theories around which provide abstract, mathematical descriptions of the large-scale functioning of neuronal networks. These kinds of models originated within brain science; in recent years, however, they have caught the attention of a fast-growing number of psychologists, who are now using them to explain psychological phenomena in neurally plausible ways. The old theories sometimes led to such strange views as that all cognition is nothing other than the proving of theorems in a "language of thought," this language being surprisingly like English and the procedure followed being surprisingly like theorem-proving in classical first-order logic (e.g. Fodor and Harman). Now, cognitive activity is explained in familiar-sounding terms like "inhibition," "excitation" and "spreading of activity," which seem a lot more brain-like. The serial computer metaphor is being dropped for the brain metaphor--which seems, intuitively, a good thing.

The "connectionists" or "parallel distributed processing theorists," as these psychologists call themselves, have already devised some attractive models. For example, they have made a model of word recognition which shows how partially blotted-out characters may be reconstructed by taking the neighbouring characters into account. They have made a model which learns the past tense of English verbs by extracting the common pattern from a number of presented examples and which makes exactly the same mistakes, such as overregularization ("camed" instead of "came"), as English children do. And they have made a model of hand positioning in typing which correctly predicts the posture of the hand in the typing of various words and which explains the errors human typists make.

And because their theories are stated in neural sounding terms and conform to the desiderata we have already mentioned--graceful degradation under damage, pattern completion, generalization, content-addressable memory, learning, shallow logical depth, etc.--it is imaginable that these theories indeed afford the first glimpses of how people actually do these things.

For this reason, the interest in "connectionism" is tremendous in the U.S.A., and Nature recently hailed McClelland's and Rumelhart's large two-volume survey of the field, Parallel Distributed Processing, as "one of the publishing events of 1986." Nor is the interest confined to psychology and brain science: many of the properties of the connectionist models, such as visual pattern recognition, are precisely the features of human cognition which researchers in Artificial Intelligence have striven to simulate for so long and with so little result. For this reason the field attracts a growing number of computer people, and efforts to build parallel computers on which these connectionist models may be implemented in approximately the same way as the brain implements them (and which should therefore work as fast and reliably as brains do) are already well under way in the laboratories of the big computer firms.

So much for preliminaries. Now, what do these models look like? There is a great variety around, but the common thread is that the brain should be understood in terms of vectors. It is a huge matrix which is continually calculating vector products. In order to convey the flavour of this idea, let us consider a very simple network first.

The figure below schematically presents a very simple network of four by four neurons and sixteen synapses. Each of the four input axons (drawn as "|") has a synapse S on the dendrite (drawn as "-") of each output cell (whose cell body is indicated by O). The effect of an input fiber on an output cell is determined by multiplying the activation of the incoming fiber by the synaptic weight of the synapse. The output of each cell is simply the sum of the excitatory and inhibitory effects operating on it. (The numbers stand for the synaptic weights.)


[Figure: four vertical input axons ("|"), each making a synapse ("S") on the horizontal dendrite ("-") of each of the four output cells ("O"); the sixteen synaptic weights label the crossings.]

A network of this type is called a "simple linear associator." When it is confronted with one pattern, it produces another. For example, let the pattern (+1,-1,-1,+1) stand for the visual pattern of a rose and the pattern (-1,-1,+1,+1) for the smell of a rose: then as the figure shows, the visual pattern of a rose produces the pattern of its smell without the need for any olfactory input.

Mathematically, the network can be regarded as carrying out matrix multiplication: the output vector u is the inner product of a weight matrix W (representing the synaptic weights) and the input vector v.


[Figure: the matrix equation u = Wv; for the rose association,

    (-1)   (-.25  .25  .25 -.25) (+1)
    (-1) = (-.25  .25  .25 -.25) (-1)
    (+1)   ( .25 -.25 -.25  .25) (-1)
    (+1)   ( .25 -.25 -.25  .25) (+1)  ]
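This matrix calculation can be sketched in a few lines of code. The weight matrix below is an assumed one that stores the rose association (its entries follow the local weight rule w[i,j] = u[i].v[j] discussed later in this section):

```python
# A sketch of the four-by-four linear associator described above.
# The weight matrix is assumed; it stores the association between
# the sight of a rose, (+1,-1,-1,+1), and its smell, (-1,-1,+1,+1).

def matvec(W, v):
    """Each output cell sums the weighted activations on its dendrite."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

W = [[-.25,  .25,  .25, -.25],
     [-.25,  .25,  .25, -.25],
     [ .25, -.25, -.25,  .25],
     [ .25, -.25, -.25,  .25]]

sight_of_rose = [+1, -1, -1, +1]
smell_of_rose = matvec(W, sight_of_rose)
print(smell_of_rose)  # [-1.0, -1.0, 1.0, 1.0] -- the smell pattern
```

Presenting the visual pattern thus produces the smell pattern with no olfactory input at all.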

Simple though they are, these linear pattern associators have a number of nice properties.

First, they do not require a perfect copy of the input to produce an approximately correct output. So there is graceful degradation of output under degradation of input and a rudimentary form of pattern reconstruction and recognition of similarity. This is illustrated below: we have taken some input vectors v', v", v"' which slightly deviate from the ideal input vector v, and we see that the output vectors u', u", u"' are still in the right direction, although they are somewhat shorter than the ideal output vector u.


[Figures: slightly degraded input vectors v', v'', v''' and the corresponding output vectors u', u'', u''', each pointing in the same direction as u but shorter.]

Length(u) = 2; length(u') = 1.5; length(u"') = 1. Angle between u and u' = angle between u and u"' = 0 degrees.
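The same network can be probed with degraded inputs in code. The exact vectors v', v'', v''' of the figure are not recoverable here, so the half-blotted-out input below is an assumed example; it reproduces the kind of lengths and angles quoted above:

```python
import math

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def length(v):
    return math.sqrt(sum(x * x for x in v))

def angle_deg(a, b):
    cos = sum(x * y for x, y in zip(a, b)) / (length(a) * length(b))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

W = [[-.25,  .25,  .25, -.25],
     [-.25,  .25,  .25, -.25],
     [ .25, -.25, -.25,  .25],
     [ .25, -.25, -.25,  .25]]

v      = [+1, -1, -1, +1]   # the ideal input
v_half = [+1, -1,  0,  0]   # assumed degraded input: half blotted out

u      = matvec(W, v)
u_half = matvec(W, v_half)
print(length(u), length(u_half), angle_deg(u, u_half))
# 2.0 1.0 0.0 -- the degraded output is shorter but points the same way
```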

Second, there is graceful degradation under damage: you may remove a unit or destroy a connection, and this will only weaken the output, not destroy it altogether, as is shown below:


[Figure: the network with one synapse removed, and the weakened output vector u' it produces.]

Length(u'): ca. 1.89. Angle between u and u': ca. 6.6 degrees.
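Damage can be simulated by zeroing out one synaptic weight. Which synapse the figure removed is an assumption here, but deleting the first weight on the first dendrite reproduces the numbers just quoted:

```python
import math

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def length(v):
    return math.sqrt(sum(x * x for x in v))

def angle_deg(a, b):
    cos = sum(x * y for x, y in zip(a, b)) / (length(a) * length(b))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

W = [[-.25,  .25,  .25, -.25],
     [-.25,  .25,  .25, -.25],
     [ .25, -.25, -.25,  .25],
     [ .25, -.25, -.25,  .25]]

v = [+1, -1, -1, +1]
u = matvec(W, v)

W_damaged = [row[:] for row in W]
W_damaged[0][0] = 0.0           # assumed: the synapse removed in the figure
u_damaged = matvec(W_damaged, v)

print(round(length(u_damaged), 2), round(angle_deg(u, u_damaged), 1))
# 1.89 6.6 -- the output is only slightly weakened and rotated
```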

Finally, we see that the network behaves differently than current computer-inspired "language of thought" theories would have it. First, the input activation is not represented by means of a sentence in some "language of the network." Instead, it is represented as a global pattern of activity distributed over the whole network. (So it is not localizable within a part of the network.) Second, there is no central processing unit to compute the output. The computation is spread out over all the synapses. Third, the output is not computed by means of explicitly stored rules. There are no rules stored somewhere in the network which have to be consulted to determine the output. Rather, the knowledge is in the connectivity matrix. This is not to deny that the network may, on the surface, perhaps be described as following rules; but such a description is misleading in the sense that it does not correspond with what really goes on inside. Finally, we see that the representation of the input is not passive. To use an expression from Hofstadter's Gödel, Escher, Bach, the pattern of activity is an "active symbol," whose internal structure itself determines the way it behaves on the weight matrix.

Things get even more interesting when we consider learning. Learning in networks like these consists of acquiring the right connectivity matrix. Now, what should a connectivity matrix look like in order to produce some given output when confronted with a given input? Mathematically, this matrix may be calculated as the outer product u.T(v) of the output vector u and the transpose T(v) of the input vector v, as is shown below. (Because this procedure works only for input vectors of unit length, we have adjusted the magnitudes.)


[Figures: the weight matrix W computed as the outer product u.T(v) of the rescaled output vector u = (-.5, -.5, +.5, +.5) and the rescaled input vector v = (+.5, -.5, -.5, +.5).]

Now the interesting thing is that each weight w[i,j] in the matrix W is just the product of the output from the synapse u[i] and the input to the synapse v[j] (e.g., the synaptic weight at row 3 and column 2 is w[3,2] = u[3].v[2] = (+.5).(-.5) = -.25). Both of these quantities are locally available at the synapse. If we suppose that synaptic links get stronger as they are more stimulated, and if the network makes the right association often enough, each synaptic weight will keep changing until it has reached the ideal value; when this state has been reached, the network will have learnt a global pattern of behavior by purely local changes of its synaptic weights on the basis of locally available information. The network will have learnt to make the right association, without the individual units having been aware that this was the outcome of their concerted efforts. There is no need for overall supervision; the networks need not be programmed; they train themselves.
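This Hebbian construction is easy to verify in code, using the rescaled rose vectors from the text; each weight is just the product of two locally available numbers:

```python
# Hebbian learning: each weight is the product of the activity on its
# output side and the activity on its input side, both locally available.
u = [-.5, -.5, +.5, +.5]   # desired output, rescaled to unit length
v = [+.5, -.5, -.5, +.5]   # input, rescaled to unit length

W = [[ui * vj for vj in v] for ui in u]   # the outer product u.T(v)

print(W[2][1])   # the example weight from the text, row 3 column 2: -0.25

# The matrix so learnt reproduces the association:
def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

print(matvec(W, [+1, -1, -1, +1]))   # [-1.0, -1.0, 1.0, 1.0]
```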

From a neurobiological point of view, such a mechanism is highly plausible; it was proposed as early as 1949 by Hebb. It is often thought that the strengthening of a synaptic link is due to an increase in its surface area. So here we have a plausible mathematical model of associative memory.

Finally, these networks have the surprising property that each of them may learn several memories. Consider the following two matrices, each making its own associations. A third matrix capable of making both associations is obtained by adding the two matrices. It will produce the correct response in each case.


[Figures: two weight matrices, each storing one association; the matrix obtained by adding them; and the sum matrix producing the correct response to each of the two inputs.]

Strictly speaking, there will often be interference between various associations; but this is not so bad, for it leads to the learning of "average" patterns and may even lead to the emergence of stable new patterns, new concepts so to speak. It is interesting to note that the models do not sharply distinguish between memory and plausible reconstruction, between genuine memory and mere confabulation--a distinction which, of course, psychologists do not draw sharply either.
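The superposition of memories can be checked directly. The second association of the figures is not recoverable, so a hypothetical second pair of patterns is used below; because its input is orthogonal to the rose input, the two memories happen not to interfere at all:

```python
def outer(u, v):
    """The outer product u.T(v): one Hebbian weight matrix per association."""
    return [[ui * vj for vj in v] for ui in u]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# First association: the rose, as in the text.
W1 = outer([-.5, -.5, +.5, +.5], [+.5, -.5, -.5, +.5])
# Second association: a hypothetical pair whose input is orthogonal
# to the rose input (all patterns here rescaled to unit length).
W2 = outer([+.5, -.5, +.5, -.5], [+.5, +.5, +.5, +.5])

W = matadd(W1, W2)   # one matrix, two memories

print(matvec(W, [+1, -1, -1, +1]))   # [-1.0, -1.0, 1.0, 1.0]
print(matvec(W, [+1, +1, +1, +1]))   # [1.0, -1.0, 1.0, -1.0]
```

Non-orthogonal inputs would produce the partial interference, and the "averaging," described above.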

The linear associator is, of course, an overly simplistic model. More realistic models are not only much larger, having input and output spaces of millions of dimensions instead of four; they also add, for example, thresholds: an output neuron fires only when its activation exceeds its threshold. With linear threshold units one can already build a Turing machine. And in order to increase neural realism, features such as noise, multiple layers of units, overlapping networks, and feedback loops within networks are additionally introduced.
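The step from linear units to threshold units is what brings this universality within reach. As a minimal illustration (not from the text), a single linear threshold unit can compute NAND, and NAND gates suffice to build any logical circuit:

```python
def threshold_unit(weights, threshold, inputs):
    """Fires (1) only if the summed weighted input exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

def nand(a, b):
    # Two inhibitory synapses of weight -1: the unit fires
    # unless both inputs are active.
    return threshold_unit([-1, -1], -2, [a, b])

print([nand(a, b) for a in (0, 1) for b in (0, 1)])  # [1, 1, 1, 0]
```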

The most elegant models constructed thus far are probably the so-called "Boltzmann machines," in which the output of a unit is a stochastic function of its input, and each unit is symmetrically connected with every other unit. As their name indicates, these machines are formally similar to thermodynamic systems. They continually relax into states of minimal "energy," that is, of best fit with the available input and the activity which is internally going on. Local minima of mismatch are avoided by starting at "high temperatures," that is, at high levels of "noise," and then gradually "cooling down" the system. Given sufficient time, these machines construct stable "inner models" of the outer world. Fully automatically, without the intervention of any programmer, the synaptic junctions can make use of locally available information to bring the whole network into harmony with the outer world.
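A toy version of such a stochastic network can be sketched. The small symmetric net below stores a single assumed pattern; units flip with the Boltzmann probability while the "temperature" is gradually lowered, and a final zero-temperature sweep settles the net into a minimum-energy state, which here is the stored pattern or its mirror image:

```python
import math
import random

random.seed(1)

# A small symmetric network storing one (assumed) pattern by the Hebb rule.
pattern = [+1, -1, +1, -1, +1, -1]
n = len(pattern)
W = [[pattern[i] * pattern[j] / n if i != j else 0.0 for j in range(n)]
     for i in range(n)]

def energy(s):
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

# Start in a random state and anneal: flip units stochastically,
# cooling the temperature step by step.
state = [random.choice([+1, -1]) for _ in range(n)]
T = 2.0
while T > 0.05:
    for _ in range(n):
        i = random.randrange(n)
        gap = 2 * state[i] * sum(W[i][j] * state[j] for j in range(n))
        if random.random() < 1.0 / (1.0 + math.exp(gap / T)):
            state[i] = -state[i]
    T *= 0.9

# A final zero-temperature sweep: flip only when it lowers the energy.
changed = True
while changed:
    changed = False
    for i in range(n):
        if state[i] * sum(W[i][j] * state[j] for j in range(n)) < 0:
            state[i] = -state[i]
            changed = True

print(state in (pattern, [-x for x in pattern]))  # True
```

Real Boltzmann machines of course do far more (they learn their weights from examples); this sketch only shows the relaxation-by-annealing idea.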

Now these are only mathematical models. What concrete results have been achieved? At the neural level, the most impressive result is that of Pellionisz and Llinás, who have made a vectorial analysis of the functioning of the cerebellum. The cerebellum is particularly suitable for such analysis: its parallel structure is very striking, and its "wiring diagram" is largely known. Pellionisz and Llinás regard the cerebellum as a huge matrix, which transforms an "intention vector" or "goal vector" coming from the cerebral cortex by way of the mossy fibers and climbing fibers, into an "execution vector" which is sent down to the muscles via the axons of the Purkinje cells. The basic task of the cerebellum is to act as a coordinate transformer: the incoming vector is stated in sensorimotor coordinates, whereas the outgoing vector must be stated in motor coordinates, specifying the detailed sequencing of muscle activity. They have made a related model of the vestibulo-ocular reflex, where it is the task of the vestibular nuclei to transform the three-dimensional input space (one dimension for each semicircular canal) into a six-dimensional output space (with one dimension for each extraocular muscle), and thus to compensate for head movements with eye movements.
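The coordinate-transformer idea can be illustrated with an invented matrix: three canal signals in, six muscle commands out. The numbers are purely hypothetical, chosen only to show the shape of the computation; the real vestibulo-ocular wiring is far subtler:

```python
# Hypothetical coordinate transformer: rows = six extraocular muscles,
# columns = three semicircular canals. All numbers are invented; the
# point is only the shape of the computation (a 6x3 matrix product).
M = [[-1,  0,  0],   # each canal drives one muscle of an antagonist pair
     [+1,  0,  0],   # and inhibits the other, compensating the rotation
     [ 0, -1,  0],
     [ 0, +1,  0],
     [ 0,  0, -1],
     [ 0,  0, +1]]

def transform(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

head_rotation = [1, 0, 0]             # input in canal coordinates
eye_command = transform(M, head_rotation)
print(eye_command)                    # six muscle coordinates: [-1, 1, 0, 0, 0, 0]
```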

It is interesting to note that such coordinate transformers are called "tensors," a term which may sound familiar from relativity theory--so the mathematics of the brain may be similar to the mathematics of physical space-time. According to Pellionisz and Llinás the brain should be understood more geometrico; the task of brain science is to understand "the intrinsic geometrical properties of the Central Nervous System hyperspace."

Pellionisz and Llinás's work ties in neatly with brain research; other models do so less directly, but are still not far removed. Here one might think of David Marr's work on vision, which also employs many parallel linear threshold units to derive global solutions from local parameters.

The psychological models thus far proposed make less direct contact with the working brain. We have already mentioned the model of learning the past time in English, and the models of word recognition and typing. A problem with these models is that the units here stand for concepts, conceptual nodes and hypotheses instead of synapses and neurons, and the connections for lines of inhibition or activation instead of dendrites and axons. The detailed relationship with real neurons is often unclear. However, these models are at least neurally inspired, and it is conceivable that the units may one day be replaced with real neuronal networks.

Further prospects

This ends our brief look at contemporary psychoneural theorizing. As we have seen, psychologists and brain scientists are already well under way in sorting out the basic components of human intelligence; at least they seem to be on the right track. Perhaps the processes which have been studied so far will be found to be too mundane. However, I think that, on the one hand, intelligence consists precisely of the interplay of relatively low-level processes, and that, on the other hand, there is no superlunary and sublunary sphere to which different kinds of explanations apply, but one great continuum instead. The creative spark--perhaps the supreme manifestation of intelligence--is just another instance of a system falling into a new stable configuration after it has been shaken up to a higher temperature. Thus, I think these models will ultimately illuminate the whole spectrum of cognitive phenomena.

Perhaps one of the most important uses of the new models lies in the new metaphors they suggest, such as "goodness of fit," "harmony," "temperature," "energy," "constraint satisfaction," "relaxation," "annealing," "stability," and "local and global minima." These new metaphors lead to a new view of man and inspire new research strategies which may be more fruitful than previous ones.

Some philosophers have already overdramatized these results and say that our common, everyday, folk-psychological views will wither before these new theories. They think there will be a wholesale replacement of old and antiquated mental concepts such as "belief," "desire" and "thinking," presumably including "intelligence." This view seems too strong: I think there is precious little generally accepted theory regarding these notions (that is why we are gathered here today), and therefore only a vacuum will be filled.

Moreover, the change is perhaps smaller than might be thought. For example, the rule-following metaphor need not be dropped: such networks cannot be computationally stronger than Turing machines, so each network can be simulated by a Turing machine, and if the latter is describable as following rules, then so is the network it mimics. For the same reason, Turing machine functionalism, according to which mental states are similar to, or even identical with, the logical states of Turing machines, is still as valid as before, even if we turn out to be Boltzmann machines moving around in non-Riemannian hyperspaces, or whatever. This is not to say, however, that the Turing machine and rule-following metaphors cannot be seriously misleading and heuristically damaging, simply because they let the wrong associations pop up in our minds.

Whether old-fashioned, high-level cognitive psychology, which paid hardly any attention to neuronal details, will ultimately be judged to have been of value is for the future to decide. Perhaps its results will be seen as attempts to write programs in a high-level language which turned out not to be interpretable or compilable into the assembly language of the brain; or perhaps it will be seen as having been built on foundations which it regarded as basic but which turned out to have a great deal of inner structure after all.

It is probably best to rule out neither possibility and to keep an open eye for all available shreds of insight, wherever they may come from. For as Ebbinghaus said in his Abriss der Psychologie (1908), this is precisely one of the characteristic marks of intelligence:

Narrowness of outlook and the rigid running of reproductions along the most habitual paths on the one hand; circumspection and mobility of thought, combined with the steady retention of a dominant idea or a unified purpose, on the other: these are the distinguishing marks of stupidity and intelligence.

Abstract

Do the brain sciences have anything to tell us about intelligence? At first sight, they do not: the literature on the brain is concerned with neurons, synapses and neuronal networks, and hardly contains any discussion of the cerebral basis of intelligence. And indeed, some philosophers have maintained that the brain sciences will never say anything about intelligence--not because intelligence is essentially mysterious, but because it is a concept that simply does not fit in with science. According to them, "intelligence" is, like most other "mental" terms, a term dating from pre-scientific, primitive times, comparable in standing to "witch," "demon" and "ghost": seeking neurophysiological explanations for intelligence would be as ludicrous as seeking quantum-mechanical explanations for the phenomena allegedly connected with witchcraft (say, the first law of magico-dynamics: witches float on water). There is nothing in nature corresponding to that quaint and antiquated concept, "intelligence."

In our talk, we will argue that the latter view is wrong: although they seldom use the vague term "intelligence," brain scientists are already well under way in sorting out the components of human cognitive functioning which together make up intelligence. Intelligence is not a basically flawed notion, but a term denoting a phenomenon belonging to the world of the natural sciences which may, in the long run, conceivably become no less well understood than other natural phenomena.

For some people, this is a discomforting thought: they see intelligence as an essentially mysterious aspect of "the wonder of being human" (to quote the title of a book by Eccles and Robinson) and regard any claim that it might eventually be understood as an insult to human dignity. These people doubt that any progress in the understanding of intelligence is possible at all and belittle every effort that is made in this direction: as soon as someone understands a particular task that supposedly requires intelligence (and is able to prove this understanding by, say, putting forward a computer simulation of the task), they say that this only shows that the task did not involve real intelligence after all. However, such a position seems to be mere undue mystification. The cerebral mechanisms underlying intelligence need not be essentially mysterious and incomprehensible in order to be admired; if we were able to understand these mechanisms, this would only make them all the more wondrous, for it would mean that they were able to make sense of themselves.


