Anthroposophical Quarterly 22.1 (Spring 1977)
In the course of a lecture he gave in 1909 Rudolf Steiner made the following observations:
Our feelings, perceptions and thoughts exist in the physical world along with phenomena of colour, sound, taste, scent, etc. It must be clear to us that everything which constitutes man in incarnation, every feeling experience between birth and death, every thought, every idea is a phenomenon (Erscheinung) of the physical world.
We do not know how much earlier in his life he had come to realise this pervasiveness of a physical ingredient to every normal mode of human consciousness. What we do know is that in his case it did not produce the consequences that have ensued from the same realisation elsewhere. It did not render him incapable of thinking. It did not, for instance, leave him incapable of distinguishing any longer between thoughts, feelings and ideas on the one hand and perceptions on the other; or between perceiving and being perceptible (a contrast one would rather expect an educated mind to be able to grasp and retain without undue exertion) and thus between what is by definition immaterial and what is by definition material.
Steiner, as most readers of this journal will be aware, laid the foundations of a physiology of the whole human organism precisely in its relation to consciousness, one which took full account of feeling and willing as well as of thinking and perceiving, and which therefore took account of two other systems of the body besides the brain and nerves, tracing the extremely intricate reciprocal interpenetration both of the systems themselves and of their distinct functions.
It is a hard saying, but I fear it is just a fact that what I think I must call establishment physiology, together with psychology so far as that is behavioural, is based on – or rather it has sprouted from – an inability to grasp precisely such self-evident distinctions as those I have just referred to; or, having grasped them, to retain that grasp for more than a few minutes at a time. The intellectual morass which has resulted from this species of mental paralysis is the thing that makes it so difficult to do what I have just been asked to do, namely to comment on the 1976 Reith lectures. I must do the best I can with them.
The overall result, the totality as it were of the morass, has of course been the deplorably unexamined assumption that ‘consciousness’ is – not just dependent on, or conditioned by, but ‘localised in’ the brain. From which it follows that, if you examine the brain minutely enough, you will find the little fellow nestling there, if not in one part then in another. And that is really what the Reith lectures were about. They were announced in the Listener as follows: –
The Reith Lectures 1976 – Mechanics of the Mind.
Dr. Colin Blakemore, this year’s Reith lecturer and at 32 the youngest yet, is a physiology lecturer at Cambridge and a Fellow of Downing College. He has just been awarded a three-year Royal Society Locke Research Fellowship which will enable him to pursue his major interest – studying how the brain modifies itself. His Reith Lectures cover the whole field of brain research, which, as he tells Ian McEwan, he regards as one of the last great frontiers of science.
The first thing that demands to be said about these six lectures is, that they were very bad; bad, not because of the point of view from which they were written, though no doubt the morass and the badness are not unconnected, but because they were so scrappy and incoherent. There was no perceptible marshalling of a varied material. I will try nevertheless to deal with them as if they had been an ordered whole.
Five of the six began with a fairly detailed anecdote concerning the effect of some particular accident to, or surgical operation on, the brain of a human being or an animal. Much of this material was interesting, and the same applies to other bits of information, technical and otherwise, thrown out by the lecturer in the course of the series. I did not for instance know that the tissue of the brain itself is insensible to touch, heat, or pain. Or that damage to the brain almost never causes irreversible loss of language in a child, though irremediable loss is a common consequence of brain injury in adults; or that the change that occurs in the electroencephalogram when a person becomes more alert or nearer to waking is not an increase in the amplitude of his alpha rhythm, but its virtual disappearance; or that every year one man in 14, one woman in 7, consult a doctor about some form of mental disease and 600,000 people in Great Britain are referred to psychiatrists. As a matter of fact I did not even know that Galileo went blind before he died.
The trouble was that the anecdotes and information appeared to have little, if any, bearing on the main theme. And if this was true of the bits of physiological information from his own field, it was no less true of the lecturer’s scattered allusions to ideas from other disciplines. These appeared to serve no particular purpose beyond showing that he had heard of them. The first lecture was devoted to the history of the subject – “man’s search for the soul”. The audience was told something about Plato and Aristotle and Descartes and was then informed that
The historical evolution of the concept of the mind mirrors man’s social development, from Plato’s genetically-controlled meritocracy of the mind, to Gall’s picture of innate organs of intelligence and character shining through the honesty of the shape of a man’s head. The force is still with us. Current debates about the inheritance of intelligence, about the use of techniques of behavioural modification, and about the genetic basis of social behaviour show that our models of the mind are still a part of the political and social theory by which we live.
Plato “postulated that the spiritual mind of man, the rational soul, is in his head” because he “was a wealthy aristocrat who believed that leisure was essential to wisdom, which was therefore automatically denied to the working poor.” (Karl Marx and Karl Popper.) We think differently for equally extraneous reasons. Does Blakemore himself then think as he does about the brain for similarly extraneous reasons? In the first part of the first lecture the suggestion was thrown off that perhaps he does. For there was first an allusion to T.S. Kuhn’s Structure of Scientific Revolutions, and the shifting paradigms of science, and then this remarkably casual comment:
It seems almost inconceivable today that anyone could ever doubt that man’s mind is in his brain. For me, the ‘me-ness’ of me is undoubtedly situated about two inches behind my eyes, in the very middle of my head. But I am sure that I feel this with such confidence because I accept the currently fashionable scientific evidence that it is so.
The suggestion was thrown out and we heard no more of it. The idea for instance that a confident feeling originating in current fashion might for that reason be worth re-examining rather carefully had evidently never occurred to him.
It was the same with the allusion to ‘holism’ in Lecture 2 (which was actually about sleep):
The problem of human consciousness has stirred up fierce debate between the reductionists, who would banish the Cartesian soul from the machinery of the body, and the holists, who see consciousness as the most personal evidence for a universal law – that the whole is more than the sum of the parts… Most scientists are embarrassed when they cannot explain events by the forces and laws that they already understand. Those who study the brain usually shuffle their feet uncomfortably, and quickly change the subject, when the discussion turns to that aspect of brain function that we all know so intimately – consciousness itself.
There is then a ‘problem’ of sorts, it seems. But, once it had been mentioned, we heard no more of it. It was very difficult to see what purpose was served by introducing these asides into a course of lectures based firmly throughout on the presuppositions of reductionism; reductionism (or materialism, as we used to call it) taken not at all as a paradigm or a fashion, but as a simple fact.
Perhaps I was not quite fair in saying we heard no more of it. Dr. Blakemore did follow up his reference to holism with a sort of argument. It is here that we begin to get sucked into the morass, and I pause to remark that the italics in the passage just quoted, and in any quotations that follow, are my own, that I have added them in the hope of introducing a little firm ground, and that they all draw attention to one of two things: either the slithering of one idea into another, or the introduction of terms of mental reference or connotation (symbol, code, message, instructions, recognize, etc.) which, while perfectly justifiable in casual discourse, promptly turn into an unwadeable jelly any context expressly directed to demonstrating that what they refer to need not be there at all. “The brain researcher of today,” Dr. Blakemore conceded,
is almost as impotent to evaluate consciousness as a computer is to judge beauty or put a price on a Rembrandt portrait. But it does not follow that beauty is more than the sum of a number of definable features, or that a Rembrandt is more than all its individual brush-strokes.
What are we to make of this, when no-one who had not succeeded in losing touch with his wits would ever dream of supposing that it does ‘follow’? Beauty, and any other quality, is more than the sum of a number of anything elses, not because scientists or computers can or cannot do something or other, but (Cardinal Newman was once faced with the same surgical necessity of operating on hopeless confusion by expounding the obvious in words of one syllable) “for the plain reason that one idea is not another idea”.
I really do not know what to do about lectures 3, 4, and 5 (entitled respectively “An Image of Truth”, “A Child of the Moment” and “A Burning Fire”). They seemed to be mainly about perception and memory, with incidental allusions to Renaissance art, the Romantic movement, Bishop Berkeley, Noam Chomsky, and others. I have re-read them two or three times and can still make nothing but a hotchpotch of them. We began with a brief historical account of the gradual concentration of brain-research on the human cortex, and thus we ended with this:
The spread of laws through social groups of animals, which is so well developed in the primates, and especially in man, allows the experience of the individual to become reflected in the behaviour of other members of the same social group, even those in later generations. This is truly the social inheritance of acquired characteristics.
Perhaps it is. But what has it got to do with brain research? Or is the word inheritance intended to suggest that the whole process is a function of ribonucleic acid? I just don’t know. But it might be, since we had previously been informed that “RNA is the ribonucleic acid that transcribes the message from the DNA of the gene” and then goes on to assemble the protein molecules; that “all of these substances, DNA, RNA and protein, employ logically equivalent coded messages”; that “Each brain might contain within itself all the potential memories that it could ever form”; and that in that case “every event would merely trigger production of the appropriate molecule, which was already described, in latent form, in the animal’s inherited DNA.”
Once again, what on earth can be said about this sort of thing? Potential memories … logically equivalent messages … some trigger! It is no use being flippant, but I really do not think anything much can be done with these three lectures beyond transcribing a few passages to illustrate the slovenly thinking they reveal. The curious use of the words “explain” and “explanation”, for instance, in the following comment on the fact that rats sometimes choose between alternative behaviours:
I do not mean to suggest that rats have conscious free will like man, nor to say that man is just a jumped-up rat, but it does illustrate that the existence of choice is not necessarily incompatible with a mechanical view of mind. Just as one value of consciousness is to explain the actions and emotions of others, the element of free will in conscious thought is an internal explanation of our own (lecturer’s italics) behaviour.
Or this on internal “maps” (in relation to visual perception and the retinal image):
– much of the physiological investigation of the brain has been concerned with the Cartesian problem of internal maps.
The astonishing tangle within our heads makes us what we are … The richness of inter-connection makes each neuron a Cartesian soul.
– just as a real map is not the country it portrays so the sensory maps in the brain are not explanations for the objects of our perceptual world. For the source of knowledge, for the solution to Descartes’ dilemma, we must look within the maps, at the nerve cells of which they are made – the neurons of knowledge. There are more than 10,000 million nerve cells in the human brain etc, etc.
Does Dr Blakemore really think that, in order to find the “explanation” of a real map, we must look within it at the 10,000 million drops of ink and molecules of pulp of which it consists? Hardly. But, if not, what does he mean? Not apparently (in spite of the mind’s being a ‘mechanism’), that neurons and their interconnection are knowledge. For
Neurons present arguments to the brain based on the specific features that they detect…
So, when we have got down to neurons, we are left with the same old problem we started with:
We seem driven to say that such neurons have knowledge. They have intelligence, they are able to estimate the probability of outside events…
Presumably the knowledge and the intelligence and the faculty of estimation are dismissable, like free will, as “internal explanations”. But I really cannot go on.
But what about the “Burning Fire”? In the light of previous quotations readers who did not hear the lectures will not be surprised to learn that
– the eye has a language for its soliloquy to the brain. It speaks in symbols that define the important features of the visual scene.
These observations led up to Lecture 5, which was all on the subject of language. “Words are theories about objects”, we were told. It was rather less incoherent than the others, and dealt at one stage with recent discoveries about the different functions of the two cerebral hemispheres. One of them is specially connected with (located in, in Blakemore’s language) – broadly speaking – the imaginative activity of the mind, and the other with its analytical activity. Some people (although the lecturer did not mention this) have also begun to relate the two hemispheres to psychological differences between the two sexes. But the bulk of the lecture consisted of a lengthy account of experiments in teaching chimpanzees to speak, from which we learned that there is at least one now who can put as many as three words together; not only so, but it can grasp the difference made by putting them in one order or in another.
How relevant is all this to the problem with which the lecturer evidently thought he was dealing – the nature of human consciousness in the present and its possible development in the future? Suppose another chimpanzee is taught to put as many as six words together, will it become any more relevant? I fear not. What other purpose then will it serve? Alas it is clear from the lecture itself that the real motive behind this kind of speech is neither heuristic nor practical. It is done with the object, conscious or subconscious, of bolstering the Darwinian theory of an exclusively biological ‘descent of man’, which has come increasingly under (unpublicized) attack in the last decade or two. Thus:
Virtually the only things that can be defended as uniquely human traits are [man’s] continuous sexual appetite, his formal taboo on incest, and his language. Algernon Swinburne once called human speech “a burning fire”, and there can be no doubt that the use of language was just as important in human evolution as the discovery of the flame itself. But now the experiments with American sign language threaten to force man to share this ultimate cultural crown with apes.
Lucy in short (the chimpanzee in question)
– is a member of a small and élite group of apes who are, unknowingly, excavating the foundations on which man has built the myth of his biological uniqueness.
Teaching monkeys to speak is one thing. Brain research is another, and I am certainly not suggesting that it has no value, or that the resultant possibilities of remedying physical defects and injuries, and their psychic consequences, by physical means should not be exploited up to the hilt. A correspondent from the Cybernetics Department of Brunel University wrote to the Radio Times after the first lecture a letter, which was published in the Listener, criticising the idea that different vital functions are ‘localised in various parts of brain tissue’ (and adding, incidentally, that it is no longer valid today), and he went on to pinpoint the confusion of thought:
If the nerves in my arm leading to my hand were severed, then I would not have been able to write this letter. This, according to the ‘localisation’ hypothesis, could be taken to mean that the centre for writing letters to Radio Times is in my arm!
He might have added that, if he happened to have arthritis in his arm, the first step towards getting a letter out of him would have been to cure the arthritis.
But that is not the sort of thing the lecturer is really interested in. His enthusiasm for brain research comes of an apparently unshakeable conviction that, once it has been established that, in a human being, the subtlest idea and the most refined feeling are invariably accompanied by a physical, or say an electrical, change in the brain, the mystery of human consciousness will have been solved and man will be in full control of his future. The last lecture was entitled “Madness and Morality”. It began with more about the two hemispheres and then went on to sociology and the future evolution of humanity. And it displayed the same deplorable confusion of mind. If we know something about the physical structure of the brain, we can either make physical use of that knowledge (surgery, drugs and so forth), or we can decide that another way of approaching our problem is more appropriate. Let us call it the ‘consciousness’ way. Take the two hemispheres for instance. If a movement is set on foot for “liberating the right hemisphere”, that is the imaginative, and relatively feminine, one (and according to the lecturer, there is such a movement), then the campaigner must mean by ‘liberation’ one of two things – either direct action on the brain itself, or indirect action by the ordinary means of agitation, argument, propaganda; by “the spread of ideas” in fact: in which case no difference whatever is made by calling it ‘liberation of the right hemisphere’, instead of something like freeing the imagination, or liberation of women.
I must doubt whether the lecturer is capable of such an uncomfortably disjunctive proposition. Which of the two alternatives, for instance, had he in mind when he remarked:
What we should be striving to achieve for ourselves and our brains is not the pampering of one hemisphere to the neglect of the other (whether right or left) or their independent development, but the marriage and harmony of the two?
Apparently the latter, for he next spent a good deal of time reassuring his audience against any fears they might have of an Orwellian society based on ‘psycho-surgery’:
Such fears are, in the main, quite unfounded: fortunately, the sheer paraphernalia of experimental brain manipulation, the implanted electrodes, the cables and electronics, the tedious surgical techniques, make that kind of brain control beyond the reach of any modern-day Alexander or Genghis Khan who wishes to motivate an army or subjugate the world at the push of a button.
So the approach through psycho-surgery is out; and we are thrown back on the old “spread of ideas” approach:
And in any case, are our brains not already more totally disciplined, our opinions more firmly moulded, and our minds more sharply directed by the political and social environment than by any electrode that could be put into our heads?
This came near the end of the last lecture. But Dr. Blakemore was not going to let a tedious bit of logical consequence stand in the way of his march towards a peroration. So, just before the end, he crowned the whole disorderly conglomeration of ideas, of which the series consisted, by triumphantly drawing the opposite conclusion from the one he had just been pointing to. Research on the brain, he affirmed,
will give a greater understanding of the nature of man himself. The study of the brain is one of the final frontiers of human knowledge and of much more immediate importance than understanding the infinity of space or the mystery of the atom. For without a description of the brain, without an account of the forces that mould human behaviour, there can never be a truly objective new ethic based on the needs and rights of men.
As I sit back from this laborious and, I fear, not very interesting account and try once more to survey the whole of what I have attempted to describe, I am left with one question uppermost in my mind. How much longer will it all go on? For how much longer will educated men go on being allured by the ignis fatuus of a ‘consciousness’ accessible to physical experiment and investigation? How much longer will they go on spending untiring energy in pursuit of it?
Forever? Will any argument ever penetrate the mental morass far enough to convince them of its inherent futility? If the Reith lectures 1976, shortly to be published in hardback and paperback editions, make that look depressingly unlikely, or even impossible, I do take a morsel of comfort when I reflect on something else altogether, something from the past history of science – I mean the quest for perpetual motion. Here too, according to the Encyclopedia Britannica (11th edition), there was for a long time the same persistence in intensive technical research and experiment, the same refusal to be put off by failure after failure. No doubt the perpetual motion fans answered the objections of the more clear-eyed among their contemporaries with the identical arguments now used by Professor Blakemore, when doubts obtrude and his colleagues begin shuffling their feet: “We should not translate technical limitations into metaphysical ones … there is no reason to believe that the scientific method will fail…” And so the experiments went on and on, and the machines grew ever more and more complicated.
And yet – in the end – the experiments were abandoned. It was realised that perpetual motion is impossible on principle; and the principle was even given a name: the Law of the Conservation of Energy. In the same way I cannot think it altogether impossible that a time will come when even the most hard-boiled and soft-brained empiricists will manage to grasp the fact that someone who wants to discover what happens when he looks out of a window, and who (if anyone) is making it happen, will never, however long he goes on trying, do so by going out of his house and looking in through the window – or, put more abstractly, the principle (as final for a neuron as for a whole human being) that perceiving and every other mode of consciousness, is categorically other than being perceptible, and therefore being accessible to a merely physical investigation.
What about a high-sounding name for that one? The Law of Intentionality? Not very likely. They would find their own label for it. And then the way would perhaps lie a little more open towards a more open-eyed science, a science of the soul and spirit as well as of the flesh.