Sunday, July 20, 2008
Consciousness - a Just-So Story
My notion is summarized in the idea that any animal with a brain complex enough that it must engage in some form of internal "debate" to figure out what to do next will be conscious in some fashion. More on this later. But there appear to be animals that are essentially on autopilot all the time. The various burrowing wasps that paralyze their prey before laying eggs in the still-living victim will repeat all the steps of bringing the prey into the burrow over and over if a naturalist interrupts the sequence by moving the victim. Wasps, ants, and bees may operate somewhat as I do when I drive my car and forget whether I made the turn-off or not. Human beings (I, certainly) can perform a variety of tasks that don't engage the conscious brain; some simpler animals might have only this capability.
But as animals get larger and evolve other features it sometimes appears to be advantageous to hone the decision circuits of the brain to make overt comparisons of possible solutions rather than going through stereotyped routines whenever there is a certain stimulus. This internal "debate" will be seen subjectively as "consciousness." The implication here is that I think that there are some animals that are conscious in some form or other that have been excluded in the past. For example, I would allow that (say) cattle or dogs or raccoons are conscious in some fashion. The convention that an animal must be able to recognize itself in a mirror seems to be too restrictive to me.
I am left with a problem with the larger reptiles. They are big and long-lived, but their brains appear too small to accommodate the kind of consciousness that I am postulating, so I don't pretend to have this little story nailed down.
But I do think that since we are human beings and we tend to magnify all our traits, it may be that we are making of consciousness more than it needs to be. We'll see.
Monday, June 16, 2008
“The story of the human race is war. Except for brief and precarious interludes there has never been peace in the world; and long before history began murderous strife was universal and unending.”
Winston Churchill's summary of our species could be dismissed as the pessimism of a man who fought history's most awful war and was present at the birth of a cold war that could have destroyed humanity altogether. In fact it has sadly stood the test of time. Though the cold war is a memory, and hot wars between major nations are rare, we still do not have peace in the world. Even before the infamous year of 2001, with its horrific terrorist attacks on the United States and subsequent war in Afghanistan, the World Conflict List catalogued sixty-eight areas of systematic violence, from Albania and Algeria through Zambia and Zimbabwe.
Churchill's speculation about prehistory has also been borne out. Modern foragers, who offer a glimpse of life in prehistoric societies, were once thought to engage only in ceremonial battles that were called to a halt as soon as the first man fell. Now they are known to kill one another at rates that dwarf the casualties from our world wars. The archaeological record is no happier. Buried in the ground and hidden in caves lie silent witnesses to a bloody prehistory stretching back hundreds of thousands of years. They include skeletons with scalping marks, ax-shaped dents, and arrowheads embedded in them; weapons like tomahawks and maces that are useless for hunting but specialized for homicide; fortifications such as palisades of sharpened sticks; and paintings from several continents showing men firing arrows, spears, or boomerangs at one another and being felled by these weapons. For decades, "anthropologists of peace" denied that any human group had ever practiced cannibalism, but evidence to the contrary has been piling up and now includes a smoking gun. In an 850-year-old site in the American Southwest, archaeologists have found human bones that were hacked up like the bones of animals used for food. They also found traces of human myoglobin (a muscle protein) on pot shards, and—damningly—in a lump of fossilized human excrement. Members of Homo antecessor, relatives of the common ancestor of Neanderthals and modern humans, bashed and butchered one another too, suggesting that violence and cannibalism go back at least 800,000 years.
War is only one of the ways in which people kill other people. In much of the world, war shades into smaller-scale violence such as ethnic strife, turf battles, blood feuds, and individual homicides. Here too, despite undeniable improvements, we do not have anything like peace. Though Western societies have seen murder rates fall between tenfold and a hundredfold in the past millennium, the United States lost a million people to homicide in the twentieth century, and an American man has about a one-half percent lifetime chance of being murdered.
…………………………………………………..
The reduction of violence on scales large and small is one of our greatest moral concerns. We ought to use every intellectual tool available to understand what it is about the human mind and human social arrangements that leads people to hurt and kill so much. But as with the other moral concerns examined in this part of the book, the effort to figure out what is going on has been hijacked by an effort to legislate the correct answer. In the case of violence, the correct answer is that violence has nothing to do with human nature but is a pathology inflicted by malign elements outside us. Violence is a behavior taught by the culture, or an infectious disease endemic to certain environments.
This hypothesis has become the central dogma of a secular faith, repeatedly avowed in public proclamations like a daily prayer or pledge of allegiance. Recall Ashley Montagu's UNESCO resolution that biology supports an ethic of "universal brotherhood" and the anthropologists who believed that "nonviolence and peace were likely the norm throughout most of human prehistory." In the 1980s, many social science organizations endorsed the Seville Statement, which declared that it is "scientifically incorrect" to say that humans have a "violent brain" or have undergone selection for violence. "War is not an instinct but an invention," wrote Ortega y Gasset, paralleling his claim that man has no nature but only history. A recent United Nations Declaration on the Elimination of Violence Against Women announced that "violence is part of an historical process, and is not natural or born of biological determinism." A 1999 ad by the National Funding Collaborative on Violence Prevention declared that "violence is learned behavior."
Another sign of this faith-based approach to violence is the averred certainty that particular environmental explanations are correct. We know the causes of violence, it is repeatedly said, and we also know how to eliminate it. Only a failure of commitment has prevented us from doing so. Remember Lyndon Johnson saying that "all of us know" that the conditions that breed violence are ignorance, discrimination, poverty, and disease. A 1997 article on violence in a popular science magazine quoted a clinical geneticist who echoed LBJ:
We know what causes violence in our society: poverty, discrimination, the failure of our educational system. It's not the genes that cause violence in our society. It's our social system.
The authors of the article, the historians Betty and Daniel Kevles, agreed:
We need better education, nutrition, and intervention in dysfunctional homes and in the lives of abused children, perhaps to the point of removing them from the control of their incompetent parents. But such responses would be expensive and socially controversial.
The Painful Elaboration of the Fatuous: Norman Levitt Deconstructs Steve Fuller’s Postmodernist Critique of Evolution
book review by Dr. Norman Levitt
The Intelligent Design movement begets intellectual monstrosities with doleful regularity, but Steve Fuller’s new book, I think, occupies an especially odd place in this teratology. Fuller, be it remembered, is a professor of sociology of science at the University of Warwick (UK), whose career has been built on a lofty and careless disdain for science itself. That trajectory reached its apex (or, depending on how you look at it, its nadir) when he appeared as an “expert” witness for the defense (i.e., the crypto-creationists of the Dover, PA school board) in the celebrated “Kitzmiller” case. As we know, the upshot of this litigation was that a conservative and conventionally religious federal judge rendered a ruling that not only came down squarely against the pro-ID school board, but savagely excoriated the ID movement per se as without legitimate standing in science or science education. Fuller’s testimony only helped to seal the school board’s well-merited doom.
The book under review is Fuller’s subsequent effort to justify philosophically the position that failed so miserably to sway the Kitzmiller ruling in ID’s favor. It is with frank satisfaction and not a little glee that I can report that it is a truly miserable piece of work, crammed with errors scientific, historical, and even theological, a book that will find approving readers only amongst hard-core ID enthusiasts hungry for agreement but indifferent to the quality of evidence offered in support of their position. Fuller really does make it up as he goes along, laying out arguments that hardly need serious thought to refute in that they are based on howlers and solecisms that collapse under the lightest scrutiny. In this review I also want to consider the defection of Fuller (who all his life has proclaimed himself a progressive and “leftist”) to a cause demonstrably reactionary in all respects. Does this presage a wider convulsion in the academic left that will see a proliferation of equally peculiar misalliances? Academics are often very faddish creatures more terrified by the prospect of missing a bandwagon than by possible shortcomings in their own arguments. Has Fuller identified a true bandwagon with uncanny prescience or merely hopped on board a broke-down old manure wagon?
Abusing Ideas: Randomness, Complexity & All That
First, to the evaluation of Science vs. Religion itself. Merely out of mathematical whimsy, I want to consider Fuller’s very extensive discussion of “complexity” and “randomness.” This, as mathematicians and computer scientists are well aware, is a subject that has been thoroughly studied and analyzed for decades, generating a slew of deep results and fertile conjectures. Fuller, however, shows no awareness of the actual mathematical literature (even though much of it is accessible, at the basic level, to anyone with minimal mathematical skill). Instead, he seems content to take ID-theorist William Dembski as his guide. He attributes to Dembski a maxim to the effect that it is “impossible” to design a true random-number generator because it is ultimately possible to “infer” the algorithm that lies behind it (p. 61). But this grossly misunderstands a basic principle of complexity theory, the insight that in general it is not possible to devise an effective method for distinguishing a random from a non-random stream of data. Indeed, it is easily possible for virtually anyone to devise a simple way of generating such a data stream (making it highly “compressible” or non-random), which will, for all practical purposes, defeat any human attempt to say whether it is or isn’t random or how “compressible” it really is. For instance, just by way of mathematical doodling, let s_n be defined as the integer between 0 and 9 that is specified by the formula:
s_n = [(p_n-th digit of the decimal expansion of sin(17/31)) - 4] mod 10
where p_n is the n-th prime number.
(Please note that this formula has no mathematical importance; it’s purely off the top of my head.) It is very easy for anyone knowing a bit of first-year calculus plus a bit of computer programming to write a program to generate this sequence using a couple of dozen lines of code, at most.
However, if I hand you, say, the first 3,000,000,000 terms of this sequence without giving you the generator, either as a program or in words, it will be impossible, for all practical purposes, for you to tell me whether this is a “random” sequence or a “compressible” one (it is, in fact, highly compressible), and still less possible for you to specify a generating algorithm.
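By way of illustration (this sketch is mine, not Levitt’s, and assumes Python with the mpmath and sympy libraries), the generator does indeed fit in far fewer than the couple of dozen lines he allows:

```python
from mpmath import mp, mpf, sin, nstr
from sympy import prime          # prime(n) returns the n-th prime number

def s(n):
    pn = prime(n)                # p_n, the n-th prime (prime(1) == 2)
    mp.dps = pn + 50             # carry enough working precision to reach digit p_n
    frac = nstr(sin(mpf(17) / 31), pn + 10).split('.')[1]   # digits after the point
    return (int(frac[pn - 1]) - 4) % 10                     # p_n-th digit, shifted mod 10

print([s(n) for n in range(1, 21)])   # first twenty terms of the stream
```

The emitted digit stream would pass casual statistical inspection, yet its descriptive complexity is tiny: the short program above is itself the compression.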
Such phenomena are not mere computer-laboratory curiosities. In celestial mechanics, for instance, a deterministic classical process may generate a string of parameters that is indistinguishable from random despite its deterministic genesis. This is one of the most fascinating aspects of “chaos theory.” But in the context of I.D. “theory,” the effect is to refute the naïve notion that design by an intelligent agent is always discernible.
Fuller, despite devoting a full chapter to “complexity” and expatiating therein on chaos theory as well, shows virtually no sign of any real familiarity with this mathematics. His exposition jumps from one topic to another, from one thinker to another, mathematical or otherwise, without any demonstration that they should be linked other than by some vague connection to “complexity” in some sense or another. This deliberately discards the precision and rigor that the introduction of mathematical discourse is meant to ensure in the first place. The whole point of this chapter, one gathers, is that the emphasis on “complexity” by Dembski and Co. underwrites, to Fuller’s way of thinking, the legitimate scientific status of ID theory.
To give just one example of the fatuity to which this leads, Fuller swallows whole the idea that computer simulation of “lifelike” complexity requires that the “design” of that phenomenon must already be embedded in the “intelligent design” of the hardware and software involved. This goes wildly astray, as mathematicians incomparably superior to Dembski (John Conway, inventor of the Game of Life, for instance) will gladly testify. The point of such models is that they emulate the posited key features of the standard evolutionary model, that is, the action of a simple selective process on randomly-generated variation. It has to be noted that the variation involved may indeed be as random as seems possible in the universe; it need not be created by a pseudo-random number generator built into the program, but can be taken from unconnected external phenomena, e.g., radioactive decay or the total take at a Las Vegas casino. What emerges in the end from this completely un-designed input is “complexity” that mirrors that of organic processes. The “intelligent design” involved consists merely of mimicking the mindless mechanism postulated by Darwinian theory, not of creating novelty in that aspect of things. This constitutes an in silico test of the fundamental Darwinian thesis. These computer experiments have enormously strengthened the hypothesis that, in nature, what we think of as organic complexity arises from an algorithmic mechanism that is simple to describe, iterated time and again as it acts upon random variation.
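To make that scheme concrete, here is a toy of my own devising (nothing like it appears in the review): a fixed, mindless selection rule applied over and over to randomly generated variation. The fitness function and every parameter below are arbitrary stand-ins for an environment; no step of the loop "knows" the outcome it is building toward.

```python
import random

GENOME_LEN, POP_SIZE, MUTATION_RATE = 32, 100, 0.02
TARGET = [1] * GENOME_LEN        # arbitrary stand-in for "what the environment rewards"

def fitness(genome):
    # Count positions matching the environment's (arbitrary) preference.
    return sum(g == t for g, t in zip(genome, TARGET))

# Start from pure randomness.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)     # selection: the fitter half survives
    survivors = population[:POP_SIZE // 2]
    children = [[1 - g if random.random() < MUTATION_RATE else g
                 for g in parent] for parent in survivors]   # random variation
    population = survivors + children

print(fitness(population[0]), "of", GENOME_LEN)    # climbs to or near the maximum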
It is almost superfluous to add that Fuller has done little to come to terms with Dembski’s most trenchant critics, actual experts in complexity and information theory, such as Mark Perakh and Jeffrey Shallit, the latter of whom has justifiably damned Dembski’s work as “pseudo-mathematics.” Nor has Fuller been very accurate in describing Dembski’s intended program, which is to demonstrate “mathematically” that the evolution of complex life via natural selection is literally impossible. But to acquaint himself with this now-voluminous literature would violate one of his favorite axioms, viz., that a “social epistemologist” needn’t actually understand science in order to belittle it.
Evolutionists as an Old Boys Club
A similar farce plays out when Fuller tries to address the larger question of the supposedly contentious nature of evolutionary theory within the scientific community itself. In the World According to Fuller, evolutionary theory never really got past the stage of being a “well evidenced ideology” rather than a “properly testable science” (p. 123). What he is saying, in effect, is that the claims from all branches of biology and related science that they have contributed to a vast stream of convergent evidence verifying the essential precepts of evolution are in great measure delusional. He seems to think that biology, as a constellation of disciplines, is some kind of socially-constructed freemasonry in which assent to basic Darwinian principles constitutes a ritual formula necessary to make one part of the brotherhood rather than a cognitively-justified inference from hard evidence. More, he seems to think that evolutionary thought is mere ideological window-dressing, contributing nothing to the “hard science” behind molecular biology and the like.
None of this is backed up by serious analysis of the working methods and logical structure of biology itself. Fuller complacently views the ascendancy of evolutionary thought as a “rhetorical” rather than a “scientific” development. His principal evidence? The paucity of Nobel Prizes awarded for work on evolution! Of course, he never pauses to consider that under the idiosyncratic organization of the Nobel awards, there is no prize for biology as such. Biologists are smuggled in under the “Medicine and Physiology” category, which is just expansive enough to accommodate ethologists like Lorenz or Tinbergen, but not hard-core evolutionary theorists. In all of these pronouncements, Fuller is hard-pressed to hide his scorn for actual scientists who, it is obvious to him, know much less about what they think, and how and why they think it, than a social theorist like himself, one who is enormously content to cite his own work endlessly.
Newton, Biblical Literalism & the Misuse of Terminology
Curiously, Fuller is even more careless and dogmatic when dealing with historical and religious matters than when talking about science. For instance, he blithely associates Newton’s secretive Anti-Trinitarianism with the Unitarian doctrine that began to gain popularity late in the Enlightenment, the idea being, I suppose, that Newton’s religiosity is really consonant with a tolerant and latitudinarian attitude toward doctrinal matters. But this flies in the face of the fact that Newton was a grim dogmatist in his religious beliefs, whose only link to “Unitarians” in the modern sense is that both deny the full divinity of Jesus of Nazareth. Newton, however, came to his views out of a strict biblical literalism deriving from the Puritan tradition that had driven England to civil war. From his point of view, the lack of direct biblical authority for the notion that Christ is an aspect of the deity condemned that dogma as a corrupt accretion inimical to true religion. Modern Unitarianism, on the other hand, arose from a skeptical attitude toward the literal truth of the Bible and severe doubts about supernaturalism and miracles in general. Newton would have been horrified by it.
This topic may seem to be a mere diversion in any serious discussion of the proper ground-rules of scientific practice, but Fuller makes Newton into a totemic figure for his own rhetorical position. Fuller’s major contention is that seeking to know the Mind of God, in a rather literal sense, trying to discern the root intelligence behind the accessible phenomenology of the universe, is just as good a way of doing valid science as “methodological materialism.” In this respect, Newton, whose religious motivations are beyond question, is the paragon to contrast with the metaphysical materialist Darwin (and, presumably, the vast majority of productive scientists who have lived and worked since Darwin’s day). However, one may freely concede that strong, conventional religious feeling can motivate an individual to do the hard work of science without yielding an inch to the quite different premise that the supposed insights of religion may rightfully dictate the manifest content of scientific work. The latter principle infuses Intelligent Design Theory, as practiced by Dembski, Behe, Wells and the gang clustered around the Discovery Institute under the tutelage of Phillip Johnson. And, conveniently, Newton himself provides a telling example of the intellectual quicksand into which it can lead.
Theistic Eschatology and Bad Physics: Newton’s Greatest Blunder
Newton’s religious streak led him to take an intense interest in eschatology, that is, the final purpose and fate of the created universe. He devoted as much time to investigations into the divine timetable for the End of Days — the prophesied arrival of the Day of Judgment — as he did to his research in mathematics and physics. But he did so in a traditional manner, that is to say, relying on information supposedly encoded in the Bible, rather than on any novel cosmological insights arising from the revolution he himself had wrought in celestial mechanics. In this sense, at least, we have evidence of the enormous waste of scientific talent and intellectual energy that can be caused by an obsessive concern with religion.
Yet Newton’s religion at one point led him into an even more paradigmatic scientific solecism, one that perfectly illustrates the peril of allowing the content of one’s scientific work to be dictated by one’s religious fervor. Newton, no less than his frankly materialist or Deist successors, was well aware that the cosmological picture flowing from his own achievement left little room for an interventionist God — an activist, miracle-working being whose constant attention is necessary to the steady functioning of the universe. He sensed that his own brilliant ideas constituted an argument for the deus absconditus, a conceptual innovation that was soon to become a standard item of skeptical Enlightenment thought. But Newton’s religious traditionalism, unconventional as it was in some respects, found this notion abhorrent because the impersonal God it cautiously endorsed was a far cry from the Biblical Ancient of Days embedded in his own theology. This led him to argue that his own system of the world must be incomplete and that it must indeed be modified to allow a role for an interventionist God whose intermittent action is necessary to keep planets and comets in their orbits. The key point is that this line of thought did not follow from the mathematics of Newton’s mechanics, nor from any sound new physical insight. It was dictated, rather, by the psychological necessity of reconciling his scientific achievement with his pre-existing religious dogma. It was not only an uncharacteristically unsound idea; it constitutes Newton’s greatest intellectual blunder.
One would think that Fuller would at least try to come to terms with this curious history, given that he offers Newton as the paragon of scientific “design theorists”. But he never seems to have heard of it, assuming he is not simply burying it as grossly discomfiting to his line of argument. In any event, given that Newton is the stick with which Fuller intends to beat Darwin, his lack of real knowledge of serious Newtoniana is emblematic of the shallowness of his book.
Ignoring the Politically Obvious
That obliviousness is even more evident in Fuller’s utter failure to come to terms with the political nature of the Intelligent Design movement. He mentions the notorious “wedge” strategy once or twice, but only with an exculpatory purpose. As Fuller would have it, “Just as the ACLU helped to drive a wedge between the teaching of science and theology, the Discovery Institute would now drive a wedge between the teaching of science and the anti-theology prejudice euphemistically called ‘methodological naturalism.’” Aside from the false symmetry of this characterization, this description simply will not wash. The “wedge,” as conceived by the hierophants of the Discovery Institute, means discrediting evolutionary theory as the initial step of a program to re-institute traditional religious precepts (fundamentalist Christian in particular) as the dominant code governing civil and legal affairs in this country. It is a patently reactionary political program, not a philosophical one.
Naturally, this embarrassing fact is too much for someone who, like Fuller, thinks of himself as a left-populist, to admit directly. He tries to get himself off the hook by fulminating against the British National Party, a right-wing sect obsessed with maintaining ethnic and racial purity in the UK against immigrants and such, claiming that its repellent ideology is a direct corollary of Darwinian thought. This is more than a little silly if it is not actually disingenuous. I daresay that if I were given five bucks for every BNP recruit who was prompted to join that mob primarily by his enthusiasm for evolutionary theory, I couldn’t even muster the price of a tank of gas.
But this diversion serves Fuller as an excuse for ignoring the “deep structure” of, say, the Discovery Institute, whose board prominently includes the Christian Dominionist billionaire Howard Ahmanson, a prominent contributor to a legion of far-right causes. Equally, it exonerates the very organization that called Fuller as a witness in the Kitzmiller case, the Thomas More Law Center, which provided pro bono counsel to the beleaguered Dover school board. This outfit, please recall, was founded and funded by pizza magnate Thomas Monaghan, an ultramontane right-wing Catholic who has also established the Ave Maria Law School, hoping to expand it to a full-fledged university dedicated to turning out neo-Crusaders by the thousand.
Movement or Tantrum?
Now I would like to consider the question of whether Fuller’s ideological flight into the embrace of the theocratic right bespeaks a wider tendency within the postmodern academy to trade its vaunted left-radicalism for the honor of riding shotgun on behalf of the new breed of creationist theocrats. Certainly, Fuller is not the first “science studies” scholar to put forth a brief on behalf of creationism. A few other figures, some even more prominent than Fuller, have done so within the past decade. Still, they all seem to have pulled in their horns as soon as it became clear that creationism is not simply the cultural self-assertion of a repressed minority trying to defy the brute scientism of modern society, but rather the tool of a well-funded and deadly serious political movement able to call upon the near-majority instinctively sympathetic to creationist ideas. Fuller, so far as I know, is the only member of this academic clan to have unreservedly taken the plunge, irrevocably committing himself to the creationist cause.
The wider lesson, if there be any, is that animosity to science as such and to its cognitive authority still pervades academic life outside the dominion of the science faculty. The compost that nurtured Steve Fuller and many of his associates in their development of “social constructivist” theory consisted principally of these doubts, resentments and antagonisms. This soil put forth a host of noxious weeds, quite varied, and sometimes taxonomically linked only by the common bitterness they exuded. Each in its own way — literary theory, cultural studies, cultural anthropology, women’s studies, ethnic studies, and a long-standing Marxisant approach to sociology — joined the tacit alliance of antiscientific intellectuals whose imprecations grew all the louder even as their influence over the practice of science and public science policy shrank to imperceptibility.
The anti-science of the contemporary academy is a late and petulant echo of Spiritualism, Anthroposophy, Theosophy, Forteanism, and a dozen other cults that once appealed to the culturally fashionable. But now they are bound up in the knotty and constipated jargon of journals and seminar rooms and lack the high spirits that made the original versions pleasantly whimsical. Anti-science in today’s university whines and grumbles when it is not busy bedecking itself with the pseudo-virtue of today’s eco-Puritanism: the Animal Rights Movement, fulminant opposition to genetic engineering, Deep Ecology, and so forth.
It is easy to mock this development and hard not to scorn it. But perhaps a little sympathy is in order, providing it stops well short of indulgence. Basically, one is dealing here with a community of people who, by common standards, are quite intelligent and imaginative, and certainly diligent enough to carve out large areas of discourse for themselves wherein their assumptions and modes of analysis remain in the saddle for decades at a time. This is not a trivial achievement, think what we may of the fundamental soundness of the enterprise. We can’t really speak of a Ship of Fools here, but rather a flotilla of somewhat unhinged idealists who still can put up a pretty good fight. Yet, ultimately, they are cruelly and fatally hemmed in by their inability to come to terms with the deepest and most penetrating ideas that our civilization, or any civilization, has yet been able to generate: the ideas of science and mathematics.
Further, they must confront the practical significance of the barrier that separates them from first-hand knowledge of science, engineering, and economics. Fields like these are vital to the formulation and critique of public policy and interact with our political institutions to a vast extent. They are the conceptual fuel that drives modern society. The vagaries of literary theorists or cultural anthropologists, by contrast, hardly leave a trace on wider public concerns. They could easily fade away without anyone outside the faith taking much notice.
Ultimately, then, we shouldn’t be startled by the alienation of academic non-scientists from science and technology, nor by the churlishness with which they address such issues. Steve Fuller is merely an extreme case, an outlier. He represents what a widespread attitude may become when infused with megadoses of egotism and self-regard, and when maximally saturated with the desire to belittle and condescend to the much-hated scientific community. Fuller has perpetrated a dreadful book, but as a tantrum, it is exemplary. He may draw some cautious admiration from his colleagues for the operatic brio of his histrionics. But it seems to me doubtful — and this is a very good thing — that any large segment of the science-studies community, or of the larger “academic left,” will join him in the attempt to find comrades-in-arms in such venues as the Discovery Institute or the wider Intelligent Design movement. Figures like Johnson, Dembski, and Behe, not to mention Ahmanson and Monaghan, burn all too visibly with a searing desire to inaugurate a Godly polity that will be as intolerable to the postmodern left as to conventional liberals or secularists. These guys are just too scary, even for those academics who have heretofore flaunted their disdain for orthodox science. Fuller, I’m afraid, will just have to go it alone.
Tuesday, April 1, 2008
In 1984, members of the religious cult of Bhagwan Shree Rajneesh sprayed the salad bars of ten restaurants in The Dalles, Oregon, with a solution containing salmonella. The idea was to keep townspeople from voting in a critically contested local election; 751 people became ill. This cult merely obtained salmonella samples by mail order and cultured them. (This is low-tech kitchen stuff.)
The second cult, in contrast, recruited technically trained people in considerable numbers and engaged in indiscriminate slaughter. Aum Shinrikyo ("Aum" is a sacred syllable chanted in Hindu and Buddhist prayers; "Shinrikyo" means "supreme truth") is a wealthy religious cult in Japan (recently renamed Aleph), with many members in Russia. Its recruiters aggressively targeted university communities, attracting disaffected students and experts in science and engineering with promises of spiritual enlightenment. Beginning in 1989 the group intimidated and murdered political opponents and their families by conventional means, but its knowledge and financial base later allowed it to launch substantial, coordinated chemical warfare attacks.
In 1994, they used sarin nerve gas to attack the judges of a court in central Japan who were about to hand down an unfavorable real-estate ruling concerning sect property; the attack killed seven people in a residential neighborhood. In 1995, packages containing this nerve gas were placed on five different trains in the Tokyo subway system that converged on an area housing many government ministries, killing 12 and injuring over 5,500 people.
During the investigations that followed, it turned out that members of Aum Shinrikyo had planned and executed ten attacks using chemical weapons and made seven attempts using such biological weapons as anthrax. They had produced enough sarin to kill an estimated 4.2 million people. Other chemical agents found in their arsenal had been used against both political enemies and dissident members.
Sunday, March 23, 2008
A Question for Creationists:
Creationists who wish to deny the evidence of horse evolution should carefully consider this: how else can you explain the sequence of horse fossils? Even if creationists insist on ignoring the transitional fossils (many of which have been found), how can the unmistakable sequence of these fossils be explained? Did God create Hyracotherium, then kill off Hyracotherium and create some Hyracotherium-Orohippus intermediates, then kill off the intermediates and create Orohippus, then kill off Orohippus and create Epihippus, then allow Epihippus to "microevolve" into Duchesnehippus, then kill off Duchesnehippus and create Mesohippus, then create some Mesohippus-Miohippus intermediates, then create Miohippus, then kill off Mesohippus, etc. ... each species coincidentally similar to the species that came just before and just after?
Creationism utterly fails to explain the sequence of known horse fossils from the last 50 million years. That is, without invoking the "God Created Everything To Look Just Like Evolution Happened" Theory.
[And I'm not even mentioning all the other evidence for evolution that is totally independent of the fossil record -- developmental biology, comparative DNA & protein studies, morphological analyses, biogeography, etc. The fossil record, horses included, is only a small part of the story.]
Truly persistent and/or desperate creationists are thus forced into illogical, unjustified attacks on fossil dating methods, or irrelevant and usually flat-out wrong proclamations about a supposed "lack" of "transitional forms". It's sad. To me, the horse fossils tell a magnificent and fascinating story, of millions of animals living out their lives, in their natural world, through millions of years. I am a dedicated horse rider and am very happy that the one-toed grazing Equus survived to the present. Evolution in no way impedes my ability to admire the beauty and nobility of these animals. Instead, it enriches my appreciation and understanding of modern horses and their rich history.
http://www.noanswersingenesis.org.au/horse_evolution.htm
Monday, March 3, 2008
From "How the Mind Works" by Steven Pinker
The evolutionary psychology of this book is a departure from the dominant view of the human mind in our intellectual tradition, which Tooby and Cosmides have dubbed the Standard Social Science Model (SSSM). The SSSM proposes a fundamental division between biology and culture. Biology endows humans with the five senses, a few drives like hunger and fear, and a general capacity to learn. But biological evolution, according to the SSSM, has been superseded by cultural evolution: culture is an autonomous entity that carries out a desire to perpetuate itself by setting up expectations and assigning roles, which can vary arbitrarily from society to society. Even the reformers of the SSSM have accepted its framing of the issues. Biology is "just as important as" culture, say the reformers; biology imposes "constraints" on behavior, and all behavior is a mixture of the two.
The SSSM not only has become an intellectual orthodoxy but has acquired a moral authority. When sociobiologists first began to challenge it, they met with a ferocity that is unusual even by the standards of academic invective. The biologist E. O. Wilson was doused with a pitcher of ice water at a scientific convention, and students yelled for his dismissal over bullhorns and put up posters urging people to bring noisemakers to his lectures. Angry manifestos and book-length denunciations were published by organizations with names like Science for the People and The Campaign Against Racism, IQ, and the Class Society. In "Not in Our Genes", Richard Lewontin, Steven Rose, and Leon Kamin dropped innuendos about Donald Symons' sex life and doctored a defensible passage of Richard Dawkins' into an insane one. (Dawkins said of genes, "They created us, body and mind"; the authors quoted it as "They control us, body and mind.") When Scientific American ran an article on behavior genetics (studies of twins, families, and adoptees), they entitled it "Eugenics Revisited," an allusion to the discredited movement to improve the human genetic stock. When the magazine covered evolutionary psychology, they called the article "The New Social Darwinists," an allusion to the nineteenth-century movement that justified social inequality as part of the wisdom of nature. …
In 1986, twenty social scientists at a "Brain and Aggression" meeting drafted the Seville Statement on Violence, subsequently adopted by UNESCO and endorsed by several scientific organizations. The statement claimed to "challenge a number of alleged biological findings that have been used, even by some in our disciplines, to justify violence and war":
"It is scientifically incorrect to say that we have inherited a tendency to make war from our animal ancestors."
"It is scientifically incorrect to say that war or any other violent behavior is genetically programmed into our human nature."
"It is scientifically incorrect to say that in the course of human evolution there has been a selection for aggressive behavior more than for other kinds of behavior."
"It is scientifically incorrect to say that humans have a 'violent brain.'"
"It is scientifically incorrect to say that war is caused by 'instinct' or any single motivation. . . . We conclude that biology does not condemn humanity to war, and that humanity can be freed from the bondage of biological pessimism and empowered with confidence to undertake the transformative tasks needed in the International Year of Peace and in the years to come."
What moral certainty could have incited these scholars to doctor quotations, censor ideas, attack the ideas' proponents ad hominem, smear them with unwarranted associations to repugnant political movements, and mobilize powerful institutions to legislate what is correct and incorrect? The certainty comes from an opposition to three putative implications of an innate human nature.
First, if the mind has an innate structure, different people (or different classes, sexes, and races) could have different innate structures. That would justify discrimination and oppression.
Second, if obnoxious behaviors like aggression, war, rape, clannishness, and the pursuit of status and wealth are innate, that would make them "natural" and hence good. And even if they are deemed objectionable, they are in the genes and cannot be changed, so attempts at social reform are futile.
Third, if behavior is caused by the genes, then individuals cannot be held responsible for their actions. If the rapist is following a biological imperative to spread his genes, it's not his fault.
Aside perhaps from a few cynical defense lawyers and a lunatic fringe who are unlikely to read manifestos in the New York Review of Books, no one has actually drawn these mad conclusions. Rather, they are thought to be extrapolations that the untutored masses might draw, so the dangerous ideas must themselves be suppressed. In fact, the problem with the three arguments is not that the conclusions are so abhorrent that no one should be allowed near the top of the slippery slope that leads to them. The problem is that there is no such slope; the arguments are non sequiturs. To expose them, one need only examine the logic of the theories and separate the scientific from the moral issues.
My point is not that scientists should pursue the truth in their ivory tower, undistracted by moral and political thoughts. Every human act involving another living being is both the subject matter of psychology and the subject matter of moral philosophy, and both are important. But they are not the same thing. The debate over human nature has been muddied by an intellectual laziness, an unwillingness to make moral arguments when moral issues come up. Rather than reasoning from principles of rights and values, the tendency has been to buy an off-the-shelf moral package (generally New Left or Marxist) or to lobby for a feel-good picture of human nature that would spare us from having to argue moral issues at all.
The moral equation in most discussions of human nature is simple: innate equals right-wing equals bad. Now, many hereditarian movements have been right-wing and bad, such as eugenics, forced sterilization, genocide, discrimination along racial, ethnic, and sexual lines, and the justification of economic and social castes. The Standard Social Science Model, to its credit, has provided some of the grounds that thoughtful social critics have used to undermine these practices.
……………….
The confusion of scientific psychology with moral and political goals, and the resulting pressure to believe in a structureless mind, have rippled perniciously through the academy and modern intellectual discourse. Many of us have been puzzled by the takeover of humanities departments by the doctrines of postmodernism, poststructuralism, and deconstructionism, according to which objectivity is impossible, meaning is self-contradictory and reality is socially constructed. The motives become clearer when we consider typical statements like "Human beings have constructed and used gender—human beings can deconstruct and stop using gender," and "The heterosexual/homosexual binary is not in nature, but is socially constructed, and therefore deconstructable." Reality is denied to categories, knowledge, and the world itself so that reality can be denied to stereotypes of gender, race, and sexual orientation. The doctrine is basically a convoluted way of getting to the conclusion that oppression of women, gays, and minorities is bad. And the dichotomy between "in nature" and "socially constructed" shows a poverty of the imagination, because it omits a third alternative: that some categories are products of a complex mind designed to mesh with what is in nature.
Mainstream social critics, too, can state any absurdity if it fits the Standard Social Science Model. Little boys are encouraged to argue and fight. Children learn to associate sweets with pleasure because parents use sweets as a reward for eating spinach. Teenagers compete in looks and dress because they follow the example set by spelling bees and award ceremonies. Men are socialized into believing that the goal of sex is an orgasm. Eighty-year-old women are considered less physically attractive than twenty-year-olds because our phallic culture has turned the young girl into the cult object of desire. It's not just that there is no evidence for these astonishing claims, but it is hard to credit that the authors, deep down, believe them themselves. These kinds of claims are uttered without concern for whether they are true; they are part of the secular catechism of our age.
Thursday, February 28, 2008
What am I doing here?
Murmurs in the Cathedral
The Times Literary Supplement, September 29-October 5, 1989.
[review of] Roger Penrose, The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford Univ. Press, 1989.
The idea that a computer could be conscious--or equivalently, that human consciousness is the effect of some complex computation mechanically performed by our brains--strikes some scientists and philosophers as a beautiful idea. They find it initially surprising and unsettling, as all beautiful ideas are, but the inevitable culmination of the scientific advances that have gradually demystified and unified the material world. The ideologues of Artificial Intelligence (AI) have been its most articulate supporters. To others, this idea is deeply repellent: philistine, reductionistic (in some bad sense), as incredible as it is offensive. John Searle's attack on "strong AI" is the best known expression of this view, but others in the same camp, liking Searle's destination better than his route, would dearly love to see a principled, scientific argument showing that strong AI is impossible. Roger Penrose has set out to provide just such an argument.
It is a huge project. In order to build his case, Penrose must lead the reader through detailed discussions of many topics in mathematics (Turing machines and computability theory, complex numbers, the Mandelbrot set, Gödel's theorem, recursive function theory, complexity theory, Platonism versus intuitionism), classical Einsteinian physics and quantum physics, cosmology, and, of course, neuroscience. Most of these topics have been given excellent popular presentations in recent years--in Hofstadter's Gödel Escher Bach (Basic Books, 1979), Hawking's A Brief History of Time (Bantam, 1988), Gleick's Chaos: Making a New Science (Penguin, 1987)--but Penrose believes that he must go over this material again in his own way, digging deeper, explaining in more detail. The result is bracing reading, to say the least, and the topics for hundreds of pages on end apparently have nothing to do with the mind at all.
The inevitable first impression, then, is that the book is the ultimate academic shaggy dog story, a tale whose fascinating digressions outweigh the punch line by a large factor. Why does Penrose do it? Is there no swifter, more accessible route to his conclusion? No. Penrose sees that he has no hope of overthrowing the case for strong AI unless he can dislodge one of the most imperturbable objects in the intellectual universe: something I will call the Cathedral of Science.
The Cathedral of Science is the highly structured, beautifully articulated amalgam of "what everyone should know" about science, crowned by the inscrutable but talismanic formula "E = mc²." Its facade, visible to the general public, is popular lore: familiar and decorative items of information and misinformation about the Galilean physics of everyday objects and cartoon-style renderings of black holes and language-using chimps, pockmarked with such tidbits as "you only use five percent of your brain" and "no two snowflakes are alike." Items in this layer are easily replaced or swept away, but underneath it lies the scientists' (and philosophers') much denser version of the same material, created largely of the remembered oversimplifications of university-level textbooks, supplemented by New Scientist and Scientific American articles, and such other high-quality interdisciplinary communications as the books just mentioned. This material forms the communally shared understanding on which everyone relies while working on their more particular specialities. Aside from a few brilliant polymaths, the neuroanatomist, the biochemist, the experimental psychologist and the philosopher of mind have roughly the same workaday understanding of quantum mechanics, entropy, and computability, for instance, and this understanding gives them sufficient reason to believe that they needn't understand these topics any better in order to do their work. The Cathedral's architecture is the familiar hierarchy of mechanistic materialism: living bodies are self-preserving, self-replicating machines made out of cells made out of molecules made out of atoms--with some weird quantum physics isolated (one hopes) in the cellar. No church has ever enjoyed a more entrenched or authoritative orthodoxy, an empire that expands with daily discoveries and protects itself from swift change by the distributed, mutual myopia of its adherents. Its heresies (ESP, creationism, vitalism) are readily identified and deplored in unison; its conservatism is hailed by almost all who participate in it, and for good reason.
It is Penrose's immense task to restructure our vision of the Cathedral of Science, shaking our conviction that it is largely settled and safe and familiar (except, of course, for that baffling business about quantum physics). His task is made all the more intricate by his recognition that most of the Cathedral is sound. He is a revolutionary, but no bomb-throwing nihilist. Like Archimedes, he needs a place to stand if he is to move the world, so he introduces a new taxonomy of theories in science, SUPERB, USEFUL and TENTATIVE, to distinguish what is inviolable and must somehow survive any revolution, from what might be replaced or abandoned. Euclidean geometry, Galilean dynamics, Maxwell's equations, Einstein's special and general relativity theories, quantum physics and quantum electrodynamics are all SUPERB, but even these must be put into a new alignment if we follow Penrose.
Briefly, here is the path of Penrose's proposed revolution. If the brain is a computer, its powers are circumscribed by the limits on all computation uncovered by Turing and Gödel. Turing showed that each possible mechanical computation can be precisely specified by a recipe consisting of a sequence of dead-simple mechanical steps. Such a recipe is called an algorithm; all computer programmes are algorithms. Gödel's Theorem showed that no algorithm for proving mathematical truths can prove them all. Doesn't Gödel's Theorem establish that there are tasks "we" (mathematicians, in any event) can perform that are beyond the capabilities of any machine? The idea that the human race can be saved from machinehood by riding on the coattails of those clever enough to understand Gödel's Theorem is well-explored territory, and the received wisdom is that all the previous arguments for this conclusion have been roundly defeated, so if Penrose is to get his needed premise here, he must find a new wrinkle. The standard Cathedral vision is that Gödel's Theorem proves that there is just some single arcane truth of number theory (a machine's Gödel sentence) that is beyond all mechanical computation (by that machine), and Penrose's detailed exposition of the wealth of non-computable but knowable results replaces that vision with an appreciation of the depth and importance of the realm of non-computable mathematics, certainly a domain that is eminently accessible to human mathematicians relying on "insight". Moreover, the results of complexity theory show that there are many officially computable results that are not practically computable--the algorithms that are guaranteed to yield the answer would take billions of years to run on the fastest conceivable computers. How, then, do "we" arrive at solutions to these problems? Penrose proposes that there is a "theoretical possibility that a quantum physical device may be able to improve on a Turing machine." (p.146)
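Since the argument leans on Turing and Gödel here, a minimal sketch may help; it is my illustration, not Dennett's or Penrose's. This is Turing's diagonal argument, the template for the non-computability results at issue: assume a halting decider exists and derive a contradiction. The stub `halts` is hypothetical by construction; no correct implementation can exist.

```python
def halts(program, data):
    """Hypothetical halting decider, assumed only for contradiction.
    Turing's theorem says no correct implementation of this can exist."""
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever `halts` predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:       # predicted to halt? then loop forever
            pass
    # predicted to loop forever? then halt immediately

# diagonal(diagonal) would halt exactly when halts(diagonal, diagonal)
# says it does not -- a contradiction, so no such `halts` can be written.
```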
This leads then to a solid review of classical (non-quantum, but relativistic) physics, packed with novel perspectives and designed to impress us that "we should not be too complacent that the pictures that we have formed at any one time are not to be overturned by some later and deeper view." (p.217) With our minds thus stretched open, we plunge into "quantum magic and quantum mystery," and are led yet one more time through the two-slit experiment, Heisenberg's Uncertainty Principle, the collapse of the wavefunction, the Einstein Podolsky Rosen paradox and Schrödinger's notorious cat--in more detail than I have heretofore encountered in a popular book. Here the upshot is more radical: Penrose doubts that the puzzles of quantum theory and its relation to classical theory will succumb to any tidy, local resolution, and, like Einstein, he resists the standard "anti-realist" interpretations favored by most physicists. After a further chapter laying groundwork in cosmology on the flow of time and the curious status of the second law of thermodynamics, we are ready for the suggestion that if a unified theory is to be found, it will have to be a theory of "quantum gravity," requiring "a change in the very framework of the quantum theory." (p.348) At this time Penrose can present only speculations about the "germ" of such a theory, which is not yet, in his own terms, even TENTATIVE, so he has to settle for some gestures in the direction he feels the revolution will take.
Finally, then, what does all this have to do with minds and brains? He returns to the topics of the early chapters, and resumes the argument that mathematical insight (in particular) is non-algorithmic. Here is where consciousness comes in. The function of consciousness, in Penrose's view, is to leapfrog the limits of (practical) computability by conjuring up appropriate judgments in circumstances in which "enough information is in principle available for the relevant judgment to be made, but the process of formulating the appropriate judgment, by extracting what is needed from the morass of data, may be something for which no clear algorithmic process exists--or even where there is one, it may not be a practical one." (p.412) The way a "quantum computer" would achieve this apparent magic would be by being a sort of super-parallel computer, using superposition of computational states to perform a near-instantaneous global search through an otherwise untraversable space of possibilities, with the solution being output by the collapse of the wavefunction. This would not be old-fashioned Cartesian dualism, but radically new-fashioned (revolutionized) materialism. Several features of what is currently known or believed about connectivity of neural nets in the human brain suggest to him that the brain could in principle be such a quantum computer.
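Penrose was writing before any such algorithm had been exhibited; the nearest later-discovered example of the flavor of speedup he gestures at is Grover's quantum search, which finds one marked item among N possibilities in roughly √N steps rather than N. The sketch below (mine, not Penrose's; plain Python with numpy, simulating the amplitudes classically) shows the mechanism: an "oracle" flips the sign of the target's amplitude and a "diffusion" step amplifies it. This falls well short of the near-instantaneous global search Penrose imagines, but it shows how superposition can beat brute enumeration.

```python
import numpy as np

N, target = 16, 11                       # search space of N items, one marked
psi = np.ones(N) / np.sqrt(N)            # uniform superposition over all items

oracle = np.eye(N)
oracle[target, target] = -1              # flip the sign of the marked amplitude
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

for _ in range(int(np.pi / 4 * np.sqrt(N))):         # ~(pi/4)*sqrt(N) rounds
    psi = diffusion @ (oracle @ psi)

probs = psi ** 2
print(int(np.argmax(probs)), float(probs[target]))   # finds 11 with prob ~0.96
```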
One of the defining doctrines of strong AI is the possibility in principle of teleportation--transporting a person from A to B by transmitting a complete, atom-for-atom description of the person's body (and brain) and using the description to construct a duplicate at the destination. Is teleportation murder-and-artifice or a means of transportation? Popular science fiction has for years softened us up for the latter vision, but Penrose is among those who find this idea simply incredible, and one of the cardinal virtues of the quantum computer idea, in his eyes, is that it would rule out perfect duplication of the total quantum state of a brain on what he argues are relatively secure quantum-physical principles.
Where, though, would Penrose have quantum physics draw the line? In principle, could a geranium in a pot be teleported? (When as a child I first heard of "sending flowers by wire" I assumed that they were teleported, and was deeply disappointed to learn the truth.) We already teleport documents (by FAX) and CADCAM (computer-aided-design/computer-aided-manufacture) would readily permit us to teleport automobile parts. Is it all living things, or only all complicated living things, or only all human brains that quantum mechanics would prevent from being thus teleported? Are we the only things in the universe that require quantum computers for their persisting identity?
For those who share Penrose's suspicion about human teleportation, this is one of the comforting implications of his theory, but the price to be paid (in terms of revision of the Cathedral of Science) is high. Among the likely casualties, according to a tentative and impressionistic argument of Penrose, will be the standard neo-Darwinian theory of evolution by natural selection. He does not see how algorithms for mathematical judgment could evolve, and "To my way of thinking, there is still something mysterious about evolution, with its apparent 'groping' towards some future purpose. Things at least seem to organize themselves somewhat better than they 'ought' to, just on the basis of blind-chance evolution and natural selection." (p.416) Creationists are not alone in harboring deep misgivings about the standard view; there are biologists who dare to wonder about the one leap of faith still required by the standard view: has there really been enough time for evolution to do all the designing by blind methods? Penrose shares those doubts, but provides no new argument to support them.
Might Penrose be right about all this? I suppose he might; he is an extraordinarily inventive and undogmatic thinker with an awesome mastery of many fields. If anyone could make such a discovery, it would have to be someone like Penrose. But whether he is right or not, his strenuous efforts to combat strong AI by unsettling the Cathedral of Science show, more clearly than any of the manifestos for the other side, that strong AI is a straightforward implication of orthodoxy. We cannot simply add a new wing to the Cathedral, enshrining an alternative theory of the mind; if strong AI is mistaken, the whole structure of science must be rebuilt from the ground up. This will inevitably lead some readers to reason as follows: If an opponent as brilliant and dedicated as Penrose discovers he has to go to such lengths to build a presentable case against strong AI, and can come up with nothing stronger than a speculative suggestion about quantum gravity, strong AI must be more secure than I had thought!
The argument Penrose unfolds has more facets than my summary can report, and it is unlikely that such an enterprise would succumb to a single, crashing oversight on the part of its creator--that the argument could be "refuted" by any simple objection. So I am reluctant to credit my observation that Penrose seems to make a fairly elementary error right at the beginning, and at any rate fails to notice or rebut what seems to me to be an obvious objection. Recall that the burden of the first part of the book is to establish that minds are not "algorithmic"--that there is something special that minds can do that cannot be done by any algorithm (i.e., computer program in the standard, Turing-machine sense). What minds can do, Penrose claims, is see or judge that certain mathematical propositions are true by "insight" rather than mechanical proof. And Penrose then goes to some length to argue that there could be no algorithm, or at any rate no practical algorithm, for insight.
But this ignores a possibility--an independently plausible possibility--that can be made obvious by a parallel argument. Chess is a finite game (since there are rules for terminating go-nowhere games as draws), so in principle there is an algorithm for either checkmate or a draw, one that follows the brute force procedure of tracing out the immense but finite decision tree for all possible games. This is surely not a practical algorithm, since the tree's branches outnumber the atoms in the universe. Probably there is no practical algorithm for checkmate. And yet programs--algorithms--that achieve checkmate with very impressive reliability in very short periods of time are abundant. The best of them will achieve checkmate almost always against almost any opponent, and the "almost" is sinking fast. You could safely bet your life, for instance, that the best of these programs would always beat me. But still there is no logical guarantee that the program will achieve checkmate, for it is not an algorithm for checkmate, but only an algorithm for playing legal chess--one of the many algorithms for playing legal chess that happen to do well in the most demanding environments. The following argument, then, is simply fallacious:
(1) X is superbly capable of achieving checkmate.
(2) There is no (practical) algorithm guaranteed to achieve checkmate.
therefore
(3) X does not owe its power to achieve checkmate to an algorithm.
So even if mathematicians are superb recognizers of mathematical truth, and even if there is no algorithm, practical or otherwise, for recognizing mathematical truth, it does not follow that the power of mathematicians to recognize mathematical truth is not entirely explicable in terms of their brains executing an algorithm. Not an algorithm for intuiting mathematical truth--we can suppose that Penrose has proved that there could be no such thing. What would the algorithm be for, then? Most plausibly it would be an algorithm--one of very many--for trying to stay alive, an algorithm that, by an extraordinarily convoluted and indirect generation of byproducts, "happened" to be a superb (but not foolproof) recognizer of friends, enemies, food, shelter, harbingers of spring, good arguments--and mathematical truths!
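To make the chess moral concrete, here is a minimal sketch--my illustration, not anything in Penrose's book or any real chess engine--of the kind of heuristic, depth-limited search such programs use, with tic-tac-toe standing in for chess. It plays only legal moves and finds winning moves reliably, yet nothing in it constitutes a proof that it must achieve checkmate:

    from typing import List, Optional

    # The eight winning lines of tic-tac-toe.
    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]

    def winner(board: List[str]) -> Optional[str]:
        for a, b, c in WIN_LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def evaluate(board: List[str], me: str) -> int:
        # Crude heuristic: lines still open to me minus lines still open
        # to the opponent. It considers less than all the possibilities,
        # and therein lies its vulnerability-in-principle.
        opp = "O" if me == "X" else "X"
        score = 0
        for line in WIN_LINES:
            cells = {board[i] for i in line}
            if opp not in cells:
                score += 1
            if me not in cells:
                score -= 1
        return score

    def search(board: List[str], me: str, to_move: str, depth: int) -> int:
        # Depth-limited minimax: instead of tracing the immense decision
        # tree to its end (the impractical brute-force algorithm), stop
        # after `depth` plies and fall back on the heuristic guess.
        w = winner(board)
        if w is not None:
            return 100 if w == me else -100
        moves = [i for i, c in enumerate(board) if c == "."]
        if not moves:
            return 0  # draw
        if depth == 0:
            return evaluate(board, me)
        nxt = "O" if to_move == "X" else "X"
        scores = []
        for m in moves:
            board[m] = to_move
            scores.append(search(board, me, nxt, depth - 1))
            board[m] = "."
        return max(scores) if to_move == me else min(scores)

    def best_move(board: List[str], me: str, depth: int = 3) -> int:
        opp = "O" if me == "X" else "X"
        def score(m: int) -> int:
            board[m] = me
            s = search(board, me, opp, depth)
            board[m] = "."
            return s
        legal = [i for i, c in enumerate(board) if c == "."]
        return max(legal, key=score)

    # X to move: the program picks a move that forces a win (it plays
    # square 1, forking threats at 7 and 8), without anything resembling
    # a proof about full game trees.
    print(best_move(list("X.O.X...."), "X"))

The argument schema above fails for exactly this reason: the program's reliability is an empirical, engineering fact about how it performs, not a theorem about what it guarantees.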
Chess programs, like all "heuristic" algorithms, are designed to take chances, to consider less than all the possibilities, and therein lies their vulnerability-in-principle. There are many ways of taking chances, utilizing randomness (or just chaos or pseudo-randomness), and the process can be vastly sped up by looking at many possibilities (and taking many chances) at once, "in parallel". What are the limits on the robustness of vulnerable-in-principle probabilistic algorithms running on a highly parallel architecture such as the human brain? Penrose neglects to provide any argument to show what those limits are, which is surprising, since this is where most of the attention in artificial intelligence is focused today. Note that it is not a question of what the in-principle limits of algorithms are; those are simply irrelevant in a biological setting. To put it provocatively, an algorithm may "happen" to achieve something it cannot be advertised as achieving, and it may "happen" to achieve this 999 times out of a thousand, in jig time. This prowess would fall outside its official limits (since you cannot prove, mathematically, that it will not run forever without an answer), but it would be prowess you could bet your life on. Mother Nature's creatures do it every day.
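The flavor of bet-your-life reliability without proof is easiest to see in a probabilistic algorithm. Here is a minimal sketch--again my illustration, nothing from Penrose--of the Miller-Rabin primality test: any single round can be fooled by a composite number, but each independent round errs with probability at most 1/4, so twenty rounds together are wrong less often than once in 4^20 trials.

    import random

    def is_probably_prime(n: int, rounds: int = 20) -> bool:
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2^s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)  # a random "witness" candidate
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a proves n composite, with certainty
        return True  # no witness found: prime, with overwhelming probability

    print(is_probably_prime(2**61 - 1))  # True (a Mersenne prime)
    print(is_probably_prime(2**61 + 1))  # False (divisible by 3)

Nothing here proves the "True" verdicts; they are merely the kind of verdicts one could safely bet one's life on--which is Mother Nature's standard of success, not mathematics'.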
I may well have missed a crucial ingredient in Penrose's argument that somehow obviates this criticism, but it is disconcerting that he does not even address the issue, and often writes as if an algorithm could have only the powers it could be proven mathematically to have in the worst case. It will be interesting to see how he would repair this omission. In the meantime I would say that whether or not the Penrose revolution in physics is coming, he has not yet shown the need for the revolution in order to explain facts of human cognitive competence.
I have left no doubt about the difficulty of this book, and I must balance that impression by noting that it is nevertheless a pedagogical tour de force, with some dazzling new ways of illuminating the central themes of science. I was struck as never before by the gleeful staircase of human artifices--diagrams, mappings, formalisms--piled one on top of the other over the years, permitting our species so much as to entertain such audacious hypotheses about the world we live in. His discussion of phase spaces, for instance, and his development of the rationale for the second law of thermodynamics, are particularly refreshing. His exemplary candor, particularly in the chapters on cosmology and quantum physics, provides the uninitiated reader with a vivid experience of the way gut intuitions and aesthetic reactions call the tune in science until someone figures out a conversation-stopping proof, mathematical or experimental.
And along the way he makes important points that have been overlooked by the believers in strong AI, even if they can be incorporated into it. For instance, he closes the book with a speculation about time that I believe is exactly right:
I suggest that we may actually be going badly wrong when we apply the usual physical rules for time when we consider consciousness! . . . My guess is that there is something illusory here . . . and the time of our perceptions does not 'really' flow in quite the linear forward-moving way that we perceive it to flow (whatever that might mean!). The temporal ordering that we 'appear' to perceive is, I am claiming, something that we impose upon our perceptions in order to make sense of them in relation to the uniform forward time-progression of an external physical reality. (p.443-4)
This is, in my opinion, the key to removing the last, harmful vestiges of Cartesian thinking from our standard vision of how consciousness relates to the brain, but you don't need quantum magic or quantum gravity to get there. A clear statement of the point has been given by Douglas Snyder,[1] and I myself have more recently been developing the case for this claim from an entirely conservative--indeed an "engineering"--base, as the best way for Mother Nature to handle the synchronization problems that arise in a brain that must cope with events that sometimes occur on a time scale faster than its own internal transmissions.[2]
A philosophy professor once said to his class, "I want you to believe the things I tell you, but not because you believe me; I want you, rather, to believe them because you yourself see that they must be true." This is Penrose's ideal, and indeed it should be every teacher's ideal, but we all fall short; the semester (or life) is too short, and at some point we fall back on "Take it from me: that idea just doesn't work." Penrose is positively heroic in his attempts to live by this standard. The reader is warned, after weathering over two hundred pages on the lambda calculus, the class of NP-complete problems, Maxwell's equations, the Lorentz equation, special and general relativity, and much more, that in the next chapter, on quantum mechanics, things are going to get "a bit technical. In my descriptions I have tried not to cheat, and we shall have to work a little harder than otherwise." (p.227) But although matters do then get still more technical, uninitiated readers cannot "work harder"--because we simply do not know the rules. If we are to "see for ourselves" the truths of quantum physics, we must be active and skeptical, but the world of quantum physics is so alien that we can no longer trust our untutored judgments of what counts as a telling objection and what is merely a misapplied maxim or analogy drawn from more familiar territory. I suspect that nothing short of extended immersion in the actual use of the mathematics to solve particular problems can give one a confident sense of how this game is played, and why the rules are what they are. We are assured by Penrose that various hard-to-swallow options make sense while others are just not on, and we have to take his word for it. His brilliant exposition up to this point gives us ample reason to respect his obiter dicta once they start to flow, but, contrary to his best intentions, his readers at this point must cease being participants and start being spectators.
This raises a perplexity about Penrose's intentions in writing this book. He repeatedly acknowledges that his colleagues, who already understand the difficult materials he is teaching us much better than we ever will, do not yet accept his idiosyncratic vision. But if he can't convince them, pulling out all the stops, what good will it do if he convinces us with a relatively elementary version? What then? Are we supposed to join in a Children's Crusade to persuade his colleagues to get in step? This cannot be his intention.
I suspect he has a more subtle strategy in mind. When experts talk to experts, they are careful to err on the side of underexplaining the fundamentals. One risks insulting a fellow expert if one spells out very basic facts. There is really no socially acceptable way for Penrose to sit his colleagues down and lecture to them about their oversimplified and complacent attitudes about fundamentals. So perhaps educated laypeople are only the presumptive audience for this book, hostages to whom he can seem to be addressing his remarks, so that the experts--his real target audience--can listen in, from the side, without risk of embarrassment. I think this is a wonderful strategy, perhaps the only way of getting experts who are talking past each other to refresh their mutual understanding of the fundamentals. (It is especially valuable in philosophy.) It may leave the non-experts in the role of spectators, but at least it gives them ringside seats.
Endnotes
1. "On the Time of a Conscious Peripheral Sensation," Journal of Theoretical Biology, 1988, 130, 253-254.
2.In "The Autocerebroscope," at a symposium in memory of Heinz Pagels, The Reality Club, Alliance Française, New York City, February 1, 1989; "Temporal Anomalies and the Architecture of Consciousness," Cognitive Science Colloquium, Indiana University, February 28, and the Gildea Lecture at Washington University School of Medicine, May 2, 1989.
