My latest in the International New York Times.
[This is an excerpt from a forthcoming essay-review of several books on animal extinction.]
There is a great die-off under way, one that may justly be compared to the disappearance of dinosaurs at the end of the Cretaceous, or the sudden downfall of so many great mammals at the beginning of the Holocene. But how far can such a comparison really take us in assessing the present moment?
The hard data tell us that what is happening to animals right now is part of the same broad historical process that has swept up humans: we are all being absorbed into what was once comfortably called ‘civilization’, and in the process we are being homogenized, subjected to uniform standards, domesticated. A curiosity that might help to drive this home: at present, the total biomass of domestic mammals raised for food vastly exceeds the biomass of all mammalian wildlife on the planet (it also exceeds that of the human species itself). This was certainly not the case 10,000 or so years ago, at the dawn of the age of pastoralism.
It is hard to know where exactly, or even inexactly, to place the boundary between prehistory and history. Indeed, some authors argue that the very idea of prehistory is a sort of artificial buffer zone set up to protect properly human society from the infinite expanse of mere nature that preceded us. But if we must set up a boundary somewhere, it would be difficult to do better than to choose the moment when human beings began to dominate and control other large mammals for their own, human ends.
We tend, still today, to think about history as by definition human history. Yet a suitably wide-focused perspective reveals that nothing in the course of human affairs makes complete sense without some account of the non-human animal actors who show up as well: history has in fact been a question of human-animal interaction all along. Cherchez la vache is how E. E. Evans-Pritchard claimed the social life of the cattle-herding Nuer of South Sudan might best be summed up --‘look for the cow’-- but in fact one could probably, without much stretching, extend this principle to human society in general. The cattle who now outweigh us are a mirror of our political and economic crisis, just as cattle were once a mirror of the sociocosmic harmony that characterized Nuer life.
Most of history, to the extent that it is understood narrowly as a human affair, has consisted in a patchwork of interconnected, but still largely autonomous, human societies; or at least they were autonomous in their self-conception, even if in fact they were always intricately interconnected by trade, war, migration. In the 18th century, a period in Europe sometimes called the ‘Enlightenment’, thinkers such as Immanuel Kant had come to understand history precisely as the process whereby European civilization radiates out and progressively engulfs the Arctic, the Americas, and the South Sea islands: progressively bringing them, that is, into the fold of history. And however we define ‘history’, it is certain at least that these areas were enfolded into something new and unprecedented. When Kant was writing, the Inuit, for example, lived more or less independently, as hunters and foragers, in a mode of life that was directly adapted to and integrated with their environment. Today, the Inuit live under the administration of a Euro-American colonial state, and many depend for their food on transport of mass-produced, processed commodities from the urban, industrial south.
What is often overlooked in the familiar summaries of this process --overlooked, perhaps, for fear of appearing disrespectful by running indigenous peoples and wild animals together-- is that it has not been limited to a single species. Non-human animals are swept up in exactly the same frenzy: either join up with what is increasingly the only game in town, and you will grow fat, and homogeneous, and your very body will be instrumentalized for economic ends; or die out. Mammalian biodiversity is dropping, while the biomass of cattle is skyrocketing. Cattle, which is to say the bovine portion of modern global civilization, are even driving indigenous humans out of their habitats, most notably in the Amazon, either to assimilate into the urbanized proletariat, or, likewise, to die off.
We do not need to exaggerate the analogy between human cultures on the one hand and biological species on the other in order to appreciate the unitary nature of the process that is under way. History has always been the history of humans within their environments, and it is crucial to understand history in this trans-species way in order to place the recent idea of the ‘anthropocene’ in proper perspective.
It may seem a terribly presumptuous thing to propose that the principal characteristic of the present period of the Cenozoic era is the presence of human beings on the planet. After all, these are divisions in a geological time scale, and the rocks go fairly deep, and hide from even the most ill-thought-out plans of men. But in truth all the epochs and eons, going back to the boundary of the Archean 2.5 billion years ago, have been named according to their representative life forms, and no life form represents the present better than Homo sapiens.
The supposed presumptuousness of acknowledging this role fails to take into account that we literally couldn’t have done it without the animals. We brought the world to its present state, but we did so by putting non-human nature to work for us. A crucial part of this has been the exploitation of, and occasional cooperation with, animals, and it is not surprising that as we appear to be approaching some sort of climactic finish, the animals that remain are now principally the ones that have been incorporated into the process in some way or other: the ones that are regulated, conserved, bred, consumed, and in so many other ways made to play a role in the global world system.
By the end of the 17th century, attacks on the Cartesian philosophy were often rather unoriginal and derivative, philosophically speaking. Yet many continued to reflect new preoccupations of philosophy in the post-Cartesian period in imaginative ways. A good example of this is the Jesuit Gabriel Daniel's 1690 satire, Voiage du Monde de Descartes, which envisions an interstellar journey through Descartes's World, now conceived not as a mere thought experiment, but as a sort of science-fiction construction, as a parallel possible world. Such a construal, as I would like to argue, though not so significant with respect to the history of critiques of Cartesianism, nonetheless reveals important developments in the deployment of the philosophical concept of 'world'.
It will be useful perhaps to begin with a question that might seem too obvious to need asking, but that in the 17th century was at the center of a number of fundamental philosophical debates: what, in general, is a world?
It is a commonplace in history-of-science scholarship since Alexandre Koyré to acknowledge that the beginning of the modern period witnessed a transition between two very different cosmological models, from the closed world to the infinite universe, as the title of Koyré’s most influential book put it. But beyond its closedness, there are a few other important features to note in the most familiar cosmological models inherited from antiquity.
Aristotle, like many of his contemporaries, had found it useful to divide the world into two basic regions: the superlunar and the sublunar. The first of these is the home of the celestial bodies. They are immortal and, relatedly, unmixed. They are entirely composed of a single element, and therefore cannot be corrupted, cannot cease to be, through a separating out of their constituents. They must of necessity move, since only the unmoved mover is free of all change, but their motion duplicates the perfect quiescence of the supreme being as much as possible: it is perfectly circular, and so the immortal celestial bodies always come back to the very points in the cosmos that their motion has already traced. Sublunar bodies, for their part, consist in various mixtures of earth, air, fire, and water, and for this reason are destined sooner or later to come apart, to cease to be, when the elements that constitute them go their separate ways. Some sublunar beings, which we today would call ‘biological’, in turn imitate the circular motion of the celestial spheres --in much the same way their circular motion imitates the stillness of the unmoved mover-- through what Aristotle calls ‘cyclical’ motion (as distinct from circular), which is, as he explains, a ‘cycling back upon oneself’ in sexual reproduction, which wins for the individual mortal natural being a share of eternity ‘in kind if not in number’.
So much for the sublunar and the superlunar. What about the moon itself? What side is it on? If it is on neither, then what is the significance of such a fundamental ontological divide? In this paper I would like to look at a few ways in which the moon, as a boundary entity of crucial significance, has since antiquity played a fundamental role in thought experiments that have helped natural philosophers to come to a picture of nature as a whole, to arrive at least partially at what Thomas Nagel would call a ‘view from nowhere’. This is particularly clear in the early modern period, when the Aristotelian two-region picture of the cosmos is rejected in favor of one on which the same laws hold everywhere in the same way, and the study of projectiles on the earth’s surface may be carried out in exactly the same way as the study of planetary orbits.
In On the Parts of Animals, Aristotle makes a special plea, evidently intended for unnamed critics, for the idea that there is something worthwhile about the study of perishable living creatures. He acknowledges that the celestial bodies are more divine, in large part because they are immortal (and also, by the way, intelligent), but, he adds, they are also very far away, and therefore hard to study. Living creatures are less divine, but they are also close at hand. It would be a mistake, moreover, to exclude them from the realm of the divine entirely simply in view of their perishability, for here too, as Aristotle says, there is something beautiful and wonderful. He cites in this connection the story of Heraclitus who, when caught warming himself at the stove by distinguished visitors, protests: “Here too dwell gods.” What Aristotle is doing here, first and foremost, is defending, in his way, the integrity and the legitimacy of the study of living beings.
‘The world’, for Aristotle, is a system in which certain events happen in one region, which in turn trickle down and cause homologous yet different events in another region. Astronomy and ‘biology’ (to use a blatant anachronism) are two domains of a unified science of nature. In his own, directly inverse, way, Descartes too would aim to present biology as a regional instance of a general science of nature: this aim is precisely what is at stake in Descartes’s expressed desire to explain the formation of the fetus “in the same manner as the rest,” that is, by appeal to the same laws that also explain the motion of projectiles and the orbits of planets. Here, plainly, the unity of the superlunar and the sublunar is no longer a result of some cosmic trickle-down from the former to the latter, but rather of the fact that for Descartes no region is special, no region has causal or explanatory priority over any other.
Aristotle’s world is closed, as Koyré emphasized, but it is also hierarchically structured. In the modern period, we do indeed witness a sudden and tremendous expansion of the world, an ‘infinitization’, but also a destructuring or disassembly. And here, there were different available models for reconceiving the structure of the cosmos. The most significant division, perhaps, was between those who imagined the new infinite world as, so to speak, an ‘infinite extension of the same’, and those who envisioned an infinite fracturing or reduplication. That is, one could go with Descartes, and maintain that the world is simply extensus indefinite, that it consists in res extensa, in the same way it does right here, however far out from here one may travel; or one could take up the alternative view, espoused by Henry More after his initial exposure to the philosophy of Descartes, according to which there are infinite worlds, or infinite centers of well-structured kosmoi. Thus More reflects in his Democritus Platonissans of 1646 on the ‘fair glistering lights in heaven’, that:
If onely for this world they were intended
Nature would have adorn'd this azure Round
With better Art.
The evident disorder of the heavens is for More a reason to suppose that each star is the center of its own world, rather than having its raison d’être in relation to the earth, or indeed to life on earth.
There is, in the multiplication of worlds from More to Bernard de Fontenelle, who published his famous Conversations on the Plurality of Worlds in 1686, a deep ambiguity in the meaning of ‘world’. Are we to understand by a ‘many-worlds’ hypothesis that there are multiple, inaccessible, self-contained ‘realities’, so to speak; or are we rather to understand that there are simply multiple regional centers of one and the same reality? A clear illustration of this ambiguity (which extends in important senses even into David Lewis’s defense of modal realism) may be found, for example, in the Spanish-Inca political philosopher Garcilaso de la Vega’s Royal Commentaries of the Incas, first published in 1609, in which the author devotes the first chapter of the first book to denying the impious view that there are ‘many worlds’, and affirming that the ‘New World’ is so called only because it was discovered recently, not because it is in any sense a discrete or independent reality. “If there are any men who imagine that there are many worlds,” de la Vega writes, “there is no other response to offer them, unless they persist in their heretical belief until they are disabused of it in hell.”
We see in de la Vega that the problem of many worlds extended all the way from the cosmographical to the much more fine-grained questions of geography, and the Spanish-Inca author is firm in his objection to what might be called, in deference to Lewis, ‘many-worlds realism’. For him, the term ‘worlds’ offers nothing more than a poetic way of describing spatially distant but potentially traversable regions. We might say of Henry More that he takes just the opposite approach: for him, spatial distance, or at least a significant amount of spatial distance, is already sufficient for establishing the true, proper distinctness of one ‘world’ in relation to another. What is more, he even manages to extend his ‘many worlds’ realism to Descartes by eliding his own conception of ‘world’ with the French philosopher’s model of the world as consisting in infinite extension: “Nay,” More writes, “and that sublime and subtil Mechanick too, Des Chartes, though he seem to mince it must hold infinitude of worlds, or which is as harsh one infinite one. For what is his mundus indefinite extensus, but extensus infinite? Else it sounds onely infinitus quoad nos but simpliciter finitus.” Even this early, we see More’s characteristic attitude towards Descartes already in place: to the extent that More can claim common cause with Descartes, it is only by separating the latter’s claims from the concerns that motivated them. Prima facie, it is an odd thing to recruit Descartes for one’s case for the infinity of worlds, and the case for these worlds is not much strengthened by Descartes’s commitment to one indefinitely extended world.
Willy-nilly, then, by entering into the world business, Descartes becomes implicated in the problem of worlds in the plural. This implication continues long after More, and can perhaps be better understood by a brief consideration of Fontenelle’s famous work on the plurality of worlds. “The moon is a world like ours,” Fontenelle causes his protagonist to assert, “and to all appearance, inhabited.” In the Conversations, Fontenelle is evidently playing on the dual signification of the French notion of ‘monde’, as describing both the physical universe as well as human society, or, more particularly, a single, self-contained human society. Where there are intelligent creatures living together and constituting un monde for one another, it follows that this amounts to a distinct world. And if a part of the cosmos is made out of the same matter as the earth is, and is subject to the same laws, then it naturally follows that it will be inhabited by intelligent beings like us. There are multiple worlds not in any strong Lewisian sense of parallel untraversable realities, but simply in the sense that there are multiple mondes with, ordinarily, no traffic between them, even if in principle one could in fact move from one to another. And the existence of such multiple mondes is a direct and inevitable consequence of the fact that the entire cosmos is governed by the same basic laws, and can be explained, like the formation of the earthling fetus, “in the same manner as the rest.” The dismantling of the structured hierarchy of the ancients leads directly to extraterrestrials.
Or at least this is the conclusion to which many late 17th-century satirists and fantasists were drawn. There appears in fact to be a direct translation of the Cartesian doctrine of explanation “in the same manner as the rest,” into what would come to be known as ‘the Harlequin principle’: toujours et partout, c’est tout comme ici [‘always and everywhere, it’s the same as it is here’]. This is the catch-phrase exclaimed by a chorus of lunar characters in Anne Mauduit Nolant de Fatouville’s comedy, Arlequin, empereur dans la Lune, spoken in unison as a response to reports about the hypocrisy and vanity of social life on earth. In other words, wherever there is a monde, the same human (or humanoid) comedy will repeat itself. Later, in the New Essays concerning Human Understanding of 1704, G. W. Leibniz would adapt this catch-phrase as a pithy summary of his own theory of monads, according to which the world consists in an infinity of ultimately identical perceiving substances, which are individuated only in virtue of their perspective from within the order of reciprocal perception. From the hierarchically structured and closed world, we move through the world as indefinite extension of the same, a view crystallized in the formula partout comme ici, and from there to a metaphysical system grounded in the infinite repetition of perception, fractured, kaleidoscopically, into infinitely many points of view.
Leibniz does not mean to present his system of monads as science-fiction. He takes it as a true account of how the world really is, and this account, while seldom taken as true today, is considered worthy of study by the designers of undergraduate curricula in philosophy. And yet it is, as we have begun to discern here, in large part a sort of following-out of the implications of a way of thinking about the world, or about the plurality of worlds, that was already set in motion by Descartes. As Leibniz himself acknowledges, an important, if theoretically unsophisticated, voice was given to this way of thinking not only by philosophers, but also by fantasists such as Nolant de Fatouville (elsewhere in the New Essays, Leibniz also mentions Cyrano de Bergerac in the same connection). It is perhaps worth taking more seriously the role of science fiction, and of literary and satirical works more generally, in the uses to which the concept of 'world' was put throughout the course of early modern philosophy.
[I have been approached by a publisher with the idea of writing a semi-popular book on the history of the Philosopher as a persona and social actor. That is, what are the various self-conceptions philosophers have had throughout history? How have these transformed throughout different eras and in different philosophical cultures? What follows is an excerpt from a draft of my proposal.]
What is philosophy? One possible answer is that it is the intellectual project that disdains money. Or at least pretends to do so.
Such disdain forms a central part of the founding myth of Western philosophy as told by Plato in his relation of the trial and execution of his mentor Socrates. Here we learn that Socrates has been wrongly charged by the court at Athens on what may be reduced to two principal counts: that he teaches doctrines, and that he accepts money for this teaching. The denial of the first charge is important to the story we would like to tell here, too, but we will return to it soon enough. As to the second charge, Socrates protests: "As little foundation is there for the report that I am a teacher, and take money; that is no more true than the other. Although, if a man is able to teach, I honor him for being paid." Socrates relates of a certain teacher named Evenus that he would admire anyone who "really has this wisdom, and teaches at such a modest charge. Had I the same, I should have been very proud and conceited; but the truth is that I have no knowledge of the kind."
Thus philosophy, on this understanding, cannot be remunerated, because there is no real exchange, at least if this is thought of in terms of the offering of goods or services. Philosophy does not give its adepts a new body of knowledge, but only leads them through a dialectical method that shows what they already knew, or thought they knew, in a new light. There is nothing to sell here, and thus nothing to pay for.
Except that philosophy often does quite a bit more. It also tells us about the world. Socrates himself abjured not only Sophistry but also natural philosophy: the other accusation of the Athenian court, which he disputed no less vigorously than that of Sophistry, was that he had "searched into the things under the earth and in heaven." But such a search had in fact been the central preoccupation of most pre-Socratic philosophy, and it would soon be again in Aristotle, and even in Plato's own later dialogues such as the Timaeus. All sought to pry into nature and discover its first principles: water, for example, or air, or hylomorphic compounds. And wherever such prying occurred, we find a mixture of theoretical interest and concern for practical, and often overtly economic, gain. Natural philosophy (lately called 'science') is not just interested in the first principles of nature, but also in its powers, and how to harness them.
Although Socrates provides a model already in antiquity of philosophy as unconcerned with what goes on under the earth and in the heavens, it is only very recently that the self-conception of philosophers has become entirely separate from that of what are now called 'scientists'. It was only toward the end of the 18th century that natural philosophy ceased to be an integral, indeed perhaps the most important, part of what it is that philosophers think of themselves as doing. Thus we find John Evelyn, in his Fumifugium of 1661, writing about "unwholsome vapours, that distempered the Aer, to the very raising of Storms and tempests; upon which a Philosopher might amply discourse." No one in the 17th century raised an eyebrow at the association of the 'philosopher' with this sort of curiosity. Philosophy remained at least as closely linked in the popular imagination to meteorology as to, say, logic.
René Descartes continued to dissect the heads of animals passed to him by the local butcher. Gottfried Wilhelm Leibniz spent many of his most fruitful years as a metaphysician almost singularly devoted to research on the medicinal properties of the Brazilian ipecacuanha root, which culminated in the 1695 treatise, On the New American Anti-Dysenteric. He seems to have been more pleased with this work than any other of his vast accomplishments. Long before there was x-phi, there was experimental philosophy, as practiced by Robert Boyle, Margaret Cavendish, and many others, some of whom wrote lucidly on the theoretical reasons why philosophy is best conceived as fundamentally consisting in the project of hacking through nature's thorns, to speak with the poet James Merrill, and kissing awake new powers.
What changed? Immanuel Kant's own career seems to straddle a divide between two epochs. In his early formation, doing what was expected of him as a philosopher in training, the young Kant wrote a Latin treatise On Fire. He would go on in his career to write extensively on cosmology, anthropology, and physical geography. But by the time of his mature, critical work, as for example in the 1783 Prolegomena to Any Future Metaphysics that May Come Forward as a Science, Kant is sharply aware of a growing gap between science and philosophy, or, again, between natural philosophy and fundamental philosophy: the former is making rapid and unprecedented progress, while the latter keeps cycling back around the same questions, and never seems to get anywhere. Kant thus gives voice in the Prolegomena to a well-known crisis in metaphysics, but what is less often noted is the shift here in the conception of science: he wants fundamental philosophy to 'come forward as a science', and believes that if it cannot, then it has no future. But this implies at the same time that science is something distinct from philosophy, which until very recently it had not been.
The full explanation for this split is complicated, and has to do not just with internal developments in the self-conception of philosophers, but also with social and institutional history. In particular, after some centuries in which the centers of philosophical activity were located outside the university --in royal courts, museums, scientific societies, and so on-- by the end of the 18th century philosophy was again becoming principally an academic endeavor: academic in our contemporary sense of being housed in a university faculty or department, and being focused on pedagogical instruction within the bounds of a clearly defined curriculum.
In the two centuries, and some, since Kant, philosophy has grown increasingly professionalized, again as a result both of internal developments in the field as well as the complicated social and institutional history of the modern university. Today, no one could make any sense of the claim that a philosopher, qua philosopher, might enjoy discoursing upon storms and tempests, or delivering the results of her research upon the medical virtues of syrup of ipecac. Still further from our understanding of the social role of the philosopher are models from the deeper past, which come mostly from the church and the temple, a social milieu from which philosophers would like to maintain a very safe distance: the philosopher as priest or monk, as social mediator between the human and the divine, or as isolated, world-renouncing contemplator of the divine. But these are part of the long heritage of the discipline, too, and it behoves us to understand the way this long history continues to impact philosophy's efforts to define itself.
Such an effort at historical self-understanding becomes particularly urgent in periods of dramatic social and institutional change, such as those the current university system is now facing. Whether we like it or not, the future of philosophy, like much of its past, may well unfold outside of the university. J. M. Coetzee has recently compared the situation of humanities professors today to the one faced by dissenting academics under the communist regime in Poland, where those who were not permitted to teach real philosophy let it be known that they "would be running a philosophy seminar in [their] living room, outside office hours, outside the institution." In so doing, Coetzee writes, "the study of philosophy was kept alive. It may be something along the same lines will be needed to keep humanistic studies alive in a world in which universities have redefined themselves out of existence." Unlike Polish communism, however, the changes happening in the world today are global, and it is difficult to imagine how they could possibly be reversed.
Ironically, one of the mechanisms by which universities are destroying themselves, or at least are seeking to remove their own humanistic hearts, is by forcing philosophers to conceptualize their own work on the model of the positive sciences: by forcing them to apply for large grants, for example, with explicit 'methodologies' (which can no longer be simply reading a bunch of books, and thinking about them) leading to concrete research 'results' (which can no longer be simply interesting and compelling observations about the world and our place in it). But what administrators and faculty alike miss in their mutual misunderstanding here is the depth of the historical relationship, indeed the identity, between what is now being called 'science' and what has for a much longer time been called 'philosophy'. It cannot be that philosophers must retain their independence from the sciences, for it is a simple historical fact that this independence is a recent invention, and not necessarily a justified or useful one.
If there is, then, a new expectation that philosophers justify what they do in terms appropriated from the sciences, the deeper problem with this expectation might be that these terms are equally inadequate for grasping what it is that science itself does, or might have done in the past and might do once more in the future. But again, in order to see this, we need to reconstruct the history of natural philosophy, to expose the self-conception of the people who in different eras and contexts sought to hack through nature's thorns. And we need to do this in a way that does not cordon natural philosophy off from philosophy on the basis of anachronistic divisions.
For Coetzee, the living-room philosophy of the Polish intellectuals is noble, and worth replicating, presumably in large part because it is offered freely, just as Socrates offered it so long ago. But again, this is only one conception of what the philosophical endeavor is, and by no means the most prominent one throughout philosophy's history. Philosophers have been cast in many different social roles, with many different job descriptions. Among these, we may identify at least six, though of course they are partially overlapping and any given philosopher in history will probably be a mixture of at least two or three. These six roles, and the way they have been embodied by different historical figures in different periods and contexts, will constitute the six principal chapters of the book.
There is, first of all, the Priest. This is the oldest social role of the philosopher. The label here is to be understood in a broad sense, to include any socially revered figure (almost always an older man) who is held forth as a mediator between the immanent and transcendent realms, who is held to be able to speak for the gods or interpret what is going on beyond the realm of human experience. It includes, for example, the Brahminic commentators on the sacred scriptures of India, who have provided us with the textual basis of classical Indian philosophy. This social role is also surely continuous with that of shamans and like figures in non-textual cultures, even if it only starts to look like a philosophical or quasi-philosophical endeavor at the point in history when the mediating role of the Priests is laid down in texts that display some concern for conceptual clarity and valid inference.
Second, there is the Ascetic, who appears in what Karl Jaspers helpfully calls 'the Axial Age', the age in which Buddhism and Christianity come onto the world stage, both positioning themselves as explicit rejections of the authority of the Priests in their ornate temples. Cynics, Jains (known to the Greeks as 'gymnosophists'), early Christians, and other world-renouncers provide a template for a conception of philosophy as first and foremost a conformation of the way one lives variously to nature, or to divine law, or to something beyond the illusory authority of society, the state, or the temple. The Ascetic continues to be a familiar figure in philosophy throughout the middle ages, though now mostly confined within the walls of the monastery, and still has late echoes in secular modernity in figures such as Friedrich Nietzsche. The latter is generally seen as a peculiar individual, but this may have something to do with the fact that there was by the late 19th century no longer an obvious social role for him to play.
Third, there is, of course, the Gadfly, who understands the social role of the philosopher not as mediating between the social and the divine, nor as renouncing the social, but rather as correcting, to the extent possible, the myopic views and misunderstandings of the members of his own society. Socrates is a special case of the Gadfly, since, as we have already discussed, he does not have a positive program to replace the various ill-conceived beliefs and plans of his contemporaries, in contrast with the various social critics or philosophes engagés who follow in this venerable and still vital vein.
A well-known and much despised social role for the philosopher is, fourth, that of Courtier. A recent popular book set up Baruch Spinoza as the noble Ascetic against the unscrupulous Gottfried Wilhelm Leibniz, who was ready to sell his philosophical services to whichever European sovereign was willing to pay the highest salary. Here, for the first time in our list, money makes its explicit appearance (though it was surely there in the Priest's temples as well). The more recent incarnation of the Courtier is the 'sell-out', or, to put it in somewhat more euphemistic terms, the 'public intellectual', who unlike the Gadfly is out there in society, not in order to change it, but in order to advance himself and his own glory. (The gendered pronoun here is intentional.) But there is a problem, of course, in determining who fits this description and who does not; all philosophers need support, and few have the fortitude to retreat into pure Asceticism. Those who get cast as Courtiers seem to be the ones who take earthly wealth and glory as the end in itself, rather than at most as a by-product of their pure love of wisdom. Or at least they are the ones who do a particularly bad job of concealing the fact that it is wealth and glory they are after. Whether, however, these desiderata are strictly incompatible with profound thought is an important question. Leibniz would seem to provide a counterexample to the claim that they are incompatible, but an interesting question remains --one whose answer could tell us much about the nature of the philosophical project-- as to why 'Courtier' continues to function as such a potent ad hominem against the integrity of a philosopher.
Fifth, there is the Curiosus, the great forgotten model of the philosophical life. It is a principal concern of this book to solve the mystery of his disappearance. He is the philosopher who expatiates on storms and tempests, on magnetic variation, on the fine-grained details of the wings of a flea. The Curiosus is often a Curiosa: many of the adepts of early modern experimental philosophy were women (women have also been Ascetics and Gadflies, less often Priests). Curiosae and Curiosi believe that there is nothing shameful about knowledge of what Leibniz celebrated as res singulares: singular things. These too can reveal the order of nature as a whole, and it is eminently the task of the philosopher, on their view, to discover this order. The paradigm statement of this approach to philosophy may well be found in Aristotle's defense of the worthiness of marine biology against unnamed critics: looking into the viscera of some sea cucumber or cephalopod, he proclaims, citing Heraclitus, who was caught by distinguished visitors lounging naked on a stove: "Here too dwell gods." In a complementary vein, Nietzsche observes that science first emerges when people are no longer able "to think of the gods well." The Curiosus, a familiar figure of the 17th century, just prior to the emergence of the figure of the scientist, seems to have been the last of the philosophers to see the gods, so to speak, in the particular things of nature.
There is, sixth and finally, the Mandarin. This is a pejorative term, though unlike 'Courtier' it describes an entire class of people rather than exceptional individuals who may emerge from that class. The term comes from the examination system that produced the elite class of bureaucrats in Imperial China, and may be easily extended to the modern French system that produces normaliens, and also with only a bit more stretching to the system of elite education in the Anglo-American sphere out of which the great majority of successful careers in philosophy take shape. Mandarins have a vested interest in maintaining what Thomas Kuhn called 'normal science', and are typically jealous guardians of disciplinary boundaries, wherever these happen to be found in the era of their own professional activity. Like Courtiers, Mandarins often have wealthy benefactors (now corporate rather than royal) and they stay close to centers of power (top schools in philosophy today tend to be found within a short drive or train ride from the world's major metropolitan concentrations of capital). But unlike Courtiers for the most part they are able to pursue their careers more or less as if money were not an issue, and indeed are the ones most quick to denounce the Courtiers for their unseemly conduct.
It is the Mandarins whose fate is most uncertain in the post-university landscape presaged by Coetzee. And yet, as I hope to go some way towards showing in this succinct survey of the different types of philosopher throughout history, it may be that we will be better able to usher philosophy into a very new historical period in a very new and unfamiliar world --where the old institutions are rapidly transforming beyond recognition, and new ones are taking shape in ways that are nearly impossible to predict-- by resurrecting and breathing new life into old and forgotten ideas about what it is to be a philosopher.
‘Among’ is a curious preposition. Today it seems to be used mostly to describe the relation of being with beings that are not quite like us, but still enough like us that a properly second-person relation may obtain with them. One can be ‘among’ ghosts and apes, while such a relation is less plausible when it comes to snails or rocks. This connotation perhaps explains the survival of the word in contexts where the intent is to sound like an anthropologist. When Napoleon Chagnon’s book, Noble Savages: My Life among Two Dangerous Tribes --the Yanomamö and the Anthropologists (Simon & Schuster, 2013), tells us that its author has spent time ‘among’ these ‘tribes’, we can immediately take from this that the author wishes to sound like an anthropologist, and to sound like one for an audience that prefers to imagine this discipline in its pith-helmeted, imperial chapter. That Chagnon is in reality an anthropologist does not change the fact that his concern to sound like one is meretricious in the extreme. For he would have us believe that the discipline in whose name he speaks still wears a pith helmet, that it remains a matter of men of science descending into the heart of darkness to study groups of people who, while possessed of a language of sorts, are not recognized in any significant way as having their own voice. More than this, he would have us believe that it is only in this uniform, it is only ‘among the tribes’, that anthropology can continue to present itself as scientific.
Chagnon is a staunch defender of one side --the losing side-- in a recent rift within academic anthropology. In 2010, as was widely and apocalyptically reported by scientistically minded journalists such as Nicholas Wade, a subcommittee of the American Anthropological Association deemed its own subject a non-science, or at least argued for excluding any mention of science from its self-description. This change was meant as an acknowledgement of the fact that anthropology does indeed deal with interactive subjects who can speak for themselves, and does not deal with mute ants or atoms. From the outside, it is not hard to regret this unnuanced decision, for what the AAA subcommittee seems to have missed is that one need not make a choice: the human sciences can be sciences too, real sciences; even if they require some methods and provisos of their own, they can still remain connected to the broader unified project of understanding how the world really, truly works. One can’t help but share in Chagnon’s frustration at the hasty decision of the majority of his disciplinary peers to disown their discipline’s historical connection to any branch of the complex and variegated scientific tradition. After all, until very recently (and to some extent to this day still in languages such as French and German), a ‘science’ was any relatively systematic body of knowledge, anything the goal or product of which was scientia, and it is only in the very most recent times that the notion has been reduced to the figure of somber men seeking to run the world on the basis of claims of unassailable expertise.
Yet the cartoon version of science that Chagnon proposes in response, in its total failure to recognize that there might be special problems of theory-ladenness, power inequality, looping effects, prejudice --in a word, all those factors that make the scientific study of humans a more delicate matter than the study of other domains of nature--, can easily make one wish to take the ‘postmodern’ turn oneself, if only to get away from this astoundingly simplistic pretense of scientificity.
Recent scholarship in French, Spanish, and Portuguese focusing broadly on Amazonian cultures cannot but further contribute to the view that the rift in American anthropology has led to a further impoverishment of the discipline in this country. In Europe and Latin America, we find very theoretically sophisticated engagements with Amazonian cultures being undertaken by authors who take a broadly naturalistic and scientific view of their project, and who proceed with complete indifference to the work of Chagnon. Some of these approaches deploy decidedly western and science-friendly conceptual tools, yet still manage to say something valuable and revealing about the actual beliefs and concerns of the people being studied. Thus for example in his remarkable book, Par delà nature et culture (Paris, Gallimard, 2006), Philippe Descola, following the precedent of the Brazilian anthropologist Eduardo Viveiros de Castro, takes the philosophy of G. W. Leibniz as providing a point of access to certain varieties of animist ontology in South America. Leibniz, like the Makuna and the Wari’ peoples, supposes that “that thing is a subject that finds itself activated or turned into an agent by means of a point of view.” In this respect, for Leibniz and the South American tribespeople alike, the discontinuity of forms in nature is underlain by a deeper unity, to be explained by a difference of perspectives. The ‘perspectivism’ at the basis of his approach, Descola explains, “is thus the expression of the idea that every being occupies a point of view of reference, and thus finds itself situated as a subject.” Descola concludes that a Leibnizian perspectivism amounts to an “ethno-epistemological corollary of animism.” Every being, on this view, is an expression of exactly the same rational order. But heterogeneity or discontinuity of forms arises at the corporeal level. 
Different beings have different bodies, and so also different phenomenologies, since their perception of the world takes place through their bodily sense organs. This means also that they must conduct themselves in the world differently, that they will be non-identical with respect to their agentive means, even if at a fundamental level all in the end have the same rational ends. For Descola, in turn, the fact that the Makuna and the Wari’, like Leibniz and all the representatives of European science who came after him, see all of nature as governed by something like reason, and by a reason that is knowable by human beings, is in the end grounded in a universalism about human minds that is informed by cognitive science and neurolinguistics.
I have dwelt on this example at some length in order to illustrate an important point: that one can in fact approach the subject matter of anthropology naturalistically, using the conceptual tools of European traditions of thought, and still come up with theoretically sophisticated accounts of indigenous beliefs that remain nonetheless sensitive to the actual concerns, to the ‘voices’, of the people being studied. This is what the best social anthropology, conceived as a branch of the human sciences, has always aspired to do: to tell us what other people are thinking, as a step toward piecing together the puzzle of what it is that humans qua humans think, and to determine what they are thinking by taking seriously their own categories and commitments, by learning to think like they do. The interpretation of culture is a complicated task, and it doesn’t automatically become uncomplicated if one affiliates oneself to ‘science’ rather than ‘postmodernism’. Rigorous, mature science --and this applies already to the sciences that study the non-human world-- understands that it is not just about observing the bare facts and relating them, but rather always involves interpretation through an appropriate theoretical framework, the choice of which must be made in large part on a priori, indeed philosophical, grounds. Chagnon’s dichotomy is forced and false.
But are his particular scientific claims as false as his belief in his own role as conservator of science? There is nothing intrinsically ridiculous about Chagnon’s central hypothesis that ‘primitive’ societies are structured by male competition over women, or that this competition results in systemic violence. More generally, the central conviction, that culture is an outgrowth of nature, and that many features of human societies can be studied just like ant colonies can be, seems to me almost certainly correct, and in any case should be assumed to be correct as a matter of methodology for many research purposes. Surely it ought to be permissible to call Chagnon out as a poor reasoner --to the extent that he makes a fallacious leap from the view that culture is natural to the view that there simply are no problems of interpretation or ideology or theory-ladenness in a scientist’s undertaking to tell us what this nature-bound culture is like-- while refusing to rush headlong in the other direction and to deny that culture is natural, or to insist that human societies are fundamentally different from ant colonies and cannot be studied at all by the same methods.
The real problem with Chagnon’s work, whatever accusations his enemies in anthropology may throw at him, is not the Hobbesian hypothesis at its core, but rather the fact that he does not prove, or even begin to prove, this hypothesis from the empirical evidence mustered. Chagnon boasts that “most anthropologists have never lived among people who are really primitive.” The fact that he has done so, he believes, gives him access to special insight into the basic forces underlying human history. Namely, as a result of his Amazon sojourn, “I discovered that maximizing political and personal security was the overwhelming driving force in human social and cultural evolution.” But how do you ‘discover’ this from observing a single group of people? Might there be some alternative interpretations of the same observations? Might there indeed be alternative interpretations that one could plausibly defend without even having been there, simply from the sort of book-learning that Chagnon sets himself up as disdaining? For example, might group fitness also play a significant role in individual human actions? Or, to put it less Darwinistically, might values associated with community play a role in the explanation of human motivation, equal to the maximization of individual benefit? I am not saying they do play such a role; but I seriously doubt that the definitive conclusion that they do not, on the basis of observation of a single group of people, however ‘pristine’, could possibly count as good science. I would trust a sharp theoretical mind that grasps the subtle interpretive problems involved in accounting for the driving forces in human evolution, even if that mind has never lived ‘among’ pristine tribes, sooner than I would trust an experienced field-worker who has no apparent talent for, or interest in, problems of interpretation or of the difficulty of choosing theoretical frameworks.
We have dissected some of the controversial terms in the subtitle of Chagnon’s book, even down to its prepositions, but have so far steered clear of everything having directly to do with his invocation of either nobility or savagery. The phrase ‘noble savage’ dates back to John Dryden’s 1672 play, The Conquest of Granada, whose hero, a Spanish Muslim, extols the delights of man’s free natural state: “I am as free as nature first made man, / Ere the base laws of servitude began, / When wild in woods the noble savage ran.” Chagnon does not in fact claim that the tribe of anthropologists has anything noble about it, and in this respect the book’s title, likely a marketing-inspired afterthought, is not really fitting. What is important for our purposes anyhow is the access Dryden’s immortal phrase gives us to a certain basic opposition in the history of modern European thinking about the nature of human culture, progress, and diversity.
To speak of the nobility of the savage state is to imply a sort of romantic primitivism that does not sit well with most varieties of political progressivism. Some forms of anarchism and ecologism excepted (a significant exception), the main currents of progressive thought have been based on the core belief that, through human agency, history can and ideally does move from a worse state to a better one. One might place Chagnon by saying that he has accepted the worst of both worlds: the belief that the modern, developed west knows best, together with a version of the belief that the ‘savages’ have a unique power to reveal to us who we really are as human beings. A romantic primitivist of Chagnon’s sort does not want us to learn from the Yanomamö in the sense of adopting their ways or returning to a more ‘primitive’ form of life in the spirit of the anarcho-primitivists and various other droppers-out. Rather, he wants us to draw the lesson that ‘primitive’ Amazonians, to the extent that they reveal more clearly the elementary constituents of human social life, are therefore in a position to tell us what human beings are really like. Now ordinarily ‘science’ is held to contrast with ‘romanticism’ nearly as starkly as with ‘postmodernism’, and Chagnon himself would surely not acknowledge this part of his pedigree. Indeed, again, his is a very peculiar sort of romanticism: it says not that we should reject the status quo in order to return to something primordial, but rather that the status quo is in truth only a late echo or unfolding of that primordial something.
Of course, and again, what is missing here is any real concern about the difficulty of interpreting the facts. This difficulty is there even in the case of the ants, but it is greatly compounded by the tremendous variability of culture, that is, by the fact that human beings respond to the same basic environmental exigencies in hugely different ways. Human culture is not discontinuous with the rest of the natural world, but this does not mean that it carries no special problems of its own, and the best way to characterize the central problem, which also ought to be enough to give anthropology its raison d’être as a distinct and somewhat autonomous domain of science, is to note that there is no obvious way, no algorithm or formula, to explain how a given set of natural circumstances will lead to a given cultural form. Different groups will come up with widely different responses to more or less the same environmental pressures, not because humans are supernatural or in principle incomprehensible in scientific terms, but because the science of human culture must in the end be what Leibniz would have called a science of res singulares, of singular things. This does not mean that we cannot look for an order underlying the diversity, but it does mean at the very least that we must be very careful about generalizing from any particular case to a claim about humanity in general, even, or perhaps especially, when the case at hand is ‘primitive’, in the sense that it displays the purportedly elementary features of human society.
This interpretation of the basic shortcomings of Chagnon’s supposed science should be familiar enough to those who have followed the drawn-out dust-up between him and Marshall Sahlins. The latter has been a committed opponent of sociobiology since its first articulation by E. O. Wilson in the 1970s --or, better, since its first articulation under the label of ‘sociobiology’; some form or other of the view that human society is fundamentally of the same sort as animal society has always been around, and has been defended by figures as diverse as Diogenes the Cynic, Piotr Kropotkin, and Bernard Mandeville, author of the 1714 Fable of the Bees. Sahlins sharply noted, in a 2000 review of Patrick Tierney’s Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon (W. W. Norton & Co., 2000) (which accused Chagnon of, among other things, contributing to the spread of a measles epidemic among the Yanomamö, an accusation that has since been disproven), that it would be impossible --not difficult, but theoretically impossible-- to identify a universal selective pressure for violence, just as it would be impossible to genetically track the rapidly transforming behavioral values associated, e.g., with competition over potential mating partners. It follows for Sahlins that the only selective force that we can identify is the one that favors a human ability “to realize innate biological dispositions in a variety of meaningful ways, by a great number of cultural means.” Sahlins continues:
Violence may be inherently satisfying, but we humans can make war on the playing fields of Eton, by sorcery, by desecrating the flag or a thousand other ways of ‘kicking butt’, including writing book reviews. What evolution has allowed us is the symbolic capacity to sublimate our impulses in all the kinds of cultural forms that human history has known (“Jungle Fever,” Washington Post, December 10, 2000).
In short, the fact that human beings are symbol-using animals ensures that violence will not always just be violence; we will get creative, find new and often amazingly subtle ways to fight. Other animal species are indeed capable of symbolic displays of hostility—they can make their hair stand up on end, for example, or show their teeth. But human beings can bring about similar effects by means, for example, of sequences of carefully selected black symbols against a white background. Writing is a uniquely human activity, but it is not unique among human activities. In fact, as many archeologists are coming to appreciate, all human material culture is symbolically dense in the same way texts are, and the division between the study of prehistory, as concerned with non-textual cultures, on the one hand, and on the other history, which begins roughly with cuneiform tablets, is entirely artificial, set up, arguably, to give us a comfortable psychological buffer zone --‘prehistory’-- between properly human history and the dark abyss of time that precedes it. Material culture, dress, weaponry, ritual objects, scarification, perhaps writing too: all of these developments make possible a sublimation of violence from its blunt club-to-the-head variety into manifestations that can be exceedingly difficult to interpret or even to detect. And this means that we are returned inevitably to the approach to anthropology that understands it fundamentally as an interpretive endeavor, not entirely unlike the effort to extract meanings from texts or tombstones. This is not postmodernism, but only an acknowledgement of one of the things, alongside raw data collection, that science, in the rich old sense of Wissenschaft, has always done and cannot but do.
We have, so far, contrasted Chagnon’s blunt ‘scientific’ approach with ‘postmodernism’ as well as with any theoretically sophisticated approach to the human sciences. What these distinctions fail to capture, however, is the fact that a great deal of current anthropology positions itself not principally as any of these, but rather in an advocatory role. Many of its practitioners are uncomfortable in the role of scientific explainers not because they are subjectivists about truth or because they reject grand metanarratives, but simply because they have understood the precariousness of the situation of the indigenous peoples they study, and have determined that their efforts and resources are better directed toward mediating between native peoples, on the one hand, and governments and rapacious corporations on the other, than toward telling us what the driving forces of human social evolution are.
But here there is not such a sharp separation, since many anthropologists manage to unite this foremost concern with profound theoretical insights. Thus for example Terence S. Turner has been able to portray the Kayapo of Brazil in a way that both conveys the richness and complexity of their beliefs and corresponding actions, and, at the same time, sensitively deals both with the threats to these beliefs coming from encroaching modernity and with the potential ways in which anthropology itself can hasten this encroachment (see, e.g., Terence S. Turner, “Representing, Resistance, Rethinking: Historical Transformation of Kayapo Culture and Anthropological Consciousness,” in G. Stocking, ed., Colonial Situations: Essays on the Contextualization of Ethnographic Knowledge, History of Anthropology, Vol. 7, Madison, University of Wisconsin Press, pp. 285-313). Chagnon, meanwhile, has actively been portraying the Yanomamö as a people who in certain important respects do not have their house in order, and this in a context where he knows, and they know, that the land they inhabit is extremely desirable to a number of parties who would be perfectly happy to use the civilizing mission as a pretext for moving in on them.
It is important to note here, if only in passing, that support for the gradual ethnocide that would result from governmental acculturation programs, driven in the end by corporate interests, remains surprisingly widespread even among purported humanitarians. Thus in a recent article in the New York Review of Books, the conservation biologist John Terborgh offers the following argument for assimilating uncontacted indigenous peoples: “Do we want,” he asks,
to keep people in a ‘cultural museum’, a time warp as it were? Putting aside the practical questions of how this would be accomplished, is it morally the right thing to do? This is a question of values and some of my anthropologist colleagues would say yes. But the morality of this question has to be considered in the light of our own cultural origins. Once upon a time, the ancestors of each and every one of us lived in a premodern culture. Those cultural origins have now been completely erased from our collective memory. Do any of us regret the loss of this memory? Would any of us prefer to return to our ancestral condition, rather than to live in the modern world? Few, if any, would say yes. To live in isolation is to live a short, hard life in the absence of modern medicine and in complete ignorance of history, geography, science, and art (“Out of Contact,” New York Review of Books, April 5, 2012).
Terborgh’s heart may be in the right place, but he could not be more wrong. He evidently has learned nothing from his anthropologist colleagues. As has been familiar since Claude Lévi-Strauss’s 1962 La pensée sauvage, indigenous peoples do have science: they have, in fact, a tremendously complex system of classifying and interpreting salient features of the natural world. This complexity has been borne out in more recent work by Brent Berlin, Scott Atran, and others on what is sometimes called ‘ethnotaxonomy’. They also have history, or what we would call, usually dismissively, ‘myth’. It may involve different standards of evidence and be underlain by a different epistemology, but it is meaning-giving and valuable to the people who recite it, learn it, and embody it. They certainly have geography: just ask any outsider who needs to find his way around indigenous territory and who seeks the aid of a native guide. And the claim that they do not have art is simply nonsensical.
But the greatest problem with Terborgh’s plea for assimilation comes at the practical rather than the philosophical level. Indigenous people are never assimilated into a larger society anywhere other than at the very lowest rung. From being people who occupy no particular social class, they become, when urbanized or engulfed into a state structure, the occupants of the bottom class, enjoying none of that society’s privileges. I don’t see how anyone could argue that it is better to be a proletarianized slumdweller than to live out a life in the traditional way, beyond the pale of history, as Kant, for example, would put it in his Enlightenment-era condescension. Assimilation always means introduction to new hazards: guns, drugs, high-fructose corn syrup, wages guaranteed in advance to maintain the laborer in poverty (for an account of the consequences of the absorption of the Sudanese Nuer into a state structure, see Sharon E. Hutchinson’s excellent study, Nuer Dilemmas: Coping with Money, War, and the State, University of California Press, 1996). Terborgh has absolutely no vision for a program of assimilation that would yield better results.
Chagnon for his part does not explicitly argue for assimilation, but nor does he show much concern about the relationship between his own work and the potential consequences for the Yanomamö. He is right to protest against his enemies, such as Marvin Harris, who falsely and slanderously interpreted Chagnon as arguing that “the Yanomamö have a gene for warfare and violence,” and who saw a direct link between this claim and the subsequent depredations that these people suffered at the hands of Brazilian gold-mining companies. Yet Chagnon explicitly denies that ‘activism and advocacy’ have any place in anthropology, in fact sees these as one member of an unholy trifecta alongside ‘postmodernism’ and ‘biophobia’, and appears to understand the decision of other anthropologists to take up the role of the advocate as necessarily involving the use of false claims against competitors based only on presumed moral authority. (Perhaps this is the source of the talk of nobility: that activist anthropologists have taken to telling ‘noble lies’?) He speaks of other anthropologists who became politically radicalized, and consequently began “fighting the forces of acculturation rather than actually studying the Yanomamö.” From Chagnon’s point of view it is his job as a scientist to simply tell it like it is, to be a straight-shooter, and insofar as he is transmitting the truth, whatever happens as a result can only be of secondary importance.
At this point, though, one cannot help but notice the connection between the lack of theoretical sophistication and the sort of damage to indigenous people that anthropologists like Turner are working hard to forestall: a subtle and broadminded fieldworker, one imagines, would be able to give an interpretation of the people under investigation that would show the complexities of their connection to the environment they inhabit, that would, in simply aiming to show the truth --the scientific truth, if you will-- ipso facto show an outsider the supreme and inviolable reasons for leaving those people sovereign and unmolested in their native habitat. Thorough, unsimplistic anthropology can already, in itself, amount to something close to advocacy: it shows the richness of other worlds, and therefore the tragedy of destroying them. Chagnon’s work does none of these things, for reasons that are all interconnected, and that at bottom have to do with an impoverished and caricatural understanding of science. The AAA subcommittee’s desire to extinguish any mention of science in its own self-understanding, in turn, is surely a symptom of the same impoverishment. (This self-understanding, alas, did not take hold; in the AAA's latest public statement on the topic, 'What Is Anthropology?', there is an acknowledgment of important connections to science, even if it stops short of saying that every domain of anthropology is itself a science.) Anthropology was born of sin in the period of high-modern European imperialism, and there is no question that its problematic origins in large part explain its current fracturing and stalling.
But none of this changes the fact that the minute description of human cultures through participant-observation with an eye to accounting for the nature and extent of their variety, and to revealing the complex web of dependencies and interactions between nature and culture (disputed categories for Descola), can still in principle be of tremendous value in our effort to understand ourselves and our place in the world.
In view of my impending move to France at the end of this month, I am frantically sorting through a lifetime's worth of old papers, trying to perform a rather radical triage and to get what remains down to the dimensions of a single box or two to be sent after me by freight. I've been discovering some real gems, and posting photos of them to a private social media site. There has been a tremendous outcry among my friends against the proposal to throw some of this stuff away. Fine, I thought. I'll sell it. I'll have an auction here at jehsmith.com.
The PayPal donation button to the right has lapsed into near total desuetude over the past few weeks. Apparently people still want physical objects for their money. Well, here's one: a 1987 report on walruses, "a truly fascinating animal that belongs in the pinniped (fin-feet) classification along with seals and sea lions and it lives in the northern Pacific, northern Atlantic, and Arctic Oceans." Send me a bid, by private message, or in the comments section, within the next week (or until I have a bid that's sufficiently high), and if the price is right I will send you the report by post. If you think it is scandalous that I would part with such a thing, then you, as the highest bidder, may elect to hold it in trust for me until I, uh, need it again.
We'll see how well this works. I have many other equally precious papers with which I might follow this up, including a 1989 Sacramento County arrest warrant for yours truly, and envelopes stuffed with old love letters, in order to help offset the exorbitant cost of this intercontinental move.
All joking and stuntsmanship aside, it strikes me as I go through my papers that I actually have a fairly rich archive, documenting not just my own curious peregrinations and many false starts, but also a good slice of life in the late-20th and early-21st-century West. And you could own a part of it!
A student in rural Iceland, of sheep-farming stock, had her guard down, or didn't yet have a guard. She didn't know how to talk to foreigners, or perhaps felt there was something she had to get across to foreigners, or to this foreigner, who showed an interest in her country. She said, in the hope of conveying to me the whole ethical-spiritual outlook of her country in a single concrete example: In Iceland we are taught not to smash rocks.
In recent years something called 'environmental ethics' has been moving into the mainstream, finding space alongside the Kantian, the utilitarian, and so on, which for their part suppose that an ethical relation can only be had toward an ethical subject, and that such subjects are found only among human or at most animal beings. Even environmental ethics tends to imagine the environment with a thick arboreal canopy, with lush grass, and lily pads covering seething green ponds. But in the Arctic and sub-Arctic the 'environment' is mostly a geological rather than a biological phenomenon, and it is not altogether surprising that in such a setting rocks come forward as phenomenally salient, as creatures, as others, more readily than in the Amazon. And still less do the rocks come forward as our petrous co-beings in the big cities of the world, where they only appear ground down and formed into angular artifacts of human ingenuity, which in turn you are not supposed to smash, since in the process of their transformation they have become 'property'.
I was conveyed back from rural Iceland to Keflavik and was flown from there to JFK, AirTrain to the subway to Manhattan, where the next day I gave a talk for a crowd of people who, I sensed, did not grow up on sheep farms. After the formal talk, the informal talk drifted to the matter of moral status, or, more precisely, of what species of animal we may appropriately eat. There was broad interest in episodic memory as marking a cut-off point on the scale of being: animals that have a conception of self, and of having a past that is their own, are not to be eaten, whereas the others may be eaten. A cow is not really an individual cow with a memory of its own cow youth (let alone an idea of its own impending cow doom), but rather is only a series of succinct time intervals of cowhood. Thus to kill it is not to deprive some particular integral cow of its life, but only to terminate one series of five-minute-long-or-so cow intervals. Or so goes the prevailing theory, the theory that is most in keeping with what we know about nervous systems.
Should I bring up the Icelandic student with the rocks?, I wondered. Jesus I move through worlds too quickly. I paused, stuttered. Yes, I said, nervous systems and episodic memories, fine. But don't we also see, in the world, evidence for something that our predecessors might have called 'conatus'? The tendency of all things to hold together? Is it really such a stretch to suppose that this tendency reveals to us a sort of integrity in things that need not be justified in terms of neurophysiological structure or of the conscious states that are thought to be uniquely grounded in this structure? And isn't it a transgression to violate this integrity?
Or maybe I didn't get that far. Maybe I held back. I know the usual replies too well. But that would mean we'd have to stop eating plants too. We'd all die. That's extreme. That sounds like Buddhism. It also sounds like 12th-century compassion theology, and, I suspect, like what pretty much everyone understood about the world for the vastly greater part of human life on earth, but 'Buddhism' is invoked here as a rhetorical strategy to cause one's deepest truest commitments to come out smelling like a mere dusting of Oriental spice.
Somewhere Liz Harman gives an argument about abortion in which early-stage fetuses are shown to be no more morally relevant than are plants. But that hardly settles the matter, if you are not starting out from the premise that moral relevance flows from what is going on 'in there', let alone that what is going on in there can be simply and uncontroversially read off of physical structure. Plants are morally relevant, and to deprive them of their integrity and their thriving is not a morally insignificant matter. What gets eaten, what gets aborted, what gets smashed, will never be decisively resolved by an inspection of internal structures and capacities of a given candidate for destruction. Even smashing a mere chunk of solidified lava --evidently purely passive, and homoeomerous from one end to the other-- can be experienced as a transgression by the person who is properly sensitized, for whom the chunk shows up as salient within her ethically charged environment. Are fetuses morally relevant? Yes, they are. So are chunks of lava. Does that mean you mustn't destroy them? Not necessarily, but you shouldn't suppose that the way to gain license to destroy them, whether this license is conceived cosmically, socially, or individually, is to produce arguments that cut them off from the sphere of moral relevance.
The prevailing ethical theories suppose that there must be an ethical subject in some bit of matter in order for ethical commitments toward that bit to properly obtain. Parallel to the partial rise of environmental ethics, there is a metaphysical view that is perhaps slowly recrudescing, pananimism, which holds that all of nature is imbued with mind or mind-like powers. There are variations on this; Galen Strawson for example argues that some form of panpsychism is entailed by any realistic physicalism, and in this he is restyling an old argument of the 18th-century materialists, such as Diderot, to the effect that even marble can think-- it simply has to be ground up and sprinkled on grass that is eaten by a cow that is in turn eaten by a thinking human. Many philosophers have understood that to attribute mind-like powers to all of nature logically compels one to adopt the view that every bit of nature harbors an actual subject-- thus we find Ralph Cudworth arguing in 1678 that hylozoism (the view that all matter is alive) entails the 'clubbing together' of infinite minds everywhere (Cudworth saw this as a reductio ad absurdum of hylozoism, just as more recently John Searle has argued against pananimism on the grounds that there must be an individual mental subject wherever there is mental activity). A few years after Cudworth, we find G. W. Leibniz working out the elements of his own theory of monads, which holds precisely that the world is entirely constituted from the activity of infinitely many nodes of perception.
Such theories strike us as outlandish, and indeed as outliers in the modern conceptual landscape. But one thing of which I am growing increasingly convinced as I attempt to broaden my reading from philosophy to anthropology, Religionswissenschaft, and related fields, is that from the perspective of the longue durée something like what the theory of monads articulates has been the default human understanding of nature all along. It is what Heraclitus has in mind when he says that gods dwell in his stove; it is what preoccupies the Inuit when they suppose that in eating walrus meat they are eating the souls of ancestors; and it is the way of thinking that informs Virgil's poetic account of the Zephyr's power to impregnate mares. There are souls, gods, ancestors (whatever!) all around us; they are in evidence in the structure and cohesion of nature; and it is a transgression against them to needlessly violate this structure and cohesion.
This is the sort of thinking in which ethics originates. The ontology of philosophers has changed, and sent us scrambling for a new ethics, on which some things make the cut as relevant targets of ethical concern and others don't. But the old folk-ontology is not entirely extinct, and I suspect that there is much in folk-ethics, in the actual ways people form their ethical commitments (to icons, rocks, fetuses, etc.), that will escape the comprehension of philosophers so long as we continue to dismiss as backward or primitive or unscientific the idea that gods, or something like them, dwell here, for any here, too.
There is a trite and obvious thing to say about Iceland, and that is that it looks like the moon. Descending into the Keflavik lava fields the other day, on an Icelandair flight from Paris, I was permitted to feel annoyed and a bit superior when I overheard the virgin French tourists behind me exclaiming as they gawked at the land below: Mais il n'y a rien là! ('But there's nothing there!') By 'nothing' I thought perhaps they had meant 'no Michelin stars', but then one of them added, as if on cue: C'est comme la lune! ('It's like the moon!') Yet if there is an association between the earth's only satellite and this basalt outcropping of the mid-Atlantic range that is too obvious to mention, there is another that remains to this day far too occult, and that is as deserving of notice as the other is of suppression.
The great German astronomer and natural philosopher Johannes Kepler, most famous for defending the Copernican system and for establishing the elliptical orbit of planets, began his 1634 Somnium (sometimes subtitled A Posthumous Work on Lunar Astronomy) as follows:
My name is Duracotus. My country is Iceland, which the ancients called Thule. My mother was Fiolxhilde. Her recent death freed me to write, as I had long wished to do. While she lived, she carefully kept me from writing. For, she said, the arts are loathed by many vicious people who malign what their dull minds fail to understand, and make laws harmful to mankind. Condemned by these laws, not a few persons have perished in the chasms of Hekla.
The narrator goes on to describe his mother's commercial undertakings in what might be euphemistically called 'folk medicine', or, slightly more bluntly, 'natural magic':
In the earliest years of my boyhood my mother, leading me by the hand and sometimes hoisting me up on her shoulders, often used to take me up to the lower slopes of Mt. Hekla. These excursions were made especially around St. John's Day, when the sun is visible all twenty-four hours, and there is no night. Gathering some herbs with many rites, she cooked them at home. She made little bags out of goatskin, which she filled and carried to a nearby port to sell to the ship captains. This is how she earned her living.
That Kepler decides to preface what turns out to be a fairly serious work of lunar astronomy --to wit, a theoretical reflection upon the conditions on the still-mythical dark side of the moon-- with this narrative set-up is peculiar, not least because we know, and his contemporaries knew, that it reflects Kepler's own life in significant ways. Kepler's own mother, Katharina Kepler, née Guldenmann (1546-1622), was in the years leading up to the Somnium's composition being held in prison in Stuttgart on suspicion of witchcraft, under threat of imminent torture and execution. The son fought for years for his mother's release, and eventually won it, but much of their life was spent in the frenetic uncertainty of the Hexenjagd. Kepler was concerned with witchcraft not only as a source of fanciful characters for his thin fictions.
Duracotus cuts open one of his mother's little bags, and the herbs and runes come falling out. In a rage, she sells the little boy to a ship captain, and he sails off to the Danish island of Hven, where he eventually falls into an apprenticeship under the great Tycho Brahe, who instructs Duracotus first in Danish, then in astronomy. From the beginning, the displaced Icelander notices the similarity between his new learning and the traditions embodied by his own mother:
I was delighted beyond measure by the astronomical activities, for Brahe and his students watched the moon and the stars all night with marvelous instruments. This practice reminded me of my mother, because she, too, used to commune with the moon constantly.
Duracotus eventually returns to Iceland and is reunited with Fiolxhilde. Upon reuniting, they engage in an exhaustive exchange of information about their respective traditions. The Icelandic witch is happy to learn the latest science from the continent, and her son, in turn, is just as pleased to finally be indoctrinated into native secrets, "those arts among a people so remote from all the others." Fiolxhilde relates her wisdom as follows:
Advantages have been conferred, Duracotus my son, not only on all those other regions to which you went but also on our country, too. To be sure, we are burdened with cold and darkness and other discomforts, which I feel only now, after I have learned from you about the salubriousness of other lands. But we have plenty of clever persons. At our service are very wise spirits, who detest the bright light of the other lands and their noisy people. They long for our shadows, and they talk to us intimately. Among them there are nine chief spirits. Of these, one is especially known to me. The very gentlest and most innocuous of all, he is evoked by one and twenty characters. By his help I am not infrequently whisked in an instant to other shores, whichever I mention to him... Most of the things which you saw with your own eyes or learned by hearsay or absorbed from books, he related to me just as you did.
This spirit, it turns out, specializes in organizing trips to 'the island of Levania', which lies 'fifty thousand German miles up in the ether', and which is, evidently, nothing other than the moon. Fiolxhilde goes on, channeling the spirit, to explain how the lunar voyage unfolds:
The road to it from here or from it to this earth is seldom open. When it is open, it is easy for our kind, but for transporting men it is assuredly most difficult and fraught with the greatest danger to life... Great as the distance is, the entire trip is consummated in four hours at the most... Because the opportunity is so fleeting, we take few human beings along, and only those who are most devoted to us. Some man of this kind, then, we seize as a group and all of us, pushing from underneath, lift him up into the heavens. In every instance the take-off hits him as a severe shock, for he is hurled just as though he had been shot aloft by gunpowder to sail over mountains and seas. For this reason at the outset he must be lulled to sleep immediately with narcotics and opiates. His limbs must be arranged in such a way that his torso will not be torn away from his buttocks nor his head from his body, but the shock will be distributed among his individual limbs. Then a new difficulty follows: extreme cold and impeded breathing. The cold is relieved by a power which we are born with; the breathing, by applying damp sponges to the nostrils. After the first stage of the trip is finished, the passage becomes easier. At that time we expose their bodies to the open air and remove our hands. Their bodies roll themselves up, like spiders, into balls which we carry along almost entirely by our will alone, so that finally the bodily mass proceeds toward its destination of its own accord. But this onward drive is of very little use to us, because it is too late. Hence it is by our will, as I said, that we move the body swiftly along, and we forge ahead of it from now on lest it suffer any harm by colliding very hard with the moon. When the humans wake up, they usually complain about an indescribable weariness of all their limbs, from which they later recover well enough to walk.
Fortunately, like some Laurence Sterne or DFW who can never be happy or finished with his work, Kepler provides us with his own explanatory notes on the Somnium, which are considerably longer than the work itself. And he does not shy away from the question we've been circling around: Why Iceland? He begins by noting that "in our German language, 'Island' means 'Ice Land'." But, he continues, the real reason for choosing it is the power it seems to hold of inducing a state of rêverie:
In this remote island I perceived a place where I might fall asleep and dream, in imitation of the philosophers in this branch of literature. For Cicero crossed over into Africa when he was getting ready to dream. Moreover, in the same western ocean Plato fashioned Atlantis, whence he summoned imaginary aids to military valor.
But still, why Iceland in particular, rather than the distant lands selected by his classical predecessors? Kepler supposes Plutarch had already discussed Iceland in his On the Face of the Moon from the 1st century CE, though he insists that this had nothing to do with his own choice. This choice, if we are to take Kepler at his word, turns out to be fairly uninteresting, a contingent result of a contingent choice of an obscure publisher to collate the plots of unrelated stories into a single volume. He writes of his sojourn in Bohemia:
At that time there was for sale in Prague Lucian's book about the trip to the moon, as translated into the German language by Rollenhagen's son; bound with it were the stories of St. Brendan and St. Patrick's Purgatory in the earth beneath Mt. Hekla, the Icelandic volcano. Moreover, since Plutarch, in accordance with the belief of pagan theology, located the purgatory of souls on the moon, I decided that when I set out for the moon I would most prefer to take off from Iceland.
My thesis here, if I must have one, is that there is more to the story than this. The association between Iceland and lunar voyages runs deeper than some editorial caprice of Georg Rollenhagen fils. For one thing, we know that Kepler was reading other books, which can be shown to have had a significant influence on the content of the Somnium. The 16th-century Swedish ecclesiastic and polymath Olaus Magnus's Historia de gentibus septentrionalibus (Rome, 1555) seems to have been especially important. Magnus refers to the "artem magicam apud septentrionales populos" ('the magical art among the northern peoples'; Bk. 3, Ch. 15, 117), a phrase which Kepler subsequently picks up virtually unaltered. According to Edward Rosen, "Kepler's use of the words montis... ignivomi to describe Mt. Hekla as the site of Purgatory strikingly echo" the terminology of the 1599 abridgement of Magnus's Historia.
In fact, when we look at the entire passage on Iceland to which Rosen refers, we can plainly see that Kepler is relying on Magnus not just for his picture of Mt. Hekla, but also for much of Fiolxhilde's later account of the Icelandic spirits that, as she puts it, 'love the shadows'. Allow me to cite the passage at some length:
Islandia terra est subiecta polo Arctico, vento praesertim Circio opposita, ac mari Glaciali propinqua: atque ob id dici meretur terra glacialis, seu ultimum Tyle, nulli veterum non celebrata... Pro maiori parte montosa, & inculta, praesertim versus plagam Septentrionalem, ob austera spiracula praedicti venti Circii, qui nec frutices elevare permittit. Insula est insolitis miraculis praedicanda. Rupes etenim, sive promontorium in ea est... quod instar Aetnae perpetuis ignibus aestuat. Ibique locus esse creditur poenae, expiationisque sordidarum animarum. Illic nempe spiritus, seu umbrae, comperiuntur se exhibentes manifestos humanis ministeriis submersorum, sive alio violento casu enectorum. Spectra sese offerunt congressibus notorum hominum tam manifesta, ut tanquam viventes accipiantur ab ignaris mortis illorum, data dextra. Nec deprehenditur error priusquam disparuerint umbrae. Incolae plurimum praesagiunt fata principum, quidque remotius in orbe peragitur, revelationibus apparentium spectrorum non ignorant, prout inferius cap. de magicis praestigiis lucidius aperietur.
The land of Iceland is situated beneath the Arctic pole, exposed above all to the northwesterly wind, and close to the glacial sea: and this is why it merits the name of 'Ice Land', or 'Ultima Thule', celebrated among the ancients... For the greater part it is mountainous and wild, above all towards the northern coast, for the harsh blast of the aforementioned northwesterly wind does not permit anything to grow. The island is known for its unheard-of wonders. And indeed there is a rock, or promontory, in it... that seethes with perpetual fire like Mt. Etna. And it is in that place that the punishment and expiation of besmirched souls is believed to occur. One finds there spirits, or shadows, that are manifested through the conjurations of men. The specters of drowned men or of men who died a violent death appear so real that, like living men, they shake hands with men who don't know they are dead, and the error does not reveal itself until the moment when the specters have disappeared ('De apparentibus umbris submersorum', Bk. II, ch. 3, 62).
It is, in short, even more clear than it had been for Rosen, when we look at this entire passage, that Kepler is greatly indebted to Magnus, and not just for his appreciation of Hekla as a point of entry to a nether-realm, but also as an inspiration for the idea that Iceland is a place of commerce with spirits.
But we are still missing something very important: the sort of transit Kepler describes as happening from Iceland is not a descent into a dark nether world. Iceland as a point of entry to the center of the world is of course a familiar trope, as for instance in Jules Verne, whose 1864 Voyage au centre de la Terre begins with the deciphering of a supposed runic manuscript that describes where the volcanic points of entry lie. The idea that Iceland possesses such points of entry appears to be rooted both in the geology of the mid-Atlantic range, as well as in Norse mythology itself. Thus in the Gylfaginning, which makes up the first part of Snorri Sturluson's 13th-century Prose Edda, there is mention of the Gjallarbrú, a bridge across the River Gjöll into Hel (a place that is what it sounds like). But the Gjallarbrú has its symbolic inversion in the Asbrú, or Æsir Bridge, which does not descend into the bowels of the earth, but rather extends from this world into Asgard, the realm of the gods. An alternative term for 'Asbrú' is 'Bifröst' (as found in the Prose Edda) or 'Bilröst' (as found in the Poetic Edda). These terms can all serve, even in contemporary Icelandic, as poetic or elevated ways of saying 'rainbow': a bridge that leads to the æsir, the gods. Thus in the Grímnismál, a component of the Poetic Edda, we find a concealed Odin revealing cosmological knowledge to Agnarr, who learns that the Asbrú is the best of all bridges, even if it is constantly burning up those who cross it:
Kǫrmt oc Ǫrmt
oc Kerlaugar tvær,
þær scal Þórr vaða
er hann dœma ferr
at asci Yggdrasils;
þvíat ásbrú
brenn ǫll loga,
heilǫg vǫtn hlóa.
Or, in Benjamin Thorpe's 19th-century translation: "Körmt and Ormt, and the Kerlaugs twain: / these Thor must wade each day, / when he to council goes / at Yggdrasil's ash / for as the As-bridge is all on fire, / the holy waters boil."
There is no sign in Magnus, or in any of Kepler's known sources, of an understanding of the commerce with spirits in Iceland as initiating a flight rather than a descent. The Historia shows little interest in the arcana of Norse mythology, and if it is to this latter that we ultimately wish to trace Kepler's Somnium, from the lunar voyage back to the burning bridge of the Æsir, scholarly prudence requires us to concede that we have yet to discover the path by which this trope was transmitted from medieval Iceland to early modern Germany.
What can be established is that in Iceland, as a result of the coincidence of its geological features with its geographical situation, we find the confluence of two widespread and ancient mythological tropes. Its vulcanism marks it out as a site of access to Hades or Hel, while its Arctic location causes it to be associated with another world not beneath but above or beyond this one-- we see the Arctic being used for exactly this purpose in, e.g., Margaret Cavendish's conception of a 'blazing world' that attaches to this one at the North Pole, and in the many subsequent iterations of the idea that 'Nova Zembla', as it was long called in quaint misunderstanding (and where, good maps show, there is a river called 'Nabokov'), is a site where the ordinary laws of nature break down. Remarkably, both of these tropes are found already in the medieval Eddas, narratively represented by the two bridges, Asbrú and Gjallarbrú, burning and scintillating, dissuading the common lot of men from crossing them.
My piece in the New York Times Stone series on February 10 received an unusually large volume of responses, both in the comments section there and in the form of e-mails to me and posts on a number of blogs. Many commenters were generally supportive of my line of analysis, and many also raised legitimate points of criticism. Many others still raised illegitimate points of criticism. Let's get to those first.
Roughly 10% of the comments I received came from bona fide racists. Perhaps I should pause at this point to define my terms. By 'racist' I mean someone who (i) believes that racial categories map onto real, biologically significant subdivisions of the human species; and (ii) that these various subdivisions are characterized by greater and lesser physical and cognitive abilities, and/or by different temperamental or emotional profiles. Many of the racists who wrote to me cited genetic studies (e.g., this) that show, in one way or another, that traits do indeed cluster in populations. By a curious coincidence, within just a few days of my piece, Nicholas Wade --who has also appeared recently with a sympathetic appraisal of Napoleon Chagnon's Hobbesian éloge to the Yanomamö-- published an article in the Times trumpeting the great antiquity of the mutations that led to what are held to be typically East Asian traits. This article was in turn cited by many commenters as the scientific proof that 'race deniers' do not know what they are talking about.
But I never denied that traits cluster in populations. There is a very simple, logical point that always seems to get shot to hell by the people who are poised and ready to fire back against 'race deniers' like me with the supposedly heavy artillery of genetic research. That simple point is that people who use such scientific research as a way of strengthening their case for the reality of race are begging the question. And I mean this not in the incorrect sense in which we now often hear this phrase from Republican politicians and from know-nothing advertising campaigns. I mean it in the sense of petitioning the principle: they are assuming at the outset the reality of the very thing they're supposed to be trying to prove. Yes, traits cluster in populations, but it is only if you have already presupposed that the human species breaks down into real subdivisions that you will subsume new information about such clustering into a racial schema. Otherwise, what you will notice are all the salient respects in which the population that is the locus of such clustering does not amount to a discrete kind. For one thing, it is entirely permeable at its boundaries, and thus has nothing in common with the isolated reproductive communities that constitute biological species on analogy to which races are, consciously or unconsciously, modeled. For another, trait clusters tend to be noticed in populations that were already of interest to us as purported races for initially non-scientific reasons. Take the example of the new discoveries about 'East Asians'. We assumed at the outset that there were such people, constituting a real subdivision of the human species, and then we went in search of their distinctive features. Lo, we found some in the sweat glands, hair follicles, and breast size of females. But would we find the same traits clustering in, say, Tungusic peoples, or the Chukchi? 
They are East Asian too, after all, Eastern Siberian, to be precise, and it is a contingent fact about human history that they are outnumbered by, say, Han Chinese. If we sample all of the peoples of the world, rather than the ones that are salient to us on pre-scientific cultural and historical grounds, we will notice that our conception of where the racial boundaries lie is rooted in our pre-scientific interests, and only subsequently filled out, as best it can be, by new genetic research.
This is particularly evident in the local US context, in which genetic and medical information about African-Americans becomes naturalized and universalized in such a way as to purportedly tell us something about a significant subdivision of the entire human species, one that was formerly called 'Negroid'. But of course such information tells us nothing of the sort: it is useful for diagnostic and therapeutic purposes for doctors in the US to know that someone is 'black', but this in no way implies that the same information about a person in the Kalahari, Ethiopia, or even (depending on the culture and period of history you inhabit) New Guinea or Australia could be used in the same way. The information is of strictly local interest, and yet it calls upon a global system for subdividing the human species, one that places Khoi-San, Ethiopians, and African-Americans, at least, in the same quasi-natural kind.
And this point needs to be emphasized in order for non-racists to have at their disposal the simple and obvious, but too-often neglected, response to the racists who invoke supposedly scientific evidence concerning the superior performance of black athletes in track events, or the inferior performance of black students on standardized tests. It is seriously unlikely that a mass-scale standardized test of everyone who is placed for historical and non-scientific reasons in the folk category 'black' could ever be carried out in a sufficiently rigorous way to warrant a conclusion of the sort: "'Blacks' perform worse on standardized tests than 'whites'." Again, what this would involve is devising a test that could be given to Namibians, Ethiopians, Haitians, etc., and whose results could then be compared with those of the same test as given to Norwegians, Circassians, Scotch-Irish West Virginians (who counts as 'white' in a given era, and why, is no less perplexing than who counts as 'black'). This will never happen, but that doesn't matter to the racists, because anyway what they really mean when they invoke such tests to ground claims of racial difference is that here, in our local context, there is such a difference. But race is supposed to be global, natural, a result of evolution, etc., while local differences are obviously only the result of civil history and culture. And this is the great inconsistency of the pseudo-scientific concept of race: that it is reaching too far too fast, invoking a global natural order to which claims about local 'racial' difference never accurately apply, and failing to notice that the local differences admit of a much more parsimonious explanation than the one that has to move all the way down to the level of biology.
There is a connected point to be made about evidence from history. Many racist commenters echoed David Hume in their suspicion that the accomplishments of non-white people have been exaggerated for the sake of 'political correctness'. Thus the racist Joel Eidsath sent, for example, this sarcastic dismissal of the legacy of Anton Wilhelm Amo:
Your contribution to the rather burgeoning field of "race does not exist" articles is much appreciated. The "there was once an African of decent intelligence" sub-category of these sorts of tracts can always use another entry! [Sub-sub-category, reference to the "black" St. Augustine.]
Never mind, now, that my point was not principally to trumpet Amo's contribution to philosophy, so much as to observe the way his work was received by his contemporaries. Eidsath's main concern here is that anti-racists are looking a bit too frantically for historical evidence of the sort that would refute Hume's comment. But even if you are skeptical of the Black Athena thesis in its particulars, there is still no avoiding a general corollary of it: in order for a group to be perceived as substandard or as high-performing relative to another group, it must already be conceptualized as a group, and there is no good historical evidence that in the ancient world there were any categories that even loosely mapped onto our own 'black' and 'white' (and even these differ from region to region when taken synchronically). And if you wish to dispute that, then it might be helpful to move away from our own local concern about black and white, where our emotions are perhaps clouding our judgment, and consider a case of perceived racial differences that lies entirely off of our own historical and geographical radar (and by 'our' I mean 'most readers'). Anyone who has intimate experience of the treatment of the Dalits in India can affirm that they are in a position very similar in structural respects to that of African-Americans. There are plenty of non-Dalit Indians who will tell you that, as history and statistics and good common sense show, Dalits are plainly temperamentally different, more prone to criminal behavior, etc. In other words, Dalits are being conceptualized racially by the people who are discriminating against them, even if, from a distance, for people who don't know anything about India, there is no perception of any racial difference at all.
One commenter wrote to tell me that my mention of the case of the Tutsi and Hutu weakened my argument, since in the Rwandan context these two groups perceive one another in racial terms. But what supposedly weakened my argument in fact did the opposite, since one of my principal points was precisely that race is the naturalization, or projection into the biological sphere, of what is in fact only a matter of local history: a history that might involve two groups in which genetic traits cluster differently, but can also involve two groups in which the supposedly essential or 'racial' differences are entirely invisible. So, to get back to Amo and Augustine, even if it were to turn out that no one we would consider 'black' ever made a significant contribution to the human endeavors that have historically been valued in Europe, this would still not enable the racist to get around the problem that who is considered racially distinct from whom --who is considered Dalit or non-Dalit for example, or who is considered black or white-- is always a local, contingent, and unstable affair.
While many racist commenters wrote to express their disapproval of my insufficiently racist position, another significant portion of commenters judged that my position is in fact racist. Most of these commenters judged that I misunderstand the American situation, and some supposed that this is because I am not myself American-- I am, though I have lived outside of the country for a decade. Sometimes it seems to me that being an American looking at my country from a distance frees me up to think and express views that are rather harder to arrive at from within the belly of the beast. It also becomes easier, I think, to examine American history comparatively, to resist the belief that that country has a Sonderweg that sets it apart, and to look at things like the history of race relations in a way that places this history alongside comparable --while, obviously, non-identical-- histories elsewhere in the world.
I believe that slavery and its aftermath constitute the defining legacy in the formation of the American identity, and I also believe that much of the difficulty of talking about and understanding 'race' in the US is a side-effect of a largely praiseworthy attempt to heal this trauma. To some extent since the end of the Civil War, but more importantly since the beginning of the Civil Rights movement, there has been an effort to overcome ethnic divisions in the US by construing all Americans as Americans simpliciter, and to some extent denying that descendants of slaves on the one hand and immigrant European beneficiaries of white privilege on the other constitute distinct cultural groups at all. (The denial is not as severe as, say, the denial of cultural distinctness in the modern Turkish republic, which has led to the denial of the very existence of Kurds, now conveniently redescribed as 'Mountain Turks'; in the US case, a certain amount of cultural difference is allowed to flourish in traditions of cuisine, music, etc., as long as it flourishes way below the threshold at which it might threaten the unity of the nation-state). But social reality forces us to acknowledge cultural difference in some way or other, even if the political legacy of integration forbids us from talking about it as cultural difference. And so we are left in a peculiar situation, in which it can easily appear more racist to speak of two cultures and no races, than to speak of one culture in which any lingering perception of difference is to be explained by appeal to the supposed fact that this one culture is made up out of two races. In other words, we've chased difference out of the political and cultural dimension, and it didn't have anywhere else to go but down to the biological dimension. Where, I repeat, it plainly does not belong.
In this respect, at least, the US really does have a Sonderweg. I cannot think of any other multicultural society that has dealt with its internal differences in a similar way. This imperfect response to the legacy of slavery is, I would also add, itself a part of the legacy of slavery: the fact that 'race' suggests itself as a category for the interpretation of cultural difference is a result of the fact that we inherited this category from a system that found it useful for the preservation and enforcement of structural racism.
One final note about cultural difference: Americans are to some extent justified in disputing the claim that there are 'two cultures' constituting their society, since obviously there are very many more than that. African slaves came from many cultures, and so did European immigrants (and, later, immigrants from elsewhere). But by force or by elective affinity, everyone who arrived was filtered into the one or the other (or became stalled as a problem case, a tertium quid, as, for a time, the Irish, Jews, and, more recently, Latinos). As many scholars have noted, ethnogenesis is almost always a matter of an initial political union. For example, in his monumental History of the Goths, Herwig Wolfram shows convincingly that the ancient Germanic people originally came into being as a result of elective tribal confederation. In other words, Germans are creoles too. All cultures, in their origins, are creole, and it is in no way to deny the distinctness of pre-modern African cultures to concede that, once brought here by force and compelled to adapt to new circumstances, over the centuries a distinctive African-American culture emerged. There is such a thing, like all cultures a loose, permeable cluster of family resemblances, and we don't need to drop down to the level of 'race' in order to make sense of it. History will do just fine.
The most interesting and formidable criticism of my article (particularly as formulated by Kenan Malik, drawing on his own earlier work on the subject) had to do with what was initially a throw-away claim in the article, about the contrast between Johann Gottfried Herder and the majority of his contemporaries. This claim then came to frame the entire article when the editors chose a title for the piece that made reference to the Enlightenment. (Why don't authors get to choose their own titles in the newspaper business? Where did this practice originate? Why don't more authors protest against it?) But my principal aim had never been to defend the counter-Enlightenment against the Enlightenment, and I am certainly aware, as was pointed out by a number of critics, that at the center of the Enlightenment there were many thinkers, notably Condorcet, who mounted laudable defenses of racial equality, fought for abolition, and so on. What's more, it is certainly fair to see the Haitian Revolution of 1791–1804 as an inevitable extension of the revolutionary spirit that had first been sparked in Europe largely by Enlightenment thinkers' promotion of liberty, equality, and so on. Many, not just Toussaint L'Ouverture but also a number of European sympathizers, did in fact suppose that the only legitimate place for the boundaries of equality to be drawn was around the entirety of the human race, rather than around a mere part of the human race deemed in advance to be, as the saying goes, more equal than others.
But still, but still, if we are to attempt to spell out some precise commitments that we may properly identify with that nebulous notion of 'the Enlightenment', it seems to me that at the core of this movement, or Zeitgeist, or whatever you might wish to call it, is the idea that history is progressive, and that Europe is, as of the middle of the 18th century, further along in the course of progress than the rest of the world. For their own good, Enlightenment thinkers supposed with near unanimity, non-European peoples must be brought into the fold of European history in order to be able to ride the wave, so to speak, of historical progress. This is abundantly clear in Kant, who supposed that the lives of South Sea islanders, to the extent that they are spent outside of the fold of history, are literally not worth living. And it continues to echo loudly in Marx, who maintained that the British installation of industrial looms in Bengal might have increased the misery of Bengali weavers, for the time being, but at least it did them the service of moving them into a historical position from which their lives could begin to improve.
On this understanding, there simply is no room for indigenous voices at all, and I do not know of a single thinker centrally associated with the Enlightenment, including even Condorcet, who is able or willing to make room for such voices, to acknowledge that a life entirely outside of the grand unfolding of historical progress initiated by European civilization might nonetheless be worth living. I see the readiness to notice the intrinsic interest and value of sub-historical or extra-historical folk-ways as characteristic of a form of thinking that was from the outset self-consciously poised against the Enlightenment. I see Herder as one of the exemplary figures in this history (though I see interesting anticipations of it already in Leibniz), and I see it as echoing through the much later work of Franz Boas, Zora Neale Hurston, and many, many others. This is in no way to deny the laudable strains of progressivism in the Enlightenment, but it is to question whether progressivism, or the belief that the human good always consists in progress, is the exclusively laudable approach to the problems human diversity poses.
[This was originally scheduled to come out in another venue. I went through long, difficult discussions with the editors, and ultimately, because it was so important to me in this case that the words remain my own, I decided to pull it (I'm usually very flexible with editors!). I sat on it for a number of months, but with the coming of the New Year it dawned on me that it might set me off on the right foot to release it here.]
Different people, different closets. I don't quite know how to say it delicately so I'm just going to come right out and say it. I believe in God. Apart from periodic spells of foolish pride, I have believed in God all my life. Even during these spells, I did not so much cease to believe, as turn my back on what I believed.
As far as I'm concerned there cannot really be any concern that God does not exist. Even to see God's existence as a problem is to misapprehend what is at stake, since God just is the love, sweet and radiant, that charges through every drop and leaf and mote of the creation, always ready to be felt by anyone who is ready to believe.
God is not male, and I cannot say 'he', however tempted I am to remain with the conventions of my beautiful language and its beautiful tradition of devotional writing. But this is a relatively trivial corollary of the more important point that God is not a being, and so also neither a monarch nor a father nor a ruler of any sort. God is love, and I can keep my love of God and have my anarchism too.
Indeed, as I see it the two not only can but must go together. To believe in God, and to feel the divine love that charges through all of creation, is precisely not to bow down, but to rejoice. The great travesty of the history of religion, and the victory of its enemies, has been to bend the idea of God to the legitimation of earthly rulers, to convince people that God is like dad, or the king, or the tyrant, but more so, and that, conversely, these mundane potentates are little reflections of God. There is none of this in my love of God, which shines out of my encounter with creatures, God's creatures, themselves having no power other than the power of their own growth and integrity, their own life, which is itself an expression of the same joy in God as my own.
To experience this joy is to know that the states of my soul and the states of infinite nature always fit, that each is an expression of the other, and so, that my death cannot be the end of anything, since nature, of which my soul was a modulation, a beautiful if dirty outcropping, will keep doing what it always does, and I, now only more obviously a convolution of nature, will flow along in streams and breezes and cosmic rays and will no longer be held up on this concern about the 'I' at all, about its finitude and its mortality. (I have just expressed a version of what is sometimes called 'monopsychism' or 'the Averroist heresy', and it is the first of a handful of heresies to which I will assent here.)
For some centuries now, no small confusion has arisen from the fact that we talk about belief in God, rather than love of God. The two amount to the same thing, but the first of these expressions, at least since the beginning of the modern period, pushes us willy-nilly into the field of evidence and argumentation, a field where the standards of commitment have nothing to do with the issue at hand, and so not surprisingly, though for poorly understood reasons, belief in God cannot but be a failing proposition.
But start from love, start from joy, and the demand for further evidence vanishes. To continue to make it would be like demanding to see the hormones that cause an erection before accepting that there is such a thing as eros. It would be vulgar. It is vulgar, every time we hear it from the puffed-up fools who believe they are defending the honour and integrity of something, which they also do not understand, but which they call 'science'. Science has more often than not been driven by what its practitioners have experienced as joy and wonder before God's creation. This is a historical fact, and even if you are one of the puffed-up fools who thinks belief in God deserves nothing but mockery, you cannot change this fact.
Too often, God talk is set over against science talk, as if the one were concerned with fiction and illusion, the other with truth and reality. But this distinction presupposes an understanding of all talk as principally concerned with denoting entities in the world, and so takes those varieties of talk for which no entities are to be found as inferior or off-target. But there is another method of dividing up the different ways people talk, on which what the aforementioned fools think of as science finds itself in the undistinguished company of insurance claims and warehouse inventory lists, while God by contrast shares space, in the universe of human meanings, with music, metaphor, poetry, and dreams.
Everyone understands when God comes up in the lyrics of a Pixies song, for example. One song says, insistently, 'God is seven!', as if recalling some forgotten Pythagorean numerology, or, just as likely, deploying a Chomskyan example of a nonsense sentence. When I hear that song, I hear something, born though it may be of irony and the exigencies of rhyme, that I can only hear as a crying out to God. Countless rock songs could be substituted here to make the same point, which is that no one who was into such a song up to the moment where God came up will suddenly throw off the headphones and declare that there has been a mistake.
In his poem 'The Angels', Rainer Maria Rilke attributes to God a pair of 'wide workman's hands' which move through the pages 'of the dark book of the beginning'. Does God have wide workman's hands? No, and yet, if you ask me, Where is the truth to be found, in Rilke's poem, or in my 2011 tax returns?, I will answer without a second of hesitation: the poet is telling the truth, and of the two sorts of text it is only the first that can even be said to be engaging with something important enough to be called 'true' or 'false' at all. The turn to metaphor is not a turn away from truth, but a response to the difficulty of its expression.
But metaphor is not always what conveys the truth best. When Al Green sings the lines of the old gospel song, 'My God is real, for I can feel him in my soul', no proposition could be more direct, and only a puffed-up fool would seek to tell him he is mistaken. Just listen to him; he's not mistaken. And we can know this because of the depth of the feeling that motivates its expression, so much like the erotic desire we might find him singing about in the tracks that precede and follow 'My God Is Real'. Some of the most incontrovertible expressions of belief in God move in just this way, between RnB and gospel, so to speak, between different modalities of love. In this respect, Al Green positions himself in the same lineage as Teresa of Avila and St. John of the Cross. The expression of their love of God is the more compelling for their refusal to cordon it off from carnal love.
Much invocation of God in music and poetry is not declarative at all, but vocative; it says not that God is so-and so, it says 'Oh Lord', 'Oh my God'. (Sometimes people even say these things when they're fucking.) The puritans will tell you that this is 'swearing', that it is taking the name of God in vain. But listen attentively, openly, in the spirit of charity, and you can only understand that it is not in vain, and that we have no a priori rule to distinguish between swearing and praying.
Are the lyrics true? (Is God seven?) Are they more true or less true than a correct inventory of the items held in a warehouse? This is an argument I am not interested in having. But I am certain that music, poetry, all those contexts in which everyone understands, whether they 'believe in God' or not, that it makes sense to invoke God: these are not to be dismissed as illusion, shadow, deceit, nor yet as distraction, playtime, divertissement, kid's stuff. An account of human life that does not include them would not amount to much.
But enough about wrong-headedness; this is not a debate, but a confessio fidei. I am proclaiming what I believe, not beating down opponents. Opponents and sympathizers alike are by now probably saying, God, God, fine. But which God? And I answer, with even more trepidation than I felt in making the initial profession of faith: the Christian God.
But what is Christian here? So far I've only spoken in the sort of vague generalities that a few hundred years ago would have found me accused of Spinozism, which is to say crypto-atheism. They were wrong about Spinoza, then, too. Spinoza believed in God, and as far as we can tell his mature philosophy was in no small measure the product of a deep interest in radical Protestant reconceptualizations of Christology. To say 'God or nature' only appears to be a reduction of God if you already hate nature; if you do not hate it, you will understand Spinoza's formula not as a reduction, but as an exaltation. Too often, to say, with Einstein, that one 'believes in Spinoza's God', is interpreted as a way of distancing oneself from belief in the proper sense, a way of sounding respectably modern while also expressing some concern that our various insurance claim forms and inventory lists are inadequate as sources of meaning. It is the high-brow way of saying one is 'spiritual but not religious'.
But I want to say something unmodern here, something that would have made sense in 30 AD. I am a Christian because I affirm the core message of the Gospels, which, I take it, is that God is love, and that therefore a life that aspires to love of all of creation is a life lived in accordance with God's law. This interpretation of what I have called the core message is one that emphasizes what is often called 'the social gospel', and that aligns itself with thinkers like Tolstoy who see Christ as a social revolutionary. From love flows the principle of unconditional forgiveness, and from this flows a commitment to pacifism which underlies all of a Christian's political commitments. To be a Christian is by definition to abhor war. It is also to abhor conventional morality, and family values (see Luke 14:26). It is not only unmodern, but untimely in any era, in 30 AD as much as today.
The only adequate fulfillment of the core message of love, for me, is the one that, as I have already suggested, extends this love to all of creation, and not only to one's 'fellow man'. This is a challenge, since the Christian tradition, and indeed for the most part the Gospels themselves, is overwhelmingly anthropocentric. Animals come up parabolically, as representatives of singular human virtues or vices (sheep good, goats bad, etc.), but seldom as works of God in their own right. This absence is somewhat compensated in the lives of saints --Francis of Assisi, Seraphim of Sarov, Theodora of Sihla-- who retreated to the wilderness, who learned how to talk to the animals, and who, we may thus infer, understood that animals are an expression of the same divine order and divine goodness as we are. But one must go looking for these tendencies in the Christian tradition in a way that one need not in, say, Buddhism. You might now ask, Why not just be a Buddhist, then? To which I would reply: Quit joking around. I can't be a Buddhist. (I could pretend, I suppose, like many Westerners have, though they usually get tired of doing so before long.)
I confess I do not have much of a feeling for the meaning of the crucifixion, and for the cult of death that sprang from it. My part of the four Gospels comes early on: the Beatitudes, the Sermon on the Mount; they lose me with the crucifixion. It is not that I don't believe in or understand sin: on the contrary, I find the picture of God's creation as always charged through with grace but also simultaneously tainted by sin entirely compelling. Eating, for example (as the Chandogya Upanishad, by the way, vividly conveys) is a transgression against what is eaten, against that portion of living and striving nature that you have usurped for your own living and striving, but it is by God's grace --by 'saying grace', as some wholesome folk do-- that this usurpation can be made right. Sin is pursuit of self-interested ends as if you deserved the fruits of this pursuit simply in virtue of who you are, as if God had nothing to do with it. Sin is pride, turning away from God in the belief that you have what it takes to make it on your own (one telling and somewhat archaic synonym of 'atheism' in German is Abgötterei, which translates as something like 'away-from-Goddery', 'the condition of turning from God'; atheism here is not a doxastic state about the existence of something or other, but rather an emotional state, a stubbornness). So I get sin, but I don't get what it means to say that Christ died for my sins. I just don't get it.
I know that I am picking and choosing, and that by many standards I've failed to meet the requirements of being a Christian. Many, like those with the banners at the sports events, take John 3:16 to contain the core message of the Gospels. I also claim to know what the core message of the Bible is: love and forgiveness (1 John 4:8, 1 Corinthians 13:13, Matthew 5:38), and I claim that there is much extraneous stuff too, which can have little to do with our understanding of the essence of Christianity: the rules concerning marriage, the disregard for animals, the cosmic significance of crucifixion. How do I justify my picking and choosing? Well, who wants me to justify it? The hoarse-voiced goon at the sports match shouting about how Jesus Christ died for my sins? What concern is he of mine?
Since the Paris World Fair in 1900, the Galerie d’Anatomie Comparée of the Muséum d’Histoire Naturelle has been housed together with the Galerie de Paléontologie, which features the fossils of extinct creatures: the dinosaurs and outsized Pleistocene mammals that so enrapture the children, and that often compel them to pose some of their first philosophical questions about the nature of existence (dinosaurs do not exist, yet they are not, for that reason, fictional; how is this possible?). I feel more at home in Comparative Anatomy. It reveals a static, Polaroid-like slice of nature as it is at present. But nature is not really like this. Nature is always unfolding, and it is the Gallery of Paleontology that reveals the effort of the Muséum's founders (most of whom, again, were creationists) to take into account the entire film of it, so to speak, rather than only the single still that the present provides.
Entering the Gallery of Paleontology from the north staircase, we immediately encounter three large dinosaur skeletons. All of these turn out, on closer inspection, to be casts made of metal, rather than skeletons of bone. The kids running around don’t seem to care. Successive waves of them flow by, all declaring that they are in the presence of ‘dinosaurs’: not bones of dinosaurs, let alone models of bones of dinosaurs, but dinosaurs. One of the three models, of the peculiarly named Allosaurus fragilis, is the only bipedal carnivore on display, the only one approximating the Platonic form of the dinosaur established for all time by the Tyrannosaurus rex. Another of the three is the Iguanodon bernissartensis, a species whose gait, and the proper display of whose bones, has been the source of sustained controversy. Does it walk upright? Does it walk on all fours? The masses of six-year-olds are eager to know. Finally, the largest of the three is a cast of a Diplodocus carnegii, a brontosaurus-like dinosaur, one of the lumbering, gentle vegetarians, given to the Muséum d’Histoire Naturelle in 1908 as a gift from Andrew Carnegie, who was at the same time building up one of the world’s great natural history museums in Pittsburgh, Pennsylvania. A year earlier, Carnegie had been one of the principal donors to the French Muséum; like Jefferson to Buffon before him, the American continued to think of Paris as one of the great centers of natural history.
We also find the holotype of the Sarcosuchus imperator, not a dinosaur but rather a gigantic --as in, characterized by gigantism-- extinct relative of modern crocodiles dating from the early Cretaceous. A holotype is a paleontological specimen that is used for the original species-defining description. This means that at the Gallery of Paleontology one can see the Sarcosuchus imperator itself, the standard against which all subsequent finds are measured. In this respect modern paleontology retains a trace of Platonism, to the extent that there is an exemplary measure of what it is to be a member of a given kind. Yet rather than casting its exemplars into some otherworldly realm of transcendental forms, it locates them in actual fossils, than which nothing can be more concrete.
There is a cast of a pteranodon hanging nearby, and there are glass cases with a Pelagosaurus typus and a Lystrosaurus murrayi. These names, I am finding, are exhausting and unevocative. Other than a few of the best-known --T. rex, bronto, sabertooth, etc.--, for obvious reasons extinct animals do not have popular names parallel to their binomial nomenclature. One of the most intriguing things about the ground floor of the gallery is the way the Latin binomials variously complement and contrast with the names that have bubbled up spontaneously for the animals in the vulgate (taupe, tatou, etc.). This is a possibility paleontology lacks, since for the most part it studies creatures that departed before there was language in the world.
There is a composite Cryptoclidus oxoniensis, pieced together from several representatives of this marine reptile species dug up in Oxfordshire. There is an Ichthyosaurus tenuirostris not yet removed from the rock in which it was found, and a Tarbosaurus bataar, discovered, as its name implies, in Mongolia, by Pierre Teilhard de Chardin, the Jesuit priest and paleontologist who believed that evolution is a testimony to God's greatness.
Before coming to the half-way point of the main floor of the exhibition hall (the point at which, one floor below in the Gallery of Comparative Anatomy, the display of terrestrial mammal skeletons gives way to the Cetaceum), we begin to find remains of the great mammals that thrived throughout the last glacial period, particularly the skulls of gargantuan Rhinocerotidae. Only one of these is known by a folk name, the ‘woolly rhinoceros’, which is to say the Coelodonta antiquitatis. These various skulls are mixed in with a cast of a Triceratops, and other remnants of the Cretaceous period, which preceded the Pleistocene of the great mammals by 65 million years or so. But no one seems to notice the anachronism, and least of all the children, who are running about indiscriminately labelling everything they see a ‘dinosaur’. And it is hard to blame them, for the Rhinoceros megarhinus, the Acerotherium persiae, and most of all the Arsinoitherium of the early Oligocene, with two massive blades for horns, do indeed appear nearly indistinguishable, in mass and in monstrosity, from the saurians. Sometimes, even their names are misleading, as is the case with the Dinoceras mirabile, an Eocene mammal whose name means ‘wonderful terrible-horn’. This creature is like the dinosaurs, in name, only to the extent that it is deinos, terrible, but its ceras has nothing to do with a saurus.
We are now half-way through. The second segment of the hall, directly above the Cetaceum, begins, from left to right, with a bloc ossifère, a fossil-bearing stone block, with scattered vestiges of the ancient horse known as ‘hipparion’; then a glass case with dodo bones (originally called by Linnaeus a Didus ineptus; was it out of respect for the departed that this name was subsequently changed?) on top of a wooden cabinet with locked and unlabelled drawers; then the complete skeletons of two extinct fossil species of manatee. Behind these, Cuvier’s own complete skeleton of a Megatherium americanum, which translates simply as ‘great American beast’: a fitting name for this Pliocene ground sloth at least as large as any African elephant. It is a sort of prototypical beast, a hungry mound of fur, without many distinguishing features beyond this. Even the name ‘great American beast’ seems too specific for it.
To its left is the complete skeleton of a Hipparion mediterraneum, and behind it a complete Equus hemionus (a ‘half-ass’) and a cast of a Hippidion principale. The horse lineage does not seem to have exhibited the diversity of the rhinoceroses or even the elephants: all equids seem, more or less, to be horses. There is another sort of ground sloth behind them, and to their right two skulls of the Pleistocene Hippopotamus major, and also a complete Glyptodon asper, to which we will return shortly. There are scattered skulls, jaws, and tusks of extinct proboscidians; to the left a composite skeleton of an Archaeobelodon, and to the right a rather small skeleton of a juvenile Mammuthus primigenius. In the middle rear of the hall, directly above the fin whale one floor below, is the Pleistocene pièce de résistance, a complete Mammuthus meridionalis skeleton, a behemoth extracted out of deepest, coldest Siberia. Even its skeleton seems woolly, somehow. Surrounding the mammoth are various casts, of a saber-toothed tiger (known in the nomenclature by the evocative name of Smilodon, which unfortunately has nothing to do with smiling); and of a Diprotodon australis, a sort of rhinoceros-sized wombat. There is a cast of a male Irish elk, and a skeleton of a much smaller female.
In the middle, behind the great mammoth, there are two glass cases. One contains the skeletons of three cave bears (Ursus spelaeus), two lions, a wolf, and a hyena, all displayed, though skeletons, in the middle of some species-specific activity (rearing up, scavenging). The rearing cave bear is the only display in the Gallery, on either floor, that has been able to induce in me a frisson of fear. Behind these creatures is another glass case, filled with extinct ‘ratite’ birds (giant, flightless, ostrich-like), including the shockingly large Aepyornis maximus, along with five of its watermelon-sized eggs. These do not provoke fright, as the cave bear does, but rather a much more familiar feeling from other encounters in the museum: a sort of wonder mixed with queasiness, a sort of fascinated nausea. It is peculiar that there is no word for this.
As at the Gallery of Comparative Anatomy, here as well there are glass cases surrounding the exhibition floor. In the Gallery of Paleontology, though, they are much more haphazard; the numbering, as one moves clockwise around the hall from the north entrance, is odd, going up to 115 at the other end of the hall, at which point one arrives at an even 116 and begins to count down from there to cabinet number 2. Many of the cases are empty, and many contain models made of styrofoam, felt, and what look like fishing lures, the sort of re-imaginings of the Jurassic one might find at a high-school science fair. There is extensive information, often presented on what looks like construction paper, about various excavations that took place in France in the 19th and early 20th centuries. Cabinets 59 through 67, running alongside the hipparion skeleton on the main floor, tell of the evolution of equids; and on the opposite side cabinets 106 through 98 run through the history of primates, and feature a cast of the Lucy skeleton, whose original is on permanent display in Addis Ababa. Cabinet 86 features the still furry and gristly leg and face of a Siberian mammoth; cabinet 18, the still scaly fossil of a Lepidotus laevis fish. Cabinet 10 teaches us, in cut-out construction paper, that among fish it is a mark of advanced evolution to have fewer and softer bones; as the French puts it, ‘ossification regresses’ in fish evolution.
And there is much that I missed besides: the skull of an Odobenocetops, a sort of tusked Pliocene whale; a display on the neoteny of stegocephalids; the Ichthyosaurus platyodon skull found by Mary Anning, Cuvier’s English correspondent from Dorset, who from 1826 until her death ran a small shop, Anning’s Fossil Depot, near the Blue Lias cliffs in Lyme Regis, where she sold devil’s-fingers and the fossils of 'verteberries': her own spontaneous folk term, a sort of diminutive, for ‘vertebrates’.
For more information about Rob Carter's 'Faith in a Seed', go here.
I was slow in noticing the wonder of plants, and in this I do not believe I was unusual. When one is young, it is the furry things with faces, the creatures that dart about looking for food, driven on by their appetitive souls, that attract attention. At this stage, the plants are only the stage-setting, the animals the protagonists.
But in my case the inattention to the vegetal order continued well past my first youth. I long took Aristotle's greatness as a philosopher, for example, relative to that of his disciple Theophrastus, to consist principally in this, that whereas the former wrote books on the generation, parts, motion, and history of animals, the latter only came up with a couple of books about plants. It strikes me now, however, that this lack of interest would better be described as a severe case of phytophobia: I insisted plants were uninteresting, but what I really meant was that they are positively threatening.
"Naturam expellas furca, tamen usque recurret," said Horace: you can drive nature out with a pitchfork, but she always comes back. This does not mean what I long thought it meant, but let us proceed as though it did. The Roman poet intended to say that innate character (the 'nature' of an individual being) cannot be suppressed, but what the dictum long said to me was rather more literal: that nature, or rather Nature, cannot be beaten back for long. In appealing to 'Nature' here, what naturally comes to mind is of course the world of plants. Animals can be contained, more or less (other than insects and other microanimals, which, modern taxonomy be damned, no one really thinks are animals anyway), but the order of plants, as Lord Shaftesbury already understood in contemplating an early modern English garden, is sublime. The stated goal of the Jardin Royal des Plantes in Paris, by contrast, to assert 'contrôle total sur le monde végétal', total control over the plant world, cannot but appear as so much absolutist and unsustainable hubris.
My fear of plants is deep-rooted. When I was little, growing up in a depressed, post-agricultural exurb of California's Central Valley, I dreamt of perfect cities somewhere far away. When I first went to San Francisco, the nearest so-called city, I was disappointed to discover empty lots overrun with weeds, and grass pushing up even through the cracks in the sidewalk. Nature, I thought, by which I understood vegetal nature, would take this place back in no time if we were to let our guard down. I set my sights on New York, imagining that it was there (if anywhere, etc.) that plants had been successfully shut out. I imagined a city consisting in nothing but World Trade Center-like highrises, stretching out in all directions as far as one could see, between which one might hop by jetpack, never even having to descend low enough to catch sight of some invasive blade of grass. Kant once said there would 'never be a Newton for the blade of grass', and I wanted only that part of the world for which there could be a Newton, the world described by mechanical physics, the world out of which our architectural accomplishments have been built up. Our cities are not, of course, as I imagined them. I've been to all the 'real' ones, and I can report that Berlin is scarcely more than a willow grove, and when I'm in London I'm never quite sure when I'm in the botanical gardens and when I'm outside of them. It's a plants' world, I mean to say (at least the terrestrial part of the world is; under the ocean, the relative proportions of animal and vegetable biomass are reversed, which is really just to say that under the sea, where everything is by definition à rebours, topsy-turvy, upside down, one of the surest signs of the reversal of cosmic order is the preeminence of animals).
Rob Carter's Faith in a Seed reminds us of many basic truths about the balance between nature on the one hand and human settlements on the other, but for me it was impossible to apprehend these truths without reflecting upon my long history of phytophobia. It struck me, in particular, that the three men Carter selected as subjects for his work, each in his own way, wielded a pitchfork and sought to do with it what the ancient poet had insisted could not be done.
You are not supposed to say 'seminal' anymore, but that is only because the people who think it is sexist don't know that it is really just the adjective of 'seed'. Thus to say that the work of Lawes, Thoreau, and Darwin was 'seminal' is not at all to praise them, as men, for their intellectual cum-shots, but only to say that things grew from them. One could put this in more mechanical-physical terms and say they had an 'impact', but that might be missing something. They changed the way we understand fundamental aspects of humanity's place in nature.
Thoreau opened up a line of very influential questioning of the value of modernity, at precisely the moment when conventional opinion was converging on the view that there was no problem (disease, war, etc.) that modern industry and industriousness could not solve. Darwin, well, Darwin is generally given credit for reinscribing human beings into the order of nature, following upon a few millennia of separation, and doing so at more or less the same moment that Thoreau was questioning the automatic inscription of modern men and women into, so to speak, the order of cities. Whether Darwin really did what he is given credit for doing is not so important (there were of course Étienne Geoffroy St. Hilaire, and Jean-Baptiste Lamarck, and Charles's own grandfather Erasmus, who were all saying things with fundamentally the same metaphysical import as that of The Origin of Species, even if the details about the mechanism of natural selection were still missing).
Lawes appears as something of an outlier, though in fact he left a far greater mark (or stain) on the world than the other two. In creating the first chemical fertilizers from superphosphate, more than any other single individual Lawes helped to advance the industrialization of agriculture. That is, he brought the human activity of cultivating and growing, a practice that had by then been well established for 10,000 years or so, under the purview of the men who were principally interested in building, automating, and controlling, a cluster of activities that had enjoyed tremendous advances across only the previous few centuries. Chemistry, the discipline in which Lawes primarily worked, may be seen as a sort of bridging science between mechanical physics (for which, again, there is a Newton) and what we now call 'biology'. Here, entities that are ordinarily seen as homogeneous masses, the clumps or lumps or heaps that nature produces in such abundance, are shown to be made up out of 'building blocks': a telling term, for it reveals how someone such as Lawes, in applying chemistry to nature, can take a clod of dirt and reconceptualize it in fundamentally architectural terms.
Lawes's pitchfork is easily visible: he wanted to exercise contrôle total in the world of agriculture. Yet there is one important thing to note here. Superphosphates are part of nature's bounty too, and strictly speaking they are no more unnatural than, say, the techniques of crop rotation or irrigation that had long been in use. Agriculture is by definition a bending of nature to human will (it is on these grounds that some anarcho-primitivists think, with some justice, that it is first with the Agricultural Revolution, and not with the Industrial Revolution or the demise of the welfare state or any such thing, that social inequality and all the other symptoms of demise really start to become apparent). Lawes only wanted to apply a new way of bending nature, not to go against nature. In older alchemical terms, the science of chemical fertilizers is a perfective art applied to nature; it is not contra naturam. And yet, here we are, with the earth now almost completely divided up into plots you can see from an airplane, producing mostly unhealthy food from seeds patented by multinational corporations, and if that's not contra naturam, it's still rather hard to see it as perfective.
Darwin for his part was not bending nature, but only trying to make sense of it. One way of thinking about the sense he made is that he sought to replace the grid, favored by the older natural historians such as Linnaeus, with a tree or an arborescent model of the kinds of creature in nature. And yet he wanted this tree to be well-groomed, a sort of perfect, easily surveyable bonsai. Botanists today will tell you that strictly speaking there is no such thing as a tree, that is, there is no significant biological boundary between the kinds of plants with woody trunks and the kinds that appear more bush-like. The order of nature has itself proven rather more like a bush, where it's hard to see which ramification leads where, where new branchings are always ready to be followed out, where neighboring bushes appear to form one entity, but in fact do not, and so on. It turns out, moreover, that the bush of species has no permanent, stable parts. Rather, all of nature's kinds are always on their way to being something else, and species names turn out to be more like proper nouns, identifying an individual that comes into existence for a certain period of time before giving way to another. Yet Darwin wanted to hold onto species as real, enduring kinds. Unlike Lawes, Darwin did not want to impose more order than his predecessors had on the domain of nature that interested him. But to some extent he remained unable to anticipate just how disordered things were going to get, largely as a result of the seed he planted.
Thoreau, it is said, preferred not to relate, in his Walden, how frequently and easily he repaired to the general store of Concord when he was running low on supplies. As far as world-renunciation goes, other wise men in history have taken it quite a bit further (though Thoreau's case is probably an instance of a more general tendency among famous world-renouncers). Thoreau doesn't so much drive nature out with his pitchfork, as he drives out society. But that, too, comes creeping back, or rather, our lone transcendentalist goes creeping back to it, whenever he finds himself running low on grain or potatoes.
So here are these three peculiar men, each with a very special relationship to nature, and each strongly associated with a particular home, of which, to all appearances, each was very fond. Thoreau with his Walden cabin, Darwin at Down House, and Lawes at Rothamsted Manor: each edifice is now imagined as a monument to the ideas of the man who inhabited it, ideas that, again, involve a particular stance against nature (in the dual Latin sense of 'against', contra: either 'abutting' or 'opposing'). And now Carter proposes to miniaturize these edifices, and to have us watch as sprouts grow around them from seeds he has cultivated.
What exactly is going on here? I have said that each of these men was working, each in his own way, 'against nature'. Something that is against nature --and this by definition-- is art. Artists today will likely not want to hear it, but I'm sorry, that's just what the word 'art' means. Perhaps this point can be made clearer if we appeal to the notion of the 'artificial', an adjective that once would have been understood to contain our glorified noun. Art is what little human beings are able to come up with in response to the immeasurability, untameability, and indifference of infinite nature. Nature, if I may say, generally kicks art's ass.
From this admittedly premodern optic, the idea of turning the growth of plants into art is a puzzling one. Carter's art is not like the artificial disruption of vegetal growth that we see in bonsais or in the Jardin Royal des Plantes. Instead, he plainly wants to show the plants doing what they do best, which is to say growing towards the sun unimpeded by human designs. And moreover, he shows these plants usurping human monuments of some significance to the development of the modern world's perspective on nature. So art, which is small, seizes upon nature to show the smallness of art. This is, if I may also say, a grand geste, and by it Carter wins one for art.
In the way I have been construing art, as artifice, as techne, there is nothing further from the semantic cluster that contains it than the cluster sprouting from that fine, archaic, Indo-European root, *gen, which gives us such diverse words as 'gene', 'genus', 'generation', the Greek and Persian words for 'woman' (gune, zan), the Russian word for 'wife' (zhena), 'gynecology', etc. It also gives us 'germ', and the Greek verb gignesthai, 'to become'. Germs become plants through generation, and in this they are fundamentally unlike the works of human artifice, which are made rather than generated. There is, again, a certain comfort in making. But in the end, at least on our planet, generation is the law of the land. Carter's art, to the extent that it reminds us of this law, while still not, by that, being diminished as art, is worthy of notice. Anyhow it forced me to confront my own lifelong phytophobia, and to see what I'd been sensing for some time already, but could not fully articulate without Carter as cultivator: that the plants are not the stage-setting, but the very protagonists of this dazzling show.
The main exhibition hall is eighty meters long, or anyhow roughly five fin whale skeletons could be stretched out lengthwise in it (in fact there is only one). There are thirteen iron beams supporting the roof, and as many high-set, arched windows on each side. The beams call to mind the Eiffel Tower, as well as the Brooklyn Bridge, and every other iron-girded architectural project of the era. The edifice conveys the spirit of the 1900 World Fair for which it was constructed, and one can just as easily imagine the space being used by men in top hats to showcase diesel engines, magnetic wire recorders, or tins full of vegetable cooking grease.
The first thing you encounter on entering the main hall is L'Homme écorché, a molded figure of a man with his skin peeled away, made by the sculptor Jean-Pancrace Chastel (1726-1793). He is wearing a fig leaf, and he is one of the only signs of pudicity in the entire Gallery. He is best ignored.
To his left is a glass case with a dozen or so skeletons of higher primates in it (including one of a human), and to his right a glass case, of the same size, with the skeletons of an okapi, a quagga, and a small wild ass, known in French as a hémippe or 'semi-horse'. The quagga, a cousin of the zebra last seen alive in 1870, is, along with the marsupial wolf, the Steller's sea-cow, and perhaps a few others, one of the only extinct species on the ground floor of the Gallery. First described and classified in 1778, quaggas differed from ordinary zebras in the pattern and colour of their stripes, and Balzac may have had this in mind in his story of Marmanus's intrigue.
Between and directly behind the primate and equid cases, there are twenty-seven skeletons of middling-to-large terrestrial and littoral mammals: a hyena, a panther, a panda, a sea lion, a walrus, and one identified as a 'wolf-bitch mongrel' (which, if the labels were replaced, would be changed to 'wolf-dog hybrid', and which, in the French 'loup-chien métisse', carries the additional antiquated racial connotation of 'mestizo').
Moving further into the hall, we find, to the left, the skeletons of two rhinoceroses, one black and one white, and also a Malayan tapir. On the right, an onager, also known as a 'hemione' or 'semi-donkey', alongside two famous skeletons, or, rather, skeletons of famous animals: one, the Rhinoceros of Versailles, to whom we have been amply introduced; the other, Rock-Sand (1900-1914), a British thoroughbred racehorse who won the Triple Crown in 1903, and who is said to have grown unruly and temperamental after being sold to a French syndicate in 1912.
The row of large beasts including Rock-Sand and the three rhinoceroses draws our attention to something that is not so apparent in the initial vanguard of equids, primates, and sundry quadrupeds: skeletons look very different, depending not just on their species, but also on their provenance, their treatment, their age, and what we might call their 'life history' (a history which begins after death). The white rhinoceros's skeleton is grey and petrous; it looks something like the pumice stone you might find in a shower. Rock-Sand is brownish-yellow, as if the blood and gristle were not completely boiled off, but had somehow stained the bones he left behind. The variety of tinctures and textures is vividly illustrated as one moves further into the hall and comes to the glass case, on the right side, containing the skeletons of mummified animals brought back from Egypt by Geoffroy: a dog, a cat, two Dorcas gazelles, a peregrine falcon, an ibis, and an enormous Abyssinian ox. They all look exactly as if they have been wrapped in bandages, soaked with their own bodily juices, for the past 2500 years. It is hard to say exactly why, but the bones of the rhinoceros of Versailles look like Enlightenment-era bones; the mummy bones, for more obvious reasons, look like true antiquities. It is hard to look at the ancient and modern skeletons next to one another and not to think of them as, principally, cultural artifacts, and only secondarily as the work of nature.
Symmetrical with the case full of mummies on the right side of the hall, to the left there is a case featuring marsupials, including a number of skeletons of various kangaroo species (such a variety of sizes!), as well as the extinct marsupial wolf. There is also a long-beaked echidna, and a lone platypus, known in French by the beautiful Greek-rooted name ornithorhynque ('bird-snout'). These last two are the 'monotremes', so called because the urinary, defecatory, and reproductive functions that in mammals have been separated out into three distinct holes, and in reptiles into two, all take place in these parsimonious creatures through one hole alone, the cloaca.
Between the marsupials and the mummies there are two hippopotamuses, and further back still we find two giraffe skeletons, standing symmetrically on the left and the right, as if on guard. One of the two sentries is the famous Stadthouder giraffe, taken by Napoleon upon seizing Holland. Between it and its less famous partner stand sundry buffalo and elk, and an oryx. As we move in further, there is a reptile case to the right, and a case of exceptional birds, such as the ostrich and the condor, to the left. Between them, an Asian and an African elephant, an enormous manatee, an even larger dugong, and a Steller's sea cow, larger still, and extinct since 1768. With this massive sirenian we are half-way through the hall. We have passed 100 or so large, freestanding skeletons, and perhaps two to three hundred more skeletons, of adults, juveniles, and fetuses of mammals, reptiles, and birds, in the various glass cases.
The second half of the floor of the exhibition hall is designated the 'Cetaceum'. It was conceived by Georges Pouchet (1833-1894), a professor of comparative anatomy at the Muséum, who is said to have been passionate about cetaceans and to have brought together the eight complete whale skeletons in the Cetaceum according to 'an aesthetic and systematic logic'. The pièce de résistance is surely the fin whale at the center, the second largest species of animal ever to have existed. To its left there is a southern right whale, with its massive baleen hanging down, tapering off at the ends into fine hairs that must once have served as the ecosystem for countless marine parasites. There is a humpback whale, a coalfish whale, a northern bottlenose, a giant beaked whale, a minke, and, finally, the whale that sustained the 19th-century energy industry with its blubber and with the 'sperm' from which it derives its name. Mixed in among these skeletons are various porpoises and dolphins of both sea and river.
It was of course Herman Melville who most fully realized Nabokov's advice, while applying it to whales rather than trees. The 'Cetology' chapter of Moby-Dick, occasionally cut by unscrupulous and illiterate editors, is surely one of the greatest celebrations of the names and histories of great creatures in the history of great literature. It makes one afraid to say anything more. What can I say? Their skeletons are stunning; there are vastly fewer of them than there are of the terrestrial beasts here at the Gallery, yet they take up the same amount of space. Some are beaked, some toothed, some endowed with a massive sieve over their mouths for the filtering through of infinite krill. Both as skeletons and in vivo, the cetaceans look as though they are smiling, but of course they are not.
This completes the inventory of the main floor of the exhibition hall, but in truth we have just begun, for the bulk of the collection is contained along the walls. There are twenty-seven plaques between the windows, numbered in the Roman style, with skulls mounted on them, each plaque devoted to animals in the same broad class or family. One plaque features boars and warthogs, the next various antlered animals. In all, these plaques add little to the exhibition, other than to make it feel rather closer to a hall of hunting trophies than it otherwise would.
The alternating plaques and windows are located above a stretch of glass cabinets that go around the length of the hall, numbered clockwise from 1 through 110. Beginning at the left, we find mammalian skeletons, then reptilian, then fish osteology and general anatomy, then a small section devoted to teratology; next, continuing down the right side of the hall, we have a display of the various bodily systems, consisting mostly in the digestive, respiratory, and circulatory organs of various animals in jars of formaldehyde. The tour wraps up with a few glass cases (numbers 96 through 110) that have an overtly didactic tone rather at odds with the rest of the exhibit, where we learn that skeletons are the 'witnesses of evolution', and that the skeleton has its own 'alphabet'. It is as if the disease that has thoroughly deformed the Great Gallery of Evolution has begun to creep in here as well, having infected, for now, only a few of the late-numbered glass cabinets.
Starting from cabinet number 1, we first encounter primates. The idea seems to be one of descent down the scale of being, from primate to felid to monotreme to frog and so on. Yet by this criterion things get off to a peculiar start, for in cabinet 2 we are introduced to an indri and a gibbon, and then in cabinet 3 we find an entire skeleton of a Homo sapiens, only after which the descent begins in earnest. Cabinet 5 shows rows of higher primate skulls in various stages of development: fetal gorilla, juvenile gorilla, young gorilla, etc., and the same for orangutans, chimpanzees, and humans, all with the aim of showing that at the outset there is scarcely any difference. In the final stage of development the human skull looks freakish: the cranium is far too large, as if pushed out on all sides by some rare tumor; the teeth are far too small, and no good for biting much of anything.
The primates continue on, getting cranially more diminutive until we arrive at the macaques, and then the lemurs, and then, finally, in cabinet 13, we cross over to the bats. There are hundreds of miniature bat skulls under glass domes, and a few full skeletons splayed out on velvet-covered planks. Next come various other insectivores, such as those of the family Potamogalidae, including various shrews and tenrecs. Most of the Latin labels are so color-damaged as to be practically illegible, and many of the names are no longer accepted.
Next are a dozen cabinets devoted to 'carnivores', which as a taxonomic term refers to an order of mammals including bears, wolves, civets, and all the others we can easily picture eating meat. There are various felids, a bear-cub skeleton mounted in a case, and a moulage of a walrus fin. And next, four quick cabinets for what in French are delightfully called rongeurs or 'gnawers', which we know much less evocatively as 'rodents'. The Gambian pouched rat and the South American paca, whose Linnean genus Cuniculus makes it out to be a sort of small rabbit, both stand out among their fellow Rodentia, though probably only because of their names. At the skeletal level, the truth is they all look more or less the same.
Cabinets 32 through 34 are devoted to perissodactyla, which is to say the order of odd-toed ungulates, followed in 35 through 42 by their even-toed counterparts, the artiodactyla. These orders together give us that fundamental distinction between the beasts that are 'cloven of hoof', on the one hand, the even-toed pigs and boars and goats, and on the other those that are not. Famously, a number of dietary and symbolic significances would flow from this distinction, which, if you think about it, or if you speak of it in terms of even and odd rather than cleaving, is perfectly trivial.
Cabinets 43 through 45 are unlabelled, and their hodge-podge of specimens, among them a bison-cow mongrel, an aardvark (orycterope), and various armadillos, suggests that we are now well along in our slide down the scale of being. This suspicion is confirmed when we arrive at the édentés of Cabinet 46, an abandoned 19th-century designation, meaning 'toothless', for the various orders of anteater, pangolin, and sloth, and, finally, in cabinet 47, we pass as if imperceptibly from these alien orders into a different class altogether: the Reptilia.
Snakes and lizards are treated hastily, before moving on to the 'ichthyopods' or 'fish with feet', an abandoned category that appears to have once denoted amphibians. Of course, the fish with feet, though it lost its place in nomenclature, remains in an important respect the singular symbol of evolutionary science, the successor to natural history: think for example of the emblem seen on the rear of cars, of a proper ichthyopod bearing the word 'Darwin' inside its body, perhaps mounting or devouring one of those footless ΙΧΘΥΣ fish beloved of Christians. On the old nomenclature, every single frog and toad represented a sort of living Darwin fish, a testament to and recapitulation of that fiat lux moment at which some ambitious pisces, or so we like to imagine, set itself the task of pushing out a set of quadruped stubs.
It is with the ichthyopods that we first see something in the display cases besides skeletons and artificial molds: now, for the first time on our tour, the Gallery reveals the fleshy specimens, in jars and tubes, that inflect every visit there with an unmistakable element of unease. Skeletons are part of the bodies of animals too, of course, and no small violence has to be carried out in order to extract them. But in their final state they are pristine and sterile, and they are cleansed of that property that fundamentally defines for us the biological world: they are not soft and wet, but hard and dry. Skeletons are the part of the body that, being hard and dry, is not subject to decay. With the soft parts we can trick nature by immersing them into fluids such as formaldehyde that are so inhospitable to life as to keep at bay all the microorganisms that ordinarily see to the decomposition of their macro confrères. But of course the vitality of the soft parts cannot be retained, and a frog trachea that sits in a jar of pungent liquid for a century, as its label dulls and fades, transforms over time into something entirely unnatural, ashen, dull and ghastly. This is the Gallery's great aesthetic counterpoise to the stark and clean osteology of the great mammals.
We are now half-way through, and are standing at the rear end of the hall, directly behind the Cetaceum. There is a bust of Henri-Marie Ducrotay de Blainville (1777-1850), Lamarck's successor as the Muséum's chair of natural history; then there is a two-door emergency exit, then a bust of Geoffroy, and then the Gallery's most sensational specimens: its teratological cabinets, with the two-headed goat and the cyclops pig and various other monsters, and with their archaic and somehow excessively scientific nomenclature. It is said that Geoffroy was representative of a broad shift in the history of teratology: away from the Renaissance preoccupation with monsters as portents of God's wrath (the word 'monster', after all, suggests that something is being shown or de-monstr-ated), and towards the process of normalization that had begun by the late 17th century, of which Geoffroy's 1812 Essay on the Classification of Monsters is a sort of culmination. In this process, birth defects came to be seen rather as opportunities to better understand organic development in general, as exceptions that illuminated the rule. Yet it is hard not to conclude that Geoffroy was no less prone to wonderment at the sight of these animal freaks than any 16th-century village deacon would have been. Monsters are peculiar, and if modern natural history was able to inscribe them into the larger order of nature better than premodern people had been able to, this may have only illustrated that that entire order is itself peculiar.
Cabinet 58 displays a lone glass tube with an eight-inch dolphin fetus inside. There is nothing monstrous about it, yet it is certainly peculiar, and one wonders why it was placed there, by itself, between the teratological cabinets to its left, and, to its right, a cabinet filled with the diminutive, faux-grinning skeletons of twenty or so human fetuses.
The next large section of wall cabinets, from 60 through 94, takes up, in turn, various systems of the animal body: first the ensemble de viscères or the totality of inner organs, including an impressive mold of a cross-section of a camel, a lamprey and a macaque, each cut open lengthwise down the front and placed in formaldehyde; then digestion (a jarred camel caecum, a dried and mounted gorilla jejunum, etc.); then respiration (including a pair of jaguarundi lungs); next, circulation (a macaque thyroid, the brachial plexus of a bradypus); the nervous system (civet spines, human brains). Cabinet 91 is dedicated to skin, and 92 through 94 to sense organs, particularly hearing, since this is the one sense that, in the ossicles, leaves a bony and preservable trace.
Cabinet 95 features a splayed iguana, for no apparent reason, and then, in 96, the aforementioned heavy-handed science education commences. Cabinet 108 interrupts the lesson momentarily with a curious historical artifact: the radius of a giraffe, a bone that in the 1760s the Marquise de Pompadour (1721-1764) had tried to pass off as the thigh of a giant, but that Cuvier was able to identify as properly giraffid only after the Stadthouder of the Hague had had his specimen seized by Napoleon's army and brought back to the Muséum. Cabinet 110 shows Cuvier's own osteological display case for the bones of a Lophius piscatorius, called variously in English a sea-devil, an angler, and a frog-fish. This creature is nearly all mouth, and when its skeleton is still whole it looks as though it is smiling an ecstatic smile far beyond the anatomical reach of any mammal, which it may in fact be doing.
[This is a piece I wrote for the Opinionator series in the New York Times to which I occasionally contribute. Somehow it fell through the cracks, and by now the news item to which it is responding is fairly old (in news terms), so I'll just post it here.]
A New York Times article of August 22 announced with appropriate irony the results of a recent study by researchers at Northwestern University: "No Surprise for Bisexual Men: Report Indicates They Exist." This study followed another, completed in 2005, which had concluded that “with respect to sexual arousal and attraction, it remains to be shown that male bisexuality exists.” The question of human male bisexuality is starting to look like that of the healthfulness of eggs: we are just going to have to get used to periodic about-faces on the part of the scientists. Or at least the questions would look alike, if sexual orientation were not so much more difficult a concept to get a scientific handle on than that of health.
Both studies on bisexuality used the questionable method of penile plethysmography, in which the flow of blood to the test subject's genitals is measured while he is shown images of various combinations of men and women engaged in sexual activity. Now I do not wish to question this method here, though I will note in passing that there is good reason to wonder whether a bisexual orientation is something that may be measured by degrees of genital arousal. I will also not discuss the inherent problems in selecting subjects for such a study; the recent one is said to have made improvements over the one conducted in 2005, yet plainly, to the extent that such a study must rely on volunteers, there is always a danger that what is being measured is a disposition of people with the sex-study-volunteering orientation, rather than, so to speak, a naturally occurring one.
I want to focus instead on the a priori commitments that would lead a scientist to go in search of such a creature as a bisexual in the first place. I believe that to do so is no less ridiculous --and will look no less ridiculous some generations from now-- than an early-20th-century attempt to, say, distinguish a quadroon from an octoroon, to draw an example from the history of American racial pseudoscience, looks to us today.
In the case of race as in that of sexual orientation, we are witnessing a conflation of social and natural kinds. Complex social and historical factors bring it about that it should make sense to speak of a certain kind of person in a certain place and time (a shaman, a serf, a hysteric, to cite a few examples). Today, these factors in turn cause science to set about looking to establish the real existence of such social kinds in nature, as if society and history had no role in our having come to speak of the kind in the first place.
This point was anticipated by Michel Foucault in his powerfully argued History of Sexuality. The Canadian philosopher Ian Hacking, partly inspired by Foucault, would make the case for what he calls 'historical ontology': the process by which historical change brings into being kinds that did not, and need not, always exist. Once such kinds come into existence, however, they appear as if necessary; it becomes difficult to conceptualize the world without them.
It is a grave mistake of science, however, to fall prey to this difficulty, and to neglect the historical conditioning of the kinds of things we go out looking for in nature. This mistake results largely from the lack of communication between psychologists (in this case, the ones specializing in what people once felt comfortable calling 'sexology'), on the one hand, and historians and anthropologists on the other.
I see the history of 'racial science' as involving an overextension of the classificatory project, initiated in the early 18th century by Carl Linnaeus, beyond the real (or relatively real, though certainly not permanent) boundaries between biological species and down to supposed human subtypes, which are in fact only the artefacts of political and ideological preoccupations and have no biological significance whatsoever. The creation of sexual subtypes, I take it (as kinds of person, rather than as different behavioral propensities of people), is a continuation of this misplaced naturalization, of this Linnean overreaching. It shows science at its most historically oblivious.
Yet the most surprising aspect of the scientific 'discovery' of social kinds is the willingness of self-identified sexual minorities to mimic science's naturalistic language and to express something approaching gratitude at having been discovered. In this respect, the Times headline conveys an admirable if misplaced irony. It reminded me when I read it of Thomas Laqueur's riff in Making Sex: Body and Gender from the Greeks to Freud on Realdus Columbus's discovery in 1559 of the human female clitoris, and of how this in turn echoed that other Columbus's discovery some decades earlier of something exactly half the world already knew about.
But there is a difference in the case of bisexuality. The existence of bisexuality in men might be something with which the men who experience it are familiar without science having to tell them about it. And unlike the female anatomy or the New World civilizations, these men might protest, upon being discovered, that they are not an entity in nature, like some new species of butterfly. They are not some special variety of men, but rather men simpliciter, with a certain range of interests within a domain of life that, for complicated reasons extensively analyzed by Foucault, we take to be overly definitive of who we as human beings are.
But the politics of science being what it is, human beings who fit the profile of the sort of person characterized by a scientific study are much more likely to take their 'discovery' as an affirmation, one that potentially affords them political leverage, than they are to demand not to be treated as a natural kind. And in this way they perpetuate what we might call the 'Linnean fallacy': the extension of the classificatory project that is well-suited for the identification of biological kinds into human social reality, to which it is most poorly suited.
You might suppose that the person who takes the mushroom as his subject is hardly the most intrepid of nature photographers. Mushrooms can't move! Or at least they are not endowed with locomotion. They can of course move through growth, but this does not enable them to flee, and so the photographer is able to take his sweet time in moving up close, finding the right light and angle, and taking as many shots as necessary.
Yet I confess that when I do this, it is often with trepidation. I feel that I am violating the mushroom's territory, and that it could easily lash out at me, giving off some invisible dust, or perhaps just radiating its deathly essence, in my direction. Sometimes I feel as though I'm descending perilously into the world of decay, where only corpses are meant to go. I fear that the mushroom will spread out and engulf me. Once I've snapped my shot I recoil and shiver and walk away fast.
So I'm not photographing lions or cobras or anything like that, but I am definitely coming up against beings: entities that, despite everything I know about neurophysiology and about where science draws its lines in nature, I can't help but take to have their own little world unto themselves, their own something that it's like to be them.
(Help in identifying genus and species will be greatly appreciated.)
Some kind of Boletus
Surely the single largest category of folk names for mushrooms is the one having to do with evil and death, and with the beings who bode and bring these: Witch's Hat, Death Cap, Destroying Angel, Poison Pie, Lead Poisoner, Corpse Finder, Witches' Butter, Devil's Urn, Goat's Foot, Dead Man's Fingers.
Other names are identifications, by appeal to some other thing in nature or artifice, that the mushroom supposedly resembles (though for the most part does so only remotely): Chicken Mushroom, Fried-Chicken Mushroom, Rooting Cauliflower Mushroom, Black Jelly Roll, Moose Ears, Old Man of the Woods, Pig's Ear Gomphus, Pretzel Slime, Scrambled Egg Slime, Blue Cheese Polypore.
Still other names involve incongruous juxtapositions (which include more or less all the 'tooth' mushrooms): Shaggy Parasol, Imperial Cat, Big Laughing Gym, Northern Tooth, Spongy-Footed Tooth, Bearded Tooth, Spreading Yellow Tooth, Hairy Parchment. Many names call the very thing they are naming into question: Deceptive Milky, Fuzzy False Truffle, Questionable Stropharia. Many attach a derogatory English epithet to a proper Latin taxon: Fetid Marasmius, Dung-Loving Psilocybe, Hated Amanita. Yet others appear as plays on words even though they are not, e.g., Dirty Trich. Some names are just revolting: Insect-Egg Slime, Tapioca Slime, Many-Headed Slime, Red Tree Brain; while many are simply and inexplicably delightful: Peppery Milky, Dirty Milky, Buff Fishy Milky, Fuzzy Foot, Carbon Cushion, Elegant Stinkhorn, Stinky Squid.
These are all just folk terms, and so, since the beginning of the 18th century anyway, are not the real names of anything. Or at least that's what we're supposed to believe. But classification is just one of the things we do with language; evocation, or conjuring, is another.
Other vulgates are just as rich as English in their myconymical creations. Thus German: Hahnenkamm ('cock's comb'), Dickfuß ('fat-foot'), Hexenpilz ('witch's mushroom'), Satanspilz ('Satan's mushroom') (one possible etymology for the English 'toadstool' is Todesstuhl, which is to say 'death's stool'). Many folk associations are lost as we move from one language to another: thus the Dirty Trich (Tricholoma pardinum) evokes the tiger in its German and Latvian names (Tigerritterling, Tīģeru pūkaine), and the panther in Swedish (Pantermusseron). The Poison Pie (Hebeloma crustuliniforme), also known as a Weeping Fairy Cake, becomes the ciuperca plângătoare ('drooping mushroom') in Romanian, and the parastā bārkstmale in Latvian, which means (I think) 'tattered parasol'. A Swedish cognitive scientist has in fact drawn our attention to the significance of myconymy on the example of this very species; see Gunnar Persson's nicely titled "Fränskivling eller poison pie: kognitiva aspekter på namngivning av svamp i olika språk" ("Fränskivling or Poison Pie: Cognitive Aspects of the Naming of Mushrooms in Different Languages").
What the Germans call the Hexenei or 'witch's egg' is known in English as the 'universal veil', an oviform envelope that surrounds the young mushroom before it takes on its familiar stalk-and-cap shape. The German Stinkmorchel or stinking morel is born from a witch's egg but grows to resemble a phallus, so much so that Linnaeus could not refrain from classifying it as the Phallus impudicus. It is the mushroom that is impudent enough to demand that it be called after the thing no one can deny it resembles. No, that's not quite right. It doesn't resemble the phallus; it duplicates the phallus fungally. It is impossible to come across a Phallus impudicus and not find oneself transported back into that prescientific world-view on which affinities abounded between different categories of natural beings that share no ancestral relation.
It is also hard not to be transported back to a time when the names of things were held to bear some sort of essential relationship to the things themselves. And here (other than in a few cases where the Latin follows the folk, as with the impudent phallus, along with the Lactarius mucidus or Slimy Milky, the Tricholoma saponaceum or Soapy Trich, etc., all of which are only as foreign to our inmost sense of the names of things as is Latin itself): here it is the folk names, and not the Latin binomial nomenclature, that preserve the bond of being between word and thing.
Often, in fact, the Linnean name for a thing picks out features of it in a seemingly arbitrary way, features that seem to have little to do with what we associate with a given creature. In this respect it is often better not to know Greek or Latin, if one wants the name of the being to resonate. To move away from mycology for a moment and into Pleistocene mammalian paleontology, I recall being deeply disappointed when my Greek became good enough to notice that glyptodont means nothing more than 'carved tooth'. As if the shape of that giant, lumbering armadillo's teeth had anything to do with its essence! Much better to just hear the sound, glyptodont, and to picture the beast, as it is not hard for the un-Hellenized to do, as a being that naturally embodies that sound.
Folk names work differently. They do not pick out some arbitrary and contingent feature of a being (I contend that a glyptodont would still be the being it is even if its teeth were otherwise than they are), but instead zero in on the most salient properties of a being, the properties that could not be subtracted without annihilation of the being itself, the properties that the philosophical tradition has associated with essence. That this essence is plainly related to human concerns (Dead Man's Fingers, Scrambled Egg Slime, etc.) does not compromise its status as essence, since the folk see the world anthropocentrically, as thrown up around them for their own purposes, edification, and temptation. In this respect, mushrooms have only being-for-us.
Nabokov famously told the story of the Cornell student who beseeched him to divulge the secret of great writing. 'Learn the names of plants', Nabokov is said to have said. He surely did not mean the Linnean names (though those can help to add an extra flair of erudition); he meant the Russian-English-French names that turn the things into repositories of human lore and values and fears.
I've always found plants boring and have not managed to learn the names of more than a handful of them. I raced through Aristotle's five books on animals, but could not bring myself to read his disciple Theophrastus's Enquiry into Plants or On the Causes of Plants, which were supposed to complete the Aristotelian project of investigating the living world. Animals jump out at me, sometimes literally; they are phenomenally salient, as the cognitive scientists say, whereas plants just fade into the background. We might suppose it shows Nabokov's great subtlety of mind that he picked them out for attention anyway (though we should follow up with the point that his greatest interest among living beings was for that most sensationally salient of creatures, the butterfly). Still, the advice is excellent, perhaps the best ever given in the history of literary instruction. Nabokov's novels themselves (especially Ada) are vividly botanico-entomological, or, better, they are showcases for a sort of phyto-entomonymical mastery. Nabokov understood how to draw essences out of names; he understood that what makes literature live is precisely the theory of nomenclature, the philosophy of language, that had to be repudiated with the rise of modern science, one of the great achievements of which was the arbitrary naming scheme of the System of Nature of 1735.
Genetically and evolutionarily, fungus is closer among biota to the animalia than it is to plantae. It doesn't have locomotion, and it doesn't do photosynthesis either. In high school I learned that its natural function (as if anything had such a thing) was that of 'decomposer', in contrast with the 'producer' plants and the 'consumer' animals. This triad is calling out for allegorical interpretation, but I won't offer it. I want only to say here that, whatever is actually going on in nature, mushrooms can't but come across to us as liminal, as a higher-order instance of being neither fish nor flesh.
Liminal entities, as Mary Douglas has shown us, frequently offer a good point of access for unraveling the knots of cultures. Yet ethnomycology, particularly in the wake of R. Gordon Wasson's work, has been largely preoccupied with hallucinogens and with new-age forms of 'mind-expansion'. I've been doing my best to avoid association with that approach here. Not all mushrooms are psilocybes, and the strange position of fungi in human cognition of the natural environment would be no less strange even without the hallucinogenic species. It is more likely the toxicity of some, rather than the psychedelicness of others, that charges mushrooms with such folkloric force.
But beyond this it is their indissociability from decay and death that, I think, gives them the particular cultural role that they have at least across the Indo-European world. Wasson and his Russian wife, Valentina Pavlovna Guercken, argued in their monumental Mushrooms, Russia, and History of 1957 that the Indo-Europeans can be further subdivided into mycophilic and mycophobic cultures, with the Slavs standing as the clearest example of the former, and the Anglo-Saxons of the latter. One wonders how deep this phobia runs, however, and whether there might not be something in particular about the rise of modern science and rationality (a phenomenon centered in Northwestern, Protestant Europe, relative to which the Orthodox, Slavic world has been on the distant periphery) that required a repudiation of these queer beings, of these living lumps so deeply associated with ghosts and witches and fairies, with beings the existence of which could no longer be defended --in English, anyway-- much after the 17th century.
But literature isn't about what exists, and the non-existence of the beings wrapped up in the folk names of so many mushrooms only makes them that much richer, that much better suited to the sort of exercise Nabokov recommended. It would be nice to read some fungal literature: literature that grows out of years of apprenticeship in the art of myconymy and that is as evocative as the folk-names of mushrooms. This would be a different sort of novel than Nabokov's phyto-entomonymic masterpieces: there would be no flight, no sunlight. It would be a novel from the ground.
The myconymic apprenticeship would at the same time serve as a point of access to human existence --I would say 'being-in-the-world', but I'm already concerned about how Heideggerian I'm sounding-- through a concrete subdomain of language. On the surface this subdomain is concerned, like Linnean taxonomy, with the simple naming and distinguishing of entities in the natural world; but in fact it could not be more different from the concerns that motivated the System of Nature. Name-giving here is not classificatory, it is not 'the logic of the concrete' in Lévi-Strauss's sense. It is rather the accrual of cultural meaning through the things of nature in which this meaning is invested. The literary use of language on this understanding is the unraveling of this meaning through a mastery of the names of things: the real names.
Follow me on Facebook.
Photos taken in a protected woodland of central New Jersey, early June, 2011. I should add, perhaps, that I remain a perfect neophyte when it comes to attaching name to thing. I believe these are two varieties of agaricus, plus some variety of polypore, but as to anything more specific than that I am, as they say, at a loss for words.
As I've said many times, I do my best here not to enter into the usual blogospheric disputations. But there is a certain kind of science writing that has irked me for years, and that the proliferation of blogs has now made much harder to avoid, and I cannot refrain from briefly denouncing it.
It is propagated by scientists and science amateurs alike, but in both cases it seems to have roughly the same connection to science as Cosmo's sex-position tips have to Eros. Whatever concrete domain of science it addresses, its underlying message is: science is awesome. This genre works by a sort of magic, turning anything and everything that it takes to be of interest into 'science' by its simple attention: the way light shines through stained-glass windows, the way foods cooked sous vide affect the palate, and so on. This transformative power parallels that of the practitioner of yoga, who will claim that in virtue of her commitment to yoga everything else in her life is charged with yogic specialness as well. It is a presumption that, whether yogic or scientistic, must be resisted by intelligent people.
All this a fortiori when, on the basis of this presumption of the universal authority of the scientistic outlook, the science writer ventures into philosophical territory. A fine example of this was Sean M. Carroll's recent blog post for Scientific American on the impossibility of the immortality of the soul. Now Carroll is a friend of many friends of mine, and seems like a gracious and thoughtful person. I hope we can disagree non-crazily, as he would put it. But his post is simply bad: a pseudophilosophical conclusion, drawn from pure non-sequiturs, and based on nothing but the aura of authority with which his social status as a physicist is supposed to imbue his opinions.
At the heart of Carroll's argument is the un-argued-for presumption that souls must be the sort of entities that attach to particles, that they cannot exist without particle hosts, and, further, that since no empirical data about the host particles of souls has been forthcoming in the now-complete physics of everyday life, we are justified in assuming that whatever it is we think of as 'soul' dissolves with the death of the body.
Carroll believes that anyone who is committed to the existence of an immortal soul must answer the following questions:
If you claim that some form of soul persists beyond death, what particles is that soul made of? What forces are holding it together? How does it interact with ordinary matter?
Now some philosophers have argued that souls must remain attached to or associated with particulate hosts (the Stoics, for example), but this is probably a minority view in the history of Western philosophy. There are varieties of soul, the vegetative and sensitive, that are generally held to be indissociably wrapped up with the body, and thus to perish along with the body. But these are generally contrasted with the rational soul: the soul that is capable of knowing the eternal truths of mathematics, etc., and that is ontologically independent of whatever biological host it happens to ride for a while. There is no reason to expect to find this soul stored after death in a physical substrate that can be accounted for in terms of the physics of everyday life: this soul is not the information stored by the brain, which would then after death be transmitted to some other physical storage unit.
Now I sure as hell am not going to argue for the existence of a soul, but I know that if I were to seek to argue against it, I would not suppose at the outset that what is meant by 'soul' is 'brain-stored information about the self'. There are other options in the history of human thought, options that have been well defended, and that require sophisticated refutations based on knowledge of the history of philosophy.
Since Carroll attempted to impress us all into agreeing with him by trotting out Dirac's intimidating-looking equation --it looks hard, the implicit message goes, therefore anyone who understands it must be able to answer our questions about life after death--, I think I'll reply in kind by citing a profound bit of philosophy in French and declining to translate it: où il n'y a pas un être, il n'y a pas un être. Don't worry about the details; what this says is that for deep metaphysical reasons, it is not possible that reality should consist in physical particles, let alone in congeries built up from physical particles. Whatever deserves to be called a being, is going to be absolutely simple, one, monadic, and this means, among other things, undetectable by the physics of everyday life.
Is it true? I have no idea. I am convinced, as are Carroll and Alva Noë, and as is pretty much everyone who qualifies as salonfähig ('fit for polite society') in our era, that the best explanations are naturalistic, that is, that they are based on what is detectable. But it is one thing to circumscribe the domain of permissible explanations, quite another to dismiss an entire order of explanation as false. So again, I don't know whether reality is one and simple or whether it consists in clumps of particles, but I do know that the author of that French phrase thought at least as hard about the nature of reality as Sean Carroll has, and that if the phrase is not true its untruth will not be established by appeal to the current state of physical science. It is not up to the physical scientists, in other words, to refute it, and still less is it up to them to dismiss it. The fact that they are allowed by editors and readers to do so anyway is a matter principally of sociological interest.
Postscript, 30 May:
I realized quickly after hitting 'publish' one thing that I should have made clear, and that is that I am certainly not arguing for a strict division of labor between scientists and philosophers, with scientists holding forth on the features of the phenomenal world and philosophers uniquely qualified to speculate on what, if anything, is grounding the appearances. To argue such a thing would be to go exactly against the spirit of what I have been pushing elsewhere, as in my recent piece for The Stone, where I urge that what is most needed is a return to the era when science was constitutive of philosophy, rather than at best complementary to it, or at worst antagonistic. But this reconciliation will not occur simply by allowing any scientist, trained in the way scientists are today, with no real awareness of their place in the history of natural philosophy, to hold forth on philosophical questions.
When I was 10 years old or so I came up with what I thought was a knockdown argument against the possibility of reincarnation: I called it 'the demographic argument', as it appealed to the fact that there are vastly more people alive today than in the past, and therefore only a small percentage of them could have a series of past lives going all the way back to the beginning of human history. This was intellectually satisfying, but it failed to grasp the alternative conceptions of history (roughly speaking, cyclical rather than linear), and of the scope of the shared community of beings (extending to all life forms rather than just to human beings), that make theories of metempsychosis make sense. Understood holistically, embedded in the total belief system of which it is a part, metempsychosis makes powerful sense, and it is only a ten-year-old know-it-all mentality that would rest content with an argument such as the demographic one, which imports all of the presumptions of our own contemporary philosophical (or weltanschaulich) predicament for the assessment of a theory that does not arise from that predicament. This is similar to what happens when scientists untrained in the history of philosophy swoop in to settle its millennia-old questions.
I took this picture while waiting outside a pharmacy in Plainsboro, New Jersey, as the Rutgers-trained apothecary prepared my prescription for antidepressants. I don't know why it takes them so long to count out thirty pills and slide them into a little plastic bottle, but from years of experience I know when I go for a refill that I will have to put in a good hour. Sometimes I take my blood pressure; sometimes I pass the time in admiration of the collection of Russell Stover chocolate gift boxes; sometimes I wander through the makeshift grocery section, and speculate as to the criteria that determine which grocery items belong in a CVS or Rite-Aid (Chef Boyardee, definitely; Lean Cuisine, probably; Manischewitz, no), and which by contrast are meant only for the shelves of a proper Wegman's or Shop-Rite.
This time I decided to go outside. It was the last day of a series of days of rough storms, which I, being from dry California, always think of as 'tropical'. These storms in the mid-Atlantic and the Northeast were the distant and final echo of the same weather system that had, two days prior, killed several hundred people in Alabama and neighboring states. The rainbow seemed to be saying, on behalf of the sky: Sorry for killing all those people. Sometimes I just, you know, lose it.
Over the years I've returned again and again to the metaphor of storm systems in explaining clinical depression to those who know nothing of it. If in my soul I remain Californian (which is to say that I respond to the vegetation and the topography and the sky and the light there as fitting and proper for a creature such as I), in my neurochemistry I am at this point entirely of the East Coast. My brain is like a May storm in the mid-Atlantic, with its menacing cumulonimbus and its hazy instability. The darkness and the thunder appear suddenly, and for no reason, and just as suddenly give way to half-ass rainbows of apology, which to me, at this point, look like nothing so much as portents of the storm's return.
The water cycle makes sense to me in a way that the process of natural selection that produced brains such as mine does not. A recent study showed that well-exercised mice are more likely to anxiously cower in the corner of their cages than lazier ones. A scientist jumped in to caution that we should not extrapolate too much from them to us, since they are prey animals, for whom it is advantageous not to be excessively at ease. But I am not a prey animal, and I cannot for the life of me figure out what natural utility such a neurometeorological profile could possibly serve.
But that's just how it is, and there's something undignified about dwelling on it overmuch. One doesn't want to let it become the kind of person one is, in the way that Foucault argued so many new kinds of people came into existence in the last few hundred years.
But still, the timing of the rainbow deepened my sense that in the end meteorology and psychology (or psychiatry; whatever) are but two branches of the same science.