Joan Slonczewski is both a microbiologist (she is co-author of one of the standard textbooks in the field) and a science fiction writer. Her latest sf novel, MINDS IN TRANSIT, is a sequel to her previous novel BRAIN PLAGUE (2000), and overall the fifth novel in her Elysium Cycle (to which most of her novels belong). It would probably help to have read BRAIN PLAGUE before tackling MINDS IN TRANSIT. We have a pair of planets with future technologies, the most important of which is that microbes are sentient, along with many artificial entities and systems. So the people in this world are continually negotiating both with one another and with the millions of microbes who inhabit them. There are evil microbes who take over their human hosts by manipulating their pleasure and pain systems, but most people get along with the microbes that inhabit them in a more or less symbiotic fashion. For instance, the main character Chrys is an artist, and her visual works are collaborations with the microbes within her. The novel mostly consists of all sorts of social and political interactions among the characters, including the microbial ones, and there is no clear line separating social interactions from political power moves. This may sound cynical, but the novel really is not. The science fictional novum of intelligent microbes is really a way to dramatize how all life involves interactions among multiple life forms, all of which shape and are shaped by the physical environment as well as by one another. Interactions can exist anywhere along the spectrum from complete symbiotic mutualism to one-sided parasitic exploitation. And in fact, IRL our lives are profoundly shaped by such interactions, even if many of the partners (like the microbes that actually do live within our bodies) are not in actuality capable of language and conscious reflection.
Slonczewski powerfully illustrates how the mutual web of life really works, through the extrapolative tactic of extended sentience. The plot, such as it is, is quite convoluted, but this makes total sense, given the ways that the book is depicting and making visible the sorts of connections and disconnections that all living beings are involved in. We are all — people, animals, plants, fungi, and microbes alike — involved with one another in multiple ways that implicate both unit integrity and interconnection, so that no unit is actually self-enclosed.
Children of Memory, by Adrian Tchaikovsky
Adrian Tchaikovsky’s Children of Memory (2022; but published in the US on January 31, 2023) is the third science fiction novel in a series that started with Children of Time (2015), and continued with Children of Ruin (2019). The books are concerned with different varieties of sentience and intelligence. The background scenario for this far-future series is that human beings on Earth set forth with an ambitious project to terraform planets across the galaxy, but that the project was never completed. The terraforming project involves creating a livable climate, and stocking the planet with a diverse enough range of Earth organisms to create a functioning ecology. After this, either the planet can be inhabited by human beings, or else the world is seeded with a plasmid that provokes genetic mutations to raise another species to human-level intelligence. But due to troubles on Earth, the plan is never quite realized. In Children of Time, instead of uplifting nonhuman primates, the plasmid creates a species of intelligent Portia spiders. The novel traces the stages of the spiders’ rise to civilization, and considers how their mentality might be different from a human one due to the intrinsic biological differences between the species. In Children of Ruin, octopuses on a water world are boosted to human-level intelligence; again, the novel explores how such a cephalopod intelligence would be different from either a primate or an arachnid one. In addition, in the second novel, the human beings, spiders, and octopuses also encounter an alien lifeform that is something like a parasitic slime mold. The slime mold assimilates, stores, and remembers the mentality and the experiences of any other living species that it encounters. This is at first a danger to the other sentient species: the slime mold transforms all the mindful entities that it encounters into more versions of itself.
But eventually, this behavior is changed from a predatory, parasitic lifestyle into one of symbiotic mutualism. The slime mold craves novelty and new experiences; eventually it realizes (or is persuaded) that it can get more of these if it does not assimilate other organisms, but rather coexists alongside them and shares their experiences.
[WARNING: WHAT FOLLOWS CONTAINS SPOILERS] Children of Memory introduces an additional uplifted species: Corvids (the exact species is not specified; they seem to be a crow and raven hybrid). The Corvids do not get the plasmid that the spiders and octopuses got in the previous volumes; rather, they evolve greater intelligence on a partly-terraformed planet where they have become the dominant species. Once again, Corvid intelligence is qualitatively different from that of human beings and of the other species in the previous novels. The Corvids are able to speak, but their intellectual activity happens, not in individual birds, but only in pairs. One member of a pair gathers information, parses patterns in the information, and especially notices instances of novelty. The other member of the pair in effect collates this information and strategizes ways to act upon it. Neither of the pair can do much on its own; but in conjunction, the pairs are able to analyze large reams of data and operate complex technology. Whether they are capable of originality (as opposed to noticing and mobilizing novelties that they discover in their environment) is uncertain. The Corvids deny that they are sentient; the actual situation seems to be that sentience inheres in their combined operations, but does not quite exist in either of their brains taken separately. In certain ways, the Corvids in the novel remind me of current AI inventions such as ChatGPT; they emit sentences that are insightful, and quote bits and fragments of human discourse and culture in ways that are entirely apt; but (as with our current level of AI) it is not certain that they actually “understand” what they are doing and saying (of course this depends in part on how we define understanding).
Children of Memory is powerful in the way that it raises questions of this sort — ones that are very much apropos in the actual world in terms of the powers and effects of the latest AI — but rejects simplistic pro- and con- answers alike, and instead shows us the difficulty and range of such questions. At one point the Corvids remark that “we know that we don’t think,” and suggest that other organisms’ self-attribution of sentience is nothing more than “a simulation.” But of course, how can you know you do not think without thinking this? and what is the distinction between a powerful simulation and that which it is simulating? None of these questions have obvious answers; the novel gives a better account of their complexity than the other, more straightforward arguments about them have done. (Which is, as far as I am concerned, another example of the speculative heft of science fiction; the questions are posed in such a manner that they resist philosophical resolution, but continue to resonate in their difficulty).
The dilemma of the Corvids and their degree (or not) of sentience is encased within a much broader story of unsuccessful terraforming, of the mismatch between human organisms and their re-created environment. The novel mostly takes place on and around a planet that has been only incompletely terraformed; thousands of years later, a generation starship containing thousands of human beings in cryonic suspension arrives with the mission to found a new society on this planet. The attempt is tragically unsuccessful, for a number of reasons. I don’t want to give away all the plot twists here, so I will just say that the novel envisions a series of interactions between Earth-born colonists and their descendants and an unforgiving environment that only includes a limited number of transplanted Earth species. It also traces these baseline humans’ interactions with the various transformed species (including but not limited to human beings who have themselves been boosted by their encounters with the other intelligent species and with the advanced technologies arising from their encounters), and with an even more powerful technology left behind by an unknown alien species. There are multiple levels of simulation and speculation, as well as even more complex and self-reflexive levels of both intelligence and sentience (with the relation between these never becoming entirely certain). There is a lot here that deserves unpacking at much greater length than I am capable of, after writing this brief review from just one reading. The entire Children series, and this third volume in particular, exemplifies how science fictional fabulation, at its best, can lead us to reflect upon vital issues in ways that simplistic pro- and con- arguments are unable to do.
Kim Stanley Robinson, THE MINISTRY FOR THE FUTURE
I have just finished reading Kim Stanley Robinson’s new novel, The Ministry for the Future. It is one of Robinson’s best books. It is a near-future novel, starting a few years from now, and continuing for several decades thereafter. It is about global warming, and the possibilities for alleviating climate catastrophe.
The novel begins with a real punch to the gut. The opening chapter depicts in excruciating detail a disastrous, and all too plausible, weather event. Recent scientific studies demonstrate that human beings cannot survive a wet-bulb temperature of over 35 degrees Celsius. (Wet-bulb temperature measures a combination of heat and humidity.) The worst extreme-heat events across the world have almost reached this threshold; it is not unlikely that the threshold will be crossed sometime in the near future. When it gets that hot and humid, human bodies are unable to cool themselves any more; people die, even when they are in good health, have access to drinking water, and do nothing but sit motionlessly in the shade. Robinson’s opening chapter extrapolates such an event, imagining it taking place in the Indian state of Uttar Pradesh, and killing 20 million people in the space of a couple of days.
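To see how heat and humidity combine into the threshold that the opening chapter turns on, one can use Stull's well-known empirical approximation, which estimates wet-bulb temperature from air temperature and relative humidity. The function name below is my own; the coefficients are Stull's published fit, valid over ordinary meteorological ranges.

```python
import math

def wet_bulb_stull(temp_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%), using Stull's (2011) empirical fit."""
    t, rh = temp_c, rh_pct
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# At 40 deg C and 75% humidity -- conditions a severe South Asian heat wave
# could plausibly produce -- the wet-bulb temperature already exceeds the
# ~35 deg C survivability threshold.
print(round(wet_bulb_stull(40, 75), 1))  # roughly 35.8
print(round(wet_bulb_stull(30, 50), 1))  # roughly 22.3 -- uncomfortable but survivable
```

The point the numbers make is the one the novel dramatizes: lethal wet-bulb conditions do not require science-fictional temperatures, only a bad combination of heat and humidity.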
After this harrowing opening, the novel looks at responses to, and ramifications of, a gathering awareness that something has to be done about climate change. The novel focuses on two protagonists. Frank May is an American aid worker in India, one of the few survivors of the opening chapter’s climate event. Unsurprisingly, he is both suffering from PTSD, and weighed down with survivor’s guilt. Mary Murphy, the other protagonist, is an Irish politician who is named head of the eponymous Ministry for the Future, a UN agency founded in order to enforce the Paris Agreement and other international climate accords. It is underfunded, and has no military or police power to punish nations or corporations that violate the agreements, but it has some room to give financial support to modest climate initiatives, and to exercise moral pressure on governments and banks.
The Ministry for the Future is far more loosely organized than most of Robinson’s previous novels. Though it keeps on coming back to Frank and to Mary, it also offers a wide range of other voices and perspectives. Robinson is not interested in exploring bourgeois interiority, in the manner still typical of literary novels today (and even of literary novels that flirt with science fictional conceits). Rather, these two central characters are by design fairly flat and generic. Even their particular personal characteristics are forged in a kind of feedback response to the economic, social, political, and technological forces in the world they live in.
(I have to say that, personally, I find the novel of bourgeois interiority insufferable in the 21st century; which is why I prefer straightforward genre writing, like Robinson’s, to most varieties of more ‘literary’ science fiction).
In any case, the lives of Frank and Mary are (aside from the initial catastrophe Frank suffers through and witnesses) not all that dramatic. What’s dramatic are the events that unfold around them — world-scale in their impact, but most often local and small-scale in their enactment. The book is divided into over a hundred chapters, all of them relatively short (on average, each chapter is about three pages long, though individual chapters range in length from a single paragraph to fifteen or so pages). Though some chapters give third-person accounts of the lives of Frank and Mary, most of them come from other voices. Some are fairly straightforward infodumps; others describe local happenings in a wide range of voices, usually anonymous and often collective (“we” rather than “I”). Here we learn of the experiences of, among others:
- climate refugees who flee ravaged developing countries, and spend years in refugee camps in Switzerland and other western countries;
- engineers in Antarctica, experimenting with various techniques to slow down the melting of the glaciers;
- economists and lawyers seeking to convince the world’s central bankers to adopt more climate-friendly policies;
- terrorists who carry out targeted assassinations of oil company executives and other megarich people who are directly responsible for ruining the climate in the interest of short-term profit;
- exploited workers who rebel against neo-slavery conditions in extractive industries like mining;
and many others. These many chapters give the novel a diffuse feel. Robinson is juggling many threads, but he has no interest in combining them all into a tightly organized narrative. This is in part, at least, because the world we live in doesn’t work that way. It is unimaginably complex, and it is at least potentially open. The Ministry for the Future is dedicated to Fredric Jameson, and it offers an elegant and effective solution to the dilemma that Jameson outlined in his discussion of postmodernism several decades ago: how to “endow the individual subject with some new heightened sense of its place in the global system,” when this system is dense and interconnected in ways that defy ordinary forms of representation. Robinson knows that a Spinozian understanding of this system sub specie aeternitatis, or a Hegelian grasp of the system in its dialectical totality, is impossible — the world system cannot be captured experientially, nor can it be cognized completely. Therefore, Robinson gives us multiple, and only loosely interconnected, perspectives — each of them is grounded in particular, incomplete sorts of experiences; but all of these actions and passions have global ramifications, well beyond the immediate experiences of the people who act and undergo them. The novel is filled with close descriptions of places and actions, rich in local detail — but these also have implications that reach well beyond their immediate contexts. The book as a whole is discontinuous rather than synthesized into a perfectly shaped whole — but part of Robinson’s demonstration is that anything so well shaped would be, by that very fact, representationally inadequate. It is precisely this sort of open, indefinitely extensible, and never-completed endeavor that makes science fiction writing into “the realism of our time,” as Robinson insists in numerous essays and interviews.
(Side note: I find this sort of approach much better than the more common one that sees science fiction as utopian and/or dystopian. Fiction like Robinson’s doesn’t estrange us from contemporary social reality; rather, it gives us a “heightened sense,” to use Jameson’s words, of that social reality, both in its hard actuality and in its still-open potentiality).
In a certain sense, The Ministry For the Future is almost a guidebook to how we may overcome the horrors of global warming, and avert a climate apocalypse. The novel does not offer us a messianic and utopian vision of revolution. Such a depiction would be useful in itself, by giving us a sense of what we need to fight for. But here Robinson is doing something different. The novel is filled with careful discussions of pragmatic policies that actually could be implemented in the world as we know it today, and that would have important positive effects. These are things like introducing a blockchain-regulated “carbon coin,” that would be paid to states, corporations, and individuals who succeed in sequestering carbon instead of spewing it into the atmosphere; geoengineering to make the waters of the Arctic, once they are unavoidably melted, more reflective of sunlight so as to decrease global heating; drilling in Antarctica to extract liquid water from underneath glaciers, where it lubricates the fast motion of the ice above into the ocean, but which, when extracted and refrozen on the surface, increases the bulk of water trapped in ice form; setting up rewilding corridors in areas around the world, so that animal populations increase, and biotic products circulate without releasing carbon into the atmosphere; the replacement of gasoline-fueled airplanes with airships (essentially, large helium- or air-filled balloons), and of tankers with new sorts of clipper ships that move by a combination of air in the sails and motors whose generating power comes from sunlight via photovoltaic cells; a replacement of predatory private platforms like Facebook and Google with an organization of the Internet that is publicly owned and that preserves people’s privacy; and many more.
None of these technologies (using this word in the broadest possible sense) by themselves will save us from climate catastrophe, but deploying so many of them, together with creating a social atmosphere that is conducive to their continued discovery and development, can alleviate the otherwise runaway processes of global warming, and perhaps even reduce it to some extent. The point of giving us such detailed descriptions of all these processes is to make us aware that they are achievable in the actual world, with our current levels of technology and social and political organization. Robinson does not shy from the fact that getting these entirely plausible policies enacted will require, not only mass political protest around the world, but also some judicious doses of environmental terrorism. For instance, the transition over the course of the novel from fuel-consuming airplanes to carbon-neutral airships is prompted by eco-terrorist drone attacks that take down the former vessels frequently enough that even the rich are scared to fly in jet planes any longer. More broadly, central bankers (who are, the novel suggests, closer than any other group to being the actual rulers of the world) need to be bullied and threatened, as well as cajoled, into moving the world’s economies into more beneficial arrangements — they will only do so when they are convinced that current capital-accumulation policies can lead only to worldwide economic collapse and the loss of value of all the world’s currencies.
In a powerful sense, The Ministry for the Future is a remarkably optimistic novel. It assumes that our capitalist rulers can somehow be forced, or convinced, to accept the reforms necessary to save the human world from ruination. The novel is, as I have already suggested, a reformist rather than a revolutionary one. It seems resigned to the fact that capital will never entirely relinquish its hold; but holds out the hope that it might agree to social changes that somewhat diminish its power and wealth, in order to avoid what Marx and Engels called “the common ruin of the contending classes.” It also depicts an improvement of the international situation. Robinson says little in the novel about the United States, implying (probably accurately) that conditions here are so vile and degraded as to be totally irreparable. He does depict some positive ecological initiatives that take place at the state level. Though at one point Robinson imagines the catastrophic flooding of Los Angeles — something for which a precedent exists in the Great Flood of 1862 — he also sees a California that is progressive enough to pioneer rewilding initiatives despite the hostility of the US federal government. (There is even a short passage about surfing towards the end of the novel, though it is set in Hawaii rather than California).
But in the novel’s vision, other parts of the world do considerably better than the United States. The climate disaster in India leads to the total discrediting of Modi and the Hindu nationalists, and the election of a new government whose main object is to make sure that such a catastrophe never happens again. The novel also envisions a China that continues with its relatively (compared to the rest of the world) climate-friendly economic policies, while giving up on its heavy-handed totalitarian governance (not out of goodwill, but simply as a result of discovering by experience that it doesn’t really work very well) and according more rights to its currently hyper-exploited working class. And in the various countries of Europe, though the rightwing anti-immigrant parties still exist forty years from now, they fail to take power or to disrupt the semi-enlightened internationalism of the more liberal European tradition.
All in all, The Ministry for the Future gives us a best-case scenario. It is not without loss — there are also policy setbacks, murders and bombings by revanchist rightwing terrorists and venal governments, and so on. But nevertheless, by the end of the novel, the world seems to have drawn back from the precipice of climate catastrophe — although the improvements in both the climate situation and the social situation, remain precarious. The world has not been saved, and hard work and massive international solidarity will still be needed for an indefinite future. But the worst has been averted, at least temporarily. Arguably, we need more quasi-optimistic (but not mindlessly optimistic) speculation like this, if only as a counterweight to our seemingly endless diet of dystopian horror.
And yet, and yet… I called The Ministry for the Future a best-case scenario. If precarious survival is the best that we can hope for, what will we face in anything less than the best case? It remains extremely unlikely that as many things will go right as the novel needs to have going right in order for it to present its case. The novel demonstrates that a better world is truly possible, and attainable, on the basis of the resources and technologies we have now. But I cannot help also realizing that without all these technologically possible, and yet all-too-politically-unlikely developments, we are, in fact, well and totally fucked.
Cognition and Decision in Nonhuman Biological Organisms
My edited volume, Cognition and Decision in Nonhuman Biological Organisms, has just been published as part of the new Living Books About Life series from Open Humanities Press.
I’m excited about the entire Living Books About Life series. It represents a new form of collaboration between scientists and scholars in the humanities. And it is entirely open access as well. Each volume contains a number of crucial science articles, collected (or curated) and introduced by a humanities scholar.
My own volume covers topics such as “free will” in fruit flies, moods and emotional tones in bees, and more generally processes of affect, cognition, and decision found not just in animals, but in other sorts of organisms (trees, slime molds, bacteria) as well.
When the biologist and science fiction writer Joan Slonczewski, in her recent novel The Highest Frontier, envisions plants that display a sense of humor, and that can learn to resolve “Prisoner’s Dilemma” situations with mutual cooperation, she isn’t extrapolating all that much from what we actually already know about “mental” operations even in entities that have few or no neurons.
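The kind of resolution Slonczewski imagines her plants learning has a standard game-theoretic form: in the iterated Prisoner's Dilemma, a reciprocating strategy like tit-for-tat (cooperate first, then copy the partner's last move) settles into stable mutual cooperation. A minimal sketch, with the standard payoff ordering T > R > P > S and function names of my own invention:

```python
# Payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # punishment for mutual defection (P)
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round; thereafter mirror the opponent."""
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return hist_a, hist_b, score_a, score_b

hist_a, hist_b, sa, sb = play(tit_for_tat, tit_for_tat)
print("".join(hist_a))  # CCCCCCCCCC -- cooperation is stable
print(sa, sb)           # 30 30
```

Two reciprocators lock into all-C play from the first round, which is exactly the "mutual cooperation" outcome the novel's plants are credited with discovering.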
Fruit Flies and Slime Molds
Two recent scientific articles help to illuminate the notion of decision, which for Whitehead is constitutive of all actual entities.
In the first place, Bjorn Brembs, who was one of the co-authors of a 2007 paper that suggested that fruit flies are able to generate spontaneous behavior that is not determined in advance either by genetic pre-programming or by environmental cues, has released a new paper in which he generalizes his argument. Brembs cites research by himself and others that points to the “common ability of most if not all brains is to choose among different behavioural options even in the absence of differences in the environment and perform genuinely novel acts.” That is to say, fruit flies and other animals possess a sort of “free will.” Brembs dismisses, of course, what he calls “the metaphysical concept of free will,” i.e. the traditional Cartesian notion that is “inextricably linked with one variant or another of dualism.” But he also rejects strict determinism, both on account of quantum indeterminacy, and — more directly in biological terms — on the basis of the idea that, for animals, complete predictability of behavior is not viable. Any organism that reacted to stimuli in a completely predictable manner could all too easily be wiped out by predators who were able to anticipate these responses. Therefore, “predictability can never be an evolutionarily stable strategy. Instead, animals need to balance the effectiveness and efficiency of their behaviours with just enough variability to spare them from being predictable… Competitive success and evolutionary fitness of all ambulatory organisms depend critically on intact behavioural variability as an adaptive function. Behavioural variability is an adaptive trait and not ‘noise’.” All this suggests that motile animals, at the very least, have evolved mechanisms to generate behavioral variability — action that is not pre-determined, and hence not predictable. Moreover, organisms are able to control the extent of this variability.
In many circumstances, routine, habit, and “instinct” are the best strategies; but “faced with novel situations, humans and most animals spontaneously increase their behavioural variability.”
Brembs cites many examples of “self-initiated actions” (behaviors that are spontaneously and endogenously generated) in all sorts of animal organisms, and not just among vertebrates. He suggests that neural mechanisms have evolved which exhibit and exploit an “unstable nonlinearity.” These brain mechanisms are “exquisitely sensitive to small perturbations,” and they are irreducible to any binary alternative between “complete (or quantum) randomness and pure, Laplacian determinism.” This provides the basis for what Brembs calls a scientific concept of free will: one that is not an absolute, dualistic concept, but an immanent and relative one: “The question is not any more ‘do we have free will?’; the question is now: ‘how much free will do we have?’; ‘how much does this or that animal have?’. Free will becomes a quantitative trait.”
Brembs rightly draws philosophical conclusions from his argument, even though he disclaims being a philosopher. “Analogous to mutation and selection in evolution, the biological process underlying free will can be conceptualized as a creative, spontaneous, indeterministic process followed by an adequately determined process, selecting from the options generated by the first process. Freedom arises from the creative and indeterministic generation of alternative possibilities, which present themselves to the will for evaluation and selection. The will is adequately determined by our reasons, desires and motives—by our character—but it is not pre-determined.” From this point of view, free will requires something like a “self,” which is able to determine its own action; we may infer such self-willed action whenever “no sufficient causes for this activity to occur are coming from outside the organism.”
Free will does not, however, necessitate consciousness in the human sense. Fruit flies make decisions — they determine and generate their own behavior, to the limits that external constraints allow them to — without necessarily being “conscious” of making these decisions. Even among human beings, this is most likely the case. Brembs cites, in passing, Benjamin Libet’s experiments, which suggested, by means of testing neural responses, that human beings make decisions prior to being conscious of their decisions. Libet’s results have often been cited as disproving the existence of “free will”; but Brembs rightly says that, although these results discredit the “metaphysical” (dualist) notion of free will, they “are not relevant for the concept proposed here.” For what Libet showed was not that I do not make spontaneous or uncaused decisions, but rather that my “mind” makes these decisions, or my brain generates them, prior to my becoming consciously aware of them. Brembs’ empirically grounded notion of free will is entirely consonant with the argument — one metaphysically beyond the scope of Brembs’ paper, but which I would want to make on Whiteheadian grounds — that things like consciousness and responsibility are not the grounds or preconditions for decision or the exercise of free will, but rather the consequences (in some, but not necessarily all, cases) of making decisions and (thereby) exercising free will.
Brembs suggests that free will is an evolutionary adaptation of the nervous system; it would thereby be restricted to animal organisms. But what about biological entities that don’t have nervous systems (including plants, fungi, protists, and bacteria)? All these organisms have been shown to engage in various sorts of cognitive activities. “Plant cognition and behavior” has come to be a recognized biological subfield; bacterial “quorum sensing” is widely recognized and experimented upon; and slime molds (in particular, the model organism Physarum polycephalum) have been shown to exhibit “smart behavior” in solving a maze, and to solve “combinatorial optimization problems.” But most of this research has focused on cognition and problem-solving, not on the issue of free will that Brembs raises in connection with fruit flies and other invertebrates.
Slime molds are particularly interesting organisms, because they are neither unicellular nor multicellular, but something in between. They exist for most of their lives as blobs of protoplasm with many nuclei. Meiosis occurs at the end of the life cycle, when the slime mold develops “fruiting bodies” composed of haploid spores. These spores are widely dispersed, and begin their lives as haploid, single-celled organisms. Two of these unicellular organisms mate, forming a larger cell with a diploid nucleus. But from that point on, mitosis, or the separation and replication of nuclear DNA, is not accompanied by cell division. Rather the entire blob grows in size as it comes to contain multiple nuclei. The blob moves around, sending out filaments of protoplasm in various directions as it searches for food. It is in the course of this process, which seems not to be centrally coordinated, but to involve internal communication among different parts of the organism, that slime molds have succeeded in threading mazes and solving combinatorial problems. [I am referring here to myxomycetes, or “true” slime molds; as opposed to the also interesting, but vastly different, cellular slime molds].
[One might also note that Gilbert Simondon ponders at great length the question of whether animals that live in colonies, like coral, are truly individuated or not. Is each organism an individual? Or is it only the colony that is an individual? Obviously, the same question could apply to the notion of ant or bee colonies as superorganisms. But the case of slime molds is even stranger; as far as I can recall, Simondon never mentions them (please, somebody, correct me if I am wrong). Slime molds are more than individual cells, but less than differentiated multicellular organisms. Not only don’t they divide into separate cells, but they don’t differentiate into separate tissues or organs, except when they form fruiting bodies at the point of sporulation. And, as mentioned above, this differentiation takes place, and the spores become separate entities, only via meiosis. This question is related to the fact, discussed below, that slime molds do not make decisions as unified “individuals,” but only as loose, decentralized collectivities — although, again, the members of this “collective” are not separate from one another, as they are in the cases of corals and of ants.]
This brings me to the second recent article I mentioned above. It concerns “irrational decision-making” processes in slime molds. This article, by Tanja Latty and Madeleine Beekman, is of much narrower scope than Brembs’ essay; and its explicit focus is entirely cognitive. Nevertheless, I think it is relevant to the questions that, following Brembs, I am raising. Latty and Beekman created situations in which slime molds were allowed to choose between different food sources, which varied both as to how nutritious they were, and as to how illuminated they were. Slime molds prefer richer food sources to poorer ones, but they also prefer darkness to light (since they are easily harmed by exposure to bright light and ultraviolet radiation). What “preferences” would the slime molds establish, when confronted by the alternative between a rich, but brightly-lit food source, and a poorer, but dimly-lit and therefore much safer one?
With multiple trials, and the insertion of additional alternatives, the scientists determined that slime molds, like human beings and other animals, do not operate in accordance with the dictates of what has been called (in the human social sciences) “rational choice” theory. That is to say, they do not make “economically rational” choices “based on the absolute value of items” they are choosing among, but rather “use comparative valuation rules.” There are many problems with rational choice theory, and even with the amended version, “behavioral economics,” which acknowledges that people (and other organisms that make decisions) often make use of “comparative valuation rules” and other, not-strictly-rational, cognitive shortcuts. I will not go into these problems here (that would require an entire separate essay, or several); suffice it to note that these approaches have an impoverished notion of “decision,” since they regard it not as spontaneously-generated activity, but merely as an ordered selection among items on a pre-determined menu or list.
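The difference between absolute and comparative valuation can be made concrete in a few lines of code. Here is a toy sketch of my own invention (the attributes, weights, and the min-max rule are illustrative stand-ins, not Latty and Beekman's actual protocol): a chooser that values options absolutely is indifferent to what else is on the menu, while a chooser that values each attribute only relative to the range spanned by the current menu can have its preference flipped by a "decoy" option that no one would ever actually choose.

```python
def absolute_value(option, menu, w=(0.45, 0.55)):
    # "Rational choice": score each option on its own terms,
    # ignoring whatever else happens to be on the menu.
    return sum(wi * a for wi, a in zip(w, option))

def comparative_value(option, menu, w=(0.45, 0.55)):
    # Comparative valuation: score each attribute only relative to
    # the range spanned by the current menu (min-max normalization).
    score = 0.0
    for i, wi in enumerate(w):
        lo = min(o[i] for o in menu)
        hi = max(o[i] for o in menu)
        score += wi * ((option[i] - lo) / (hi - lo) if hi > lo else 0.5)
    return score

# Attributes: (food richness, darkness); higher is better on both.
A = (9, 3)   # rich food, but brightly lit
B = (3, 9)   # poorer food, but safely dim
D = (8, 1)   # "decoy": worse than A on both attributes

def choice(menu, value):
    return max(menu, key=lambda o: value(o, menu))

print(choice([A, B], absolute_value), choice([A, B, D], absolute_value))
print(choice([A, B], comparative_value), choice([A, B, D], comparative_value))
```

The absolute chooser picks B whether or not the decoy is present; the comparative chooser picks B from the two-item menu but switches to A once the irrelevant decoy D is added, even though D itself is never chosen. This is the sort of context-dependent, "irrational" preference shift that Latty and Beekman found in slime molds.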
Latty and Beekman don’t address Brembs’ question of free will, because they remain within an entirely cognitivist and behavioral-economic context. But two aspects of their experiments are nonetheless relevant here. In the first place, they argue that the presence, among slime molds, of the same limited rationality and behavioral strategies that one finds among animals with nervous systems indicates that such strategies of choice are not just “a consequence of the way brains process information,” but rather “an intrinsic feature of biological decision-making,” even when brains and neurons are not involved. Although they (wrongly, in my opinion) regard decision in exclusively cognitive terms, as a form of information processing, they do not see this “processing” as an exclusively animal-based, or neurally-based activity, but give it a much wider provenance. We know that it is taking place in slime molds and other brainless organisms, even though we do not yet understand how this happens. This suggests that the biological basis of free will is not necessarily tied to neurons and nervous systems in the way that Brembs suggests; it is a broader, or more basic, evolved feature of organisms.
The researchers state that “acellular slime moulds, like insect colonies, are collective decision makers, where the behaviour of the collective is a result of the behaviour of its underlying parts. Each slime mould is made up of many tiny pieces of slime mould, each oscillating at a frequency determined partly by the local environment, and partly by interactions with adjacent oscillators such that each oscillator can entrain those close to it.” Given this situation, and “owing to the slimy nature of acellular slime moulds, it was not possible to test [rationality] in individuals, and instead, we relied upon population-level preferences.” But there is still a weird difficulty here. The authors note that “recent work on rationality in ants,” in which each organism in a colony makes individual decisions, and the colony’s behavior as a whole is the sum of these decisions, “has led to the suggestion that organisms using collective decision-making processes may be immune to irrational decisions.” However, even if this is the case with ants in a colony, it turns out not to be the case for slime molds. Is this perhaps because a slime mold is neither a unity, nor a collection of entirely separate individual units, but something strangely in between?
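The entrainment mechanism the authors describe can be sketched with a standard model of coupled phase oscillators (a Kuramoto-style toy with invented parameters; it is not a simulation of actual Physarum physiology): each "piece of slime mold" has its own natural frequency, is nudged only by its immediate neighbors, and the chain nonetheless settles into a collective rhythm.

```python
import math

# A chain of N locally coupled phase oscillators, Euler-integrated.
N, K, dt, steps = 10, 2.0, 0.05, 4000
freqs = [1.0 + 0.05 * (i - (N - 1) / 2) / ((N - 1) / 2) for i in range(N)]
phases = [0.3 * i for i in range(N)]   # scattered starting phases

def coherence(ps):
    # Kuramoto order parameter: 1.0 means perfect synchrony.
    re = sum(math.cos(p) for p in ps) / len(ps)
    im = sum(math.sin(p) for p in ps) / len(ps)
    return math.hypot(re, im)

def step(ps):
    out = []
    for i in range(N):
        # each oscillator is pulled only by its immediate neighbors
        pull = sum(math.sin(ps[j] - ps[i]) for j in (i - 1, i + 1) if 0 <= j < N)
        out.append(ps[i] + dt * (freqs[i] + K * pull))
    return out

before = coherence(phases)
for _ in range(steps):
    phases = step(phases)
after = coherence(phases)
print(round(before, 2), round(after, 2))
```

No oscillator sees the whole; each entrains only those closest to it, yet a population-level rhythm locks in. This is roughly the sense in which a slime mold "decides" without any central coordinator.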
Another problem with rational choice theory and behavioral economic theory is that they assume separate individual “preferences” which are only summed secondarily and extrinsically. But in actuality, this is never the case. Every individual’s decisions are influenced by (even if not reducible to) the decisions of others, plus all sorts of supplemental contextual factors. As Whitehead says, in every process of decision “whatever is determinable is determined” by the situation in which the individual finds itself, the “stubborn fact” that it cannot evade; although at the same time “there is always a remainder for the decision” to be made by the actual entity itself (PR 27-28). This mixture of self-determination and dependence is a matter of degree, just like the balance between externally determined and internally self-generated action that Brembs describes. Slime molds represent an extreme ontological case, in which the contrast between internal and external definition, as well as between individual and collective determination, is pushed to its most intensely ambiguous point. This is why slime molds seem to slip in between the logic of separate individual decisions, and that of collective, but extrinsically-summed, decisions. Reducible to neither, they embody the point at which the logic of preferences-among-a-menu-of-items breaks down. And this is why Latty and Beekman’s focus on limited choice expands into something more like the indeterminacy of free will as defined by Brembs.
The second point I’d like to note from Latty and Beekman’s article is their finding that “even within a treatment group, slime moulds varied in their choices. This is particularly surprising as we controlled for weight, nutritional state and genetic differences.” In other words, even the slime molds’ compliance with “irrational” comparative valuation rules is not absolute. It is a statistical result, rather than something observed in every instance. This again suggests that there is a margin, or remainder, of indeterminacy that allows for unconstrained, spontaneous decision. The authors suggest that “some of the variability we observed arises from slight differences in the experiments’ initial conditions… These small differences in initial condition, combined with feedback via biomass recruitment mechanisms, could ultimately result in the observed variability.” This is undoubtedly the case; but I would add that, as sensitivity to initial conditions approaches a point of indiscernibility, we get closer to Brembs’ claim that “determinism versus indeterminism is a false dichotomy,” which he bases in part on observing situations of extreme sensitivity to initial conditions. As Brembs puts it, “stochasticity is not a nuisance, or a side effect of our reality. Evolution has shaped our brains to implement ‘stochasticity’ in a controlled way, injecting variability ‘at will’.” The only amendment to this that we need to cover the case of slime molds is that this evolved ability to inject variability at will is not just a property of brains, but “an intrinsic feature” (as Latty and Beekman put it) of all biological entities.
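Extreme sensitivity to initial conditions is easy to exhibit in a deterministic toy system. The logistic map below is a textbook example of chaos, nothing specific to slime molds: two runs whose starting points differ by one part in a billion soon become wholly uncorrelated.

```python
# The logistic map: a deterministic one-line dynamical system.
def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.4, 0.4 + 1e-9   # two starting points, one part in a billion apart
gap = abs(a - b)
for n in range(60):
    a, b = logistic(a), logistic(b)
    gap = max(gap, abs(a - b))
print(gap)
```

Within a few dozen iterations the billionth-part difference has been amplified to order one. The system remains strictly deterministic, yet for any finite measurement its behavior is indistinguishable from injected variability; this is one way of cashing out Brembs' claim that determinism versus indeterminism is a false dichotomy.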
I’ll end my own discussion here with a speculative epilogue that makes claims I cannot presently defend (although I am hopefully working towards them). It may be noted that research into biological free will and biological decision-making is not entirely unrelated to the questions about panpsychism raised by such analytic philosophers as Thomas Nagel, Galen Strawson, and Sam Coleman, and which I have discussed previously in this blog. For Strawson, the emergence of mentality from non-mentality is a serious problem, even though the emergence of life from non-life is not. He argues, therefore, that an incipient mentality must already exist on the level of subatomic particles. I suggest that it helps to make sense of this claim if we understand mentality in terms of “decision,” rather than in terms of consciousness or “qualia.” The evolution of biological decision making, and biological free will, might well depend upon, and make use of, an implicit potential of all matter. If decision were not already possible, then living things that actually do make decisions could not have come into existence. Rather than decision being a power of life, then, life would be a consequence of the potentiality of decision.
Splice
Panpsychism
In my last book, I wrote that Whitehead’s position, that all entities have a “mental” as well as a “physical” pole, needs to be distinguished “from the ‘panpsychism’ of which he is sometimes accused” (page 28). I now realize that this is entirely wrong; such a distinction cannot be made, because Whitehead’s position is, in a very classical sense, a panpsychist one. Moreover, panpsychism is a respectable philosophical position, and not something that anyone needs to worry about being “accused” of.
I come to this new understanding from reading David Skrbina’s work on panpsychism — the philosophical doctrine that “mentality” is in some sense a universal property of all entities in the universe, or of matter itself. Skrbina’s book, Panpsychism in the West, both argues for panpsychism as a philosophical doctrine, and gives an extended history of this doctrine. Skrbina shows that panpsychism has been a leading strand in Western thought for 2500 years, from the pre-Socratics through Spinoza and Leibniz, on to William James and Whitehead a century ago, and up to many thinkers today. The idea that everything in the world thinks, in some fashion, is far more prevalent than its “crackpot” reputation might lead us to assume.
Skrbina’s companion edited volume, Mind That Abides, contains essays on the possibilities of panpsychism by a variety of contemporary philosophers, ranging from analytic philosophers (among whom Galen Strawson is probably the best-known), through post-Whiteheadian process-oriented thinkers, to “speculative realists” along with other non-analytic metaphysicians (there are contributions from Graham Harman and Iain Hamilton Grant). Together, these volumes make a powerful case for the plausibility of panpsychism, as well as making it clear that Whitehead’s contention that all entities have some sort of incipient mentality is a central expression of the panpsychist doctrine.
Arguments for panpsychism come in many forms, and its adherents often contradict one another. But if there is a central strain to contemporary panpsychist argumentation, it is this. If we reject radical mind/body dualism, and accept materialism, physicalism, or any other form of monism, then we must face the question of how to explain the indubitable existence of mind or mentality. I am using “monism” here in its widest possible sense; I define it to include, not just scientific physicalism (the doctrine that the world is composed entirely of mass-energy, or that it is reducible to the subatomic particles described by contemporary physics), but also any form of what might be called “immanentism” (the doctrine that the world is composed of something like Spinoza’s unique substance, or of Bergson’s multiple durations, or of “experience” as it is understood in William James’ “radical empiricism”, or indeed as pure multiplicity, or as an open collection of independent objects a la Graham Harman). In other words, any philosophy that rejects supernaturalism or mind/body dualism as a way to explain the existence of mentality, must find some naturalistic, or at least immanent, way to do so.
I am trying to give as broad a definition as possible of “mind” or “mentality” as well. This may be defined as consisting in cognition, and cognitive operations, of some sort; and, I would argue, in affectivity as well. But above all mentality consists in phenomenal experience, or in what analytic philosophers call “qualia”: my sensation of the redness or hardness of some particular object, or of pain or delight, or simply of being present in the world. Phenomenal experience is often conflated with consciousness, or the state of intentionality, being-aware-of; I have reservations about this identification, which I will get to later, but the rough equation may be accepted for the moment.
Understood in any of these ways, mentality would seem to be an irreducible aspect of our own existence, at the very least — leaving open the question of what other beings might have it. The question nagging at philosophers is how to explain the seeming indubitability, or incorrigibility, of phenomenal experience. (“Incorrigibility” is what Descartes bases his entire philosophy upon. Everything that I think may be false or mistaken; but the fact that I am thinking cannot be mistaken). Cartesian dualism is the great classical solution to this dilemma, of course. Descartes has been (rightly) criticized for hundreds of years for reifying the act or fact of thinking into the form of the “I” as a thing-that-thinks, and for separating the thinking-mind from any notions of body, matter, or extension. But this doesn’t negate the urgency of his initial observation.
Few of us are willing today to take Descartes’ dualist route, however. So the question becomes: how do we explain qualia, or phenomenal experience, or consciousness, or “inner” experience, on a materialist or monist basis? Modern thinkers have tended to favor either eliminativism or emergentism. Eliminativism is a reductionist thesis; it argues that qualia, consciousness, intentionality, and phenomenal experience are merely illusions, or linguistic misunderstandings, which disappear once we understand how neurological mechanisms operate on the physical level (one can find different versions of this position in Daniel Dennett, in Thomas Metzinger, and in the Churchlands).
Emergentism argues that mentality is the epiphenomenal result of interacting physical processes that have attained a certain level of complexity, as is the case with the massive aggregations of neurons in our brains. Phenomenal experience emerges at some point in the course of evolution; it may be associated either with the existence of neurons and nervous systems in animals, or with some more complex development of the nervous system in organisms of sufficient complexity, or in vertebrates, or in mammals, or just in human beings.
Both eliminativism and emergentism can be criticized, however, for just “explaining away” mentality, rather than actually explaining it. As Whitehead says, “philosophy destroys its usefulness when it indulges in brilliant feats of explaining away.” Eliminativism doesn’t account for mentality so much as it suggests that it is too trivial or illusory to even merit being accounted for; it ignores Whitehead’s insistence that “the red glow of the sunset should be as much part of nature as are the molecules and electrical waves by which men of science would explain the phenomenon.”
Emergentism, for its part, can be accused of begging the question. It is one thing to say that certain physical properties emerge out of other physical properties (in Strawson’s example, a single molecule of H2O isn’t in itself wet). But it is another thing altogether, Strawson argues, to maintain that mentality, or experience, or phenomenality, can emerge from something that is entirely non-mental, non-experiential, and non-phenomenal.
More generally, I think that it is worthwhile to challenge our almost reflexive belief, today, in the power of emergence or self-organization. (See my previous post, “Against Self-Organization”, for more discussion of this). It’s all too easy for “spontaneous emergence” or “self-organization” to be put into play as a catch-all explanation for things that cannot be explained any other way. The emergentist thesis threatens to violate Whitehead’s ontological principle, which is that “there is nothing which floats into the world from nowhere.” Theories of emergent self-organization may well be ways of illicitly reintroducing an idea of preprogrammed finality, or of a benevolent “invisible hand,” into our understanding of events, as Jean-Jacques Kupiec has recently suggested.
Panpsychist thinkers propose, against the eliminativists, that mentality is real. Against the emergentists, they propose that mentality doesn’t just come into being out of nothing; it is always already there, no matter where you look. Mind, in some form or other, exists all the way down. Panpsychists argue that mentality, or experience, is itself a basic attribute of matter (of subatomic particles, of quanta of mass-energy, of actual occasions, of minimal differences, etc.). In other words, mentality is not separate from physicality, but coextensive with it. One might think of this, classically, in Spinozian terms (matter and mind are two attributes of the same unique substance) or in Leibnizian ones (every monad is at once material and mental, since it is both a particle of the world and a perspective upon the world). But Galen Strawson, David Skrbina, and others have reconceptualized these arguments in terms that are grounded in contemporary physics. As Strawson puts it, the “ultimates” out of which the universe is composed “are intrinsically experience-involving… All physical stuff is energy, in one form or another; and all energy, I trow, is an experience-involving phenomenon.”
This line of argument intersects in interesting ways with the arguments of the Speculative Realists. For it implies that mentality must be seen as intrinsic to the universe itself — rather than just being a feature of the way that “we” (human beings, rational minds, subjects) approach it. To restrict mentality just to human beings (and perhaps also to some other species of “higher” animals) is an unjustified prejudice, an instance of the “correlationism” denounced by Meillassoux, or the human-centeredness questioned by Harman. (This also accords with Whitehead’s frequent point that the duality of subject and object is a situational and always changing one. Every entity is a “subject” in some conditions or some relations, and an “object” in others).
In Skrbina’s anthology, both Iain Hamilton Grant and Graham Harman write about the relation between realism and panpsychism in ways that are too complicated for me to do them justice here. Grant argues for “panpsychism all the way down, that is, without exception”; but in doing so, he complicates the whole question of emergence. For his part, Harman is reserved with regards to panpsychism. He sees mentality as an inevitable component of any relationality, or interaction between objects; “objects collide only indirectly, by means of the images they present as information.” But objects are not reducible to the “information” that they transmit to other objects. Harman therefore denies the property of information, or mentality, to objects insofar as they are in themselves, and therefore to objects that do not enter into “vicarious” relations with other objects. And of course, for Harman, relationality is only incidental to, and not constitutive of, the nature of objects. Hence, for Harman, “even if all entities contain experience, not all entities have experience.” Grant’s and Harman’s articles both raise important issues that I do not have the space to pursue right now — I will have to leave them both for another occasion.
In any case, Whitehead gives his own crucial twist to the overall panpsychist argument. In Whitehead’s formulation, all “actual entities” or “actual occasions” have both a “physical” pole, and a “mental” or “conceptual” pole. He also expresses this by saying that they have both a “public” aspect and a “private” aspect. “There are no concrete facts which are merely public, or merely private. The distinction between publicity and privacy is a distinction of reason, and is not a distinction between mutually exclusive concrete facts.” Everything exists, to different degrees, both physically or publicly, on the one hand, and mentally or privately, on the other. Every occasion is inwardly mental or private, in its own process of “concrescence,” as it prehends other (previous) occasions. But every occasion is also physical or public, insofar as it enters into relations with the universe by serving as a “datum” to be prehended in turn by other occasions.
(There is thus a temporal as well as existential asymmetry between the mental and the physical, or between private and public dimensions of existence. This asymmetry has important consequences for how we understand relationality in general. In the privacy of its self-constitution, the occasion prehends, and thereby relates to, the entire universe. Publicly, as a datum, the occasion is prehended by other occasions, and functions as a relational factor. I need to work out this asymmetry in more detail — I think that it is crucial for how Whitehead is able to maintain both relationality all the way down, and the sense that an occasion is something more than just the sum of its relations).
The most crucial way in which Whitehead revises the panpsychist argument is that, for him, mentality — or what William James calls “experience” — is not equated (as it is in the work of most panpsychists) with consciousness. Photons and quarks, and stones and thermostats, all have “experiences,” which means that they do possess some sort of incipient mentality; but for Whitehead, they are probably not conscious. Even in human beings, Whitehead says, most mental processes occur unconsciously, or below the threshold of consciousness. What makes them “mental,” then? Whitehead’s notion of unconscious thought is related to, but also quite different from, both the psychoanalytic sense of the unconscious, and from cognitive science’s recognition that most cognitive processes are unaccompanied by, and often irreducible to, consciousness. Like psychoanalysis, Whitehead sees unconscious experience as having to do with “feelings” and “appetitions”, processes of action and reaction that are not merely automatic responses to stimuli; but in contrast to psychoanalysis, for Whitehead these feelings and appetitions do not necessarily involve any sort of representational activity.
For Whitehead, mentality is characterised by what he calls “conceptual feelings,” or “valuations.” These are processes in which potentialities are in some sense contrasted or weighed against one another. There is not just the perception (and perhaps the recognition) of what is. For Whitehead, such a perception and recognition is exactly identical to physical causality; to say that B physically perceives or prehends A is exactly the same thing as to say that A physically affects B, or that A is the cause of which B is the effect, e.g. in the way that one billiard ball transmits energy and motion to another billiard ball by hitting it, and causing it to move in turn. In addition to all this, Whitehead says, B also has a mental or conceptual experience of A: the experience, let’s say, of being-caused-to-move. I doubt that the billard ball is in any sense conscious; but the event of energy-transfer is a mental experience for Whitehead, because it involves the activation of a potential (precisely of a potential for movement). Mentality consists in the comparison of moving and not-moving; this comparison is the “mental pole” of the “occasion” in which billiard ball B is hit by billiard ball A and propelled into motion.
Now, the role of mentality, or experience, in the case of the billiard ball is vanishingly small, or (as Whitehead tends to put it) negligible. Nonetheless, it exists — it is at least present structurally, you might say. Experience is present potentially, but almost not at all actually. But if this is so, it is because experience is in itself the impress of potentiality. The energetic shock of being hit by another billiard ball is precisely a prehension, or an apprehension, of possibility. Possibilities, or conceptual prehensions according to Whitehead, are always perceptions of what he calls “eternal objects,” or “pure potentials” — and these, in turn, are equivalent to what other philosophers call “qualia.” The apprehension of qualia — of the red glow of the sunset, for instance — is intrinsic and irreducible, because it is felt, pleasantly or unpleasantly as the case may be, and because, insofar as it is thus felt, it implies potential and contrast. Redness-as-a-potentiality is in excess of merely being a quality or an aspect of this particular moment, this particular sunset. My sense of redness implies that this scene could perhaps change, so as not to be red after all; and also that something else could be imbued by redness as well. And my affective response to the sunset has to do with my liking or disliking of this redness, a reaction that extends into the prospect of other things being red, or of this redness itself disappearing (as it does, once the sun has entirely set).
Experience, or conceptual feeling, thus always involves a certain process of “valuation,” or evaluation. Whitehead agrees with the cognitivists in seeing that these evaluative processes are most of the time non-conscious. But he does not see evaluation as itself a “cognitive” process — it has much more to do with “appetition,” which “includ[es] in itself a principle of unrest, involving realization of what is not, and may be… All physical experience is accompanied by an appetite for, or against, its continuance.” In this way, mentality (or experience) is not just the calculation and representation of what is, but also involves a striving towards some potential novelty. As a result of this, experience always issues in some sort of decision; and for Whitehead, such decision “constitutes the very meaning of actuality.”
Experience is, as Whitehead says, irreducibly private; which means that I cannot observe anyone else’s experience aside from my own. (There may very well even be a limit as to the extent of my ability to observe my own experience — as Harman also suggests from another angle). The privacy of experience has fueled the skepticism found throughout modern Western philosophy, from Descartes to Hume, and beyond into the twentieth century. (I include, under this head, the answers to skepticism, or dissolution of its paradoxes, given by thinkers such as Wittgenstein and Cavell). But for Whitehead, the decision in which private experience culminates is also what makes it public and potentially conscious. Decision is not grounded in consciousness or cognition; rather, decision is what makes consciousness, cognition, and public relationality possible in the first place. “Feelings,” or movements of “appetition,” are the basic elements of mentality (or “inwardness,” or “qualitative experience”). Cognition, consciousness, and responsibility are consequences of this basic mentality, rather than preconditions for it. An aesthetic of decision precedes and grounds cognition and consciousness — rather than either of these being the grounds or preconditions for any process of decision. I say an “aesthetic” of decision, because it is a non-cognitive, and non-generalizable process; the problem of how decision leads from privacy to publicity, in Whitehead’s account, is a transformation of Kant’s problematic of how a singular, non-cognitive, non-conceptual aesthetic judgment can nonetheless lay claim to universality, through the process (precisely) of being made public.
I will stop here; instead of explicating this in more detail (which certainly needs to be done) I will conclude by simply juxtaposing Whitehead’s notion of experience-as-decision with some recent speculation in the physical and biological sciences. This is a continuation and expansion of some of the speculation that is already in my book.
The biologist Martin Heisenberg, in a recent article called “Is Free Will An Illusion?” makes a similar point about the “decisions” made by biological organisms. Arguing from experiments on bacteria, fruit flies, and other organisms, Heisenberg states that such organisms exhibit “behavioral output” that is independent of “sensory input”; that is to say, these organisms “actively initiate behavior” that is “self-determined,” rather than being “determined by something or someone else.” Studies of plants and slime molds, as well as bacteria and fruit flies, have isolated instances of “decision” that are not causally determined by the circumstances in which they occur, or the conditions to which they are a response.
Recognizing decision in all living organisms might seem to point to a kind of vitalism. But it would be considerably different from traditional vitalism, because it would not claim that some sort of intrinsic vital force would make living beings radically distinct from non-living things. Rather, as Whitehead says, the line between life and non-life is fuzzy, and the mentality or decisionality of life is something that is essential to life, but not exclusive to life: it extends all the way down.
Along these lines, the physicists John H. Conway and Simon Kochen propose what they call the Strong Free Will Theorem. According to Conway and Kochen, under certain conditions that arise as a result of quantum entanglement, subatomic particles respond “freely,” that is to say, non-deterministically, unconstrained by any prior physical events. If experimenters may be said to be acting “freely” when they collapse a quantum-indeterminate state by choosing which of several possible parameters they will measure, then to the same extent the particle thus measured is acting “freely” when it “chooses” which value to give this parameter. If this is correct, then even photons may be said to have a certain sort of inner “experience,” and to make a kind of “decision.”
Against Self-Organization
Life on earth is doomed, according to the biologist Peter Ward in his new book The Medea Hypothesis. This book is meant to be polemical and provocative; I lack the knowledge to evaluate its particular scientific claims. But just as a thought experiment, it is bracing.
Ward’s book is a critique of the quite popular Gaia Hypothesis, originally developed by James Lovelock, which claims that the Earth as a whole, with all its biomass, constitutes an emergent order, a self-organizing system, that maintains the whole planet — its climate, the chemical constitution of the atmosphere and the seas, etc. — in a state that is favorable to the continued flourishing of life. Essentially the Gaia Hypothesis sees the world as a system in homeostatic equilibrium — in much the same ways that individual cells or organisms are self-maintaining, homeostatic systems. Gaia is a cybernetically, or autopoietically, self-regulating system: continual feedback, among organisms and their environments, keeps the air temperature, the salinity of the sea, the amount of carbon dioxide in the atmosphere, etc., within the limits that are necessary for the continued flourishing of life.
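The homeostatic logic of the Gaia Hypothesis can be caricatured in a few lines, in the spirit of Lovelock and Watson's "Daisyworld" model (all numbers here are invented for illustration): solar forcing ramps up, biotic CO2 uptake increases with temperature, and the resulting negative feedback keeps the temperature rise far smaller than it would otherwise be.

```python
# Temperature is driven up by a steadily brightening sun; biotic CO2
# uptake rises with temperature, and less CO2 means less greenhouse
# warming: a negative feedback loop.
def run(feedback):
    co2, temps = 1.0, []
    for t in range(500):
        solar = 10.0 + 0.01 * t            # slowly increasing solar forcing
        temp = solar + 5.0 * co2           # greenhouse contribution
        if feedback:
            # biotic uptake grows with temperature, drawing down CO2
            uptake = 0.02 * max(0.0, temp - 12.0)
            co2 = max(0.0, co2 + 0.01 - uptake * co2)
        temps.append(temp)
    return temps

without = run(feedback=False)
with_fb = run(feedback=True)
print(round(without[-1] - without[0], 1), round(with_fb[-1] - with_fb[0], 1))
```

With the feedback switched off, the temperature rises in lockstep with the forcing; with it on, most of the rise is absorbed. The Medea Hypothesis amounts to the claim that the real biological feedbacks are often positive rather than negative, amplifying perturbations instead of damping them.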
Ward’s Medea Hypothesis directly contests all these claims. According to Ward, the ecosphere is not homeostatic or self-regulating; to the contrary, it is continually being driven by positive feedback mechanisms to unsustainable extremes. Most of the mass extinction events in the fossil record, Ward says, were caused by out-of-control life processes — rather than by an external interruption of such processes, such as the giant meteor hit which supposedly led to the extinction of the dinosaurs at the end of the Mesozoic. The great Permian extinction, for instance — the most catastrophic of which we have knowledge, in which 90% of all species, and 99% of all living beings, were destroyed — was caused by “blooms of sulfur bacteria in the seas,” which flourished due to greenhouse heating and poisoned the oceans and the atmosphere with increased concentrations of hydrogen sulfide, which is extremely toxic.
More generally, Ward claims that life processes have destabilizing effects, rather than homeostatic ones, upon the very environment that they rely upon for survival. This is largely because of the Malthusian basis of natural selection. Traits that give any organism a selective advantage over its rivals will spread through the gene pool, unless and until they overwhelm the environment and reach the limits of its carrying capacity. An organism that is too successful will ultimately suffer a crash from overpopulation, depletion of resources, and so on. The success of sulfur bacteria means the poisoning of all other organisms; or, to give another example, the rise of photosynthetic organisms 2 billion years ago poisoned and killed the then-dominant anaerobic microbes that had composed the overwhelming majority of life-forms up to that time.
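The Malthusian logic here can be made concrete with a toy model — my own illustrative sketch, not anything taken from Ward’s book, and all the parameter values are invented: a population whose growth degrades the very resource it depends on will overshoot and crash, rather than settling into Gaian homeostasis.

```python
# Toy positive-feedback model (illustrative only, not from Ward's book):
# a population grows while its resource lasts, but its own success
# degrades the resource faster than the resource can renew itself.

def simulate(steps=60, growth=0.5, death=0.4, depletion=0.002, renewal=0.05):
    pop, resource = 10.0, 100.0
    history = []
    for _ in range(steps):
        # births scale with resource abundance; deaths with its scarcity
        births = growth * pop * (resource / 100.0)
        deaths = death * pop * (1.0 - resource / 100.0)
        pop = max(pop + births - deaths, 0.0)
        # the larger the population, the faster the resource is degraded
        resource -= depletion * pop * resource
        resource += renewal * (100.0 - resource)   # slow natural renewal
        resource = min(max(resource, 0.0), 100.0)
        history.append(pop)
    return history

history = simulate()
print(f"peak population:  {max(history):.0f}")
print(f"final population: {history[-1]:.0f}")
```

With these made-up numbers the population booms while the resource holds out, then collapses far below its peak: success itself produces the crash, which is the Medean pattern Ward describes.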
Now, biologists in recent years have given careful attention to the evolution of cooperation and altruism as means of averting these dangers. For instance, in an environment of cooperating organisms, a cheater will outperform the cooperators, and through natural selection will eventually drive them into extinction, thus leading to an environment of cheaters who no longer have access to the benefits for all of cooperation. But this prospect can be averted, and altruism can be maintained within a group, if the cooperators evolve mechanisms to detect, and punish or otherwise discipline, the cheaters. Scenarios like this have led to something of a revival of the once-discredited notion of “group selection” (a group all of whose members benefit from cooperation will be able to outperform a group dominated by cheaters).
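The cheater-invasion scenario above can be sketched as a toy replicator model — my own sketch, with invented payoff numbers, not drawn from any particular study: without punishment the cheaters’ cost advantage compounds until cooperation vanishes, while with a punishment mechanism a sufficiently large cooperator population drives the cheaters out instead.

```python
import math

# Toy replicator model (my own sketch; the payoff numbers are invented).
# Cooperators produce a public good at a private cost; cheaters share
# the good without paying. Optionally, cooperators also fine cheaters,
# paying a small enforcement fee to do so.

def evolve(punish, generations=200, coop=0.5, s=0.1):
    benefit, cost, fine, fee = 3.0, 1.0, 2.5, 0.2
    for _ in range(generations):
        cheat = 1.0 - coop
        good = benefit * coop                      # shared by everyone
        f_coop = good - cost - (fee * cheat if punish else 0.0)
        f_cheat = good - (fine * coop if punish else 0.0)
        # exponential selection step: the fitter type's share grows
        a = coop * math.exp(s * f_coop)
        b = cheat * math.exp(s * f_cheat)
        coop = a / (a + b)
    return coop

print(f"no punishment:   cooperator share = {evolve(False):.3f}")
print(f"with punishment: cooperator share = {evolve(True):.3f}")
```

Note that, with these invented numbers, punishment only rescues cooperation if cooperators start above roughly 44% of the population; below that threshold the cheaters still win — which is one way of seeing why maintained altruism is a group-level achievement rather than an automatic outcome.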
Be that as it may, Ward does not see any evidence that cooperation or altruism can evolve on a meta-, or planetary, level. He argues, counter-intuitively but with impressive statistical analyses, that in fact the total biomass, as well as the diversity of species, has been in decline ever since the Cambrian explosion. And he suggests that life on Earth is doomed to extinction long before the heating and expansion of the sun make the Earth too hot to live on. The depletion of carbon dioxide in the atmosphere, leading to the extinction of all plant life, the decline of atmospheric oxygen, the consequent extinction of all animal life, and finally the evaporation and loss to outer space of the oceans, could happen as little as 100 million to 500 million years from now — a span far less than the 1.5 billion or 2 billion years we have before the sun roasts the planet to a cinder. The Earth will end up much like either Venus or Mars — both of which initially had conditions that were favorable to the origin and sustenance of life, but no longer do (in this regard, it would be quite interesting if we were to discover, as has often been hypothesized, that Mars once did have life but no longer does).
Now, even 100 million years from now seems too far off in the future for us to worry about today. And, as Ward points out, our current problems — for the next century or so — have to do with too much carbon dioxide in the atmosphere, even if ultimately the Earth will die from too little. Nonetheless (and regardless of whether or not the book’s arguments stand up in their scientific details, which is something, as I already said, that I am unable to judge), Ward’s replacement of Gaia (the good mother Earth) with Medea (the ultimate bad mother, who murdered her own children) makes an important point. In critiquing the Gaia Hypothesis, Ward’s book is really questioning our contemporary faith in self-organizing processes and systems.
I use “faith” here in as strong a sense as possible. The widespread contemporary belief in “self-organization” is almost religious in its intensity. We tend not to believe any more in the Enlightenment myth (as it seems to us now) of rationality and progress. We are skeptical of any sort of “progress” aside from technological innovation and improvement; and we no longer believe in the power of Reason to dispel superstition and to make plans for human betterment. The dominant ideology in these (still, despite the economic crisis) neoliberal times denounces any sort of rational planning as “utopian” and thereby “totalitarian,” an effort to impose the will on matter that absolutely resists it. This also entails a rejection of “grand narratives” (as Lyotard said in the 1980s), and an overall sense that “unintended consequences” make all willful and determinate action futile.
Instead, we turn to “self-organization” as something that will save us. The anarchist left puts its faith in self-organizing movements of dissidence and protest, with the (non-)goal being a spontaneously self-organized cooperative society. Right-wing libertarians, meanwhile, see the “free market” as the realm of emergent, spontaneous, self-organized solutions to all problems, and blame disasters like the Great Depression of the 1930s, and the current Depression as well, on government “interference” with the (allegedly otherwise self-equilibrating) market mechanism. Network theory, a hot new discipline where mathematics intersects with sociology, looks at the Internet and other complex networks as powerfully self-organizing systems, both generating and managing complexity out of a few simple rules. The brain is described, in connectionist accounts, as a self-organizing system emerging from chaos; today we try to build self-learning and self-organizing robots and artificial intelligences, instead of ones that are determined in advance by fixed rules. “Genetic algorithms” are used to make better software; Brian Eno devises algorithms for self-generating music. Maturana and Varela’s autopoiesis is taken by humanists and ecologists as the clear alternative to deterministic and mechanistic biology; but even the hardcore neodarwinists discover emergent properties in the interactions of multiple genes. Niklas Luhmann, in his turn, applies autopoiesis to human societies. This list could go on indefinitely.
Now, it is certainly true that many phenomena can be better understood in terms of networked complexity than in those of linear cause and effect. It is rare for an occurrence to be so isolated that linear models are really sufficient to explain it. And it is also certainly true that unexpected consequences, due to factors that we did not take into account (and in some cases, as in chaos theory, that were too small or insignificant to measure in advance, but that turned out to have incommensurably larger effects), interfere with our ability to make clear predictions and to impose our will. The best laid plans, etc. But still —
I think that we need to question our reflexive belief — or unwarranted expectation, if you prefer — that emergent or self-organizing phenomena are somehow always (or, at least, generally) for the best. And this is where Ward’s Medea Hypothesis, even if taken only as a thought experiment, is useful and provocative. Lovelock is almost apocalyptic in his worries about environmental disruption; his recent books The Revenge of Gaia and The Vanishing Face of Gaia warn us that human activity is catastrophically interfering with the self-regulating and self-correcting mechanisms that have otherwise maintained life on this planet. For Lovelock, human beings seem entirely separate from, and opposed to, “nature,” or Gaia. From Ward’s perspective, to the contrary, human beings are themselves a part of nature. Human-created climate change and ecological destruction are not unique; other organisms have caused similar catastrophes throughout the history of life on earth. All actions have “unintended” consequences; these consequences may well be destructive to others, and even to the actors themselves. Presumably bacteria do not plan and foresee the possible consequences of their actions, and discursively reason about them, in the ways that we do; but this does not mean that ecological catastrophes caused by bacteria should be put in a fundamentally different category than ecological catastrophes caused by human beings. [I am enough of a Whiteheadian that I am inclined to think that bacterial actions have a “mental pole” as well as a “physical pole” just as human actions do, albeit to a far feebler extent; there is definite scientific evidence for bacterial cognition.] Rather than separating destructive human actions from “nature”, Ward suggests that “nature” itself (or the organisms that compose it) frequently issues forth in such destructive actions.
The mistake is to assume that the networks from which actions emerge, and through which they resonate, are themselves somehow homeostatic or self-preserving. Rather, destructive as well as constructive actions can be propagated through a network — including actions destructive of the network itself.
Of course, on some level we are already aware of this destructive potential — as is witnessed in discussions of the propagation of both biological and computer viruses, for instance. Yet somehow, we tend to cling to the idea that positive self-organization somehow has precedence. And this idea tends to arise especially in discussions that cross over from biology to economics. Both Darwinian natural selection and economic competition tend to be celebrated as optimizing processes. Stuart Kauffman, for instance, the great champion of “order for free,” or emergent, self-organizing complexity in the life sciences, has no compunctions about claiming that his results apply to the capitalist “econosphere” as well as to the biosphere (see his Reinventing the Sacred, chapter 11). The highly esteemed futurist Kevin Kelly, a frequent contributor to Wired magazine, has long celebrated network-mediated capitalism, analogized to biological complexity, as a miracle of emergent self-organization; just recently, however, he has praised Web 2.0-mediated “socialism” in exactly the same terms.
But the most significant and influential thinker of self-organization in the past century was undoubtedly Friedrich Hayek, the intellectual progenitor of neoliberalism. For Hayek, any attempt at social or economic planning was doomed to failure, due to the inherent limitations of human knowledge, and the consequent prevalence of unintended consequences. In contrast, and inspired by both cybernetics and biology, Hayek claimed that the “free market” was an ideal mechanism for coordinating all the disparate bits of knowledge that existed dispersed throughout society, and negotiating them towards an optimal outcome. Self-organization, operating impersonally and beyond the ken of any particular human agent, could accomplish what no degree of planning or willful human rationality ever could. For Hayek, even the slightest degree of social solidarity or collective planning was already setting us on “the road to serfdom.” And if individuals suffer as a result of the unavoidable inequities of the self-organizing marketplace, well, that is just too bad – it is the price we have to pay for freedom and progress.
Hayek provided the rationale for the massive deregulation, and empowerment of the financial sector, of the last thirty years — and for which we are currently paying the price. But I have yet to see any account that fully comes to terms with the degree to which Hayek’s polemical argument about the superiority and greater rationality of emergent self-organization, as opposed to conscious will and planning, has become the very substance of what we today, in Europe and North America at least, accept as “common sense.” Were the anti-WTO protestors in Seattle a decade ago, for instance, aware that their grounding assumptions were as deeply Hayekian as those of any broker for Goldman Sachs?
I don’t have much in the way of positive ideas about how to think differently. I just want to suggest that it is high time to question our basic, almost automatic, assumptions about the virtues of self-organization. This doesn’t mean returning to an old-fashioned rationalism or voluntarism, and it doesn’t mean ignoring the fact that our actions always tend to propagate through complex networks, and therefore to have massive unintended consequences. But we need to give up the moralistic conviction that somehow self-organized outcomes are superior to ones arrived at by other means. We need to give up our superstitious reverence for results that seem to happen “by themselves,” or to arrive “from below” rather than “from above.” (Aren’t there other directions to work and think in, besides “below” and “above”?)
Whitehead says that every event in the universe, from the tiniest interaction of subatomic particles up to the most complex human action, involves a certain moment of decision. There are no grounds or guidelines for this decision; and we cannot characterize decision in “voluntaristic” terms, because any conscious act of will is a remote consequence of decision in Whitehead’s sense, rather than its cause. Decisions are singular and unrepeatable; they cannot be generalized into rules. But all this also means that we cannot say that decision simply “emerges” out of a chaotic background, or pops out thanks to the movement from one “basin of attraction” to another. No self-organizing system can obviate the need for such a decision, or dictate what it will be. And decision always implies novelty or difference — in this way it is absolutely incompatible with notions of autopoiesis, homeostasis, or Spinoza’s conatus. What we need is an aesthetics of decision, instead of our current metaphysics of emergence.
Reinventing the Sacred (Stuart Kauffman)
Stuart A. Kauffman’s Reinventing the Sacred: A New View of Science, Reason, and Religion recapitulates many of the ideas about the role of emergence in biology that were worked out in Kauffman’s earlier books (At Home in the Universe and Investigations), but also tries to place these ideas within a broader philosophical focus. Ultimately, Kauffman hopes to repair the breach between reason and emotion, or between science and culture, or between a naturalistic worldview and one that emphasizes spirituality.
It’s really a question of how we get there from here. Kauffman, who has long been associated with the Santa Fe Institute, draws upon complexity theory in order to elucidate the role of emergence in biological processes. Working with computer simulations rather than with actual organisms, he has sought to show how, given the right conditions, autocatalytic loops might have emerged out of a primordial soup of organic chemicals, and how such a process might have contributed to the origin of life. He has pioneered the idea that living organisms, and the environments they interact with, might exist in a zone of “criticality” in between excessive stability, on the one hand, and excessive chaotic tendencies, on the other. And he argues that the emergence of spontaneous, self-generated order — “order for free” — plays a major role in evolution, alongside natural selection. All these themes from Kauffman’s earlier books are recapitulated in the course of Reinventing the Sacred.
Kauffman is thus one of the few scientists who challenges the neodarwinist consensus that is endorsed by the overwhelming majority of contemporary biologists. Alongside Kauffman, one could also list Lynn Margulis (theories about the role of symbiosis in evolution), Stephen Jay Gould (both for punctuated equilibrium, and for his insistence, together with Richard Lewontin, on the importance of exaptation), Susan Oyama and her colleagues (Developmental Systems Theory), Humberto Maturana and Francisco Varela (autopoiesis), James Lovelock (the Gaia hypothesis), Jean-Jacques Kupiec and Pierre Sonigo (who deploy Darwinian selectionism against genetic determinism). One might also mention recent attempts, from within the neodarwinist framework, to rehabilitate the idea of group selection (e.g. David Sloan Wilson), to insist upon the continuing importance of embryology and development, rather than seeing these as a mere matter of implementing what is already coded in the DNA (e.g., the work of Mary Jane West-Eberhard on developmental plasticity, and other work in so-called “Evo-Devo”), and to show the importance of non-adaptive “genetic drift” (e.g. Michael Lynch). These numerous strands of recent biological theory differ greatly among themselves; and they also differ in terms of the degrees to which they are reconcilable with, or in opposition to, mainstream neodarwinism. Also, these strands are not themselves all mutually compatible; and it is too early to judge the extent to which any of them stand or fall. But together they point to the fact that the neodarwinian synthesis has not altogether disposed of philosophical questions about “life.” It is possible to take issue with neodarwinist reductionism without thereby slipping into vitalism or creationism. Darwin’s legacy remains richer and stranger than is accounted for in current mainstream discourses of genetic determinism and evolutionary psychology.
Kauffman is one of those scientists who strongly insists that the neodarwinian synthesis leaves far too much out of account. Reinventing the Sacred moves from biological speculations to a broader attack on the very notion of scientific reductionism. Kauffman insists that biological emergence (and other forms of emergence in the natural and social/cultural worlds, for that matter) leads to the existence of phenomena that cannot be accounted for or predicted on the basis of physical laws alone. Nothing in biology contradicts the laws of physics; but the biological world does not follow from the laws of physics in themselves, and cannot entirely be described or understood in terms of those laws. Even in principle, a perfect knowledge of the positions and velocities of all the particles in the universe (Laplace’s demon) would not suffice to determine the future. For the future is open and unpredictable. The universe is characterized by a “persistent creativity,” operating on all scales and in all contexts, but especially where there is life. This creativity cannot be accounted for in terms of natural laws, and elementary particles and forces. It will not be comprehended within whatever supposed “theory of everything” the physicists manage to come up with (if they ever do). Kauffman is arguing very much in the tradition of Bergson and Whitehead (though, unfortunately, he never mentions these thinkers, and doesn’t seem to know anything about them), and Ilya Prigogine.
Reinventing the Sacred is mostly concerned with “breaking the Galilean spell” that has held us in its thrall for something like four hundred years. Even complexity theory, with its understanding of “deterministic chaos,” involving abrupt, nonlinear changes from one phase state or basin of attraction to another, does not break with the logic of linear causality and mechanistic determinism. It is still “fully lawful” (in the sense of scientific laws — 141). Kauffman claims, however, that what he calls “Darwinian preadaptation” — by which he means pretty much the same thing as Gould and Lewontin do by exaptation, a word that Kauffman oddly does not use — does indeed break with such a logic. In taking already-existing phenotypic features and detourning them to new uses, organisms explore what Kauffman calls the “adjacent possible,” and thereby expand the range of actuality in unforeseen and unforeseeable ways. For “Darwinian preadaptations appear to preclude even sensible probability statements” (139). This is because judging probabilities requires knowing at least the “sample space” within which all possible outcomes are contained. But biological innovation (and cultural innovation as well) changes the very shape of this space itself. It doesn’t just choose among already-existing possibilities, but changes or expands what is possible.
I think that a lot of this resonates with Whitehead’s speculations on creativity and innovation, and with Deleuze’s notion of the virtual or potential (and how it differs from the merely possible). But this in turn brings up the entire question of how to relate science and philosophy. Whitehead and Deleuze are opposed, as Kauffman is, to scientific reductionism: that is to say, they are opposed to the claim that the reduction of mental experiences to neural firings, and of physical phenomena to elementary particles and forces is all there is. As I say in my Whitehead book:
Against all reductionism, Whitehead insists that “we may not pick and choose. For us the red glow of the sunset should be as much part of nature as are the molecules and electrical waves by which men of science would explain the phenomenon” (1920/2004, 29). The phenomenologist only considers the red glow of the sunset; the physicist only considers the mechanics of electromagnetic radiation. But Whitehead insists upon a metaphysics that embraces both. For “philosophy can exclude nothing” (1938/1968, 2).
The problem is not with scientific explanations in themselves, whose truth we can and should accept. The problem is only with thinking that these lower-level scientific explanations are ultimate and exhaustive, so that “higher-level” sorts of explanation can be entirely reduced to them — as E. O. Wilson claims with his notion of consilience, or as Paul and Patricia Churchland do with their notion of eliminative materialism. In other words, the problem comes when the low-level scientific explanation is accepted as what really is the case, and everything else is regarded as illusion or mere appearance. (This ironically reinstates the old reality/appearance distinction that scientific empiricism was supposed to get rid of once and for all). Now, it is unclear to me that this really makes much of a difference to the way that working scientists actually do their research. It only comes up when those scientists sit back and reflect upon their research in a non-experimental context — or when philosophers like the Churchlands, or armchair cultural speculators like myself, ask meta-questions about such research. But such speculations are themselves inevitable and unavoidable — it is impossible to separate “pure science” from them. The result is, we are left in a kind of circle. And Kauffman’s generous speculations are certainly welcome in contrast to Wilson’s “scientific imperialism,” his reductionist attempt to subordinate all other forms of understanding and inquiry to his particular kind of science.
At the same time, of course, we need to beware of the trap of taking Deleuze or Whitehead as an absolute starting point, and judging scientific theories on the basis of how well they conform to an already-existing philosophical argument. Both Whitehead and Deleuze were keenly interested in the science of their times, and both of them sought to create a metaphysics that was in tune with that science. This was (is) a two-way process. Both Whitehead and Deleuze insist that there is no such thing as positivistic, value-free science; all empirical research presupposes a background of theories, assumptions, and already-accepted facts. There is no physics free of metaphysics. Whitehead and Deleuze therefore both strive to provide a metaphysics that will be adequate to the needs of modern science; but this does not mean that they claim, in the Kantian manner, to stipulate in advance the necessary and sufficient conditions for all knowledge (scientific or otherwise). This is part of what it means to say that they are (as Deleuze put it) “transcendental empiricists” rather than Kantian transcendental idealists. As the metaphysical process of what Whitehead calls generalization or speculation proceeds, it must continually test itself and modify itself in accordance with the developments of scientific knowledge (and other sorts of knowledge), even as it resists the exclusivist or imperialist claims that arise from, or are made on behalf of, these developments of knowledge.
To get back to Kauffman: given his interest in the role of creativity in the universe, and particularly in life processes, it’s really too bad that he seems entirely unaware of Whitehead. It is all too easy for me to translate Kauffman’s formulations into Whiteheadian terms; but I’d like to get more of a sense of how Kauffman’s speculations might allow us to modify or ‘update’ Whitehead. The weakest aspect of Kauffman’s book is his attempt to move from science to philosophy: there is a sense in which his philosophical musings are just too simplistic, or “naive.” When he gets beyond the technical details of his computer simulations, Kauffman is way too eager just to make a “leap of faith” into an embrace of teleological and spiritual concerns. There’s a lot of blather in the book about the wisdom of past civilizations, and the need to construct a “global ethic,” and far too little a sense of what it means to engage in speculation.
Now, when I say that Kauffman’s claims are largely speculative, this is not a criticism, because I do not share the positivist sense that speculation is unacceptable and that we must confine ourselves to hard empirical evidence and legitimate induction from such evidence. As Whitehead says, “the Baconian method of induction… if consistently pursued, would have left science where it found it.” A certain amount of speculation is necessary, if we are to discover or invent anything at all. Kauffman is indeed unique among contemporary scientists because of the degree to which his research has been almost entirely speculative — his work has largely consisted, as I have already noted, in running computer simulations of biological processes, rather than looking at any actual organisms. This is precisely why his claims about emergent order have been ignored, rejected, or dismissed as incomprehensible by the vast majority of biological researchers. But it’s also why his suggestions are important, for any effort actually to think the biological in terms that go beyond genetic determinism and strict adaptationism.
However, some of Kauffman’s speculations in Reinventing the Sacred are just too tenuous, too lame. This is especially the case when he spends a chapter proposing a quantum model of the brain — one that differs from Roger Penrose’s better-known proposal, but that shares with it an argument that quantum indeterminacy could account for brain processes that are non-deterministic, and (especially) non-algorithmic. This is a case where Kauffman protests way too much — every step in his tortuous line of reasoning is qualified by statements like, “the hypothesis… is not at all ruled out” (211), certain factors “may remain available” according to his particular scenario (212), “perhaps something similar” is happening in a completely different realm from the one in which a particular kind of pattern has been noted (214), “it may always be the case” that such and such a process can take place (219), and so on at embarrassing length. In effect, Kauffman is constructing a Rube Goldberg machine to account for a process — let’s call it “decision” or “choice” — that classic determinism cannot explain, but only explain away. This seems utterly misguided to me — it makes far more sense just to accept, as a primary datum, recent observations about, for instance, fruit flies making unconstrained, undetermined decisions, than to go through Kauffman’s barely plausible chain of inferences and pleadings in order to allow for such a possibility.
The trouble, in a case like this, is that Kauffman’s speculations are simply not speculative enough. There needs to be some middle way between Kauffman’s appeal to a tortuous chain of reasoning on the one hand, and delirious invocations of cosmic forces on the other. It is especially noteworthy, and symptomatic, that Kauffman pulls off his explanation by appealing to quantum mechanics. It strikes me that the appeal to quantum indeterminacy, to give a scientific explanation of some otherwise unaccountable phenomenon, is a sort of get-out-of-jail-free card to be used on all occasions when one cannot come up with anything else, or anything better. The same thing happens, for instance, in Greg Egan’s novel Teranesia — except Egan pulls out his quantum trump card in defense of neodarwinist reductionism, while Kauffman does so in defense of anti-reductionism.
In any case, for all that Kauffman is a speculative biologist (and, again, I am using this in a laudatory rather than dismissive sense), he fails to realize how his own mode of speculation is itself an example of the creative process that he sees at work throughout the biosphere, and perhaps the entire physical universe. Even though he has in effect abandoned the “scientific method,” he remains overly attached to “hard” factual claims, rather than understanding the continual play between what Whitehead calls “stubborn fact” and the way that, as Whitehead also says, “there is not a sentence, or a word, with a meaning which is independent of the circumstances under which it is uttered”, so that “every proposition proposing a fact must, in its complete analysis, propose the general character of the universe required for that fact.” This is why science must always be accompanied by robust speculation, whether in the form of metaphysics or in that of science fiction.
The Head Trip: consciousness and affect
I’ve been reading Jeff Warren’s The Head Trip: Adventures on the Wheel of Consciousness, basically on the recommendation of Erik Davis. It’s a good pop-science-cum-therapy book, which explores basic modes of conscious experience, both nocturnal and diurnal, and combines accounts of what scientific researchers and therapists are actually doing with a narrative of Warren’s own subjective experiences with such modes of consciousness-alteration as lucid dreaming, hypnotic trances, meditation, neurofeedback, and so on. Warren maps out a whole series of conscious states (including ones during sleep), and suggests that consciousness in general (to the extent that there is such a thing “in general”) is really a continuum, a mixture of different sorts of mental activity, and different degrees of attentiveness, including those at work during sleep. These various sorts of conscious experience can be correlated with (but not necessarily reduced to) various types of brain activity (both the electric activity monitored by EEGs and the chemical activity of various neurotransmitters; all this involves both particular “modules” or areas of the brain, and systematic patterns running through the entire brain and nervous system).
The Head Trip is both an amiable and an illuminating book, and I really can’t better Erik Davis’ account of it, which I encourage you to read. Erik calls Jeff Warren “an experiential pragmatist in the William Jamesian mode,” which is both high praise and a fairly accurate description. Warren follows James in that he insists upon conscious self-observation, and looks basically at what James was the first to call the “stream of consciousness.” Like James, Warren insists upon the pragmatic aspect of such self-observation (what our minds can do, both observing and being observed, in all its messy complexity), rather than trying to isolate supposedly “pure” states of attention and intention the way that the phenomenologists do.
At one point, Warren cites Rodolfo Llinas and D. Pare, who argue that consciousness is not, as James claimed, “basically a by-product of sensory input, a tumbling ‘stream’ of point-to-point representations,” because it is ultimately more about “the generation of internal states” than about responding to stimuli (p. 138). But this revised understanding of the brain and mind does not really contradict James’ overall pragmatic style, nor his doctrine of “radical empiricism.” James’ most crucial point is to insist that everything within “experience” has its own proper reality (as opposed to the persistent dualism that distinguishes between “things” and “representations” of those things). Not the least of Warren’s accomplishments is that he is able to situate recent developments in neurobiological research within an overall Jamesian framework, as opposed to the reductive dogmas of cognitivism and neural reductionism.
Nonetheless, what I want to do here is not to talk about Warren’s book, but rather to speculate about what isn’t in the book: namely, any account of emotion or of affect. Shouldn’t we find it surprising that in a book dedicated to consciousness in all its richness and variety, there is almost nothing about fear, or anger, or joy, or shame, or pride? (There’s also nothing about desire or passion or lust or erotic obsession: I am not sure that these can rightly be called “emotions,” but they also aren’t encompassed within what Warren calls “the wheel of consciousness”). There are some mentions of a sense of relaxation, in certain mental states; and of feeling a sort of heightened intensity, and even triumph, when Warren has a sort of breakthrough (as when he finally succeeds in having a lucid dream, or when his neurofeedback sessions are going well). Correlatively, there are also mentions of frustration (as when these practices don’t go well — when he cannot get the neurofeedback to work right, for instance). But that’s about it, as far as the emotions are concerned.
The one passage where Warren even mentions the emotions (and where he briefly cites the recent work on emotions by neurobiologists like Antonio Damasio and Joseph LeDoux) is in the middle of a discussion of meditation (pp. 309ff.). The point of this passage is basically to draw a contrast between Western rationalism, which has simply tried to repress (in a Freudian sense) the emotions, and the Buddhist tradition, which has instead tried to “cultivate” them (by which he seems to mean something like what Freud called “sublimation”). Warren oddly equates any assertion of the power of the emotions with evolutionary psychology’s doctrine that we are driven (or “hardwired”) by instincts that evolved during the Pleistocene. The existence of neuroplasticity (as recognized by contemporary neurobiologists) effectively refutes the claims of the evolutionary psychologists — this is something that I entirely agree with Warren about. But Warren seems thereby to assert, as a corollary, that emotions basically do not matter to the mind (or to consciousness) at all — and this claim I find exceedingly bizarre. Warren seems to be saying that Buddhist meditation (and perhaps other technologies, like neurofeedback, as well) can indeed, as it claims, dispose of any problems with the emotions, because it effectively does “rewire” our brains and nervous systems.
What is going on here? I have said that I welcome the way that Warren rejects cognitivism, taking in its place a Jamesian stance that refuses to reject any aspect of experience. I find it salubrious, as well, that Warren gives full scope to neurobiological explanations in terms of chemical and electrical processes in the brain, without thereby accepting a Churchland-style reductionism that rejects mentalistic or any other non-reductive sort of explanatory language. Warren thus rightly resists what Whitehead called the “bifurcation of nature.” Nonetheless, when it comes to affect or emotion, some sort of limit is reached. The language that would describe consciousness from the “inside” is admitted, but the language that would express affective experience is not. I think that this is less a particular failing or blind spot on Warren’s part, than it is a (socially) symptomatic omission. Simply by omitting what does not seem to him to be important, Warren inadvertently testifies to how little a role affect or emotion plays in the accounts we give of ourselves today, accounts both of how our minds work (the scientific dimension) and of how we conceive ourselves to be conscious (the subjective-pragmatic dimension).
Some modes of consciousness are more expansive (or, to the contrary, more sharply focused) than others; some are more clear and distinct than others; some are more bound up with logical precision, while others give freer rein to imaginative leaps and to insights that break away from our ingrained habits of association. But in Warren’s account, none of these modes seem to be modulated by different affective tones, and none of them seem to be pushed by any sort of desire, passion, or obsession. Affects and desires would seem to be, for Warren, nothing more than genetically determined programs inherited from our reptilian ancestors (and exaggerated in importance by the likes of Steven Pinker) which our consciousness largely allows us to transcend.
Another way to put this is to say that Warren writes as if we could separate the states (or formal structures) of attentiveness, awareness, relaxation, concern, focus, self-reflection, and so on, from the contents that inhabit these states or structures. This is more or less equivalent to the idea — common in old-style AI research — that we can separate syntactics from semantics, and simply ignore the latter. Such a separation has never worked out in practice: it has entirely failed in AI research and elsewhere. And we may well say that this separation is absurd and impossible in principle. Yet we make this kind of separation implicitly, and nearly all the time; it strikes us as almost axiomatic. We may well be conscious of “having” certain emotions; but we cannot help conceiving how we have these emotions as something entirely separate from the emotions themselves.
It may be that consciousness studies and affect studies are too different as approaches to the mind (or, as I’d rather say, to experience) to be integrated at all easily. Indeed, in this discussion I have simply elided the difference between “affect” and “emotion”: the terms are sometimes used more or less interchangeably, but I think any sort of coherent explanation requires a distinction between the two. Brian Massumi uses “affect” to refer to the pre-personal aspects (both physical and mental) of feelings, the ways that these forces form and impel us; he reserves “emotion” to designate feelings to the extent that we experience them as already-constituted conscious selves or subjects. By this account, affects are the grounds of conscious experience, even though they may not themselves be conscious. Crucial here is James’ sense of how what he calls “emotions” are visceral before they are mental: my stomach doesn’t start churning because I feel afraid; rather, I feel afraid because my stomach has started churning (as a pre-conscious reaction to some encounter with the outside world, or to some internally generated apprehension). The affect is an overall neurological and bodily experience; the emotion is secondary, a result of my becoming-conscious of the affect, or focusing on it self-reflexively. This means that my affective or mental life is not centered upon consciousness; though it gives a different account of non-conscious mental life than either psychoanalysis (which sees it in terms of basic sexual drives) or cognitive theory (which sees non-conscious activity only as “computation”).
There’s more to the affect/emotion distinction than James’ account; one would want to bring in, as well, Silvan Tomkins’ post-Freudian theory of affect, Deleuze’s Spinozian theory of affect, and especially Whitehead’s “doctrine of feelings.” Rather than go through all of that here, I will conclude by saying that, different as the field of consciousness studies (as described by Jeff Warren) is from cognitivism, they both ultimately share a sense of the individual as a sort of calculating (or better, computational) entity that uses the information available to it in order to maximize its own utility, or success, or something like that. Such an account — which is also, as it happens, the basic assumption of our current neoliberal moment — updates the 18th century idea of the human being as Homo economicus into an idea of the human being as something like Homo cyberneticus or Homo computationalis. For Warren, this is all embedded in the idea that, on the one hand, our minds are self-organizing systems, and parts of larger self-organizing systems; and on the other hand, that “we can learn to direct our own states of consciousness” (p. 326). Metaphysically speaking, we are directed by the feedback processes of an Invisible Hand; instrumentally speaking, however, we can intervene in these feedback processes, and manipulate the Hand that is manipulating us. The grounds for our decision to do this — to intervene in our own behalf — are themselves recursively generated in the course of the very processes in which we determine to intervene. The argument is circular; but, as with cybernetics, the circularity is not vicious so long as we find ourselves always-already within it. This is in many ways an enticing picture: if only because it is the default assumption that we cannot help starting out with. And Jeff Warren gives an admirably humane and expansive version of it. Still, I think we need to spend more time asking what such a picture leaves out.
And for me, affect theory is a way to begin this process.