Bad Science

It’s been too long since I last unpacked an example of bad science (or of bad reporting of science; it is sometimes difficult to say which). Jacalyn sent me the URL for an article on a new study: “Lesbians’ brains respond like straight men.” Now, what does this mean, exactly? In what way do lesbians’ brains respond like the brains of straight men? First of all, we are given no indication of how the study defined “lesbians” and “straight men”; if it relied on people’s own self-definitions, that is well and good from a sociological standpoint, but it is far too vague to stand as a biological category. Do they mean to imply that, if somebody’s self-definition changes, say if I stop defining myself as a straight man (which I am very often tempted to do, because of all the baggage that comes with the phrase “straight man,” baggage which goes far beyond the accurate but fairly vague observation that most of the time I am more likely to be turned on sexually by women than by men), that person’s hormones suddenly change? But that is getting ahead of myself. We also have to ask: when lesbians’ brains “respond” like those of straight men, to what are they responding? Well, the article says, “lesbians’ brains react differently to sex hormones than those of heterosexual women… Lesbians’ brains reacted somewhat, though not completely, like those of heterosexual men.” Now, isn’t there something circular going on here? “Sex hormones” are defined later in the article as “pheromones” — but this is itself a category about which far too little is known, and which is so dubious, especially in human beings, that making generalizations on its basis is unacceptable to begin with; the article itself notes in passing that “whether humans respond to pheromones has been debated.” But even if we accept the pheromones, all the research is really saying is that lesbians and “straight men” are both more sexually attracted to women than to men, which is of course the tautology that the study presupposed in the first place. Oh, I should add that the physiological correlate of this arousal is that the scent which someone finds sexually arousing is processed in a different part of the brain (the hypothalamus) from the normal scent-processing areas. So again all we have is a tautology, the repetition of what was presupposed at the beginning. Of course the arousal takes place on a subconscious level, through the scents of the pheromones themselves, without the test subjects knowing which gender’s scent they are smelling. But this was also presupposed in the initial design of the study; it’s just another tautology. (I should also note that the idea that biological males and females have completely separate pheromonal scents, scents which supposedly do not overlap, or show enough variation to undo the rigid binary, is also being unwarrantedly presupposed.) However, even this is not quite accurate as a basis for the alleged “similarity” between “lesbians” and “straight men.” The report notes that “ordinary odors were processed in the brain circuits associated with smell in all the volunteers.” However, “in heterosexual males the male hormone was processed in the scent area but the female hormone was processed in the hypothalamus, which is related to sexual stimulation.” Whereas in lesbians, “both male and female hormones were processed the same, in the basic odor processing circuits.” But wait: if this is the case, then the entire claim of the study, that lesbian brains respond somewhat like straight male brains, collapses.
Lesbian brains (I have given up putting in scare quotes, out of sheer laziness; of course they should be present each time any of these pseudo-categories is mentioned) are similar to straight male brains in terms of what both categories of brains share with straight female brains — they all process the scents of the gender they aren’t aroused by in the ordinary “odor processing circuits.” But lesbian brains, unlike straight male brains, do not process the odors of the gender they are attracted to in the hypothalamus. Therefore, by the study’s own terms, and even accepting its dubious categories, the entire parallel between “lesbians” and “straight men” collapses.
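Since the reported patterns are easy to lose track of in prose, here is a schematic tabulation of what the article claims, taking its dubious categories at face value for the sake of argument (a toy sketch of my own; the straight-women row is the article’s implied baseline, not a quoted finding):

```python
# Schematic tabulation of the processing patterns the article reports.
# "odor circuits" = ordinary odor-processing areas; "hypothalamus" = the
# region the study associates with sexual arousal. The straight-women row
# is implied by the article, not quoted from it.
patterns = {
    "straight men":   {"male pheromone": "odor circuits", "female pheromone": "hypothalamus"},
    "straight women": {"male pheromone": "hypothalamus",  "female pheromone": "odor circuits"},
    "lesbians":       {"male pheromone": "odor circuits", "female pheromone": "odor circuits"},
}

# Where do lesbians actually match straight men?
for scent in ("male pheromone", "female pheromone"):
    same = patterns["lesbians"][scent] == patterns["straight men"][scent]
    print(f"{scent}: lesbians match straight men? {same}")
# male pheromone: True -- but this is the trivial, shared-with-everyone case.
# female pheromone: False -- the one cell where a straight-male-like arousal
# response would have to show up is precisely where it doesn't.
```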

I could go on, but this is probably enough. The largest claims the study makes, according to the article, are that “there are biological factors that contribute to sexual orientation,” and that “homosexuality has a physical basis and is not learned behavior.” The first of these is something that nobody of any sense would ever doubt, since to doubt it you would have to think that the mind is entirely disconnected from the body, to a degree that even Descartes never maintained. And the second statement is utterly nonsensical, since any behavior whatsoever has a “physical basis” by definition (if it had no physical basis, in what sense would it even exist? what would it mean to observe it?), regardless of whether it is “learned” or innate, or something else (I am not convinced that learned vs. innate is a meaningful duality to begin with, since there is so much overlap between the two terms, and since you have to define them far too broadly in order to eliminate other possibilities and to fit every observation onto one side of the duality or the other).

The thing is, a “scientific” report or study this idiotic, this devoid of any meaningful terms or real scientific basis, can be found in the press every week. All it shows, basically, is that people (scientists and news reporters both, and probably the general public) “want to believe” that everything in human life has a “genetic” basis (another notion far too ill-defined to pass muster), and that the “common sense” prejudices of our culture are true. At the start of modern Western science, empiricists mocked the old philosophy’s explanation of opium’s power to put people to sleep as due to its alleged “dormitive virtue.” But today human genetics seems itself to be entirely based upon the positing and proclaiming of just such imaginary virtues and essences.

The next time I post, this blog will return to its usual programming.

Creationism

In the wake of Bush’s statement endorsing the teaching of “intelligent design” theory alongside Darwinian evolutionary theory in the schools, I saw a debate on CNN between somebody from the Discovery Institute (the foundation behind the recent push for “intelligent design”) and Michael Shermer of Skeptic magazine. I was appalled. The Discovery Institute guy sounded open-minded and reasonable, with all his talk about new research and intellectual flexibility — though of course everything he said was pure garbage. On the other hand, Shermer was pompous and overbearing, the condescending voice of Authority, lecturing the public about the importance of peer-reviewed articles in prestigious journals, and actually saying at one point that only allowing the expression of ideas that have been properly peer-reviewed is “how we do things in a free society.” (He also kept on referring to “the marketplace of ideas,” evidently not realizing that the “marketplace of ideas,” like any other marketplace, has no concern for the truth or rationality he was otherwise trumpeting).

If you didn’t know anything about the subject, whom would you believe? Shermer’s performance justified everything Isabelle Stengers has said about the imperialist arrogance of official spokespeople for Big Science. Though ostensibly he was talking about the importance of rationality and of the objective gathering and weighing of empirical evidence, his affect was one of argument from authority, as if to say: “how dare you contest what we, the enlightened elite, have determined to be the case!” (Not to mention that, as an academic myself, I have ample experience with “peer review,” and I know how corrupt and dishonest it is). With supporters like this, Darwin doesn’t need enemies. Shermer, just like the Democratic Party, almost seems to go out of his way to justify all the stereotypes the Republicans and fundamentalists have been promulgating for years now about “liberal elitism” and liberals’ contempt for the common person. After hearing advocates for Science like Shermer, most Americans will find Bush to be speaking plausibly when he says that “intelligent design” ought to be taught alongside evolution because “part of education is to expose people to different schools of thought.”

In reality, of course, teaching “intelligent design” is an historical falsification. It is equivalent to teaching the theories of people who deny that the Holocaust ever happened, and of people who say that blacks were treated kindly and humanely under slavery. I doubt that even Bush would endorse Holocaust denial as a benevolent example of exposing people to different schools of thought. But the argument is never made along these lines, not in the courts, and not in the statements of any of the scientists who oppose creationism.

Of course, giving any legitimacy at all to “intelligent design” is actually a form of religio-political indoctrination; but recognizing this forces us, too, to recognize the unpleasant fact that no form of education is entirely devoid of indoctrination. (I am referring not only to formal education in the schools, but also to things like teaching my 3-year-old daughter to use the potty and to be polite and show consideration for other people). There’s no easy way out of this dilemma; it brings us to the limits of secular humanism/liberalism, which is the dogma I prefer over all others, except for the fact that it refuses to admit that it is one dogma among others, and which (like all dogmas) can only establish itself by vanquishing others.

I have no conclusions here, no suggestions as to how we can better defend historical truth against imposture (to give the whole question a more 18th-century turn of phrase than perhaps it merits in these postmodern times). Currently science is losing the battle to religious fanaticism, and to a large extent this is science’s own fault (just as all the recent Republican victories are the Democrats’ own fault). Which probably just means that we are doomed (as I already said after Bush’s re-election).

Teranesia

Greg Egan is one of the finest contemporary writers of “hard” SF, which is to say science fiction that strongly emphasizes the science, trying to keep the science coherent and to extrapolate plausibly (at least) from currently existing science and technology. Most of Egan’s books involve physics and computer science, speculating about such things as artificial intelligence and quantum mechanics. Teranesia is something of an exception in his work, as it deals with biology, takes place in the very near (instead of far distant) future, stresses character development and emotion — especially guilt and shame — more than his other novels, and has some directly political themes (Egan touches on religious and ethnic strife in Indonesia, with its heritage of both colonial exploitation and military misrule and corruption, as well as on Australia’s shameful mistreatment of asylum seekers — a matter on which he expands in his online Afterword to the novel). I read Teranesia mostly because I am looking at “bioaesthetics,” and at “the biological imagination” (though I wish I had a better phrase for this); I was curious to see what Egan would do with biology.

The novel worked for the most part in terms of plot, characters, and emotion; but the biology was indeed the most interesting thing about it. The major conceit of Teranesia is the appearance of strange mutations, initially confined to one species of butterfly on one island in the Molucca Sea, but increasingly manifested across animal and plant species, and in a wider and wider area. These mutations seem to be too radical, too well-calibrated, and too quick to be explicable by chance mutations plus the winnowing effect of natural selection. In the space of twenty years, entire animal and plant species develop altered body plans that allow them to feed (or to protect themselves from predation) much more easily, to squeeze out all competitors in the ecosystem, and to propagate themselves from island to island.

It’s almost as if Egan had set himself the task of envisioning a scenario of “biological exuberance,” a scenario that would seem to strongly imply some evolutionary force other than Darwinian natural selection — whether Christian “intelligent design,” some variant of Lamarckism, Bergsonian élan vital, Richard Goldschmidt’s “hopeful monsters,” or the constraints of form championed by such non-mainstream biologists as Stuart Kauffman and Brian Goodwin — and yet to explain the scenario in terms that are entirely in accord with orthodox neodarwinism and Dawkins’ selfish gene theory. How could rapid and evidently purposive evolutionary change nonetheless result from the “blind watchmaker” of natural selection? All the scientists in Teranesia take the orthodox framework for granted; and in opposition to them, Egan sets religious fundamentalists on the one hand, and “postmodern cultural theorists” who celebrate the trickster mischievousness or irrational bounty of Nature on the other (Egan’s heavy-handed, Alan Sokal-esque satire of the latter group — the book came out a few years after the Sokal-vs.-Social Text incident — is the lamest and most tiresome aspect of the novel).

[SPOILER ALERT] The way that Egan solves his puzzle is this. The mutations all turn out to be the result of the actions of a single gene, one that can jump from species to species, and that has the ability to rewrite/mutate the rest of the genome in which it finds itself by snipping out individual base pairs, and introducing transcription errors and replacements. Given a random DNA sequence to work with, the effect of the mutations is basically random. But given an actual genome to work with, the new gene enforces changes that are far from random, that in fact optimize the genome for survival and expansion. The new gene does this by, in effect, exploring the phase space of all possible mutations to a considerable depth. And it does this by a trick of quantum theory. Egan calls on the “many worlds” interpretation of quantum mechanics. Mutations are correlated with the collapse of the quantum wave function. All the mutations that could have happened to a given genome, but did not, have in fact occurred in parallel universes. Over the course of a genome’s history, therefore, all the alternative universes generated by every mutation constitute a phase space of all the possible changes the organism could have undergone; it is these “many universes” that the new gene is able to explore, “choosing” the changes that, statistically speaking, were the most successful ones. In this way, the new gene is able to optimize the entire genome or organism, even though it is itself purely a “selfish gene,” driven only to maximize its own reproduction. Egan wryly notes that “most processes in molecular biology had analogies in computing, but it was rarely helpful to push them too far” (256); nonetheless, he extrapolates this logic by imagining a DNA “program” that works like a “quantum supercomputer” (289).
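The algorithmic shape of this conceit is easier to see in miniature. Here is a toy, purely classical stand-in for Egan’s fictional mechanism: instead of a quantum search across parallel universes, it simply enumerates every single-point mutation of a short genome and keeps the fittest, generation after generation. The alphabet, the fitness function, and the greedy search are all my own illustrative inventions, not anything from the novel:

```python
import random

# Toy stand-in for Egan's fictional mutator gene: where the novel's gene
# "explores" the space of possible mutations across quantum branches, this
# classical sketch brute-forces every single base substitution and keeps
# the fittest result (or stays put if nothing improves).
BASES = "ACGT"

def fitness(genome: str) -> int:
    # Hypothetical fitness: count occurrences of a fixed "useful" motif.
    return genome.count("GATC")

def best_single_mutation(genome: str) -> str:
    """Return the fittest genome reachable by one base substitution."""
    best = genome
    for i in range(len(genome)):
        for base in BASES:
            candidate = genome[:i] + base + genome[i + 1:]
            if fitness(candidate) > fitness(best):
                best = candidate
    return best

random.seed(0)
genome = "".join(random.choice(BASES) for _ in range(40))
print("before:", genome, fitness(genome))
for _ in range(10):  # ten "generations" of exhaustive one-step search
    genome = best_single_mutation(genome)
print("after: ", genome, fitness(genome))
```

The difference, of course, is that this sketch is handed a fitness function in advance; what the quantum trick buys Egan is a way for differential reproductive success across branches to play that role, with no function specified by anyone.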

Egan’s solution to his own puzzle is elegant, economical, and shrewd. He’s really doing what hard SF does best: applying the rigor of scientific reasoning to an imaginary problem, and (especially) abiding by the initial conditions set forth by the problem. He successfully constructs a scenario in which even the most extreme instance of apparent design can be explained without recourse to teleology. Though Egan’s hypothesis is counterfactual and probably impossible — which is just a fancy way of saying he is writing fiction — it does in fact usefully illuminate the logic of biological explanation.

And it’s this logic to which I want to turn. Getting rid of teleology is in fact harder than it might seem. Darwin’s theory of natural selection explains how meaningful and functioning complex patterns can emerge from randomness, without there being a pre-existing plan. “Intelligent design” theory today, like the 18th-century “argument from design,” claims that structures like the eye, or the interwoven network of chemical pathways that functions in every cell, are too complex to have arisen without planning. Darwinian theory argues, to the contrary — quite convincingly and cogently — not only that “selection” processes are able to account for the formation of these structures, but that these structures’ very complexity precludes their having been made by planning and foresight, or in any other way. (For the most explicit statement of this argument, see Richard Dawkins’ The Blind Watchmaker. Dawkins gives a reductionist, atomistic version of the argument. I would argue — though Dawkins himself would not agree — that this account is not inconsistent with the claims of Kauffman that natural selection rides piggyback on other sorts of spontaneous organization in natural systems.)
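The Blind Watchmaker illustrates the core point with Dawkins’ famous “weasel” program, which shows how cumulative selection reaches a wildly improbable configuration that single-step random search never would. A sketch along those lines (the population size and mutation rate are my arbitrary choices, not Dawkins’ exact parameters):

```python
import random

# Dawkins' "weasel" demonstration: cumulative selection reaches an
# improbable target string in dozens of generations, where one-shot
# random search would take effectively forever.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s: str) -> int:
    # Number of characters already matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(1)
current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while current != TARGET:
    generations += 1
    # Breed 100 mutant copies and keep the best; keeping the parent in
    # the pool guarantees fitness never regresses.
    current = max([current] + [mutate(current) for _ in range(100)], key=score)
print(f"Reached the target in {generations} generations.")
```

Dawkins himself flags the obvious caveat that the program has a distant target, which real selection does not; and it is precisely at that point that the vocabulary of purpose starts creeping back in.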

But none of this stops Dawkins, or other hardcore Darwinians, from using the vocabulary of purpose on nearly all occasions. The eye is a structure whose purpose is seeing; genes are “selfish” because they “want” — i.e. their “purpose” is — to create more copies of themselves. Dawkins, at least, is aware that his use of purpose-language is metaphorical; but the metaphors you use affect your argument in powerful, structurally relevant ways, even though you may intend them “only,” and quite consciously, as “mere” metaphors. As Isabelle Stengers puts it, Dawkins is still describing life by comparing it to a watch — or to a computer — even if the “watchmaker” is “blind” and not purposeful or conscious. Kant’s pre-Darwinian observation, that we cannot help seeing life as “purposive,” even though we would be wrong to attribute explicit “purpose” to it, still holds true in evolutionary theory.

This is partly a question about adaptation. Hardcore neodarwinism assumes that every feature of an organism, no matter how minor, is adaptive — which is to say that it has a reproductive purpose, for which it was selected. And evolutionary theorists go through extraordinary contortions to explain how “features” like homosexuality, which evidently do not contribute to the production of more offspring, nonetheless must be “adaptive” — or reproductively selected for — in some way. In a case like homosexuality, it seems obvious to suggest: a) that it is not a well-defined category, but one that has a lot of blurry edges and culturally variable aspects, so that it’s misguided in the first place to look for a genetic correlate to it; and b) that to the extent that genes do play a role in same-sex object choice, it may well be that what was “selected for” was not homosexuality per se, but something more general (the sort of sexual disposition that is extremely plastic, i.e. capable of realizing itself in multiple forms).

More generally, adaptationism is problematic because defending it soon brings you to a point of reductio ad absurdum. Many features of organisms are evidently adaptive, but when you start to assert that everything must be, a priori, you are condemning yourself to a kind of interpretive paranoia that sees meanings, intentions, and purposes everywhere. You start out aware that (in Egan’s words) “evolution is senseless: the great dumb machine, grinding out microscopic improvements one end, spitting out a few billion corpses from the other” (112). But you end up with a sort of argument from design, a paradoxical denial of contingency, chance, superfluity, and meaninglessness. Evolutionary theorists assume that every feature of every organism necessarily has a meaning and a purpose; which is what leads them to simply invent purposive explanations (what Stephen Jay Gould disparaged as “just-so stories”) when none can be discovered by empirical means.

All these difficulties crop up in the course of Teranesia. Egan’s protagonist, Prabir, is gay, and he supposes that his sexual orientation is like an “oxbow lake” produced by a river: something that’s “not part of the flow” of the river, but that the river keeps creating nonetheless (109). Conversely, he is (rightly) angered by the suggestion that homosexuality is adaptive because it has the evolutionary purpose of being “a kind of insurance policy — to look after the others if something happens to the parents” (110). Angry because such an explanation would suggest that his being as a person has no value in its own right, for itself. And this is picked up at the end of the novel, when the new gene crosses species and starts to metastasize in Prabir’s own body. As a ruthless and super-efficient machine for adaptation, it threatens to wipe out Prabir’s own “oxbow lake,” together with anything that might seem “superfluous” from the point of view of adaptive efficiency (310).

By the end of the novel, the new gene has to be contained, for it threatens to “optimize” Prabir, and through him the rest of humanity, into a monstrous reproductive machine. Teranesia suddenly turns, in its last thirty pages or so, into a horror novel; and the final plot twist that saves Prabir is (in contrast to everything that has come before) exceedingly unconvincing and unsatisfying, because it hinges on seeing the malignant gene as purpose-driven to an extent that simply (I mean in the context of Egan’s fiction itself) isn’t credible.

Teranesia thus ends up tracking and reproducing what I am tempted to call (in Kantian style) the antinomies of neodarwinian explanation. Starting from the basic assertion that “life is meaningless” (338 — the very last words of the novel), it nonetheless finds itself compelled to hypothesize a monstrous, totalizing purposiveness. The specter of biological exuberance is exorcized, but monstrosity is not thereby dispelled; it simply returns in an even more extreme form. Even Egan’s recourse to quantum mechanics is symptomatic: because quantum mechanics is so inherently paradoxical — because it is literally impossible to understand in anything like intuitive terms — it becomes the last recourse when you are trying to explain in rationalistic and reductive terms some aspect of reality (and of life especially) that turns out to be stubbornly mysterious. Quantum mechanics allows you to have it both ways: Egan’s use of it can be compared, for instance, to the way Roger Penrose has recourse to quantum effects in order to explain the mysteries of consciousness. In short, Teranesia is a good enough book that it runs up against, and inadvertently demonstrates, the aporias implicit within the scientific rationality to which Egan is committed.

Cosmopolitics

I just finished reading Isabelle Stengers’ great book Cosmopolitiques (originally published in seven brief volumes, now available in two paperbacks; unfortunately, it has not yet been translated into English). It’s a dense and rich book, of something like 650 pages, and it’s forced me to rethink a lot of things. I’ve said before that I think Stengers is our best guide to the “science wars” of the last decade or two, and more generally, to the philosophy of science. In Cosmopolitiques, she massively extends and expands upon what she wrote in earlier books like The Invention of Modern Science.

Stengers, like Bruno Latour, wants us to give up the claim to absolute supremacy that is the greatest legacy of post-Enlightenment modernity. The point is not to abandon science, nor to see it (in cultural-relativist terms) as lacking objective validity. The problem is not with science’s actual, particular positive claims; but rather with its pretensions to universality, its need to deny the validity of all claims and practices other than its own. What Stengers, rightly, wants to take down is the “mobilization” of science as a war machine, which can only make its positive claims by destroying all other discourses and points of view: science presenting itself as rational and as objectively “true,” whereas all other discourses are denounced as superstitious, irrational, grounded in mere “belief,” etc. Stengers isn’t opposing genetics research, for instance, but she is opposing the claim that somehow the “truth” of “human nature” can be found in the genome and nowhere else. She’s opposing Edward O. Wilson’s “consilience” (with its proclamation that positive science can and will replace psychology, literature, philosophy, religion, and all other “humanistic” forms of knowledge) and Steven Pinker’s reductive, naive and incredibly arrogant and pretentious account of “how the mind works”; not to mention the absurd efforts of “quantitative” social scientists (economists, political scientists, and sociologists) to imagine themselves as arriving at “truth” by writing equations that emulate those of physics.

Stengers wants to understand science in the specificity of its practices, and thereby to reject its transcendent claims, its claims to foundational status which are always made by detaching it from its actual, concrete practices. She defines her own approach as, philosophically, a “constructivist” one. Constructivism in philosophy is non-foundationalist: it denies that truth somehow comes first, denies that it is just there in the world or in the mind. Instead, constructivism looks at how truths are produced through various processes and practices. This does not mean that truth is merely a subjective, human enterprise, either: the practices and processes that produce truths are not just human ones. (Here, Stengers draws profitably upon Whitehead, about whom she has written extensively.) For modern science, the constructivist question is to determine how this practice is able (unlike most other human practices, at least) to produce objects that have lives of their own, as it were, so that they remain “answerable” for their actions in the world independently of the laboratory conditions under which they were initially elucidated. This is what makes neutrinos and microbes, for instance, different from codes of justice, or from money, or from ancestral spirits that may be haunting someone. The point of the constructivist approach is to see how these differences work, without thereby asserting that scientific objects are objective and out there in the world, while all the other sorts of objects would be merely subjective or imaginary or irrational or just inside our heads. The point is not to say that scientific objects are “socially constructed” rather than “objectively true,” but precisely to get away from this binary alternative, when it comes to considering either scientific practices and objects, or (for instance) religious practices and objects.

The other pillar of Stengers’ approach is what she calls an “ecology of practices.” This means considering how particular practices — the practices of science, in particular — impinge upon and relate to other practices that simultaneously exist. This means that the question of what science discovers about the world cannot be separated from the question of how science impinges upon the world. For any particular practice — say, for genetics today — the “ecology of practices” asks what particular demands or requirements (exigences in French, which is difficult to translate precisely because the cognate English word, “exigency,” sounds kind of weird) are made by the practice, and what particular obligations the practice imposes upon those who practice it, make use of it, or get affected by it.

Constructivism and the ecology of practices allow Stengers to distinguish between science as a creative enterprise, a practice of invention and discovery, and science’s modernist claim to invalidate all other discourses. Actually, such a statement is too broad — for Stengers also distinguishes among various sciences, which are not all alike. The assumptions and criteria, and hence the demands and obligations, of theoretical physics are quite different from those of ethology (the study of animal behavior, which has to take place in the wild, where there is little possibility of controlling for “variables,” as well as under laboratory conditions). The obligations one takes on when investigating chimpanzees, and all the more so human beings, are vastly different from the obligations one takes on when investigating neutrinos or chemical reactions. The demands made by scientific practices (such as the demand that the object discovered not be just an “artifact” of a particular experimental setup) also vary from one practice to another. Constructivism and the ecology of practices allow Stengers to situate the relevance and the limits of various scientific practices, without engaging in critique: that is to say, without asserting the privilege of a transcendent(al) perspective on the basis of which the varying practices are judged.

Much of Cosmopolitiques is concerned with a history of physics, from Galileo through quantum mechanics. Stengers focuses on the question of physical “laws.” She looks especially at the notion of equilibrium, and the modeling of dynamic systems. Starting with Galileo, going through Newton and Leibniz, and then continuing throughout the 18th and especially the 19th centuries, there is a continual growth in the power of mathematical idealizations to describe physical systems. Physicists construct models that work under simplified conditions — ignoring the presence of friction, for instance, when describing spheres rolling down a plane (Galileo), or, more generally, motion through space. They then add the effects of “perturbations” like friction as minor modifications of the basic model. Gradually, more and more complex models are developed, which allow for more and more factors to be incorporated within the models themselves, instead of having to be left outside as mere “perturbations.” These models all assume physical “states” that can be said to exist at an instant, independently of the historical development of the systems in question; and they assume a basic condition of equilibrium, often perturbed but always returned to.
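The procedure is easy to see in its simplest textbook instance (my example, not one Stengers works through): a body on an inclined plane. The idealized model is

$$ m\ddot{x} = mg\sin\theta, $$

and friction then enters as a correction term grafted onto the same equation,

$$ m\ddot{x} = mg\sin\theta - \mu mg\cos\theta, $$

where the coefficient $\mu$ packs all the messy, material, historical detail of actual surfaces into a single number. The “perturbation” is an afterthought; the idealization remains the model’s core.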

Stengers suggests that we should celebrate these accomplishments as triumphs of scientific imagination and invention. At the same time, she points up the baleful effects of these accomplishments, in terms of how they got (metaphorically) transferred to other physical and scientific realms. The success of models, expressible as physical “laws,” has to do with the particular sorts of questions 19th-century dynamics addressed (having to do with the nature of forces in finite interactions that could be treated mathematically with linear equations). The success of dynamics, however, led physicists to expect that the same procedures would be valid in answering other questions. This extension of the dynamic model beyond the field of its experimental successes, and into other realms, led to the general assumption that all physical processes could similarly be modeled in terms of instantaneous “states” and time-invariant transformations of these states. That is to say, the assumption that all physical processes follow deterministic “laws.” When the “perturbations” that deviate from the ideal cannot be eliminated empirically, this is attributed to the mere limitations of our knowledge, with the assertion that the physical world “really” operates in accordance with the idealized model, which thereby takes precedence over merely empirical observations. This is how physics moved from empirical observation to a quasi-Platonic faith in an essence underlying mere appearances.
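In equations, the assumption Stengers is pointing to looks like this (a standard observation, not hers specifically): a Newtonian law of the form

$$ m\frac{d^{2}x}{dt^{2}} = F(x) $$

is invariant under the substitution $t \mapsto -t$, since the two sign changes cancel in the second derivative. Every trajectory run backwards is therefore an equally lawful trajectory, and nothing in the “law” itself distinguishes past from future; irreversibility has to be accounted for elsewhere, which is where the statistical interpretation of entropy comes in.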

It’s because of this underlying idealism, this illicit transference of dynamic modeling into realms that are not suited to it, that the ideology of physics as describing the ultimate nature of “reality” has taken so strong a hold on us today. Thus physicists dismiss the apparent irreversibility of time, and the increase of entropy (disorder) in any closed system, as mere artifacts of our subjectivity, which is to say of our ignorance (of the fact that we do not have access to perfect and total information about the physical state of every atom). But Stengers points out the arbitrariness of the generally accepted “statistical” interpretation of entropy; she argues that it is warranted only by physicists’ underlying assumption that the ideal situation of total knowability of every individual atom’s location and path, independent of the atoms’ history of interactions, must obtain everywhere. This ideal is invoked as how nature “really” behaves, even if there is no empirical possibility of obtaining the “knowledge” that the ideal assumes.
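For reference, the statistical interpretation in question rests on Boltzmann’s formula

$$ S = k_B \ln W, $$

where $W$ counts the microstates (the exact positions and momenta of every atom) compatible with the macrostate we actually observe. On this reading, entropy measures nothing but our coarse-grained ignorance of which microstate obtains, so its increase says more about us than about nature; and this is exactly the move Stengers finds arbitrary.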

There are similar problems in quantum mechanics. Most physicists are not content with Bohr’s injunction not to ask what is “really” going on before the collapse of quantum indeterminacy; they can’t accept that total, deterministic knowledge is an impossibility, so they have recourse to all sorts of strange hypotheses, from multiple worlds to “hidden variables.” But following Nancy Cartwright among others, Stengers suggests that the whole problem of indeterminacy and measurement in quantum mechanics is a false one. Physicists don’t like the fact that quantum mechanics forbids us in principle from having exact knowledge of every particle, as it were independently of our interaction with the particles (since we have to choose, for instance, between knowing the position of an electron and knowing its momentum — we can’t have both, and it is our interaction with the electron that determines which we do find out). But Stengers points out that the limits of our knowledge in quantum mechanics are not really any greater than, say, the limits of my knowledge as to what somebody else is really feeling and thinking. It’s only the physicists’ idealizing assumption of the world’s total knowability and total determinability in accordance with “laws” that leads them to be frustrated and dissatisfied by the limits imposed by quantum mechanics.
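The trade-off mentioned above is the Heisenberg uncertainty relation,

$$ \Delta x \,\Delta p \ge \frac{\hbar}{2}, $$

which bounds the product of the spreads in position and momentum: sharpening the one necessarily blurs the other. Stengers’ point, restated: this is a “limit” only relative to an ideal of total, interaction-free knowledge that nothing obliges us to hold in the first place.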

Now, my summary of the last two paragraphs has actually done a disservice to Stengers. Because I have restated her analyses in a Kantian manner, as a reflection upon the limits of reason. But for Stengers, such an exercise in transcendental critique is precisely what she wants to get away from, since such a critique means that once again modernist rationality is legislating against practices whose claims differ from its own. She seeks, rather, through constructivism and the ecology of practices, to offer what might be called (following Deleuze) an entirely immanent critique, one that is situated within the very field of practices that it is seeking to change. Stengers exemplifies this with a detailed account of the work of Ilya Prigogine, with whom she collaborated in the 1980s. Prigogine sought, for most of his career, to get the “arrow of time” — the irreversibility of events in time — recognized as among the fundamentals of physics. We cultural studies types tend to adopt Prigogine wholeheartedly for our own critical purposes. But Stengers emphasizes the difficulties that result from the fact that Prigogine is not critiquing physics and chemistry, but seeking to point up the “arrow of time” in such a way that the physicists themselves will be compelled to acknowledge it. To the extent that he is still regarded as a fringe figure by most mainstream scientists, it cannot be said that he succeeded. Stengers points to recent developments in studies of emergence and complexity as possibly heralding a renovation of scientific thought, but she warns against the new-agey or high-theoretical tendency many of us outside the sciences have to proclaim a new world-view by trumpeting these scientific results as evidence: which means both translating scientific research into “theory” way too uncritically, and engaging in a kind of Kantian critique, instead of remaining within the immanence of the ecology of actual practices, with the demands they make and the obligations they impose.

The biggest question Cosmopolitiques leaves me with is precisely the one of whether it is possible to approach all these questions immanently, without bringing some sort of Kantian critique back into the picture (as I find myself unavoidably tempted to do, even when I am just trying to summarize Stengers’ arguments). One could also pose this question in reverse: whether Kantian critique (in the sense I am using it, which goes back to the Transcendental Dialectic of the First Critique, where Kant tries to use rationality to limit the pretensions of reason itself) can be rescued from Stengers’ objections to the modernist/scientific condemnation of all claims other than its own. The modernist gesture par excellence, in Stengers’ account, would be David Hume’s consignment of theology and speculative philosophy to the flames, as containing “nothing but sophistry and illusion.” Are Kant’s Antinomies and Paralogisms making essentiallly the same gesture? I regard this as a crucial question, and as an open one, something I have only begun to think about.

I have another question about Stengers’ conclusions, one that (I think) follows from that about Kantian critique. Stengers urges us (in the last section of her book) “to have done with tolerance”; because “tolerance” is precisely the condescending attitude by which “we” (scientists, secular modernists in general) make allowances for other world-views which we nonetheless refuse to take seriously. Stengers’ vision, like Latour’s, is radically democratic: science is not a transcending “truth” but one of many “interests” which constantly need to negotiate with one another. This can only happen if all the competing interests are taken seriously (not merely “tolerated”), and actively able to intervene with and against one another. To give an example that Stengers herself doesn’t use: think of the recent disputes over “Kennewick Man” — a 9,000-year-old skull discovered in 1999 near the Columbia River in Washington State. Scientists want to study the remains; Native American groups want to give the remains a proper burial. For the most part, the American press presented the dispute as one between the rational desire to increase our store of knowledge and the irrational, archaic “beliefs” of the “tribes” claiming ownership of the skull. Stengers would have us realize that such an indivious distinction is precisely an instance of scientific imperialism, and that the claims of both the scientists and the native groups — the demands they make and the obligations they feel urged to fulfill — need to be negotiated on an equal basis, that both are particular interests, and both are political: the situation cannot be described as a battle between rationality and superstition, or between “knowledge” and “belief.”

In this way, Stengers (and Latour) are criticising, not just Big Science, but also (and perhaps even more significantly) the default assumptions of post-Enlightenment secular liberalism. Their criticism is quite different from that espoused by such thinkers as Zizek and Badiou; but there is a shared rejection of the way that liberal “tolerance” (the “human face,” you might say, of multinational captial) in fact prevents substantive questions from being asked, and substantive change from happening. This is another Big Issue that I am (again) only beginning to think through, and that I will have to return to in future posts. But as regards Stengers, my real question is this: Where do Stengers’ and Latour’s anti-modernist imperatives leave us, when it comes to dealing with the fundamentalist, evangelical Christians in the United States today? Does the need to deprivilege science’s claims to exclusive truth, and to democratically recognize other social/cultural/political claims, mean, for instance, that we need to give full respect to the claims of “intelligent design” or creationism, and let them negotiate on an equal footing with the claims of evolutionary theory? To say that we shouldn’t tolerate the fundamentalists because they themselves are intolerant is no answer. And I’m not sure that to say, as I have said before, that denying the evolution of species is akin to denying the Holocaust — since both are matters of historical events, rather than of (verifiable or falsifiable) theories — I’m not sure that this answer works either. I realize I am showing my own biases here: it’s one thing to uphold the claims of disenfranchised native peoples, another to uphold the claims of a group that I think is oppressing me as much as they think I and my like are oppressing them. But this is really where the aporia comes for me; where I am genuinely uncertain as to the merits of Stengers’ arguments in comparison to the liberal “tolerance” she so powerfully despises.

I just finished reading Isabelle Stengers’ great book Cosmopolitiques (originally published in seven brief volumes, now available in two paperbacks; unfortunately, it has not yet been translated into English). It’s a dense and rich book, of something like 650 pages, and it’s forced me to rethink a lot of things. I’ve said before that I think Stengers is our best guide to the “science wars” of the last decade or two, and more generally, to the philosophy of science. In Cosmopolitiques, she massively extends and expands upon what she wrote in earlier books like The Invention of Modern Science.

Stengers, like Bruno Latour, wants us to give up the claim to absolute supremacy that is the greatest legacy of post-Enlightenment modernity. The point is not to abandon science, nor to see it (in cultural-relativist terms) as lacking objective validity. The problem is not with science’s actual, particular positive claims; but rather with its pretensions to universality, its need to deny the validity of all claims and practices other than its own. What Stengers, rightly, wants to take down is the “mobilization” of science as a war machine, which can only make its positive claims by destroying all other discourses and points of view: science presenting itself as rational and as objectively “true,” whereas all other discourses are denounced as superstitious, irrational, grounded in mere “belief,” etc. Stengers isn’t opposing genetics research, for instance, but she is opposing the claim that somehow the “truth” of “human nature” can be found in the genome and nowhere else. She’s opposing Edward O. Wilson’s “consilience” (with its proclamation that positive science can and will replace psychology, literature, philosophy, religion, and all other “humanistic” forms of knowledge) and Steven Pinker’s reductive, naive and incredibly arrogant and pretentious account of “how the mind works”; not to mention the absurd efforts of “quantitative” social scientists (economists, political scientists, and sociologists) to imagine themselves as arriving at “truth” by writing equations that emulate those of physics.

Stengers wants to understand science in the specificity of its practices, and thereby to reject its transcendent claims, its claims to foundational status which are always made by detaching it from its actual, concrete practices. She defines her own approach as, philosophically, a “constructivist” one. Constructivism in philosophy is non-foundationalist: it denies that truth somehow comes first, denies that it is just there in the world or in the mind. Instead, constructivism looks at how truths are produced through various processes and practices. This does not mean that truth is merely a subjective, human enterprise, either: the practices and processes that produce truths are not just human ones. (Here, Stengers draws profitably upon Whitehead, about whom she has written extensively). For modern science, the constructivist question is to determine how this practice is able (unlike most other human practices, at least) to produce objects that have lives of their own, as it were, so that they remain “answerable” for their actions in the world independently of the laboratory conditions under which they were initially elucidated. This is what makes neutrinos and microbes, for instance, different from codes of justice, or from money, or from ancestral spirits that may be haunting someone. The point of the constructivist approach is to see how these differences work, without thereby asserting that scientific objects are therefore objective, and out there in the world, while all the other sorts of objects would be merely subjective or imaginary or irrational or just inside our heads. The point is not to say that scientific objects are “socially constructed” rather than “objectively true,” but precisely to get away from this binary alternative, when it comes to considering either scientific practices and objects, or (for instance) religious practices and objects.

The other pillar of Stengers’ approach is what she calls an “ecology of practices.” This means considering how particular practices — the practices of science, in particular — impinge upon and relate to other practices that simultaneously exist. This means that the question of what science discovers about the world cannot be separated from the question of how science impinges upon the world. For any particular practice — say, for genetics today — the “ecology of practices” asks what particular demands or requirements (exigences in French, which it’s difficult to translate precisely because the cognate English word, “exigency”, sounds kind of weird) are made by the practice, and what particular obligations the practice imposes upon those who practice it, make use of it, or are affected by it.

Constructivism and the ecology of practices allow Stengers to distinguish between science as a creative enterprise, a practice of invention and discovery, and science’s modernist claim to invalidate all other discourses. Actually, such a statement is too broad — for Stengers also distinguishes among various sciences, which are not all alike. The assumptions and criteria, and hence the demands and obligations, of theoretical physics are quite different from those of ethology (the study of animal behavior, which has to take place in the wild, where there is little possibility of controlling for “variables,” as well as under laboratory conditions). The obligations one takes on when investigating chimpanzees, and all the more so human beings, are vastly different from the obligations one takes on when investigating neutrinos or chemical reactions. The demands made by scientific practices (such as the demand that the object discovered not be just an “artifact” of a particular experimental setup) also vary from one practice to another. Constructivism and the ecology of practices allow Stengers to situate the relevance and the limits of various scientific practices, without engaging in critique: that is to say, without asserting the privilege of a transcendent(al) perspective on the basis of which the varying practices are judged.

Much of Cosmopolitiques is concerned with a history of physics, from Galileo through quantum mechanics. Stengers focuses on the question of physical “laws.” She looks especially at the notion of equilibrium, and the modeling of dynamic systems. Starting with Galileo, going through Newton and Leibniz, and then continuing throughout the 18th and especially the 19th centuries, there is a continual growth in the power of mathematical idealizations to describe physical systems. Physicists construct models that work under simplified conditions — ignoring the presence of friction, for instance, when describing spheres rolling down an inclined plane (Galileo), or, more generally, motion through space. They then add the effects of “perturbations” like friction as minor modifications of the basic model. Gradually, more and more complex models are developed, which allow for more and more factors to be incorporated within the models themselves, instead of having to be left outside as mere “perturbations.” These models all assume physical “states” that can be said to exist at an instant, independently of the historical development of the systems in question; and they assume a basic condition of equilibrium, often perturbed but always returned to.
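
To make this “ideal model plus perturbations” procedure concrete, here is a minimal sketch (my illustration, not Stengers’; the slope, friction coefficient, and time step are arbitrary) of the idealized law of motion on an incline, with friction then bolted on as a small corrective term:

```python
# A toy illustration of the "ideal model + perturbation" procedure.
# Idealized law: a body on a frictionless incline accelerates at
# g*sin(theta). Friction is then added as a small corrective term.
import math

g, theta, mu = 9.81, math.radians(20), 0.05  # arbitrary illustrative values
dt, t_end = 0.001, 2.0

def final_velocity(with_friction: bool) -> float:
    v, t = 0.0, 0.0
    while t < t_end:
        a = g * math.sin(theta)                # the idealized "law"
        if with_friction:
            a -= mu * g * math.cos(theta)      # friction as a "perturbation"
        v += a * dt
        t += dt
    return v

print(final_velocity(False))  # the ideal model
print(final_velocity(True))   # the model corrected by the perturbation
```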

Stengers suggests that we should celebrate these accomplishments as triumphs of scientific imagination and invention. At the same time, she points up the baleful effects of these accomplishments, in terms of how they got (metaphorically) transferred to other physical and scientific realms. The success of models, expressible as physical “laws,” has to do with the particular sorts of questions 19th-century dynamics addressed (having to do with the nature of forces in finite interactions that could be treated mathematically with linear equations). The success of dynamics, however, led physicists to expect that the same procedures would be valid in answering other questions. This extension of the dynamic model beyond the field of its experimental successes, and into other realms, led to the general assumption that all physical processes could similarly be modeled in terms of instantaneous “states” and time-invariant transformations of these states. That is to say, the assumption that all physical processes follow deterministic “laws.” When the “perturbations” that deviate from the ideal cannot be eliminated empirically, this is attributed to the mere limitations of our knowledge, with the assertion that the physical world “really” operates in accordance with the idealized model, which thereby takes precedence over merely empirical observations. This is how physics moved from empirical observation to a quasi-Platonic faith in an essence underlying mere appearances.

It’s because of this underlying idealism, this illicit transference of dynamic modelling into realms that are not suited to it, that the ideology of physics as describing the ultimate nature of “reality” has taken so strong a hold on us today. Thus physicists dismiss the apparent irreversibility of time, and the increase of entropy (disorder) in any closed system, as merely artifacts of our subjectivity, which is to say of our ignorance (the fact that we do not have access to perfect and total information about the physical state of every atom). But Stengers points out the arbitrariness of the generally accepted “statistical” interpretation of entropy; she argues that it is warranted only by physicists’ underlying assumption that the ideal situation of total knowability of every individual atom’s location and path, independent of the atoms’ history of interactions, must obtain everywhere. This ideal is invoked as how nature “really” behaves, even if there is no empirical possibility of obtaining the “knowledge” that the ideal assumes.
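
The “statistical” interpretation that Stengers questions can itself be stated in a few lines. In the standard textbook picture (sketched here for reference, with a made-up particle count), a macrostate’s entropy is just the logarithm of the number of microstates that realize it, so the evenly mixed state wins by sheer force of numbers:

```python
# The "statistical" reading of entropy in miniature: for N particles in a
# box, a macrostate is "k particles in the left half"; its entropy is
# proportional to the log of the number of microstates realizing it.
from math import comb, log

N = 100  # toy number of gas particles
for k in (0, 25, 50):              # three macrostates
    W = comb(N, k)                 # number of microstates realizing each
    print(f"k={k:>2}: W = {W:.3e} microstates, S = {log(W):.1f} k_B")
# The k=50 (evenly mixed) macrostate has ~1e29 microstates; k=0 has exactly
# one. "Entropy increases" here just means: the system wanders into the
# overwhelmingly more numerous configurations.
```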

There are similar problems in quantum mechanics. Most physicists are not content with Bohr’s injunction not to ask what is “really” going on before the collapse of quantum indeterminacy; they can’t accept that total, deterministic knowledge is an impossibility, so they have recourse to all sorts of strange hypotheses, from many worlds to “hidden variables.” But following Nancy Cartwright among others, Stengers suggests that the whole problem of indeterminacy and measurement in quantum mechanics is a false one. Physicists don’t like the fact that quantum mechanics prevents us, in principle, from having exact knowledge of every particle, as it were independently of our interaction with the particles (since we have to choose, for instance, between knowing the position of an electron and knowing its momentum — we can’t have both, and it is our interaction with the electron that determines which we do find out). But Stengers points out that the limits of our knowledge in quantum mechanics are not really any greater than, say, the limits of my knowledge as to what somebody else is really feeling and thinking. It’s only the physicists’ idealizing assumption of the world’s total knowability and total determinability in accordance with “laws” that leads them to be frustrated and dissatisfied by the limits imposed by quantum mechanics.
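
The trade-off in question is the one codified in Heisenberg’s uncertainty relation, quoted here in its standard textbook form:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

No experimental arrangement can shrink both factors at once: narrowing the spread of measured positions (a smaller Delta-x) necessarily widens the spread of possible momenta (a larger Delta-p), and vice versa.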

Now, my summary of the last two paragraphs has actually done a disservice to Stengers. Because I have restated her analyses in a Kantian manner, as a reflection upon the limits of reason. But for Stengers, such an exercise in transcendental critique is precisely what she wants to get away from; since such a critique means that once again modernist rationality is legislating against practices whose claims differ from its own. She seeks, rather, through constructivism and the ecology of practices, to offer what might be called (following Deleuze) an entirely immanent critique, one that is situated within the very field of practices that it is seeking to change. Stengers exemplifies this with a detailed account of the work of Ilya Prigogine, with whom she collaborated in the 1980s. Prigogine sought, for most of his career, to get the “arrow of time” — the irreversibility of events in time — recognized as among the fundamentals of physics. We cultural studies types tend to adopt Prigogine wholeheartedly for our own critical purposes. But Stengers emphasizes the difficulties that result from the fact that Prigogine is not critiquing physics and chemistry, but seeking to point up the “arrow of time” in such a way that the physicists themselves will be compelled to acknowledge it. To the extent that he is still regarded as a fringe figure by most mainstream scientists, it cannot be said that he succeeded. Stengers points to recent developments in studies of emergence and complexity as possibly pointing to a renovation of scientific thought, but she warns against the new-agey or high-theoretical tendency many of us outside the sciences have to proclaim a new world-view by trumpeting these scientific results as evidence: which means both translating scientific research into “theory” way too uncritically, and engaging in a kind of Kantian critique, instead of remaining within the immanence of the ecology of actual practices, with the demands they make and the obligations they impose.

The biggest question Cosmopolitiques leaves me with is precisely the one of whether it is possible to approach all these questions immanently, without bringing some sort of Kantian critique back into the picture (as I find myself unavoidably tempted to do, even when I am just trying to summarize Stengers’ arguments). One could also pose this question in reverse: whether Kantian critique (in the sense I am using it, which goes back to the Transcendental Dialectic of the First Critique, where Kant tries to use rationality to limit the pretensions of reason itself) can be rescued from Stengers’ objections to the modernist/scientific condemnation of all claims other than its own. The modernist gesture par excellence, in Stengers’ account, would be David Hume’s consignment of theology and speculative philosophy to the flames, as containing “nothing but sophistry and illusion.” Are Kant’s Antinomies and Paralogisms making essentially the same gesture? I regard this as a crucial question, and as an open one, something I have only begun to think about.

I have another question about Stengers’ conclusions, one that (I think) follows from that about Kantian critique. Stengers urges us (in the last section of her book) “to have done with tolerance”; because “tolerance” is precisely the condescending attitude by which “we” (scientists, secular modernists in general) make allowances for other world-views which we nonetheless refuse to take seriously. Stengers’ vision, like Latour’s, is radically democratic: science is not a transcending “truth” but one of many “interests” which constantly need to negotiate with one another. This can only happen if all the competing interests are taken seriously (not merely “tolerated”), and actively able to intervene with and against one another. To give an example that Stengers herself doesn’t use: think of the recent disputes over “Kennewick Man” — a 9,000-year-old skeleton discovered in 1996 near the Columbia River in Washington State. Scientists want to study the remains; Native American groups want to give the remains a proper burial. For the most part, the American press presented the dispute as one between the rational desire to increase our store of knowledge and the irrational, archaic “beliefs” of the “tribes” claiming ownership of the remains. Stengers would have us realize that such an invidious distinction is precisely an instance of scientific imperialism, and that the claims of both the scientists and the native groups — the demands they make and the obligations they feel urged to fulfill — need to be negotiated on an equal basis, that both are particular interests, and both are political: the situation cannot be described as a battle between rationality and superstition, or between “knowledge” and “belief.”

In this way, Stengers (and Latour) are criticizing, not just Big Science, but also (and perhaps even more significantly) the default assumptions of post-Enlightenment secular liberalism. Their criticism is quite different from that espoused by such thinkers as Zizek and Badiou; but there is a shared rejection of the way that liberal “tolerance” (the “human face,” you might say, of multinational capital) in fact prevents substantive questions from being asked, and substantive change from happening. This is another Big Issue that I am (again) only beginning to think through, and that I will have to return to in future posts. But as regards Stengers, my real question is this: Where do Stengers’ and Latour’s anti-modernist imperatives leave us, when it comes to dealing with the fundamentalist, evangelical Christians in the United States today? Does the need to deprivilege science’s claims to exclusive truth, and to democratically recognize other social/cultural/political claims, mean, for instance, that we need to give full respect to the claims of “intelligent design” or creationism, and let them negotiate on an equal footing with the claims of evolutionary theory? To say that we shouldn’t tolerate the fundamentalists because they themselves are intolerant is no answer. Nor am I sure that the answer I have given before — that denying the evolution of species is akin to denying the Holocaust, since both are matters of historical events rather than of (verifiable or falsifiable) theories — works either. I realize I am showing my own biases here: it’s one thing to uphold the claims of disenfranchised native peoples, another to uphold the claims of a group that I think is oppressing me as much as they think I and my like are oppressing them. But this is really where the aporia comes for me; where I am genuinely uncertain as to the merits of Stengers’ arguments in comparison to the liberal “tolerance” she so powerfully despises.

Confidence Games

Mark C. Taylor’s Confidence Games: Money and Markets in a World Without Redemption is erudite, entertaining, and intellectually wide-ranging — and it has the virtue of dealing with a subject (money and markets) that rarely gets enough attention from people deeply into pomo theory. Why, then, did I find myself so dissatisfied with the book?

Taylor is a postmodern, deconstructionist theologian — if that makes any sense, and in fact when reading him it does — who has written extensively about questions of faith and belief in a world without a center or foundations. Here he writes about the relations between religion, art, and money — or, more philosophically, between theology, aesthetics, and economics. He starts with a consideration of William Gaddis’ underrated and underdiscussed novels The Recognitions and JR (the latter of which he rightly praises as one of the most crucial and prophetic reflections on late-20th-century American culture: in a book published in 1975, Gaddis pretty much anticipates the entire period from the deregulation and S&L scams of the Reagan 80s through the Enron fiasco of just a few years ago, nailing down both the crazy economic turbulence and fiscal scamming, and its influence on the larger culture). From Gaddis, Taylor moves on to the history of money, together with the history of philosophical reflections upon money. He’s especially good on the ways in which theological speculation gets transmuted into 18th- and 19th-century aesthetics, and on how both theological and aesthetic notions get subsumed into capitalistic visions of “the market.” In particular, he traces the Calvinist (as well as aestheticist) themes that stand behind Adam Smith’s vision of the “invisible hand” that supposedly ensures the proper functioning of the market.

The second half of Taylor’s book moves towards an account of how today’s “postmodern” economic system developed, in the wake of Nixon’s abandonment of the gold standard in 1971, the Fed’s conversion from Keynesianism to monetarism in 1979, and the general adoption of “neoliberal” economics throughout the world in the 1980s and 1990s. The result of these transformations is the dematerialization of money (since it is no longer tied to gold) and the replacement of a “real” economy by a “virtual” one, in which money becomes a series of ungrounded signs that only refer to one another. Money, in Taylor’s account, has always had something uncanny about it — because, as a general equivalent or medium of exchange, it is both inside and outside the circuits of the items (commodities) being exchanged; money is a liminal substance that grounds the possibility of fixed categories and values, but precisely for that reason, doesn’t itself quite fit into any category, or have any autonomous value. But with the (re-)adoption of free-market fundamentalism in the 1980s, together with the explosive technological changes of the late 20th century — the growth of telecommunications and of computing power that allow for global and entirely ‘fictive’ monetary flows — this all kicks into much higher gear: money becomes entirely “spectral.” Taylor parallels this economic mutation to similar experiences of ungroundedness, and of signs that do not refer to anything beyond themselves, in the postmodern architecture of Venturi and after, in the poststructuralist philosophy of Derrida (at least by Taylor’s somewhat simplistic interpretation of him), and more generally in all facets of our contemporary culture of sampling, appropriation, and simulation. (Though Taylor only really seems familiar with high art, which has its own peculiar relationship to money; he mentions the Guggenheim Museum opening a space in Las Vegas, but — thankfully perhaps — is silent on hiphop, television, or anything else that might be classified as “popular culture”).

I think that Taylor’s parallels are a bit too facile and glib, and underrate the complexity and paradoxicality of our culture of advertising and simulation — but that’s not really the core of my problem with the book. My real differences are — to use Taylor’s own preferred mode of expression — theological ones. I think that Taylor is far too idolatrous in his regard for “the market” and for money, which traditional religion has seen as Mammon, but which he recasts as a sort of Hermes Trismegistus or trickster figure (though he doesn’t directly use this metaphor), as well as a Christological mediator between the human and the divine. Taylor says, convincingly, that economics cannot be disentangled from religion, because any economic system ultimately requires faith — it is finally only faith that gives money its value. But I find Taylor’s faith to be troublingly misplaced: it is at the antipodes from any form of fundamentalism, but for this very reason oddly tends to coincide with it. In postmodern society, money is the Absolute, or the closest that we mortals can come to an Absolute. (Taylor complacently endorses the hegelian dialectic of opposites, without any of the sense of irony that a contemporary christianophile hegelian like Zizek brings to the dialectic). Where fundamentalists seek security, grounding, and redemption, Taylor wants to affirm uncertainty and risk “in a world without redemption.” But this means that the turbulence and ungroundedness of the market makes it the locus for a quasi-religious Nietzschean affirmation (“risk, uncertainty, and insecurity, after all, are pulses of life” — 331) which is ultimately not all that far from the Calvinist faith that everything is in the hands of the Lord.

Taylor at one point works through Marx’s account of the self-valorization of capital; for Taylor, “Marx implicitly draws on Kant’s aesthetics and Hegel’s philosophy” when he describes capital’s “self-renewing circular exchange” (109). That is to say, Marx’s account of capital logic has the same structure as Kant’s organically self-validating art object, or Hegel’s entire system. (Taylor makes much of Marx’s indebtedness to Hegel). What Taylor leaves out of his account, however, is the part where Marx talks about the appropriation of surplus value, which is to say what capital does in the world in order to generate and perpetuate this process of “self-valorization.” I suggest that this omission is symptomatic. In his history of economics, Taylor moves from Adam Smith to such mid-20th-century champions of laissez faire as Milton Friedman and F. A. Hayek; but he never mentions, for instance, Ricardo, who (like Marx after him) was interested in production and consumption, rather than just circulation.
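
Marx’s formula for this “self-renewing circular exchange” is compact enough to quote (this is the general formula for capital from Volume 1 of Capital, supplied here for reference; it is not a passage Taylor himself works through):

```latex
M \rightarrow C \rightarrow M', \qquad M' = M + \Delta M
```

The increment, the surplus value, is precisely what cannot be generated inside the circuit of exchange itself; for Marx it has to be appropriated elsewhere, in production, and this is the step that Taylor’s aestheticized reading passes over.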

Now, simply to say — as most orthodox Marxists would do — that Taylor ignores production, and the way that circulation is grounded in production, is a more “fundamentalist” move than I would wish to make. Taylor is right to call attention to the eerily ungrounded nature of contemporary finance. Stock market prices are largely disconnected from any underlying economic performance of the companies whose stocks are being traded; speculation on derivatives and other higher-order financial instruments, which have even less relation to actual economic activity, has largely displaced productive investment as the main “business” of financial markets today. But Taylor seems to celebrate this process as a refutation of Marx and Marxism (except to the extent that Marx himself unwittingly endorses the self-valorization of capital, by describing it in implicitly aesthetic and theological terms). Taylor tends to portray Marx as an old-school fundamentalist who is troubled by the way that money’s fluidity and “spectrality” undermine metaphysical identities and essences. But this is a very limited and blinkered (mis)reading of Marx. For Marx himself begins Capital with the notorious discussion of the immense abstracting power of commodities and money. And subsequently, Marx insists on the way that circuits of finance tend, in an advanced capitalist system, to float free of their “determinants” in use-value and labor. The autonomous “capital-logic” that Marx works out in Volumes 2 & 3 of Capital is much more true today than it ever was in Marx’s own time. Marx precisely explores the consequences of these developments without indulging in any “utopian-socialist” nostalgia for a time of primordial plenitude, before money matters chased us out of the Garden.

Let me try to put this in another way. The fact that postmodern financial speculation is (quite literally) ungrounded seems to mean, for Taylor, that it is therefore also free of any extraneous consequences or “collateral damage” (Taylor actually uses this phrase as the title of one section of the book, playing on the notion of “collateral” for loans but not considering any extra-financial effects of financial manipulations). Much of the latter part of Confidence Games is concerned with efforts by financiers and economists, in the 1980s and 1990s, to manage and minimize risk; and with their inability to actually do so. Taylor spends a lot of time, in particular, on the sorry story of Long-Term Capital Management (LTCM), the hedge fund that collapsed so spectacularly in 1998. After years of mega-profits, LTCM got called on its outrageously leveraged investments, found that it couldn’t meet its obligations, and had to be bailed out to avoid a domino effect leading to worldwide financial collapse. In Taylor’s view, there’s a kind of moral lesson in this: LTCM wanted to make hefty profits without taking the concomitant risks; but eventually the risks caught up with them, in a dramatic movement of neo-Calvinist retribution, a divine balancing of the books. Taylor doesn’t really reflect on the fact that the “risks” weren’t really all that great for the financiers of LTCM themselves: they lost their paper fortunes, but they didn’t literally lose their shirts or get relegated to the poorhouse. Indeed their losses were largely covered, in order to protect everyone else, who would have suffered from the worldwide economic collapse that they almost triggered. The same holds, more recently, for Enron. Ken Lay got some sort of comeuppance when Enron went under, and (depending on the outcome of his trial) he may even end up having to serve (like Martha Stewart) some minimum-security jail time. But Lay will never be in the destitute position of all the people who lost their life savings and old-age pensions in the fiasco. Gaddis’ JR deals with the cycles of disruption and loss that are triggered by the ungrounded speculations at the center of the novel — but this is one aspect of the text Taylor never talks about.
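
The arithmetic of leverage makes the asymmetry plain. A minimal sketch, with round hypothetical figures of roughly the order usually cited for LTCM (not its actual balance sheet):

```python
# Illustrative only: how high leverage turns a small move in asset prices
# into a total loss of equity. The figures are invented round numbers.
equity = 5e9            # hypothetical capital: $5 billion
leverage = 25           # hypothetical leverage ratio
assets = equity * leverage

for drop in (0.01, 0.02, 0.04):
    loss = assets * drop
    remaining = equity - loss
    print(f"{drop:.0%} fall in asset values -> "
          f"${loss / 1e9:.1f}B lost, ${remaining / 1e9:.1f}B equity left")
# At 25x leverage, a 4% decline in asset values wipes out the fund entirely;
# everything beyond that point is somebody else's loss.
```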

Taylor sharply criticizes the founding assumptions of mainstream economists and financiers: the ideas that the market is “rational,” and that it tends toward “equilibrium.” And here Taylor is unquestionably right: these founding assumptions — which still pervade mainstream economics in the US and around the world — are indeed nonsensical, as well as noxious. It’s only under ideal, frictionless conditions, which almost never exist in actuality, that Smith’s “invisible hand” actually does operate to create “optimal” outcomes. Marginalist and neoclassical/neoliberal economics is probably the most mystified discipline in the academy today, wedded as it is to the pseudo-rigor of mathematical models borrowed from physics, and deployed in circumstances where none of the idealizations at the basis of physics actually obtain. It’s welcome to see Taylor take on the economists’ “dream of a rationally ordered world” (301), one every bit as out of touch with reality, and as harmful in its effects when people have tried to bend the real world to conform to it, as Soviet communism ever was.

But alas — Taylor only dismisses the prevalent neoclassical version of the invisible hand, in order to welcome it back in another form. If the laws of economic equilibrium, borrowed by neoclassical economics from 19th-century physical dynamics, do not work, for Taylor this is because the economy is governed instead by the laws of complex systems, which he borrows from late-20th-century physics in the form of chaos and complexity theory. There is still an invisible hand in Taylor’s account: only now it works through phase transitions and strange attractors in far-from-equilibrium conditions. Taylor thus links the physics of complexity to the free-market theories of F. A. Hayek (Margaret Thatcher’s favorite thinker), for whom the “market” was a perfect information-processing mechanism that calculated optimal outcomes as no “central planning” agency could. According to Hayek’s way of thinking, since any attempt at human intervention in the functioning of the economy — any attempt to alleviate or mitigate circumstances — will necessarily have unintended and uncontrollable consequences, we do best to let the market take its course, with no remorse or regret for the vast amount of human suffering and misery that is created thereby.
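
To see how little machinery is needed to generate the behavior Taylor invokes, consider the logistic map, the textbook toy example of chaos (my example, not Taylor’s): a one-line nonlinear system that settles into equilibrium for some parameter values and becomes chaotic for others.

```python
# The logistic map x -> r*x*(1-x): stable equilibrium for small r,
# period-doubling, then chaos as r grows.
def trajectory(r: float, x0: float = 0.2, skip: int = 500, keep: int = 5):
    x = x0
    for _ in range(skip):          # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):          # sample the long-run behavior
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, trajectory(r))
# r=2.8: a single fixed point; r=3.2: a 2-cycle; r=3.5: a 4-cycle;
# r=3.9: chaos, with no equilibrium and sensitive dependence on
# initial conditions.
```

Whether anything in an actual economy obeys such an equation is, of course, exactly the question at issue.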

Such sado-monetarist cruelty is clearly not Taylor’s intention, but it arises nevertheless from his revised version of the invisible hand, as well as from his determination to separate financial networks from their extra-financial effects. I’ll say it again: however much Taylor celebrates the way that everything is interconnected, and all systems are open, he still maintains a sort of methodological solipsism, a blindness to external consequences. The fact that financial networks today (or any other sort of self-perpetuating system of nonreferential signs) are ungrounded, self-affecting systems, produced in the unfolding of a “developmental process [that] neither is grounded in nor refers to anything beyond itself” (330) — this fact does not exempt these systems from having extra-systemic consequences: indeed, if anything, the system’s lack of “groundedness” or connection makes the extra-systemic effects all the more intense and virulent. To write off these effects as “coevolution,” or as the “perpetual restlessness” of desire, or as a wondrous Nietzschean affirmation of risk, is to be disingenuous at best.

There’s a larger question here, one that goes far beyond Taylor. When we think today of networks, or of chaotic systems, we think of patterns that are instantiated indifferently in the most heterogeneous sorts of matter. The same structures, the same movements, the same chaotic bifurcations and phase transitions, are supposedly at work in biological ecosystems, in the weather, and in the stock market. This is the common wisdom of the age — it certainly isn’t specific to Taylor — but it’s an assumption that I increasingly think needs to be criticized. The very fact that the same arguments from theories of chaos/complexity and “self-organization” can be cited with equal relevance by Citibank and by the alterglobalization movement, and can be used to justify both feral capitalism and communal anarchism, should give us pause. For one thing, I don’t think we know yet how well these scientific theories will hold up; they are drastic simplifications, and only time will tell how well they perform, how useful they are, in comparison to the drastic simplifications proposed by the science of, say, the nineteenth century. For another thing, we need to ask whether the idea of the same pattern instantiated indifferently in various sorts of matter is just another extension — powerful in some ways, but severely limiting in others — of Western culture’s tendency to divide mind or meaning from matter, and to devalue the latter. For yet another, we should be very wary of drawing political and ethical consequences from scientific observation and theorization, for such inferences usually involve a great deal of arbitrariness, projecting the scientific formulations far beyond the circumstances in which they are meaningful.

In Praise of Plants

In Praise of Plants, by Francis Hallé, is a pop science book (i.e. scientifically informed, but aimed at a general, non-specialist audience) about the biology of plants. The author, a French botanist whose speciality is trees of the tropics, writes explicitly to correct the zoocentrism of mainstream biology: its tendency to take primarily animal models, and to generalize what is true of animals into what is true of biological organisms generally. Hallé argues that this not only does an injustice to plants and other organisms — one rooted in the narcissism and navel-gazing of human beings as a species, since of course we are animals ourselves — but also gives a constricted and distorted view of life’s potentialities.

To a certain extent, In Praise of Plants could be described as an old-fashioned work of “natural history.” This is the sort of biological writing that preceded all the last half-century’s discoveries about DNA and the genome. Such writing is anecdotal, empirical, and broadly comparative; it emphasizes historical contingency, and it pays a lot of attention to morphology, embryology, and other such fields that have been largely ignored in the wake of the genetic revolution. I myself value natural history writing highly, precisely because it presents an alternative to the genetic reductionism, hyper-adaptationism, and use of mathematical formalization that have become so hegemonic in mainstream biology.

Hallé emphasizes precisely those aspects of plant life that are irreducible alike to animal paradigms, and to the hardcore neo-Darwinian synthesis. Plants’ immobility, and their ability to photosynthesize, are the two things that differentiate them most radically from animals, which are usually mobile and unavoidably predatory. But these differences lead to many astonishing consequences. For instance, plants’ inability to move is probably what has led to their astonishing biochemistry: since they cannot defend themselves by running away, they have evolved all sorts of complex compounds that affect animal behavior (from poisons to psychedelics to aphrodisiacs). For similar reasons, plants don’t have fixed “body plans” the way most phyla of animals (like vertebrates or arthropods) do. Instead, plants have far fewer organs than animals, and these organs can be (re)arranged more freely; this allows for a far greater diversity of shapes and sizes among even closely related plant species than would be possible for animals.

More importantly, reproduction is much more fluid and flexible among plants than it is among animals. Plants can — and do — reproduce both sexually and asexually. They are able to hybridize (with fertile offspring) to a far greater extent than animals can. They have separate haploid and diploid life stages, which greatly extends their options for dispersion and recombination. Where mortality is the compulsory fate of animals, most plants (all except for the “annuals”) are potentially immortal: they can continue to grow, and send out fresh shoots, indefinitely. This is (at least in part) because plants do not display the rigid separation between germ and soma that animals do. Acquired characteristics in animals cannot be inherited, because only mutations to the gametes are passed on; mutations to the other 99.999% of the animal’s body play no part in heredity. But since plants do not have the germ/soma distinction, and since all the cells of a plant remain potentially capable of producing fresh shoots and of flowering, plants can accumulate mutations both in themselves and in their offspring (they can exhibit Lamarckian as well as Darwinian inheritance). They are also far more capable than animals are of receiving lateral mutations (i.e. when a mutation is spread, not by inheritance, but by transversal communication, via a plasmid or virus that moves from one species to another, taking genetic material from one organism and inserting it into another). For plants, natural selection therefore takes place less between competing organisms than among the different parts of a single organism; large trees will often contain branches that have different genotypes.
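
Hallé’s point about germ and soma is easy to state but striking in its consequences, so here is a toy simulation of my own (nothing like it appears in the book): a plant grows as a population of branching modules, each of which mutates independently, and each of which, unlike an animal’s somatic cells, remains capable of flowering and passing its mutations on. Even this crude model ends up as a single “tree” bearing several genotypes, among which selection can act.

```python
# Toy model of modular plant growth; my illustration, not Hallé's.
import random
random.seed(1)

def grow_plant(generations=6, mu=0.25):
    """Each branch is a module that mutates independently; unlike an animal's
    soma, any branch can later flower and pass its mutations to offspring."""
    branches = [0]                            # mutation count per branch tip
    for _ in range(generations):
        branches = [m + (1 if random.random() < mu else 0)
                    for m in branches
                    for _ in range(2)]        # every branch forks in two
    return branches

tree = grow_plant()
print(len(tree))                 # 64 branch tips after six forkings
print(sorted(set(tree)))         # distinct genotypes coexisting on one tree
```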

All this is quite mindblowing, and suggests a far broader picture of life than the one derived from zoology alone. The only scientist I can compare Hallé to in this regard is Lynn Margulis, whose now accepted theories about symbiosis, together with her still unorthodox theories about the mechanisms of evolution and speciation, derive to a great extent from her focus on bacteria and monocellular eukaryotes instead of animals. Hallé doesn’t have Margulis’ theoretical breadth, but his presentation has equally subversive implications vis-a-vis the neo-Darwinian orthodoxy.

One consequence, for me, of In Praise of Plants is that Deleuze and Guattari’s distinction between “rhizomatic” and “arborescent” modes of organization needs to be rethought. In point of fact, trees are far less binaristic and hierarchical than Deleuze and Guattari make them out to be. D&G are really describing both the rhizome and the tree in largely zoocentric terms. A better understanding of botany would actually fit in quite well with D&G’s larger philosophical aims. (Deleuze does show a somewhat better understanding of botany in his treatment of the sexuality of flowers in his book on Proust).

The main (and only substantial) flaw in In Praise of Plants is that, in his desire to emphasize the difference between plants and animals, Hallé gives short shrift to the third kingdom of multicellular organisms, the fungi. He basically spends just a single paragraph on them, in the course of which he presents them as intermediate between plants and animals. This means that he sidelines and belittles fungi in precisely the same way that he (rightly) accuses mainstream biologists of sidelining and belittling plants. Since Hallé is a botanist and not a mycologist, I wouldn’t expect him to give a full account of the fungi. But he ought at least to acknowledge that such an account is needed, since fungi are arguably as different from both animals and plants as the latter are from each other. This would certainly seem to be the case from the tantalizingly little I know about the sexuality of fungi. Where will I find a book that does for the fungi what Hallé’s book so magnificently does for the plants?

The Invention of Modern Science

Isabelle Stengers’ The Invention of Modern Science is pretty much the best thing anyone has written about the science wars (the disputes between scientists and those in the humanities and ‘soft’ social sciences doing “science studies”: the biggest battles were fought in the 1990s, but I think the issues are still very much alive today). Stengers is close to Bruno Latour (about whom I have expressed reservations), but she goes into the theoretical issues about the status of science’s claims to truth more deeply, and — I think — more cogently and convincingly, than he does.

Stengers starts by asking where science’s claims to truth come from. She goes through various philosophers of science, like Popper, Kuhn, Lakatos, and Feyerabend, and notes the difficulties with their various formulations, as well as how their various accounts relate to scientists’ own images of what they are doing. All these thinkers try to balance the atemporal objectivity of science with their sense that science is a process, which therefore has a history (Kuhn’s paradigm shifts, Popper’s process of falsification) in at least some sense. But Stengers argues that none of these thinkers view science historically enough. The emergence of modern science is an event: in the way he appeals to facts, or more precisely to experiments, to back up his arguments, Galileo introduces a new mode and method of discerning truth into our culture. And every new discovery, every new scientific experiment, is in similar ways a new event.

Understanding science historically, as an event, goes against the claims made for science as revealing the deep truth of the universe, as being the ultimate authority, as promising to provide a “theory of everything.” But it doesn’t really go against how scientists actually work, most of the time, when they perform experiments that lead them to certain results about which they then make theories. The novelty of science, compared to other ways that we make distinctions, account for facts, or explain processes, is that, in science, we put ourselves into an experimental situation, which is to say, into a situation in which we are forced to accept certain results and consequences, which come to us from the outside world with which we are interacting, even when these results and consequences go against our own interests and predispositions. In doing this, science becomes a method for separating truth from fiction — at least in certain well-defined situations.

From this point of view, it’s obvious that science is not merely an arbitrary “social construction” (as scientists often accuse “science studies” people of believing, and as some naive “science studies” people actually may believe). But Stengers’ account also means that “hard” science’s claim to authority is not unlimited, but is grounded in particular events, particular experiments, a particular history. Scientific practice doesn’t vanish into the truth disclosed by that practice. To the contrary: scientific truth remains embedded in the historical practice of science. This is where Latour’s explorations of concrete scientific practices come into the picture. This is also where and why science is, as Stengers repeatedly says, intrinsically political. It’s a matter of scientific claims negotiating with the many other sorts of claims that they encounter, in culture and society, but also in the “natural” world.

In other words: science produces truths, but it doesn’t produce The Truth. It isn’t the sole authority on everything, or the one and only source of legitimate knowledge. To say that science produces truths is also to say that it’s meaningless to ask to what degree these truths are “discovered,” and to what degree they are “invented.” This just isn’t a relevant distinction any longer. What matters is that they are truths, and our very existence is intricated with them. We can’t deny that the Earth goes around the Sun, rather than the reverse, just as we can’t deny the legacy of wars and revolutions that have led to our current world situation. And this is all the more the case when we expand our point of view to consider historical sciences, like biology, alongside experimental ones like physics. Despite the current mathematization of biology, the story of how human beings evolved is an historical one, not one that can be ‘proven’ through experiment. If physics is a matter of events, then biology is even more so. Stengers notes that American creationists explicitly make use of the fact that biology isn’t experimental in the sense that physics is, in order to back up their claims that evolution is just a “theory” rather than something factual. One problem with scientific imperialism — the claim that science is the ONLY source of truth — is that its overreaching precisely encourages these sorts of counter-claims. I’d agree with the creationists when they say that the theory of evolution is not established in the same way as, say, Newton’s laws of motion are established. (Though remember that in quantum situations, and relativistic near-speed-of-light situations, those laws of motion themselves no longer work). Rather, our answer to the creationists should be that denying that human beings evolved through natural selection (combined, perhaps, with other factors having to do with the properties of emergent systems) is exactly the same sort of thing as denying that the Holocaust ever happened.

As for science, the problem comes when it claims to explain everything, when it arrogates to itself the power to declare all other forms of explanation illegitimate, when it abstracts itself away from the situations, the events, in which it distinguishes truth from fiction, and claims to be the repository of all truths, with the authority to relegate all other truth-claims to the status of discredited fictions. As Stengers notes, when science does this (or better, when scientists and their allies do this), science is not just political, but is playing a very particular sort of power politics; and in doing so, science is certainly not disinterested, but is in fact expressing extremely strong and powerful interests. Whether it is “for” or “against” current power arrangements (in the past, e.g. the eighteenth century, it was often against, while today scientific institutions are more frequently aligned with, and parts of, dominant corporate and governmental powers), science has become a political actor, and a wielder of power in its own right. The point of “science studies,” as both Stengers and Latour articulate it, is to democratize the politics in which science is inevitably involved: both pragmatically (e.g., when the authority of “science,” in the form of genetic engineering, is tied in with the marketing muscle and monopolistic position of Monsanto) and theoretically (there are other truths in addition to scientific ones; science does not have the monopoly upon producing truth; other practices of truth, in other realms, may well necessarily involve certain types of fabulation, rather than being reducible to the particular way that science separates truth from fiction, in particular experimental situations, and in particular ways of sifting through historical evidence).

One way to sum all this up is to say that science, for Stengers, is a process rather than a product; it is creative, rather than foundational. Its inventions/discoveries introduce novelty into the world; they make a difference. Scientific truth should therefore be aligned with becoming (with inciting changes and transformations) rather than with power (with legislating what is and must be). Scientists are wrong when they think they are entitled to explain and determine everything, through some principle of reduction or consilience. But they are right when they see an aesthetic dimension to what they do (scientists subject themselves to different constraints than artists do, but both scientists and artists are sometimes able to achieve beauty and cogency as a result of following their respective constraints).

Smart Gels

I’ve seen this in a number of places (Slashdot refers me both to the Wired article and to the original press release): Thomas DeMarse, of the University of Florida, has cultivated a culture of 25,000 living rat neurons, and hooked up the culture to a computer running flight simulator software. The neurons have learned, in effect, to fly the simulated plane.

This is fascinating on a number of grounds. The neurons constitute, in effect, an artificial bio-brain. The neurons have hooked up with one another for the first time, in the same way that neurons actually do hook up in a developing human’s or animal’s brain; and an interface has been successfully established between this bio-brain and the silicon computational machinery of a computer. Strong-AI enthusiasts like Ray Kurzweil fantasize about replacing human neurons with silicon chips, one by one, until the mind has been entirely translated or downloaded into a computer. But neurons and silicon logic chips function in quite different ways, so the idea of interfacing neurons and digital computers, as DeMarse and others have done, is in fact much more plausible. Brains need to be embodied, in a way that electronic computing machines don’t; but an experiment like this suggests a way that this embodiment could in fact be made entirely simulacral, as in the old (updated-Cartesian) ‘brain in a vat’ scenario.

The whole experiment turns on the fact that brains don’t operate the way digital computers do. Brains signal chemically as well as electrically, which makes them different by nature from computer chips; and from what little evidence we have on the subject, it would seem (as Gerald Edelman, among others, argues) that brains are not in fact Turing machines, but operate according to entirely different principles. Indeed, DeMarse’s goal is less to train the neurons to do useful computational work than “to learn how the brain does its computation.”
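
To make the architecture concrete, here is a minimal sketch of a closed loop of this kind. It is emphatically not DeMarse’s actual code or apparatus; every name in it (FlightSim, read_electrodes, decode, stimulate) is a hypothetical stand-in, and the crucial part, the plasticity by which feedback stimulation reshapes the living network, is deliberately left as a stub, since that is exactly what the real experiment is designed to probe.

```python
# Schematic of the closed loop only; not DeMarse's actual code or API.
# FlightSim, read_electrodes, decode, and stimulate are hypothetical stand-ins.
import random

class FlightSim:
    """Crude one-axis 'simulator': pitch drifts randomly unless corrected."""
    def __init__(self):
        self.pitch = 0.0
    def step(self, control):
        self.pitch += random.uniform(-0.1, 0.1) - 0.05 * control
        return self.pitch                    # deviation from level flight

def read_electrodes(n=60):
    """Stand-in for sampling firing rates from the multi-electrode array."""
    return [random.random() for _ in range(n)]

def decode(rates):
    """Map aggregate network activity to a single control signal."""
    return 2.0 * sum(rates) / len(rates) - 1.0

def stimulate(error):
    """Stand-in for feeding the plane's error back as patterned stimulation.
    In the living preparation this feedback is what gradually reshapes the
    network's connectivity; that plasticity is deliberately left out here."""
    pass

sim = FlightSim()
for _ in range(200):
    error = sim.step(decode(read_electrodes()))
    stimulate(error)     # close the loop: activity -> plane -> stimulation
```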

The SF writer Peter Watts in fact deals with all these questions in his “Rifters” novels Starfish and Maelstrom (I haven’t yet read the just-published third volume in the series, Behemoth: B-Max; a fourth and final volume is scheduled to come out next year). In these novels, neural cultures called “smart gels” do computational tasks — involving pattern recognition, nuanced judgments involving various qualitative factors, and so on — that digital computers are ill-suited for. But the fact that “smart gels” are required to make human-style judgments, while being devoid of human personalities and emotions, itself leads to disturbing and disastrous consequences…. It’s always a problem when “intelligence” is divorced from context.

Against Method

Paul Feyerabend’s Against Method (originally published in 1975) is another one of those books I have been meaning to read for years, but never got around to before now. Feyerabend (1924-1994) was a philosopher of science, famous (or notorious) for his “epistemological anarchism,” his insistence that “the only principle” that can be justified for scientific research is that “anything goes.” I’ve turned to him now, partly out of my interest in science studies, and partly because I’m supposed to give a talk in a few months at a symposium on “foundations and methods in the humanities,” a task I am finding difficult because I have no belief in foundations, and little use for methodologies.
Feyerabend critiques and rejects the attempt — by philosophers of science, primarily, but also by popular apologists for science, and sometimes by scientists themselves — to establish norms and criteria to govern the way science works, and to establish what practices and results are valid for scientific research. Feyerabend’s particular target is Karl Popper’s doctrine of “falsification,” but more generally he opposes any a priori attempt to legislate what can and cannot be done in science.
Feyerabend’s argument is partly “deconstructive” (by which I mean that he shows how rationalist arguments are necessarily internally inconsistent and incoherent — though he does not seem to have much use for Derridean deconstruction as a philosophy), and partly historical and sociological. He argues that actual scientific practice did not, does not, and indeed cannot, make use of the rationalist norms that philosophers of science, and ideologists of science, have proclaimed. He analyzes Galileo’s defense of heliocentrism at great length, and shows that Galileo’s arguments were riddled with non sequiturs, loose analogies, ad hoc assumptions, self-contradictory and easily falsifiable assertions, rhetorical grandstanding, and so on. The point is not to undermine Galileo, or to assert that there are no grounds for choosing between seeing the earth or the sun as the center. Rather, Feyerabend wants to show that such (disreputable) strategies were strictly necessary; without them, Copernicus and Galileo never could have overthrown the earth-centered view, which had both the theoretical knowledge and the “common sense” of their time, as well as the authority of the Church, on its side. It was not a matter of a “more accurate” theory displacing a less accurate one, but rather a radical shift of paradigms, one which could only be accomplished by violently disrupting both accepted truths and accepted procedures. It is only in the hundreds of years after Galileo convinced the world of the heliocentric theory that the empirical evidence backing up the theory was generated and catalogued.
Feyerabend is drawing, of course, on Thomas Kuhn’s work on “paradigm shifts,” but he is pushing it in a much more radical direction than Kuhn would accept. Kuhn distinguishes between “normal science,” when generally accepted research programs and paradigms are in place, and rationalistic criteria do in fact function, and times of crisis, when paradigms break down under the weight of accumulating anomalies, thus forcing scientists to cast about for a new paradigm. For Feyerabend, however, there is no “normal science.” There was no crisis, or weight of anomalies, that forced Copernicus and then Galileo to cast about for a new astronomical paradigm; it would be more to the point to say that Galileo deliberately and artificially created a crisis, in order to undermine a paradigm that was generally accepted and that worked well, and put in its place a new paradigm that he supported more out of passion and intuition than out of anything like solid empirical evidence. Because “facts” are never independent of social contexts and theoretical assumptions, Galileo could not have provoked a shift in the theoretical assumptions of his time merely by appealing to what were understood then as the “facts.”
Such an argument was quite shocking in 1975. It has become much less so in the years since, as rhetorical theorists, sociologists, and others in “science studies” have studied in great depth the way science actually works, and have contested many other instances of (capital-S) Science and (capital-R) Reason on historical and sociological grounds.
There remains a subtle but important difference, however, between Feyerabend and more recent science studies historians and thinkers like Bruno Latour, Stephen Shapin, Steve Fuller, and many others. Feyerabend justifies his “epistemological anarchism” on the ground that it is necessary for the actual, successful practice of science, and indeed for the “progress” of science — though he explicitly refuses (page 18) to define what he means by “progress.” What this means is that Feyerabend opposes methodological norms and fixed principles of validation largely on pragmatic grounds: which I do not think is quite true of Latour et al. Where Latour sees a long process of negotiation, and a “settlement,” between Pasteur and the bacilli he was studying, Feyerabend doesn’t see Galileo (or Einstein, for that matter) as engaging in any such process vis-a-vis the earth, or the sun, or the universe. Instead, he sees them as blithely ignoring rules of evidence and of verification or falsification, in order to impose radically new perspectives (less upon the world than upon their own cultures). Galileo’s and Einstein’s justification is that their proposals indeed worked, and were accepted; this is what separates them from crackpots, though no criteria existed that could have assured these successes in advance.
What I don’t see enough of in contemporary science studies — though one finds it in Deleuze and Guattari, in Isabelle Stengers, and in the work of my friend Richard Doyle — is Feyerabend’s sense of the kinship between scientific and aesthetic creativity, in that both are engaged in creating the very criteria according to which they will be judged.
More generally, Feyerabend, like Latour and other more recent science studies thinkers, is deeply concerned with democracy, and with the way that the imperialism of Big Science threatens democracy by trying to decree that its Way is the Only Way. Indeed, one probably sees more of this threat today — in the “science wars” that reached a flash point in the mid 1990s, but that are still smouldering, in the popularization of science, and in the pronouncements of biologists like Richard Dawkins and Edward O. Wilson, and physicists like Steven Weinberg and Alan Sokal — than one did when Feyerabend was writing Against Method. But Feyerabend wisely refuses to get lost (as I fear Latour does at times) in the attempt to propose an alternative “settlement” or “constitution” to the one that Big Science has proclaimed for itself. Feyerabend’s genial anarchism, pluralism, and “relativism” (a term he accepts, but only in certain carefully outlined contexts) simply precludes the need for any single alternative account, such as the one Latour struggles to provide. Finally, for Feyerabend, there is no such thing as Science; we should rather speak of the sciences, as a multitude of often conflicting and contradictory practices, none of which can pretend to ultimate authority, and all of which have to be judged and dealt with according to a range of needs, interests, and contexts.
Pluralism is often derided as wishy-washy, wimpy, “soft,” unwilling to take a stand. None of this is true of Feyerabend’s pluralism, though I am not sure how much of his exemption from such charges is due to the rigor of his arguments, and how much to the charm of his rhetorical style — he’s an engaging, inviting, and unaffected writer, able to be clear and focused without becoming simplistic, and able to argue complexly without becoming abstruse. Of course, the attempt to separate logical rigor from stylistic effects is precisely the sort of pseudo-rational distinction that Feyerabend is continually warning us against.

Physics 4: Faster Than the Speed of Light

Joao Magueijo’s Faster Than the Speed of Light is a hoot: something that can’t be said about very many science books. Magueijo is lucid but light (make that ‘lite’) on the details of theoretical physics and cosmology, but he’s great at conveying the flavor of how science works in practice.
Actually, the book’s title is a misnomer: Magueijo isn’t claiming that anything can go faster than the speed of light, but rather that the speed of light is itself variable under certain circumstances (like at the initial moments of the Big Bang, or in a black hole); hence his approach is called VSL (variable speed of light) theory. VSL was originally concocted in order to offer an alternative to Alan Guth’s inflation theory as an account of how certain features of the universe (its relative homogeneity, and its relative “flatness,” or balance between the opposing forces of expansion and gravity) came about.
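To see, very roughly, why a variable c could do the work that inflation does, consider the homogeneity (or “horizon”) problem: distant regions of the sky have the same temperature, yet with a constant speed of light they could never have been in causal contact. The sketch below is a toy calculation of my own, not Magueijo’s actual model; it integrates the comoving horizon in a radiation-dominated universe, once with constant c and once with a made-up c(t) that starts off enormous and decays, to show how a faster early light speed enlarges the region that could have equilibrated.

```python
# Toy numbers and a made-up c(t); an illustration, not Magueijo's model.
import numpy as np

t = np.logspace(-10, 0, 4000)           # time, arbitrary units
a = np.sqrt(t)                          # radiation-dominated scale factor

def comoving_horizon(c_of_t):
    """chi = integral of c(t)/a(t) dt, by the trapezoid rule: how far a
    light signal can have travelled, in comoving terms, since t = 0."""
    y = c_of_t / a
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

c_const = np.ones_like(t)               # standard physics: c is fixed
c_vsl = np.maximum(1.0, t ** -0.75)     # toy VSL: c enormous at early times

print(comoving_horizon(c_const))        # ~2: finite, hence the horizon problem
print(comoving_horizon(c_vsl))          # hundreds of times larger: early causal contact
```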
VSL theory may or may not be correct; but Magueijo claims it has several advantages in comparison to inflation. On the one hand, it hooks up much more interestingly with work on theories of quantum gravity (string theory and/or loop quantum gravity); on the other, it seems to make more in the way of potentially testable predictions than inflation, or quantum gravity theories, are able to do.
(Just a few days ago, a new study was released that seems to put VSL theory into doubt, or that at least invalidates an earlier study that seemed to provide support for VSL).
But what’s great about Magueijo’s book is that he frankly recognizes the possibility that his theory will be falsified. His argument is that scientific discovery has to take these sorts of risks; it’s the only way that new ideas, some of which turn out to be important and true, get generated in the first place.
In line with this, the meat of Magueijo’s book is not in his explanation of the details of physical theory. Rather, it’s in the picture he paints of how scientific collaboration works: how small groups, or even communities, of scientists are needed in order to develop new ideas. Scientific creativity is rarely solitary; as Magueijo points out, even Einstein couldn’t have gotten anywhere without his friends and peers.
The flip side of this, of course, is the sort of rivalry and infighting that takes place in scientific circles, together with all the idiocies of academic bureaucracy and ossification. Magueijo’s stories of “peer review” of journal submissions being used to settle personal scores and to enforce conformity, of theoretical schools taking on a cultlike status, and of ineptitude and imbecility in academia at the administrative level were all quite similar to things I have experienced or known about in my own field. It was exhilarating to find Magueijo calling out such things, often in hilarious and profane detail, instead of relegating them to the shadows.
Magueijo on string theory and loop quantum gravity: “Since they don’t connect with experiment or observations at all, they have become fashion accessories at best, at worst a sort of feudal warfare… As with every cult, people who do not conform to the party line are ostracized and persecuted” (p. 236).
And again: “Stringy people have achieved nothing with a theory that doesn’t exist. They are excruciatingly pretentious in their claims for beauty; indeed, we are all assured that we live in an elegant universe, by the grace of stringy gods” (p. 240 — so much for Brian Greene!).
Notwithstanding this, Magueijo has worked on occasion with both string and loop quantum gravity theorists. His own theories currently also lack experimental testing, but at least he’s frank about this fact (and worried about correcting it).
All in all, Magueijo’s brashness and willingness to expose dirty laundry is a welcome alternative to the official story of science that we so often get.
