Physics 3: String Theory

I finished reading Brian Greene’s The Elegant Universe, which indeed is the best written and most comprehensible introduction-to-string-theory-for-the-mathematically-challenged (such as myself) that I have yet come across. But still, I found myself underwhelmed and far from convinced.
It’s partly John Horgan’s objection, which I have mentioned before: the fact that almost nothing about string theory can be empirically verified/falsified, since the theory deals with entities that remain forever out of the range of experimental investigation. (Indeed, Greene’s chapter on the possible future empirical verification of string theory is by far the lamest part of the book. He basically says that, if more powerful atom smashers and space telescopes are built, then within a decade or two we might discover some new stuff that isn’t incompatible with string theory. In no other branch of science would this qualify as “proof.”)
The physicists would probably answer Horgan that even if strings can never be actually detected (in the ways that, remember, protons and electrons and photons and even quarks can be actually detected) they are still the universe at its most basic, being what quarks and photons and electrons are made of. And this has to be the case, they would say, because the equations work. String theorist Edward Witten is quoted by both Horgan and Greene as saying that the proof of string theory is that it correctly predicts the existence of gravity.
That is to say, string theory is justified mainly on the basis that its consequences are the very “laws of nature” that we already know without it. It claims to unify general relativity (the theory of gravity) with quantum mechanics (the theory of elementary particles and the other forces in the universe aside from gravity, i.e. electromagnetism and the strong and weak nuclear forces). But the only real “proof” it offers of this unification is that the equations of relativity and quantum mechanics can be derived from the string equations (or could be, if the string theory equations were more fully worked out).
It strikes me that the lack of empirical verification that Horgan points to is really a consequence of a deeper problem, that of what Whitehead called the Fallacy of Misplaced Concreteness: the “error of mistaking the abstract for the concrete.” The wondrous equations of string theory are abstractions. They are artificially isolated simplifications of a concrete reality that is more complex and more complete. This, in itself, is not a problem. All science, indeed all knowledge, works by a process of abstraction. “You cannot think without abstractions,” Whitehead says; but that is why, he adds, “it is of the utmost importance to be vigilant in critically revising your modes of abstraction.”
That is to say, the problem comes when abstractions are mistaken for ultimates. In order to make the mathematics of string theory work, physicists need to postulate a whole group of “supersymmetric” subatomic particles that have never been observed (and I mean this in strong contrast to the ways in which protons and electrons and photons and neutrinos have in fact been observed). They also need to postulate six (or seven) “curled up” spatial dimensions, ones that are entirely, and necessarily, outside experience, in addition to the three spatial dimensions that are the core of our experience. This is supposed to make for an “elegant universe,” because of how well the equations fit together (though if your sense of aesthetics, or logic, includes Occam’s Razor, you are not likely to find these assumptions “elegant”).
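The arithmetic behind that “six (or seven),” as I understand it, is simple bookkeeping: superstring theory only works in ten spacetime dimensions (eleven in its M-theory extension), and we only ever observe four (three of space plus one of time), so the remainder has to be curled up out of sight:

$$ 10 - 4 = 6 \quad \text{(superstring theory)}, \qquad 11 - 4 = 7 \quad \text{(M-theory)}. $$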
Now, mathematical requirements have in fact led to what Tom Siegfried, in his rah-rah-science book Strange Matters, calls “prediscoveries.” Things like neutrinos and antimatter were predicted theoretically long before they were actually observed. The theories gave experimentalists strong hints on what to look for, and where; and their predictions were borne out. It’s also worth noting that proto-positivists like Ernst Mach regarded atoms merely as necessary fictions; something which, I think, is refuted by the fact that we can now actually see atoms in scanning tunneling microscopes. It has been a losing proposition, for the past century, to bet against the actuality of scientific abstractions.
But it’s hard to see how additional, unobservable spatial dimensions, or a “multiverse” in which different “universes” had different physical parameters, could move from “prediscovery” to empirical verification in the same way that atoms and neutrinos have. And not for positivist, epistemological reasons, but for more profoundly metaphysical ones. The string theorists’ assumption that science can not only explain how physical forces work, but positively account for them (as in the sense of explaining why physical constants have the particular values they do, which is one thing string theorists hope to accomplish) does seem to be mistaking the explanatory abstractions for the things they have been abstracted from.
Another quote from Whitehead: “Speculative Philosophy is the endeavor to frame a coherent, logical, necessary system of general ideas in terms of which every element of our experience can be interpreted.” By this criterion, string theory fails as speculative philosophy, because its mathematical abstractions fail to be ideas of sufficient generality. But if string theory also fails as empirical science, because it is too purely speculative, where does that leave us?

Physics 2: Incompatibility

OK, here’s something (only one of many, alas, but a particularly important one) that I don’t understand:
The big problem in theoretical physics is the incompatibility between quantum mechanics and general relativity. Both theories seem essentially correct (they have been verified time and time again, to an incredible degree of accuracy); but they are logically incompatible. Most of the time, this incompatibility simply doesn’t matter (quantum mechanics works for the microworld, and general relativity for the cosmic macroworld of immense masses).
But when you have both enormous mass AND a subatomic scale of size (basically, in the center of a black hole, and also at the initial instant of the Big Bang), so that you need both theories at once, the mathematics doesn’t work and the equations turn to nonsense (they give infinite results). You can’t say that either theory has been falsified, exactly; since we cannot access the center of a black hole, or the initial nanosecond of the Big Bang, we haven’t actually found any experimental results that either theory has gotten wrong or failed to predict. Indeed, we never actually encounter any of the situations where either of the theories fails.
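My rough understanding of where those infinities come from: Newton’s constant defines a natural energy scale, the Planck scale, and when you try to compute quantum corrections to gravity, each further correction grows with energy rather than shrinking, so at or beyond that scale the whole calculation runs away:

$$ E_{\text{Pl}} = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2 \times 10^{19}\ \text{GeV}, \qquad \text{corrections} \sim \left(\frac{E}{E_{\text{Pl}}}\right)^{n}, \ \text{which blow up for } E \gtrsim E_{\text{Pl}}. $$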
But theoretical physicists just don’t like the fact that there’s a point where the theories come into conflict.
I’ll leave aside the physicists’ entirely ungrounded metaphysical (or aesthetic) assumption that there must be a single theory in which everything fits together. It could well be that things are incompatible, but they exist anyway, and that’s the end of it.
What I’m interested in now, however, is the REASON for the incompatibility between the two theories. It’s a reason that has been inherited by the theories’ current descendants: string theory (the extension of quantum mechanics) and loop quantum gravity (the extension of general relativity).
Brian Greene, in his book about string theory, The Elegant Universe, explains the incompatibility as follows:
“The notion of a smooth spatial geometry, the central principle of general relativity, is destroyed by the violent fluctuations of the quantum world on small distance scales” (p129). On the large scale, the universe follows a Riemannian geometry as Einstein stipulated. But on the quantum microscale, the smooth space that Riemannian (as well as Euclidean) geometry requires simply cannot exist, given the violent quantum fluctuations of even supposedly empty space. So general relativity needs to be modified to fit the picture of quantum uncertainty — which is what string theory does.
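The “small distance scales” in question are set by the Planck length, the scale built out of the constants governing quantum mechanics, gravity, and relativity, below which (on Greene’s account) the fluctuations overwhelm any smooth geometry:

$$ \ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m}. $$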
However, Lee Smolin, in his book on loop quantum gravity, Three Roads to Quantum Gravity, gives a rather different explanation. He says that the problem with quantum mechanics, as well as with its successor string theory, is that “it does not respect the fundamental lesson of general relativity that spacetime is an evolving series of relationships” (p149). Again, string theory, like quantum mechanics, “replicates the basic mistake of Newtonian physics in treating space and time as a fixed and unchanging background against which things move and interact… the right thing to do is to treat the whole system of relationships that make up space and time as a single dynamical entity, without fixing any of it. This is how general relativity and loop quantum gravity work” (p159). For Smolin, general relativity shows us that space and time have no original existence; they are generated out of relational processes between events. Events do not ‘take place’ in space and time; rather it is only the relations among events that generate space and time in the first place. (This strikes me as a provocatively Whiteheadian way of looking at things, though Smolin never mentions Whitehead).
Another way to rephrase all of this is to say that string theory approaches the incompatibility between quantum mechanics and general relativity from the side of quantum mechanics; while loop quantum gravity approaches the incompatibility from the side of general relativity. Each approach starts by assuming that the problem resides in the classical assumptions of the other theory. For Greene, general relativity fails to include the radicality of quantum weirdness; for Smolin, quantum mechanics fails to include the radicality of relativity’s relational notion of space and time.
I should also note that Smolin’s and Greene’s positions are not really symmetrical. String theory has much more widespread acceptance than loop quantum gravity. As a result, Smolin’s book spends a great deal of time on string theory, trying to reconcile it with loop quantum gravity theory; whereas Greene’s book doesn’t even condescend to mention loop quantum gravity, apparently considering it too wrong, or too insignificant, to merit even the slightest notice.
In summary: even before we get to the mathematics, there seems to be a fundamental metaphysical disagreement between the two camps. Greene and Smolin characterize general relativity so differently that they don’t even seem to be talking about the same theory. For Greene, relativity wrongly assumes a stable geometry; for Smolin, it is quantum mechanics and string theory that wrongly assume the existence of space and time as an absolute background, rather than deriving them from quantum-level events.
So, I have no sense of how to adjudicate this disagreement. I do have the sense that the metaphysics needs to be paid attention to, rather than just the mathematical complexities of the theories (which I obviously cannot ever hope to make a judgment about).
Addendum: I also wonder how all this might be related to other considerations about the derivativeness of space and time. Manuel DeLanda, in his book which tries to give a Deleuzian basis to physical theory, mostly talks about thermodynamics and complexity theory (especially citing Ilya Prigogine) in order to show how “metrical” space and time are derived from “intensive” space and time. This would seem to be a rather different project from that of deriving metrical space and time from the quanta of events/relationships, as Smolin proposes (though for Smolin, like DeLanda and unlike Greene, thermodynamic considerations are a very big part of the picture).
PS: though Smolin says that both space and time must be quantized (have discrete smallest possible values, rather than being infinitely divisible), he mostly talks about quantum space, and says next to nothing about quantum time. What difference would a focus on quantum temporality make to any of these theories?
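(For reference: the natural candidate for the smallest possible unit of time, the companion to the Planck length mentioned above, is the Planck time, $t_P = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}$ seconds. If Smolin is right, something like this would be the “atom” of temporality whose implications he leaves largely unexplored.)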

Physics 1

I’m reading some books on the latest developments in physics — something I do every once in a while. I’m interested in learning about the latest developments in string theory, brane theory, etc — the latest accounts of the “ultimate” structure of the universe; and I’m interested in how the strange anomalies that astronomers continue to turn up — about the age and size of the universe, the degree of its expansion, and so on — can be explained (most recently, in terms of “dark matter,” “dark energy,” and other such strange concepts). I’m interested, too, or perhaps above all, in the philosophical implications of all this.
There are several points that need to be mentioned, as I begin.
First: although the explanations that pop science books give are never of the same depth as the understandings of the scientists themselves, this is especially true with theoretical physics and cosmology. Because things like quantum mechanics, together with more recent developments like string theory, cannot be understood in “layman’s” or intuitive terms. Things like the uncertainty principle, wave/particle duality, and quantum superposition are so profoundly counter-intuitive — although everything we know tells us that they are true and real — that they are not graspable at all in words, images, or logical concepts. To put it differently, they are only comprehensible as very high level mathematical abstractions. Since I — like the overwhelming majority of human beings — do not understand the math, I am simply not capable of understanding quantum mechanics.
Which means that, to a large extent, I am incapable of judging what’s told to me in the books I’m reading on theoretical physics. I accept what I’m told about quantum mechanics, because quantum effects are really experienced in the physical world, just as much as “classical” physical effects (which I can comprehend) are really experienced. But even if I have an idea about how quantum computation might work, I still can’t grasp what a quantum superposition (the cat in the box being both alive and dead) actually means.
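The most I can do is restate the bare formalism, which is easy enough to write down and still explains nothing. A minimal sketch (mine, not from any of these books) of the standard two-state description: the cat’s state is a pair of complex amplitudes, the probabilities come from squaring their magnitudes, and “opening the box” is just sampling one outcome from them; none of which tells me what the superposition is before the measurement.

```python
import numpy as np

# Two basis states: index 0 = "alive", index 1 = "dead".
# An equal superposition assigns each an amplitude of 1/sqrt(2).
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born rule: the probability of each outcome is the squared magnitude of its amplitude.
probabilities = np.abs(state) ** 2   # -> [0.5, 0.5]

# "Opening the box": sample one definite outcome from those probabilities.
rng = np.random.default_rng()
outcome = rng.choice(["alive", "dead"], p=probabilities)
print(probabilities, outcome)
```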
In particular, when scientists disagree (as major physicists currently do, on many matters) my only grounds for deciding between them are aesthetic or metaphysical ones. I am not able to follow the reasoning on the basis of which the scientists themselves argue out their positions; nor can I understand the criteria which could lead one to concede the argument to the other.
(For what it’s worth, the scientists and mathematicians who do understand these matters do so only at this highly abstract mathematical level; they remain as incapable of grasping it intuitively as anyone else. It’s often said that great physicists, like Einstein, somehow have an intuitive grasp of the math and of the concepts they come up with. But Einstein himself is as good an example as any of the fact that it is humanly impossible to “translate” such mathematical/theoretical intuition back into other, more commonplace terms or frames of reference. Einstein had as much trouble as anyone else in grasping the real implications of relativity and (even more) of quantum mechanics.)
That’s the first problem. The second one, perhaps equally serious, is that physicists themselves seem to be reaching the point where mathematical consistency and elegance have become more important than empirical verification. This is a point that was made particularly strongly by John Horgan in his 1996 book The End of Science. Horgan says, basically, that theoretical physics has gone off the deep end, or “jumped the shark”: it has reached a point where it has become concerned with “speculations [that] cannot be empirically verified” (p65). There is no way that we can ever actually know whether or not the universe is made of tiny strings vibrating in nine spatial dimensions, or attached to multidimensional membranes, or whether or not what we call the universe is only one of many, created in innumerable “Big Bangs.” String theory and its kin are no longer making verifiable or falsifiable predictions, the way general relativity and quantum mechanics both did. Instead, the claims for such theories are that, if we solve all their equations, then the theories and facts we already know about the universe come out right.
In other words: there will never be any empirical evidence either in favor of, or against, the existence of six tiny folded-up dimensions in addition to the three spatial dimensions we experience on a regular basis. The claim is rather that, if we start from the postulate of those extra dimensions, then we can derive the laws of quantum mechanics and relativity (as we already know them without any recourse to the extra dimensions). This is supposed to be a good thing, because it saves physics from the embarrassment that, as they are currently formulated without string theory, quantum mechanics and general relativity logically contradict one another — even though they both work, in the sense that they both have been verified, and have measurable consequences in overwhelming numbers of circumstances.
String theory claims to resolve the contradiction; and physicists therefore claim that it describes the ultimate nature of reality. But this is both logically dubious (because it leaves open the possibility that some entirely different theory, making entirely different claims, which nobody has thought of yet, might also resolve the contradiction and mathematically generate the same observed results with regard to gravity and subatomic particles), and metaphysically shaky (since what it claims as “ultimate reality” not only cannot ever be observed, but has no pragmatic consequences whatsoever).
Horgan therefore suggests that what the string and brane theorists are doing is aesthetics, metaphysics, or theology, rather than science.
My feeling about this is that two out of three ain’t bad (I have little use for theology). But if theoretical physicists are really engaging in metaphysics and aesthetics, then we need to think about the philosophical assumptions embedded in, and the philosophical consequences of, their arguments: something that they themselves are not very good at doing, since they tend to be ignorant of the history of philosophy (the occasional reference to Leibniz or Spinoza notwithstanding), and to assume that their mathematics gives them a philosophical authority when they talk about such things as space, time, and why things are the way they are. They tend to be quite philosophically naive.
So even though I don’t understand most of what the physicists are saying, I think it’s important for us to try to think through these issues, rather than accept their assertions at face value — since, in certain contexts, the physicists may well not understand the presuppositions and implications of what they are saying, either.

Scientific study of physical beauty

Another example of the silliness and naivete of sociobiology, evolutionary psychology, etc: “Physical beauty involves more than good looks,” according to a recent study.
Now, this is a revisionist study. Previous surveys have used the methodology of showing male undergraduates pictures of various women (and occasionally the reverse), asking them which ones they found the most attractive, measuring various body ratios of the objects deemed most attractive, and concluding that “physically attractive traits include high degrees of bilateral facial symmetries, such as eyes that are identical in shape and size, and waist-to-hip ratios of 0.7 for women and 0.9 for men.” From these findings it is further extrapolated that these ratios must be universally preferred in all human beings, regardless of cultural and individual differences, and therefore must be genetically hardwired for good adaptive reasons (which usually go back to saying that these ratios are indications of the most fertile mates).
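To be concrete about how thin the underlying procedure is, here is a schematic reconstruction (my own, with invented numbers, not the researchers’ actual data or code) of the kind of computation such surveys rest on: average the waist-to-hip ratios of whichever photographs were rated most attractive, and report the result as a universal preference.

```python
# Schematic reconstruction of the kind of computation such surveys rest on.
# All numbers are invented for illustration.
photos = [
    {"waist_cm": 66, "hip_cm": 94, "mean_rating": 8.1},
    {"waist_cm": 72, "hip_cm": 98, "mean_rating": 6.3},
    {"waist_cm": 61, "hip_cm": 90, "mean_rating": 8.7},
    {"waist_cm": 78, "hip_cm": 99, "mean_rating": 5.2},
]

# Take the photos rated "most attractive" (top half by mean rating)...
top = sorted(photos, key=lambda p: p["mean_rating"], reverse=True)[: len(photos) // 2]

# ...and average their waist-to-hip ratios.
whr = sum(p["waist_cm"] / p["hip_cm"] for p in top) / len(top)
print(f"mean waist-to-hip ratio of top-rated photos: {whr:.2f}")  # roughly 0.69
```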
The present study determines that this is wrong, or at least that it is not the entire picture:
“There is more to beauty than meets the stranger’s eye, according to results from three studies examining the influence of non-physical traits on people’s perception of physical attractiveness. The results, which show that people perceive physical appeal differently when they look at those they know versus strangers, are published in the recently released March issue of Evolution and Human Behavior.”
What this really means, of course, is that people judge people they know well differently from how they judge complete strangers whom they have not even met, but only encountered through photos that have been shown to them for a few seconds. Scarcely a startling finding.
What the researchers conclude, however, is that:
“the fitness value of potential social partners depends at least as much on non-physical traits — whether they are cooperative, dependable, brave, hardworking, intelligent and so on — as physical factors, such as smooth skin and symmetrical features,…It follows that non-physical factors should be included in the subconscious assessment of beauty.”
This illustrates the solipsistic and self-confirming nature of the whole research project. It is assumed a priori that whatever a study uncovers about human “preferences” or ideas or behavior must be adaptive, i.e. a direct product of natural selection. The “subconscious assessment of beauty” must correspond to what is actually (i.e. statistically) most advantageous to reproduction.
With these assumptions, it doesn’t matter how shoddy the methodology is, nor what is “discovered” (whether it is something banal and obvious, or something totally counter-intuitive); in any case, the results will be explained in terms of selective advantage; and at the same time, the theory of selective advantage will be taken to be strengthened by these “results.” The circularity is perfect: nothing can disconfirm the founding assumptions, and the most simplistic and/or inane “findings” can be validated as significant research.

Wider Than the Sky

Wider Than the Sky is Gerald Edelman’s summary/overview of his work on the neural basis of consciousness. (Parts of this work have been explained, in greater detail, in a number of Edelman’s earlier books; the ones I have previously read are Bright Air, Brilliant Fire and The Remembered Present).
Edelman has a peculiar position in neuroscience, from what I have been able to gather: he is disliked by many because of his egocentric insistence on reinventing the wheel. That is to say, he insists so unilaterally on his own theories that he ignores work by others that in many ways parallels his own, and with which his own work would benefit from engaging.
Be that as it may, Edelman offers an interesting and plausible (albeit largely unproven) theory about how consciousness is generated, and how it works, in the brain. His basic thesis is the hypothesis of “neural Darwinism”: he argues that both the growth and “wiring” of neurons during fetal and childhood development, and the activation of neurons in memory and in response to the environment, are governed by a process analogous to Darwinian natural selection. (Edelman previously won a Nobel Prize for his work on the immune system, where an analogous selection mechanism is at work, as populations of antibody-producing cells mutate and proliferate in response to infections). Groups of neurons are selected on the basis of their effectiveness in responding to multiple stimuli from the outside world, and in classifying and responding to these stimuli in terms of categories derived from previous, remembered experiences (what Edelman calls “value-category memory”). Consciousness arises as a result of “reentry”, a kind of hyper-feedback among groups of neurons allowing for coordination among, and unification of, what would otherwise be disconnected percepts. (Edelman defines reentry as “the dynamic ongoing process of recursive signaling across massively parallel reciprocal fibers…” Such a process “allows coherent and synchronous events to emerge in the brain.” These events are the contents of consciousness, and processes of reentry explain how consciousness can be both unified, and yet extremely diverse and continually changing).
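A toy caricature (my own, and nothing like Edelman’s actual models) may make the selectionist logic concrete: start with a repertoire of randomly wired neuronal groups, none of them pre-assigned to anything; expose them to a stimulus; and selectively strengthen whichever group happens to respond best, weighted by a crude “value” signal. The mapping from stimulus to group emerges by selection rather than by instruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# A repertoire of 50 "neuronal groups", each with random connection weights
# to a 10-dimensional stimulus. No group is pre-assigned to any stimulus.
n_groups, dim = 50, 10
weights = rng.normal(size=(n_groups, dim))

def respond_and_select(stimulus, value=1.0, rate=0.1):
    """Groups respond; the best-matching group is selectively strengthened,
    weighted by a 'value' signal (a crude stand-in for value-category memory)."""
    responses = weights @ stimulus                 # each group's response
    winner = int(np.argmax(responses))             # selection, not instruction
    weights[winner] += rate * value * stimulus     # amplify what already worked
    return winner

stimulus = rng.normal(size=dim)
winners = [respond_and_select(stimulus) for _ in range(20)]
print(winners)  # one group comes to "own" this stimulus purely through selection
```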
There are many more details, involving such things as attention, emotion, and the difference between “primary consciousness,” which presumably all mammals and birds have, and “higher-order consciousness” (or what I would call reflexive consciousness, or self-consciousness) which only really emerges with language (though Edelman allows for the possibility that cruder, emergent versions of it may exist among the great apes).
A lot of this would seem to be speculation; a lot of it isn’t really experimentally grounded (at least so far), and some of it may in fact not be ‘scientific’ at all, because not empirically testable or falsifiable.
But to my mind, this is not necessarily a deficiency. Though Edelman throughout expresses his admiration for, and frequent agreement with, the psychology of William James, he begins the book by disclaiming any metaphysical intent, and by expressing puzzlement over James’ claim that, when consciousness finally is explained, “the necessities of the case will make [these explanations] ‘metaphysical’” (Edelman quoting James in his Preface, page xii).
It seems to me that, even in spite of himself, Edelman proves James right, by giving a theory of consciousness that is to some extent unavoidably metaphysical. Edelman shies away from such a term because he insists, rightly, that in any explanation of consciousness “principles of physics must be strictly obeyed and that the world defined by physics is causally closed. No spooky forces that contravene thermodynamics can be included” (page 114). But I think that James himself would have entirely accepted this qualification, and that what he meant by “metaphysical” is something else. A theory of consciousness can’t help being “metaphysical,” because it’s impossible to “translate” between first-person phenomenal sensation, and third-person, scientifically objective observation. The point, precisely, is to do “metaphysical” justice to first-person consciousness, without thereby positing its objective existence as a phenomenon in the world (which would mean believing in “spooky forces” like “spirit” or “mind energy” or something else extra-physical).
Edelman’s theory of consciousness is “metaphysical” in what I consider the good, Jamesian sense, because his way of finessing the difference between observable-from-outside neural states and inside-only conscious feeling is to reject both those theories that would give causal efficacy to consciousness and will and those theories that dismiss consciousness as “merely” epiphenomenal. In effect, Edelman is saying that consciousness is indeed epiphenomenal rather than actually causal, but that there is nothing “mere” about such epiphenomenality. This latter because consciousness is “entailed” by neural processes that are themselves causal (which could perhaps be read — though I am unsure that this is right — as a weak version of Spinoza’s mind/body parallelism).
So far I’ve left out what is perhaps the most important part of Edelman’s theory: the assertion that neural processes are massively “degenerate” (a better word, in terms of vocabularies that I am familiar with, would be “redundant”). (Edelman defines “degeneracy” as “the ability of different structures to carry out the same function or yield the same output”). This is something that does seem to be empirically valid (different neural pathways can result in the same emotion or memory or other conscious perception; if one particular brain system or sub-system breaks down, another one can ‘cover’ for it or adaptively take its place), and that is logically coherent with (and indeed necessitated by) the assumption of “neural Darwinism” (if mind states are the result of statistical selection among large populations of neurons, then there cannot be one and only one uniquely privileged pathway to generate a given result).
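A minimal illustration of “degeneracy” in this sense (my own example, not Edelman’s): two structurally different “pathways,” here just two different weight patterns, that nonetheless yield the same output for the same input, so that losing one leaves the function intact.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])             # a fixed input pattern

# Two structurally different "pathways" (weight patterns)...
pathway_a = np.array([3.0, 0.0, 1.0])     # relies on the 1st and 3rd inputs
pathway_b = np.array([0.0, 3.0, 0.0])     # relies only on the 2nd input

# ...that carry out the same function: both map this input to the same output,
# so knocking out one pathway leaves the output unchanged.
print(pathway_a @ x, pathway_b @ x)       # 6.0 6.0
```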
What’s crucial here is that, if we accept the “degeneracy”/redundancy of the brain operating by this sort of “selection,” then “much of cognitive science is ill-founded” (page 111): the brain does not operate algorithmically (as Daniel Dennett claims), or by a process of computation analogous to what goes on in digital computers. Thought is not a process of taking symbolic representations and performing calculations, or logical operations, upon them. There is no “language of thought” (page 105), of which actual language would merely be a “translation.”
Thus, though Edelman shows no signs of being aware of the anti-representationalist arguments in recent continental philosophy and “theory”, he comes to many of the same conclusions, in opposition to the reigning (in American psychology and computer science, at least) ideology of cognitivism. And he does this by being a better Darwinian than all those loudly and explicitly Darwinian “evolutionary psychologists” who are so willfully dismissive of neuroscience.

The Origin of Consciousness in the Breakdown of the Bicameral Mind

The Origin of Consciousness in the Breakdown of the Bicameral Mind, by Julian Jaynes, is one of those books, proposing a radical new thesis, that had an enormous impact when it was first published (1976), but has since fallen into the backwaters of intellectual fashion. Today it still has an ardent cult following, but otherwise it is not so much rejected as simply never taken seriously in the first place, and so it is almost totally ignored.
This neglect is somewhat unfortunate. While I don’t think there is any scientific “proof” for Jaynes’ argument, and while certain of his assertions are almost certainly wrong, his book remains intellectually provocative; it opens up some very important questions, even if we are not ready to follow its conclusions.
Basically, Jaynes argues that consciousness, as we understand it today, has only been possessed by human beings for the last four thousand years or so. (By “consciousness” he means, not the primary perceptual awareness that all mammals, and perhaps many other ‘lower’ organisms as well, seem to possess, but what I would prefer to call self-consciousness, or second-order consciousness: the ability to reflect upon oneself, to introspect, to narrate one’s existence). Jaynes proposes that, in the second millennium BC and before, human beings were not self-conscious, and did not reflect upon what they did; rather, people heard voices instructing them in what to do, and they obeyed these voices immediately and unreflectively. These voices were believed (to the extent that “belief” is a relevant category in such circumstances) to be the voices of gods; their neurological cause was probably language issuing from the right hemisphere of the brain, and experienced hallucinatorily, and obeyed, by the left hemisphere (which is where speech is localized today).
This is why Jaynes calls the archaic mind a non-conscious, “bicameral” one. Thought was linguistic, but it did not have any correlates in consciousness; people didn’t make decisions, but instead the decisions were made automatically, and conveyed by the voices. One half of the brain commanded the other, so that decision-making and action were entirely separate functions. Neither of these hemispheres was “conscious” in the modern sense.
It was only as the result of catastrophic events in the second millennium BC that these voices fell silent, and were replaced by a new invention, that which we now know as self-conscious, reflective thought.
Jaynes introduces his theory by making reference to the Iliad, in which there is almost no description of interiority and subjectivity, or of conscious decision-making; instead, all the characters act at the promptings of the gods, who give them commands that they obey without question. Jaynes suggests that we take these descriptions literally, that this was the way the mind worked for thousands of years of human history. After the opening section of the book, where he quite interestingly discusses a range of philosophical issues having to do with the nature of consciousness and its relation to language, Jaynes supports his argument almost entirely through an analysis of ancient texts and of archaeological discoveries.
Where to begin in discussing such a suggestive, even if overly simple and overly totalizing, thesis? First of all, Jaynes argues that language is a prerequisite for consciousness, rather than the (common-sensical) reverse. This seems to me to be unarguably true, if we mean reflexive, or second-order consciousness. His arguments for this thesis, coming out of the tradition of Anglo-American empirically-grounded psychology, are interesting precisely in their difference from deconstructionist, and other Continental philosophical, arguments to much the same effect. This is useful because Jaynes thereby is able to point to the (relative) primacy of language in the human mind, without getting lost in those rather silly skeptical paradoxes that the deconstructionists are partial to.
Second, I find incredibly valuable the way Jaynes presents his picture of the schizophrenic, pre-conscious “bicameral mind” as a mechanism of social control. The bicameral mind arises, according to Jaynes, in tandem with the development of agriculture and the creation of the first cities (i.e. the first stirrings of “civilization” in Mesopotamia, and perhaps also Egypt, the Indus River Valley, and the Yellow River Valley, at around 9000 BC). Its purpose is to ensure obedience and social harmony; it entails, and enables, the creation of vast, rigid, theocratic hierarchies, such as existed in ancient Sumeria and Egypt (and also, much later, in the Mayan cities of the Western Hemisphere, and in other civilizations around the world). This is the aspect of Jaynes that interested William Burroughs, with his investigations into language as a form of social control and as a virus infecting, even as it created, the human mind.
In describing the passage from bicamerality to self-consciousness, Jaynes is really proposing a genealogy of different regimes of language and subjectivity, in a manner that resonates with ideas proposed by Deleuze and Guattari at around the same time (see especially the chapter “On Several Regimes of Signs” in A Thousand Plateaus). For Jaynes as for Deleuze/Guattari (I assume that Jaynes was unacquainted with D&G’s work, and vice versa), a “despotic” regime is displaced and replaced by a passional, subjectifying one. (I need to be a bit careful here, because I don’t want to merely translate Jaynes’ terms and arguments into deleuzoguattarian ones. The specific interest of Jaynes’ book is how he defamiliarizes the bicameral mindset, shows how it cannot be reduced to the categories that we, subjective people, take for granted).
The latter parts of Jaynes’ book, where he gives massive evidence for his thesis, are somewhat disappointing; in part because the readings of the historical and literary record are so obviously tendentious, and in part because Jaynes seems content just to reiterate his big idea, rather than really exploring its potential ramifications and implications. He does have a short and interesting discussion about how so many aspects of our world today, from scientists’ search for a “theory of everything” to the worldwide fundamentalist backlash, can be seen as continuing responses to the collapse of the bicameral mind, which still casts its considerable shadow, thousands of years after it happened. But all this is sketchy in the extreme. I note that Jaynes never published (hence, probably, was unable to complete) a promised second volume, devoted to The Consequences of Consciousness, in the twenty years he lived after publishing The Origin of Consciousness in the Breakdown of the Bicameral Mind.
How “true” is Jaynes’ theory, however? Some of Jaynes’ speculations on neurobiology clearly need to be revised, in the light of our far greater knowledge of the subject today compared to 1976. And I don’t know enough about classical texts to pass any judgment on his readings of the Iliad or Babylonian cuneiform.
But on a larger scale, Jaynes’ theory is pretty much like those of psychoanalysis and evolutionary psychology, and like Terence McKenna’s speculations on the psychedelic origins of consciousness: all of these are stories that cannot be backed up or “proven” scientifically, but that also can’t be simply dismissed, because they refer to issues that themselves demand some sort of self-conscious narrativization, that cannot be resolved by positivist means alone. Empirical investigation can disconfirm particular theories, but it can never succeed in getting rid of our need for such unprovable, metaphorical “just-so stories.”
(The weakness of sociobiological, or evolutionary psychological explanations, in comparison to those of Freud, McKenna, or Jaynes, is that the evolutionary psychologists lay claim to a positivistic grounding that they do not really have, as well as that these theories are totally reductive and unimaginative to boot. Evolutionary psychology theorists generally cannot see beyond their own noses; they fail to realize how tautological they are, in their repetitions of the cliches of our own culture, especially in matters of sex and gender. For all their differences, Freud, McKenna, and Jaynes, at least, are trying to think beyond the narrow prejudices of their own cultural situations; for they are all profoundly aware, as Steven Pinker is not, that their own perspectives are culturally constrained).
As an unprovable but tantalizing “just-so” story of how consciousness came to be, then, Jaynes’ book is valuable precisely for its sense of the contingency of what we take most for granted, of the ways that very deep parts of our mentality are culturally specific and variable, rather than being inscribed “in the genes.” (Or more precisely, how we are genetically endowed, precisely, with such a wide and weird range of mental potentialities). Jaynes’ observations on the neural substrate of bicamerality, on the one hand, and subjective self-consciousness, on the other, suggest new and as yet unfollowed possibilities of research, even if his particular formulation proves to be (as it probably is) incorrect.
The biggest flaw in Jaynes’ scheme, I think, is his failure to consider in any adequate way what might have come before the great bicameral despotisms. Though he only looks at a very narrow sample of ancient history — that of the Middle East, Greece, and Egypt — he claims results that are universally valid. (He suggests, for instance, that similar events took place in China, though he says that he cannot pursue the investigation himself, since he does not know Chinese). But even if we accept that the bicameral model applies to China, and to the Maya, Aztec, and Inca empires, this says nothing about all the so-called “primitive” peoples around the world, who never experienced the bicameral despotic state. He is right to suggest that such peoples are by no means outside of history, and that today they are as fully self-conscious as people anywhere else: the idea of “noble savages,” untainted by contact with “civilization,” is nothing but a racist and imperialist myth. Still, Jaynes seems to assume that all the peoples of the earth went through a period of bicamerality, that the pre-bicameral mind is not a fully linguistic one, and that “hunter-gatherer groups” have either already “been a part of a bicameral theocracy” in the past, or else were “like other primates, being neither bicameral nor conscious,” until learning consciousness by contact with other groups. But this is obviously wrong; Jaynes wants to locate the origin of language as recently as 12,000 years ago, when it certainly has to be much earlier, before the ancestors of modern Homo sapiens spread out from Africa. A major expansion of Jaynes’ theory is therefore needed, one that would consider the mentality of gift societies (Mauss), stateless societies (Pierre Clastres), etc, societies that know nothing of (and might even actively “ward off”) bicameral despotism.

Bruno Latour

I’ve long felt a bit ambivalent about Bruno Latour, and I feel all the more that way after reading his book Pandora’s Hope. (I’ve previously read We Have Never Been Modern, plus a good number of essays).
I like the way Latour focuses on the details of actual scientific practice, and how he uses these details to argue for a complex set of mediations and links in the course of which humans are bound together with nonhumans – a model that he cogently argues is far preferable to the common one that simply confronts a linguistic statement, or a mental model, with a state of affairs in the world, and asks whether the statement representationally corresponds with, or accurately points to, the state of affairs. Latour is right to say that this dualistic, correspondence theory of truth (or its inversion, the deconstructionist abyss of language that cannot reach out beyond itself to the world) ignores the way that things like scientific theories, statements, and models are themselves actions or events or performances in the world. Latour is not the first thinker to resituate language in the world in this way, but he is the one who has applied it to the understanding of science, and specifically scientific practice.
Latour thus cuts the Gordian knot of the dispute between realism (‘the facts of science exist independently of us’) and constructionism (scientific entities are “socially constructed”). He says that the fallacy shared by both sides to this dispute is to think that “constructed” and “real” are opposites, when in fact they go in tandem: the more something is “constructed” (socially or otherwise) the realer it is, because the more it is interconnected with other things, the more it operates with and upon, and affects, other things, and so on. This seems to me exactly right.
(It’s also a point that is consonant with Ian Hacking’s arguments, in The Social Construction of What?, about the use of the phrase “social construction.” Hacking shows how many different meanings this phrase has; he suggests that it really functions as a marker of difference. We say that gender is “socially constructed” in order to argue against claims that it is entirely “in the genes”; we do not say that a bridge is “socially constructed,” because nobody argues that the Golden Gate Bridge somehow arose by itself).
Nonetheless, I am enough of a realist that I am made uneasy when Latour says, for instance, that yeast did not cause lactic acid fermentation until 1864, when Pasteur established this action in the laboratory. I agree that Pasteur’s experiments did not just reveal an always-existing truth, since those experiments mobilized the yeast and made it interact with human interests, both by establishing new scientific doctrine and by making the commercial exploitation of the fermentation process possible on a scale and in a manner that it was not before. In pragmatist terms, Pasteur’s experiments, and his theoretical extrapolation from those experiments, made it possible for us to predict and control the fermentation process, and the life history of yeast, for the first time.
But it still seems disingenuous to me for Latour to say that it was only after 1864 that the process took place, or (to put his point as precisely as possible) that it is only after 1864 that the process of fermentation by the action of yeast (rather than fermentation as a byproduct of organic decay, as was previously believed) can be said to have taken place before 1864. In one sense, Latour’s statement is a tautology; but I think that Latour is trying to pull a fast one, by using this tautology to insinuate a deeper meaning, according to which the change in the world that took place in 1864 affected something more than certain instrumental activities of human beings with yeast.
Latour says that he is simply including yeast as well as human beings in history, rather than seeing yeast as unchanging and ahistorical “in and of itself.” But this begs the question of how the actions of yeast in fact affected human beings well before Pasteur mobilized yeast into what Latour calls the “collective.”
Latour’s sleight-of-hand becomes a still more serious matter when he presents his grand view of science and politics. He wants to repeal what he calls the modern “settlement” that radically separated subject from object, as well as Truth from Opinion, Knowledge from Power, Right from Might. He cleverly suggests that the Platonic and Cartesian dictatorship of Reason shares common assumptions with the view of the Sophists, of Hobbes, and of Nietzsche, that would seek to deconstruct it. He suggests that both Socrates and his opponents, and more recently both the scientific rationalists and Nietzsche, both the positivists and Foucault, distrust the “people” or the “mob”, and disagree only on whether the violent imposition to rein in this “mob” should be that of a hypostasized Reason or that of a more naked Power.
It’s not that I would want to defend a renewed elitism against Latour’s populism here. But Latour idealizes what a fully engaged politics (as opposed to one governed from without by the forceful imposition of scientific reason) would actually be. He idealizes and sentimentalizes the civility and consensus of a “body politic” uninfected by the dictatorship of an abstract Reason. One can observe the intractability of many human disputes and political conflicts (having to do with such things as class and other forms of privilege, wealth, and prestige, or with the control of the regime of productivity and the distribution of whatever social surplus there may be) without believing, as Latour accuses defenders of rationalism from Socrates to Steven Weinberg of doing, that “scientific” objectivity is the one thing that saves humankind from descending into barbarity and a Hobbesian “war of all against all.” One can agree that the rage of modernist iconoclasm often produces the very dehumanizing phenomena that it claims to be waging war against, without sharing Latour’s piety towards “fetishes” and “icons.”
In making “modernism” and its “settlement” his enemy, Latour can’t help reproducing modernity’s own logic, in the form of an idealized depiction of that which preceded the modern. Although he rightly says that the unalienated “pre-modern” is nothing but a modernist fantasy, he himself reproduces the very same fantasy, in his picture of a world uninfected by modernism, as well as in his assertion that “we have never been modern,” that modernity has only given greater scope to nonmodern “mixtures” in practice, by refusing them admission into theory.
In short: we must add to Latour’s account the additional awareness that we have never not been modern, that we have never been free of modernist divisions and impositions.
(This is a more Derridean conclusion than I wanted to get to; I think the way out is to ask different sorts of questions, and indeed this is what Latour says we should do; but Latour doesn’t ask the right different questions. He doesn’t quite succeed in pointing the way to his self-confessed goal, a Whiteheadean account that does justice both to science and to other modes of human experience of the world).

Genetically Modified Crops

I don’t usually put links without my own extended commentary into this blog, but this time I couldn’t resist. Warren Ellis has a wonderful rant about fanatic protesters against genetically modified food.

Literary Darwinism?

In today’s Science section of The New York Times, there’s an article about so-called “Darwinian literary studies,” which purports to find confirmation of evolutionary psychology in works of literature. Female college students were given two passages from Sir Walter Scott, one describing one of Scott’s “dark heroes, rebellious and promiscuous,” and the other describing one of Scott’s “proper heroes, law-abiding and monogamous.” And lo and behold, it turned out that “the women preferred the proper heroes for long-term unions,” but said that the dark heroes “appealed to them most for short-term affairs.”
The psychologist who did this study says that it “demonstrates that the distinction between long-term and short-term mating strategies” postulated by evolutionary psychology “is instinctive.” The reasoning seems to be that only biological “instinct” could explain the response to a two-centuries-old text by women today.
Of course, this is nonsense. Nobody who knows anything about the history of popular culture, or for that matter who has ever gone to the movies or watched TV, will be the least bit surprised that the stereotypes that Scott drew upon, and contributed to, two hundred years ago are still stereotypes today. The cliches and commonplaces that the evolutionary psychologists draw upon when they make their theories are the same ones that Scott drew upon when he wrote his novels. The study proves nothing whatsoever, because it is completely tautological; it is just like Wittgenstein’s witticism about the man who bought several copies of the newspaper in order to assure himself that what it said was true.
Actually, I think that there is a use for Darwinism in literary studies. But it is not this drivel about literature confirming the hoariest cliches about innate instinct and male/female behavior. It is rather what Morse Peckham suggested years ago: that mutation due to “accident, or chance, or randomness” plays a crucial part in cultural innovation, just as it does in biological evolution. It is “the brain’s capacity to produce random responses,” Peckham says, that causes “the indetermination in human behavior of response to any given stimulus”; this indetermination, in turn, is why meanings can never be fixed once and for all (as the deconstructionists are always reminding us), why we have cultural variability and cultural change, and why no society succeeds in totally controlling the behavior of its members. Continual mutation, not a fixed, innate “human nature,” is the lesson that literary study can profitably extract from biology. And it is by drawing on these Darwinian lessons about mutation that Peckham anticipated most of what theorists like Derrida and Foucault said, only without the European metaphysical baggage.
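Peckham’s point about random responses can be given a similarly minimal toy sketch in Python; again, this is my own illustration, not anything taken from Peckham. A “convention” is copied from generation to generation with a small chance of a random error at each symbol, and even that small indetermination is enough to keep the transmitted form drifting indefinitely:

import random

# Toy illustration of cultural drift through random copying errors.
# Purely schematic: a "convention" is a string of symbols copied each generation.

symbols = "abcd"
convention = "aaaaaaaaaa"   # the initial, socially enforced form
mutation_rate = 0.02        # small chance of a "random response" per symbol

def copy_with_errors(form):
    return "".join(
        random.choice(symbols) if random.random() < mutation_rate else ch
        for ch in form
    )

for generation in range(1, 501):
    convention = copy_with_errors(convention)
    if generation % 100 == 0:
        print(generation, convention)

However faithfully each generation tries to copy the last, the accumulated random responses guarantee variability and change over enough generations; no fixed form of the convention is ever stabilized once and for all.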

What Genes Can’t Do

What Genes Can’t Do, by Lenny Moss, doesn’t quite deliver on its title’s promise of a thorough critique of genetic determinism. The book is much more limited in its scope than the title would suggest. But within its own boundaries, the book does argue cogently and make some important points. Moss is a philosopher with a background in cell biology; he’s able to go into detail both on the history of biology and on current work in the field…
