Insides

It’s very strange to imagine — let alone to actually see — the insides of one’s own body. Today I went to the hospital and had a flexible sigmoidoscopy: the bottom third of my colon was examined for polyps, or other signs of incipient cancer. (Nothing was found; I got a clean bill of health, at least as far as the lower third of my colon is concerned).

The procedure is done without sedation, and it didn’t hurt — it was barely noticeable. After I had cleansed myself with the requisite laxatives and enemas, the doctor inserted a small tube, with a light and a miniature video camera, up my rectum. I was lying on my side, and I could see the camera’s output on a video screen. The camera went up my insides for a distance of 60 centimeters. I saw the opening of the rectum, some minor hemorrhoids just inside, then a sort of glide through the twists and turns of my colon: it was a fleshly tunnel, mostly smooth, with networks or meshes of blood vessels visible just beneath the surface of the skin. At one point, a bit of excrement — which appeared somewhat greenish in this light — floated in the tunnel, but the doctor (I mean the device he was controlling) pushed it aside and continued inward. Finally things became a bit congested, at which point the instrument reversed and came back out. The whole thing was over in ten minutes.

Now maybe this is the sort of thing you (my readers) might rather not hear about. But it wasn’t grotesque, or even particularly scatological or sexual in how it felt. It was more just the odd sense of displacement, seeing an unfamiliar, indeed alien, landscape that yet exists just inside me. When we speak of “interiority”, we usually are referring to the mind, to the recesses of thought that other people can’t know, that even I myself can’t really know, but only vaguely feel and sense. And yet what I saw on that video monitor, although in a certain sense it isn’t me at all, but merely part of a hole that runs right through me — correction: not although, but precisely because it is a hole connected on both ends to the outside — was a deeper “interiority” than any to be found in the depths of my thought (or in the convolutions of my brain). We are living organisms, which means that we exist by separating the inside from the outside; but the Inside really is nothing other than the Outside, folded back upon itself to constitute the interiority that is “me.” (This is what Deleuze says, more or less). To see inside myself (with all the sexual, as well as mental and physiological, connotations of “inside”) is to sense both my precariousness, and the miraculous strangeness that I should exist at all. It’s to be displaced from myself, to realize that intimacy — including self-intimacy — is always with someone who remains a stranger.

Confidence Games

Mark C. Taylor’s Confidence Games: Money and Markets in a World Without Redemption is erudite, entertaining, and intellectually wide-ranging — and it has the virtue of dealing with a subject (money and markets) that rarely gets enough attention from people deeply into pomo theory. Why, then, did I find myself so dissatisfied with the book?

Taylor is a postmodern, deconstructionist theologian — if that makes any sense, and in fact when reading him it does — who has written extensively about questions of faith and belief in a world without a center or foundations. Here he writes about the relations between religion, art, and money — or, more philosophically, between theology, aesthetics, and economics. He starts with a consideration of William Gaddis’ underrated and underdiscussed novels The Recognitions and JR (the latter of which he rightly praises as one of the most crucial and prophetic reflections on late-20th-century American culture: in a book published in 1975, Gaddis pretty much captures the entire period from the deregulation and S&L scams of the Reagan 80s through the Enron fiasco of just a few years ago: nailing down both the crazy economic turbulence and fiscal scamming, and its influence on the larger culture). From Gaddis, Taylor moves on to the history of money, together with the history of philosophical reflections upon money. He’s especially good on the ways in which theological speculation gets transmuted into 18th and 19th century aesthetics, and on how both theological and aesthetic notions get subsumed into capitalistic visions of “the market.” In particular, he traces the Calvinist (as well as aestheticist) themes that stand behind Adam Smith’s vision of the “invisible hand” that supposedly ensures the proper functioning of the market.

The second half of Taylor’s book moves towards an account of how today’s “postmodern” economic system developed, in the wake of Nixon’s abandonment of the gold standard in 1971, the Fed’s conversion from Keynesianism to monetarism in 1979, and the general adoption of “neoliberal” economics throughout the world in the 1980s and 1990s. The result of these transformations is the dematerialization of money (since it is no longer tied to gold) and the replacement of a “real” economy by a “virtual” one, in which money becomes a series of ungrounded signs that only refer to one another. Money, in Taylor’s account, has always had something uncanny about it — because, as a general equivalent or medium of exchange, it is both inside and outside the circuits of the items (commodities) being exchanged; money is a liminal substance that grounds the possibility of fixed categories and values, but precisely for that reason, doesn’t itself quite fit into any category, or have any autonomous value. But with the (re-)adoption of free-market fundamentalism in the 1980s, together with the explosive technological changes of the late 20th century — the growth of telecommunications and of computing power that allow for global and entirely ‘fictive’ monetary flows — this all kicks into much higher gear: money becomes entirely “spectral.” Taylor parallels this economic mutation to similar experiences of ungroundedness, and of signs that do not refer to anything beyond themselves, in the postmodern architecture of Venturi and after, in the poststructuralist philosophy of Derrida (at least by Taylor’s somewhat simplistic interpretation of him), and more generally in all facets of our contemporary culture of sampling, appropriation, and simulation. (Though Taylor only really seems familiar with high art, which has its own peculiar relationship to money; he mentions the Guggenheim Museum opening a space in Las Vegas, but — thankfully perhaps — is silent on hiphop, television, or anything else that might be classified as “popular culture”).

I think that Taylor’s parallels are a bit too facile and glib, and underrate the complexity and paradoxicality of our culture of advertising and simulation — but that’s not really the core of my problem with the book. My real differences are — to use Taylor’s own preferred mode of expression — theological ones. I think that Taylor is far too idolatrous in his regard for “the market” and for money, which traditional religion has seen as Mammon, but which he recasts as a sort of Hermes Trismegistus or trickster figure (though he doesn’t directly use this metaphor), as well as a Christological mediator between the human and the divine. Taylor says, convincingly, that economics cannot be disentangled from religion, because any economic system ultimately requires faith — it is finally only faith that gives money its value. But I find Taylor’s faith to be troublingly misplaced: it is at the antipodes from any form of fundamentalism, but for this very reason oddly tends to coincide with it. In postmodern society, money is the Absolute, or the closest that we mortals can come to an Absolute. (Taylor complacently endorses the Hegelian dialectic of opposites, without any of the sense of irony that a contemporary Christianophile Hegelian like Zizek brings to the dialectic). Where fundamentalists seek security, grounding, and redemption, Taylor wants to affirm uncertainty and risk “in a world without redemption.” But this means that the turbulence and ungroundedness of the market make it the locus for a quasi-religious Nietzschean affirmation (“risk, uncertainty, and insecurity, after all, are pulses of life” — 331) which is ultimately not all that far from the Calvinist faith that everything is in the hands of the Lord.

Taylor at one point works through Marx’s account of the self-valorization of capital; for Taylor, “Marx implicitly draws on Kant’s aesthetics and Hegel’s philosophy” when he describes capital’s “self-renewing circular exchange” (109). That is to say, Marx’s account of capital logic has the same structure as Kant’s organically self-validating art object, or Hegel’s entire system. (Taylor makes much of Marx’s indebtedness to Hegel). What Taylor leaves out of his account, however, is the part where Marx talks about the appropriation of surplus value, which is to say what capital does in the world in order to generate and perpetuate this process of “self-valorization.” I suggest that this omission is symptomatic. In his history of economics, Taylor moves from Adam Smith to such mid-20th-century champions of laissez faire as Milton Friedman and F. A. Hayek; but he never mentions, for instance, Ricardo, who (like Marx after him) was interested in production and consumption, rather than just circulation.

Now, simply to say — as most orthodox Marxists would do — that Taylor ignores production, and the way that circulation is grounded in production, is a more “fundamentalist” move than I would wish to make. Taylor is right to call attention to the eerily ungrounded nature of contemporary finance. Stock market prices are largely disconnected from any underlying economic performance of the companies whose stocks are being traded; speculation on derivatives and other higher-order financial instruments, which have even less relation to actual economic activity, has largely displaced productive investment as the main “business” of financial markets today. But Taylor seems to celebrate this process as a refutation of Marx and Marxism (except to the extent that Marx himself unwittingly endorses the self-valorization of capital, by describing it in implicitly aesthetic and theological terms). Taylor tends to portray Marx as an old-school fundamentalist who is troubled by the way that money’s fluidity and “spectrality” undermine metaphysical identities and essences. But this is a very limited and blinkered (mis)reading of Marx. For Marx himself begins Capital with the notorious discussion of the immense abstracting power of commodities and money. And subsequently, Marx insists on the way that circuits of finance tend, in an advanced capitalist system, to float free of their “determinants” in use-value and labor. The autonomous “capital-logic” that Marx works out in Volumes 2 & 3 of Capital is much more true today than it ever was in Marx’s own time. Marx precisely explores the consequences of these developments without indulging in any “utopian-socialist” nostalgia for a time of primordial plenitude, before money matters chased us out of the Garden.

Let me try to put this in another way. The fact that postmodern financial speculation is (quite literally) ungrounded seems to mean, for Taylor, that it is therefore also free of any extraneous consequences or “collateral damage” (Taylor actually uses this phrase as the title of one section of the book, playing on the notion of “collateral” for loans but not considering any extra-financial effects of financial manipulations). Much of the latter part of Confidence Games is concerned with efforts by financiers and economists, in the 1980s and 1990s, to manage and minimize risk; and with their inability to actually do so. Taylor spends a lot of time, in particular, on the sorry story of Long-Term Capital Management (LTCM), the hedge fund that collapsed so spectacularly in 1998. After years of mega-profits, LTCM got called on its outrageously leveraged investments, found that it couldn’t repay any of its loans, and had to be bailed out to avoid a domino effect leading to worldwide financial collapse. In Taylor’s view, there’s a kind of moral lesson in this: LTCM wanted to make hefty profits without taking the concomitant risks; but eventually the risks caught up with them, in a dramatic movement of neo-Calvinist retribution, a divine balancing of the books. Taylor doesn’t really reflect on the fact that the “risks” weren’t really all that great for the financiers of LTCM themselves: they lost their paper fortunes, but they didn’t literally lose their shirts or get relegated to the poorhouse. Indeed their losses were largely covered, in order to protect everyone else, who would have suffered from the worldwide economic collapse that they almost triggered. The same holds, more recently, for Enron. Ken Lay got some sort of comeuppance when Enron went under, and (depending on the outcome of his trial) he may even end up having to serve (like Martha Stewart) some minimum-security jail time. But Lay will never be in the destitute position of all the people who lost their life savings and old-age pensions in the fiasco. Gaddis’ JR deals with the cycles of disruption and loss that are triggered by the ungrounded speculations at the center of the novel — but this is one aspect of the text Taylor never talks about.

Taylor sharply criticizes the founding assumptions of mainstream economists and financiers: the ideas that the market is “rational,” and that it tends toward “equilibrium.” And here Taylor is unquestionably right: these founding assumptions — which still pervade mainstream economics in the US and around the world — are indeed nonsensical, as well as noxious. It’s only under ideal, frictionless conditions, which almost never exist in actuality, that Smith’s “invisible hand” actually does operate to create “optimal” outcomes. Marginalist and neoclassical/neoliberal economics is probably the most mystified discipline in the academy today, wedded as it is to the pseudo-rigor of mathematical models borrowed from physics, and deployed in circumstances where none of the idealizations at the basis of physics actually obtain. It’s welcome to see Taylor take on the economists’ “dream of a rationally ordered world” (301), one every bit as out of touch with reality, and as harmful in its effects when people tried to bend the real world to conform to it, as Soviet communism ever was.

But alas — Taylor only dismisses the prevalent neoclassical version of the invisible hand, in order to welcome it back in another form. If the laws of economic equilibrium, borrowed by neoclassical economics from 19th-century physical dynamics, do not work, for Taylor this is because the economy is governed instead by the laws of complex systems, which he borrows from late-20th-century physics in the form of chaos and complexity theory. There is still an invisible hand in Taylor’s account: only now it works through phase transitions and strange attractors in far-from-equilibrium conditions. Taylor thus links the physics of complexity to the free-market theories of F. A. Hayek (Margaret Thatcher’s favorite thinker), for whom the “market” was a perfect information-processing mechanism that calculated optimal outcomes as no “central planning” agency could. According to Hayek’s way of thinking, since any attempt at human intervention in the functioning of the economy — any attempt to alleviate or mitigate circumstances — will necessarily have unintended and uncontrollable consequences, we do best to let the market take its course, with no remorse or regret for the vast amount of human suffering and misery that is created thereby.

Such sado-monetarist cruelty is clearly not Taylor’s intention, but it arises nevertheless from his revised version of the invisible hand, as well as from his determination to separate financial networks from their extra-financial effects. I’ll say it again: however much Taylor celebrates the way that everything is interconnected, and all systems are open, he still maintains a sort of methodological solipsism or blindness to external consequences. The fact that financial networks today (or any other sort of self-perpetuating system of nonreferential signs) are ungrounded self-affecting systems, produced in the unfolding of a “developmental process [that] neither is grounded in nor refers to anything beyond itself” (330) — this fact does not exempt these systems from having extra-systemic consequences: indeed, if anything, the system’s lack of “groundedness” or connection makes the extra-systemic effects all the more intense and virulent. To write off these effects as “coevolution,” or as the “perpetual restlessness” of desire, or as a wondrous Nietzschean affirmation of risk, is to be disingenuous at best.

There’s a larger question here, one that goes far beyond Taylor. When we think today of networks, or of chaotic systems, we think of patterns that are instantiated indifferently in the most heterogeneous sorts of matter. The same structures, the same movements, the same chaotic bifurcations and phase transitions, are supposedly at work in biological ecosystems, in the weather, and in the stock market. This is the common wisdom of the age — it certainly isn’t specific to Taylor — but it’s an assumption that I increasingly think needs to be criticized. The very fact that the same arguments from theories of chaos/complexity and “self-organization” can be cited with equal relevance by Citibank and by the alterglobalization movement, and can be used to justify both feral capitalism and communal anarchism, should give us pause. For one thing, I don’t think we know yet how well these scientific theories will hold up; they are drastic simplifications, and only time will tell how well they perform, how useful they are, in comparison to the drastic simplifications proposed by the science of, say, the nineteenth century. For another thing, we should suspect that the idea of the same pattern instantiated indifferently in various sorts of matter is just another extension — powerful in some ways, but severely limiting in others — of Western culture’s tendency to divide mind or meaning from matter, and to devalue the latter. For yet another, we should be very wary of drawing political and ethical consequences from scientific observation and theorization, for such consequence-drawing usually involves a great deal of arbitrariness, as it projects the scientific formulations far beyond the circumstances in which they are meaningful.

Attali’s Noise

Jacques Attali’s Noise: The Political Economy of Music made something of a stir when it was published roughly a quarter-century ago (it came out in France in 1977, and in English translation in 1985). Noise comes from a time when “theory” had greater ambitions than it does today; it’s an audacious, ambitious book, linking the production, performance, and consumption of music to fundamental questions of power and order in society. I read it for the first time in many years, in order to see how well it holds up in the 21st century.

Noise presents itself as a “universal history”: it presents a schema of four historical phases, which it claims are valid for all of human history and culture (or at least for European history and culture: Attali, like so many European thinkers, consigns everything that lies outside Europe and its Near Eastern antecedents to a vague and undifferentiated ‘primitive’ category, as if there were no differences worth noting among them, and nothing that any of these other cultures could offer that was different from the European lineage). The mania for “universal history” was strong among late-20th-century Parisian thinkers; both Deleuze & Guattari, and Baudrillard, offer such grand formulations. Though I doubt that any of these schemas are “true” — they leave out too much, oversimplify, reduce the number of actual structural orders — at their best (as, I would argue, in Deleuze & Guattari, in the “Savages, Barbarians, Civilized Men” section of Anti-Oedipus, and in the chapter “On Several Regimes of Signs” in A Thousand Plateaus) they are richly suggestive, and help us at least to trace the genealogy of what we take for granted in the present, and to see its contingency, and therefore the possibility of differing from it. Attali’s “universal history,” however, is much weaker than Deleuze and Guattari’s; it really just consists in shunting everything that is pre-capitalist, or simply non-capitalist, into a single category.

Still, Attali offers some valuable, or at least thought-provoking, insights. Music is the organization of sound; by channelling certain sounds in certain orders, it draws a distinction between sounds that are legitimate, and those that are not: the latter are relegated to the (negative) category of “noise.” Music, like other arts, is often idealized as the imposition of form upon chaos (Wallace Stevens’ “blessed rage for order”). Attali rightly insists that there’s a politics at work here: behind the idealization, there’s an act of exclusion. The history of music can be read as a series of battles for legitimation, disputes over what is acceptable as sound, and what is only “noise” (think of the rise of dissonance in European concert music in the 19th and early 20th centuries; or the way punk in the late 1970s, like many other movements before and since, affirmed “noise” against the gentility of mainstream pop and officially sanctioned rock; or why Public Enemy wanted to “Bring the Noise,” a gesture at once aesthetic and political).

Now, the imposition of order is always a kind of violence, albeit one that claims to put an end to violence. The State has a legal monopoly of violence, and this is what allows it to provide peace and security to its citizens. This is why, as Foucault put it, “the history which bears and determines us has the form of a war rather than that of a language: relations of power, not relations of meaning.” Attali draws an analogy — actually, more than an analogy, virtually an identity — between the imposition of order in society, and the imposition of sonic order that is music. Social order and musical order don’t just formally resemble one another; since music is inherently social and communal, music as an action (rather than a product), like Orpheus’ taming of the beasts, is itself part of the imposition of order, the suppression of violence by a monopolization of violence. Music excludes the violence of noise (unwanted sound) by violently imposing order upon sound. And music is addressed to everybody — it “interpellates” us into society. Music thus plays a central role in social order — which is why Plato, for instance, was so concerned with only allowing the ‘right’ sorts of music into his Republic; and why the Nazis paid so much attention to music (favoring Wagner and patriotic songs, and banning “degenerate” music like jazz).

Attali specifies this further by assimilating music to sacrifice, as the primordial religious origin of all social order. I find this a powerful and deeply suggestive insight, even though Attali understands the logic of sacrifice in the terms set forth by Rene Girard, rather than in the much richer and more ambiguous formulations of Georges Bataille. (To my mind, everything Girard says can be traced back to Bataille, but Girard only offers us a reductive, normalized, idealized, and overly pious version of Bataille. The impulsion to sacrifice, the use of the scapegoat as sacrificial substitution, the creation of community by mutual implication in the sacrifice, and so on — all these can only be understood in the context of Bataille’s notion of expenditure, and in relation to Maussian gift economies; only in this way can we see how sacrifice, in its religious and erotic, as well as political dimensions, doesn’t just rescue us from “mimetic rivalry,” but also institutes a whole set of unequal power relations).

In any case: music as a sacrificial practice, and more generally as a form of “community” (a word which I leave in quotes because I don’t want to forget its ambiguous, and often obnoxious, connotations), is central to the way that order exists in a given society. Music is not a mere part of what traditional Marxists called the “superstructure”; rather, it is directly one of the arenas in which the power struggles that shape and change the society take place. (These “power struggles” might be Marxist class warfare, or Foucauldian conflicts of power and resistance seeping up from below and interfering with one another, or indeed the more peaceful contentions, governed by a “social contract,” that are noted by liberal political theory). Attali argues that music is one of the foremost spheres in which the struggles, inventions, innovations, and mutations that determine the structure of society take place; and therefore that music is in a strong sense “prophetic,” in that its changes anticipate and forecast what happens in society as a whole.

All this is background, really; though music’s “Sacrificing” role is the first of Attali’s four historical phases. Attali’s real interest (and mine as well), and the subject of his three remaining historical phases, is what happens to music under capitalism. The 19th century concert hall is the center of the phase of “Representing.” The ritual function of music in “primitive” societies, and even in Europe up to feudalism and beyond, gets dissolved as a result of the growth of mercantile, and then industrial capitalism. Music is separated from everyday life; it becomes a specialized social function, with specialized producers and performers. The musician becomes a servant of the Court in 17th and 18th century Europe; by the 19th century, with the rise to power of the bourgeoisie after the French Revolution, the musician must become an entrepreneur. Music “become[s] institutionalized as a commodity,” and “acquire[s] an autonomous status and monetary value,” for the first time in human history (51). The musical emphasis on harmony in this period is strictly correlated, according to Attali, with an economic system based upon exchange, and the equilibrium that is supposed to result from processes of orderly economic exchange. Music and money both work, in the 19th century, according to a logic of representation. Money is the representation of physical goods, in the same way that the parliament, in representative democracy, is the representation of the populace. And the resolution of harmonic conflict in the course of 19th century compositions works alongside the resolution of conflicting desires through the (supposed) equilibrium of the “free market.” In the cases both of music and the market, sacrifice is repressed and disavowed, and replaced by what is both the representation of (social and musical) harmony, and the imposition of harmony through the process of representation itself. Playing on the multiple French meanings of the word “representation,” Attali includes in all this the formal “representation” (in English, more idiomatically, the “performance”) of music in the concert hall as the main process by means of which music is disseminated. The links Attali draws here are all quite clever, and much of it might even be true.

Finally, though, however important a role representation continues to play in the ideology of late-capitalist society, the twentieth century has effectively moved beyond it. For Attali, the crucial development is the invention of the phonograph, the radio, and other means of mechanical (and now, electronic) reproduction and dissemination: this is what brings music (and society) out of the stage of “Representing” and into one grounded instead in “Repeating.” Of course, Attali is scarcely the first theorist to point out how radically these technologies have changed the ways in which we experience music. Nor is he alone in noting how these changes — with musical recordings becoming primary, rather than their being merely reproductions of ‘real’ live performances — can be correlated with the hypercommodification of music. More originally, Attali comments on the “stockpiling” of recordings: in effect, once I buy a record or CD or file, I don’t really have to listen to the music contained therein: the essence of consumption lies in purchasing and collecting, not in “using” the music through actual listening. He also makes an ingenious parallel between the pre-programmed and managed production of “pop” music, and the instrumental rationality of musical avant-gardes (both the serialists of the 50s and the minimalists of the 70s). But all in all, “Repeating” is the weakest chapter of Noise, because for the most part Attali pretty much just echoes Adorno’s notorious critique of popular music. I’d argue — as I have implicitly suggested in previous posts — that the real problem with Adorno’s and Attali’s denunciations is that they content themselves with essentially lazy and obvious criticisms of commodity culture, while failing to plumb the commodity experience to its depths, refusing to push it to its most extreme consequences. The only way out is through. The way to defend popular music against the Frankfurt School critique — not that I think it even needs to be defended — is not by taking refuge in notions of “authenticity” in order to deny its commodity status, but rather to work out how the power of this music comes out of — rather than existing in spite of — its commodity status, how it works through the logic of repetition and commodification, and pushes this further than any capitalist apologetics would find comfortable.

Such an approach is not easy to articulate; I haven’t yet succeeded in doing so, and I can’t blame Attali for not successfully doing so either. “Composing,” the brief last chapter of Noise, at least attempts just such a reinvention — in a way that Frankfurt School thinkers like Adorno would never accept. Which is why I liked this final chapter, even though in certain respects it feels quite dated. Attali here reverses the gloomy vision of his “Repeating” chapter, drawing on music from the 1960s (free jazz, as well as the usual rock icons), in order to envision a new historical stage, a liberated one entirely beyond the commodity, when music is no longer a product, but a process that is engaged in by everyone. Attali doesn’t really explain how each person can become his/her own active composer/producer of music, rather than just a passive listener; but what’s brilliant about the argument, nonetheless, is that it takes off from a hyperbolic intensification of the position of the consumer of recorded music (instead of negating this consumer as a good Hegelian Marxist would do). As the consumption of music (and of images) becomes ever more privatized and solipsistic, Attali says, it mutates into a practice of freedom:

Pleasure tied to the self-directed gaze: Narcissus after Echo… the consumer, completing the mutation that began with the tape recorder and photography, will thus become a producer and will derive at least as much of his satisfaction from the manufacturing process itself as from the object he produces. He will institute the spectacle of himself as the supreme usage. (144)

Writing before the Walkman, let alone the iPod and the new digital tools that can cut, paste, and rearrange sounds with just the click of a mouse, Attali seems to anticipate (or to find in the music of his time, which itself had a power of anticipation) our current culture of sampling, remixing, and file-trading, as well as the solipsistic enjoyment of music that Simon Reynolds finds so creepy (“those ads for ipods creep me out, the idea of people looking outwardly normal and repressed and grey-faced on the subway but inside they’re freaking out and going bliss-crazy”). And if Attali writes about these (anticipated) developments with some of the naive utopianism that has been so irritating among more recent cyber-visionaries, he has the excuse both of the time in which he was writing AND the fact that his vision makes more sense — as a project for liberation, rather than as a description of what technology all by itself is alleged to accomplish — in the context of, and counterposed to, the previous chapter’s Adornoesque rant. Despite all his irritating generalizations and dubiously overstated claims, Attali may really have been on to something here. The problem, of course, is how to follow it up.


Such an approach is not easy to articulate; I haven’t yet succeeded in doing so, and I can’t blame Attali for not successfully doing so either. “Composing,” the brief last chapter of Noise, at least attempts just such a reinvention — in a way that Frankfurt School thinkers like Adorno would never accept. Which is why I liked this final chapter, even though in certain respects it feels quite dated. Attali here reverses the gloomy vision of his “Repeating” chapter, drawing on music from the 1960s (free jazz, as well as the usual rock icons), in order to envision a new historical stage, a liberated one entirely beyond the commodity, when music is no longer a product, but a process that is engaged in by everyone. Attali doesn’t really explain how each person can become his/her own active composer/producer of music, rather than just a passive listener; but what’s brilliant about the argument, nonetheless, is that it takes off from a hyperbolic intensification of the position of the consumer of recorded music (instead of negating this consumer as a good Hegelian Marxist would do). As the consumption of music (and of images) becomes ever more privatized and solipsistic, Attali says, it mutates into a practice of freedom:

Pleasure tied to the self-directed gaze: Narcissus after Echo… the consumer, completing the mutation that began with the tape recorder and photography, will thus become a producer and will derive at least as much of his satisfaction from the manufacturing process itself as from the object he produces. He will institute the spectacle of himself as the supreme usage. (144)

Writing before the Walkman, let alone the iPod and the new digital tools that can cut, paste, and rearrange sounds with just the click of a mouse, Attali seems to anticipate (or to find in the music of his time, which itself had a power of anticipation) our current culture of sampling, remixing, and file-trading, as well as the solipsistic enjoyment of music that Simon Reynolds finds so creepy (“those ads for ipods creep me out, the idea of people looking outwardly normal and repressed and grey-faced on the subway but inside they’re freaking out and going bliss-crazy”). And if Attali writes about these (anticipated) developments with some of the naive utopianism that has been so irritating among more recent cyber-visionaries, he has the excuse both of the time in which he was writing AND the fact that his vision makes more sense — as a project for liberation, rather than as a description of what technology all by itself is alleged to accomplish — in the context of, and counterposed to, the previous chapter’s Adornoesque rant. Despite all his irritating generalizations and dubiously overstated claims, Attali may really have been on to something here. The problem, of course, is how to follow it up.

SIPs

One of the more amusing features added to amazon.com recently is the inclusion, for many books, of SIPs: Statistically Improbable Phrases. As it is explained on the website:

Amazon.com’s Statistically Improbable Phrases, or “SIPs”, show you the interesting, distinctive, or unlikely phrases that occur in the text of books in Search Inside the Book. Our computers scan the text of all books in the Search Inside program. If they find a phrase that occurs a large number of times in a particular book relative to how many times it occurs across all Search Inside books, that phrase is a SIP in that book.
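
In case the mechanics are of interest: reduced to a sketch, what Amazon describes is just a relative-frequency comparison between one book and the whole scanned corpus. Here is a toy version in Python. (Amazon’s actual implementation, scoring, and smoothing are not public, so every detail below is only a guess at how such a measure might work.)

from collections import Counter
from itertools import islice

def ngrams(words, n=2):
    # Yield every n-word phrase from a list of words.
    return zip(*(islice(words, i, None) for i in range(n)))

def sip_scores(book_words, corpus_counts, corpus_total, n=2):
    # Score each phrase by how much more frequent it is in this one book
    # than across the whole reference corpus (higher = more "improbable").
    book_counts = Counter(ngrams(book_words, n))
    book_total = max(sum(book_counts.values()), 1)
    scores = {}
    for phrase, count in book_counts.items():
        book_freq = count / book_total
        # Add-one smoothing so phrases absent from the corpus do not divide by zero.
        corpus_freq = (corpus_counts.get(phrase, 0) + 1) / (corpus_total + 1)
        scores[phrase] = book_freq / corpus_freq
    return scores

# Toy example: "beautiful shirts" comes out on top because it recurs in this
# tiny "book" while never appearing in the (made-up) corpus counts.
book = "the beautiful shirts and the beautiful shirts of the rich".split()
corpus = Counter({("of", "the"): 500, ("the", "beautiful"): 50, ("and", "the"): 300})
ranked = sorted(sip_scores(book, corpus, 100000).items(), key=lambda kv: kv[1], reverse=True)
print(ranked[:3])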

Just now I was looking at the page for an academic essay anthology called The New Economic Criticism: Studies at the Interface of Literature and Economics, and among the SIPs I found the following:

gold humbug, rentier culture, ersatz economics, scriptural money, imperial grammar, rhetorical tetrad, symbolic money, critical economists, economic genre, constitutive metaphors, ethical economy, universal equivalent, metaphorical field, novel machine, doing economics, realistic writing, economic discourse, feminist economists, general equivalent, hot pressure, beautiful shirts

Maybe I should leave this list to speak for itself. I don’t find “general equivalent” or “doing economics” or even “feminist economists” to be all that surprising… but “beautiful shirts”?

Pop Music

The yearly Pop Music Conference at the Experience Music Project in Seattle takes place this weekend. I’ve gone to all the previous conferences, and they have been great, but unfortunately this year I am unable to attend, due to family circumstances. I was supposed to be giving a talk on the Kleptones, but I had to cancel.

The conference has always had a wide and open definition of “pop” — pretty much anything goes — but this doesn’t really address the question of what it might mean, in somewhat narrower terms, to talk of “pop” as a genre (alongside, and only partly overlapping with, genres like rock or heavy metal or alternative, or hip hop or crunk or grime or reggaeton). These days, invoking “pop” is inherently problematic: in some contexts, it sounds like a dated term from the 1960s; and in others, it bears a weight that certainly is not innocent, when it is invoked in relation to “rockism,” or when it is contrasted to music that is deemed more adventurous, more experimental, or more authentic.

Woebot raises the question with his usual sharpness and polemical verve in a thread on dissensus. I suppose it is a bit crass of me to respond with my thoughts here, instead of joining the dialogue there; but I need the space the blog affords me — rather than the rapid fire of post and response — to really work things out to my (at least semi-) satisfaction.

Anyway: Woebot doesn’t find the term “pop” to be either coherent or interesting; he works through several possible definitions, and finds them all to be lame, self-contradictory, and (to the extent that they do articulate any sort of identifiable tendency) worthy only of being resisted. It’s too vague, he says, to define “pop” as whatever music is in the charts, or to think that the Top Forty any given week somehow mirrors with precision what is happening in (American or British) society that same week. And it’s tired and unilluminating to trot out the old cliches of high culture vs. low. That doesn’t explain, Woebot says, what the positive appeal of “pop” — of defining “low” or “mass” culture in that populist way — might be, given so many other ways of working through the issue.

Which leaves the most polemically charged of Woebot’s possible definitions of “pop”: he suggests that it is just a marketing term:

When I discovered that by Pop music people meant “music for imaginary rather than real communities” I was depressed for about a month. That people could consume Grime as “Pop”, that they could do the pick’n’mix shake and vac ting and “consume” something oblivious to its source, well for me it just didn’t bear thinking about. That all music could be subjected to the whim of the consumer like this, that there were people out there for whom all music was essentially reducible to a quotient of it’s entertainment value (a mark out of ten, an “A” minus, a four star rating in their iPod ratings menu)…… sad innit. Each song becomes a unit, an equal unit, stripped of anything approaching life. How murderously void.

I think that there is a real issue here, an unavoidable one, since recorded music today really is on the leading edge of consumerist commodification. (A situation that is not really undermined by the nonetheless delicious irony that I, like millions of other people, choose on principle to download music for free as much as possible; I’ll spend hours of my time to find a song that I could order almost instantly from the Apple Music Store for 99 cents. This is not out of penny-pinching — since the time I waste tracking down the song is worth far more to me than 99 cents — but out of a kind of Kantian categorical-imperative sense that it is morally wrong to remunerate the record companies and the current copyright system).

Getting back to the main point: the fact is that music is one of the most social of all human activities (I risk this assertion despite the fact that all human activities are social, that ‘human’ and ‘social’ are virtually synonymous). Because music is so social and collective an activity, it is inevitably tied, in modern societies, to money and the commodity form (which capitalism makes into the primary, if not exclusive, conduits of sociality). Which paradoxically means, in turn, that music today is close to being the most reified and privatized of all human activities. I take myself as an example: a quintessential music consumer (even if I often don’t pay). I download music online, or order it over the web — I’m scarcely ever in one of those quaint old places formerly known as ‘record stores.’ I don’t listen to vinyl, or even very much to CDs: I rip whatever music I get in CD format, and listen to music almost exclusively over headphones, on my laptop or my iPod. Though I live in Detroit, a center of musical activity and production, I’ve never even gone to a live gig here, which means I’ve never listened to music here in the company of other people. What’s more, most of my favorite genres of the moment — grime, reggaeton, baile funk — are produced geographically far away from me, for audiences with whom I will probably never enter into contact (for reasons of race and class and age as well as geography). What’s more, I’ve ‘softened’ considerably since my twenties and early thirties, when I would never listen to music that was less grating than the Sex Pistols or Teenage Jesus and the Jerks, or less hardcore than Run/DMC, or less dissonant than Sonic Youth. Now I’m at the point where I listen to a lot of “pop”: my favorite songs of the moment include (alongside a bunch of heavy grime tracks) things like Amerie’s “One Thing” and Tweet’s “Turn Da Lights Off” and Tori Alamaze’s “Don’t Cha” and M.I.A.’s “Pull Up the People.”

I suppose this makes me into Woebot’s “Online Pop Straw-man”, listening to all sorts of cultural detritus indiscriminately while being ignorant of its particularities and its provenance, “cautious about aspiring to belong to subcultural groups (like, er, Grime) on the basis that he’s Middle Class, White and Old,” and ultimately only willing “to accept something less-threatening and fake in some compromised quasi-ironic manner. To give up on the real because it underlines the uncomfortable reality of one’s own situation.”
The very fact that I like M.I.A. so much pretty much convicts me of these charges. (“In fairness,” as Jerry Springer likes to say, Woebot never makes this point explicitly; but blissblogger — Simon Reynolds, I presume? — pretty much does, later on in the thread. Referring to the M.I.A. controversy, he complains about “the tone of sheer indignation voiced” by M.I.A.’s supporters responding to the criticisms of her: “how DARE you interfere with my pleasure, how dare you pose any impediment to my unproblematic enjoyment of this thing… that debate was so fierce because of a displacement involved… they weren’t defending M.I.A.’s right to be a dilettante-producer, they were defending their own right to be a dilettante-consumer… pop is invested in so intensely i think because it’s about the right to consume, and in this day age consumerism, that’s one of the few areas of power and agency anyone has”).

An anecdote: a couple of years ago, in a class I was teaching, a student gave a presentation on “underground hip hop,” and the dangers of its co-optation by the commercial mainstream. His definition of what made the music “underground” was pretty vague; I pressed him, and he ultimately came to the position that it had to be music that I (as an outsider, from an older generation) had never heard of, let alone actually heard. But when it came down to listing specific examples of what he considered “underground hip hop,” it all turned out to be stuff that I was familiar with, and even had on my iPod.

My point in recounting this story is not to boast of my extensive musical connoisseurship (which really isn’t all that extensive, anyway). But rather to suggest that the widespread dissemination (precisely via reification and commodification, enabled by the global communications networks of transnational capital) of all sorts of music (together with all sorts of other things, from sexual fetishes to images of celebrities) makes any sort of “alternative” or “underground” position untenable. Even if you accept (as I am pretty much inclined to) that NOTHING is ever invented by Capital, that creativity is ALWAYS from below, from outside, from “the streets”… and hence in the public sphere, in that very “society” whose existence Margaret Thatcher denied — still, at the very moment that creativity is first expressed, it has already been privatized, commodified, locked up as “intellectual property,” and sold by massive corporations to individualized/privatized consumers worldwide. It has already become solipsistic jouissance, or what blissblogger describes as “the absolute denial of the producer’s existence — the absolute blanking out of the actual material origins and conditions of existence of the pleasure-source you’re enjoying — something for nothing.”

To decry this situation — as blissblogger and woebot seem to do — and to suggest there is a more acceptable alternative to it, is really to contribute to the very myths (of authenticity, of “realness”, of plucky underground inventiveness at odds with mainstream pop) that support the situation of capitalist appropriation and bourgeois-consumption-as-private-jouissance in the first place. Which is why I don’t accept woebot’s maxim that “meaning is always dwindling in Pop, it’s never accreating in the way it does in the underground rhizomes.” Rhizomes aren’t underground anymore; it’s the whole Net, the whole so-called “market”, that is now a rhizome (or, more accurately, that is now rhizomatic). And movements of both accretion and diminution are pretty much going on everywhere.

Or again: blissblogger says, summarizing the situation: “everything that once exploded into public space, becomes interiorized, corralled, quarantined from the world, insulated from ever changing anything.” Here it’s that “once” that I’m suspicious of; the same way I’m suspicious when Guy Debord writes that “all that once was directly lived has become mere representation.” The point being, not that things are always the same, but that — in both blissblogger and the translation of Debord — the “once” has no historical applicability, for it is merely a back-projection from, and inversion of, our current circumstances. It’s a fictive negation of the oppressive circumstances of the present; it provides no path to freedom, no “line of escape,” for it is only a reflection and a symptom of the oppressive circumstances.

Which is why, though I don’t really think of myself as a devotee of “pop” — and in cultural politics terms I am not in the least a populist — I am also unable to join the anti-pop bandwagon. Brecht said somewhere that we shouldn’t start with the good old days, but with the bad new ones. I seriously think that the only way out is through, and that we have to find some way of working through the paradoxes of solipsistic, hedonistic consumerism, pushing them to their limit, rather than moralistically condemning them by refusing to listen to M.I.A. or go to Starbucks.

The Savage Girl

Alex Shakar’s The Savage Girl is a novel about advertising, marketing, and coolhunting. The landscape is allegorical (a purgatorial city built on the slopes of a live volcano), but the details of life are recognizably present-day American. I was less interested in the characters and plot than in the way the book (like much SF) works as a kind of social theory.

The world of The Savage Girl is dominated, not by scarcity and need, but by abundance, aesthetics, and artificially created desires. Thanks to consumer capitalism, human beings have passed from the realm of Necessity to the realm of Freedom. We stand on the verge of the “Light Age” — sometimes spelled the “Lite Age” — “a renaissance of self-creation,” when, thanks to the wonders of niche marketing, “we’ll be able to totally customize our life experience — our beliefs, our rituals, our tribes, our whole personal mythology — and we’ll choose everything that makes us who we are from a vast array of choices” (24). In such an Age, “beauty is the PR campaign of the human soul” (25), inspiring us to aspire to more and more. Virginia Postrel herself couldn’t have put it any better; only Shakar is dramatizing the ambiguities and ironies of what Postrel proclaims all too smugly and self-congratulatorily.

I just mentioned “ironies”; but Shakar suggests that this utopia of product differentiation has as its correlate a “postironic” consciousness. (All the enthusiastic theorizing in the novel is done by the various characters; which allows Shakar’s narrative voice, by contrast, to remain perfectly poker-faced and deadpan). This is something emerging on the far side of the pervasive, David Letterman-esque irony that informs advertising today. For “our culture has become so saturated with ironic doubt that it is beginning to doubt its own mode of doubting… Postironists create their own set of serviceable realities and live in them independently of any facets of the outside world that they choose to ignore… Practitioners of postironic consciousness blur the boundaries between irony and earnestness in ways we traditional ironists can scarcely understand, creating a state of consciousness wherein critical and uncritical responses are indistinguishable. Postirony seeks not to demystify, but to befuddle…” (140). This sounds a lot like the Bush White House, and its supporters in the “faith-based community.” But Shakar suggests that it is much more applicable, even for “reality-based” liberals, because it is in process of becoming the universal mode of being of the consumer. Postirony leads to “a mystical relationship with consumption.” The commodity is sublime. In a world without scarcity or need, it is only through the products we purchase that we can maintain a relationship with the Infinite.

Shakar’s other, related crucial idea is that of the paradessence (short for “paradoxical essence”). “Every product has this paradoxical essence. Two opposing desires that it can promise to satisfy simultaneously.” The paradessence is the “schismatic core” or “broken soul” of every consumer product. Thus coffee promises both “stimulation and relaxation”; ice cream connotes both “eroticism and innocence,” or (more psychoanalytically) both “semen and mother’s milk” (60-61). The paradessence is not a dialectical contradiction; its opposing terms do not interact, conflict, or produce some higher synthesis. Nothing changes or evolves. Rather, the paradessence is a matter of “having everything both ways and every way and getting everything [one] wants” (179). This is a promise that only the commodity can make; it’s a way of being that cannot be sustained in natural, ‘unalienated’ life, but only through the artificial paradise of consumerism. I don’t know how familiar Shakar is with Deleuze and Guattari; but his analysis runs parallel to theirs, when he has his marketing-guru character declare that the pure form of postirony and paradessence is literally schizophrenia (141).

The Savage Girl centers around an advertising campaign for a product that promises everything, precisely because it is literally nothing. This product is called “diet water”: “an artificial form of water… that passes through the body completely unabsorbed. It’s completely inert, completely harmless”, and has no effect whatsoever. It doesn’t actually quench thirst; but as a result, it also doesn’t add to the drinker’s weight, doesn’t make her feel bloated. If you still feel thirsty after a drink of diet water, all you have to do is “buy more.” The consumers “can drink all they want, guilt-free” (44).

Diet water is pure exchange value, image value, and sign value. It’s the perfect product for a world beyond scarcity, as beyond guilt: for it remains scrupulously apart from any use or need. The wildly successful advertising campaign for diet water simultaneously manipulates images of schizophrenic breakdown and primitivist innocence. The ads express the paradessence of diet water; more, they underline how diet water, the perfect commodity, is postironic paradessence personified.

There’s more, like the idea of trans-temporal marketing: marketers from the future have come back in time to colonize us, so that we purchase their not-yet-existent products; this consumer decision on our part will then cause those products to come into being, together with the controlling marketers themselves. But I won’t summarize the book’s concepts (or its plot) any further. For the most important thing about The Savage Girl is the way it situates us (the readers/consumers) in relation to the practices it depicts. For Shakar, there’s no outside to the world of commodity culture, no escape from the paradise of marketing that it portrays. There’s no external point from which to launch a critique, no way to make an ironic dismissal that isn’t already compromised.

And I think this is precisely right; the market society can’t be dismantled by stepping outside of its premises. Anti-commercial activists always come off sounding puritanical and moralistic; telling people to stop shopping is no way to build an oppositional political movement. We can only change things when we begin by affirming the whole extent of our own implication in the system we say we are trying to change. We get nowhere by criticizing capitalism for its abundance, or by accusing it of lacking ‘lack.’ If consumerist capitalism is an empty utopia, as I think it is, it’s only by exploiting and expanding its utopianism, rather than rejecting it, that we can hope to move it beyond its limits, and dislocate it from itself.

The Passion of the Christ

I have finally, belatedly, seen Mel Gibson’s The Passion of the Christ. Probably everything that can be said about this film, and about the media event of its release, has been said already. Nonetheless, I will try to sort out some of my reactions.

First of all, it is undeniably a powerful film. One can understand why the faithful flocked to see it. The Passion of the Christ owes this power almost exclusively to its unstinting display of tormented, suffering flesh. This display has ample precedents in Christian iconography — the lighting and cinematography owe a lot to hundreds of years of European paintings, many of which Gibson quite consciously called upon as models. But the sight of Jesus’ tortured body in this film has an affective power that cannot be reduced to iconographic references alone. Also, the duration of the body’s torment is crucial to the film, and this is something that can only be captured on film, not in a durationless medium like paint.

More of this in a moment. But there’s an overarching question to be answered first: Is The Passion of the Christ anti-Semitic? Unquestionably it is — but this is not as simple an issue as it might appear. The Jews (given much more “Semitic” features than Jesus and his disciples have) are depicted as monsters of depravity, whose hatred is not slaked even by the torture of Jesus in the very intense whipping scene; they want more suffering, more torture, even to the point of death (Caiaphas demands crucifixion because, he says, Jewish law does not have capital punishment — which is why he needs the Romans to do it). To the contrary, Pontius Pilate is depicted quite sympathetically; as are the other upper-class Romans. (The plebs, or Roman common soldiers, to the contrary, are shown as being as depraved as the Jews, whooping and hollering like drunken frat boys every time they inflict more suffering on Jesus). BUT… in all this, Gibson isn’t really singling out the Jews; he is pretty much just following what the Bible actually says. (There’s one scene where a Roman soldier, grabbing a man from the crowd and drafting him to help Jesus carry the Cross, calls him a dirty Jew, or something like that: an indication that Gibson is aware of the issue). In short, it’s the Gospels that really need to be convicted of anti-Semitism, much more than Gibson himself: though this is an issue that neither Jews nor Christians today are ever willing to face up to squarely. (Though it should be remembered, too, that Gibson quite deliberately stirred up controversy as to whether the film was anti-Semitic, in the months leading up to release, as a marketing ploy to increase anticipation for the film, and to rally the faithful behind him).

The homophobia of Gibson’s portrayal of Herod and his court should also be mentioned. Even as Herod refuses to condemn Jesus (saying that he is insane rather than a criminal), Gibson portrays him as a screaming queen (in the metaphorical sense in which this word is applied to gay men) lording over a court of screaming hysterics of both genders. Homophobia is nothing new for Gibson (there was a lot of it in Braveheart), but it’s worth noting here, if only because (as reported in today’s New York Times) the prospect of a gay pride rally in Jerusalem is the one thing that can bring the Orthodox head rabbis, the Christian Patriarchs, and the Mufti of Jerusalem together in partnership — they all got together to oppose it.

Still, the issue of villainy, or of who is responsible for Jesus’ death, is not really a central concern of The Passion of the Christ. Rather, the display of torture, and the obscene spectacle of Jesus’ flayed and exhausted flesh, is where the libidinal center of the movie lies. Comparisons of The Passion of the Christ to pornography are very much to the point. The film is in many ways quite literally and concertedly sadistic. The figure of Jesus can really only be compared to the Marquis de Sade’s Justine: a body whose innocence is directly correlated to her miraculous, infinite ability to bear and suffer pain: Justine cannot be killed throughout the course of Sade’s immense novel, because that would mean a limit to the libertines’ ability to torture her. As the novel goes on, the torments become ever more extreme, ever more Baroque: but no matter how far they go, Justine survives, and indeed retains consciousness, in order that she may receive and suffer still more pain. This is precisely the logic at work in Gibson’s film. It’s a moot question to ask whether this means that Sade is really a Christian in spite of himself, or whether it means that Gibson’s particular version of Christianity is sadistic: these two are just sides of the same coin. What is important is that Gibson’s film gets its emotional power almost exclusively from its depiction of the human body, the flesh, reduced to meat, reduced to pain, reduced to a spectacle, and yet still fully conscious and able to suffer more. Jesus’ actual death is weirdly anticlimactic; and the last scene of the film, the Resurrection, is almost laughably perfunctory. (In this way it’s almost the polar opposite of Dreyer’s Ordet, arguably the greatest Christian film ever made, which is all about resurrection and redemption). Jesus died for our sins — or more precisely, suffered for them — is where Gibson’s theology begins and ends.

I want to insist that, in specifically cinematic terms, sadism and not masochism is at work here. (This despite the fact that — in terms of film theory — I have committed myself in print, at great length, to supporting the masochistic models of spectatorial identification put forth by Gaylyn Studlar and Carol Clover, against the sadistic model proposed by Laura Mulvey). Masochism implies a pleasure in submission, an ambivalent giving-oneself-over to an all-powerful yet unreliable figure (usually female), and the endurance of an infinitude of postponement and delay. These characteristics may well describe Jesus’ relation to the Father in The Passion of the Christ; but they do not describe the viewer’s relation to Jesus. For the viewer, the film proposes the direct, visceral enjoyment — the Lacanians would call it the “obscene jouissance” — of the spectacle of agonizing, lacerated flesh.

That is to say, the film solicits the viewer to (quite literally) enjoy this spectacle — which is not quite the same thing as identifying with Jesus-as-victim. We can’t identify with Jesus — though we are supposed to emulate or imitate him — precisely because his torment is too extreme, too excessive, to be borne. (Gibson makes it clear that the two thieves who are crucified alongside Jesus do not suffer anywhere near as much as he does: they haven’t been beaten and flayed first, their bodies aren’t anywhere near as bloody, and their agonies are much shorter). Nor, of course, can we identify with Jesus’ tormentors — Gibson uses every trick in the Hollywood playbook to signify that these tormentors are despicable and hateful — despite the fact that Jesus prays to forgive them, “for they know not what they do.” Nonetheless, the film is set up so that we are gratified by Jesus’ torment: the more horrifying, the more explicit it is, the more the believer is justified in his/her faith, and the more the viewer — regardless of whether that viewer is empirically a believer — is filled with a kind of sublime convulsion. All we want is more, more, more: we find ourselves in the frenzy of a kind of negative ecstasy that is heightened even further, the more the horror is poured on, the more directly the obscenity is burned into our eyeballs, the more Jesus’ body convulses or collapses in exhaustion, the more the rivulets of blood stream from his flesh.

It little matters that we, the viewers, feel this jouissance in the form of horror and indignation, rather than with the grim self-satisfaction of Caiaphas and the other rabbis, or with the brute delight of the Roman legions. It’s still something that we directly revel in, as it takes us outside ourselves, beyond ourselves. And I insist on this “we”, rather than saying “I”; I can think of no film, besides Triumph of the Will or Battleship Potemkin, that so powerfully and emphatically addresses its audience as a collective, rather than as a mere collection of isolated selves.

If this were all that The Passion of the Christ did, I would have to say it was a great work of art, however unsavory — and however unacceptable to most believers — its astonishing sadistic jubilation might be. But unfortunately, it is not the whole story. There’s a whole apparatus that surrounds the sadistic spectacle: and that is where the problem really lies. The torture of Jesus is intercut with lengthy reaction shots, depicting the empathetic sadness of the Virgin Mother, of Mary Magdalene, of the Apostles, and even of some mere onlookers who distinguish themselves from the ugly Jewish mob. The torture is interrupted with flashbacks to the Sermon on the Mount, to “let him who is without sin cast the first stone,” to the Last Supper, even to a scene of Jesus as a young child slipping and falling, and being comforted by his mother. The torture is supplemented with scenes (both in the present and in these flashbacks) in absolutely dreadful slo-mo. And the whole is accompanied by an overbearing soundtrack (as insistent and bombastic as the ones John Williams provides for Spielberg) of sanctimonious sacramental music. All these aspects of the film are incredibly lame — they manifest the continuing presence of “Hollywood” at its stupidest, laziest, and most cliched — and so overdone that you cannot ignore them.

The effect of this weighty apparatus is to muffle the impact of the sadistic spectacle, to frame it and distance it in a way that makes it “socially acceptable.” This apparatus disavows the jouissance at the film’s core, allowing it to wend its way into the hearts and minds of the spectators, while at the same time reassuring us that we aren’t really enjoying something so cruel and barbaric. Now, of course, Gibson never could have made the film — and Christians would never have flocked to see it — without this elaborate scaffolding of disavowal. But that is precisely what is so insidious about it. What I am calling the film’s superstructure, or surrounding Hollywood apparatus, is what allows us, the viewers, to walk away from the film with a good conscience. And this normalization by way of good conscience is the one substantial way in which Gibson’s art does differ from that of the Marquis de Sade. Gibson restores, as Sade does not, the veneer of civilization; he gives us the sadistic jouissance, but then he lets us off the hook.

One might make a Christian argument that Gibson’s capital sin as a filmmaker is precisely to forget original sin, to forget that each one of us — every human being — is guilty of Jesus’ death. Since I’m not a Christian, I will not follow up such a line of argument. I will say, though, that Gibson’s maneuver is exactly the one that allows people to support violence and torture — at the limit to become killers and torturers themselves — in “good faith.” The combination of sadistic jouissance and self-exculpating distance is what allows us to approve of foul means because they are in a good cause, or for a valuable ideal. And this is where the film does make contact with the “culture wars” and political struggles taking place in America today. It is what allows people (like President Bush) to mourn Terri Schiavo as a martyr, and to champion the rights of 12-week fetuses, while at the same time gleefully applying capital punishment to scores of inmates, and defending the torture in Abu Ghraib on the (inconsistent) grounds that it was either harmless “blowing off steam,” or a grim necessity in order to win the “war on terror.”

What it finally comes down to, I think, is a kind of exceptionalism. The word is often used to describe the United States of America, allegedly radically different from any other society on Earth, and by virtue of that justified in exempting itself from the obligations and mutual agreements that bind all other nations and societies. But I am thinking of “exceptionalism” in a related, but slightly different, sense. The argument of The Passion of the Christ is finally that Jesus’ Passion is greater than, qualitatively different from, and incommensurate with, any other inflictions of torture and pain that have ever occurred in the course of human history. And this incommensurability is what authorizes Christians to see themselves as uniquely victimized and persecuted, no matter how much actual power they have, and therefore authorizes them to perform (and indeed to institutionalize) actions that they would not allow to anyone else.

Lest I be accused of anti-Christian bigotry here, let me note that the same phenomenon runs rampant among my own people. Jewish identity today is largely built around the memory of the Holocaust, and on the idea that the Holocaust is unique, greater than and absolutely incommensurate with any other incidents of massacre, slaughter, genocide, enslavement, etc., in all of human history. And this in turn provides an alibi for Jewish (anti-black) racism in the United States, as well as for Israel’s mistreatment of the Palestinians. We’ve suffered more than they have, the argument runs; with the implicit (but rarely stated outright) corollary that therefore we are justified in what we do to them. This kind of thinking, however much it arises out of high ethical principles — in the case of both the Jews and the Christians — can only lead to extending the cycle of pain and oppression.

M.I.A.

Over the last year, I’ve probably been listening with more pleasure to M.I.A. than to any other musical artist. I first heard her first single, Galang (iTunes), last summer, when I got it off an mp3 blog (I don’t remember which). I had no idea what it was, or who she was, but I immediately fell for it: there was something about the upbeat yet aggressive girl-group-y vocals, the strange lyrics, plus the spare, underproduced beats… and then there was that chorus that finally came in, right at the end of the song, like a gleeful, swooping affirmation.

Gradually, I learned more about M.I.A., and heard more of her songs, as they dribbled out over the Net. She’s a Tamil Sri Lankan, now a Londoner, having come to the UK with her mother when she was 11, as a political refugee (her father is apparently involved with the Tamil Tigers, which has been mounting a bloody rebellion against the Sinhalese Sri Lankan government for years). Though a musical newcomer, she is apparently well-connected, and not raw from the streets (as almost nobody ever is, despite the frequent hype): art school, visual arts recognition, former housemate of the lead guitarist for Elastica, etc.

M.I.A.’s album Arular (iTunes) is finally out, after months of delays, rumors, net hype and net sniping (of which more below), and it’s simply great. The music is pretty much just primitive/dirty/analog synthesizer riffs, plus a bunch of samples (Dr. Buzzard’s Original Savannah Band, for one!), with vocals rapped, chanted, sung, and everything in between, no voice besides M.I.A.’s (though it is often multitracked). The beats are derived from hiphop and UK garage, and especially from such up-to-the-minute genres as Grime and BaileFunk. But M.I.A. doesn’t really sound like any of her sources: and it’s as important as it is difficult to explain precisely why.

There’s a certain sense in M.I.A.’s music that all her sources (various genres, or, more precisely, various funky beats) have been promiscuously mixed, and passed through a blender, and this is what came out. But such a metaphor implies a certain blandness, or homogenization, and nothing could be further from the truth. Everything in Arular is sharply etched and singular. The beats crackle and jump, and the energy level is high. There’s a lot of space to this music; it’s the diametrical opposite of a wall of sound. And M.I.A.’s vocals reverberate through the space, suggesting a kind of ongoing expansion, as if this were music streaming outward from some primordial Big Bang. M.I.A.’s rhythmic sources, particularly Grime and BaileFunk, are heavy, grounded, and immersive (even though BaileFunk is quite minimal, often little more than a bass line accompanying a rap); but M.I.A. reconfigures their beats as being light and expansive/centrifugal. That is to say, M.I.A.’s music is POP — which Grime, BaileFunk, and the heavier sorts of HipHop certainly are not. And its Pop quality is precisely what I love about it. Arular is irresistibly cheerful and breezy, without being syrupy; direct and simple without being simple-minded; girl-centered but not girly; extroverted, and more interested in making bodies move than in expressing emotions or psychological states. M.I.A.’s lyrics are loopy and scattershot: boasts, taunts, political slogans, military and video-game metaphors, made-up slang and fake advertising jingles, all mixed up promiscuously. Altogether joyous and affirmative music.

(I should add as a footnote, though, that my definition of Pop isn’t everybody’s — despite the fact that the only reasonable definition of Pop should include that it appeals to everybody. If the world shared my sense of what’s Pop, Basement Jaxx would be the most popular and best-selling band in the world. To judge by the response on Metafilter, M.I.A. is way too esoteric for the “average” listener, though she is scorned by the purists for being way too pop).

(I should also add a note about the anti-M.I.A. backlash: extreme distaste for her and her music has been expressed in the blogosphere by music critics I generally respect, like Simon Reynolds (whose blog has a pretty comprehensive set of links to the controversy) and woebot (can’t verify the link right now, but I think it’s this). The line seems to be that M.I.A. is a vapid middle-class rip-off artist, stealing the beats from authentic music-from-below like Grime and BaileFunk, making them safely bland and non-abrasive and mainstream, turning harsh, abrasive sounds into pop, in other words. Like white people stealing black people’s music, even though M.I.A. is herself a woman of color. She’s also accused of being either irresponsible or a poseur because of the political sloganeering in her lyrics. I’m sorry, but I really can’t see anything in these criticisms but a moralistic, holier-than-thou, knee-jerk-anti-pop purism. I love the sounds of Grime and BaileFunk, even though obviously I can’t relate to these musics and their communities in any other way than as a distant and privileged outsider; and I don’t know what sort of relationship M.I.A. has to them. (She’s a Londoner, but not part of the Grime scene). But in this case, I don’t see that M.I.A.’s “appropriation” has anything in common with Elvis or the Stones doing r’n’b, let alone with something like Beck’s smarmy simulation/putdown of black music on Midnite Vultures. She’s transformed the beats by making them Pop, in a way that is irreducible either to slavish imitation or to one-upmanship or to making-bland-and-safe. And that’s really all I can say.)

The Big Red One

I saw Samuel Fuller’s The Big Red One on the very first day of its initial theatrical release in 1980. My auteurist passion for Fuller has never wavered, but I did not see The Big Red One again until today, a quarter-century after my initial viewing, when I finally got to see the reconstructed version, released last year with 45 minutes or so of additional footage.

There were only two scenes that I remembered from my initial viewing. There’s the moment when a soldier has been hit by a mine, and Lee Marvin’s crusty sergeant picks up a bloody mass of flesh and hurls it away, telling the soldier that this is one of his balls, and he should feel lucky that nature gave him two. And there’s the near-climax when Marvin’s unit liberates a German concentration camp, and a soldier opens a door and stares numbly into one of the ovens (we aren’t shown much of the horror, but mostly just this sublimely inexpressive reaction shot).

The Big Red One is an utterly amazing film, though it isn’t necessarily even Fuller’s best war movie. (That would probably be The Steel Helmet; Merrill’s Marauders is also first-rate). But as a World War II epic, it clearly transcends most of the genre — both its many predecessors, and such subsequent films as Spielberg’s meretricious Saving Private Ryan, and even Terrence Malick’s sublime The Thin Red Line. Lee Marvin is great — his world-weariness even exceeds his toughness — and the rest of the ensemble cast is convincingly grim. The film says a lot about The Horrors of War — at the end, the narrator tells us that the only glory in warfare is survival.

The Big Red One, like many of Fuller’s films, combines often corny dialogue, amazing camerawork, and over-the-top narrative audacity. The first half of the film is dominated by gripping battle scenes, alternating between tight close-ups and chaotic (but actually finely controlled) long shots. These scenes are grueling, but somewhat distanced by Fuller’s adherence to familiar genre conventions. (It was evidently Spielberg’s ambition in the opening Omaha Beach sequence of Ryan to surpass Fuller, which I guess he does in technical terms, and also in intensity by dint of sheer relentlessness, but Fuller still seems to me superior in terms of affective resonance).

But perhaps “adherence to familiar genre conventions” is not quite right. Fuller blows up genre conventions to monstrous proportions, and makes explicit what the genre usually keeps as subtext. Thus in an early scene, during an amphibious landing, the soldiers protect their rifles from the water by covering them with condoms. Homoeroticism is always close to the surface, and nearly every verbal reference to sex, or narrative suggestion of the soldiers possibly being able to have sex, is followed almost instantly by an unexpected attack, so that battle is figured repeatedly as coitus interruptus.

As the film progresses, things become increasingly bizarre, surreal, and absurdist. Straight battle sequences give way to insane, floridly operatic scenarios: the GIs must help a boy bury his mother, whose stinking corpse is being donkey-carted through the Sicilian countryside; the Germans stage an elaborate ambush by pretending to be already dead, in order to lure the US soldiers off guard, but the Americans kill them anyway; a French woman whose husband has just been killed gives birth inside a tank (the medic puts condoms on all his fingers in lieu of sanitary gloves); an elaborate infiltration/shoot-out takes place in an insane asylum. There are also spooky scenes like a gun battle in the forest, with the fog so thick that nobody can see whom they are shooting at, or who is shooting at them.

Fuller famously expressed scorn for the idea that a war movie could ever be “realistic.” He said that the only realistic war movie would be one in which a machine gun behind the screen would fire directly at the audience. (It’s not surprising, in Fuller’s terms, that Spielberg combines a claim to depict war realistically with an uncritical recapitulation of all the cliches about heroism, etc., that Fuller is rather concerned to demystify). So The Big Red One does not strive for realism; rather, it suggests precisely that war stands so far outside the parameters of everyday experience, and of livability, that it can only be represented as being profoundly “unrealistic.” It cannot, and does not, make normative sense: and its absurdity is something that Fuller’s soldiers respond to with little more than a stoic shrug of the shoulders. The film is littered with corpses, and Marvin walks among them with a grim refusal, or failure, to react. He repeats the mantra that killing is different from murder: we kill the enemy just as we kill animals. But his conscience is tormented by the repeated scenario of killing an enemy after the armistice, which makes it murder after all.

The result is a film of powerfully skewed affect. You feel numbness rather than horror, but this numbness is itself highly charged (if that isn’t too outrageous an oxymoron). The film creates a kind of schizophrenic derealization: an estrangement-effect that paralyzes the intellect instead of energizing it. The result is a kind of stunned disengagement, which is also on a meta-level a kind of positive engagement, only with an impossible, strictly unthinkable, situation. This is, I think, the anti-fascist way of “aestheticizing” war, a phenomenon that I hope never to encounter outside of the movies.

Collateral

Michael Mann’s Collateral is a film of many small virtues, notably its modesty. For a Tom Cruise vehicle, it’s surprisingly free of affectation. Cruise’s own performance as the heavy is quite disciplined — despite the character’s built-in potential for over-the-top hamminess. Cruise also deserves praise for making room for Jamie Foxx’s fine turn as the reluctant, didn’t-know-he-had-it-in-him hero. (If it had been up to me, Foxx would have won the Best Supporting Actor Oscar in addition to the Best Actor one he did win).

Michael Mann is content (like Clint Eastwood) to work within genre formulas, rather than hyperbolizing and hybridizing them as Tarantino does. Mann turns the familiarity of the form to his advantage by basically letting the plot take care of itself, the better to focus on character and on character interactions. This includes revealing facets of the characters that are unknown to themselves as well as to others; but it also includes impersonation and fabulation, the putting on of masks, the becoming somebody utterly different from oneself. The ostensibly realistic character development of a film like Collateral is also a self-reflexive meditation upon acting. (Foxx’s taxi driver constantly has to figure out what he can and cannot get away with, faced with Cruise’s killer for hire; and then, at one point, he is even compelled to impersonate Cruise’s character itself). The banter between Cruise and Foxx becomes sort of philosophical, as it reflects on the existential and ontological dimensions of the characters’ roles and actions. And it’s precisely because of the unpretentious genre framework of the film that Mann, Cruise, and Foxx are able to get away with this.

Collateral is also distinguished by Mann’s visual poetry. He’s always been a master of depicting urban landscapes, usually glided through by car: this goes back to Thief, his first major feature, as well as, of course, to Miami Vice. Here, nocturnal Los Angeles is ghostly and beautiful, by turns open and closed, free and deadly. Mann’s Los Angeles is a postmodern landscape of lateral motion, anonymous architecture, middles without beginnings or ends, hubs of intense activity where everyone is in your face (the hospital, the disco) surrounded by vast spaces that are never inhabited but only moved through at speed by drivers invisible to one another from within the protected cocoons of their cars. Mann’s LA, like Johnnie To’s Hong Kong, is one of those phantasmic, yet all-too-real, future (postmodern) spaces that are altering our very notion of landscape, changing our sense of what it means to inhabit a space.
