A Note on Evil

My comment in the previous post on how voting for McCain is evil drew a lot of negative response, both in the comments here and in those on Jodi’s blog. This led to Jodi’s own explicit comments on evil in politics, to which, I think, I need to add my own. Like Jodi, though perhaps for different reasons, I am not in general prone to use moral categories to address political issues. I think that the leap from the political to the moral register often leads to the effacement of contextual complexities, through the simplistic imposition of absolute, transcendent modes of judgment. In Deleuze’s terms, the appeal to moral categories is a way of evading the difficult work of developing immanent perspectives and immanent criteria, by simply imposing judgment from outside. It’s a policing action, short-circuiting both political economy and aesthetics.

Nonetheless, there are times when such a judgment seems necessary. At the risk of being excessively pedantic, I want to point out that my use of the term “evil” in the previous posting was quite precise in its reference to Kant — rather than just generally using it as a means of rhetorical posturing. In particular, I was referring to Kant’s essay “An Old Question Raised Again: Is the Human Race Constantly Progressing?”, which forms one part of the late (post-Critical) book The Conflict of the Faculties. I think that this essay deserves a contemporary rethinking and “updating” — in much the same spirit in which Foucault rethought and “updated” Kant’s essay “What is Enlightenment?”. Foucault rejects the way that, in the hands of Habermas and others, Kant’s Enlightenment principles have become the basis for what Foucault “like[s] to call the ‘blackmail’ of the Enlightenment.” Foucault says that it is ridiculous to demand “that one has to be ‘for’ or ‘against’ the Enlightenment.” For “the Enlightenment is an event, or a set of events and complex historical processes,” rather than a permanent set of values to be identified with “rationality” or “humanism” tout court. Indeed, for Foucault it is precisely in refusing this for-or-against “blackmail” that one can most truly remain faithful to the Kantian task of a continued “historico-critical investigation” of our own assumptions and presuppositions, including precisely and especially the ones that seem to us to be most self-evidently “rational” and “humanistic.”

With regard to “An Old Question Raised Again,” similarly, we might do well to rethink Kant’s interrogation of the possibility of “progress,” precisely because we now find ourselves in a world where nobody can believe any longer in “progress” in the sense that Kant meant it. Lyotard wrote in the 1980s that nobody could believe in “grand narratives” (like the Enlightenment and Marxist one of progressive human emancipation) any longer; Francis Fukuyama wrote in the 1990s that the perpetuity of neoliberal capitalism was the only “end of history” that we could ever hope to attain. Today, in 2008, we are if anything even more cynical, as years of booms and busts in the market — with the biggest bust of all currently looming over us — have all the more firmly established capital accumulation, with its concomitant technological improvements, as the only form of “progress” that we can at all believe in.

But it is precisely in this context that Kant’s essay speaks to us with a new relevance. “An Old Question Raised Again” makes the point that there is no empirical evidence whatsoever to maintain the proposition that the human race is progressing — by which Kant means morally progressing, to a state of emancipation instead of slavery, mutual respect (treating all human beings as ends, rather than merely as means) instead of subordination and hierarchy, and cosmopolitan peace instead of strife and war. (In other words, Kant is implicitly referring to the three watchwords of the French Revolution — Liberty, Equality, Fraternity — though we might well want to replace the last one with “cosmopolitanism,” to avoid the gendered connotations of “fraternity”). There is no empirical way to assert that humanity is progressing in these terms, rather than regressing or merely remaining at the same point. (It is worth maintaining this Kantian point against all those fatuous attempts to claim that the USA is benevolently improving the lot of the rest of the world, or somehow standing up for “freedom” and “democracy,” when in fact it is exporting the imperious demands of neoliberal capital, whether by outright war or by other forms of influence or coercion, to other parts of the world).

However — and this is the real crux of Kant’s argument — although there is no empirical evidence in favor of the proposition that “progress” has taken place, there is a reason, or an empirical ground, for us to believe in progress, to hope for it, and even to work for it — rejecting the cynicism that tells us that any such hope or belief is deluded or “utopian” (this latter word is most often used pejoratively, in the form of the claim that any attempt to make human life better, such as all the efforts of the Left in the 19th and 20th centuries, inevitably has “unintended consequences” that end up making things worse). This ground is the occurrence of certain events — for Kant, the French Revolution — whose sheer occurrence, in itself, however badly these events miscarried subsequently, “demonstrates a character of the human race at large and all at once… a moral character of humanity, at least in its predisposition, a character which not only permits people to hope for progress toward the better, but is already itself progress in so far as its capacity is sufficient for the present.” Humanity hasn’t actually gotten any better, but its active ability to imagine and project betterment, on a social and cosmopolitan scale, is itself evidence that a “predisposition” to betterment does in fact exist.

Now, I left out a couple of phrases in the citation above; the entire sentence actually reads: “Owing to its universality, this mode of thinking demonstrates a character of the human race at large and all at once; owing to its disinterestedness, a moral character of humanity, at least in its predisposition, a character which not only permits people to hope for progress toward the better, but is already itself progress in so far as its capacity is sufficient for the present.” The two key terms here are universality and disinterestedness. Kant is not merely praising enthusiasm and fervor. He is almost oppressively aware that enthusiasm and fervor guarantee nothing, and that they have propelled many of the worst happenings and the worst movements in human history — something that is all the more evident today, after the horrors of the twentieth century. Nothing that is narrowly drawn, chauvinistic, nationalistic, etc., can stand as evidence for a predisposition towards betterment.

But beyond that: Kant is not saying that the French Revolution in itself is the evidence of a human predisposition to betterment. He is saying, rather, that the “universal yet disinterested sympathy” that “spectators” from afar felt for the French Revolution is such evidence. Our “moral predisposition” for betterment is revealed in the way that “all spectators (who are not engaged in this game themselves)” feel a “sympathy,” or “a wishful participation that borders closely on enthusiasm,” for the distant revolutionary events of which they are the witnesses. Such sympathy-from-afar can be “dangerous,” Kant warns us; but it is genuine evidence for the potentiality or “predisposition” toward improvement of the human condition — at least to the extent that it is “universal” (rather than being partial, chauvinistic, or favoring one “nation” or “race” against another — as fascist enthusiasm always is), and that it is “disinterested” (not motivated by any expectation of personal gain; an aesthetic concern rather than a merely self-aggrandizing one). (I think that, for example, Foucault’s enthusiasm from afar for the Iranian revolution can be regarded in the same way as Kant’s enthusiasm from afar for the French revolution; in both cases, the bad outcomes of these revolutions do not disqualify the reasons for which Kant and Foucault found themselves in sympathy with them; and this is why such events, and such expressions of sympathy, must be radically distinguished from the enthusiasm for fascism that consumed so many early-20th-century artists and intellectuals).

I suppose that, genealogically, all this is Kant’s secular-Enlightenment updating of the old Christian virtue of hope. But it locates what is hoped for in this life, this world, rather than in an afterlife, or in some sort of post-apocalyptic recovery (in this way, it is actually more secular, and less mystical and religious, than, say, Walter Benjamin’s messianism; and although it refers, or defers, to an as-yet-unaccomplished future, it is more materially and empirically grounded than, say, Derrida’s “democracy to come.” Benjamin and Derrida must both be honored as true descendants of Kant, yet arguably they have both diminished him). The human predisposition towards betterment already exists in the here and now, even if its fulfillment does not. Quoting Kant again:

For such a phenomenon in human history is not to be forgotten, because it has revealed a tendency and faculty in human nature for improvement such that no politician, affecting wisdom, might have conjured out of the course of things hitherto existing, and one which nature and freedom alone, united in the human race in conformity with inner principles of right, could have promised. But so far as time is concerned, it can promise this only indefinitely and as a contingent event.

Human improvement depends upon happenings that have not yet taken place, and that in fact may never take place — it requires a “contingent event” in order to be realized. But nonetheless, the “phenomenon” of a capacity towards such improvement is in itself perfectly and altogether real. In Deleuze’s terms, a “predisposition” is something virtual. Our predisposition towards improvement exists virtually, even if it has not been actualized in our social, political, and economic systems. It is for this reason that the denial of our potential or predisposition towards improvement is a secular version of what the Christians call a “sin against the Holy Spirit” — in Kant’s terms, such a denial is “radical evil”, in that it negates the very potentiality that makes any sort of moral choice thinkable in the first place. (Hence, Kant insists that human beings have a predisposition towards betterment in precisely the same way, and for the same reasons, that we all also have a “propensity to evil” or depravity).

In the grander scheme of things, this means that we must reject, on Kantian grounds, all ideologies that declare that humanity is incapable of betterment because human beings are inherently limited and imperfect (such is the tenor of the anti-“utopian” rejections of anything that goes beyond the limits of contemporary predatory capitalism), and all ideologies that declare that the narrow self-interested maximizing behavior of Homo oeconomicus cannot ever be transcended, as well as all ideologies that limit the prospects of emancipation to any particular group, nation, religion, etc. And in the narrow, tawdry limits of contemporary US politics — to move from great things to small — this is why the boundless cynicism of the Republican Party must be rejected as evil. The Democrats may well be playing games with our hopes for betterment, hypocritically encouraging those hopes only the better to betray them, etc., etc.; but at least they represent a world in which such hopes still exist.

Paolo Virno, Multitude: Between Innovation and Negation

Paolo Virno’s newly-translated book, Multitude: Between Innovation and Negation, is somewhat misleadingly titled, since it has very little to say about the concept of the multitude as featured in Virno’s previously-translated book, as well as in the work of Michael Hardt and Antonio Negri. Rather, it is a text composed of three essays, a longish one about jokes and the logic of innovation, flanked by two much shorter ones that deal with the ambivalent legacy of humanity’s linguistic powers.

The first essay argues against the notion, crystallized by Carl Schmitt but held more generally in the “common sense” of political philosophy and conceptual thought (from Hobbes, we might say, through Freud, right down to Steven Pinker), that any democratic or liberatory political theory is founded in the naive view that human nature is innately harmonious and good, whereas the more “realistic” view of the human capacity for “evil” mandates belief in a strong and repressive State. Virno argues, to the contrary, that if we are to worry about the “evil” in human nature — which is really our “openness to the world,” or our underdetermination by our biology, which is what makes it possible for us to have “a virtually unlimited species-specific ambivalence” — then we have all the more reason to worry about what happens when the power to act (to do evil as well as to mitigate it) is concentrated in something like the State’s “monopoly of violence.”

Theorists of the State, from Hobbes to Schmitt, posit the transition from a state of nature to a civil state, involving the rule of a sovereign (in the conservative version), or the rule of law (in the liberal version), as a defense against this innate aggressiveness that would be endemic to the state of nature. But Virno says that this transition is never complete; even a sovereignty based on laws still has to declare a “state of exception” in order to maintain its rule; and this state of exception is, in effect, a return to the never-surpassed “state of nature.” The state of exception is a state in which rules are never firm, but are themselves subject to change and reinvention. We move back from the fixed rules to the human situation that gave rise to them in the first place. Though the “state of exception” has often been described as the totalitarian danger of our current situation, it is also a state in which the multitude can itself elaborate new practices and new forms of invention.

The third essay in the book makes a similar argument, in a somewhat simpler form. Sympathy with others of our kind is an innate biological endowment of our species — here Virno makes reference to recent discoveries involving mirror neurons. But language frees us, for both good and ill, from this state of sympathy. Language gives us the power of negation, which is the ability to deny the humanity of the other (the Jew, the “Muselmann,” the non-white) and hence to torture and kill them mercilessly. Since there is no possibility of returning to a prelinguistic state, the only solution to this potentiality for evil is to potentialize language to a further level, make it go meta-, have it reflect back on itself, in a “negation of the negation.” The power to objectify and kill is also the power to heal, to establish “reciprocal recognition.” Just as the state of exception is the ambivalent locus both of tyrannical imposition and of democratic redemption, so the potentiality of language is the ambivalent locus both of murderous destruction and of the elaboration of community, or of the multitude.

But both these essays are little more than footnotes to the long central essay, “Jokes and Innovative Action,” which makes up most of the book. Virno rather curiously takes Freud’s book on jokes as his primary text, despite disclaiming any interest in the Freudian theory of the unconscious. All his examples of jokes come from Freud; but he reclassifies these jokes in terms of their status as public acts of expression (“performative utterances,” in a way, though ones that do not positively refer back to institutions in the way that a performative utterance like “I sentence you to a year in prison” does), as gestures that disrupt the “normal” functioning of a rule, and as “paralogisms” (logical fallacies, or defective syllogisms).

The point behind all these classifications is a Wittgensteinian one. Most of the time, in “normal” situations, we apply rules to concrete situations unproblematically. But in fact a rule is never sufficient to dictate how it is to be applied in any situation whatsoever — any attempt to do so involves making a second rule to explain how to apply the first rule, then a third rule to explain how to apply the second rule, and so on in an infinite regress. There is always an incommensurability between abstract rules and pragmatic acts of applying those rules. We have to appeal, as Wittgenstein says, to actual practices in a given “form of life.” But these forms of life are themselves subject to change. A joke is a disruptive intervention in this process; it introduces an “aberrant” application of a rule, thus exposing to view the inherent incommensurability between rule and application. It throws us back upon the “form of life” in which the language game of which the rule is a part is embedded. It exposes the contingency of the form of life, the way it could be otherwise. It returns us to what Wittgenstein calls “the common behavior of humankind.”

Virno interprets this “common behavior” to be our species-specific biological endowment (basic “human nature”) — or as the “regularities” of human behavior that ultimately underlie all rules, but which explicit rules cannot fully encompass. The gap between an explicit rule and the way we can apply it refers back to this prior gap between rules and the regularities upon which they are based, but which they are never able to encompass. This is in turn the case because Virno — as we have seen — defines basic human, species-specific and biological regularities not as a fixed “nature” but precisely as an underdetermination, a reservoir of potentiality — something whose incompleteness can only be given fixed form by the still-more-indeterminate, and still-more-open-to-potentiality, power of language. Language is what fixes our biological potentiality into specific forms, but it is also (as jokes witness) what allows us to rupture any given fixity, and reconfigure things otherwise. Wittgenstein’s return to the “regularity” of empirically-observed human nature as the court of last appeal for what cannot be guaranteed or grounded by rational argument is also a kind of return to the state-of-exception-as-state-of-nature, or to the moment of emergence when language first emerges out of our innate drives, both reshaping and giving form to these drives, and opening them up to a still more radical indeterminacy.

Virno claims that this is what is happening, in miniature, in jokes when they twist intentions and laws, multiply meanings, and turn seemingly fixed principles into their opposites, or into sheer absurdity. He therefore takes the joke as a miniaturized version, or as a paradigm case, of innovation and creativity in general. The way that jokes play with and disrupt previously fixed and accepted meanings is a small version of the way that any form of social innovation or creativity alters relations that were previously taken for granted or seen as fixed.

Ultimately, Virno says that jokes and all forms of social innovation play on the indeterminacy between grammatical statements and empirical statements — an indeterminacy that is the major focus of Wittgenstein’s last writing, collected in the volume On Certainty. Wittgenstein says, on the one hand, that certain statements are not in themselves either true or false — because they express the presuppositions that we are already taking for granted and pointing back to when we make any judgment of truth or falsity. For Wittgenstein, it is a weird category error to assert the truth of a statement like “I know that I have two hands” — because we do not “know” this, so much as we already presuppose it whenever we learn something, or come to know something. My sense of having two hands is precognitive (which is precisely why I do not have to check all the time to make sure that I really do have two hands, neither more nor less).

On the other hand, however, and at the same time, Wittgenstein says that this pre-knowledge is not absolute. Over time, there can be shifts in which sorts of statements are empirical ones (that can be true or false), and which statements are foundational or grammatical ones (already presupposed in an act of cognition). I might lose one of my hands in a horrible accident, for instance. Or some empirical fact might become so central to my understanding of everything that it would come to take on the form of a pre-assumed (grammatical) statement, rather than a merely empirical one. These things can and do change over the course of time. One language game morphs or mutates into a different one. For Virno, this is where social innovation takes place. Jokes are the simplest example of such a process of change: one in which “an openly ‘fallacious’ conjecture… reveals in a flash a different way of applying the rules of the game” (163), and thereby changes the nature of the game altogether, or allows us to stop playing one game and to play a different one instead. Virno expands this reading, in order to suggest that it really comprises a theory of crisis in Wittgenstein, so that his naturalism is something more than just a passive cataloging of various “forms of life” — something which he says is “stubbornly ignored by all of Wittgenstein’s scholars” (163).

How useful and convincing is all of this? To my mind, the best part of Virno’s argument is the last thing I mentioned: his parsing of Wittgenstein on the shadowy and always-changing boundary between the “grammatical” and the “empirical.” I think that this is a more informal and naturalistic version of what Deleuze calls “transcendental empiricism.” At any given moment there is a transcendental field that determines what is possible and what is not, and that delineates for us the shape of the empirical (which cannot be interpreted without it). At the same time, this “transcendental field” is not only not an absolute (in Kant’s language, as transcendental it is precisely not transcendent), but is itself something that has an empirical genesis within time, and that varies through time. (This is the point that I was trying to make in a previous posting: capitalism arises entirely contingently, but once it has imposed itself it takes on the shape of a transcendental, circumscribing both what we can experience, and how we can experience it).

Now, doubtless this always-open possibility of shifting the boundary between the empirical and the transcendental, or of turning one into the other, is where creativity and innovation are located. The bad, or mainstream, interpretation of Kant is the one that always insists upon the necessity of separating the transcendental (the regulative, the norm) from the empirical — that is how you get Habermas, for instance. A much better Kantianism is the one (it can be found explicitly in Lyotard, for instance; and I argue that it also works implicitly in Whitehead and in Deleuze) that sees the gap or incommensurability between the transcendental/regulative and the empirical not as a barrier, so much as a space that is sufficiently open as to allow for innovative transformation.

So, to this extent I find Virno’s formulations (including his reading of Wittgenstein) extremely useful. But I also find Virno’s discussion curiously bland and incomplete, and this because of its failure (due to its “naturalistic” orientation?) to say enough either about aesthetics, or about political economy. I think, on the one hand, that the view of creativity and innovation implicit in Virno’s discussion needs to be thought at greater length within the framework of a post-Kantian aesthetics, and that this aesthetics needs to be affirmed precisely against the temptation (all too common in current academic discourse) to render it in “ethical” terms. (I won’t say more about this here, because it is the implicit argument of my entire book on Whitehead and Deleuze). On the other hand, I find Virno’s silence on matters of political economy quite disappointing in someone who explicitly presents himself as a Marxist or post-Marxist philosopher. Rather than deepening a sense of how we might understand the “multitude” in the framework of contemporary global capitalism, Virno opts for a much vaguer, and context-free, understanding of how social and cultural change is possible. He prefers to speak in terms of the State, and of the foundations of law and sovereignty, rather than in terms of modes and relations of production. I know my position here is an unpopular one, but I am enough of a “vulgar Marxist” to think that these sorts of political-philosophy distinctions are too vague and abstract to have any sort of traction when they are separated from “economic” considerations. (Again, this is an argument that needs to be pursued at greater length than I have the time or the patience to do here).

But the limitations of Virno’s argument in this respect are most evident when he discusses the forms of social change. Basically, he lists two. One of them is “exodus”: the Israelites, faced with the choice between submitting to the Pharaoh and rebelling against him, instead make the oblique move of leaving Egypt altogether. This for Virno is the exemplary situation of changing the parameters of what is possible, changing the rules of the game instead of just moving within an already-given game or form of life. The obvious reference, beyond the Bible, is to the Italian “autonomist” movement of the 1960s/1970s, which is the point of origin for Virno’s thought just as it is for Negri’s. Now, much as I admire the emphasis on obliqueness rather than dialectical oppositions, I also suspect that the idea of “exodus” is too easy — in the sense that, when capitalism subsumes all aspects of contemporary life, outside the factory as well as inside, it is as difficult actually to find a point of exodus as it is easy to make the declaration that one is doing so. “Lateral thinking” is a business buzzword more than an anti-capitalist strategy. Things like “open software” and “creative commons” copyright licenses are not anywhere near as radical as they sound — if anything, they not only coexist easily with a capitalist economy, but presuppose a capitalist economy for their functioning. All too often, what we celebrate as escapes from the capitalist machine in fact work as comfortable niches within it.

But Virno’s other form of change, “innovation,” is even more problematic. It seems to me to be symptomatic that Virno introduces his discussion of what he calls entrepreneurial innovation with the disclaimer that this involves “a meaning of the term ‘entrepreneur’ that is quite distinct from the sickening and odious meaning of the word that is prevalent among the apologists of the capitalist mode of production” (148); and yet, immediately after this caveat, he goes on to explain what he means by “entrepreneurial innovation” by referring to the authority of Joseph Schumpeter, the one theorist of the entire 20th century who is most responsible for the “sickening and odious” meaning that Virno ostensibly rejects. Virno insists that, for Schumpeter, “it would be a mistake to confuse the entrepreneur with the CEO of a capitalistic enterprise, or even worse, with its owner.” This is because, for Schumpeter, entrepreneurism is “a basically human aptitude… a species-specific faculty.” However, this disclaimer will not stand. On the one hand, the entrepreneur is not the same as the CEO or owner, only because the former refers to a moment of “invention,” whereas the latter refers to an already-established enterprise. When the businessman ceases to innovate actively, and instead simply reaps the fruits of his market dominance, then he has become a CEO instead of an entrepreneur. Bill Gates was a Schumpeterian entrepreneur in the 1970s; by the 1990s he had become just another CEO. The owners of Google, whose innovations surpassed those of Microsoft, are now making the same transition. Even if the entrepreneur is not yet a CEO, his actions are only intelligible in the framework of a capitalist economy. If the entrepreneur is successful, then he inevitably becomes a CEO. To say that Schumpeterian entrepreneurship is a basic human aptitude is precisely to say (as Virno doesn’t want to say) that capitalism is intrinsic to, and inevitably a part of, human nature. (My own commentary on Schumpeter is available here).

I think that Virno’s reference to Schumpeter is symptomatic, because it offers the clearest example of how he fumbles what seems to me to be one of the great issues of our age: which is, precisely, how to disarticulate notions of creativity and innovation and the New from their current hegemony in the business schools and in the ways that actually-existing capitalism functions. Virno fails to work through this disarticulation, precisely because he has already presupposed it. I myself don’t claim by any means to have solved this problem — the problem being that we can neither give up on innovation, creativity, and the New, nor accept the way that the relentless demand for them is precisely the motor that drives capitalism and blocks any other form of social and economic organization from being even minimally thinkable — but I feel that Virno fails to acknowledge it sufficiently as a problem. In consequence, for all that his speculation in this book offers a response to the Hobbesian or Schmittian glorification of the State, it doesn’t offer any response to the far more serious problem of our subordination to the relentless machinery, or monstrous body, of capital accumulation.

Some thoughts on “character”

I’ve been thinking for a long time about the following quote from Warren Ellis:

Chris Claremont once said of Alan Moore, “if he could plot, we’d all have to get together and kill him.” Which utterly misses the most compelling part of Alan’s writing, the way he develops and expresses ideas and character. Plot does not define story. Plot is the framework within which ideas are explored and personalities and relationships are unfolded. If all you want is plot, go and read a Tom Clancy novel.

For me, this is a key to understanding genre fiction — or maybe I should just say, fiction in general. Plot is overrated. SF novels and comics and movies and the like where it’s all about the plot, how well it is put together, how if a gun is on the table in Act One, it has to be used in Act Three, and so on, bore the hell out of me. The better and more cleverly it is put together, the more it seems to me to be just a dumb, creaky mechanism which provides neither pleasure nor insight. I know that lots of people (readers/viewers as well as creators) get off on carefully crafted plots; but such things do nothing for me.

Which doesn’t mean that all I want to read is avant-garde novels which have no narrative whatsoever. Fiction entirely devoid of a plot is like movies entirely devoid of sound (i.e., not like “silent films” as they were actually exhibited pre-1928, because they always had musical accompaniment, but silent films shown today without the needed music, or arty silent films that have no soundtrack by choice) — both are extremely difficult to follow, or to keep my attention on; they seem to lack any temporal dimension at all. It is like not having short-term memory: each moment happens and then disappears into a void, never to be recalled or related to anything else. [One of the many brilliant things about Kenneth Anger’s films is how, even if he has no specific soundtrack in mind, he just drops in pop songs almost randomly, because that energizes the films somehow, and allows us to apprehend the images in their duration better — I love, for instance, the way that silly “things that go bump in the night” song accompanies Pierrot gesturing at the moon (or am I confusing two of his early films?)].

So. Warren is right, I think. Plot has to be there, but only as a “framework” allowing for the development of what really matters: “ideas and character.” Plot is like a medium or an atmosphere, “within which ideas are explored and personalities and relationships are unfolded.” Just as you need the atmosphere in order to live, so you need the plot in order to explore those ideas, and to see those personalities and relationships. But you’ve got to look through the plot, just as you look through the air to see somebody or something. Of course, there are times when the air itself is important (like when there is a tornado, or when it is heavily polluted), and there are certain times when the plot in itself is important. But for the most part, this is a realm in which McLuhan’s “the medium is the message” does not work.

Of course, ideas are important especially in SF; maybe not as much in some other genres. But what I am interested in here is the question of characters, or “personalities and relationships.” It seems to me that this is something very hard to talk about, yet it is an incredibly important part of how we react to fictions, and why we like some of them so much. It’s common to talk about “identification” with a protagonist, but I think this notion is so vague and general that it doesn’t get us very far. Indeed, William Flesch’s Comeuppance argues quite cogently against both the “identification” theory and the Aristotelian notion of an intrinsic human delight in mimesis. Our relations to the characters we encounter in fictional narratives (and to some extent in non-fictional narratives as well) are much more indirect and convoluted than an “identification” theory can account for.

But for the moment I want to think about only a certain subset of the question of characters in narrative. I want to think about the ways in which characters in genre fiction tend to be, well, generic. Or rather, I want to think about why the best and most interesting characters we encounter in fiction are generic ones. The greatest, most memorable, and most enjoyable characters in English-language fiction (leaving aside Shakespeare for the time being) are almost certainly those of Charles Dickens. And Dickens has no interest at all in anything like interiority, or psychological depth, or Freudian unconscious complexes. Of course, these terms are all 20th-century (or at least late-19th-century) ones, so that their application to Dickens could only be anachronistic. But that’s not the crucial point. What I mean is that Dickens’ characters are, in a curious way, indexical. By this I mean that they are each defined by a single trait, or at most by a couple of traits. These traits tend to be exaggerated, even caricatural. And the characters flagrantly exhibit these traits each time we see them — it is almost as if they were machines programmed to exhibit the same tic over and over. Or else, as if they were maniacal exhibitionists, except that the exhibitionism is not felt by the characters themselves, it is only orchestrated by the author.

What I am trying to get at is that Dickens’ characters are, in a certain sense, not “psychological” at all — they are all outer display, not inner depth. And that, far from detracting from either their “realism” or how compelling they are to the reader, it is precisely on account of what I am calling their outwardness, and their indexicality, that they are both naturalistically plausible and emotionally compelling to us. In a curious way, this mode of presenting character is “truer to life” than any degree of introspection or stream-of-consciousness detail could ever possibly be.

Why is this? I’m not entirely sure, but I think it is because Dickens’ indexical style of character presentation is very close to the way that we actually encounter, get impressions of, and judge people in the real world. I’d even say that, not only do we (obviously) not know other people through introspection, but also we do not even know ourselves through introspection. Self-knowledge is the hardest type of knowledge to obtain. I lack the disinterest that would be required in order for me to see myself objectively. And when I do introspect, the more I strain to examine myself, the more blurry and confused I appear to myself, and the less I am able to apprehend myself in any well-delineated way at all. I am unable to objectify myself, to caricature myself, let alone to characterize myself. Introspective fiction, similarly, tends not to be about character at all. It blurs and dissolves character into something else: at best, perhaps into Language or Style, or sometimes Time, in the great modernist novels. (Leopold, Stephen, and Molly are functions of Joyce’s stupendous linguistic inventiveness, rather than the reverse; Proust’s narrator is so profoundly introspective, so drawn into the flows of duration and memory that he scarcely exists as a “character” at all).

When characters are indexical, as they are in Dickens — and as they also are, for example, in classical Hollywood movies (i.e., those from approximately 1930-1960) — they have the odd quality of being both generic and singular. Generic, in the sense that they are all recognizable types. Singular, in that each one has some sort of unique inflection, something that is wholly idiosyncratic. What’s left out is everything in between these two poles: the individualizing characteristics that are less than generic, but more than the mere idiosyncrasies or tics that enable instant recognition.

Here, I am thinking in part of Thomas Wall’s wonderful discussion of “character actors” in old Hollywood movies:

Character actors are absolutely familiar to us but they never possess “star quality”… [They] never work hard to disguise themselves or to dissolve into a role as in “method” acting. To the contrary, they play their various roles in much the same way, film after film, decade after decade. They are actors who become so familiar because their reality is entirely made up of their various roles such that their mannerisms, habits, looks, vocal tonalities, and gestures all become characteristic and as familiar as the actors themselves remain unfamiliar to us… They always play “types” and they are nothing apart from the types they play… We know them only as images and we see them only as images, that is, as allegories of themselves. Each role is another allegory.

These marvelous actors are therefore singularities… [They] are completely absorbed into the celluloid, the stock, the stereotypes they play so perfectly. They are “types” and they have assumed themselves as such. The character actor cannot be identified with any particular role but neither do they evoke nor express anything other than the role. They have a pure relation to cinema.

Wall describes these character actors as being at the same time “types,” or purely generic, and “singularities” — by which Wall means something like individual instances that can serve as “examples” of some generic quality, but that have no content to them aside from being, in this way, exemplary. A certain character actor cannot be equated with Cowardice or Drunkenness overall — for he is only an individual instance (or instantiation) of one of these generic qualities. He is certainly not the quality in general. This is what makes him singular. But at the same time he is nothing other than a coward or a drunkard — this entirely defines his being. Walter Brennan is never anything more than the Western hero’s old geezer sidekick. Donald Meek is never anything more than the “timid, worrisome, reticent, cowering” figure he plays so often.

Wall is describing Hollywood character actors in explicit contrast to movie stars. Yet I would claim that, in classic Hollywood at least, the stars can also be characterized by this strange combination of the wholly generic and the wholly singular, with nothing in between. Of course, as Wall points out, we recognize the big stars by name, which is not the case with the character actors. But “John Wayne,” “Cary Grant,” and “Katharine Hepburn” are in fact as much generic figures as the character actors are. It is just that the “types” they embody are themselves. You can’t imagine John Wayne playing Hamlet, because whatever role he plays, he is always John Wayne. In each of his roles, Wayne is a singular instance of John-Wayne-ness, nothing more and nothing less. In some of his roles, he is unabashedly heroic, while in others he may even deconstruct his own heroicness (e.g. The Searchers). But even though he is the only instance of the generic type that he embodies, he still appears always, and only, as a singular example of that type.

The telling contrast, I think, is not between character actors and stars, but between the actors of classical Hollywood (significantly, Wall names Thelma Ritter and Elisha Cook Jr.) and those who belong to more recent times, from the “New Hollywood” of the 1970s to today. Today, for the most part, characters are supposed to be specified — they are all supposed to have “plausible” backstories and motivations. This is partly due to the ubiquity of Method Acting (Wall emphasizes how the generic/singular nature of character actors is incompatible with anything like the Method), and partly due to the way screenwriting conventions have changed, or screenwriting has become “rationalized” (this is due in part to screenwriting classes). The idea is that everything in the narrative — every detail, everything about a character — has to be “motivated,” assigned a plausible rationale. The generic is scorned as being cliché — which leads, in fact, to the much worse cliché that everything has its own particular reason for being, incomparable with anything else. In America today, each of us has his or her own “individuality” — and this is precisely the way in which we are all exactly alike, all atomized consumers with our own bundle of “preferences.” The generic and the singular are both repressed, in favor of the in-between ground of busy particularity.

The result of imposing motivation and backstories on everything is that the film’s characters lose their generic quality — and by that fact, they lose their singularity as well. Today’s character actors are completely interchangeable, in a way that Walter Brennan, Donald Meek, Thelma Ritter, etc., were not. They are interchangeable precisely because they are not typecast, but are rather each crafted as an “individual.” The trouble with such “individuals” is that they are not singular; the fact that each of them has his or her individual differences is precisely the characteristic by which they are all the same. And I think this is true of stars as well. Daniel Day-Lewis and Robert Downey Jr. are both totally brilliant actors whose performances I greatly admire. But they aren’t iconic in the way that John Wayne, James Stewart, and Cary Grant were; and this is because they are somehow too immersed in their particular roles, which vary from film to film.

[There are some exceptions to this, of course. The two Toms — Hanks and Cruise — are much more in the tradition of old-style Hollywood iconicity; but I can’t help it, they both seem to me to be completely lame when compared to Stewart or Grant. It may also be that the generic/singular formation can be revived, if not by individual contemporary actors, then by the pairing of such actors, as with the Ed Norton/Brad Pitt doublet of Fight Club. — Also, my old friend Philip Wohlstetter once suggested to me, rightly I think, that part of the brilliance of Titanic was precisely that it had dispensed with post-Method motivation, and gone back to the old Hollywood style of generic typecasting.]

To turn to another type of narrative — science fiction novels — you can see the same sort of contrast if you compare a novel by Philip K. Dick to one by a contemporary hard-SF writer like Greg Bear. Nobody praises Dick for his delineation of character; yet part of what makes Dick’s novels so poignant is precisely that his metaphysical and socio-commodified predicaments, and paranoid dislocations, are always experienced by everyman, ordinary-Joe-who-just-wants-to-get-by characters. Joe Chip in Ubik is entirely generic/singular in the ways I have been describing; he has no “depth”. But isn’t this precisely why we feel so drawn in by his struggles, whether he is being shaken down by his refrigerator for a ten-cent payment, or struggling for his life-after-death against a spirit of entropy and decay?

On the other hand, I just finished reading Greg Bear’s Darwin’s Radio, which is very interesting from an ideas point of view (it imagines a form of directed evolution, in which an endogenous retrovirus emerges from its thousands of years of sleep as “junk DNA” in the human genome, in order to orchestrate a new speciation, or at least subspeciation, of humanity). But I am really irritated by the way Bear introduces characters. Each one is given a specification when first introduced. For example, chosen entirely at random: “A middle-aged Republic Security officer with the formidable name of Vakhtang Chikurishvili, handsome in a burly way, with heavy shoulders and a thick, often-broken nose, stepped forward.” No matter that we only meet this character for an instant on page 20, and never again in the course of a 524-page novel; we have to be given some “hook” that will de-genericize him, that will give him a certain “plausibility” as a character. The result is that all the characters of Darwin’s Radio get muddled, precisely because of the way that we have been given details to distinguish them. We know a lot more about the novel’s protagonists, Kaye Lang and Mitch Rafelson, than we ever do about Joe Chip; but we never care about them anywhere near as much as we do about Dick’s hero. And this is precisely because, like contemporary Hollywood actors and characters, they lack the generic dimension, and lack singularity as well, falling in between, so that their very plausibility turns them into stick figures without any deeper resonance.

I don’t mean to single out Bear for special censure; he is, in fact, one of the more interesting SF writers at work today. But, although I value SF very largely for its ideas, for the ways it tries to think through the hints of futurity that have already arrived in our present, and to negotiate the tricky shoals of the meeting between technology and socio-political actuality — I also like it because (in contrast to Bear’s technique, which is closer to mainstream fiction) it is one of the places where the generic-singular mode of character presentation is still viable. The same is true, and perhaps even more so, for comics — the medium of comics, with its tendency towards iconic images rather than fully naturalistic ones (as Scott McCloud notes), and with its linguistic compression (for reasons of space alone, text has to be brief and pointed, resonant and charged, rather than over-specific), almost requires the kind of iconic, generic/singular approach to character that I have been discussing.

Which is why, for instance, I am so looking forward to Matt Fraction’s forthcoming run with Invincible Iron Man, in which we are promised an epic battle between Tony Stark’s corporate fascism and the “post-national…open source ideological terrorism” of “bad guy” Zeke Stane.

Cognitive capitalism?

I just finished reading Yann Moulier Boutang’s Le capitalisme cognitif (Cognitive Capitalism). Boutang is the editor of Multitudes, a French journal closely associated with Toni Negri. The basic thesis of his book — in accord with what Hardt and Negri say in Empire and Multitude — is that we are entering into a new phase of capitalism, the “cognitive” phase, which is as different from classical industrial capitalism as that capitalism was from the mercantile and slavery-based capitalism that preceded it. This is a thesis that, in general, I am sympathetic to. On the one hand, it recognizes the ways in which 19th-century formulations of the categories of class and property are increasingly out of date in our highly virtualized “network society”; while on the other hand, it recognizes that, for all these changes, we are still involved in what has to be called “capitalism”: a regime in which socially produced surpluses are coded financially, expropriated from the actual producers, and accumulated as capital.

Ah, but, as always, the devil is in the details. And I didn’t find the details of Boutang’s exposition particularly satisfying or convincing. To be snide about it, it would seem that Boutang, like all too many French intellectuals, has become a bit too enamored of California. He takes those Silicon Valley/libertarian ideas — about the value of continual innovation, the worthiness of the free software movement, and the possibilities of unlimited digital dissemination — more seriously, or at least to a much greater extent, than they merit. The result is a sort of yuppie view of the new capitalism, one that ignores much that is cruel and repressive about the current regime of financial accumulation.

There, I’ve said it. But let me go through Boutang’s argument a bit more carefully. His starting point, like that of Hardt and Negri, and of Paolo Virno as well, is what Marx calls “General Intellect” — a concept that only comes up briefly in Marx, in the “Fragment on Machines” which is part of that vast notebook (never published by Marx) known today as the Grundrisse; but that has become a central term for (post-)Marxist theorists trying to come to grips with the current “post-Fordist” economy. (Here’s Paolo Virno’s discussion of general intellect). Basically, “general intellect” refers to the set of knowledges, competencies, linguistic uses, and ways-of-doing-things that are embedded in society in general, and that are therefore more or less available to “everybody.” According to the argument of Virno, Maurizio Lazzarato, Hardt and Negri, Boutang, and others, post-Fordist capitalism has moved beyond just the exploitation of workers’ (ultimately physical) labor-power, and is now also involved in the appropriation, or the extraction of a surplus from, all this embodied and embedded social know-how. Rather than just drawing on the labor-power that the worker expends in the eight hours he or she spends each day in the workplace, “cognitive capitalism” also draws on the workers’ expertise and “virtuosity” (Virno) and ability to conceptualize and to make decisions: capacities that extend beyond the hours of formal labor, since they involve the entire lifespan of the workers. My verbal ability, my skill at networking, my gleanings of general knowledge which can be applied in unexpected situations in order to innovate and transform: these have been built up over my entire life; and they become, more than labor-power per se, the sources of economic value. Corporations can only profit if, in addition to raw labor power, they also appropriate this background of general intellect as well. General intellect necessarily involves collaboration and cooperation; it arises through, and is cultivated within, the networks that have become so important, and of such wide extent, in the years since the invention of the Internet. In this way, general intellect can be thought of as a “commons” (as Lawrence Lessig and other cybertheorists say), or as the overall framework of what defines us now as a “multitude” (rather than as a particular social class, or as a “people” confined to a single nation, as was the case in the age of industrial capitalism and the hegemony of print media).

All this is well and good, as far as it goes. While I would note that the phenomena described under the term “general intellect” have not just been invented since 1975, but have existed for a much longer time — and have been exploited by capitalism for a much longer time — I don’t doubt that they have been so expanded in recent years as to constitute (as the dialecticians would put it) “a transformation of quantity into quality.” (See my past discussion of McLuhanite Marxism). Let’s provisionally accept, then, Boutang’s assertion that enough has changed in the last 30 years or so that we are moving into a new regime of capitalist accumulation. The question is, how do we describe this new regime?

It’s the form of Boutang’s description of this transformation that I find problematic. He says that the new cognitive capitalism is concerned, not so much with the transformations of material energy (labor-power) into physical goods, as with the reproduction of affects and subjectivities, of knowledges and competencies, of everything mental (or spiritual?) that cannot be reduced to mere binarized “information.” I don’t really disagree with this, to the extent that it is a question of “in addition” rather than “instead.” But Boutang leans a little too far to the opinion that “cognitive” or virtual production (what Hardt and Negri call “affective labor,” and what Robert Reich calls “symbolic analysis”) has displaced, rather than supplemented, the production and distribution of physical goods and services. The source of wealth is no longer labor-power, he says, nor even that dead labor-power congealed into things that constitutes “capital” in the traditionally Marxist sense, but rather the “intellectual capital” that is possessed less by individuals than by networks of individuals, and that is expressed in things like capacity for innovation, institutional know-how, etc.

Boutang claims that this “intellectual capital” [a phrase I hate, because an individual’s skills, knowledge, etc. is precisely not “capital”] is not depleted daily (so that it needs to be replenished) in the way that physical labor-power is under industrial capitalism; rather, it is something that increases with use (as you do more of these things, you become better at them), so that the process of replenishment (learning more, gaining skills, improving these skills or virtuosities through practice) is itself what adds value. Also, this “intellectual capital” is an intrinsically common or social good, rather than a private or individualized one. It can only be realized through network-wide (ultimately world-wide) collaboration and cooperation. For both these reasons, the appropriation of this “general intellect” is a vastly different process from that of appropriating individual workers’ labor-power. All this is exemplified for Boutang in phenomena like online peer to peer file trading, and in the open source software movement — he sees collaborative production in the manner of Linux as the new economic paradigm.

Now, I am in favor, as much as anybody is, of violating copyright, and of open source (for things like academic publications as well as for software); but I do not believe that these can constitute a new economic paradigm — they still exist very much as marginal practices within a regime that is still based largely on private property “rights” and the extortion of a surplus on the basis of those “rights.” [I should say, as I have said many times before, that I am happy for my words to be disseminated in any form, without payment, as long as the attribution of the words to my authorship — to use a dubious but unavoidable word or concept — is retained]. Boutang is so excited by the “communist” aspects of networked collaboration, or general intellect, that he forgets to say anything about how all this “cognitive” power gets expropriated and transformed into (privately owned) capital — which is precisely what “cognitive capitalism” does. He optimistically asserts that the attempts of corporations to control “intellectual property,” or extract it from the commons, will necessarily fail — something that I am far less sure of. “Intellectual property” is an oxymoron, but this doesn’t mean that “intellectual property rights” cannot be successfully enforced. You can point to things like the record companies’ gradual (and only partial) retreat from insisting upon DRM for all music files; but this retreat coincides with, and is unthinkable without, a general commodification of things like ideas, songs, genetic traits, and mental abilities in the first place.

Boutang gives no real account of just how corporations, or the owners of capital, expropriate general intellect (or, as he puts it in neoliberal economistic jargon, how they capture “positive externalities”). He seems to think that the switch from mere “labor-power” to “general intellect” as the source of surplus value is basically a liberating change. I would argue precisely the opposite: that now capital is not just expropriating from us the product of the particular hours that we put in at the workplace; but that it is expropriating, or extracting surplus value from, our entire lives: our leisure time, our time when we go to the movies or watch TV, and even when we sleep. The switch to general intellect as a source of value is strictly correlative with the commodification of all aspects of human activity, far beyond the confines of the workplace. Just as the capitalist cannot exploit the worker’s labor per se, but must extract it in the form of labor power, so the capitalist cannot exploit general intellect without transforming it into something like “cognition-power” — and this is extracted from individuals just as labor-power is. When the division between physical and mental labor is made less pronounced than it was in the Fordist factory, this only means that the “mental” no less than the “physical” is transformed into a commodified “capacity” that the employer can purchase from the employee in a way that is lesser than, and incommensurate with, the “use” the employer gets from that power or capacity. Boutang makes much of the fact that cognition is not “used up” in the way that the physical expenditure of energy is; but I don’t think this contrast is as telling as he claims. The fatigue of expending cognitive power in an actual work situation is strictly comparable to the fatigue of expending physical power in a factory. And the stocking-up of physical power and the stocking-up of cognitive ability over the lifetime of the worker go together entirely, rather than being subject to opposite principles.

Boutang seems to ignore the fact that the regime of “intellectual property” leads to grotesque consequences: an idea that a Microsoft employee might have when she is taking a bath, or even when she is asleep (consider all the stories of innovative ideas that come to people in dreams, like Kekulé’s discovery of the “ring” structure of benzene) “belongs” to the corporation, and must be left behind if and when she moves on to another job. (Let me add that it is just as absurd to assert that an idea that I come up with from a dream “belongs” to me as it is to assert that the idea belongs to my employer. All ideas come out of other ideas; nothing I do is independent of all the store of “general intellect” that I draw upon).

Boutang also seems to buy into many of the other myths of cognitive capitalism. He endorses the idea that the “flexibilization” of employment (or what in Europe is often called “precarization”) is on the whole a good and progressive thing: it “liberates” workers from the oppression of the “salariat” (I am not sure how to translate this word into English — the “regime of salary,” perhaps?). Boutang goes so far as to point to the way “new economy” corporations in the late 1990s gave out stock options in lieu of higher salary as a harbinger of the way things are being rearranged under cognitive capitalism. This seems entirely wrong to me, because it is only a subset of highly skilled programmers, and executives, who get these options. As far as I know, the people who wash the windows or sweep the floors at Microsoft or Google do not get stock options. (I don’t think the people who sit at the phones to answer consumer complaints do either).

Not to mention that you’d never know from Boutang’s discussion that over a billion people in the world currently live in what Mike Davis calls “global slums”. William Gibson is right to say that “the street finds its own uses for things”; and there are certainly a lot of interesting and inventive and innovative things going on in the ways that people in these slums are using mobile phones and other “trickle-down” digital technology. (See Ian McDonald’s SF novel Brasyl for a good speculative account, or extrapolation, of this). But all this goes on in an overall situation of extreme oppression and deprivation, and it can only be understood in the context of the “hegemonic” uses of these technologies in the richer parts of the world (or richer segments of the societies in which these slums are located).

Also, Boutang needs to account for the fact that WalMart, rather than Microsoft or Google, is the quintessential example of a corporation operating under the conditions of cognitive capitalism. WalMart could not exist in its present form without the new technologies of information and communication — it draws upon the resources of “general intellect” and the force of continual, collectively-improvised innovation for everything that it does. Also, and quite significantly, it focuses entirely upon circulation and distribution, rather than upon old-style manufacturing — showing that the sphere of circulation now (in contrast to Marx’s own time) plays a major role in the actual extraction of surplus value. Yet WalMart shows no signs of unleashing the “creativity of the multitude” in its workings, nor of replacing the “salariat” with things like stock options for its workers. On that front, its largest innovation consists in getting rid of the central Fordist principle of paying the workers enough so that they can afford to buy what they manufacture. Instead, WalMart has pioneered the inverse principle: paying the workers so little that they cannot afford to shop anywhere other than at WalMart. It might even be said, not too hyperbolically, that WalMart has singlehandedly preserved the American economy from total collapse, in that its lowered prices are the only thing that has allowed millions of the “working poor” to retain the status of consumers at all, rather than falling into the “black hole” of total immiseration. WalMart is part and parcel of how the “new economy” has largely been founded upon transferring wealth from the less wealthy to the already-extremely-rich. But this is a process that Boutang altogether ignores; he writes as if “neoliberalism” were some sort of rear-guard action by those who simply “don’t get” the new cognitive economy. In fact, though, neoliberalism is no mere ideology: it is the actual “cognitive” motor of cognitive capitalism’s development.

Boutang even buys into the neoliberal program, to the extent that he maintains that the role of financial speculation in the current postfordist regime is largely a benevolent one, having to do with the management of the newly impalpable sources of value in the “cognitive” economy. He denies that financial speculation increasingly drives economic processes, rather than merely reflecting them or being of use to them. He needs to think more about the functioning of derivatives in “actually existing capitalism.”

All in all, Le capitalisme cognitif buys into the current capitalist mythology of “innovation” and “creativity” way too uncritically — without thinking through what it might mean to detach these notions from their association with startups and marketing plans and advertising campaigns (and how this might be done). (As a philosophical question, this is what my work with Whitehead and Deleuze leads me to).

The book ends, however, with an excellent proposal. Boutang argues for an unconditional “social wage”: to be given to everyone, without exception, and without any of the current requirements that welfare and unemployment programs impose on their recipients (requirements like behaving properly, or having to look for work, or whatever). This social wage (he gives a provisional figure of 700 euros per month, or about $1000/month at today’s exchange rates) would be paid in recompense for the fact that “general intellect,” from which corporations extract profit, is in fact the work of everyone — even and especially outside of formal work situations. Boutang spends a lot of energy showing how this proposal is fiscally feasible in Europe today, and how it would rejuvenate the economy (and thus lead, in the long run, to enhanced profits for the corporations whose tax payments would finance it). What he doesn’t say, however — and perhaps does not recognize — is that, even though this proposal is perfectly feasible in terms of the overall wealth of the world economy, if it were really adopted universally — that is to say, worldwide, to all human beings on the face of the planet — it would severely disrupt the regime of appropriation that he calls “cognitive capitalism.” This is yet another example of bat020’s and k-punk’s maxim that (reversing a slogan from May 1968) we must “be unrealistic, demand the possible.” The unconditional social wage is entirely possible in terms of what the world can economically afford, but it is “unrealistic” in terms of the way that “cognitive capitalism” is structured. Demanding it pushes the system to a point of paradox, a critical point — at least notionally.
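
Since Boutang gives actual numbers, the orders of magnitude are worth checking. Here is a back-of-the-envelope calculation in Python; the population and GDP figures are rounded 2008 approximations of my own, not Boutang’s, and are meant only to fix the scale of what is being proposed.

```python
# Rough scale of an unconditional social wage at Boutang's figure.
# The population and GDP numbers are rounded 2008 approximations,
# supplied for illustration only; they are not Boutang's own.

monthly_wage_eur = 700             # Boutang's provisional figure
eu_population = 500_000_000        # EU population circa 2008, rounded
eu_gdp_eur = 12_500_000_000_000    # EU GDP circa 2008, rounded, in euros

annual_cost_eur = monthly_wage_eur * 12 * eu_population
print(f"Annual cost: {annual_cost_eur / 1e12:.1f} trillion euros")
print(f"Share of EU GDP: {annual_cost_eur / eu_gdp_eur:.0%}")
```

On these rough numbers the scheme would run to something like 4.2 trillion euros a year, on the order of a third of European GDP; which is one way of seeing why, extended to the whole planet, it would amount to a transformation of the regime of appropriation rather than an adjustment within it.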

Whitehead’s God

It took me much longer than I had hoped, but I have finally finished a first draft of my chapter on Whitehead’s notion of God. It’s longer than it should be, and a bit all over the place (digressive) — and yet touches too briefly on a number of things that it would be good to flesh out in greater detail. And I didn’t quite manage to explain how and why Whitehead’s God might be preferable even to Spinoza’s God (its only competitor) for the role of the philosophers’ God, or the atheists’ God. In any case, the God I discern in Whitehead is (as far as I can tell) rather different from the one found in process theology.

For what it’s worth, the article is here (pdf).

The Head Trip; consciousness and affect

I’ve been reading Jeff Warren’s The Head Trip: Adventures on the Wheel of Consciousness, basically on the recommendation of Erik Davis. It’s a good pop-science-cum-therapy book, which explores basic modes of conscious experience, both nocturnal and diurnal, and combines accounts of what scientific researchers and therapists are actually doing with a narrative of Warren’s own subjective experiences with such modes of consciousness-alteration as lucid dreaming, hypnotic trances, meditation, neurofeedback, and so on. Warren maps out a whole series of conscious states (including ones during sleep), and suggests that consciousness in general (to the extent that there is such a thing “in general”) is really a continuum, a mixture of different sorts of mental activity, and different degrees of attentiveness, including those at work during sleep. These various sorts of conscious experience can be correlated with (but not necessarily reduced to) various types of brain activity (both the electric activity monitored by EEGs and the chemical activity of various neurotransmitters; all this involves both particular “modules” or areas of the brain, and systematic patterns running through the entire brain and nervous system).

The Head Trip is both an amiable and an illuminating book, and I really can’t better Erik Davis’ account of it, which I encourage you to read. Erik calls Jeff Warren “an experiential pragmatist in the William Jamesian mode,” which is both high praise and a fairly accurate description. Warren follows James in that he insists upon conscious self-observation, and looks basically at what James was the first to call the “stream of consciousness.” Like James, Warren insists upon the pragmatic aspect of such self-observation (what our minds can do, both observing and being observed, in all its messy complexity), rather than trying to isolate supposedly “pure” states of attention and intention the way that the phenomenologists do.

At one point, Warren cites Rodolfo Llinas and D. Pare, who argue that consciousness is not, as James claimed, “basically a by-product of sensory input, a tumbling ‘stream’ of point-to-point representations,” because it is ultimately more about “the generation of internal states” than about responding to stimuli (p. 138). But this revised understanding of the brain and mind does not really contradict James’ overall pragmatic style, nor his doctrine of “radical empiricism.” James’ most crucial point is to insist that everything within “experience” has its own proper reality (as opposed to the persistent dualism that distinguishes between “things” and “representations” of those things). Not the least of Warren’s accomplishments is that he is able to situate recent developments in neurobiological research within an overall Jamesian framework, as opposed to the reductive dogmas of cognitivism and neural reductionism.

Nonetheless, what I want to do here is not talk about Warren’s book, but rather speculate about what isn’t in the book: which is any account of emotion or of affect. Shouldn’t we find it surprising that in a book dedicated to consciousness in all its richness and variety, there is almost nothing about fear, or anger, or joy, or shame, or pride? (There’s also nothing about desire or passion or lust or erotic obsession: I am not sure that these can rightly be called “emotions,” but they also aren’t encompassed within what Warren calls “the wheel of consciousness”). There are some mentions of a sense of relaxation, in certain mental states; and of feeling a sort of heightened intensity, and even triumph, when Warren has a sort of breakthrough (as when he finally succeeds in having a lucid dream, or when his neurofeedback sessions are going well). Correlatively, there are also mentions of frustration (as when these practices don’t go well — when he cannot get the neurofeedback to work right, for instance). But that’s about it, as far as the emotions are concerned.

The one passage where Warren even mentions the emotions (and where he briefly cites the recent work on emotions by neurobiologists like Antonio Damasio and Joseph LeDoux) is in the middle of a discussion of meditation (pp. 309ff.). The point of this passage is basically to contrast the way in which Western rationalism has simply tried to repress (in a Freudian sense) the emotions with the way in which the Buddhist tradition has instead tried to “cultivate” them (by which he seems to mean something like what Freud called “sublimation”). Warren oddly equates any assertion of the power of the emotions with evolutionary psychology’s doctrine that we are driven (or “hardwired”) by instincts that evolved during the Pleistocene. The existence of neuroplasticity (as recognized by contemporary neurobiologists) effectively refutes the claims of the evolutionary psychologists — this is something I entirely agree with Warren about. But Warren seems thereby to assert, as a corollary, that emotions basically do not matter to the mind (or to consciousness) at all — and this claim I find exceedingly bizarre. Warren seems to be saying that Buddhist meditation (and perhaps other technologies, like neurofeedback, as well) can indeed, as it claims, dispose of any problems with the emotions, because it effectively does “rewire” our brains and nervous systems.

What is going on here? I have said that I welcome the way that Warren rejects cognitivism, taking in its place a Jamesian stance that refuses to reject any aspect of experience. I find it salubrious, as well, that Warren gives full scope to neurobiological explanations in terms of chemical and electrical processes in the brain, without thereby accepting a Churchland-style reductionism that rejects mentalism or any other sort of explanatory language. Warren thus rightly resists what Whitehead called the “bifurcation of nature.” Nonetheless, when it comes to affect or emotion, some sort of limit is reached. The language that would describe consciousness from the “inside” is admitted, but the language that would express affective experience is not. I think that this is less a particular failing or blind spot on Warren’s part, than it is a (socially) symptomatic omission. Simply by omitting what does not seem to him to be important, Warren inadvertently testifies to how little a role affect or emotion plays in the accounts we give of ourselves today, accounts both of how our minds work (the scientific dimension) and of how we conceive ourselves to be conscious (the subjective-pragmatic dimension).

Some modes of consciousness are more expansive (or, to the contrary, more sharply focused) than others; some are more clear and distinct than others; some are more bound up with logical precision, while others give freer rein to imaginative leaps and to insights that break away from our ingrained habits of association. But in Warren’s account, none of these modes seem to be modulated by different affective tones, and none of them seem to be pushed by any sort of desire, passion, or obsession. Affects and desires would seem to be, for Warren, nothing more than genetically determined programs inherited from our reptilian ancestors (and exaggerated in importance by the likes of Steven Pinker) which our consciousness largely allows us to transcend.

Another way to put this is to say that Warren writes as if we could separate the states (or formal structures) of attentiveness, awareness, relaxation, concern, focus, self-reflection, and so on, from the contents that inhabit these states or structures. This is more or less equivalent to the idea — common in old-style AI research — that we can separate syntactics from semantics, and simply ignore the latter. Such a separation has never worked out in practice: it has entirely failed in AI research and elsewhere. And we may well say that this separation is absurd and impossible in principle. Yet we make this kind of separation implicitly, and nearly all the time; it strikes us as almost axiomatic. We may well be conscious of “having” certain emotions; but we cannot help conceiving how we have these emotions as something entirely separate from the emotions themselves.

It may be that consciousness studies and affect studies are too different as approaches to the mind (or, as I’d rather say, to experience) to be integrated at all easily. Indeed, in this discussion I have simply elided the difference between “affect” and “emotion”: the terms are sometimes used more or less interchangeably, but I think any sort of coherent explanation requires a distinction between the two. Brian Massumi uses “affect” to refer to the pre-personal aspects (both physical and mental) of feelings, the ways that these forces form and impel us; he reserves “emotion” to designate feelings to the extent that we experience them as already-constituted conscious selves or subjects. By this account, affects are the grounds of conscious experience, even though they may not themselves be conscious. Crucial here is James’ sense of how what he calls “emotions” are visceral before they are mental: my stomach doesn’t start churning because I feel afraid; rather, I feel afraid because my stomach has started churning (as a pre-conscious reaction to some encounter with the outside world, or to some internally generated apprehension). The affect is an overall neurological and bodily experience; the emotion is secondary, a result of my becoming-conscious of the affect, or focusing on it self-reflexively. This means that my affective or mental life is not centered upon consciousness, although this account of non-conscious mental life differs both from that of psychoanalysis (which sees it in terms of basic sexual drives) and from that of cognitive theory (which sees non-conscious activity only as “computation”).

There’s more to the affect/emotion distinction than James’ account; one would want to bring in, as well, Silvan Tomkins’ post-Freudian theory of affect, Deleuze’s Spinozian theory of affect, and especially Whitehead’s “doctrine of feelings.” Rather than go through all of that here, I will conclude by saying that, different as the field of consciousness studies (as described by Jeff Warren) is from cognitivism, they both ultimately share a sense of the individual as a sort of calculating (or better, computational) entity that uses the information available to it in order to maximize its own utility, or success, or something like that. Such an account — which is also, as it happens, the basic assumption of our current neoliberal moment — updates the 18th-century idea of the human being as Homo economicus into an idea of the human being as something like Homo cyberneticus or Homo computationalis. For Warren, this is all embedded in the idea that, on the one hand, our minds are self-organizing systems, and parts of larger self-organizing systems; and on the other hand, that “we can learn to direct our own states of consciousness” (p. 326). Metaphysically speaking, we are directed by the feedback processes of an Invisible Hand; instrumentally speaking, however, we can intervene in these feedback processes, and manipulate the Hand that is manipulating us. The grounds for our decision to do this — to intervene in our own behalf — are themselves recursively generated in the course of the very processes in which we determine to intervene. The argument is circular; but, as with cybernetics, the circularity is not vicious so long as we find ourselves always-already within it. This is in many ways an enticing picture: if only because it is the default assumption that we cannot help starting out with. And Jeff Warren gives an admirably humane and expansive version of it. Still, I think we need to spend more time asking what such a picture leaves out. And for me, affect theory is a way to begin this process.

Comeuppance

William Flesch’s Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction is, I think (by which I mean, to the best of my knowledge) the best work of Darwinian literary criticism since the writings of Morse Peckham. That may sound like faint praise, considering how lame most recent lit crit based on “evolutionary psychology” has been; but Comeuppance is a brilliant and startlingly original book, making connections that have heretofore passed unnoticed, but that seem almost self-evident once Flesch has pointed them out.

Comeuppance combines attention to cutting-edge biological theory with a set of aesthetic concerns that are, in a certain sense, so “old-fashioned” that most contemporary theorists and critics have completely forgotten even to think about them. Flesch is concerned with the question of vicarious experience: that is to say, he wants to know why we have so much interest in, and emotional attachment to, fictional characters, narratives, and worlds. He tries to account for why we are so inclined to the “suspension of disbelief” when we encounter a fiction; why we root for the good guys and hiss at the bad guys in novels and movies; why we find it so satisfying when Sherlock Holmes solves a case, or when Spiderman defeats the Green Goblin, or when Hamlet finally avenges his father’s death, or when we imagine a torrid romance between Captain Kirk and Mr. Spock.

When such pleasures are thought about at all, they are usually attributed either to our delight in mimesis, or imitation (which was Aristotle’s theory), or to our identification with the protagonist of the fiction (which was Freud’s). But Flesch suggests that both these accounts are wrong, or at least inadequate. Far from identifying with Sherlock Holmes, Spiderman, Hamlet, or Captain Kirk, we admire and love them from a spectatorial distance, and with an intense awareness of that distance. And while our engagement with narratives requires a certain degree of “verisimilitude,” neither resemblance nor plausibility is enough in itself to generate the sort of engagement and attachment with which we encounter fictions.

Flesch proposes a very different explanation for this engagement and attachment from either the Aristotelian or the Freudian one. He bases it on recent developments in evolutionary biology, and particularly (1) the use of game-theoretical simulations to explain the development of intraspecies (and even interspecies) cooperation, ever since Robert Axelrod, in the 1980s, first used the “Prisoner’s Dilemma” game to model how competition could give rise to altruism; and (2) the studies by Amotz and Avishag Zahavi of “costly signaling” and the “handicap principle.” I will not try to reproduce here the details of these studies, nor the elegant logic that Flesch uses in order to put them together, and to bring them to bear on the problematics of fiction; I only wish to summarize them briefly, in order to move on to the consequences of Flesch’s arguments.
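
As an illustration of the first of these, here is a minimal sketch in the general spirit of Axelrod’s “ecological” simulations, not a reconstruction of his actual tournaments; the payoff values are the conventional ones, and everything else is my own simplification.

```python
# A minimal "ecological" iterated Prisoner's Dilemma, in the spirit of
# Axelrod's simulations (an illustrative sketch, not his actual code).
# Strategies reproduce in proportion to their average payoffs.

PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first; then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def mean_payoff(strat_a, strat_b, rounds=200):
    """Average per-round payoff to strat_a in an iterated match."""
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        total += PAYOFFS[(a, b)][0]
        hist_a.append(b)   # each side remembers the opponent's moves
        hist_b.append(a)
    return total / rounds

strategies = {'tit-for-tat': tit_for_tat,
              'always-defect': always_defect,
              'always-cooperate': always_cooperate}
names = list(strategies)
payoff = {(i, j): mean_payoff(strategies[i], strategies[j])
          for i in names for j in names}

shares = {name: 1 / len(names) for name in names}   # equal starting shares
for generation in range(60):
    fitness = {i: sum(payoff[(i, j)] * shares[j] for j in names)
               for i in names}
    mean_fitness = sum(shares[i] * fitness[i] for i in names)
    shares = {i: shares[i] * fitness[i] / mean_fitness for i in names}

print({name: round(share, 3) for name, share in shares.items()})
```

Defection prospers at first by exploiting the unconditional cooperators; once they are gone, it starves, and the reciprocating strategy takes over the population. This is the sense in which competition among strategies can give rise to cooperation.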

In brief, Flesch maintains: that evolution can lead, and evidently has led, to the development (in human beings, and evidently other organisms as well) of “true altruism,” or the impulse to help others, or the group in general, even at considerable cost to oneself; that this altruism requires that we continually monitor one another for signs of selfishness or cheating (because otherwise, selfish cheaters would always prosper at the expense of those who were honestly altruistic); that, as a result of this monitoring, we get vicarious pleasure from the punishment of cheaters and (to a lesser extent) from the reward of those who enforce this by actively ferreting out and punishing the cheaters; that altruism cannot just be enforced by the punishment of individual cheaters, but needs to be signaled, and made evident to everybody (including the cheater) as well; that — given the way that everyone is continually monitoring everyone else — the best way to make evident that one is indeed an altruist rather than a cheater is to engage in “costly signaling,” or altruistic behavior that is sufficiently costly (draining of wealth or energy, involving risks) to the one engaging in it that it has to be authentic rather than a sham; and that our constant monitoring and reading of these signals, our constant emotional reaction to vicarious experience, is what gives us the predisposition to be absorbed in, or at least emotionally affected by, fictions, so that we respond to fictional characters in narratives in much the same way that we do to real people whom we do not necessarily know, but continually observe and monitor. (There’s not that great a difference, really, between my reaction to Captain Kirk, and my reaction to Bill Clinton).
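
The role that punishment plays in this chain of claims can be made concrete with a toy public goods game. This is my own illustration, not anything in Flesch’s text, and all the parameter values are arbitrary placeholders; the point is only the comparative logic.

```python
def payoffs(n_cooperators, n_punishers, n_defectors,
            contribution=1.0, multiplier=3.0,
            fine=1.0, punish_cost=0.2):
    """Per-capita payoffs in one round of a public goods game.

    Cooperators and punishers contribute to a common pot, which is
    multiplied and shared by everyone, defectors included. Punishers
    additionally pay a personal cost to fine every defector.
    All parameter values are arbitrary placeholders."""
    n = n_cooperators + n_punishers + n_defectors
    pot = (n_cooperators + n_punishers) * contribution * multiplier
    share = pot / n
    cooperator = share - contribution
    punisher = share - contribution - punish_cost * n_defectors
    defector = share - fine * n_punishers
    return cooperator, punisher, defector

# Without punishers, free-riding strictly dominates contributing:
c, _, d = payoffs(n_cooperators=9, n_punishers=0, n_defectors=1)
print(f"no punishers:   cooperator {c:.1f}, defector {d:.1f}")

# With a few punishers in the group, defection no longer pays:
c, p, d = payoffs(n_cooperators=6, n_punishers=3, n_defectors=1)
print(f"with punishers: cooperator {c:.1f}, punisher {p:.1f}, defector {d:.1f}")
```

Notice that the punishers themselves end up with less than the ordinary cooperators: punishing is costly, second-order altruism, which is precisely why, on Flesch’s account, it needs to be signaled, monitored, and vicariously applauded in its turn.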

I haven’t done justice to the full subtlety and range of Flesch’s argument; nor have I conveyed an adequate sense of how plausible and convincing it is, in the detail with which he works it through. But the argument is as careful and nuanced as it is ambitious. It’s true that Flesch places his argument under the mantle of “evolutionary psychology,” something about which I remain deeply dubious. The proponents of evolutionary psychology tend to make global or universalist claims which radically underestimate the extent of human diversity and of historical and cultural differences. I am willing to accept, until shown otherwise, that in all human cultures people sing songs, and experience the physiological reactions that we know as “fear”, and have certain rituals of hospitality. But even if these are biological givens, “human nature” is radically underdetermined by them. For instance, there are enormous differences among cultures and histories as to which vocal performances count as songs, and why and when we sing, and what it means to sing, and when it is appropriate to sing and when not, and what emotions are aroused by singing, and who knows the songs and is expected to sing them, and what technologies are associated with singing, and so on almost ad infinitum.

But even as Flesch adopts the mantle of evolutionary psychology, and makes some general claims about “universal” human attributes, he is careful to avoid — and indeed, he severely criticizes — the reductiveness that often comes with such claims. For his evolutionist arguments have nothing to do with the usual twaddle about how women are supposedly genetically hardwired to prefer older, high-status men, and so on and so forth. Rather, Flesch’s arguments are directed mostly at showing how altruism and cooperation could have emerged despite the Hobbesian nature of conflict among Dawkinsian “selfish genes”; and, more broadly, at demonstrating how “biology has brought humans to a place where genetic essence does not necessarily ‘precede human existence’ ” (219 — Flesch says this after wryly noting that he is probably the first to have cited Sartre and evolutionary theorists together). Since altruism and cooperation — and for that matter cultural variability — evidently do exist among human beings, the potential for these things must have itself arisen in the course of evolution. So Flesch’s real argument is with those sociobiologists and evolutionary psychologists — like Edward O. Wilson, and especially Steven Pinker, to cite the most famous names — who argue, basically, that all these things are a sham, and that underneath appearances we “really” are still only engaged in a Hobbesian war of all against all, and a situation of Malthusian triage.

That said, the real importance of the evolutionary categories that Flesch bases his argument upon — especially game-theoretically-defined altruism and Zahavian costly signaling — resides less in how adequate an explanation they provide of human origins, than in how useful they prove to be to help us think about our own investments in narrative, and the particular (Western) tradition of narrative that is most familiar to us. (Evolutionary categories are neither more nor less “universal” than, say, psychoanalytic categories; and both sorts of formulations work in many contexts, in the sense that they provide insights, and allow us to generate further insights — whether or not they are actually valid “universally”). I have also long felt that one of the problems with evolutionary accounts of complex phenomena like human culture is that they commit the elementary logical fallacy of thinking that how a certain feature or trait originated historically determines the use and meaning of that feature today. But as Gould and Lewontin’s arguments about “spandrels” pointed out long ago, this need not be the case, and probably most often is not — many traits are non-adaptive byproducts of adaptations that occurred for entirely different reasons; and even directly adaptive traits are always being hijacked or “exapted” for different uses than those on account of which they originally evolved.

Flesch, unlike most of those who have tried to apply evolutionary arguments to human cultural contexts, takes full account of these complications and multiplications. The real justification of his use of ideas about costly signaling and “strong reciprocity” (or altruism that extends to the monitoring of the altruism of others), is that, in understanding the sorts of narratives that Flesch is interested in, they prove to be very useful indeed. The concepts that Flesch draws from evolutionary theory both elucidate, and are themselves in turn elucidated by, a wide range of familiar narratives, from Shakespeare to Hitchcock, and of characters in narratives, from Achilles to Superman. Even as Flesch provides a series of dazzling close readings, he uses these readings pretty much in the same way that he uses citations of biological research, so that Prince Hal and King Lear stand alongside peacocks and cichlids as exemplars of things like costly signaling and altruistic extravagance, and as subjects of our concern and fascination.

Flesch’s argument thus reflexively provides an account of both why the content of narrative moves us as it does, and of why the narrative form, as such, should be especially suited as a focus for meaningful emotional reactions. (I should note that, although Harold Bloom, in a blurb on the book’s back cover, praises Flesch for “giving a surprisingly fresh account of the workings of high literature,” and although the great majority of Flesch’s own readings and citations do in fact come from “high literature,” one of the great virtues of Flesch’s argument is that it applies equally well to “low” narrative forms (and that he also does cite these forms). The things that interest us in reading Shakespeare’s plays, or novels by Henry James and James Joyce and Marcel Proust, are pretty much the same things that interest us in reading stories about Superman, or Conan the Barbarian).

Beyond this, Flesch’s argument is noteworthy, and important, because of how it uses the tools of (usually reductive) science for determinedly nonreductive ends. Usually, the language of game-theory payoffs and cost-benefit calculations drives me crazy, because it is a hyperbolic example of what the Frankfurt School critics denounced as “instrumental reason.” The “rational choice” theory so prevalent these days among economists and political scientists idiotically assumes, against nearly all concrete experience, that human beings (and other organisms as well) make cognitive, calculated decisions (even if not consciously) on the basis of maximizing their own utility. More recently, some social scientists have sought to incorporate into their mathematical models the empirical evidence that people in fact respond emotionally, and non-rationally, to many situations. But most of this research has remained reductive, in that the calculus of probabilities and payoffs has remained at the center — the assumptions are still essentially cognitive, and calculative, even if emotions are admitted as factors that skew the calculations. Flesch is really the only author I have read who pushes these models to the point where they flip around, so that cognition is effectively subordinated to affect, rather than the other way around (“Reason is, and ought to be, only the slave of the passions,” as Hume — one of Flesch’s favorite theoretical sources — once wrote).

This is largely because of the way that Flesch defines his central concept of “altruism.” Drawing on both Hume and Adam Smith (his “moral philosophy” rather than The Wealth of Nations), as well as on contemporary biological game theorists and on the Zahavis, Flesch defines “altruism,” basically, as any other-directed action that is not driven by “maximizing one’s own utility,” and that indeed is pursued in spite of the fact that it decreases one’s own utility. This means that things like vengefulness and vindictiveness, not to mention Achilles valuing glory more than his own life, or Bill Gates dispensing his fortune in order that he may congratulate himself for being a great philanthropist, are also examples of altruism, in that they are other-directed even at a cost to oneself, and therefore they absolutely contradict the “utility-maximization” assumptions of orthodox economics and “rational choice” theory. As Flesch puts it epigrammatically, “the satisfactions of altruism,” like Gates’ self-congratulation, “don’t undercut the altruism itself. Satisfaction in a losing act or disposition to act is itself a sign of altruism… Pleasure in altruism doesn’t mean that you’re not an altruist. It almost certainly means that you are” (35).

This is useful for the way that it undercuts both the model of Homo economicus that is the default understanding of humankind in the current neoliberal consensus, and the cynicism that sneers at the very possibility of altruism, generosity, cooperation, and collectivity on the grounds that these are “really” just expressions of egotism. Of course, egotism is involved; how could it be otherwise? But as Flesch insists, this doesn’t prevent the altruism, or concern for others at one’s own expense, from being genuine.

Altruism is by no means an unconditional good, of course; in Flesch’s account, it allows for, and can lead to, not just an insane vengefulness, but also the kind of surveillance of people by one another that enforces social conformity and involves the persecution of anybody who acts innovatively, or merely differently. Nonetheless, the important point remains that we all act and feel in a social matrix, rather than as atomized individuals, and that people’s actions are not merely determined by the considerations of personal well-being (or at most, those of one’s genetic kin), but by a much broader range of social concerns and relationships and emotions (including vicarious emotional relationships with, or feelings about, strangers).

For this reason, even though Flesch states in his introduction that his aim is “to give an account… [of] why [narrative] should be as strange, complex and intellectual — as cognitive — as it is” (6), his arguments really contribute more to an affective approach to narrative than to a cognitive one. The tricky evolutionary arguments that Flesch works his way through are used in order to show how evolutionary processes — which in a certain sense, because they are based upon a competitive weeding-out of alternate possibilities under conditions of scarcity and stress, are necessarily “rational,” even though no actual rationality is involved in their workings — can nonetheless produce an outcome that is not itself “rational,” but instead involves extravagance, waste and “expenditure” (in the sense Bataille gives to this word), and that necessitates cooperation of some sort, rather than a continual war of all against all. And once the affects that drive us in these non-rational ways have evolved, they continue to have a life of their own (they may well be reinforced even when they are counter-productive; but they also may, in evolutionary terms, aid the survival of groups that adopt them or are driven by them, in contrast to groups that don’t: here Flesch draws upon the recent attempt by Elliott Sober and David Sloan Wilson to rehabilitate the notion of “group selection”).

This brings Flesch’s arguments in line with, and makes them an important contribution to, any attempt to think about social relations (and aesthetics as well) in terms that owe more to Marcel Mauss (with his complex notions of how gift-giving involves both gain and loss, both economic calculation and an openness to loss, both power/prestige and generosity) than to the currently hegemonic assumptions of neoclassical and neoliberal economics. Rather than Derridean musings on how an absolute gift is “impossible,” because there is always some sort of return, we get something more in line with Mauss’s (and Bataille’s) sense of how expenditure and potlatch, and other forms of gift-giving (including what might be called the “gift” of narrative, though Flesch conceives this much more complexly than Lewis Hyde, for instance, does) involve the intertwining of self-aggrandizing and altruistic motives, and allow a place in practice for the openings and ambivalences that both a rational-economic calculus and a deconstructionist, negatively absolutized logic would deny us.

I’ll conclude with one small additional comment, which is that Comeuppance is not really a book about “narrative theory,” even though it sometimes presents itself as such. Though it tries to delineate the “conditions of possibility” for us to enjoy, crave, and develop an emotional investment in fictional narratives, it is (quite properly) much more concerned with these affects and investments than it is with the structure of narrative per se. And this turns out to involve our relation to characters much more than our relation to narrative as such (even when Flesch considers the latter, he does it in the framework of the relation between the audience and the fiction’s narrator, including both the fictive narrator and the author-as-narrator). This seems to me to be in line both with Orson Welles’ insistence on the enigma of character as the center of our interest in film and other arts, and with Warren Ellis’ insistence that what he is really concerned with in the fictions he writes is the characters and the ideas; the plot is just a contrivance to convey those characters and ideas.

Rancière (2)

So… democracy.

Rancière doesn’t see democracy as a form of government, or form of State. It is something both more and less than that. States are all more or less despotic, including supposedly “democratic” ones. And non-State forms of authority tend to be based on other forms of unequal power relationships, with authority grounded in age (patriarchy), birth (aristocracy), violence and military prowess (I’m not sure of the name of this), or money and wealth (plutocracy). Our current neoliberal society combines the rule of Capital with the rule of bureaucratic States with their own levels of authority based upon expertise and guardianship of the “rights” of property or Capital. Even though we have a legislature and executive that are chosen by majority, or at least plurality, vote, our society is not very democratic by Rancière’s standards. The role of money in the electoral process, the fact that there are career politicians, the management of increasing aspects of our lives by non-political “experts” (e.g. the Federal Reserve), all militate against what Rancière considers to be even the minimal requirements for democracy.

To a great extent, Rancière uses the idea of “democracy” adjectivally (a society may be more or less democratic) rather than as a noun. For democracy is a tendency, a process, a collective action, rather than a state of affairs, much less an organized State. Democracy is an event; it happens when, for instance, people militate to change the distribution of what is public and what is private. In the US, the civil rights movement and (more recently) the alterna-globalization protests would be examples of democracy in action. Rancière rightly stresses the activity, which always needs to be renewed, rather than the result. This might be thought of, in Deleuzian terms, as a revolutionary-becoming, rather than an established “revolutionary” State, which is nearly always a disappointment (if not something worse). While I am inclined to agree with Zizek that State power often may need to be actively used in order, for instance, to break the power of Capital, I still find Zizek’s apparent worship of State forms and Party dictatorship reprehensible (it would seem that Zizek has never found an ostensibly left-wing dictator he doesn’t like — except for Tito and Milosevic). Collective processes should not be reduced to State organization, though they may include it. Chavismo is more important than Chavez (whereas Zizek seems to admire Chavez because of, rather than in spite of, his tendency to do things that allow his opponents to apply the cliche of “banana-republic dictator” to him). It is admirable that Chavez is using a certain amount of State power, as well as extra-State collective action, in order to break the power of Capital; but to identify a revolutionary process with its leader and authority figure is worse than insane.

But I digress. To value the process of revolutionary-becoming, as Deleuze does, and as Rancière does in a different way, rather than the results of such action, is not to give up on lasting change. It is rather to say that change continues to need to happen, as against the faux-utopia of a final resting place, an actually-achieved utopia (which always turns out to be something more like “actually-existing socialism,” as they used to say, precisely because it congeals when the process comes to a stop).

I need to be cautious here about assimilating Rancière too much to Deleuze and Guattari. I am only trying to say that Rancière’s notion of democracy gives substance to something that often sounds too glib and vague when Deleuze and Guattari say it. For Rancière, “democracy” means that no one person or group of people is intrinsically suited to rule, or more suited to rule than anyone else. Democracy means radical contingency, because there is no foundation for the social order. Democracy means absolute egalitarianism; there is no differential qualification that can hierarchize people, or divide rulers from ruled, the worthy from the unworthy. In a democratic situation, anybody is as worthy of respect as anybody else. This means that, for Rancière, the purest form of democracy would be selection by lot (with frequent rotation and replacement), rather than “representative” elections. Selection by chance is grounded in the idea that anyone can exercise a power-function, regardless of “qualifications” or “merit” (let alone the desire to rule or control; if anything, those who desire to have administrative or legislative power are the ones least worthy to have it — to the extent that we can make such a distinction at all).
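
Selection by lot is, not incidentally, the simplest of procedures to state. Here is a minimal sketch; the pool, the panel size, and the number of terms are placeholders of my own, not anything Rancière specifies.

```python
import random

def sortition(citizens, seats, terms):
    """Fill offices by lot, re-drawing the entire panel each term.

    No qualification, campaign, or desire to rule enters into the
    selection: anyone at all may find themselves exercising the
    power-function. Placeholder values throughout."""
    for term in range(terms):
        yield term, random.sample(citizens, seats)

citizens = [f"citizen-{i}" for i in range(1000)]   # a hypothetical polity
for term, panel in sortition(citizens, seats=5, terms=3):
    print(term, panel)
```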

It is unclear to me whether Rancière actually believes that a total democracy could exist in practice — as opposed to being an ideal to strive for, a kind of Kantian ethical imperative, something we must strive for to the utmost possible, regardless of the degree to which we succeed. (In my previous post, I was privileging both the political and the aesthetic at the expense of the ethical. Here I would add that Kantian morality is not ethics, but perhaps can be seen as the limit of ethics, the point at which it comes closest to politics).

But here’s the point. For Rancière, egalitarianism is not a “fact” (though we can and should continually strive to “verify” it), but an axiom and an imperative. That is to say, it has nothing to do with empirical questions of how much particular people are similar to, or different from, one another (in terms of qualities like manual dexterity or mathematical ability, or for that matter “looks” and “beauty”). Egalitarianism doesn’t deny the fact that any professional tennis player, even a low-ranked one, could effortlessly beat me at tennis, or that Rancière’s philosophical writings are far more profound than mine, or that I couldn’t pass a sophomore college math class. And egalitarianism doesn’t mean that somehow we all ought to be “the same,” whatever that might entail, genetically or experientially. What egalitarianism means, for Rancière, is that we are all intelligent speaking beings, able to communicate with one another. Our very social interaction means that we are on the same level in a very fundamental sense. The person who follows orders is equal to the person who gives orders, in the precise sense that the one who obeys is able to understand the one who commands. In this sense, Rancière says, equality is always already presupposed in any social relation of inequality. You couldn’t have hierarchies and power relations without this more fundamental, axiomatic equality lying beneath them.

This seems to me to be (though I presume Rancière wouldn’t accept these terms) a sort of Kantian radicalization of Foucault’s claim that power is largely incitative rather than repressive, that it always relies, in almost the last instance (i.e., up to the point of death), upon some sort of consent or acceptance on the part of the one being dominated. Without these fundamental relations of equality, it would not be possible for there to be elites, masters, bosses, people who tell other people what to do, and who have the backing or the authority to do this. So the question of equality is (in Kantian terms) a question of a communication which is not based upon the quantitative rankings that are imposed by the adoption of a “universal equivalent” (money as the commodity against which all other commodities are exchanged) — therefore this, too, relates to the Kantian problematic that I discussed in my previous posting on Rancière.

Of course, in our personal lives, we never treat everyone else with total equality. I love some people, and not others. I am always haunted by Jean Genet’s beautiful text on Rembrandt, where he mourns the way that Rembrandt’s revelation of the common measure, or equality, of everybody means, in a certain register, the death of his desire, the end of lusting after, and loving, and privileging, one individual in particular. But the power of Genet’s essay resides in the fact that, in the ultimate state of things, this universal equality cannot be denied any more than the singularity of desire can be. And that is why, or how, I think that the lesson Genet draws from Rembrandt is close to the lesson on equality that Rancière draws from, among others, the 19th-century French pedagogue Jacotot (the subject of Rancière’s book The Ignorant Schoolmaster).

Democracy, or egalitarianism, is not a question of singular desire; but it is very much a question of how we can, and should, live together socially, given that we are deeply social animals. Which is why I see it as a kind of imperative, and as something that we always need to recall ourselves to, amidst the atomization — and deprivation for many — enforced by the neoliberal State and the savage “law” of the “market.” To that extent, I think that Rancière is invaluable.

There is something I miss in Rancière, however, and that is a sense of political economy, as opposed to just politics. This absence may have something to do with Rancière’s rejection of his Althusserian Marxist past. He is certainly aware of the plutocratic aspects of today’s neoliberal network society. He doesn’t make the mistake of focusing all his ire on the State, while ignoring the pseudo-spontaneity of the Market and its financial instruments. But he never considers, in the course of his account of democracy, the way in which economic organization, as well as political organization, needs to be addressed. Here, again, is a place where I think that Marx remains necessary (and also, as I said in the previous post, Mauss — as expounded, for example, by Kevin Hart). Exploitation cannot be reduced to domination, and the power of money cannot be reduced to the coercive power of the State or of other hierarchies. Aesthetics needs to be coupled with political economy, and not just with politics. So I still find a dimension lacking in Rancière — but he helps, as few contemporary thinkers do, in starting to get us there.