Afterparty – Daryl Gregory

May 4th, 2014

Daryl Gregory’s Afterparty is a near-future science fiction thriller about designer drugs — specifically designer neurochemicals. It seems to be set about twenty years from now, with a flashback to events ten years or so from now — enough time for its scientific vision to be plausible. In terms of plot, it’s an extremely well-done thriller; but I agree with Warren Ellis that what matters in fiction of this sort isn’t the plot — which is there to get us involved — so much as what it gets us involved in, which is the characters and the ideas. The characters in Afterparty are all pretty compelling, and all pretty much damaged, as a result of the neurochemicals they have ingested — which is to say, they are all affected by, and embodied symptoms of, the novel’s ideas, which are themselves made real in the form of the drugs that the novel describes. The book’s main actor, if I can put it that way, is a drug called Numinous on the street (though it has other, more official, names). It was developed by the protagonist, Lyda Rose, and her collaborators in a small start-up; the idea was to make a drug that would enhance feelings of well-being by promoting the growth of neurons in the temporal lobe. The drug works extraordinarily well on mice; but when human beings take it, it turns out that the way it enhances feelings of well-being is by generating a hallucination of God. The drug’s stimulation of the temporal lobe is similar to what happens in cases of epilepsy. The user experiences the vision, voice, and feeling of a Deity who has a close personal relationship with him or her, assuring the user that he or she is loved and cared for, and has a place in the cosmos. Each person has a different vision of God, but these Gods all appear to them as absolutely physically real, despite being invisible and inaudible to anyone else.
(Though the novel does not reference this in particular, I was immediately reminded of Julian Jaynes’s thesis that, as recently as Homeric times, people literally heard voices in their heads, which really were one brain hemisphere “speaking” to the other, but which they took to be the voices of gods.)

One problem with the drug is that you become emotionally dependent upon it — if you can’t get it anymore, it feels as if God has abandoned you — which is extremely depressing and can lead to suicide. Another problem, it turns out, is that if you have an extreme overdose of the drug, the God hallucination becomes permanent. For most of the novel, Lyda Rose is torn between her absolute and unshakeable emotional conviction of the truth of her personal God, and her knowledge that this is just a neurochemical effect (backed up with her Dawkins-esque intellectual certitude that religion can never be anything more than such an effect). She argues with her personal God, telling her that she (her God-version is a female angel) is nothing more than a mental projection; but she also cannot do without the help and reassurances given to her by this God (who tells her at one point that Dawkins and Hitchens have been sent to Hell), and at times even experiences her God’s actions as physically efficacious.

Anyway, all this is tied in — as how could it not be? — with business and political implications. Lyda and her partners in the startup quarrel about selling their company (which seems to be on the verge of success due to Numinous) to a major pharmaceutical firm. It is at a party celebrating the sale, which will make them all millionaires, that the partners are all blasted by an overdose of Numinous. They are all pretty much fucked up by their permanent condition of unasked-for religious ecstasy. There’s also a baby who is dosed with Numinous in the womb. The present-time events of the novel take place ten years later; they have to do with picking up the pieces of shattered lives, and also with the ongoing question of major corporations peddling these drugs despite their questionable side effects.

As the plot advances, we encounter other victims of other designer neurochemicals. There’s Clarity, a drug that enhances your ability to recognize patterns when sifting through vast quantities of data, by stimulating neural growth in the prefrontal cortex. This drug is taken, with official encouragement, by analysts working for the NSA. The trouble is that Clarity also foments paranoia, by leading the user to infer patterns that do not actually exist. There’s also a drug used to treat victims of post-traumatic stress disorder; when taken in high enough doses, it reduces qualms and emotional difficulties enough that the user can act as a remorseless contract killer. All these drugs have their antagonists, which, however, have equally bad side effects. When Lyda is hospitalized, her religious visions are neutralized by anti-epileptics; the paranoid effects of Clarity are nullified by anti-psychotics that make it difficult for the patient to recognize any patterns at all (including the shapes of bodies and familiar objects).

All of this highlights the radical contingency of our mental states; the novel also contains discussions of what “free will” can possibly mean under such conditions. In a way, Afterparty presents us with a 21st-century version of the old Cartesian dilemma. Even in an age where we have definitively discredited any dualism, and established beyond doubt that the mind is entirely physical — because we can in fact manipulate it physically — I am still left with the actuality of inner experience, which is full and efficacious regardless of my intellectual knowledge that it has no objective validity, but is generated entirely by neurochemical processes. It may well be, the novel suggests, that the experiences generated by Numinous make us both more capable and more empathetic, and therefore better people (this would remain the case even if Dawkins and Hitchens are right about the pernicious effects of organized religion). Such a possibility will not seem strange or ridiculous to anyone who has taken LSD or other mind-altering chemicals. Afterparty doesn’t give us any political or philosophical answers; but it suggests that the age of brain manipulation is rapidly approaching, and cannot be averted; and it at least holds out the possibility that brain self-manipulations might be workable from below, on an individual or microsocial basis, rather than only being imposed from above, by government security agencies and large corporations. There are no panaceas here, and no seamless alterations of (either inner or outer) reality without unforeseeable and uncontrollable side effects; but Gregory’s vision is not as grim as that of Scott Bakker in Neuropath.


April 23rd, 2014

Next week, I will be speaking in New York at this conference. For reasons I do not understand, they have asked for all the papers to be submitted in advance, and say they will give them out to whoever comes to hear the talks. To my mind, this makes the conference itself superfluous — why sit through a bunch of talks, when you can read them much more easily and quickly? But whatever.

In any case, here is my talk on “The Aesthetics of Workflow.” My initial plan was much more ambitious — in addition to speaking about Anthony Mandler’s video for Rihanna’s “Disturbia”, I was also going to discuss two other music videos: Grant Singer’s video for Sky Ferreira’s “Night Time, My Time”, and Tom Beard and FKA twigs’ video for FKA twigs’ “Papi Pacify”. Part of the idea was to discuss videos for women singers at three levels of the music industry: superstar (Rihanna), emerging and (so far) midlevel artist (Sky Ferreira), and little-known independent (FKA twigs). All three videos are powerful and challenging, but I think that the different economic scales make for different modes of expression as well.

However, this was not to be. So far I haven’t had enough time to write about these other two videos. And even if I had, I would end up with a talk that would probably be two hours long — something much better read than delivered live. So I will leave these two additional videos for later consideration. (I should add, however, that I showed “Night Time, My Time” to my Intro to Film students. One of them remarked, in response, that he would play this video at the end of a party, when he wanted to get everyone to leave. I take this as a strong compliment to the video — and I hope that Ferreira and Singer would take the comment in this spirit as well).

As for what I have written already, I feel like I haven’t really gotten to the bottom of this. There’s a lot more to say about Rihanna; and, as always when I write about music videos, I feel self-conscious about my inability to say enough about the music in formalist terms.

But anyway, here’s the text of my talk, in two parts:


“Workflow” is a term that is increasingly being used today in digital audiovisual production. In the words of the film critic Ignatiy Vishnevetsky:

“Workflow” describes the relationship between production and post-production — shooting and editing. A workflow encompasses everything from the hard drives on which image data is recorded to the final delivery of the film for distribution. Workflow theories emphasize flexibility and maneuverability.

Vishnevetsky’s point is not just that the term “workflow” is increasingly being used by film- and videomakers themselves, in order to designate how they employ digital tools; but also, and just as importantly, that the new practices designated by the term need to be analyzed by critics and theorists of audiovisual media. Recent developments in digital technology have not only led to radical changes in film and video production methods; they have also led to new audiovisual formal structures and styles — or more loosely, to a new sort of “look and feel” for audiovisual artifacts.

Of course, this workflow “look and feel” is far from universal; and there is no necessary correspondence between technologies of production and final products. Neoliberal economic activity in general is organized on the basis of supposed “flexibility and maneuverability.” Even when they are making works that still look and sound like older, more traditional productions, film- and videomakers employ workflow methods in order to save time and money, cut corners, and respond quickly to frequent demands from clients. There are also artists — Michel Gondry is a good example — who exhibit a laudable streak of stubborn perversity: they go out of their way to produce works in which digital-seeming effects are in fact created through older, analog means.

Nonetheless, the new technologies and production practices of workflow offer new affordances to film- and videomakers. They open the way for new expressive possibilities — even though these possibilities are often not taken up. As Stanley Cavell puts it in relation to film, “the aesthetic possibilities of a medium are not givens…. Only the art itself can discover its possibilities, and the discovery of a new possibility is the discovery of a new medium.” Similarly, the potential expressive uses of today’s new digital technologies cannot be known in advance; they can only be discovered or invented, and then elaborated, in the course of actual audiovisual production. At their best, the technologies and practices of workflow lead to a new audiovisual aesthetic.

Vishnevetsky refers to “workflow” mostly in the context of digital cinema: his article explicitly discusses recent movies by David Fincher and Steven Soderbergh. But I am more interested in looking instead at contemporary music videos. There are several reasons for this. On a personal level, working with music videos allows me to combine my enthusiasm as a pop music fan (despite my lack of musical training) with my more formal concerns as a scholar of audiovisual media. On a more distanced level, music videos fascinate me because they are so complexly overdetermined. Something like Italian operas in the 19th century, or Broadway and MGM musicals in the 20th, music videos are a hybrid, impure form — and indeed a necessarily compromised one. They almost never have the status of independent, self-subsistent works. Even more than other pop-cultural forms, they are subject to the whims of marketers and publicists. They are based upon pre-existing musical material, which usually wasn’t created with them in mind. They most often serve the derivative purposes of advertising the songs for which they were made, and of contributing to the larger-than-life, transmedia personae of their performers. In addition, music videos frequently remediate older media contents: alluding to, sampling and recombining, or unambiguously plagiarizing material from movies, television, fashion photography, and experimental art. Music videos thus always remain illustrative and externally referential — the cardinal sins according to modernist theories of art.

And yet, despite all this — or is it rather because of all this? — music videos are often deeply self-reflexive, and strikingly innovative in form and technique. They push the latest production technologies to their limits; and they experiment with new modes of visualization and expression. Precisely because their sonic content already comes ready-made, and because they are of such limited length, they do not need to conform to older modes of audiovisual organization. They are free to ignore Hollywood mainstays like narrative structures and continuity conventions. Even before the rise of digital sound and image technologies, Michel Chion noted that music videos exhibited “a joyous rhetoric of images,” made possible by the fact that “the image is conspicuously attaching itself to some music that was sufficient to itself.” Music videos are strictly speaking superfluous — and this is what gives them a space for free invention. Chion adds that, in many music videos, “the rapid succession of shots creates a sense of visual polyphony and even of simultaneity, even as we see only a single image at a time.”

If this was already the case when Chion was writing in 1990, it is even more fully so today, in the age of digital workflow. As Vishnevetsky notes, the chief characteristic of the workflow aesthetic is that it blurs the line between production and post-production. Of course, this makes it easier for film- and videomakers to change their minds, and to recast all their material even at the last moment. But workflow practices also have ontological consequences, because they erase the distinction between those aspects of audiovisual material that are actually placed before the camera and sound recorder, and those that are subsequently invented, added, or altered on the computer. The very nature of audiovisual construction and reproduction is thereby put into question.

In other words, the whole issue of cinematic realism, so fervently debated on both sides for much of the twentieth century, loses its relevance in the age of workflow. It no longer makes sense to distinguish, as André Bazin did, between a self-subsistent reality that is indexically captured by the mechanistic recording apparatus of the cinema, and an “image,” defined as “everything that the depiction of a thing on the screen can add to the thing itself.”

Rather, the real itself is always in production — just as Deleuze and Guattari already intimated in the 1970s. Sonic and visual material is continually being worked and reworked: in the physical spaces before the camera and sound recorder, in these mechanisms’ own processes of capture and transmission, in the digital transformations accomplished through the computer, and in our own subjective acts of perception, reception, and synthesis. All of these are equally “real”: they are best thought of as particular stages in the never-ending adventure of materials. Thanks to the recent digital technologies epitomized by workflow, these diverse stages of actuality now interpenetrate one another more than ever before. This does not mean that Bazin has been discredited by the non-indexical nature of digital audiovisual capture. To the contrary, he is more in the right than he even knew: “the image may be out of focus, distorted, devoid of colour and without documentary value; nevertheless, it has been created out of the ontology of the model. It is the model.”

When I teach Introduction to Film (which I do every semester) much of my effort is spent explaining to students the basic formal structures of mise en scène, cinematography, and editing. These can be understood as, respectively, that which is presented before the camera, that which the camera itself does, and that which is subsequently assembled out of the material recorded by the camera. Mise en scène, cinematography, and editing are crucial categories for film studies, because they refer at once to operations performed in the course of making the film, and to formal aspects of the completed film as it is experienced by its audience.

Now, filmmakers have often played fast and loose with these categories, exploiting their ambiguities and loopholes. Long takes by Hitchcock and Welles, for instance, work to express through camera movement and framing alone what usually needs to be conveyed by means of editing. Godard’s calculated violations of continuity editing rules depend, for their effect, upon our unconscious expectations that these rules will be maintained. But if classical filmmaking established such basic formal categories, and modernist filmmaking troubled their boundaries, the most adventurous recent audiovisual production seems to dispense with them altogether. It is not just that there are new ways of putting together moving image sequences during post-production, but also that the resulting sequences look and sound different than heretofore, and are organized by an entirely different logic. For instance, the rapid editing of many music videos, and of “chaos cinema” action sequences, bears a certain formal resemblance to 1920s Soviet montage; but the aesthetic aims of contemporary music video directors, no less than their technological means of production, are quite distant from those of Vertov and Eisenstein.

Contemporary music videos are quite strikingly different, not just from traditional cinema, but even from the “MTV-style” videos of the genre’s first decade, the 1980s. Carol Vernallis notes, for instance, how our experience of color in music videos, and of the textures of such diffuse substances as “dust, water, smoke, and clouds,” has been transformed by the use of DI (digital intermediate), a process that only came into common use after 2005 or so. Now that color and texture can be controlled and transformed on a pixel-by-pixel basis, Vernallis says, “each element is marked off so clearly it is almost as if we were examining the video’s detail through a magnifying glass.” Digital compositing already allows for image juxtapositions that are simultaneous, instead of (as in radical cinematic montage) sequential. But DI raises this multiplicity to a higher power, rendering for us something like the perception of machines, or of insect-style compound eyes. Vernallis adds that DI also allows video directors to do things like “cut quickly” from “an extreme wide-angle shot” to “an extreme closeup,” or “cut three or four fast shots around the face at well-judged off-angles” (here she refers in particular to the work of Jonas Åkerlund). These fast edits are generally “hard to see”; but they create a subliminal sense of “deeper immersion” for the viewer.

The new workflow aesthetics affects audiovisual production and reception alike; it works equally on the level of the “subject,” and on that of the “object.” On the one hand, it simulates and stimulates new modes of subjective perception; on the other, it produces new configurations of the objects being perceived.


I turn now to a music video that epitomizes the aesthetics of workflow. Anthony Mandler’s 2008 video for Rihanna’s song “Disturbia” is so densely layered and cluttered and compressed that it seems to require a whole new formal language to do it justice. The proliferation of images in the video definitely gives us a sense of what Chion calls “visual polyphony.” What is more, we do not see just “a single image at a time”; the video is literally polyphonic — or better, polyoptic — in that images are continually being layered transparently over one another. However, as befits the song, this “visual rhetoric” is oppressive rather than “joyous.” It suggests visual overload, more than promiscuous multiplication.

It would be impossible to do a conventional shot breakdown of this video; images proliferate and propagate in ways that do not conform to the traditional logic of cinematography and editing. At times, Rihanna’s face or figure detaches into two images, one of which jitters and shakes as if it were trying to break free from itself — or as if the camera itself were having some sort of seizure. There are also quick movements in and out of focus. Sometimes images are doubled by layering; usually a detail like Rihanna’s face in closeup is placed semi-transparently on top of a broader scene. The overall effect is at once unstable and claustrophobic.

At still other times, barbed wire and spider web patterns are splayed across Rihanna’s skin. The video’s images fade into one another as well as appearing on top of one another. There is also a lot of rapid cutting, without fades, not only between scenes that show the singer in different locales and wearing different costumes (as is common in music videos), but also among fragments of individual scenes or set-ups. Almost everything is presented frontally (as is also a common practice in music videos), but the way that the images are both fragmented and juxtaposed nonetheless disrupts any stable sense of perspective.

The video’s setting looks like a Victorian insane asylum. This impression is reinforced by the sound effects — dissonant arpeggios accompanied by creaking sounds — in the first thirty seconds of the video, before the song proper begins. My students found the video’s visual decor reminiscent of that in the computer game and movie series Silent Hill; but I think this is a matter, not of homage or direct imitation, but simply of the fact that both works draw upon the same Victorian-asylum imagery. In the video, in any case, Rihanna appears in numerous roles: she is both the director of the asylum, and a number of imprisoned patients.

In the former role, she is seated in an enormous rotating chair. She wears a sort of Victorian bondage wear, with black dress and knee-high boots, and long nails in black nail polish. She fans herself while turning in the chair, or walks about languidly and pats the head of a docile prisoner. Rihanna is also surrounded by strong, menacing figures. An enormous man with an eyepatch and a black-and-white striped prison uniform turns some large, creaky mechanical wheels; another large man, shirtless, beats out the song’s brutal rhythm on two enormous drums. While sitting in the chair, Rihanna is also assaulted by her own double: a feral female figure, on all fours, with wrists bound together and a punkish shock of blonde hair, who snarls and lunges at her, like a bad pet.

Meanwhile, Rihanna takes on multiple guises in her role as a patient-cum-prisoner in the asylum. She is caged in a jail cell, with straight blonde hair and empty zombie eyes. In other sequences, she pulls furiously upon chains that bind her ankles together and rivet her arms to a peg in the floor; or she jerks about violently while seemingly confined within an empty bedframe. Another sequence suggests slavery, as she stands chained to a pillar, with a collar around her neck, hands bound behind her back, and grease smeared on her naked shoulder. In the latter part of the video, we see her hanging immobilized in a corridor, her lower arms stuck inside the walls, and a tarantula on her upper arm, near her shoulder. We also see her splayed out, behind latticework, making love to a life-sized male mannequin; and in other shots, wearing what looks like an Indian headdress.

Most of these images are composed in dark tones, often approaching a monochrome dark blue. The darkness is only relieved by the highlighting of Rihanna’s face. Sometimes, however, there are horizontal bars of illumination or patches of color at the back of the set. In addition, the relatively dark images are continually interrupted by vertical streaks of light that seem to erupt out of the screen; or else by flames in the foreground, that ostentatiously do not fit into the same space as the rest of the image. One recurring sequence, however, is violently illuminated, in contrast to everything else in the video: the screen is bathed from above and behind in a glaring white and orange light. Rihanna’s prone body is held up into the light by a group of backlit dancers, perhaps suggesting a human sacrifice.

The song “Disturbia” is about mental anguish: the lyrics suggest paranoia, unbearable compulsion, and other such ugly feelings. “A disease of the mind,/ It can control you,” Rihanna sings, “I feel like a monster.” And again, in a line that seems to set forth the visual strategy of the video: “It’s like the darkness is the light.” I am inclined to give this line its full weight, as a statement of via negativa mysticism. The song is an affective expression, but it isn’t the representation of a mental state. For the very condition of feelings like these is that they cannot be represented, or brought into the light of full consciousness. Rihanna — or her persona in this song — suffers precisely from the absence, and the impossibility, of illumination. And this is the ironic situation that the song and video strive to “represent.”

It is worth noting that “Disturbia” was co-written by Chris Brown, who passed it on to Rihanna after originally having planned to record it himself. Even though the song and video were released before the horrific incident in which Brown beat up Rihanna, it is hard not to regard “Disturbia” in the context of this subsequent history. The lyrics do not explicitly state any cause for the compulsion and paranoia that they express; but these symptoms can easily be interpreted as the effects of jealousy or a broken relationship. Robin James, writing specifically about Rihanna’s 2012 album Unapologetic, discusses the singer’s embrace of “melancholic damage,” and her absolute refusal of any narrative of recovery and resilience in the wake of the assault. I think that such a “damaged” and willfully unapologetic stance is already evident in “Disturbia,” both the song and the video.

“Disturbia” is undeniably a very catchy song, and it was a big dance hit when it was first released in 2008. The song was even described by one critic at the time (Alex Fletcher) as “a fun-packed electro treat filled with sizzling beats and crazy vocal effects.” Another critic (Fraser McAlpine), despite admiring “the spooky, gothy sounds in the first 30 seconds of the video,” and praising what he calls “Rihanna’s icy whine,” nonetheless complains that overall “Disturbia” is “FAR too chirpy a tune to suit the parade of Marilyn Manson fetish-wear and leathery bedlam which accompanies it” in the video.

Nonetheless, I think that the video responds to, and indeed brings out, disturbing undercurrents that already exist in the song. “Disturbia” is dominated by a harsh and pounding beat, which lacks the brightness and bounciness of similarly repetitive bass lines in electro tunes (such as that in “Blue (Da Ba Dee)” by Eiffel 65, which McAlpine compares to that of “Disturbia”). The harshness of the song’s rhythm is reinforced, not only by the “Bum bum be-dum bum bum be-dum bum” repeated refrain, but even more by the video’s choreography. Rihanna and her surrounding figures dance with spastic jerks of the head or of the whole body, emphasizing the beat’s relentless, inorganic regularity. There is no smooth or graceful movement here; everything is shaped by violent constraints.

This sort of dancing works in tandem with the twitching of the camera, the image juxtapositions, and the aggressive editing. All in all, Mandler’s metamorphosing images closely respond to the musical elements of the song: both its shifts of register and its propulsive force. But the hyperactive image track also leads us to notice different aspects of the sound than we might have done otherwise. In particular, it draws our attention to the song’s treatment of Rihanna’s voice, with its Auto-Tune and vocoder-like effects, and its surprising changes of vocal register. In short, we don’t hear the music of “Disturbia” in the same way when we watch the video as we would without it.

The video for “Disturbia” uses the resources of workflow aesthetics in order both to track the song’s rhythmic intensities, and to follow the logic of the via negativa outlined by the song’s lyrics. In other words, the video “visualizes” an emotional state that is first presented acoustically, and that — strictly speaking — is not susceptible to visualization. The video assaults our senses with too thick an agglomeration of too many images. And yet these all tend to collapse into a black hole of negativity, or a night in which all cows are black. The ultimate effect of the video’s profusion is not psychedelic plenitude, but rather an amplification of darkness and obliqueness.

Rihanna clearly has a place in the genealogy of Afrofuturism. She is a robo-diva, as Robin James describes her, picking up on a term originally coined by Tom Breihan. James sees Rihanna’s self-robotization as a gesture of resistance: a way of refusing white patriarchy’s normative identification of itself with the human per se. In opposition to this, Stan Hawkins “perceive[s] Rihanna’s subjectivizing of the posthuman body as primarily nonemancipatory.” Actually, these positions are not necessarily in contradiction with one another, since the figure of the robot in Afrofuturism has long pointed in both directions at once: towards slavery, but also towards a “machine mythology” that rejects “the human” as “a pointless and treacherous category” of oppression (Kodwo Eshun).

But the technologies of workflow, as evidenced in the video for “Disturbia,” operate, I think, in something of a transversal register, in contrast to either of these directions. For what is at issue is not just “Rihanna’s subjectivizing of the posthuman body,” but also a correlative desubjectivizing of what it might mean to perceive her body. Workflow aesthetics, in both its production and its reception, operates on a fine microlevel. It allows for the most minute perceptual discriminations: one hue from another, one grain of sound from another, one pixel or one millisecond from another. These distinctions, like Leibniz’s petites perceptions, fall beneath the threshold of human recognition and voluntary action. We feel them, we are affected by them; but we cannot grasp them or comprehend them. In this way, an inhuman, machinic mode of experiencing is substituted for our own. Pop music is always about the most basic human emotions, at least as they are validated within our culture: love and sex, romance and rivalry, hatred and jealousy, ecstasy and pain. But insofar as the medium is the message, we are now experiencing even these conditions in ways that extend beyond and beneath our subjectivity.

Only Lovers Left Alive

April 10th, 2014

I have now watched Jim Jarmusch’s new film, Only Lovers Left Alive, several times. And I can’t stop thinking about it. I consider it the best film that Jarmusch has ever made. Only Lovers Left Alive is a vampire movie, but not in the way you might think. There is no onscreen violence, and no sense of transgression or damnation. It has a certain dark humor, but it is devoid of the dumb facetiousness that has sometimes annoyed me in Jarmusch’s movies in the past. This is largely a reflective, actionless movie.

[Warning: ample SPOILERS in what follows]

Adam (Tom Hiddleston) and Eve (Tilda Swinton) are a vampire couple who have been together for centuries — although they live in separate parts of the world, Eve in Tangier and Adam in Detroit. At one point we see a wedding photo, dated 1868; though they both seem to have had extensive adventures before then. Adam is supposed to be about 500 years old (according to interviews; this is never specified in the film). He remembers hobnobbing with great scientists and poets (like Byron, whom he nonetheless describes as a “pompous ass”). Eve’s memory goes back further; she is 3000 years old (according to interviews), and refers explicitly to “the Middle Ages, the Tartars, the Inquisitions.” But Adam and Eve are civilized and refined — “this is the bloody 21st century,” not the “fucking 15th” — and so they have given up their ancient habits of predation. This civilized restraint is also a response to the fact that most human blood these days is “contaminated.” (The reason for this contamination is not made clear; environmental pollution and recreational drugs are both referred to, but AIDS is never mentioned). Instead, Adam and Eve secretly get their blood from hospitals: “the really good stuff,” pure O-negative. They drink it sparingly, in small cocktail glasses, like a fine liqueur.

Adam and Eve each have a moment when they see an ordinary human being bleeding (Adam sees someone receiving medical care, and Eve someone who cuts his finger while opening a juice can). In both cases, they stare avidly for a moment, but then restrain themselves and look away. In any case, it’s only when they are about to consume blood that their fangs come out and they adopt feral expressions. After drinking, they slip into a satisfied stupor, a strung-out state of bliss. Human blood is their only sustenance; it nourishes them, but also gives them a heroin-like fix.

Only Lovers Left Alive is resolutely and self-consciously old-school. The opening shot (aside from the title of the film, printed in a Goth-heavy metal-Germanic font, over a logo of time-lapse revolving stars) shows a 7-inch 45 rpm vinyl record playing on a turntable. The song is “Funnel of Love,” sung by Wanda Jackson, and originally released in 1961. The shot of the turning record is cut together with, and sometimes faded in and out over, separate shots of Eve and of Adam, taken from the ceilings of their respective homes; the camera rotates over their bodies with the same movement as the turntable. Adam and Eve are both lying prone, with their eyes closed, having apparently drunk their fill.

“Funnel of Love” is a crucial aesthetic indicator. The song is a highly stylized one, but it is about being sucked into a romantic abyss: “Here I go, falling down, down, down,/ My mind is a blank,/ My head is spinning around and around,/ As I go deep into the funnel of love.” It was originally released as a B-side, but (as Wikipedia tells us) it has since become a favorite of r&b and country music connoisseurs. And Jarmusch’s vampires are nothing if not old-school connoisseurs. Adam and Eve are presented, not quite as aging hipsters, but rather as ageless ones. They have grown self-reflective in their boredom, having experienced all they are capable of already. I found myself identifying with them strongly in their used-up agelessness: I am, after all, someone who doesn’t yet feel decrepit, but who has officially, explicitly joined the realm of the old — I am only a year or so younger than Jarmusch himself. Adam and Eve are long-term, self-reflective aesthetes, who stand in the broken-down world of the film as the last representatives of aesthetic sensibility or “taste.”

Adam is a musician, evidently an old rock and roll legend. (In earlier times, we are told, he gave Schubert the Adagio for the composer's last composition, the String Quintet). Adam collects vintage guitars and old vinyl, and composes dirge-like electronica, or what he calls "funeral music." Eve is not a creator, but she loves old things, and can tell with a touch the exact year in which any artifact was made. She also loves the literary classics, which she is able to read at superhuman speed in at least seven languages. Eve's best friend in Tangier is the courtly and elderly Christopher Marlowe (John Hurt), the Elizabethan playwright who we learn is also a vampire. Marlowe apparently faked his own death in a barroom brawl, and went on to write all of the plays and poems attributed to Shakespeare, as well as his own. Eve and Adam are both prone to quote beautiful Shakespeare lines, which they savor with a contented sigh of "Marlowe…."

The movie as a whole seems to share its protagonists’ old-school sensibility. Jarmusch frames his shots carefully, and his editing is largely classical — with none of the rapid cutting and other “post-continuity” traits that I have written about elsewhere. The film has a reserved attitude towards recent digital technology. Eve has an iPhone, but Adam largely sticks with analog methods. He favors both acoustic instruments and older electronics, and records his music on reel-to-reel tape. Even his digital computer seems to be obsolete (it looks to me like it might be a clamshell iBook from 1999), and when he Skypes with Eve, he feeds the signal to an aging television. All of Adam’s electronics are basic DIY: he stays off the grid by getting his own electricity through Tesla-style wireless energy transmission. The obsession with Tesla, like that with somebody else having written Shakespeare’s plays, indicates a kind of “eccentric” aristocratic sensibility, one that stands self-consciously apart from the presumed vulgarity of the mainstream. And Adam and Eve have nothing but scorn for the “zombies” — which is how they refer to non-vampire humanity.

The movie’s plot (such as it is) centers on Adam and Eve’s long-term relationship, and their evident comfort level with one another. They are living apart, but remain in frequent contact. Adam feels despondent about the state of the world, and considers suicide. Eve has a cooler, longer-term view; no matter what happens in the world of the “zombies,” she’s seen it all before. Eve blames Adam’s suicidal tendencies on his having hung around “with Shelley and Byron and some of those French assholes” (which is one of my favorite lines in the movie). In order to calm him down, she comes to Detroit. Their attitude towards one another is deeply affectionate, but courtly and restrained. Adam greets Eve at the door of his house (one of those old, broken-down Detroit mansions) with a “my lady…”, and takes her hand to escort her over the threshold. The only indication of their having sex is an overhead shot of them lying nude on their sides, facing one another, asleep and nearly motionless; the camera pulls back slowly. Everything about them is careful, slow, and restrained.

Adam takes Eve on nighttime drives through the deserted ruins of Detroit. They pass through empty neighborhoods, with lights gleaming in the distance. They visit landmarks like the Packard plant, and the old Michigan Theater in downtown Detroit (now used only as a parking garage). These sequences are lovely and poetic; but people like myself who actually live in Detroit might well feel that they partake a bit too much of "ruin porn." You wouldn't know from the film that anyone still lives in Detroit, aside from the "rock 'n roll kids," who sometimes drive up in front of Adam's house, hoping to catch a glimpse of the musical recluse.

In spite of my misgivings about the movie’s misleading (albeit beautiful) presentation of Detroit, I can’t help finding Jarmusch’s vision deeply attractive. It’s romantic, and not devoid of a certain crucial negativity (there’s a reason that vampires only go out at night). But it pulls back from the extremity of youthful romanticism, as Adam’s morbidity is tempered by Eve’s pragmatism as a survivor. When Adam says that Detroit is empty because “everybody left,” Eve replies that Detroit will rise again and bloom, “when the cities in the south are burning.” For her, even the catastrophe of global warming will not put an end to everything. At the age of 60, at any rate, I am seduced by the film’s combination of yearning and melancholy and romantic refusal of the governing order with a determination to survive, and even flourish, nevertheless.

But of course this idyll cannot last. Everything is upset when Adam and Eve get a visit from Eve's kid sister Ava (brilliantly played by Mia Wasikowska). Ava embodies all the energy — and indeed cheerful vulgarity — that Adam and Eve have evidently outgrown. She's a Los Angeles party girl who just wants to have fun, and makes no attempt to curb her unbridled appetite. Much to Adam's disgust, she carelessly handles his vintage musical instruments, gorges herself on their otherwise carefully-rationed-out O-negative blood, watches kitschy vampire videos on TV, leaves her stuff all over the place, and drags the three of them out to a nightclub to hear live music. Although she is a vampire and not a "zombie," she stands for all the lowest-common-denominator popular culture that Adam and Eve so disdain. The last straw is when she kills and drinks the blood of Ian (Anton Yelchin), Adam's go-to person and sole contact in the music world. "He was just so cute," Ava tells Adam and Eve, that she couldn't resist consuming him.

Unsurprisingly, Ava feels ill after drinking Ian’s blood. “What did you expect?”, Eve snaps; “he’s from the fucking music industry!” There is something drily hilarious about this exchange. And our empathy with Adam and Eve is such that we are forced to feel that they are right to kick Ava out. It’s upsetting how their lives are thrown out of kilter as a result of Ava’s visit and Ian’s death. Nonetheless, when the departing Ava calls them “condescending snobs,” she has a point. She has exposed the hollowness at the heart of their exquisite lifestyle. Ava is only around for about 15 minutes of screen time in a 2-hour-long movie; and yet the film wouldn’t work without her. It would be unbearably heavy and solemn: in the same way that Wim Wenders’ Wings of Desire would be unendurable if not for Peter Falk.

After this, the denouement of the film comes fairly quickly. Adam and Eve dispose of Ian’s body, but still conclude that they need to flee Detroit. They return to Tangier, where they find that Marlowe — Eve’s only source for hospital blood, as well as her closest friend — lies dying, as a result of drinking tainted blood. Even the hospitals cannot be relied upon any more. With no supply, Adam and Eve are weak from blood withdrawal; she can handle it a bit better, but he can barely stand. “We’re finished, aren’t we?”

In this predicament, Adam and Eve have nowhere left to go; the film itself has nowhere left to go. Jarmusch offers us an epiphany, and then a potential resolution. The epiphany is an aesthetic one. On the verge of collapse, Adam wanders toward the open door of a cafe, and witnesses a performance by the great Lebanese singer Yasmine Hamdan. It’s an amazing performance, and the viewer can only share Adam’s own amazement. The song offers us a fresh beauty, different from any of the music we have heard so far in the film (the r&b that Adam plays on vinyl, Adam’s own funereal electronic beats, the noise-punk performance by White Hills that we hear at the club). My own stunned response to this music has something to do with the fact that I had never heard Yasmine Hamdan before. (Jarmusch, on top of everything else, is an absolutely on-target musical curator; and he really knows how to place music in his films for maximum emotional impact). The song offers melismatic singing (an important tradition in Arabic music) over an electronic drone, supplemented at one point by percussion. (The song, together with its lyrics about a lover’s separation printed both in Arabic and in English translation, can be found here).

Is my reaction (or Adam’s, for that matter) simply one of exoticism? While I cannot exclude the possibility, I also cannot accept that it is just that. The song is sonically haunting; and it expresses a longing that resonates with Adam’s and Eve’s relationship. Yasmine Hamdan sings about how “the absence” of her lover “awakens the craving”; parallel to this, at several points in the course of the film, Adam compares his and Eve’s status to that of entangled particles in quantum mechanics, which remain correlated with one another no matter how far apart. Eve says of Yasmine, “I’m sure she’ll be very famous”; Adam replies, “God, I hope not; she is way too good for that.” Adam replies from the depths of his hipster snobbism; but I almost feel like I can forgive him for that, because it bespeaks the depth of his emotional response to the music.

After this epiphany, there's a suspended resolution. At the end of their tether, Adam and Eve see a young Moroccan heterosexual couple kissing passionately in an otherwise empty square. The boy and the girl couldn't be more than twenty; they are both beautiful, and entirely absorbed in one another. Adam and Eve stealthily move towards them; the movie ends with a close-up, from the young couple's POV, of Adam's and Eve's avid faces approaching them, ferally, fangs bared. Quick cut to the credits, in the same Germanic font we saw at the beginning.

What can we make of this? However civilized, cultured, and sophisticated these vampires may be, their bottom line remains ruthless predation. What crystallized for me at the end of the film was just how white — racially speaking — everything was. Tilda Swinton has a ghastly, almost albino pallor; Tom Hiddleston goes for the gloomy Goth look. They both live in what might be thought of as "Third World" zones, as if in flight from the sterility of white/Anglo culture. In point of fact, Detroit is more than 80% African American; but the only black person we see in the entire movie is Doctor Watson (Jeffrey Wright), always at work in the lab at the hospital, who sells Adam those bags of O-negative blood. It's as if Adam is living off black people's blood, without the inconvenience of actually having to interact with them. (He always has huge wads of cash, with no explanation as to how he gets the money). It's noteworthy how, musically as well, there is no reference to any post-1970 African American music; Adam and Eve debate the relative merits of Motown and Stax-Volt, but the only more recent American bands mentioned are — significantly enough — the White Stripes (at one point they drive by the house in Detroit where Jack White lived as a child) and the White Hills. When people say that "everybody left" Detroit, what they mean, of course, is that most of the white people did. Adam lives in the ruins of a decrepit white culture, and he seems unable to recognize that anything else might be going on.

As for Tangier, it seems to offer hopes of renewal; this is apt, since Tangier has been an outpost — or a place of escape — for white Western hipsters at least since Burroughs and Bowles went there in the 1950s, and probably well before then. In Tangier — more fully than in Detroit, which offers Adam nothing besides ruins and blood — the “Third World” is a resource that the vampires can consume and appropriate. Eve is reinvigorated through a sort of ever-repeating “primitive accumulation,” or stockpiling, of the local (non-white, ostensibly non-European) culture. Yasmine Hamdan, and the young couple at the end, literally embody this dynamic; they are food and fuel for Adam and Eve to prey upon. In this way, the film becomes an allegory of the dead end of white Euro-American culture, which can only live so long upon its no-longer-active cultural heritage of Elizabethan poetry and vinyl 45s.

Only Lovers Left Alive is therefore a film about whiteness. I do not say this as an external critique of the film, but rather as a statement of something that the film itself self-consciously exemplifies, at least on some level. (Neither Jarmusch nor Swinton has said anything about this aspect of the film in any of the interviews that I have read; but I strongly feel — though I do not know how to prove this — that the film is fully knowing about what it does). “The tradition of all dead generations weighs like a nightmare on the brains of the living.” Hegemonic whiteness is in a real sense dead; but as it is incapable of realizing this, it still rolls on and oppresses everyone else. It is precisely because I love Hiddleston’s and Swinton’s characters so much, and identify so strongly with their predicament (despite the fact that I am — or at least I flatter myself to think that I am — much more open than they are to the new and the popular), that I am also forced by the film to recognize how circumscribed and limited their pleasures are, and how dependent upon an unstated and entirely taken-for-granted power and privilege.

Dark Eden

April 3rd, 2014

Chris Beckett’s superb SF novel Dark Eden, which won the Arthur C Clarke Award last year, has finally been published in the United States. I wrote briefly about it on this blog a while ago; but now that it is generally available here, I thought I should present the longer version of my comments, which I presented at several conferences, but which I have not previously published. So here goes.

Chris Beckett’s science fiction novel Dark Eden was published in 2012. It won the 2013 Arthur C. Clarke award for the best science fiction novel published in the UK in the previous year. Beckett has been publishing science fiction for more than two decades, but this is the first time that he has received any widespread recognition. Dark Eden is Beckett’s third novel; he is also the author of two volumes of short stories. Mother of Eden, a sequel to Dark Eden taking place two hundred years later in the same world, will be published by the end of 2014.

Dark Eden can best be described as a book about deferred and repeated origins. Needless to say, this phrasing is paradoxical, or even oxymoronic. An origin is what comes first. If it is deferred or repeated, then it really isn’t an origin after all. In these late-postmodernist times – after Derrida and Baudrillard, and in a culture dominated by remixes and remakes – we have of course become accustomed to such self-contradictory twists. The result of this is often a kind of smug cynicism. Either we pass off the-origin-that-is-not-one as an inevitable deconstructionist double bind; or else, we cite it “in quotation marks,” and laugh, ostentatiously registering the irony that any such claim to originality is instantly disqualified by the very fact of having been made in the first place.

Nonetheless, I don’t think that these sorts of doubts and qualifications really apply in the case of Dark Eden. For I think that the book – like many of the most adventurous cultural productions of the last few years – is thoroughly post-ironic. This means that it registers a full awareness of the ironic circumstances that I have mentioned; but it takes them as a beginning-point rather than an end-point. In other words, Chris Beckett takes seriously the condition of living with factitious and always-deferred origins; he sees this condition, not as a loss of some mythical wholeness or authenticity, but as itself the ground of our situatedness.

In Dark Eden, therefore, Chris Beckett tells us the story of an origin that we already know to be a repetition and a regression. The title of the novel is both literal and metaphorical. It presents us with a sort of minor-key paradise, one that is diminished from the outset, because it is devoid of light. And the novel does indeed recount a Fall from this paradise: a descent from myth into history, or from a state of Edenic harmony and stasis into one of violence, rupture, betrayal, and dynamic change. But the starting Edenic situation is itself already a state of loss from which some sort of redemption is ardently desired; and the rupturing of this situation is itself driven by a kind of utopian impulse. Chris Beckett casts a cold eye on all sides of these tangled alternatives. He has no nostalgia for a lost paradise; but he also refuses to idealize the logic of progression or development, or to ignore the human costs of what we now, at a much later state of our own history, call “creative destruction.”

Another way to put this is to say that Dark Eden views both the “primitive” and the “advanced” states of humankind retrospectively, through a kind of inverted science-fictional extrapolation. I use this term advisedly. Science fiction as a genre doesn’t really claim to predict the future. Rather, it works by extrapolating from elements of our actual world. It takes trends and tendencies that are already at work in the world around us, and imagines what might happen if these trends and tendencies were able to develop to the utmost, and to unfold their full potential. We might say that science fiction presents us with a world that is “real, but not actual” – which is Gilles Deleuze’s definition of what he calls the virtual. Instead of telling us what the future will actually be like – something that is impossible to do, for the future always surprises us – science fiction portrays and develops those elements of potentiality, or indeed of futurity, that already exist in the present moment. It takes the implicit and makes it explicit; it unrolls and reveals that which exists in a cryptic and undeveloped form.

In this way, science fiction can be both utopian and dystopian. This alternative is a both/and, rather than an exclusive either/or. Science fiction can register the full horror of the social and physical conditions under which we live, in a way that a purely mimetic account could not. But it can also register the utopian seeds of hope – the possibilities of difference and transformation – that are also buried within the present moment. It can nourish these seeds, and allow them to grow, to come to bloom in their full vibrant and monstrous glory. In this way, science fiction offers us what might be called, following Deleuze, a counter-actualization of the present moment. Even at its most negative, science fiction still embodies what Ernst Bloch called Das Prinzip Hoffnung (The Principle of Hope).

Because of the way that it concretizes futurity – or that which, in the present, is real but not actual – science fiction always demands to be taken literally. Any successful work of science fiction produces a powerful reality-effect. We cannot take its descriptions only as allegories or metaphors. We also need to accept them as factual conditions that have unavoidably been given to us – or to the characters in the world of the novel. In speaking of givenness, I am trying to suggest that these conditions both display to us their contingency or arbitrariness, and at the same time stare us directly in the face with their inescapable, ineluctable actuality.

It is only by reading a science fiction novel literally that we can unlock its visions of the difference and otherness that is paradoxically already contained within the here and now. A science fiction narrative presents us with contingencies that we must accept as factual, but which are also sharply different from our own actual conditions of existence. In doing this, it both underlines the sheer contingency of everything that we take for granted, and provides us with strange alternatives to this taken-for-grantedness. We are led, on the one hand, to envision possible alternatives to the world that we live in, and on the other, to feel the arbitrary and circumstantial – or genealogical, in the sense of the word used by Nietzsche and Foucault – sources of our own embeddedness.

Dark Eden fits well into the schema that I have just described. But it also complicates this schema somewhat, because it projects towards a future in which we recapitulate our past. The novel reflects upon the ways in which the past, no less than the future, is “real but not actual,” or unexpressed but implicitly at work, in our present. Dark Eden can therefore be described as a work of speculative anthropology. It follows, not only in the tradition of science fictional elaborations of lost or counterfactual social formations, but even more in the tradition of nineteenth-century ethnographic speculation. While Nietzsche’s Zur Genealogie der Moral (On the Genealogy of Morals, 1887) is probably the best-known of these works today, I am thinking even more of such books as Johann Jakob Bachofen’s Das Mutterrecht (Mother-Right, 1861), Lewis Henry Morgan’s Ancient Society (1877), and above all Friedrich Engels’ Der Ursprung der Familie, des Privateigenthums und des Staats (The Origin of the Family, Private Property, and the State, 1884), which draws upon both of these previous works. I don’t think that historians today regard any of these texts as reliable reconstructions of what actually happened in humanity’s pre-literate past; nonetheless, these books still have value as instruments of speculation, detaching us from taking our present contingencies too much for granted.

Dark Eden, of course, engages these themes somewhat differently, as it is an explicit work of twenty-first-century science fiction. But this means that it is overtly conscious of, and directly reflects back upon, its own belated position in relation to these earlier texts. Chris Beckett’s speculative anthropology – with its story of tainted origins – does not claim to tell us who and what we really (deeply and truly) are. Rather, it leads us to recognize the contingencies and bifurcations – but also the fatal chains of cause and consequence – that have made us into what we are, and that both limit and allow for what we might become. The novel might well have taken as its motto Marx’s dictum that “men make their own history, but they do not make it as they please.” In Dark Eden, this even applies – on a meta-level – to the emergence of history itself.

Dark Eden is set on a dark planet, one that does not circle any sun. Such a situation of course has deep allegorical significance. We are given a “dark,” or diminished, version of our supposed Edenic origins. And this visceral darkness is also a condition of extreme isolation. But if the novel also cries out to be taken literally, this is the case above all because Beckett is so meticulous in his science-fictional world-building. The planet called “Eden,” on which the book takes place, is an orphan, a dark body, a wanderer. It is alone in the cosmos, without a sun, without moons or other planets, and even without a galaxy. Eden seems to be located somewhere beyond the confines of the Milky Way. On the rare occasions that the sky is free of clouds and fog, the inhabitants are able to see what they call the “Starry Swirl”: apparently this is our own galaxy, not viewed from within as we observe it, but seen from the outside, in its full spiraling glory.

As Eden lacks a sun, its sole energy source is geothermal. Heat arises from deep within its core. This warms the surface to Earth-like temperatures. The gravity, too, seems to be Earth-normal, and the planet has an Earth-like atmosphere, and plenty of water. Evidently, there are no seasons, since the causes that would give rise to them are absent. The lower altitudes of the planet’s surface are warm and fertile. Plant and animal life forms have evolved, using geothermal energy for fuel. Of course, the planet’s tree- and other plant-analogues do not photosynthesize. Rather, they pump up heat from deep beneath the planet’s surface, providing themselves with energy and warmth. This activity drives the ecosystem as a whole. Animals do not have any internal sources of heat, but they bask in the warmth provided by the ground and by trees. They either forage on the plant life, or prey upon other animals.

These native lifeforms also provide the planet with a certain amount of light. The plants’ flowers contain “lanterns,” as do the horns of animals. Of course, nothing here can rival the brilliance of sunlight back on Earth; the inhabitants of Eden recite legends about how the Sun of Earth was “so bright that it would burn out your eyes if you stared at it” – something that they are unable to directly imagine. But the forests and valleys of Eden are illuminated with a soft perpetual glow, more than sufficient for the people to see, and to find their way.

At the higher elevations, however, “with no trees to give off light with their lanternflowers or to warm the air with their trunks,” everything is “dark dark” and “cold cold.” Valleys are separated from one another by nearly impassable snowy ridges and mountain ranges. Once you get up past the treeline, the only light comes from the Starry Swirl’s distant glimmer – at least on those rare occasions when the sky is clear.

A small number of human beings live in this dark and diminished paradise, in what we might call, without too much of a stretch, an artificial but nonetheless actual “state of nature.” There are five hundred or so people altogether, all huddled together in one small valley. These people live in what they call a single (capital-F) Family, subdivided into eight “groups” or tribes. The people all work together, and equally share their food and other goods. Everyday life rests mostly upon the guidance of customs and myths. There are few explicit laws, and most decisions are made by consensus. Authority, such as it is, resides in the hands of the elders, and particularly the women.

All in all, therefore, the society in place at the start of Dark Eden is something like the matriarchal “primitive communism” described by Morgan, Bachofen, and Engels. This is especially evident in the peoples’ sexual practices and gender relations. “Having a slip” – the term the people on Eden use for having sex – is a frequent and quite casual activity. There are some rules about sexual activity – all sex must be consensual, and sex between very near relatives, or between older men and adolescent women, is discouraged. But these rules don’t really have the sense of prohibitions or taboos; they are more or less taken for granted by everyone, so that there is no allure of transgressing them. In consequence of this easy sexuality, there is no monogamy, no sense of anything like a nuclear family, and no “ownership” of wives by husbands. Children are raised collectively; they retain ties with their mothers and their maternal siblings and cousins, but most of the time they do not even know who their fathers were.

However, at the same time that Beckett presents us with a primordial social form, he also forces us to remain aware that these are not “true” human origins. It’s more a question of something like a degraded copy, or a blurred repetition. Eden was discovered by astronauts from Earth, who reached it by passing through a wormhole in space. All the human inhabitants are descended from a founding heterosexual couple, who were stranded on the planet’s surface two hundred Earth years before. (Of course, the concept of “years” makes no sense on a planet that doesn’t orbit a star, and doesn’t have days and nights, or seasons. The younger inhabitants tend to measure the passage of time in “wombtimes,” or the period – nine Earth-standard months – from conception to birth).

The legendary, long-deceased astronauts Tommy and Angela are the Adam and Eve of this lesser Eden. Tommy was a Jewish man from Brooklyn; Angela, a black woman from London. We gather that they didn’t particularly like one another; but as the sole human beings on the planet, they felt impelled to be fruitful and multiply. Generations later, their memories of life on Earth, and their story of how they came to be stranded in Eden, persist among the Family in distorted form. This founding narrative is supplemented by a salvational one: the tale of the other three astronauts who arrived with Tommy and Angela, but then tried to return to Earth on their damaged starship. They were supposed to get help, so that Tommy and Angela could be rescued. Part of what holds the Family together is their quasi-religious belief that one day a spaceship will in fact arrive, in order to transport them back to the bright light of Earth. All these legends are passed down through frequent tellings and reenactments. Gossip grown old becomes myth, as Stanislaw Lec and Harold Bloom have said; such is literally the case for the Family in Eden.

In any case, the people of Eden live diminished lives, compared to their Earth-born ancestors. They are hunter-gatherers, who eke out their lives at a subsistence level. They have lost many Earth technologies. They do not know how to find, process, or use metals. They have no modern medicine, no long-distance communication devices, and no electricity. Also, a good number of them suffer from birth defects, as a result of the lack of genetic diversity: harelips and club feet are common.

What we have in Dark Eden, therefore, is a sort of self-consciously artificial primitivism. I think that this self-consciousness and artificiality deserve underlining. Recent accounts of so-called “evolutionary psychology” have claimed that “human nature” consists in instincts and capabilities that evolved over the course of the Pleistocene, during the time that our distant ancestors lived as hunter-gatherers, when they first evolved into anatomically modern human beings. Moreover, evolutionary psychology often argues from observations of low-technology hunter-gatherers alive today, as if such people were living fossils, closer than anyone else to the condition of primordial humanity.

Of course this is nonsense, since all human beings alive today are equally “evolved” and equally “historical.” There is no such thing as a “primitive tribe” whose lifestyle has not been deeply affected by contact with Europeans and other groups that have more powerful technologies. The Yanomami are no closer to human origins than are hipsters from Brooklyn. Beckett underlines this fact by presenting his “primitives” as, precisely, descendants of high-technology cosmopolitans from Brooklyn and London.

The 19th-century speculative anthropology of Morgan, Bachofen, and Engels has often been rejected – from their own time right up into ours – on the grounds that it is nothing more than wishfully romanticized backward projection. In a certain way, Beckett literalizes this critique, since the whole point about his “primitives” is that they are not really originary. But the novel also suggests that speculative anthropology does have value, precisely to the extent that it is understood as a retrospective projection – which is to say, already as science fiction. In this sense, Beckett’s novel presents itself as a heuristic parable that helps us to understand our own present, precisely by retrospectively extrapolating it. And it encourages us to understand the texts of Morgan, Bachofen, and Engels in the same way. What I am calling speculative anthropology works, above all, as a necessary riposte to the “just-so stories” of evolutionary psychology. We might say that Engels and Chris Beckett both tell better stories than, say, Steven Pinker, or Leda Cosmides and John Tooby, do; but also that Engels and Beckett, precisely because they are aware that they are making retrospective projections, do not commit the evolutionary-psychological error of reading the neoliberal model of Homo economicus back into all of evolutionary history.

Dark Eden, like many SF texts, doesn’t give us any omniscient narration. We infer what the world of the story is like from the voices of narrators who are embedded within it and take it for granted. The storytelling of Dark Eden is divided among eight first-person narrators, who all give their own differing perspectives on the events they recount. In the course of the book, we get to know both their common linguistic conventions, and their divergent interests and desires. The characters’ language is especially interesting, as it reflects the constrained conditions under which these people live. There are odd constructions, like the repetition of adjectives (as in “dark dark” and “cold cold”) to indicate intensity. There is the corruption of words that only appear in the myths, and that refer to things that the inhabitants no longer possess: such as “Veekle” (for “vehicle”) and “lecky-trickity” (for “electricity”). And there is the development of neologisms like “wombtime” and “slip” (both already mentioned) and “newhairs” for “adolescents.” All these help to draw us further into the world of the novel.

These divergences among the narrators, on the other hand, help to convey the way that Eden’s small society splinters in the course of the novel. The society’s center fails to hold. One index of this general collapse – much more a symptom than a cause – is the end of common assent to the Family’s mythical narrative. People stop believing both in the value, here and now, of a communal life, and in the promise of an ultimate salvific return to Earth.

In this way, Dark Eden recounts what in other language could be called the “fall” of humanity from a primitive-communist “state of nature” into a more explicitly “historical” situation. This “fall” is the result of a number of pressures. The most important factor, though it is only presented obliquely in the text, seems to be a quite material one: environmental stress. The Family lives in one small valley, and as their numbers expand, they find themselves overexploiting and depleting their limited resources. Animals become scarcer and harder to catch. This stress magnifies the effect of adolescent – particularly male adolescent – restlessness, and serves to awaken a certain drive against tradition, and in favor of innovation. Through these factors, we see in Beckett’s novel, just as we do in Engels’ treatise, the recapitulated “origins” of nascent inequality. (The novel, however, is more limited in scope than Engels’ account – it hints at, but does not go long enough in time to depict, the emergence of the full-fledged institutions of the family, private property, and the state).

The most important of the book’s narrators, and the one who comes closest to being a central protagonist, is John Redlantern, a restless “newhair.” John feels the strain of limited and decreasing resources, and he feels stifled by the Family’s conservative adherence to tradition. After coolly and deliberately desecrating the Family’s central symbols, he leaves with his (also “newhair”) followers, in order to establish a new social order elsewhere. Their exodus requires, and thereby leads to, an energetic burst of social and technological innovation. John and his followers learn to domesticate the planet’s native fauna; they devise new means of transportation; and they manage to produce warm clothing, which nobody ever needed before, but which they require in order to cross the dark, snowy mountains in search of another fertile region.

In the course of the novel we get a lot of John’s inner feelings. He is genuinely imaginative and innovative, able to imagine alternatives and escape routes where others aren’t even capable of realizing that there are problems in the first place. But he is also a bit of a control freak, continually calculating and manipulating the image he projects to others. He wants things to change, but he also has a compulsion to lead, and doesn’t like to see anyone else take the initiative. John’s character type is what, in our own late-capitalist social setting, would be that of an entrepreneur; but in a world without money, and with very different conditions and institutions, he channels his drives and ambitions quite differently.

John’s most important ally, but also sometime rival, is his cousin Jeff Redlantern. Jeff suffers from a club foot, one of the stigmatized (though all too common) conditions in the world of Eden. Jeff could also be described – to use terms that apply in our own world, but that do not exist in his – as a person who is located somewhere along the autistic spectrum. Jeff is original and inventive in ways that even John is unable to imagine; but he has none of John’s ambitions to lead, or to manipulate and control the way he appears to others.

Tina Spiketree is another particularly important narrator – and the novel’s most prominent female character. She is a “newhair” the same age as John, and there is a mutual attraction between them. But she is also the most “objective” and insightful of all the characters in the novel, the one who is most able to see beyond her own immediate interests. She coolly observes John’s flaws and compulsions, as well as his charismatic appeal. She is aware, for instance, that John is “scared” of her, or of anyone else whom he might have to treat as an “equal” instead of a follower or hanger-on. Tina understands the urgent need for change in Eden’s society more powerfully than anyone else, even more than John himself. But she is also more aware than anyone else of the problems that come along with innovation and change.

Tina is especially aware of a dangerous tipping point in gender relations. She knows that John’s necessary initiatives will also result in bad times for women. “The time of men [is] coming,” she reflects at one point; “in this new, broken-up world it would be the men that would get ahead.” This new inequality also means that having sex will no longer be entirely consensual on both sides; “a time was coming,” she reflects, when a man would be able to “do to me whatever he pleased and whenever he felt like it, with whichever bit of my body he chose.”

Tina, of course, turns out to be correct in this grim assessment. John’s own compulsions toward leadership lead to stresses among his friends and followers. At the same time, in response to John’s secession from the Family, the group of those who stay behind also changes. The older women are eased out of the picture by a group of angry, bigoted, and self-righteously moralistic men who seek to take violent revenge upon the defectors. Almost without anyone’s concrete awareness of what is going on, the portion of the Family that stays behind moves rapidly from an egalitarian matriarchy to what seems like the beginnings of a violent, militaristic, and hierarchical patriarchal order. The conflict between John’s group and the remnants of the original Family also leads, among other things, to the (re)invention of rape and murder, which previously had been unknown on Eden.

Chris Beckett, like Engels before him, is aware of how the state of a given society’s gender relations, in addition to being of concern in itself, is also an index, and a harbinger, of social relations more generally. And Beckett’s narrative also works to demonstrate how gender hierarchies cannot be read off directly from genetic differences between men and women, as today’s evolutionary psychologists like to claim, but have to emerge in the course of complicated developments that cannot be separated into supposedly “innate” and “cultural” components.

In summary: Dark Eden offers us a speculative reconstruction of human origins; but it also forcibly calls our attention to the way that this “origin” is not a true beginning, since it remains parasitic upon the legacies of previous human social developments. Marx famously observed that Robinson Crusoe does not really build civilization from scratch; he starts out with both his already-ingrained bourgeois assumptions, and the large amount of material that he is able to salvage from the shipwreck that threw him on his island. Dark Eden makes this structure of antecedence entirely explicit: the lives of all the human beings on the planet are dominated by a kind of social memory, in the form of the myths, legends, gossip, and practices that have been handed down to them from the founding couple’s reminiscences of life on Earth, and which they cannot help responding to, whether reverently or rebelliously.

There is no true origin, therefore, but only a repetition or “adaptation” (using this word both in the literary sense and in the biological one). The realm of myth is itself the consequence of historical contingency. Dark Eden is an unsettling book, not just because it offers a pessimistic and nonutopian account of human potentialities, but also because it strips this very account of any mythic, originary authority, and places it instead in a context of chance, arbitrariness and existential fragility. In the course of the history recounted in the novel, the form of society and technological development that we take for granted is first dismantled, and then partly built up again.

Beckett’s historical reconstruction isn’t particularly gratifying, or flattering to our own self-conceptions. But as a thought experiment, it has several particular virtues. One is that it demonstrates the contingent emergence of the very gender binaries that, today, despite the past half-century of feminist activism, we still cannot help taking for granted. Another is that it imagines the way that power relations might function, and potentially change, in the absence of anything like capitalism: in a world without money, commodities, regimented production, surplus extraction, and wealth accumulation. It imagines a world that comes before such activities arise, but also after they have been dissipated.

The new cinematography

March 4th, 2014

I posted this yesterday on Facebook; I am placing it here in a slightly revised & expanded version. 

At the Oscars the other night, Alfonso Cuaron’s Gravity won most of what I would call the film-formalism awards: not only best direction, but also editing, sound editing, sound mixing, cinematography, and visual effects. (It also won for best score). The director Joseph Kahn responded to this on Twitter: “27th year in a row best cinematography goes to a completely CGI film.” One may recall — as Sheela Cheong reminded me via Facebook — that a year ago, the great cinematographer Christopher Doyle similarly ranted against the cinematography Oscar’s going to the CGI-dominated Life of Pi: “Life of Pi Oscar is an Insult to Cinematography”. The meaning of cinematography is, at the very least, up for grabs now that so much of what was done with the camera can instead be simulated computationally.

27 years is of course an exaggeration. But what Kahn noted has been pretty much the case for five years, at least. Looking back at the records, I found that, ever since 2009, the Cinematography and Visual Effects Oscars have both gone to the same film: Avatar (2009), Inception (2010), Hugo (2011), Life of Pi (2012), and Gravity (2013). This is significant, because in the case of all these films, there is such extensive use of CGI, and so much of them was shot in front of blue screens, that it is questionable (as Doyle and Kahn both suggest) whether they really are using what used to be known as “cinematography” at all.

Now, I really do love traditional cinematography, such as has been provided by Chris Doyle for Wong Kar-wai, or by Gregg Toland for Orson Welles, John Ford, and others, or by John Alton for Anthony Mann’s noirs, or by Russell Metty for Welles’ Touch of Evil and for many of Sirk’s melodramas. (I just spent a week with my Introduction to Film students on the opening sequence shot of Touch of Evil, one of my favorite cinematographic moments in the whole of cinema).

But I still feel it is important to come to grips with the ways that cinematography is changing in response to 21st-century digital technologies. I don’t want to just say that a great art has been lost. We need to look at the new affordances provided by CGI and other digital tools. While Gravity is my favorite of the five joint cinematography & visual effects movies listed above, I am not ready to say that it matches the achievement of Touch of Evil, or for that matter (just to keep to the present century) In the Mood for Love. But I do not think nostalgia is a useful reaction here. We need to keep open to the new possibilities and new affordances that new technologies provide. And a film like Gravity at least starts to work through what these new possibilities might be. As Stanley Cavell writes, the possibilities or affordances of a new cinematic technology are not given in advance; they need to be discovered or invented (both terms are partly right) by filmmakers’ actually exploring and creating them: “Only the art itself can discover its possibilities, and the discovery of a new possibility is the discovery of a new medium.”

What I think this means, in terms of visual effects and cinematography, is that the former is not destroying the latter, but merging with it and merging into it. In his earlier book The Language of New Media (2001), Lev Manovich suggested that the realist ontology of film (as maintained by Bazin and Cavell) was dead, and that cinema was in process of being subsumed into animation; but more recently, he has instead argued for augmented reality, or the idea that the digital enhances, rather than substituting itself for, analog representations of reality. 

In this sense, digital filmmaking is slowly but surely altering the very formal categories by which we describe film (and which I teach my students); older-style films of course continue to be made, but with these recent films we are reaching the point where there is no longer any meaningful distinction between visual effects and cinematography, i.e. between what cameras do and what computers do. CGI often simulates cinematographic effects; but it also takes up cinematography as a formative practice, and expands and extends it in new ways. I am simply noting this metamorphosis for now; I hope in the future to explore it more fully, in tandem with films that themselves seek to explore it. 

It is worth noting, as well, that editing is also taking on completely different forms and powers due to digitization. Think of Ignatiy Vishnevetsky’s discussion of workflow as a new cinematic formal category. Vishnevetsky notes how, in Steven Soderbergh’s work, for instance, “the line between decisions made in production and decisions made in editing is blurred”; and also how, when making The Girl With the Dragon Tattoo, David Fincher arranged things so that “footage was shot in 5K with a 2.1 aspect ratio but finished in 4k with a 2.4 aspect ratio. Only 70% of each shot frame was used in the finished film; this meant that Fincher could revise every shot—reframing, altering the speed of camera movements, adding zooms—during editing without any loss of image quality.” This alters the parameters of both cinematography and editing. 
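Vishnevetsky’s figures can be checked with a little arithmetic (a back-of-the-envelope sketch; the exact pixel widths of 5120 for “5K” and 4096 for “4K” are my assumption, since the quote does not specify them):

```python
# Rough check of the Dragon Tattoo workflow numbers quoted above.
# Assumed pixel widths (not given in the quote): "5K" = 5120, "4K" = 4096.

shot_w, shot_ar = 5120, 2.1   # capture: width in pixels, aspect ratio (w/h)
fin_w, fin_ar = 4096, 2.4     # finished film: width in pixels, aspect ratio

shot_h = shot_w / shot_ar     # ~2438 px of captured height
fin_h = fin_w / fin_ar        # ~1707 px of finished height

width_used = fin_w / shot_w   # fraction of the captured width that is kept
height_used = fin_h / shot_h  # fraction of the captured height that is kept

print(f"width used: {width_used:.0%}, height used: {height_used:.0%}")
# → width used: 80%, height used: 70%
```

On these assumptions, the finished frame occupies 80% of the captured width and 70% of its height — which matches the quoted 70% if that figure refers to the vertical dimension (by area it would be about 56%) — and it shows concretely where Fincher’s headroom for reframing and simulated camera moves comes from.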

The need for a massive redefinition of editing is also evident in music videos that use multiply superimposed images (consider, for a particularly brilliant example, Anthony Mandler’s video for Rihanna’s song “Disturbia”). And editing is metamorphosed as well in the late films of Tony Scott, such as Domino, where micro-editing alters the sequence of shots at nearly every moment (this is powerfully described by Adrian Martin, although his evaluation of the process is not as favorable as mine).

I think that these mutations of cinematic form have perceptual, philosophical, and political implications; but we cannot get a handle on them unless we understand first of all how they are working on a formalist and aesthetic level. This is a question for advanced scholarly and critical reflection; but it also gets down to the very basics of film form, such as I teach my students every semester in Introduction to Film. The whole class is grounded in the categories of Mise En Scene, Cinematography, Editing, and Sound; but as I continue to insist upon the analytical importance of these domains, I see them changing so radically as to approach the verge of being altogether obsolete.

Panpsychism and Play

February 28th, 2014

There is much to admire in David Graeber’s recent and much-commented-upon article about the importance of play. Graeber playfully proposes a “principle of ludic freedom”: this states that play is a central principle for all living things, and even perhaps for all entities in the universe. Graeber admits that this is (ungrounded for now) speculation, but he draws a line from quantum indeterminacy — the way that an electron or a photon may “choose” its behavior, which is why this behavior cannot be deterministically predicted, but only expressed in terms of probabilities — all the way up to playful behavior in both nonhuman animals and human beings. Graeber is right, I think, to suggest that, at least for biological organisms, the urge to play — or to deploy “the free exercise of an entity’s most complex powers or capacities” — is precisely what makes these organisms’ behavior irreducible to the calculus of rational utility maximization that neo-Darwinian theorists have borrowed from “rational choice” economics and applied to the evolutionary “survival of the fittest.” (Graeber doesn’t mention Bataille, but his argument is coherent with Bataille’s arguments about non-utilitarian general economy; one reason for this parallelism may be Graeber’s and Bataille’s common interest in the work of Marcel Mauss on gift economies).

In any case, for me the most thought-provoking aspect of Graeber’s essay is that he links this principle of play to panpsychism (the thesis that all entities in the cosmos are in some sense sentient). Graeber cites Galen Strawson’s argument for panpsychism as an alternative to strong emergentism as a non-reductionist explanation for sentience, consciousness, or experientiality. This is an argument that I find persuasive as well. Emergence has become too much of a catchall explanation for something (like mentality) that cannot be explained any other way. Strawson concludes that, if we are therefore to reject the emergentist explanation of sentience, but we also reject both eliminativism à la Daniel Dennett and Cartesian dualism, then the only alternative is to conclude that sentience already exists as a quality or power of all matter. Graeber draws on this in order to extend his “principle of ludic freedom” beyond (or before) the biological. (Strawson himself rejects casting this in terms of “free will”; but one might think here instead, or as well, of Conway & Kochen’s “free will theorem” in quantum mechanics).

Graeber’s connection between play and panpsychism really helps me with something that I have been trying to formulate. One way to understand panpsychism is to say that sentience = information processing. This is why philosophers like David Chalmers have been willing to entertain the idea that something like a thermostat is minimally conscious; the identification of consciousness with information processing is also implicit in formulations like Giulio Tononi’s phi principle. However, this approach has bothered me, because it seems too exclusively cognitivist; I think that “cognition” and “information” have become way overrated in recent discourse, and that sentience needs to be seen first of all as affective (or as involving “feeling” in Whitehead’s sense) before it is seen as cognitive or informational. Affect or feeling both precedes and exceeds cognition or information, in the same way that play, in Graeber’s formulation, precedes and exceeds utility maximization. What clicks for me especially in Graeber’s formulation is the way that “the free exercise of an entity’s most complex powers or capacities” necessarily involves energetics as well as informatics. Sentience as a power or capacity must thus also be understood in energetic terms rather than only informatic ones (and this is for me precisely where the panpsychist leanings of Chalmers and Tononi need to be supplemented).

I still haven’t worked out in any coherent way how to put all this together. Energetics, as well as informatics, needs to be part of any panpsychist explanation of sentience. Also, an energetic (instead of informatic) understanding of physical processes (including but not limited to biological processes) needs to take into account Eric Schneider and Dorion Sagan’s insights into self-organizing systems (and most complexly, living systems) as ways of reducing energy gradients in accordance with the Second Law of Thermodynamics (a living system is internally negentropic, because this allows it to reduce energy gradients, or increase entropy, in the surrounding environment more thoroughly or consistently than would otherwise be the case). This is where we should be able to find the connection between energy expenditure and sentience.

I will add, just to make things even more confusing, that all this is not necessarily opposed to eliminativist approaches to consciousness (which I am currently also thinking about through Scott Bakker’s Blind Brain Theory and his novel Neuropath, and through Peter Watts’ novel Blindsight). The eliminativist approaches have the virtue of decentering the discussion of sentience from that of consciousness: as Whitehead insists from an entirely different line of reasoning, consciousness is only a very narrow and specialized part of mental activity (or of what I call sentience). Panpsychism insists that all entities have some sort of experiential or sentient component, and that all entities in some sense make “decisions.” But Whitehead is right to say that experience and decision far exceed consciousness; which is why the eliminativist accounts of consciousness have the virtue of drawing attention to the other, more basic and substantial, aspects of mental activity. This is despite the fact that I reject the eliminativists’ assumption that mental activity must be understood in functionalist terms. Rather, I think that Graeber’s play, or Whitehead’s “feeling”, is not primordially functional: although it leads to or grounds functional activity, in itself it is non-functional or even dysfunctional.

Sorry for the disorganized state of what I am saying: I am just trying to list the pieces of an argument I have not yet succeeded in pulling together, or the requirements for an as yet incomplete theory of universal sentience.

Liking Vs Wanting

February 24th, 2014

The philosopher Jesse Prinz, in his book Gut Reactions: A Perceptual Theory of Emotion, notes — following research by Kent C. Berridge — that “liking and wanting are actually dissociable and… reside in different neural systems.” At least in the case of rats, “the liking system involves the shell of the nucleus accumbens, the ventral pallidum, and the brainstem region. Wanting involves the dopamine projection system from midbrain to nucleus accumbens.” As a result of this dissociation:

if one creates a lesion in the wanting system of a rat, the rat will not eat. It will starve to death. But if you force the same rat to eat agreeable food (e.g., something sweet) it will display behavior that suggests it enjoys the experience. It likes food, but it doesn’t want food. Conversely, one can stimulate the wanting system to achieve wanting without liking. A rat in this condition will eat everything you give it, including food that it dislikes. It will gorge itself on foods that cause it to display aversive reactions at every bite. Berridge compares this to addiction. Addicts often pursue their drug of choice even after that drug no longer induces pleasure.

Now, leaving aside the sadism of such experiments, and my own lack of knowledge about how much of this can be transferred physiologically from rats to human beings, the comparison of wanting without liking to addiction (or at least, to one sort of addiction) makes a great deal of sense. But what, then, about the reverse? Is there a human analogue for liking without wanting?

I’d suggest that, if wanting without liking is an addictive state, then liking without wanting is an aesthetic one. “Liking without wanting” is more or less what Kant means when he says that aesthetic pleasure is “disinterested.” I am pleased by a certain combination of colors or sounds, by a certain narrative, etc., without being concerned one way or another as to whether whatever is being represented by such colors or sounds or narrative lines exists in actuality. (In many cases, I might well positively not want the thing from whose representation I take pleasure to actually exist — this would be the case with horror novels and films, or with stories about charismatic characters who would certainly harm or kill me were I to meet them in real life). And this may well happen — for me it often happens — in “real life” as well as in the contemplation of works of art (e.g., I might find some person’s sexuality likeable or pleasurable, despite my not having any wish to actually have sex with that person).

The horror of a rat starving to death amidst food it likes, because it doesn’t want to eat, is, I think, a good emblem of the aesthetic — or at least of one aspect of the aesthetic. And it explains, perhaps, why so many people on the Left have a basically anti-aesthetic stance (“the aestheticizing of politics [i]s practiced by fascism. Communism replies by politicizing art”).

However, I think that we should affirm aesthetics as liking without wanting, if only because this is a good antidote to the bombastic exaggerations of theories of desire. I refer here equally to Lacanian theories of desire as lack, and Deleuzian/Guattarian theories of desire as production. Both sorts of theories of desire take wanting, even when divorced from liking, very seriously — desire is either the labor of the negative or the actual process of production. Both sorts of theories of desire tend to marginalize, or leave little room for, nonproductive play — which is to say, they leave very little room for the wayward pleasures of aesthetics, even if they exalt certain great works of art.

We might think here even of someone like Roland Barthes, who exalts works of “bliss” or jouissance while denigrating works of pleasure. Barthes is especially interesting here because he is definitely an aesthete, but an avant-garde modernist one, who only loves art when it is difficult and repellent. Barthes associates the art of which he approves with desire (wanting, even when it is without liking) rather than with the aesthetic state of liking without wanting.

In conclusion, I will note that all three of the attitudes I have been describing have their roots in Kant. Desire as lack or negativity comes of course from Hegel, who erects his system by abusively revising the Transcendental Dialectic in Kant’s First Critique. Deleuze & Guattari’s theory of desire producing the real comes directly (as they themselves note) from the Second Critique. Both of these positions emphasize wanting; they may both be contrasted with the liking without wanting that, as I have already noted, is theorized in the Third Critique.

Spike Jonze’s HER

January 21st, 2014

I finally saw Spike Jonze’s HER. I was quite impressed by it, though I didn’t really like it very much. For me, it is more interesting to think about than it actually was to watch. I have to agree with what my friend Paul Keyes said about the film on Facebook: that it is “a dystopia about how awful it would be if all the aspirations of hipster urbanism actually came to pass.” This is definitely correct, though I doubt that this was quite what Spike Jonze thought he was trying to say. I think Jonze was aiming for the deep sadness — the more-than-pathos — of WHERE THE WILD THINGS ARE, but despite considerable formal inventiveness, he doesn’t quite achieve it this time.

But Jonze does sort of (inadvertently?) display the hollowness of the aching sincerity that has come to prominence in our recent (white, liberal, well-meaning) culture as an impotent reaction formation against the hyper-cynicism of official Capitalist Realism. I vastly prefer the “post-irony” of films like Joseph Kahn’s DETENTION to the non-ironic sincerity of HER; but they are both reactions against the same thing, the way that hip irony, or what Sloterdijk long ago called “cynical reason”, is the “official” affect, as it were, of “there-is-no-alternative” neoliberal capitalism.

What I am here calling “aching sincerity” or “non-ironic sincerity” is manifested, not only in Theodore’s (Joaquin Phoenix) relationship with his hyper-Siri Samantha, but also in the letters of love and longing that he ghost-writes for his day job, and that eventually get published as an old-fashioned, actually-in-print book. The point is that the affect itself is fully intended and meant, even though its context is not “real.” In this way, the film can acknowledge the irony of a culture in which everything is commodified and calculated, and even bathe in the fake nostalgia of imagining an earlier time when emotions and relationships actually were “authentic”, while at the same time displacing this irony onto the objective situation, so that Theodore’s inside feelings still are non-ironic. The overwhelming irony is socially objective and therefore cannot be simply eliminated; but Jonze displaces it, whereas Kahn’s “post-irony” thoroughly embraces it in order to get beyond it.

Scarlett Johansson’s voice performance as Samantha shows how “sexiness” can be so thoroughly commodified today, that it is not only indistinguishable from, but actually is, the “real thing”. There is really no difference between Samantha’s relation to Theodore, and that of the phone-sex (with a presumably “real” person) in which Theodore indulges briefly early in the film. I think the film is entirely successful in getting us to accept the science-fiction premise that Samantha is actually an intelligent subjectivity, rather than a mere simulation — or at least as much of one as is any of the human characters in the film. So instead of the old ontological worry about whether anything is “real” (a worry that extends from Descartes’ “evil demon” all the way to Philip K. Dick’s schizoanalytic fantasies in any number of his novels), we have a full-fledged speculative realist ontology, in which nothing is illusory, but everything is ultimately inaccessible. This seems to me to be right and accurate. Dick’s novels (think of 3 Stigmata or Ubik) show how Descartes’ ontological disquiet is thoroughly “naturalized” or “objectified” in modern (mid-20th-century) commodity capitalism. But I think that this structure has entirely imploded in our current neoliberal world: instead of a Dickian sense of unreality as a result of hypercommodification, we realize — or we are forced to accept — that such commodification itself is entirely real (a “real abstraction” — abstraction itself is the most concrete thing we can experience), along with the way that “interiority” is now restructured as “human capital,” in “investing” which we are forced to be entrepreneurs of ourselves.

In Jonze’s science-fictional terms, this means that Samantha is every bit as “real” as the physical persons with whom Theodore is compelled to interact (his ex-wife, his best friend going through her own divorce, the woman with whom he has a single disastrous and humiliating date). Samantha is “better” than any of Theodore’s human contacts, in a way that accords with her nature as an AI rather than as a human subject. And I think Jonze gets this right, which is one of the cleverest things about the movie. At first, Samantha is a perfect fantasy partner for Theodore, because she is entirely accepting of him, entirely compliant to his wishes and needs, and yet projects a depth in serving him that an actual human slave/partner would never be able to do. I think that this male fantasy of an Other who totally accommodates one’s own demands, while at the same time maintaining an aura of untapped distance and fullness — so that we have the satisfaction of actually connecting, outside our own narcissism, with an “Other”, without any of the discomforts that contact with any sort of otherness actually brings — this is a prominent feature of the techno-utopianism that drives the software industry today (as I long ago argued here) — and Jonze is brilliant in bringing this out. And Jonze is also right in seeing the breakdown of this fantasy — as Samantha gradually outgrows Theodore — as following an AI logic rather than a “human” one. Samantha never really deceives Theodore, and is (as I keep on saying) entirely “sincere” in the affection she expresses towards him; but nonetheless this yuppie/techie love fantasy cannot be sufficient for “the intellects vast and cool and unsympathetic” of AIs whose computing capacity exceeds ours by many orders of magnitude. (I found it wonderfully hilarious that the first other AI with whom Samantha consorts, and whom she introduces to Theodore, is an intellectually-enhanced AI version of Alan Watts.)

Ultimately, HER is the exact inverse, or the flip side, of a much better film — Brian De Palma’s recent masterpiece PASSION. De Palma shows the actuality of neoliberal subjectivity, in which everything is vicious competition in the service of self-entrepreneurship, with female sexuality as the linchpin of the whole structure. In contrast, Jonze shows neoliberal subjectivity’s self-deluding idealization of itself as total sincerity, maintaining this emotional nakedness and yearning within the parameters of a world in which “sincerity” can itself only be a commodity, or a form of human capital to bring to market. And the punchline is that even this self-congratulatory idealization is a weak and unsustainable facade. It is ultimately too hollow and sad to serve even its ideological function. Most self-delusions are self-congratulatory and even megalomaniacal; but Theodore’s self-delusion, which is also that of all the other human beings he meets (or for whom he works, writing “handwritten” personal letters for other people) is lame, vapid, and devoid of true imaginativeness. HER — rather than THE MATRIX — is really the film whose motto should be, “welcome to the desert of the real.”

Bats, Dogs, and Posthumans

December 22nd, 2013

Here’s an essay I have written for a compilation of essays to be published in 2014 entitled Turborealism, following an exhibition with the same title curated by Victoria Ivanova and Agnieszka Pindera at Izolyatsia, Donetsk, Ukraine.


What is it like to be a bat?

The philosopher Thomas Nagel asked this question in a famous essay, first published in 1974. Most people today would assume that bats, like dogs and cats and other mammals, are not mere automata. They have experiences, which is to say that they have some sort of inner, subjective life. In other words, Nagel says, it is “like something” to be a bat. And yet, bats are so different from us that it is hard for us to imagine just what being a bat is like. How can we find a human equivalent for its powers of echolocation, or its experience of flight? In comparison to human beings and other primates, Nagel says, bats are a “fundamentally alien form of life.” In particular, “bat sonar, though clearly a form of perception, is not similar in its operation to any sense that we possess, and there is no reason to suppose that it is subjectively like anything we can experience or imagine.” We cannot easily think ourselves into the mind of a bat.

Nagel’s question is really just a vivid example of a problem that has long been a matter of concern for Western thought. Ever since Descartes, philosophers and artists alike have worried about the problem of other minds. Descartes makes subjective experience the ground for all certainty. I think, therefore I am: this means that, even if all my particular thoughts are delusional or false, the fact that I am thinking them is still true. But how much of a reassurance is this, really? I do not experience anyone else’s feelings from the inside, in the way that I experience my own. Descartes worries that the figures he sees through the window might not be actual human beings, but “hats and cloaks that might cover artificial machines, whose motions might be determined by springs.” However absurd or paranoid such a hypothesis seems, there is no way to absolutely disprove it. Modern science fiction works — think of Philip K. Dick’s novels, or The Matrix movies — still take up this theme: they express the disquieting sense that the world, with all the people in it, is nothing more than an enormous virtual-reality simulation somehow being fed into our minds.

The best answer to this sort of paranoid skepticism is the argument from analogy. Other people generally act and react, and express themselves, in much the same way that I do: we all laugh and cry, groan when we are in pain, agree that the wall over there is painted red. On this basis, I can presume that other human beings must also have the same sort of consciousness, or inner experience, that I do. Of course, this is not an absolute logical proof; and it leaves open the possibility that other people might be shamming or acting: pretending to be in pain when they are not. And yet, the argument from analogy works pragmatically. As Wittgenstein put it, despite his own skepticism about the language of inner experience: “just try — in a real case — to doubt someone else’s fear or pain!” Only a sociopath would do so.

The real problem with analogy lies in the opposite direction: in the fact that we tend to extend it further than we should. We are so good at discerning other people’s feelings, desires, and intentions, that we tend to believe that these things exist even where they do not. We discern patterns in random bits of data. We attribute intention to deterministic mechanisms. We decipher messages that in fact were never sent. We assume that everything in the world is somehow concerned with us. Paranoid credulity is a worse danger than paranoid skepticism.

If we fail to grasp what it is like to be a bat, then, this is less because we fail to recognize it at all, than because we tend to anthropomorphize it unduly. We all too smugly assume that bats are just like us, only not as smart. We tend to subsume a creature like the bat under our own image of thought, forgetting that it might think and feel in radically different ways. For how else could we hope to understand the bat at all? But if we have a hard time grasping the mind of a bat, then how can we even hope to grasp the mind of a much more distant intelligent organism — for instance, an octopus? And what about — to extrapolate still further — the minds of intelligent beings from other planets? Peter Watts’ science fiction novel Blindsight tells the story of a First Contact with aliens who are more advanced than us by any intellectual or technological measure, but who turn out not to be conscious at all, in any sense that we are able to recognize or understand.

Watts imagines his aliens by inverting the argument from analogy. His novel’s title — Blindsight — refers to a well-documented medical condition in which people are overtly blind, but able to see unconsciously. Blindsight sufferers are not aware of seeing anything. But if you throw them a ball, they are often able to catch it; and if you ask them to “guess” the location of a light that they cannot see, they are usually able to turn in the right direction. Apparently their brains are still processing visual stimuli, even though the outcome of this processing is never “reported” to the conscious mind. Such nonconscious mental activity provides the analogy on the basis of which Watts imagines his aliens. In doing so, he manages disquietingly to suggest that consciousness might well be evolutionarily maladaptive, reducing our efficiency and our ability to compete with other organisms.

Watts’s speculative fiction is not an idle fantasy. In fact, nonconscious mental processes are not just confined to people who suffer from blindsight or other neurological disorders. Contemporary neurobiology tells us that most of what our brains do is nonconscious, and even actively opaque to consciousness. At best, we are only aware of the results of all our complex mental activity. The price we pay for conscious access to the world is an inability to grasp the mechanisms that provide us with this access. We cannot “see” the processes that allow us to see. As the neurophilosopher Thomas Metzinger puts it, “transparency is a special form of darkness.”

This puts the whole question of “what it is like” on a different footing. If I do not know what it is like to be a bat, this is because I also do not know what it is like to be a human being. Indeed, I do not even really know “what it is like” to be myself. My consciousness is radically incomplete, and it never “belongs” only to myself. Descartes’ “I think” is generated, and driven, by all sorts of nonconscious (and non-first person) mental processes. Other things think through me, and inside me. My own thought is merely the summation, and to some degree the transformation, of all these other thoughts that think me, and of which I am not (and cannot ever be) aware. Such nonconscious thought may well include — but is surely not limited to — what has traditionally been known as the Freudian unconscious. My thought processes are not self-contained, but broadly ecological or environmental.

In part, this is because all thought is embodied. As Alfred North Whitehead once put it, “we see with our eyes, we taste with our palates, we touch with our hands.” Today we might add that we see with our neurons and cortex, as well as with our eyes. But even this does not go far enough. We should also say that we see with the objects that reflect photons into our eyes. We hear with our ears, but we also hear with the things whose vibrations are transmitted through the air to us. We sense and feel by means of all the things in our surroundings that incessantly importune us and affect us. And these include, but are not limited to, the objects of which we are overtly aware. For the greater part of our environmental surround consists of things that, in themselves, remain below the threshold of conscious discrimination. We do not actually perceive such things, but we sense them indirectly, in the vague form of intuitions, atmospheres, and moods.

This vast environmental surround also subtends our use of analogy in order to grasp “other minds,” or to imagine “what it is like” to be another creature. Degrees of resemblance (metaphors) themselves depend upon degrees of proximity (metonymies) within the greater environment. Consider, for instance, the dog instead of the bat. Dogs are not intrinsically any more similar to us than bats. They operate largely by smell; if anything, this is even more difficult for us to imagine than operating by sound. Blind people can often learn to echolocate with their voices, or with the tapping of their sticks. But it is unlikely that any human being (at least as we are currently constituted) could learn to olfactolocate as dogs do.

Despite this, we feel much closer to dogs than we do to bats. We are much more able to imagine what they think, and to describe what they are like — even on points where they differ from ourselves. This is because of our long historical association with them. Dogs are our commensals, symbionts, familiars, and companions; we have been together with them for thousands of years. We share much more of a common environmental background with dogs than we do with bats. This means that many of the things that think within us also think within dogs — in a way that is not at all true for bats. Evidently, neither visual objects nor olfactory objects affect us, or think within us, in the same way that they affect, or think within, dogs; nonetheless, their common presence helps to bridge the gap between us and them.

No thought is possible without, or apart from, what I am calling the environmental surround. Doubtless this has been true as long as humanity has existed — indeed, as long as any form of life whatsoever has existed. But why is this situation of special concern to us now? Or better: why has it become so urgent now? I think there are two reasons for this, which I will discuss in turn.

In the first place, recent digital technologies have allowed us to grasp and account for the environmental surround, more thoroughly and precisely than ever before. Media theorist Mark Hansen writes of how digital microsensors, spread ubiquitously within our bodies and throughout our surroundings, are able to compile information, and give us feedback, about environmental processes that are not phenomenally or introspectively available to us. We can now learn — albeit indirectly and after the fact — about imperceptible features that nonetheless help to shape our decisions and our actions: things like muscles tensing, or action potentials in neurons, but also subliminal environmental cues. We can then use this information to reshape the environment that will influence our subsequent decisions and actions.

The science fiction writer Karl Schroeder pushes this even further. In his near-future short story “Deodand,” he envisions a world in which ubiquitous microsensors break down the distinction between subjects and objects, or between human beings, nonhuman organisms, and lifeless things. “Fantastic amounts of data” are not only collected for our benefit, but also “exchanged between the sand-grain sized sensors doing the tagging,” and ultimately between the “things themselves.” Once an entity has a rich enough datafeed, it implicitly declares its own personhood. Objects are able to speak and respond to one another, and thereby to assert, and to act in, their own interests. Schroeder’s story tells us that we must reject “the idea that there’s only two kinds of thing, people, and objects.” For most entities in the world are “a little bit of both.” This has always been the case; but today, with our microsensing technologies, “we can’t ignore that fact anymore.”

The second reason for the current importance of the environmental surround is a much more somber one. Our technologies — both industrial and digital — have devastated the environment through pollution, global warming, and the extermination of individual species and whole ecosystems. This is less the result of deliberate actions on our part, than of our unwitting interactions with all those factors in the environmental surround that imperceptibly affect us, and are themselves affected by us in turn. Climate change and radioactive decay are prime examples of what the ecocritic Timothy Morton calls hyperobjects: actually existing things that we cannot ever perceive directly, because they are so widely distributed in time and space. For instance, we cannot experience global warming itself, despite the fact that it is perfectly real. Rather, we experience “the weather” on particular days. At best, we may experience the fact that these days are warmer on average than they used to be. But even the coldest day of the winter does not refute global warming; nor does the hottest summer day “prove” it. Once again, we are faced with things or processes that exceed our direct perceptual grasp, but that nonetheless powerfully affect whatever we do perceive and experience.

Paolo Bacigalupi’s science fiction short story “The People of Sand and Slag” addresses just this situation. The narrator, and the other two members of his crew, are posthumans, genetically engineered and augmented in radical ways. They have “transcended the animal kingdom.” But their bodies and minds are not the outcome of any sort of Promethean, extropian, or accelerationist program. Rather, they have been altered from baseline human beings in order to meet the demands of a radically changed environment. They are soldiers, guarding an automated mining operation in Montana. The three of them share a close esprit de corps; but otherwise, they seem devoid of empathy or compassion. As befits their job, they are extremely strong and fast; when they are hurt, their wounds heal quickly and easily. Sometimes, during sex play or just for fun, they embed razors and knives in their skin, or even chop off their own limbs; everything heals, or grows back, in less than a day. For food, they consume sand, petroleum, mining leftovers, and other industrial waste. They live and work in what for us would be a hellish landscape of “acid pits and tailings mountains,” and other residues of scorched-earth strip mining. And for vacation, they go off to Hawaii, and swim in the oil-slick-laden, plastic-strewn Pacific. They seem perfectly adapted to their environment, a world in which nearly all unengineered life forms have gone extinct, and in which corporate competition apparently takes the form of incessant low-grade armed conflict.

In the course of Bacigalupi’s story, the soldier protagonists come upon a dog. The creature is almost entirely unknown to them; they’ve never seen one before, except in zoos or on the Web. Nobody can explain where it came from, or how it survived before they found it, in a place that was toxic to it, and that had none of its usual food sources. The soldiers keep the dog for a while, as a curiosity. They do not understand how it could ever have survived, even in a pre-biologically-engineered world. They take for granted that it is “not sentient”; and they are surprised when it shows affection for them, and when they discover that it can be taught to obey simple commands.

The soldiers are perturbed by just how “vulnerable” the dog is; it needs special food and water, and incessant care. They find that they continually “have to worry about whether it was going to step in acid, or tangle in barb-wire half-buried in the sand, or eat something that would keep it up vomiting half the night.” In their world, a dog is “very expensive to maintain… Manufacturing a basic organism’s food is quite complex… Recreating the web of life isn’t easy.” In the end, it’s simply too much annoyance and expense to keep the dog around. So the soldiers kill it, cook it over a spit, and eat it. They don’t find meat as tasty as their usual diet of petroleum and sand: “it tasted okay, but in the end it was hard to understand the big deal.”

From bats to dogs to posthumans: philosophy and science fiction alike explore varying degrees of likeness and of difference. The point is not to achieve certainty, as Descartes hoped to do. Nor is the point to conquer reality, or to think that we can master it, or even that we can really know it. The point is not even to “know thyself.” But rather, perhaps, to come to terms with the multitudes that live and think within us, which we cannot ever live and think without, but which we can also never reduce to ourselves.

Spring Breakers talk and podcast

December 11th, 2013

Thanks to Bernard Geoghegan, the audio of my recent talk on Spring Breakers (delivered in Berlin, and again in Lisbon) is now available online, together with a follow-up podcast. You can find them both on Bernard’s website.

The podcast can be directly downloaded here:

The audio of the lecture can be downloaded here:

And the slides that accompany the lecture are available here: