Archive for the ‘Tech’ Category

Bats, Dogs, and Posthumans

Sunday, December 22nd, 2013

Here’s an essay I have written for a compilation of essays to be published in 2014 entitled Turborealism, following an exhibition with the same title curated by Victoria Ivanova and Agnieszka Pindera at Izolyatsia, Donetsk, Ukraine.


What is it like to be a bat?

The philosopher Thomas Nagel asked this question in a famous essay, first published in 1974. Most people today would assume that bats, like dogs and cats and other mammals, are not mere automata. They have experiences, which is to say that they have some sort of inner, subjective life. In other words, Nagel says, it is “like something” to be a bat. And yet, bats are so different from us that it is hard for us to imagine just what being a bat is like. How can we find a human equivalent for its powers of echolocation, or its experience of flight? In comparison to human beings and other primates, Nagel says, bats are a “fundamentally alien form of life.” In particular, “bat sonar, though clearly a form of perception, is not similar in its operation to any sense that we possess, and there is no reason to suppose that it is subjectively like anything we can experience or imagine.” We cannot easily think ourselves into the mind of a bat.

Nagel’s question is really just a vivid example of a problem that has long been a matter of concern for Western thought. Ever since Descartes, philosophers and artists alike have worried about the problem of other minds. Descartes makes subjective experience the ground for all certainty. I think, therefore I am: this means that, even if all my particular thoughts are delusional or false, the fact that I am thinking them is still true. But how much of a reassurance is this, really? I do not experience anyone else’s feelings from the inside, in the way that I experience my own. Descartes worries that the figures he sees through the window might not be actual human beings, but “hats and cloaks that might cover artificial machines, whose motions might be determined by springs.” However absurd or paranoid such a hypothesis seems, there is no way to absolutely disprove it. Modern science fiction works — think of Philip K. Dick’s novels, or The Matrix movies — still take up this theme: they express the disquieting sense that the world, with all the people in it, is nothing more than an enormous virtual-reality simulation somehow being fed into our minds.

The best answer to this sort of paranoid skepticism is the argument from analogy. Other people generally act and react, and express themselves, in much the same way that I do: we all laugh and cry, groan when we are in pain, agree that the wall over there is painted red. On this basis, I can presume that other human beings must also have the same sort of consciousness, or inner experience, that I do. Of course, this is not an absolute logical proof; and it leaves open the possibility that other people might be shamming or acting: pretending to be in pain when they are not. And yet, the argument from analogy works pragmatically. As Wittgenstein put it, despite his own skepticism about the language of inner experience: “just try — in a real case — to doubt someone else’s fear or pain!” Only a sociopath would do so.

The real problem with analogy lies in the opposite direction: in the fact that we tend to extend it further than we should. We are so good at discerning other people’s feelings, desires, and intentions, that we tend to believe that these things exist even where they do not. We discern patterns in random bits of data. We attribute intention to deterministic mechanisms. We decipher messages that in fact were never sent. We assume that everything in the world is somehow concerned with us. Paranoid credulity is a worse danger than paranoid skepticism.

If we fail to grasp what it is like to be a bat, then, this is less because we fail to recognize it at all, than because we tend to anthropomorphize it unduly. We all too smugly assume that bats are just like us, only not as smart. We tend to subsume a creature like the bat under our own image of thought, forgetting that it might think and feel in radically different ways. For how else could we hope to understand the bat at all? But if we have a hard time grasping the mind of a bat, then how can we even hope to grasp the mind of a much more distant intelligent organism — for instance, an octopus? And what about — to extrapolate still further — the minds of intelligent beings from other planets? Peter Watts’ science fiction novel Blindsight tells the story of a First Contact with aliens who are more advanced than us by any intellectual or technological measure, but who turn out not to be conscious at all, in any sense that we are able to recognize or understand.

Watts imagines his aliens by inverting the argument from analogy. His novel’s title — Blindsight — refers to a well-documented medical condition in which people are overtly blind, but able to see unconsciously. Blindsight sufferers are not aware of seeing anything. But if you throw them a ball, they are often able to catch it; and if you ask them to “guess” the location of a light that they cannot see, they are usually able to turn in the right direction. Apparently their brains are still processing visual stimuli, even though the outcome of this processing is never “reported” to the conscious mind. Such nonconscious mental activity provides the analogy on the basis of which Watts imagines his aliens. In doing so, he manages disquietingly to suggest that consciousness might well be evolutionarily maladaptive, reducing our efficiency and our ability to compete with other organisms.

Watts’ speculative fiction is not an idle fantasy. In fact, nonconscious mental processes are not just confined to people who suffer from blindsight or other neurological disorders. Contemporary neurobiology tells us that most of what our brains do is nonconscious, and even actively opaque to consciousness. At best, we are only aware of the results of all our complex mental activity. The price we pay for conscious access to the world is an inability to grasp the mechanisms that provide us with this access. We cannot “see” the processes that allow us to see. As the neurophilosopher Thomas Metzinger puts it, “transparency is a special form of darkness.”

This puts the whole question of “what it is like” on a different footing. If I do not know what it is like to be a bat, this is because I also do not know what it is like to be a human being. Indeed, I do not even really know “what it is like” to be myself. My consciousness is radically incomplete, and it never “belongs” only to myself. Descartes’ “I think” is generated, and driven, by all sorts of nonconscious (and non-first person) mental processes. Other things think through me, and inside me. My own thought is merely the summation, and to some degree the transformation, of all these other thoughts that think me, and of which I am not (and cannot ever be) aware. Such nonconscious thought may well include — but is surely not limited to — what has traditionally been known as the Freudian unconscious. My thought processes are not self-contained, but broadly ecological or environmental.

In part, this is because all thought is embodied. As Alfred North Whitehead once put it, “we see with our eyes, we taste with our palates, we touch with our hands.” Today we might add that we see with our neurons and cortex, as well as with our eyes. But even this does not go far enough. We should also say that we see with the objects that reflect photons into our eyes. We hear with our ears, but we also hear with the things whose vibrations are transmitted through the air to us. We sense and feel by means of all the things in our surroundings that incessantly importune us and affect us. And these include, but are not limited to, the objects of which we are overtly aware. For the greater part of our environmental surround consists of things that, in themselves, remain below the threshold of conscious discrimination. We do not actually perceive such things, but we sense them indirectly, in the vague form of intuitions, atmospheres, and moods.

This vast environmental surround also subtends our use of analogy in order to grasp “other minds,” or to imagine “what it is like” to be another creature. Degrees of resemblance (metaphors) themselves depend upon degrees of proximity (metonymies) within the greater environment. Consider, for instance, the dog instead of the bat. Dogs are not intrinsically any more similar to us than bats. They operate largely by smell; if anything, this is even more difficult for us to imagine than operating by sound. Blind people can often learn to echolocate with their voices, or with the tapping of their sticks. But it is unlikely that any human being (at least as we are currently constituted) could learn to olfactolocate as dogs do.

Despite this, we feel much closer to dogs than we do to bats. We are much more able to imagine what they think, and to describe what they are like — even on points where they differ from ourselves. This is because of our long historical association with them. Dogs are our commensals, symbionts, familiars, and companions; we have been together with them for thousands of years. We share much more of a common environmental background with dogs than we do with bats. This means that many of the things that think within us also think within dogs — in a way that is not at all true for bats. Evidently, neither visual objects nor olfactory objects affect us, or think within us, in the same way that they affect, or think within, dogs; nonetheless, their common presence helps to bridge the gap between us and them.

No thought is possible without, or apart from, what I am calling the environmental surround. Doubtless this has been true as long as humanity has existed — indeed, as long as any form of life whatsoever has existed. But why is this situation of special concern to us now? Or better: why has it become so urgent now? I think there are two reasons for this, which I will discuss in turn.

In the first place, recent digital technologies have allowed us to grasp and account for the environmental surround, more thoroughly and precisely than ever before. Media theorist Mark Hansen writes of how digital microsensors, spread ubiquitously within our bodies and throughout our surroundings, are able to compile information, and give us feedback, about environmental processes that are not phenomenally or introspectively available to us. We can now learn — albeit indirectly and after the fact — about imperceptible features that nonetheless help to shape our decisions and our actions: things like muscles tensing, or action potentials in neurons, but also subliminal environmental cues. We can then use this information to reshape the environment that will influence our subsequent decisions and actions.

The science fiction writer Karl Schroeder pushes this even further. In his near-future short story “Deodand,” he envisions a world in which ubiquitous microsensors break down the distinction between subjects and objects, or between human beings, nonhuman organisms, and lifeless things. “Fantastic amounts of data” are not only collected for our benefit, but also “exchanged between the sand-grain sized sensors doing the tagging,” and ultimately between the “things themselves.” Once an entity has a rich enough datafeed, it implicitly declares its own personhood. Objects are able to speak and respond to one another, and thereby to assert, and to act in, their own interests. Schroeder’s story tells us that we must reject “the idea that there’s only two kinds of thing, people, and objects.” For most entities in the world are “a little bit of both.” This has always been the case; but today, with our microsensing technologies, “we can’t ignore that fact anymore.”

The second reason for the current importance of the environmental surround is a much more somber one. Our technologies — both industrial and digital — have devastated the environment through pollution, global warming, and the extermination of individual species and whole ecosystems. This is less the result of deliberate actions on our part, than of our unwitting interactions with all those factors in the environmental surround that imperceptibly affect us, and are themselves affected by us in turn. Climate change and radioactive decay are prime examples of what the ecocritic Timothy Morton calls hyperobjects: actually existing things that we cannot ever perceive directly, because they are so widely distributed in time and space. For instance, we cannot experience global warming itself, despite the fact that it is perfectly real. Rather, we experience “the weather” on particular days. At best, we may experience the fact that these days are warmer on average than they used to be. But even the coldest day of the winter does not refute global warming; nor does the hottest summer day “prove” it. Once again, we are faced with things or processes that exceed our direct perceptual grasp, but that nonetheless powerfully affect whatever we do perceive and experience.

Paolo Bacigalupi’s science fiction short story “The People of Sand and Slag” addresses just this situation. The narrator, and the other two members of his crew, are posthumans, genetically engineered and augmented in radical ways. They have “transcended the animal kingdom.” But their bodies and minds are not the outcome of any sort of Promethean, extropian, or accelerationist program. Rather, they have been altered from baseline human beings in order to meet the demands of a radically changed environment. They are soldiers, guarding an automated mining operation in Montana. The three of them share a close esprit de corps; but otherwise, they seem devoid of empathy or compassion. As befits their job, they are extremely strong and fast; when they are hurt, their wounds heal quickly and easily. Sometimes, during sex play or just for fun, they embed razors and knives in their skin, or even chop off their own limbs; everything heals, or grows back, in less than a day. For food, they consume sand, petroleum, mining leftovers, and other industrial waste. They live and work in what for us would be a hellish landscape of “acid pits and tailings mountains,” and other residues of scorched-earth strip mining. And for vacation, they go off to Hawaii, and swim in the oil-slick-laden, plastic-strewn Pacific. They seem perfectly adapted to their environment, a world in which nearly all unengineered life forms have gone extinct, and in which corporate competition apparently takes the form of incessant low-grade armed conflict.

In the course of Bacigalupi’s story, the soldier protagonists come upon a dog. The creature is almost entirely unknown to them; they’ve never seen one before, except in zoos or on the Web. Nobody can explain where it came from, or how it survived before they found it, in a place that was toxic to it, and that had none of its usual food sources. The soldiers keep the dog for a while, as a curiosity. They do not understand how it could ever have survived, even in a pre-biologically-engineered world. They take for granted that it is “not sentient”; and they are surprised when it shows affection for them, and when they discover that it can be taught to obey simple commands.

The soldiers are perturbed by just how “vulnerable” the dog is; it needs special food and water, and incessant care. They find that they continually “have to worry about whether it was going to step in acid, or tangle in barb-wire half-buried in the sand, or eat something that would keep it up vomiting half the night.” In their world, a dog is “very expensive to maintain… Manufacturing a basic organism’s food is quite complex… Recreating the web of life isn’t easy.” In the end, it’s simply too much annoyance and expense to keep the dog around. So the soldiers kill it, cook it over a spit, and eat it. They don’t find meat as tasty as their usual diet of petroleum and sand: “it tasted okay, but in the end it was hard to understand the big deal.”

From bats to dogs to posthumans: philosophy and science fiction alike explore varying degrees of likeness and of difference. The point is not to achieve certainty, as Descartes hoped to do. Nor is the point to conquer reality, or to think that we can master it, or even that we can really know it. The point is not even to “know thyself.” But rather, perhaps, to come to terms with the multitudes that live and think within us, which we cannot ever live and think without, but which we can also never reduce to ourselves.

Near Future — Reading List

Monday, December 20th, 2010

Just a short reading list of plausible looks into the near future, selected from among books I have read recently:

  • Lauren Beukes, Moxyland
  • Charles Stross, Halting State
  • Richard K. Morgan, Market Forces
  • Ken MacLeod, The Execution Channel
  • Matthew De Abaitua, The Red Men
  • M. T. Anderson, Feed
  • Tricia Sullivan, Lightborn
  • Scott Bakker, Neuropath
  • Paolo Bacigalupi, The Windup Girl
  • Jack Womack, Random Acts of Senseless Violence
  • Bruce Sterling, The Caryatids
  • Mark Wernham, Martin Martin’s On the Other Side
  • Walter Jon Williams, This is Not a Game
  • Cory Doctorow, Makers


Sex + Love With Robots

Monday, December 31st, 2007

David Levy’s Love + Sex With Robots aims to persuade us that, by 2050 at the latest, it will be a common thing for people to fall in love with robots, have committed relationships with them, and have sex with them. The author wants both to shock us with the extravagance of this claim, and yet demonstrate to us carefully that such a prospect is entirely likely, and that his extrapolation is entirely rational. And indeed, Levy’s thesis is not all that extreme, when you compare it with, for instance, Ray Kurzweil’s claim that the Singularity will overtake us by 2045.

Still, I think that predicting the future is impossible, and therefore inherently ridiculous. That doesn’t mean we shouldn’t speculate and extrapolate; what it means is that we should read futuristic predictions in the same way that we read science fiction novels. As Warren Ellis recently put it, science fiction is “a tool with which to understand the contemporary world.” More precisely, SF (and nonfiction futuristic speculation as well) is a tool with which to understand those aspects of the contemporary world that are unfinished, still in process, and therefore (as it were) redolent of futurity. SF and futurism are vital and necessary, because they make us stop and look at the changes going on all around us, breaking with the “rear-view-mirrorism” (as Marshall McLuhan called it) that otherwise characterizes the way we tend to look at the world. That’s why I find it indispensable to read people like Bruce Sterling, Jamais Cascio, Charles Stross, Warren Ellis, and so on. The line between science fiction and futurist speculation is an extremely thin one (and some of the people on my list, most notably Sterling, explicitly do both). Extrapolating the future is necessarily a fiction-making activity; but we can’t understand the present, or be ready for the future, unless we go beyond empirical fact and turn to fiction.

That said, Love + Sex With Robots struck me as more symptomatic than truly thoughtful, much less informative. There’s a certain (willed?) naivete to the book, as when Levy cites all sorts of dubious scientific studies and surveys — mostly conducted since 1985 — in order to prove that, for instance, “one of the stronger reasons for falling in love is need — the need for intimacy, for closeness, for sexual gratification, for a family” (p. 40). This is the sort of thing that gives (or at least should give) supposedly “scientific” research a bad name. Is a psychological research team really needed to verify cliches that have wide circulation throughout our culture? “Research” of this sort, which reproduces what everybody already “knows”, is entirely solipsistic: it is pretty much equivalent to telling somebody something, and then asking them to repeat what you told them back to you.

I suppose the idea that people crave intimacy, or sexual gratification for that matter, was merely “folk psychology,” with no objective status, until it was scientifically verified, by research summarized in an article published in 1989 in The Journal of Social and Personal Relationships (as mentioned on Levy’s p. 38). It’s remarkable how — if we accept Levy’s research sources and citations — we knew nothing whatsoever about human nature a mere thirty years ago, and now we know almost everything about it that there is to know; we have gotten, for instance, “a definitive answer to the question” of whether men or women have a stronger sex drive (the answer — surprise, surprise — is that men do; pp. 294-295).

Sarcasm aside, it seems obvious to me — in line with what I said above about science fiction — that one can learn a lot more about “falling in love,” and the intensity of sexual drives, and so on, from reading romance novels, for instance, than from slogging through “scientific” studies of the sort Levy cites on nearly every page of Love + Sex With Robots.

But leaving that aside — and also leaving aside the most entertaining portions of Levy’s book, such as the one where he goes through the history of vibrators and other sex toys — Love + Sex With Robots presents us (inadvertently perhaps) with an odd paradox. On the one hand, Levy argues that we will soon be able to fall in love with robots, and have sex with them, because the experience will essentially be indistinguishable from falling in love with, and having sex with, other human beings. He advocates something like the Turing test for emotions, as well as for cognition: “the robot that gives the appearance, by its behavior, of having emotions should be regarded as having emotions, the corollary of this being that if we want a robot to appear to have emotions, it is sufficient for it to behave as though it does” (p. 120). This, in itself, is unexceptionable. SF has treated the question of androids’ indistinguishability from biological human beings in numerous works, Blade Runner being the most famous but far from the only example. And Levy is not far from SF in his assertions that robots will be able to do everything that we do, only better.

Of course, that still leaves the question of how we get from here to there. Levy tends to elide the difficulty of jumping from what is possible now, to the point where robots can actually pass the Turing test. He doesn’t seem to think that this gap is such a big deal. He blithely asserts, for instance, that programming robots, not only to “imitate human sociability traits,” but also “to go further and create sociability traits of their own” is a task “possibly no more difficult to program than the task of composing Mozart’s 42nd Symphony or painting a canvas that can sell in an art gallery for thousands of dollars — tasks that have already been accomplished by AI researchers” (pp. 166-167). One may well question whether the music-writing program he cites (by David Cope of UC-Santa Cruz) really makes works that have the power and originality of Mozart. But we get this sort of assertion again and again. Levy writes that “I am convinced that by 2025 at the latest there will be artificial-emotion technologies that can not only simulate the full range of human emotions and their appropriate responses but also exhibit nonhuman emotions that are peculiar to robots”; the sole evidence he offers for this assertion is the fact that “research and development in this field is burgeoning” (p. 86).

Levy suggests, as well, that the problem of robots’ intellectual knowledge is a trivial one: “one example of a similarity that will be particularly easy to replicate is a similarity of education, since just about all of the world’s knowledge will be available for incorporation into any robot’s encyclopedic memory. If a robot discovers through conversation that its human possesses knowledge on a given subject at a given level, its own knowledge of that subject can be adjusted accordingly — it can download more knowledge if necessary, or it can deliberately ‘forget’ certain areas or levels of knowledge in order that its human will not feel intimidated by talking to a veritable brain box” (pp. 144-145). Forgive me for not sharing Levy’s faith that such a thing will be “particularly easy” to do; judging from the very limited success of programs like Cyc, we are nowhere near being able to do this.

If I find Levy’s claims extremely dubious, it is not because I think that human intelligence (or mentality) somehow inherently defies replication. But such replication is an extremely difficult problem, one that we are nowhere near to resolving. It certainly isn’t just a trivial engineering issue, or a mere quantitative matter of building larger memory stores, and more powerful and more capacious computer chips, the way that Levy (and other enthusiasts, such as Ray Kurzweil) almost always tend to assume. AI research, and the research in related fields like “emotional computing,” cannot progress without some fundamental new insights or paradigm shifts. Such work isn’t anywhere near the level of sophistication that Levy and other boosters seem to think it is. Levy wildly overestimates the successes of recent research, because he underestimates what “human nature” actually entails. His models of human cognition, emotion, and behavior are unbelievably simplistic, as they rely upon the inanely reductive “scientific” studies that I mentioned earlier.

Much science fiction, of course, has simply abstracted from these difficulties, in order to think through the consequences of robots and AIs actually being able to pass the Turing test. But this is where the paradox of Levy’s argument really kicks in. For, at the same time that he asserts that robots will be able to pass the Turing test, he still continues to treat them as programmable entities that can be bent entirely to our will. There are numerous rhapsodic passages to the effect that, for instance, “another important difference [between human beings and robots] is that robots will be programmable never to fall out of love with their human” (p. 132). Or that a robot who is “better in the bedroom” than one’s “husband/wife/lover” will be “readily available for purchase for the equivalent of a hundred dollars or so” (p. 306). Or that, in the future, you “will be able to go into the robot shop and choose from a range of personalities, just as you will be able to choose from a range of heights, looks, and other physical characteristics” (pp. 136-137). Or, again, that a robot’s personality “can be adjusted to conform to whatever personality types its human finds appealing… The purchase form will ask questions about dimensions and basic physical features, such as height, weight, color of eyes and hair, whether muscular or not…” and so on and so forth (p. 145 — though interestingly, skin color is never mentioned as a variable, even though eye and hair color are a number of times). In short, Levy asserts that robots will be loved and used as sex partners not only because they are just as ‘real’ emotionally and intellectually as human beings, but also because they have no independence, and can be made to entirely conform to our fantasies. They will sell, not only because they are autonomous agents, but also because they are perfect commodities. They will be just like Tamagotchis, only more “realistic”; and just like vibrators, only better.

Actually, the weirdness goes even further than this. The imputation of agency to robots, while at the same time they remain commodities serving our own desires, leads to some very strange contortions. The book is filled with suggestions along these lines: “A robot who wants to engender feelings of love from its human might try all sorts of different strategies in an attempt to achieve this goal, such as suggesting a visit to the ballet, cooking the human’s favorite food, or making flattering comments about the human’s new haircut, then measuring the effect of each strategy by conducting an fMRI scan of the human’s brain. When the scan shows a higher measure of love from the human, the robot would know that it had hit upon a successful strategy. When the scan corresponds to a low level of love, the robot would change strategies” (pp. 36-37). I must say I find this utterly remarkable as a science-fiction scenario. For it suggests that the robot has been programmed to put its human owner under surveillance, the better to manipulate the owner’s emotions. The human being has purchased the robot, precisely in order that the robot may seduce the human being into doing whatever it (the robot) desires (leaving open the question of what it desires, and how these desires have been programmed into it in the first place). Such a scenario goes beyond anything that Philip K. Dick (or, for that matter, Michel Foucault) ever imagined; it extrapolates from today’s feeble experiments in neuromarketing, to a future in which such manipulation is not only something that we are subjected to, but something that we willingly do to ourselves.
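The closed loop Levy imagines here (propose a strategy, measure the human's response, repeat whatever scores well) is, in effect, what machine-learning researchers call a multi-armed bandit problem. A minimal epsilon-greedy sketch of that loop, in which every name is hypothetical and the "scan" is a stand-in function, since no such fMRI interface exists:

```python
import random

# Hypothetical strategies drawn from Levy's scenario (pp. 36-37).
STRATEGIES = ["suggest ballet", "cook favorite food", "flatter haircut"]

def simulated_scan_response(strategy):
    """Stand-in for the imagined fMRI 'love measurement': a fixed
    hidden preference plus noise. Purely illustrative."""
    hidden_preference = {"suggest ballet": 0.2,
                         "cook favorite food": 0.8,
                         "flatter haircut": 0.5}
    return hidden_preference[strategy] + random.uniform(-0.1, 0.1)

def choose_strategy(trials=500, epsilon=0.1):
    """Epsilon-greedy loop: mostly repeat the best-scoring strategy
    so far, but occasionally explore another one."""
    totals = {s: 0.0 for s in STRATEGIES}   # summed responses
    counts = {s: 0 for s in STRATEGIES}     # times each was tried
    for _ in range(trials):
        if random.random() < epsilon or not any(counts.values()):
            s = random.choice(STRATEGIES)   # explore
        else:                               # exploit the current best average
            s = max(STRATEGIES, key=lambda k: totals[k] / max(counts[k], 1))
        counts[s] += 1
        totals[s] += simulated_scan_response(s)
    return max(STRATEGIES, key=lambda k: totals[k] / max(counts[k], 1))
```

Over enough trials, the loop converges on whichever strategy the hidden preference rewards most; that convergence-by-surveillance is exactly the logic of the passage quoted above.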

So, the paradox of Levy’s account is that 1) he insists on the indistinguishability of human beings and (suitably technologically advanced) robots, while 2) at the same time he praises robots on the grounds that they are infinitely programmable, that they can be guaranteed never to have desires that differ from what their owners want, and that “you don’t have to buy [a robot] endless meals or drinks, take it to the movies or on vacation to romantic but expensive destinations. It will expect nothing from you, no long-term (or even short-term) emotional returns, unless you have chosen it to be programmed to do so” (p.211).

How do we explain this curious doubleness? How can robots be both rational subjects, and infinitely manipulable objects? How can they both possess an intelligence and sensibility at least equal to that of human beings, and retain the status of commodities? Or, as Levy himself somewhat naively puts it, “today, most of us disapprove of cultures where a man can buy a bride or otherwise acquire one without taking into account her wishes. Will our children and their children similarly disapprove of marrying a robot purchased at the local store or over the Internet? Or will the fact that the robot can be set to fall in virtual love with its owner make this practice universally acceptable?” (p. 305).

I think the answer is that this doubleness is not unique to robots; it is something that applies to human beings as well, in the hypercommodified consumer society that we live in. (By “we”, I mean the privileged portion of humankind, those of us who can afford to buy computers today, and will be able to afford to buy sexbots tomorrow — but this “we” really is, in a sense, universal, since it is the model that all human beings are supposed to aspire to). We ourselves are as much commodities as we are sovereign subjects; we ourselves are (or will be) infinitely programmable (through genetic and neurobiological technologies to come), not in spite of, but precisely because of, our status as “rational utility maximizers” entering the “marketplace.” This is already implicit in the “scientific” studies about “human nature” that Levy so frequently cites. The very idea that we can name, in an enumerated list, the particular qualities that we want in a robot lover, depends upon the fact that we already conceive of ourselves as being defined by such a list of enumerable qualities. The economists’ idea that we bring a series of hierarchically organized desires into the marketplace similarly presupposes such a quantifiable bundle of discrete items.

Or, to quote Levy again: “Some would argue that robot emotions cannot be ‘real’ because they have been designed and programmed into the robots. But is this very different from how emotions work in people? We have hormones, we have neurons, and we are ‘wired’ in a way that creates our emotions. Robots will merely be wired differently, with electronics and software replacing hormones and neurons. But the results will be very similar, if not indistinguishable” (p.122). This is not an argument about actual biological causation, but precisely a recipe for manipulation and control. The robots Levy imagines are made in our image, precisely because we are already in process of being made over in theirs.

Science fiction update

Wednesday, September 12th, 2007

This is rambling and all over the place, but I think I will post it anyway, as it tries to make sense of a lot of the reading I have been doing lately, and which I haven’t previously mentioned in this blog.

SF writer Chris Moriarty, whose excellent novel Spin State I have just finished reading, notes on her website that the most fundamental distinction in science fiction as a genre is “the division between writers who view sf as being primarily about science and writers who view sf as being primarily about politics.” She goes on to note that, of course, this polarity is really a “continuum” rather than an absolute divide; but I think that the major point is well taken.

Actually, I might want to substitute “technology” for science in Moriarty’s formulation, because even in the “hardest” SF the scientific knowledge is embodied in technology; and also because more metaphorical technologies, like those used in certain types of fantasy writing, are sometimes (though, obviously, not always) closer than one might expect to the technologies that hard SF provides. (Think, for instance, about the use of artificial intelligence, alongside several kinds of flat-out magic, in China Miéville’s Perdido Street Station.) So it might be best to state the division as between SF that looks at the imaginative possibilities unleashed by potential or future technology, and SF that looks at the political consequences of such technology. Though, of course, most good SF has elements of both.

I’m thinking about this because I want to work out more of the way that SF moves in between these two poles, and helps us focus on the ways that politics inflects technological development (and even scientific discovery) and the ways that scientific discovery and technological change inflect, divert, and alter socio-political possibilities.

For instance, Moriarty’s Spin State is premised on extrapolations from current quantum mechanics. There is a lot of stuff about correlated particles (providing a loophole in the absolute restriction of movement to the speed of light or lower), and about the many-worlds implications of certain branches of quantum theory (though, thankfully, Moriarty never uses this as a mere plot device to bring alternative universes into contact with the one in which the novel is set). And Moriarty even provides several pages of bibliography at the end of the book, in which the real physics underlying the made-up physics of her novel is grounded and explained.

And yet, Spin State is, in a certain way, more about (old-fashioned) class struggle than it is about quantum theory. The action takes place mostly on a planet where workers are employed in horrific conditions as coal miners, digging up the “Bose-Einstein condensate” that is necessary for superluminal communication and travel, and that is buried amidst the coal. The miners’ conditions are every bit as bad, and even as ‘primitive’, as those of mine workers in, say, South Africa today. The flashy newer technologies that make up the world (universe) of the novel are overlaid upon the older technologies that, in fact, already exist today. It is symbolically indicative that, several centuries from now, people are still watching the Mets take on the Yankees in the World Series, and government troops are still being brought in to break strikes and keep the workers in line. The novel is split between the hell of the coal mines, where workers routinely die in cave-ins, or — if not — succumb to black lung by the time they are forty, and the paradise of completely immersive virtual worlds in which the illusion of physicality is complete, and material objects are as palpable, and can be transferred as easily, as in the physical world. We meet emergent artificial intelligences possessing superhuman computing power, quasi-human beings who have been genetically engineered and cloned to provide certain exploitable physical and mental characteristics, and people whose bodies have been extensively wired to give them strength, computing power, prosthetic memory, the ability to interface directly with machines, and so forth. Yet these transformations, as well, are not universally available, but tied to power and privilege and economic status.

The thriller plot of Spin State involves contact with an alien form of intelligent life (so alien, that for a long time human beings were unable even to recognize it as either alive or intelligent), together with a love story between a quasi-human “construct” and an AI who can only instantiate itself physically by “borrowing” (actually, paying for use of) the bodies of physical human beings. Though the novel is partly committed to the sort of naturalistic “character development” that is sometimes considered requisite in genre fiction, it does at least to some extent speculate on what it might mean to speak of the “psychology” of an intelligent and communicating entity which, yet, is not entirely human, and not even a unified subject in the ways that we expect human beings to be. Moriarty doesn’t go anywhere near as far in this respect as Justina Robson does in Living Next Door to the God of Love, a book which is mind-blowing in the way that it imagines the depth psychology of entirely nonhuman subjects, and the emotional relationships such subjects might enter into with other nonhuman, as well as with more or less ordinarily human, beings. But Moriarty, unlike Robson, links this sort of strange emergence — something beyond what we might be capable of actually experiencing today — to socio-political conditions that are continuous with our rapaciously capitalistic present.

Well, I seem to have introduced a third category: nonhuman psychology, or (as I would prefer to call it) the affectivity of nonhuman beings (including transhuman beings, genetically modified human beings, hybrid/cyborg human beings, and artificial intelligences), which is as separate from (and as influenced by) either science/technology or politics/economics as these two are separate from, and yet strongly influenced by, one another. Nonhuman affectivities are an important part of science fiction, because they are part of the investigation of how “we” (taking this pronoun in the broadest possible sense) could be otherwise, and indeed how we are already (since SF is always about the futurity that is already implicit or incipient in the present) in process of becoming-otherwise. The key principle here, of course, is McLuhan’s: that as media (an even wider term than “technologies”) vary and change, so our very percepts, affects, and concepts vary and change.

But I digress. I wanted to mention, as a contrast to Moriarty, Vernor Vinge’s Rainbows End, which just won the Hugo for best SF novel of 2006. (I am looking forward to Vinge’s visit to my university’s campus later this year). Rainbows End is near-future SF; it extrapolates trends in “social software,” in wearable computing, in ubiquitous computing and ubiquitous networks, and massively parallel computing involving scores of people only virtually connected, etc., in order to suggest how these technologies are radically reshaping our social world. It is much more on the science/technology side of the continuum than the political; and I tend to distrust Vinge’s politics to a certain extent, I must say, as it seems (from what I can gather from his fiction) to trend libertarian-capitalist; not to mention that Vinge is the originator of the concept of The Singularity, which I think has subsequently (in the hands of others, at least) been greatly oversold.

Nonetheless, Rainbows End is quite brilliant both for the ways it integrates into a seamless whole its various technologies, all of which are floating around right now, but isolated from one another and only in incipient versions; and for the way that the book offers a vision in which the surveillance of the national security state, vigilantly on guard against terrorism and against any violation of so-called ‘intellectual property’ rights, has become so ubiquitous and taken-for-granted that the chance of getting away from it doesn’t exist anymore — the loss of civil liberties and of privacy is so established that it isn’t even an issue. The scariness of this is only mitigated by the fact that “freedom” of business entrepreneurship and of entertainment “choices” is left intact — i.e., you are entirely free to swear your allegiance to either of the two popular fantasy authors who have ripped off and updated Terry Pratchett, each in their own way. That is pretty much the way Vinge paints it, though I don’t think he is quite as snarkily ironic about the commerce part of it as I just was. (Disclaimer: I have nothing against Terry Pratchett; I am only objecting to his future imitators).

And, oh yes, Vinge’s book also has some interesting bits about a sort of mental virus that, spread by a combination of biological and net/informational infection, can cause an extremely high percentage of those exposed to suddenly want to buy a given product, or support a President’s rationale for waging war. And, also, the novel contains one mysterious character who (it seems — this is never explicitly spelled out) just may be an emergent AI, having arisen out of the Net itself, with its own somewhat alien agendas/interests and affects… That again.

But, for really hitting that point on the continuum where the social and political blends imperceptibly into the scientific and technological, and vice versa, the best SF I have read in quite some time is Warren Ellis’ new comic book series, Doktor Sleepless. Only two issues have come out so far, so it is hard to know quite where this is heading — it is as if I were to review Moriarty’s or Vinge’s novel on the basis of only reading the first fifty pages — but already we have been hit with an extraordinarily high density of new ideas and innovative concepts per page. One theme, at least, is the contrast between the shiny, high-concept SF of the past, and the way that technological innovation is already, much more quietly and unassumingly, worming its way into our lives in ways that are far more profound, precisely because they are less spectacularly noticeable. In the world of Doktor Sleepless, people complain, “where’s my jet pack? where’s my flying car?”; but they fail even to notice how much they have been altered by stuff they take for granted, like (just barely beyond what we actually have today) ubiquitous instant messaging. Now, making fun of the grandiosity of Golden Age SF is nothing new; William Gibson did it twenty-five years ago in his short story “The Gernsback Continuum.” But Ellis is pointing, beyond this, to the increasing sense we have, in our globalized network society, that futurity itself is used up, that its horizons have shrunk, that we have nothing to look forward to. Our future hasn’t changed in, what, twenty or thirty years? The future we imagine today is no different from the one that Ridley Scott imagined in Blade Runner twenty-five years ago, just at the same time that Gibson was mocking the future that had been imagined twenty-five years before that. So we would seem to be in a stasis, where futurity has decayed, melted into an infernal, eternal present.

Against this malaise, Ellis’ Doktor Sleepless has a “terrible prescription” — though, after only two issues, we do not yet know what it is. But it does seem to involve DIY low tech, of the sort that has already changed our world, more profoundly perhaps than we have even noticed. [This really does ring true to me. My students are always surprised — not only that I grew up in a time before the Internet, even before personal computers — but, more stunningly, because it is something much more mundane — that I can remember the first time that I saw and used an ATM, and that — prior to that moment — I managed my bank account for many years without one].

Anyway… Issue two of Doktor Sleepless introduces us to the “Shrieky Girls” — young women who have tiny haptic devices on their hands or arms, connected to the ubiquitous instant-messaging system that they can access through their contact lenses. The result is that they can share, not just words, but perceptions and sensations. When one of the Shrieky Girls takes a boy (or a girl) home with her, then the next morning “it’s all of them who share the modemed sensation of a warm arm closed softly around them.” So “Shrieky Girls are never alone; they live in an invisible web of constant secret conversation, transmitting raw feelings like they were texting notes.” What’s brilliant about all this is that it’s barely even SF; it’s only a step beyond what is already technologically feasible; and, in the world of the story, it isn’t even spectacular, but is something cobbled together cheaply and easily, out of already-obsolete components and second-hand networking links. Which makes it nearly invisible, even as it messes with our ideas about selfhood and privacy, and the boundaries between self and other, more profoundly than the more flashy technologies of science fiction past had ever done.
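Structurally, the mechanism Ellis imagines is nothing more exotic than publish/subscribe: one member’s sensation event fanned out to every device in the group. A toy sketch of that structure (all names invented; nothing here comes from the comic itself, which never specifies an implementation):

```python
# Toy publish/subscribe model of the shared-sensation network described
# above: a feeling registered by one member is broadcast to the whole
# group, like a text message carrying raw sensation.

class SharedSensationNet:
    def __init__(self):
        self.members = {}  # name -> list of received (sender, sensation)

    def join(self, name):
        self.members[name] = []

    def feel(self, sender, sensation):
        """One member's raw feeling, 'modemed' out to everyone in the web."""
        for inbox in self.members.values():
            inbox.append((sender, sensation))

net = SharedSensationNet()
for girl in ("ada", "bel", "cyn"):
    net.join(girl)
net.feel("ada", "warm arm closed softly around them")
```

The design point worth noticing is how little machinery is needed: the unsettling effect on selfhood comes not from exotic hardware but from the trivial fan-out loop.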

Ellis’ Global Frequency of several years ago already toyed with the making-mundane of the most extravagant SF visions that recent technologies have given us. And his novel (prose, not graphic) Crooked Little Vein, released this past summer, made the point that categories like “perversion,” and distinctions between the normal and the pathological, no longer make any sense in our society of what Baudrillard calls “transparency” and Jodi Dean calls “communicative capitalism” — and emphasized this with humor and relief, rather than with the horror of Baudrillard, or the moralistic fervor of those who bemoan the so-called “decline of symbolic efficacy” (the only “perversions” in the novel that are truly odious and objectionable are the ones that stem from the privileges and life-and-death powers that the extremely rich and well-connected exercise against the rest of us). Doktor Sleepless seems to be starting out from where these other works left off, and heading into uncharted territories. Ones in which the micro-affects and micro-politics of technologies that have insinuated themselves within our lives pretty much without a splash (albeit with lots of marketing hoopla) are exposed, dramatized, subject to the harsh scrutiny of genre fiction.

I mean, I’ll never have a jet pack, but won’t the iPhone change me? Why do I want one so badly, even though it won’t do anything for me “as a person” that my current phone (albeit using the disgustingly ponderous and irritating and user-unfriendly Windows Mobile platform) doesn’t do already?

Software and Hardware

Tuesday, May 16th, 2006

Now that Apple has completed migrating its laptops to Intel chips, I really want to get a new one; I really want to replace my aging (in computer-age terms, i.e. it is 2 1/2 years old) 12″ PowerBook with a new 13″ MacBook… Except for one thing. My PowerBook weighs 4.3 lbs, which is way too heavy. The new MacBook — the lightest and most compact Mac laptop model, just announced — weighs 5.2 lbs: almost a full pound heavier. If I got one, and took it around with me, it would break my back! I really want Apple to introduce a new, lightweight subnotebook, 3 lbs or less, something that I could carry about with me everywhere.

The laptop — like the Treo (phone/PDA), the iPod, and the digital camera; or, for that matter, like my eyeglasses — is really part of my body. These are all prostheses that augment my ability to act in and with the world, to affect and be affected, as Spinoza would say; they are parts of my distributed cognitive (and affective) network, as Andy Clark would say. So these ergonomic considerations are really important; they are equally as important as, and should indeed be considered a part of, the design aesthetics that Steve Jobs so (justifiably, in other respects) vaunts himself on being concerned with. Why can’t Apple come out with something comparable to the 1.9 lb Sony Vaio, which is a beautiful and reasonably high-powered machine (even if not quite as beautiful as Apple’s laptops), whose only major defect is that it runs Windows instead of the Mac OS?

Since I am harping on my techno commodity fetish obsessions — all I want to do is buy! buy! buy!, but the product has to be just right, and that just-rightness includes a brand or corporate identification — let me ask an open question about software. I am looking for some sort of Mac OS program that I could use as a sort of database of writing fragments. That is to say, a program that just connects short notes I write, snippets of questions or half-formed paragraphs, i.e. text fragments from a few lines to a few paragraphs in length, together with a bunch of web and article citations on various subjects — all this to collect material that I could use as raw material for later, more concerted writing.

I already have a fine program, Yojimbo, that I use to collect miscellaneous web snippets, texts, images, links, and so on (sometimes I just keep the urls for pages I have found interesting; other times I keep the entire contents of the web page). But here I am looking for something different, something that I would use mostly for text fragments I write myself. So being able to handle formats other than plain text wouldn’t be that important. (As long as I can hyperlink to bibliography sources, to images, etc., I wouldn’t need them in the database itself). I need something that is not too hierarchically organized (I cannot group these fragments into categories and subcategories, which is why certain programs I’ve tried, like Circus Ponies Notebook, aren’t quite right for me), but that allows for powerful searching. I’d like to be able to associate each fragment or entry with an unlimited number of metadata tags, and be able to search both by those and by full-text content. It would be even better if the program could interpret LaTeX and BibTeX, since most of my entries would be in this format.
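To make the requirements concrete, here is a minimal sketch of the behavior I mean — free-form fragments, each carrying an unlimited set of tags, searchable by tag combination and by full text. (This is just an illustrative toy in Python, not a description of any of the programs mentioned here; the class and sample fragments are invented.)

```python
# Minimal sketch of the fragment database described above: flat (not
# hierarchical) storage, unlimited metadata tags per fragment, and
# search by tags or by full-text content. Purely illustrative.

class FragmentStore:
    def __init__(self):
        self.fragments = []  # list of (text, set_of_tags) pairs

    def add(self, text, tags=()):
        self.fragments.append((text, set(tags)))

    def by_tags(self, *tags):
        """All fragments carrying every one of the given tags."""
        wanted = set(tags)
        return [text for text, ts in self.fragments if wanted <= ts]

    def search(self, phrase):
        """Case-insensitive full-text search over fragment bodies."""
        p = phrase.lower()
        return [text for text, ts in self.fragments if p in text.lower()]

store = FragmentStore()
store.add("Note on Simondon and technical ensembles",
          tags=["simondon", "technology"])
store.add("Half-formed paragraph on Wark and abstraction",
          tags=["wark", "abstraction"])
```

The flat list plus tag sets captures the "no categories and subcategories, but powerful searching" requirement; a real program would add persistence and indexing on top of the same shape.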

Has anyone reading this tried either DevonThink or TAO, and are either of these along the lines of what I am looking for? Or can anyone recommend another program, that would be more suitable for my needs? Thanks…

Silent Cell Network

Monday, May 30th, 2005

Strange and interesting doings by the Silent Cell Network.


Monday, May 2nd, 2005

I just installed the new Mac OS X.4 (Tiger). So far, everything is working (keep your fingers crossed). The new features are great…

A Hacker Manifesto

Thursday, October 21st, 2004

McKenzie Wark’s A Hacker Manifesto is a remarkable and beautiful book: cogent, radical, and exhilarating, a politico-aesthetic call to arms for the digital age.

The book really is, as its title says, a manifesto: a public declaration of principles for a radically new vision, and a call to action based on that vision. It’s written as a series of short, numbered paragraphs or theses; the writing is tight, compressed, and aphoristic, or as Wark himself likes to say, “abstract.” It’s not “difficult” in the way that certain “post-structuralist” philosophical texts (Derrida, Lacan, etc.) are difficult; rather, A Hacker Manifesto is characterized by an intense lucidity, as if the writing had been subjected to intense atmospheric pressure, so that it could say the most in the least possible space. Deleuze writes somewhere that an aphorism is a field of forces in tension; Wark’s writing is aphoristic in precisely this sense. I read the book with both delight and excitement, even when I didn’t altogether agree with everything that Wark said.

A Hacker Manifesto owes something — both in form and content — to Marx and Engels, and more to Guy Debord’s Society of the Spectacle (a book about which I feel deeply ambivalent). Wark’s ambition (which he calls “crypto-marxist”) is to apply Marx’s ideas to our current age of digitization and “intellectual property.” Unlike cultural marxists and “post-marxists” (who tend to refer to Marx’s general spirit more than his actual ideas), Wark focuses squarely on “the property question,” which is to say, on issues of economic production, of ownership of the means of production and the results of the production process, and therefore of exploitation and expropriation. Class is the central category of Wark’s analysis, and Wark defines class as Marx defined it, as grounded in people’s diverse relations to production and property, rather than using the vaguer sociological sense (a group of people with a common sense of identity and values) that is most often used today. It’s always a question of conflicting interests between the producers of value, and the legal owners who gain profit from the producers’ labor, and who control the surplus that the producers produce.

Modern capitalism begins in the 16th and 17th centuries, when — in the wake of the decline of feudalism — wealthy landowners expropriate formerly common lands, reducing farmers or peasants to the status of (at best) paid laborers (but more often, landless people who own nothing, and can’t even find work). (This is the stage of what Marx calls “primitive accumulation,” a useful term that Wark oddly fails to employ). Capitalism then intensifies in the 18th and especially the 19th century, when industrial workers, in order to survive, must sell their labor to capitalists, who control the means of production, and who reap the profits from the massive economic expansion of industrialization. Wark sees a third version of this process in our contemporary Information Age, where the producers of information (understood in the widest sense: artists, scientists, software developers, and all sorts of innovators, anyone in short who produces knowledge) find their labor expropriated from them by large corporations which own patents and copyrights on their inventions. Wark calls the information producers “hackers,” and refers to the owners/expropriators of information as “the vectorialist class” (since “information” travels along “vectors” as it is reproduced and transmitted from place to place).

This formulation allows Wark to synthesize and combine a wide range of insights about the politics and economics of information. As many observers have noted, what used to be an information “commons” is increasingly being privatized (just as common land was privatized 500 years ago). Corporations trademark well-known expressions, copyright texts and data that used to circulate in the public domain, and even patent entire genomes. The irony is that, even as new technologies make possible the proliferation and new creation of all sorts of knowledge and information (from mash-up recordings to database correlations to software improvements to genetic alterations), the rules of “intellectual property” have increasingly restricted this proliferation. It’s paradoxical that downloading mp3s should be policed in the same way as physical property is protected from theft: if I steal your car, you no longer have it, but when I copy your music file I don’t deprive you of anything. Culture has always worked by mixing and matching and altering, taking what’s already there and messing with it; but now for the first time such tinkering is becoming illegal, since the very contents of our common culture have been redefined as private property. As I’m always telling my students, under contemporary laws Shakespeare never could have written his plays. Though nothing is valued more highly in our world today than “innovation,” the rules of intellectual property increasingly shackle innovation, because only large corporations can afford to practice it.

Wark makes sense of these developments as nobody else has, by locating them, in his “crypto-marxist” terms, as phenomena of “the property question” and class struggle. “Information wants to be free but is everywhere in chains” (#126). This means also that the struggle over information is more crucial, more central, than traditional marxists (still too wedded to the industrial paradigm) have been willing to notice. While previous forms of economic exploitation have often been (dubiously) justified on grounds of scarcity, Wark points out that for information this justification becomes completely absurd. Information is cheap and abundant, and it takes all sorts of convolutions to bring it under the rule of scarcity. This alone reveals the idiocy of “intellectual property.” Individual hackers (software engineers, say, or songwriters) might feel they have something to gain economically by controlling (and making sure they get paid for) the product of their particular informational labors; but in a larger sense, their “class interest” lies in free information, because only in that way do they have access to the body of information or culture that is the “raw material” for their own creations. And the fact is that, by dint of their ownership of this raw material, it is always the “vectorialist class” who will profit from new creations, rather than the creators/hackers themselves.

In making his arguments, Wark brings together a number of different currents. If his Manifesto has its deepest roots in the Western Marxist tradition, from Marx himself through Lukacs and Benjamin to the Situationists, it also draws heavily on Deleuze and Guattari’s notions of the “virtual,” as well as Mauss’ theory of the gift. At the same time, it relates directly to the practices (and the ethos) of the free software movement, of DJs producing mash-ups, and of radical Net and software artists. (Indeed, much of the book originally appeared on the nettime listserv).

Much of the power of A Hacker Manifesto comes from the way that it “abstracts” and coordinates such a wide range of sources. Wark argues that the power of “information” lies largely in its capacity to make ever-larger “abstractions”: “to abstract is to construct a plane upon which otherwise different and unrelated matters may be brought into many possible relations. To abstract is to express the virtuality of nature, to make known some instance of its possibilities, to actualize a relation out of infinite relationality, to manifest the manifold” (#008). Abstraction is the power behind our current servitude, but it is also the source of our potential expanded freedom. The regime of intellectual property abstracts away from our everyday experience, turning it into a controlled stream of 1s and 0s. But the answer to this expropriation is to push abstraction still further, to unleash the potentialities that the “vectorialist” regime still restricts. A Hacker Manifesto is already, in itself, such an act of further abstraction; it charts a path from already-existing forms of resistance and creation to a more generalized (more abstract) mode of action.

There are various points, I admit, at which I am not entirely convinced. Wark makes, for instance, too much of a separation between industrial workers and hackers, as between capitalists and vectorialists; this underestimates the continuity of the history of expropriation; I’d be happier with a term like Hardt and Negri’s multitude, vague and undefined as it is, than I am with Wark’s too-rigid separation between industrial production and knowledge production. Hardt and Negri have a more generous understanding than Wark does of the ways in which the information economy creates the common. I’m also, I fear, too cynical to accept the historical optimism that Wark in fact shares with Hardt and Negri; in the world today, I think, in both rich countries and poor, our affective investments in commodification and consumerism are far too strong for our desires to really become aligned with our actual class interests (however powerful a case these theorists make for what those interests are).

Nonetheless, I don’t want to end this review on such a (mildly) negative note. If anything, I fear that my comments here have failed to give a sense of the full breadth of Wark’s argument: of the full scope of his references, of how much ground he covers, of the intensity and uncompromising radicality of his vision. Whether or not A Hacker Manifesto succeeds in rousing people to action, it’s a book that anyone who’s serious about understanding the changes wrought by digital culture will have to take into consideration.

Simondon on technology

Friday, April 30th, 2004

Gilbert Simondon’s book on technology, Du mode d’existence des objets techniques (On the Mode of Existence of Technical Objects), is not quite as rich as his books on individuation (which I wrote about here). But it’s still fresh and thought-provoking (despite having been published as long ago as 1958 — it discusses vacuum tubes at great length, for instance, but doesn’t mention transistors), and it offers radical alternatives to the ways we usually think about the topics it discusses.
Basically, Simondon rejects the commonplace view (held alike by “common sense” and by philosophers such as Heidegger) that opposes technology to nature, and that sees technology as a mere tool or mechanism for controlling and manipulating nature. Against this view, Simondon argues that technology cannot be reduced to a utilitarian function, because it is more than just particular tools used for particular purposes. Rather, technology must be understood: 1) as an ensemble; and 2) as a process of invention.
As an ensemble, technology involves more than particular tools or machines; it also involves the relations among these tools and machines, and the relations between them and the human beings who use them, as well as between them and their environments, the materials with which they interact.
Some technology, especially in its simpler aspects, takes the form of a single tool — a hammer, for instance — used by a particular person (a worker or craftsman) for particular tasks.
But most of the time, “technology” cannot be isolated in this way. Tools don’t exist in isolation; they are connected in all sorts of ways. They are connected, first, by the tasks they perform, which are increasingly complicated and require coordination all through the technical sphere. But beyond this, tools are interconnected because of the conceptual schemes that generate them: these same schemes, or designs, can be used in different contexts, in different materials, so that technology is transportable and transferable (“deterritorialized” in the vocabulary of Deleuze, who was greatly influenced by Simondon).
This also means that technology exceeds any narrow utilitarian purposes. As technology expands, it discovers and produces new relations between people and things, or between people and people, or between things and things. Technology is a network of relations: far from marking our alienation from the natural world, technology is what mediates between humankind and nature. It undoes the dualism that such a division implies, by networking human beings and natural entities into all sorts of subtle relations of feedback and mutual dependency. Far from being something deployed by a subject in order to dominate and control a nature reduced to the status of an object, technology is what breaks down the subject/object polarity: it is always in between these poles, and it ensures that no human “subject” is free from and uncontaminated by the natural or physical world, while conversely, no “nature” or “materiality” is ever purely passive, purely an object. Every “object” has a certain degree of agency, and every “subject” has a certain degree of materiality; technology is the process, or the glue, that makes the idealist hypostasis of a naked subject facing brute objects impossible. (I do not know if Bruno Latour ever mentions Simondon, but the basis of much of his account of science and technology can be found here).
Technology is also necessary to the expansion of knowledge, according to Simondon. It is not the mere application of scientific knowledge, so much as it is the precondition for there to be such a thing as scientific knowledge: if only because scientific knowledge is generated when technology doesn’t work as expected, when it breaks down or deviates from its utilitarian function. Even (or especially) in its failures, technology is still “working.”
Another way to say this is to note Simondon’s second point, that technology is a process of invention. That is to say, it is a continuing process, not a fixed product. Tools are not just passively used; they are reconfigured, reinvented, extended and mutated in the process of use. Simondon writes that the “alienation” that has been so frequently noted in modernist discussions of machines is not the consequence of technology per se; nor is it just the result of exploitation in the Marxist sense, the fact that workers do not own or profit from the machines that they operate (though that certainly plays a role). More fundamental, Simondon says, is the fact that factory workers are not able to participate in the active construction/invention/reconfiguration of their machines, but are only allowed to be their passive operators. In a truly technological culture, where invention and operation would be combined, this alienation would not take place. Decades before the fact, Simondon is here theorizing and advocating what today would be called hacking and hacker culture. Indeed, I think that the culture of hacking still has not caught up with Simondon, in the sense that hacking is mostly justified in pragmatic and/or libertarian terms, whereas Simondon adds a third dimension, a depth, to hacking by showing how it is essentially tied to technology as a basic component of human beings’ presence in the world.
There are a lot more themes and arguments in Simondon’s book that I haven’t been able to bring up here — for instance, his theories on the evolution of technology (which is not simply parallel to biological evolution, but differs from it in certain crucial ways), and on the relation of technology to other basic human activities (religion, art, science, philosophy) and to the split between “theory” and “practice” (Simondon does not consign technology to “practice”, but insists that it is prior to the split, and that a better understanding of technology would help us to overcome the duality between theory and practice). But there’s a lot to think about here, and I haven’t been able to absorb it all in just one reading.


Friday, February 27th, 2004

Well, my new PowerBook arrived today, and this is the first post that I am making with it. (I’m using ecto as my blogging client).
I was a Mac user for a long time, from c. 1991 to 1998; I switched to Windows because I wanted to have a really small laptop, 3 lbs or less — which didn’t (and still doesn’t) exist for the Mac. But I missed the elegance and simplicity of the Macintosh aesthetic. Especially as OS X was developed, I felt that I was missing out on something I really wanted (though arguably — or, let’s just say, obviously — I didn’t need it, given that Windows XP does just about everything you need, albeit much more clunkily).
So finally, after looking at the state of my finances, and convincing myself through specious arguments that I could afford the additional charge on my credit card, I took the plunge.
The 12″ PowerBook is still too heavy (4.6 lbs) but I’m determined to carry it around with me everywhere anyway.