Sex + Love With Robots

David Levy’s Love + Sex With Robots aims to persuade us that, by 2050 at the latest, it will be a common thing for people to fall in love with robots, have committed relationships with them, and have sex with them. The author wants both to shock us with the extravagance of this claim, and yet demonstrate to us carefully that such a prospect is entirely likely, and that his extrapolation is entirely rational. And indeed, Levy’s thesis is not all that extreme, when you compare it with, for instance, Ray Kurzweil’s claim that the Singularity will overtake us by 2045.

Still, I think that predicting the future is impossible, and therefore inherently ridiculous. That doesn’t mean we shouldn’t speculate and extrapolate; what it means is that we should read futuristic predictions in the same way that we read science fiction novels. As Warren Ellis recently put it, science fiction is “a tool with which to understand the contemporary world.” More precisely, SF (and nonfiction futuristic speculation as well) is a tool with which to understand those aspects of the contemporary world that are unfinished, still in process, and therefore (as it were) redolent of futurity. SF and futurism are vital and necessary, because they make us stop and look at the changes going on all around us, breaking with the “rear-view-mirrorism” (as Marshall McLuhan called it) that otherwise characterizes the way we tend to look at the world. That’s why I find it indispensable to read people like Bruce Sterling, Jamais Cascio, Charles Stross, Warren Ellis, and so on. The line between science fiction and futurist speculation is an extremely thin one (and some of the people on my list, most notably Sterling, explicitly do both). Extrapolating the future is necessarily a fiction-making activity; but we can’t understand the present, or be ready for the future, unless we go beyond empirical fact and turn to fiction.

That said, I fear that Love + Sex With Robots is more symptomatic than truly thoughtful, much less informative. There’s a certain (willed?) naivete to the book, as when Levy cites all sorts of dubious scientific studies and surveys — mostly conducted since 1985 — in order to prove that, for instance, “one of the stronger reasons for falling in love is need — the need for intimacy, for closeness, for sexual gratification, for a family” (p. 40). This is the sort of thing that gives (or at least should give) supposedly “scientific” research a bad name. Is a psychological research team really needed to verify cliches that have wide circulation throughout our culture? “Research” of this sort, which reproduces what everybody already “knows,” is entirely solipsistic: it is pretty much equivalent to telling somebody something, and then asking them to repeat what you told them back to you.

I suppose the idea that people crave intimacy, or sexual gratification for that matter, was merely “folk psychology,” with no objective status, until it was scientifically verified, by research summarized in an article published in 1989 in The Journal of Social and Personal Relationships (as mentioned on Levy’s p. 38). It’s remarkable how — if we accept Levy’s research sources and citations — we knew nothing whatsoever about human nature a mere thirty years ago, and now we know almost everything about it that there is to know; we have gotten, for instance, “a definitive answer to the question” of whether men or women have a stronger sex drive (the answer — surprise, surprise — is that men do; pp. 294-295).

Sarcasm aside, it seems obvious to me — in line with what I said above about science fiction — that one can learn a lot more about “falling in love,” and the intensity of sexual drives, and so on, from reading romance novels, for instance, than from slogging through “scientific” studies of the sort Levy cites on nearly every page of Love + Sex With Robots.

But leaving that aside — and also leaving aside the most entertaining portions of Levy’s book, such as the one where he goes through the history of vibrators and other sex toys — Love + Sex With Robots presents us (inadvertently perhaps) with an odd paradox. On the one hand, Levy argues that we will soon be able to fall in love with robots, and have sex with them, because the experience will essentially be indistinguishable from falling in love with, and having sex with, other human beings. He advocates something like the Turing test for emotions, as well as for cognition: “the robot that gives the appearance, by its behavior, of having emotions should be regarded as having emotions, the corollary of this being that if we want a robot to appear to have emotions, it is sufficient for it to behave as though it does” (p. 120). This, in itself, is unexceptionable. SF has treated the question of androids’ indistinguishability from biological human beings in numerous works, Blade Runner being the most famous but far from the only example. And Levy is not far from SF in his assertions that robots will be able to do everything that we do, only better.

Of course, that still leaves the question of how we get from here to there. Levy tends to elide the difficulty of jumping from what is possible now, to the point where robots can actually pass the Turing test. He doesn’t seem to think that this gap is such a big deal. He blithely asserts, for instance, that programming robots, not only to “imitate human sociability traits,” but also “to go further and create sociability traits of their own” is a task “possibly no more difficult to program than the task of composing Mozart’s 42nd Symphony or painting a canvas that can sell in an art gallery for thousands of dollars — tasks that have already been accomplished by AI researchers” (pp. 166-167). One may well question whether the music-writing program he cites (by David Cope of UC-Santa Cruz) really makes works that have the power and originality of Mozart. But we get this sort of assertion again and again. Levy writes that “I am convinced that by 2025 at the latest there will be artificial-emotion technologies that can not only simulate the full range of human emotions and their appropriate responses but also exhibit nonhuman emotions that are peculiar to robots”; the sole evidence he offers for this assertion is the fact that “research and development in this field is burgeoning” (p. 86).

Levy suggests, as well, that the problem of robots’ intellectual knowledge is a trivial one: “one example of a similarity that will be particularly easy to replicate is a similarity of education, since just about all of the world’s knowledge will be available for incorporation into any robot’s encyclopedic memory. If a robot discovers through conversation that its human possesses knowledge on a given subject at a given level, its own knowledge of that subject can be adjusted accordingly — it can download more knowledge if necessary, or it can deliberately ‘forget’ certain areas or levels of knowledge in order that its human will not feel intimidated by talking to a veritable brain box” (pp. 144-145). Forgive me for not sharing Levy’s faith that such a thing will be “particularly easy” to do; judging from the very limited success of programs like Cyc, we are nowhere near being able to do this.

If I find Levy’s claims extremely dubious, it is not because I think that human intelligence (or mentality) somehow inherently defies replication. But such replication is an extremely difficult problem, one that we are nowhere near resolving. It certainly isn’t just a trivial engineering issue, or a mere quantitative matter of building larger memory stores, and more powerful and more capacious computer chips, the way that Levy (and other enthusiasts, such as Ray Kurzweil) almost always tend to assume. AI research, and the research in related fields like “emotional computing,” cannot progress without some fundamental new insights or paradigm shifts. Such work isn’t anywhere near the level of sophistication that Levy and other boosters seem to think it is. Levy wildly overestimates the successes of recent research, because he underestimates what “human nature” actually entails. His models of human cognition, emotion, and behavior are unbelievably simplistic, as they rely upon the inanely reductive “scientific” studies that I mentioned earlier.

Much science fiction, of course, has simply abstracted from these difficulties, in order to think through the consequences of robots and AIs actually being able to pass the Turing test. But this is where the paradox of Levy’s argument really kicks in. For, at the same time that he asserts that robots will be able to pass the Turing test, he still continues to treat them as programmable entities that can be bent entirely to our will. There are numerous rhapsodic passages to the effect that, for instance, “another important difference [between human beings and robots] is that robots will be programmable never to fall out of love with their human” (p. 132). Or that a robot who is “better in the bedroom” than one’s “husband/wife/lover” will be “readily available for purchase for the equivalent of a hundred dollars or so” (p. 306). Or that, in the future, you “will be able to go into the robot shop and choose from a range of personalities, just as you will be able to choose from a range of heights, looks, and other physical characteristics” (pp. 136-137). Or, again, that a robot’s personality “can be adjusted to conform to whatever personality types its human finds appealing… The purchase form will ask questions about dimensions and basic physical features, such as height, weight, color of eyes and hair, whether muscular or not…” and so on and so forth (p. 145 — though interestingly, skin color is never mentioned as a variable, even though eye and hair color are mentioned a number of times). In short, Levy asserts that robots will be loved and used as sex partners not only because they are just as ‘real’ emotionally and intellectually as human beings, but also because they have no independence, and can be made to entirely conform to our fantasies. They will sell, not only because they are autonomous agents, but also because they are perfect commodities. They will be just like Tamagotchis, only more “realistic”; and just like vibrators, only better.

Actually, the weirdness goes even further than this. The imputation of agency to robots, while at the same time they remain commodities serving our own desires, leads to some very strange contortions. The book is filled with suggestions along these lines: “A robot who wants to engender feelings of love from its human might try all sorts of different strategies in an attempt to achieve this goal, such as suggesting a visit to the ballet, cooking the human’s favorite food, or making flattering comments about the human’s new haircut, then measuring the effect of each strategy by conducting an fMRI scan of the human’s brain. When the scan shows a higher measure of love from the human, the robot would know that it had hit upon a successful strategy. When the scan corresponds to a low level of love, the robot would change strategies” (pp. 36-37). I must say I find this utterly remarkable as a science-fiction scenario. For it suggests that the robot has been programmed to put its human owner under surveillance, the better to manipulate the owner’s emotions. The human being has purchased the robot, precisely in order that the robot may seduce the human being into doing whatever it (the robot) desires (leaving open the question of what it desires, and how these desires have been programmed into it in the first place). Such a scenario goes beyond anything that Philip K. Dick (or, for that matter, Michel Foucault) ever imagined; it extrapolates from today’s feeble experiments in neuromarketing, to a future in which such manipulation is not only something that we are subjected to, but something that we willingly do to ourselves.
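Stripped of the fMRI trappings, the courtship loop Levy describes is, mechanically, a multi-armed bandit: try a strategy, measure the response, reinforce whatever scores highest. A minimal epsilon-greedy sketch of that logic (all names are hypothetical, and `scan_love` merely stands in for Levy’s imagined brain scan, which would of course be far noisier):

```python
import random

# Hypothetical courtship strategies, taken from Levy's own examples.
STRATEGIES = ["suggest ballet", "cook favorite food", "flatter haircut"]

def choose_strategy(avg_reward, epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-scoring strategy,
    occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(STRATEGIES)
    return max(STRATEGIES, key=lambda s: avg_reward[s])

def update(avg_reward, counts, strategy, reward):
    """Incrementally update the running mean 'love' reading for a strategy."""
    counts[strategy] += 1
    avg_reward[strategy] += (reward - avg_reward[strategy]) / counts[strategy]

def court(scan_love, steps=1000):
    """Run the surveillance-and-seduction loop, returning the strategy
    the robot settles on as most effective."""
    avg_reward = {s: 0.0 for s in STRATEGIES}
    counts = {s: 0 for s in STRATEGIES}
    for _ in range(steps):
        s = choose_strategy(avg_reward)
        update(avg_reward, counts, s, scan_love(s))
    return max(STRATEGIES, key=lambda s: avg_reward[s])
```

The point of the sketch is how little intelligence the scenario actually requires: the “seduction” is just reward maximization against a measured signal, which is exactly why it reads as neuromarketing rather than love.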

So, the paradox of Levy’s account is that 1) he insists on the indistinguishability of human beings and (suitably technologically advanced) robots, while 2) at the same time he praises robots on the grounds that they are infinitely programmable, that they can be guaranteed never to have desires that differ from what their owners want, and that “you don’t have to buy [a robot] endless meals or drinks, take it to the movies or on vacation to romantic but expensive destinations. It will expect nothing from you, no long-term (or even short-term) emotional returns, unless you have chosen it to be programmed to do so” (p. 211).

How do we explain this curious doubleness? How can robots be both rational subjects, and infinitely manipulable objects? How can they both possess an intelligence and sensibility at least equal to that of human beings, and retain the status of commodities? Or, as Levy himself somewhat naively puts it, “today, most of us disapprove of cultures where a man can buy a bride or otherwise acquire one without taking into account her wishes. Will our children and their children similarly disapprove of marrying a robot purchased at the local store or over the Internet? Or will the fact that the robot can be set to fall in virtual love with its owner make this practice universally acceptable?” (p. 305).

I think the answer is that this doubleness is not unique to robots; it is something that applies to human beings as well, in the hypercommodified consumer society that we live in. (By “we,” I mean the privileged portion of humankind, those of us who can afford to buy computers today, and will be able to afford to buy sexbots tomorrow — but this “we” really is, in a sense, universal, since it is the model that all human beings are supposed to aspire to). We ourselves are as much commodities as we are sovereign subjects; we ourselves are (or will be) infinitely programmable (through genetic and neurobiological technologies to come), not in spite of, but precisely because of, our status as “rational utility maximizers” entering the “marketplace.” This is already implicit in the “scientific” studies about “human nature” that Levy so frequently cites. The very idea that we can name, in an enumerated list, the particular qualities that we want in a robot lover, depends upon the fact that we already conceive of ourselves as being defined by such a list of enumerable qualities. The economists’ idea that we bring a series of hierarchically organized desires into the marketplace similarly presupposes such a quantifiable bundle of discrete items.

Or, to quote Levy again: “Some would argue that robot emotions cannot be ‘real’ because they have been designed and programmed into the robots. But is this very different from how emotions work in people? We have hormones, we have neurons, and we are ‘wired’ in a way that creates our emotions. Robots will merely be wired differently, with electronics and software replacing hormones and neurons. But the results will be very similar, if not indistinguishable” (p. 122). This is not an argument about actual biological causation, but precisely a recipe for manipulation and control. The robots Levy imagines are made in our image, precisely because we are already in process of being made over in theirs.

7 Responses to “Sex + Love With Robots”

  1. […] The Pinocchio Theory » Blog Archive » Sex + Love With Robots (tags: futurism science-fiction) […]

  2. […] The Pinocchio Theory » Blog Archive » Sex + Love With Robots Steve Shaviro on David Levy’s “Love + Sex With Robots,” which is still on my shelf waiting to be read. (tags: books) […]

  3. People already have sex with robots. It’s called internet porn. Why do the robots in question have to be anthropomorphic?

  4. Jack Holcomb says:

    “How can they both possess an intelligence and sensibility at least equal to that of human beings, and retain the status of commodities.”

    You go from here to the universal commodification of the individual, which I’ll buy, but I can’t help but see gender as an invisible (ignored) factor in all of this. Women have typically been the favorite commodified class of humanity; the “basic pleasure model” is Daryl Hannah, not Rutger Hauer.

    Jack

  5. sfam says:

    Great review. I absolutely agree that the issue of reciprocal love (between the human and robot) is a dubious proposition at best, but flat out disappears if the “purpose” of the robot is to service a human’s bizarre set of personalized desires (How can robots be both rational subjects, and infinitely manipulable objects?). Either you create a full-featured fancy tool, which can service those needs but cannot “love,” or you create synthetic, sentient life-forms who can “choose” whether or not to love, and ultimately whether or not to service someone else’s needs (and the creation of such a life form seems altogether harder to accomplish). And if they have the choice, why would they decide to dedicate their lives to their owner’s pleasure? This really brings us right to the old scifi cliche of robot slavery.

  6. […] iammany, in a comment below points to a wonderful analysis of Levy’s book by Steven Shaviro. In addition to hitting on the love/freewill conundrum in a more sophisticated […]

  7. Dammit Dickinson says:

    RealDolls are already getting fitted with retromotors for more realistic action and independent motion. There are several individuals on RealDoll related groups and websites who believe that they should be allowed to have legal marital status. As soon as more advanced and humanlike artificial intelligences can be produced, you can bet that we’ll have fully functional sex robots.

    As for whether or not the dolls can “love,” look up the term “Chinese Room.” The most common rebuttal of the Chinese Room is that the room as a whole forms a functional system that understands Chinese even though the walls, the man, the book, and all of its components cannot be said to understand Chinese; just so, the programming of the robot, and interactions between the robot and the human, can be said to create a functional system that “loves,” even if none of the individual components (such as the doll itself) can be said to love.

    It’s already the future. You’re a few years late for a prophecy.
